1
Lin W, Wang P, Qi Y, Zhao Y, Wei X. Progress and challenges of in vivo flow cytometry and its applications in circulating cells of eyes. Cytometry A 2024; 105:437-445. [PMID: 38549391] [DOI: 10.1002/cyto.a.24837]
Abstract
Circulating inflammatory cells in the eye have emerged as early indicators of numerous major diseases, yet the monitoring of these cells remains an underdeveloped field. In vivo flow cytometry (IVFC), a noninvasive technique, offers the promise of real-time, dynamic quantification of circulating cells. However, IVFC has not seen extensive application in the detection of circulating cells in the eye, possibly owing to the eye's unique physiological structure and the limitations of fundus imaging. This study reviews the current research progress in retinal flow cytometry and other fundus examination techniques, such as adaptive optics, ultra-widefield retinal imaging, multispectral imaging, and optical coherence tomography, to propose novel ideas for circulating cell monitoring.
Affiliation(s)
- Wei Lin
- Department of Public Scientific Research Platform, School of Clinical and Basic Medicine, Shandong First Medical University &amp; Shandong Academy of Medical Sciences, Jinan, China
- Institute of Basic Medicine, Shandong First Medical University &amp; Shandong Academy of Medical Sciences, Jinan, China
- Peng Wang
- Department of Public Scientific Research Platform, School of Clinical and Basic Medicine, Shandong First Medical University &amp; Shandong Academy of Medical Sciences, Jinan, China
- Institute of Basic Medicine, Shandong First Medical University &amp; Shandong Academy of Medical Sciences, Jinan, China
- Yingxin Qi
- Department of Public Scientific Research Platform, School of Clinical and Basic Medicine, Shandong First Medical University &amp; Shandong Academy of Medical Sciences, Jinan, China
- Institute of Basic Medicine, Shandong First Medical University &amp; Shandong Academy of Medical Sciences, Jinan, China
- Yanlong Zhao
- Department of Public Scientific Research Platform, School of Clinical and Basic Medicine, Shandong First Medical University &amp; Shandong Academy of Medical Sciences, Jinan, China
- Institute of Basic Medicine, Shandong First Medical University &amp; Shandong Academy of Medical Sciences, Jinan, China
- Xunbin Wei
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital &amp; Institute, Beijing, China
- Biomedical Engineering Department, Peking University, Beijing, China
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- International Cancer Institute, Peking University, Beijing, China
- Department of Critical-care Medicine, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
2
Fang H, Li F, Wu J, Fu H, Sun X, Orlando JI, Bogunović H, Zhang X, Xu Y. Open Fundus Photograph Dataset with Pathologic Myopia Recognition and Anatomical Structure Annotation. Sci Data 2024; 11:99. [PMID: 38245589] [PMCID: PMC10799845] [DOI: 10.1038/s41597-024-02911-2]
Abstract
Pathologic myopia (PM) is a common blinding retinal degeneration affecting the highly myopic population. Early screening for this condition can reduce the damage caused by the associated fundus lesions and therefore prevent vision loss. Automated diagnostic tools based on artificial intelligence methods can benefit this process by aiding clinicians in identifying disease signs or screening mass populations using color fundus photographs as input. This paper provides insights into PALM, our open fundus imaging dataset for pathologic myopia recognition and anatomical structure annotation. Our database comprises 1200 images with associated labels for the pathologic myopia category and manual annotations of the optic disc, the position of the fovea, and delineations of lesions such as patchy retinal atrophy (including peripapillary atrophy) and retinal detachment. In addition, this paper elaborates on other details, such as the labeling process used to construct the database and the quality and characteristics of the samples, and provides other relevant usage notes.
Affiliation(s)
- Huihui Fang
- South China University of Technology, Guangzhou, China
- Pazhou Lab., Guangzhou, China
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Junde Wu
- National University of Singapore, Singapore, Singapore
- Huazhu Fu
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore, Singapore
- Xu Sun
- Pazhou Lab., Guangzhou, China
- Hrvoje Bogunović
- Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Yanwu Xu
- South China University of Technology, Guangzhou, China
- Pazhou Lab., Guangzhou, China
3
Tochel C, Pead E, McTrusty A, Buckmaster F, MacGillivray T, Tatham AJ, Strang NC, Dhillon B, Bernabeu MO. Novel linkage approach to join community-acquired and national data. BMC Med Res Methodol 2024; 24:13. [PMID: 38233744] [PMCID: PMC10792819] [DOI: 10.1186/s12874-024-02143-3]
Abstract
BACKGROUND Community optometrists in Scotland have performed regular free-at-point-of-care eye examinations for all for over 15 years. Eye examinations include retinal imaging, but image storage is fragmented and the images are not used for research. The Scottish Collaborative Optometry-Ophthalmology Network e-research project aimed to collect these images and create a repository linked to routinely collected healthcare data, supporting the development of pre-symptomatic diagnostic tools. METHODS As the image record was usually separate from the patient record and contained minimal patient information, we developed an efficient matching algorithm, using a combination of deterministic and probabilistic steps that minimised the risk of false positives, to facilitate national health record linkage. We visited two practices and assessed the data contained in their image devices and Practice Management Systems. Practice activities were explored to understand the context of data collection processes. Iteratively, we tested a series of matching rules which captured a high proportion of true positive records compared to manual matches. The approach was validated by testing manual matching against automated steps in three further practices. RESULTS A sequence of deterministic rules successfully matched 95% of records in the three test practices compared to manual matching. Adding two probabilistic rules to the algorithm successfully matched 99% of records. CONCLUSIONS The potential value of community-acquired retinal images can be harnessed only if they are linked to centrally held healthcare data. Despite the lack of interoperability between systems within optometry practices and inconsistent use of unique identifiers, data linkage is possible using robust, almost entirely automated processes.
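The deterministic-then-probabilistic cascade this abstract describes can be sketched in a few lines of Python. This is an illustrative toy, not the project's actual algorithm: the field names, the 50/50 weighting, the DOB penalty, and the 0.85 threshold are all assumptions chosen to keep false positives low.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalised string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_record(image_rec, patient_recs, threshold=0.85):
    """Match an image record to a patient record.

    Step 1: deterministic rule (exact surname + date of birth).
    Step 2: probabilistic fallback (fuzzy name score), accepted only
    above a conservative threshold to minimise false positives.
    """
    for p in patient_recs:
        if (image_rec["surname"].lower() == p["surname"].lower()
                and image_rec["dob"] == p["dob"]):
            return p
    best, best_score = None, 0.0
    for p in patient_recs:
        score = 0.5 * similarity(image_rec["surname"], p["surname"]) \
              + 0.5 * similarity(image_rec["forename"], p["forename"])
        if image_rec["dob"] != p["dob"]:
            score *= 0.5  # penalise a date-of-birth mismatch heavily
        if score > best_score:
            best, best_score = p, score
    return best if best_score >= threshold else None

patients = [
    {"surname": "Smith", "forename": "Anne", "dob": "1955-03-02"},
    {"surname": "Smyth", "forename": "Ann", "dob": "1955-03-02"},
]
# Typo in the image record: the deterministic pass fails,
# but the fuzzy pass recovers the match.
hit = match_record({"surname": "Smithe", "forename": "Anne",
                    "dob": "1955-03-02"}, patients)
print(hit["surname"])  # → Smith
```

Ordering the rules from strictest to loosest mirrors the paper's design choice: cheap exact matches first, with the fuzzy scorer reserved for the residue.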
Affiliation(s)
- Claire Tochel
- Centre for Medical Informatics, University of Edinburgh, Edinburgh, UK
- Emma Pead
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Alice McTrusty
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Fiona Buckmaster
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Tom MacGillivray
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Andrew J Tatham
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Princess Alexandra Eye Pavilion, NHS Lothian, Edinburgh, UK
- Niall C Strang
- Department of Vision Sciences, Glasgow Caledonian University, Glasgow, UK
- Baljean Dhillon
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Princess Alexandra Eye Pavilion, NHS Lothian, Edinburgh, UK
- Miguel O Bernabeu
- Centre for Medical Informatics, University of Edinburgh, Edinburgh, UK
4
Chiang CYN, Braeu FA, Chuangsuwanich T, Tan RKY, Chua J, Schmetterer L, Thiery AH, Buist ML, Girard MJA. Are Macula or Optic Nerve Head Structures Better at Diagnosing Glaucoma? An Answer Using Artificial Intelligence and Wide-Field Optical Coherence Tomography. Transl Vis Sci Technol 2024; 13:5. [PMID: 38197730] [PMCID: PMC10787590] [DOI: 10.1167/tvst.13.1.5]
Abstract
Purpose We wanted to develop a deep-learning algorithm to automatically segment optic nerve head (ONH) and macula structures in three-dimensional (3D) wide-field optical coherence tomography (OCT) scans and to assess whether 3D ONH or macula structures (or a combination of both) provide the best diagnostic power for glaucoma. Methods A cross-sectional comparative study was performed using 319 OCT scans of glaucoma eyes and 298 scans of nonglaucoma eyes. Scans were compensated to improve deep-tissue visibility. We developed a deep-learning algorithm to automatically label major tissue structures, trained with 270 manually annotated B-scans. The performance was assessed using the Dice coefficient (DC). A glaucoma classification algorithm (3D-CNN) was then designed using 500 OCT volumes and corresponding automatically segmented labels. This algorithm was trained and tested on three datasets: cropped scans of macular tissues, those of ONH tissues, and wide-field scans. The classification performance for each dataset was reported using the area under the curve (AUC). Results Our segmentation algorithm achieved a DC of 0.94 ± 0.003. The classification algorithm was best able to diagnose glaucoma using wide-field scans, followed by ONH scans, and finally macula scans, with AUCs of 0.99 ± 0.01, 0.93 ± 0.06 and 0.91 ± 0.11, respectively. Conclusions This study showed that wide-field OCT may allow for significantly improved glaucoma diagnosis over typical OCTs of the ONH or macula. Translational Relevance This could lead to mainstream clinical adoption of 3D wide-field OCT scan technology.
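The Dice coefficient (DC) used above to score the segmentation algorithm is a standard overlap measure between a predicted and a reference mask. A minimal numpy sketch, with toy masks in place of the paper's OCT segmentations:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks.

    DC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
    The small eps keeps the ratio defined when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 1D "masks": 3 pixels overlap out of 4 predicted and 4 true.
pred   = np.array([1, 1, 1, 1, 0, 0])
target = np.array([0, 1, 1, 1, 1, 0])
print(round(float(dice_coefficient(pred, target)), 3))  # → 0.75
```

In practice the same formula is averaged per tissue class over the test B-scans, which is how a summary figure like 0.94 ± 0.003 arises.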
Affiliation(s)
- Charis Y. N. Chiang
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Ophthalmic Engineering &amp; Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Fabian A. Braeu
- Ophthalmic Engineering &amp; Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Singapore-MIT Alliance for Research and Technology, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Thanadet Chuangsuwanich
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Ophthalmic Engineering &amp; Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Royston K. Y. Tan
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Ophthalmic Engineering &amp; Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Jacqueline Chua
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE), Singapore, Singapore
- Duke-NUS Graduate Medical School, Singapore
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE), Singapore, Singapore
- Duke-NUS Graduate Medical School, Singapore
- School of Chemical and Biological Engineering, Nanyang Technological University, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Alexandre H. Thiery
- Department of Statistics and Data Sciences, National University of Singapore, Singapore
- Martin L. Buist
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Michaël J. A. Girard
- Ophthalmic Engineering &amp; Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Duke-NUS Graduate Medical School, Singapore
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
5
Chen Q, Zhou M, Cao Y, Zheng X, Mao H, Lei C, Lin W, Jiang J, Chen Y, Song D, Xu X, Ye C, Liang Y. Quality assessment of non-mydriatic fundus photographs for glaucoma screening in primary healthcare centres: a real-world study. BMJ Open Ophthalmol 2023; 8:e001493. [PMID: 38092419] [PMCID: PMC10729214] [DOI: 10.1136/bmjophth-2023-001493]
Abstract
BACKGROUND This study assessed the quality distribution of non-mydriatic fundus photographs (NMFPs) in real-world glaucoma screening and analysed its influencing factors. METHODS This cross-sectional study was conducted in primary healthcare centres in the Yinzhou District, China, from 17 March to 3 December 2021. The quality distribution of bilateral NMFPs was assessed by the Digital Reading Department of the Eye Hospital of Wenzhou Medical University. Generalised estimating equations and logistic regression models identified factors affecting image quality. RESULTS A total of 17 232 photographs of 8616 subjects were assessed. Of these, 11.9% of images were reliable for the right eyes, while only 4.6% were reliable for the left eyes; 93.6% of images were readable in the right eyes, while 90.3% were readable in the left eyes. In adjusted models, older age was associated with decreased odds of image readability (adjusted OR (aOR)=1.07, 95% CI 1.06 to 1.08, p&lt;0.001). A larger absolute value of spherical equivalent significantly decreased the odds of image readability (all p&lt;0.001). Media opacity and worse visual acuity had a significantly lower likelihood of achieving readable NMFPs (aOR=1.52, 95% CI 1.31 to 1.75; aOR=1.70, 95% CI 1.42 to 2.02, respectively, all p&lt;0.001). Astigmatism axes within 31°~60° and 121°~150° had lower odds of image readability (aOR=1.35, 95% CI 1.11 to 1.63, p&lt;0.01) than astigmatism axes within 180°±30°. CONCLUSIONS The image readability of NMFPs in large-scale glaucoma screening for individuals 50 years and older is comparable with relevant studies, but image reliability is unsatisfactory. Addressing the associated factors may be vital when implementing ophthalmological telemedicine in underserved areas. TRIAL REGISTRATION NUMBER ChiCTR2200059277.
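The adjusted odds ratios above come from multivariable GEE/logistic models; as a minimal illustration of the underlying quantity only, a crude odds ratio with a Wald 95% CI can be computed from a 2 × 2 table. The counts below are invented, not taken from the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Wald 95% CI from a 2x2 table.

        a = exposed &amp; unreadable,   b = exposed &amp; readable,
        c = unexposed &amp; unreadable, d = unexposed &amp; readable.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: media opacity vs. unreadable photographs.
or_, lo, hi = odds_ratio_ci(40, 160, 25, 275)
print(f"OR={or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

An adjusted OR differs from this crude OR in that the model conditions on the other covariates (age, refraction, visual acuity), but the interpretation of the interval is the same.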
Affiliation(s)
- Qi Chen
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Department of Ophthalmology, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Mengtian Zhou
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Yang Cao
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Xuanli Zheng
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Huiyan Mao
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Changrong Lei
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Wanglong Lin
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Junhong Jiang
- Department of Ophthalmology, Shanghai General Hospital, National Clinical Research Center for Eye Diseases, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yize Chen
- Department of Ophthalmology, First Hospital of Shanxi Medical University, Taiyuan, China
- Di Song
- Department of Ophthalmology, The First People's Hospital of Huzhou, The First Affiliated Hospital of Huzhou Teacher College, Huzhou, China
- Xiang Xu
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Cong Ye
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Yuanbo Liang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Glaucoma Research Institute, Wenzhou Medical University, Wenzhou, China
6
Yang Z, Zhang Y, Xu K, Sun J, Wu Y, Zhou M. DeepDrRVO: A GAN-auxiliary two-step masked transformer framework benefits early recognition and differential diagnosis of retinal vascular occlusion from color fundus photographs. Comput Biol Med 2023; 163:107148. [PMID: 37329618] [DOI: 10.1016/j.compbiomed.2023.107148]
Abstract
Retinal vascular occlusion (RVO) is a common cause of visual impairment. Accurate recognition and differential diagnosis of RVO are unmet medical needs for determining appropriate treatments and health care to properly manage the ocular condition and minimize the damaging effects. To leverage deep learning as a potential solution for detecting RVO reliably, we developed a deep learning model on color fundus photographs (CFPs) using a two-step masked SwinTransformer with a Few-Sample Generator (FSG)-auxiliary training framework (called DeepDrRVO) for early and differential RVO diagnosis. DeepDrRVO was trained on the training set from the in-house cohort and achieved consistently high performance in early recognition and differential diagnosis of RVO in the validation set from the in-house cohort, with an accuracy of 86.3%, and in three independent multi-center cohorts, with accuracies of 92.6%, 90.8%, and 100%. Further comparative analysis showed that the proposed DeepDrRVO outperforms conventional state-of-the-art classification models such as ResNet18, ResNet50d, MobileNetv3, and EfficientNetb1. These results highlight the potential benefits of the deep learning model in automatic early RVO detection and differential diagnosis for improving clinical outcomes, and provide insights into diagnosing other ocular diseases that pose a few-shot learning challenge. DeepDrRVO is publicly available at https://github.com/ZhouSunLab-Workshops/DeepDrRVO.
Affiliation(s)
- Zijian Yang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, PR China
- Yibo Zhang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, PR China
- Ke Xu
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, PR China
- Jie Sun
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, PR China
- Yue Wu
- The Affiliated Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, 315042, PR China
- Meng Zhou
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, PR China
- Institute of PSI Genomics, Wenzhou, 325027, PR China
7
Aurangzeb K. A residual connection enabled deep neural network model for optic disk and optic cup segmentation for glaucoma diagnosis. Sci Prog 2023; 106:368504231201329. [PMID: 37743660] [PMCID: PMC10521305] [DOI: 10.1177/00368504231201329]
Abstract
Glaucoma diagnosis at an early stage is vital for timely initiation of treatment and prevention of possible vision loss. Glaucoma diagnosis requires an accurate estimation of the cup-to-disk ratio (CDR). Current automatic CDR computation techniques suffer from low accuracy and high complexity, both important considerations in the design of diagnostic systems intended for such critical diagnoses. Current methods involve deeper deep learning models comprising a large number of parameters, which results in higher system complexity and longer training/testing times. To address these challenges, this paper proposes a Residual Connection (non-identity)-based Deep Neural Network (RC-DNN), based on non-identity residual connectivity, for joint optic disk (OD) and optic cup (OC) detection. The proposed model is emboldened by efficient residual connectivity, which is beneficial in several ways. First, the model is efficient and can perform simultaneous segmentation of the OC and OD. Second, the efficient residual information flow mitigates the vanishing gradient problem, which results in faster convergence of the model. Third, feature inspiration empowers the network to perform the segmentation with only a few network layers. We performed a comprehensive performance evaluation of the developed model based on its training on the RIM-ONE and DRISHTI-GS databases. For OC segmentation of test-set images from the {DRISHTI-GS, RIM-ONE} datasets, our proposed model achieves a dice coefficient, Jaccard coefficient, sensitivity, specificity, and balanced accuracy of {92.62, 86.52}, {86.87, 77.54}, {94.21, 95.36}, {99.83, 99.639}, and {94.2, 98.9}, respectively. These experimental results indicate that the developed model provides significant performance enhancement for joint OC and OD segmentation. Additionally, the reduced computational complexity from fewer model parameters, together with higher segmentation accuracy, provides efficacy, robustness, and reliability. These attributes advocate for the model's deployment in population-scale glaucoma screening programs.
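Once OD and OC masks are segmented, the cup-to-disk ratio they feed into is straightforward to compute. A minimal numpy sketch of the vertical CDR, using synthetic circular masks rather than real segmentations:

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disk ratio from binary segmentation masks.

    CDR = vertical extent of the optic cup / vertical extent of the
    optic disk; larger ratios are associated with glaucomatous cupping.
    """
    cup_h = np.any(cup_mask, axis=1).sum()    # rows touched by the cup
    disc_h = np.any(disc_mask, axis=1).sum()  # rows touched by the disk
    return cup_h / disc_h if disc_h else 0.0

# Synthetic disk (radius 40 px) containing a cup (radius 24 px).
yy, xx = np.mgrid[:128, :128]
r2 = (yy - 64) ** 2 + (xx - 64) ** 2
disc = r2 <= 40 ** 2
cup = r2 <= 24 ** 2
print(round(vertical_cdr(cup, disc), 2))  # ≈ the 24/40 radius ratio
```

Real pipelines add refinements (ellipse fitting, per-column diameters), but the reported CDR is ultimately this kind of extent ratio over the two masks.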
Affiliation(s)
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
8
Zedan MJM, Zulkifley MA, Ibrahim AA, Moubark AM, Kamari NAM, Abdani SR. Automated Glaucoma Screening and Diagnosis Based on Retinal Fundus Images Using Deep Learning Approaches: A Comprehensive Review. Diagnostics (Basel) 2023; 13:2180. [PMID: 37443574] [DOI: 10.3390/diagnostics13132180]
Abstract
Glaucoma is a chronic eye disease that may lead to permanent vision loss if it is not diagnosed and treated at an early stage. The disease originates from irregular drainage flow in the eye that eventually leads to an increase in intraocular pressure, which in the severe stage of the disease deteriorates the optic nerve head and leads to vision loss. Periodic medical follow-ups to observe the retinal area are needed, and the ophthalmologists performing them require an extensive degree of skill and experience to interpret the results appropriately. To address this, algorithms based on deep learning techniques have been designed to screen and diagnose glaucoma from retinal fundus image input and to analyze images of the optic nerve and retinal structures. The objective of this paper is therefore to provide a systematic analysis of 52 state-of-the-art studies on the screening and diagnosis of glaucoma, covering the datasets used in the development of the algorithms, performance metrics, and modalities employed in each article. Furthermore, this review analyzes and evaluates the methods used and compares their strengths and weaknesses in an organized manner. It also explores a wide range of diagnostic procedures, such as image pre-processing, localization, classification, and segmentation. In conclusion, automated glaucoma diagnosis has shown considerable promise when deep learning algorithms are applied; such algorithms could make glaucoma diagnosis more accurate and efficient.
Affiliation(s)
- Mohammad J M Zedan
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Computer and Information Engineering Department, College of Electronics Engineering, Ninevah University, Mosul 41002, Iraq
- Mohd Asyraf Zulkifley
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Ahmad Asrul Ibrahim
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Asraf Mohamed Moubark
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Nor Azwan Mohamed Kamari
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Siti Raihanah Abdani
- School of Computing Sciences, College of Computing, Informatics and Media, Universiti Teknologi MARA, Shah Alam 40450, Selangor, Malaysia
9
Wawer Matos PA, Reimer RP, Rokohl AC, Caldeira L, Heindl LM, Große Hokamp N. Artificial Intelligence in Ophthalmology - Status Quo and Future Perspectives. Semin Ophthalmol 2023; 38:226-237. [PMID: 36356300] [DOI: 10.1080/08820538.2022.2139625]
Abstract
Artificial intelligence (AI) is an emerging technology in healthcare and holds the potential to disrupt many areas of medical care. In particular, disciplines that use medical imaging modalities, including radiology as well as ophthalmology, are already confronted with a wide variety of AI implications. In ophthalmologic research, AI has demonstrated promising results, albeit limited to specific diseases and imaging tools. Yet implementation of AI in clinical routine is not widespread, owing to limited availability and to heterogeneity in imaging techniques and AI methods. To describe the status quo, this narrative review provides a brief introduction to AI ("what the ophthalmologist needs to know"), followed by an overview of different AI-based applications in ophthalmology and a discussion of future challenges. Abbreviations: Age-related macular degeneration, AMD; Artificial intelligence, AI; Anterior segment OCT, AS-OCT; Coronary artery calcium score, CACS; Convolutional neural network, CNN; Deep convolutional neural network, DCNN; Diabetic retinopathy, DR; Machine learning, ML; Optical coherence tomography, OCT; Retinopathy of prematurity, ROP; Support vector machine, SVM; Thyroid-associated ophthalmopathy, TAO.
Affiliation(s)
- Robert P Reimer
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
- Alexander C Rokohl
- Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
- Liliana Caldeira
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
- Ludwig M Heindl
- Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
- Nils Große Hokamp
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
10
A novel deep learning model for breast lesion classification using ultrasound Images: A multicenter data evaluation. Phys Med 2023; 107:102560. [PMID: 36878133] [DOI: 10.1016/j.ejmp.2023.102560]
Abstract
PURPOSE Breast cancer is one of the leading causes of cancer death in women. Early diagnosis is the most critical key to disease screening, control, and reduced mortality. A robust diagnosis relies on the correct classification of breast lesions. While breast biopsy is regarded as the "gold standard" for assessing both the activity and degree of breast cancer, it is an invasive and time-consuming approach. METHOD The current study's primary objective was to develop a novel deep-learning architecture based on the InceptionV3 network to classify ultrasound breast lesions. The main enhancements of the proposed architecture were converting the InceptionV3 modules to residual inception modules, increasing their number, and altering the hyperparameters. In addition, we used a combination of five datasets (three public datasets and two prepared at different imaging centers) for training and evaluating the model. RESULTS The dataset was split into train (80%) and test (20%) groups. The model achieved 0.83, 0.77, 0.8, 0.81, 0.81, 0.18, and 0.77 for precision, recall, F1 score, accuracy, AUC, root mean squared error, and Cronbach's α in the test group, respectively. CONCLUSIONS This study illustrates that the improved InceptionV3 can robustly classify breast tumors, potentially reducing the need for biopsy in many cases.
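The precision/recall/F1/accuracy figures reported above all derive from the test-set confusion matrix. A minimal sketch of that derivation; the counts below are invented (chosen only so the resulting figures land near the paper's), not the study's data:

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts on a 200-image test split.
p, r, f1, acc = classification_metrics(tp=77, fp=16, fn=23, tn=84)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f} accuracy={acc:.2f}")
```

Note that F1 is the harmonic mean of precision and recall, so it always lies between the two and is pulled toward the smaller one.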
11
Parashar D, Agrawal DK. Classification of Glaucoma Stages Using Image Empirical Mode Decomposition from Fundus Images. J Digit Imaging 2022; 35:1283-1292. [PMID: 35581407] [PMCID: PMC9582090] [DOI: 10.1007/s10278-022-00648-1]
Abstract
One of the most prevalent causes of visual loss and blindness is glaucoma. Conventionally, instrument-based tools are employed for glaucoma screening, but they are inefficient, time-consuming, and manual. Computerized methodologies are therefore needed for fast and accurate diagnosis of glaucoma. We propose a Computer-Aided Diagnosis (CAD) method for the classification of glaucoma stages using Image Empirical Mode Decomposition (IEMD). In this study, IEMD is applied to decompose the preprocessed fundus photographs into different Intrinsic Mode Functions (IMFs) to capture the pixel variations. Significant texture-based descriptors are then computed from the IMFs. A dimensionality reduction approach, Principal Component Analysis (PCA), is employed to pick the robust descriptors from the retrieved feature set, and the Analysis of Variance (ANOVA) test is used for feature ranking. Finally, an LS-SVM classifier is employed to classify glaucoma stages. The proposed CAD system achieved a classification accuracy of 94.45% for binary classification on the RIM-ONE r12 database, demonstrating better glaucoma classification performance than existing automated systems.
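The feature-selection stage of such a pipeline (PCA for dimensionality reduction, then ANOVA-based ranking by class separability) can be sketched with plain numpy; the data below are toy values, not the paper's texture descriptors:

```python
import numpy as np

def pca_reduce(X, k):
    """Project samples onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                # center each feature
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

def anova_f(feature, labels):
    """One-way ANOVA F-statistic of a single feature across classes,
    usable for ranking features by how well they separate the classes."""
    groups = [feature[labels == c] for c in np.unique(labels)]
    grand = feature.mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    dfb, dfw = len(groups) - 1, len(feature) - len(groups)
    return (ssb / dfb) / (ssw / dfw)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))              # toy feature matrix
Z = pca_reduce(X, k=3)                     # 10 features reduced to 3

y = np.array([0, 0, 0, 1, 1, 1])
well_separated = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
score = anova_f(well_separated, y)         # large F ⇒ highly ranked feature
```

A feature with well-separated class means receives a large F-statistic and would rank near the top.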
Affiliation(s)
- Deepak Parashar
- Department of Electronics and Communication Engineering, IES College of Technology, Bhopal, 462044, MP, India.
- Department of Electronics and Communication Engineering, Maulana Azad National Institute of Technology, Bhopal, 462003, MP, India.
- Dheraj Kumar Agrawal
- Department of Electronics and Communication Engineering, Maulana Azad National Institute of Technology, Bhopal, 462003, MP, India
12
Retinal Glaucoma Public Datasets: What Do We Have and What Is Missing? J Clin Med 2022; 11:jcm11133850. [PMID: 35807135 PMCID: PMC9267177 DOI: 10.3390/jcm11133850] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 06/29/2022] [Accepted: 06/30/2022] [Indexed: 11/16/2022] Open
Abstract
Public databases for glaucoma studies contain color images of the retina, emphasizing the optic papilla. These databases are intended for research and for standardized automated methodologies such as deep learning techniques, which are used to solve complex problems in medical imaging, particularly the automated screening of glaucomatous disease. The development of deep learning techniques has demonstrated potential for implementing protocols for large-scale glaucoma screening in the population, reducing diagnostic doubt among specialists and enabling earlier treatment to delay the onset of blindness. However, the images are obtained by different cameras, in distinct locations, from various population groups, and centered on different parts of the retina. Further limitations are the small amount of data and the lack of segmentation of the optic papilla and the excavation. This work is intended to offer contributions to the structure and presentation of public databases used in the automated screening of glaucomatous papillae, adding relevant information from a medical point of view. The gold-standard public databases present images with expert segmentations of the disc and cup and a division between training and test groups, serving as a reference for use in deep learning architectures. However, the data offered are not interchangeable, and the quality and presentation of images are heterogeneous. Moreover, the databases use different criteria for binary classification with and without glaucoma, do not offer simultaneous pictures of the two eyes, and do not contain elements for early diagnosis.
13
Joshi A, Sharma KK. Graph deep network for optic disc and optic cup segmentation for glaucoma disease using retinal imaging. Phys Eng Sci Med 2022; 45:847-858. [PMID: 35737221 DOI: 10.1007/s13246-022-01154-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Accepted: 06/07/2022] [Indexed: 11/25/2022]
Abstract
Fundus-image-based eye screening detects eye diseases by segmenting the optic disc (OD) and optic cup (OC), which remain challenging to segment accurately. This work proposes a three-layer graph-based deep architecture with an enhanced fusion method for OD and OC segmentation. A CNN encoder-decoder architecture, an extended graph network, and approximation via a fusion-based rule are explored for connecting local and global information, and a graph-based model is developed for combining local and overall knowledge. Feature masking is extended to regularize repetitive features, with fusion used to combine channels. The performance of the proposed network is evaluated through metrics such as the dice similarity coefficient (DSC), intersection over union (IOU), accuracy, specificity, and sensitivity. The methodology was experimentally verified on four publicly available benchmark datasets: DRISHTI-GS and RIM-ONE for OD and OC segmentation, with the DRIONS-DB and HRF fundus imaging datasets additionally analyzed to optimize the model's performance on OD segmentation. For OD, the DSC reached 0.97 and 0.96, and the IOU 0.96 and 0.93, on DRISHTI-GS and RIM-ONE, respectively. For OC segmentation, DSC and IOU were measured as 0.93 and 0.90 for DRISHTI-GS and 0.83 and 0.82 for RIM-ONE. In the experiments, the proposed technique improved on most existing methods in terms of DSC and IOU for both OD and OC segmentation.
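DSC and IOU, the headline metrics in this entry, compare a predicted binary mask against the ground truth. A minimal numpy version with toy 2x2 masks:

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice similarity coefficient and intersection-over-union
    for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return dice, iou

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
d, i = dice_iou(pred, gt)
# intersection = 1, union = 2 → dice = 2/3, iou = 1/2
```

Note that Dice is always at least as large as IOU for the same masks, which is why both are commonly reported together.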
Affiliation(s)
- Abhilasha Joshi
- Electronics and Communication Engineering, Malaviya National Institute of Technology, Jaipur, Rajasthan, 302017, India.
- K K Sharma
- Electronics and Communication Engineering, Malaviya National Institute of Technology, Jaipur, Rajasthan, 302017, India
14
Kovalyk O, Morales-Sánchez J, Verdú-Monedero R, Sellés-Navarro I, Palazón-Cabanes A, Sancho-Gómez JL. PAPILA: Dataset with fundus images and clinical data of both eyes of the same patient for glaucoma assessment. Sci Data 2022; 9:291. [PMID: 35680965 PMCID: PMC9184612 DOI: 10.1038/s41597-022-01388-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Accepted: 05/16/2022] [Indexed: 12/28/2022] Open
Abstract
Glaucoma is an ophthalmological disease that frequently causes loss of vision in today's society. Previous studies have assessed which anatomical parameters of the optic nerve can be predictive of glaucomatous damage, but to date there is no test that by itself has sufficient sensitivity and specificity to diagnose this disease. This work provides a public dataset with medical data and fundus images of both eyes of the same patient. Segmentations of the cup and optic disc, as well as the labeling of the patients based on the evaluation of clinical data, are also provided. The dataset has been tested with a neural network to classify healthy and glaucoma patients. Specifically, ResNet-50 was used as the basis to classify patients using information from each eye independently as well as the joint information from both eyes of each patient. The results provide baseline metrics, with the aim of promoting research into the early detection of glaucoma based on the joint analysis of both eyes of the same patient.
15
Wang Y, Yu X, Wu C. An Efficient Hierarchical Optic Disc and Cup Segmentation Network Combined with Multi-task Learning and Adversarial Learning. J Digit Imaging 2022; 35:638-653. [PMID: 35212860 PMCID: PMC9156633 DOI: 10.1007/s10278-021-00579-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Revised: 12/24/2021] [Accepted: 12/29/2021] [Indexed: 12/15/2022] Open
Abstract
Automatic and accurate segmentation of the optic disc (OD) and optic cup (OC) in fundus images is a fundamental task in computer-aided diagnosis of ocular pathologies. Complex structures, such as blood vessels and the macular region, and the existence of lesions in fundus images bring great challenges to the segmentation task. Recently, convolutional neural network-based methods have exhibited their potential in fundus image analysis. In this paper, we propose a cascaded two-stage network architecture for robust and accurate OD and OC segmentation in fundus images. In the first stage, a U-Net-like framework with an improved attention mechanism and focal loss is proposed to detect an accurate and reliable OD location from the full-resolution fundus images. Based on the outputs of the first stage, a refined segmentation network in the second stage that integrates a multi-task framework and adversarial learning is further designed for OD and OC segmentation separately. The multi-task framework predicts the OD and OC masks while simultaneously estimating contours and distance maps as auxiliary tasks, which guarantees the smoothness and shape of the objects in the segmentation predictions. The adversarial learning technique is introduced to encourage the segmentation network to produce outputs consistent with the true labels in spatial and shape distribution. We evaluate the performance of our method using two public retinal fundus image datasets (RIM-ONE-r3 and REFUGE). Extensive ablation studies and comparison experiments with existing methods demonstrate that our approach produces competitive performance compared with state-of-the-art methods.
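One of the auxiliary targets mentioned, the distance map, can be derived directly from a ground-truth mask. The brute-force sketch below (an illustration, not the paper's implementation) assigns each foreground pixel its Euclidean distance to the nearest background pixel:

```python
import numpy as np

def distance_map_target(mask):
    """Distance of each foreground pixel to the nearest background pixel
    (brute force; fine for small masks, O(n_fg * n_bg))."""
    fg = np.argwhere(mask == 1)
    bg = np.argwhere(mask == 0)
    dmap = np.zeros(mask.shape, dtype=float)
    for r, c in fg:
        dmap[r, c] = np.sqrt(((bg - (r, c)) ** 2).sum(axis=1)).min()
    return dmap

mask = np.zeros((5, 5), dtype=int)
mask[1:4, 1:4] = 1                  # a 3x3 square "optic disc"
dmap = distance_map_target(mask)
# the square's center is 2 px from background; its corners are 1 px away
```

Supervising on such a map alongside the mask pushes the network toward smooth, compact shapes, since distance values encode how deep each pixel sits inside the object.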
Affiliation(s)
- Ying Wang
- College of Information Science and Engineering, Northeastern University, Liaoning 110819, China
- Xiaosheng Yu
- Faculty of Robot Science and Engineering, Northeastern University, Liaoning 110819, China
- Chengdong Wu
- Faculty of Robot Science and Engineering, Northeastern University, Liaoning 110819, China
16
Wang W, Zhou W, Ji J, Yang J, Guo W, Gong Z, Yi Y, Wang J. Deep sparse autoencoder integrated with three‐stage framework for glaucoma diagnosis. INT J INTELL SYST 2022. [DOI: 10.1002/int.22911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Wenle Wang
- School of Software, Jiangxi Normal University, Nanchang, China
- Wei Zhou
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Jianhang Ji
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Jikun Yang
- Shenyang Aier Excellence Eye Hospital Co. Ltd., Shenyang, China
- Wei Guo
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Zhaoxuan Gong
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Yugen Yi
- School of Software, Jiangxi Normal University, Nanchang, China
- Jianzhong Wang
- College of Information Science and Technology, Northeast Normal University, Changchun, China
17
A Comprehensive Review of Methods and Equipment for Aiding Automatic Glaucoma Tracking. Diagnostics (Basel) 2022; 12:diagnostics12040935. [PMID: 35453985 PMCID: PMC9031684 DOI: 10.3390/diagnostics12040935] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 04/05/2022] [Accepted: 04/07/2022] [Indexed: 02/01/2023] Open
Abstract
Glaucoma is a chronic optic neuropathy characterized by irreversible damage to the retinal nerve fiber layer (RNFL), resulting in changes in the visual field (VF). Glaucoma screening is performed through a complete ophthalmological examination, using images of the optic papilla obtained in vivo for the evaluation of glaucomatous characteristics, together with eye pressure and visual field measurements. Identifying the glaucomatous papilla is particularly important, as optic papilla images are considered the gold standard for tracking. This article therefore reviews the diagnostic methods used to identify the glaucomatous papilla through technology over the last five years. Based on the analyzed works, the current state-of-the-art methods are identified, the current challenges are analyzed, and the shortcomings of these methods are investigated, especially from the point of view of automation and independence in performing these measurements. Finally, topics for future work and the challenges that need to be solved are proposed.
18
Peng W, Chen S, Kong D, Zhou X, Lu X, Chang C. Grade classification of human glioma using a convolutional neural network based on mid-infrared spectroscopy mapping. JOURNAL OF BIOPHOTONICS 2022; 15:e202100313. [PMID: 34931464 DOI: 10.1002/jbio.202100313] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/09/2021] [Revised: 11/15/2021] [Accepted: 12/17/2021] [Indexed: 06/14/2023]
Abstract
This study proposes a convolutional neural network (CNN)-based computer-aided diagnosis (CAD) system for the grade classification of human glioma using mid-infrared (MIR) spectroscopic mappings. Through data augmentation by pixel recombination, the number of mappings in the training set increased almost 161-fold relative to the original mappings. The pixels of the recombined mappings in the training set came from the one-dimensional (1D) vibrational spectra of 62 patients (almost 80% of all 77 patients) at specific bands. Compared with the performance of the CNN-CAD system based on 1D vibrational spectroscopy, the mean diagnostic accuracy of the recombined MIR spectroscopic mappings at the peaks of 2917 cm-1, 1539 cm-1, and 1234 cm-1 was higher on the test set, and the model also showed more stable patterns. This research demonstrates that two-dimensional MIR mapping at a single frequency can be used by the CNN-CAD system for diagnosis. It also suggests that the mapping collection process could be replaced by a single-frequency IR imaging system, which is cheaper and more portable than a Fourier transform infrared microscope and thus could be widely deployed in hospitals to provide meaningful assistance to pathologists in the clinic.
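The pixel-recombination augmentation can be pictured as assembling new 2D maps by resampling single-pixel intensities from a class-specific pool. A toy sketch under that reading (names, sizes, and values are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def recombine_maps(pixel_pool, n_maps, shape):
    """Build synthetic 2D maps whose pixels are drawn (with replacement)
    from a pool of per-pixel intensities belonging to one tumor grade."""
    h, w = shape
    idx = rng.integers(0, len(pixel_pool), size=(n_maps, h, w))
    return pixel_pool[idx]

# toy pool of pixel intensities at one IR band (e.g. near 2917 cm-1)
pool = np.array([0.1, 0.5, 0.9, 0.3])
maps = recombine_maps(pool, n_maps=5, shape=(8, 8))
```

Because every synthetic pixel comes from real measured intensities of the same class, the augmented maps preserve the class's intensity distribution while multiplying the training-set size.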
Affiliation(s)
- Wenyu Peng
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science, Xi'an Jiaotong University, Xi'an, China
- Innovation Laboratory of Terahertz Biophysics, National Innovation Institute of Defense Technology, Beijing, China
- Shuo Chen
- Innovation Laboratory of Terahertz Biophysics, National Innovation Institute of Defense Technology, Beijing, China
- Dongsheng Kong
- Department of Neurosurgery, Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Xiaojie Zhou
- National Facility for Protein Science in Shanghai, Shanghai Advanced Research Institute, Chinese Academy of Science, Shanghai, China
- Xiaoyun Lu
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science, Xi'an Jiaotong University, Xi'an, China
- Chao Chang
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science, Xi'an Jiaotong University, Xi'an, China
- Innovation Laboratory of Terahertz Biophysics, National Innovation Institute of Defense Technology, Beijing, China
19
Discrimination of Breast Cancer Based on Ultrasound Images and Convolutional Neural Network. JOURNAL OF ONCOLOGY 2022; 2022:7733583. [PMID: 35345516 PMCID: PMC8957444 DOI: 10.1155/2022/7733583] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Revised: 02/10/2022] [Accepted: 02/24/2022] [Indexed: 11/17/2022]
Abstract
The aim of our study was to establish an artificial intelligence tool for the diagnosis of breast disease based on ultrasound (US) images. A deep learning method assisted by the Efficient-Det algorithm was developed to classify suspicious breast lesions as benign, malignant, or normal. In total, 1181 US images from 487 patients of our hospital and 694 publicly accessible images were employed for modeling, including 558 benign images, 370 malignant images, and 253 normal tissue images. The actual diagnoses for the patients were determined by biopsy or surgery. Efficient-Det was first retrained using an exclusive public breast cancer US dataset with transfer learning techniques. A blind test set consisting of 50 benign, 50 malignant, and 50 normal tissue images was randomly picked from the patients' images as the independent test set to assess its ability to locate suspicious tumor regions. Furthermore, the confusion matrix and classification accuracy were employed as evaluation metrics to select the optimal classification models. Efficient-Det has demonstrated remarkable progress in general image recognition tasks, with the specific advantage of locating and identifying tumor areas simultaneously. Compared to the manual method (mean accuracy: 95.3%; 60 s per image) and a traditional feature engineering method (mean accuracy: 90%; 15 s per image), our Efficient-Det provides a competitive (mean accuracy: 92.6%) and fast (0.06 s per image) classification result. The deployment of Efficient-Det in our local breast cancer discrimination task demonstrates its applicability within real clinical workflows.
20
Glaucoma Detection Using Image Processing and Supervised Learning for Classification. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:2988262. [PMID: 35273784 PMCID: PMC8904131 DOI: 10.1155/2022/2988262] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 12/27/2021] [Accepted: 01/24/2022] [Indexed: 11/22/2022]
Abstract
Detecting physiological changes occurring inside the human body is a difficult challenge in the realm of biomedical engineering. At present, these irregularities are graded manually, which is difficult, time-consuming, and tiresome due to the many complexities associated with the methods involved in their identification. To identify illnesses at an early stage, computer-assisted diagnostics has attracted increased attention because of the need for disease detection systems. The major goal of this work is to build a computer-aided diagnosis (CAD) system to help in the early identification of glaucoma as well as the screening and treatment of the disease. The fundus camera is the most affordable image analysis modality available and meets the financial needs of the general public. Structural characteristics extracted from the segmented optic disc and optic cup can be used to characterize glaucoma and determine its severity. The primary goal of this study is to estimate the potential of the image analysis model for the early identification and diagnosis of glaucoma, as well as for the evaluation of ocular disorders. The suggested CAD system would aid the ophthalmologist in diagnosing ocular illnesses by providing a second opinion comparable to a judgment made by human specialists in a controlled environment. The method's initial module is an ensemble-based deep learning model for glaucoma diagnosis. Three pretrained convolutional neural networks were used for the classification of glaucoma: the residual network (ResNet), the visual geometry group network (VGGNet), and GoogLeNet. Five different data sets were used to determine how well the proposed algorithm performed, including DRISHTI-GS, the Optic Nerve Segmentation Database (DRIONS-DB), and the High-Resolution Fundus (HRF) data set. The suggested ensemble architecture achieved an accuracy of 91.11%, a sensitivity of 85.55%, and a specificity of 95.20% on the PSGIMSR data set. Similarly, accuracy rates of 95.63%, 98.67%, 95.64%, and 88.96% were achieved using the DRIONS-DB, HRF, DRISHTI-GS, and combined data sets, respectively.
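The abstract does not spell out how the three networks' outputs are fused; a common choice for such an ensemble is majority voting over per-image labels, sketched here with hypothetical predictions:

```python
import numpy as np

def majority_vote(*model_preds):
    """Majority vote over per-model binary labels (1 = glaucoma)."""
    preds = np.array(model_preds)              # shape: (n_models, n_images)
    return (preds.sum(axis=0) > len(model_preds) / 2).astype(int)

# toy per-image labels from three hypothetical classifiers
resnet    = [1, 0, 1, 1]
vggnet    = [1, 1, 0, 1]
googlenet = [0, 0, 1, 1]
voted = majority_vote(resnet, vggnet, googlenet)
```

With an odd number of models, every image gets a strict majority, so no tie-breaking rule is needed.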
21
Alawad M, Aljouie A, Alamri S, Alghamdi M, Alabdulkader B, Alkanhal N, Almazroa A. Machine Learning and Deep Learning Techniques for Optic Disc and Cup Segmentation – A Review. Clin Ophthalmol 2022; 16:747-764. [PMID: 35300031 PMCID: PMC8923700 DOI: 10.2147/opth.s348479] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Accepted: 02/11/2022] [Indexed: 12/12/2022] Open
Abstract
Background Globally, glaucoma is the second leading cause of blindness. Detecting glaucoma in its early stages is essential to avoid the disease complications that lead to blindness, so computer-aided diagnosis systems are powerful tools for overcoming the shortage of glaucoma screening programs. Methods A systematic search of public databases, including PubMed, Google Scholar, and other sources, was performed to identify relevant studies and to overview the publicly available fundus image datasets used to train, validate, and test machine learning and deep learning methods. Additionally, existing machine learning and deep learning methods for optic cup and disc segmentation were surveyed and critically reviewed. Results Eight publicly available fundus image datasets were found, comprising 15,445 images labeled as glaucoma or non-glaucoma with manually annotated optic disc and cup boundaries. Five metrics were identified for evaluating the developed models, and three main deep learning architectural designs were commonly used for optic disc and optic cup segmentation. Conclusion We provide future research directions for formulating robust optic cup and disc segmentation systems. Deep learning can be utilized in clinical settings for this task, but many challenges need to be addressed before this strategy is used in clinical trials. Finally, architectural designs such as U-Net and its variants have been widely adopted.
Affiliation(s)
- Mohammed Alawad
- Department of Biostatistics and Bioinformatics, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Abdulrhman Aljouie
- Department of Biostatistics and Bioinformatics, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Suhailah Alamri
- Department of Imaging Research, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Research Labs, National Center for Artificial Intelligence, Riyadh, Saudi Arabia
- Mansour Alghamdi
- Department of Optometry and Vision Sciences, College of Applied Medical Sciences, King Saud University, Riyadh, Saudi Arabia
- Balsam Alabdulkader
- Department of Optometry and Vision Sciences, College of Applied Medical Sciences, King Saud University, Riyadh, Saudi Arabia
- Norah Alkanhal
- Department of Imaging Research, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Ahmed Almazroa
- Department of Imaging Research, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Correspondence: Ahmed Almazroa; Abdulrhman Aljouie
22
Neto A, Camara J, Cunha A. Evaluations of Deep Learning Approaches for Glaucoma Screening Using Retinal Images from Mobile Device. SENSORS 2022; 22:s22041449. [PMID: 35214351 PMCID: PMC8874723 DOI: 10.3390/s22041449] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Revised: 02/09/2022] [Accepted: 02/10/2022] [Indexed: 02/04/2023]
Abstract
Glaucoma is a silent disease that leads to vision loss or irreversible blindness. Current deep learning methods can help extend glaucoma screening to larger populations using retinal images, and low-cost lenses attached to mobile devices can increase the frequency of screening and alert patients earlier for a more thorough evaluation. This work explored and compared the performance of classification and segmentation methods for glaucoma screening with retinal images acquired both by retinography and by mobile devices. The goal was to verify the results of these methods and see whether similar results could be achieved with images captured by mobile devices. The classification methods used were the Xception, ResNet152 V2, and Inception ResNet V2 models, and the models' activation maps were produced and analysed to support the glaucoma classifier predictions. In clinical practice, glaucoma assessment is commonly based on the cup-to-disc ratio (CDR) criterion, a frequent indicator used by specialists. For this reason, the U-Net architecture was additionally used, with the Inception ResNet V2 and Inception V3 models as the backbone, to segment the optic disc and cup and estimate the CDR. For both tasks, the performance of the models came close to that of state-of-the-art methods, and the classification method applied to a low-quality private dataset illustrates the advantage of using cheaper lenses.
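The CDR criterion reduces the two segmentation masks to a single number. One common variant, the vertical cup-to-disc ratio, is sketched below with toy masks (the exact variant used in any given study may differ):

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks:
    the cup's vertical extent divided by the disc's."""
    def height(mask):
        rows = np.any(mask, axis=1)        # rows containing any foreground
        idx = np.where(rows)[0]
        return idx[-1] - idx[0] + 1
    return height(cup_mask) / height(disc_mask)

disc = np.zeros((10, 10), dtype=int); disc[1:9, 2:8] = 1   # height 8
cup  = np.zeros((10, 10), dtype=int); cup[3:7, 3:7] = 1    # height 4
cdr = vertical_cdr(cup, disc)
# CDR = 4 / 8 = 0.5
```

Higher CDR values (the cup occupying more of the disc) are the classic fundus-image indicator of glaucomatous cupping.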
Affiliation(s)
- Alexandre Neto
- Escola de Ciências de Tecnologia, University of Trás-os-Montes and Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- José Camara
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Departamento de Ciências e Tecnologia, University Aberta, 1250-100 Lisboa, Portugal
- António Cunha
- Escola de Ciências de Tecnologia, University of Trás-os-Montes and Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Correspondence: Tel.: +351-931-636-373
23
End-to-end multi-task learning for simultaneous optic disc and cup segmentation and glaucoma classification in eye fundus images. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108347] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
24
Glaucoma disease diagnosis with an artificial algae-based deep learning algorithm. Med Biol Eng Comput 2022; 60:785-796. [PMID: 35080695 DOI: 10.1007/s11517-022-02510-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Accepted: 01/18/2022] [Indexed: 10/19/2022]
Abstract
Glaucoma is an optic neuropathy in which the optic nerve is damaged by prolonged elevated intraocular pressure, which can cause blindness. Nowadays, deep learning classification algorithms are widely used to diagnose various diseases. In general, however, deep learning models are trained with traditional gradient-based learning techniques, which converge slowly and are highly likely to fall into local minima. In this study, we propose a novel decision support system based on deep learning to diagnose glaucoma. The proposed system has two stages: in the first, the glaucoma disease data are preprocessed by normalization and the mean absolute deviation method, and in the second, the deep learning model is trained with the artificial algae optimization algorithm. The proposed system is compared to traditional gradient-based deep learning and to deep learning trained with other optimization algorithms, such as the genetic algorithm, particle swarm optimization, the bat algorithm, the salp swarm algorithm, and the equilibrium optimizer, as well as to state-of-the-art algorithms proposed for glaucoma detection. The proposed system outperformed the other algorithms in terms of classification accuracy, recall, precision, false positive rate, and F1-measure, with values of 0.9815, 0.9795, 0.9835, 0.0165, and 0.9815, respectively.
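The artificial algae algorithm's exact update rules are not given in this abstract; the sketch below shows only the general shape of such gradient-free, population-based training (a generic elitist stand-in, not the actual algae algorithm): candidate weight vectors are sampled around the current best and kept only when they lower the loss.

```python
import numpy as np

rng = np.random.default_rng(1)

def population_train(loss_fn, dim, pop_size=20, iters=200, sigma=0.1):
    """Elitist population-based optimization: sample candidates around
    the best weight vector so far, keep improvements only."""
    pop = rng.normal(size=(pop_size, dim))
    best = min(pop, key=loss_fn)
    for _ in range(iters):
        cand_pop = best + sigma * rng.normal(size=(pop_size, dim))
        cand = min(cand_pop, key=loss_fn)
        if loss_fn(cand) < loss_fn(best):
            best = cand
    return best

# toy "training loss": a quadratic bowl with its optimum at w = (1, -2)
target = np.array([1.0, -2.0])
best = population_train(lambda w: float(np.sum((w - target) ** 2)), dim=2)
```

Unlike gradient descent, nothing here requires the loss to be differentiable, which is the usual motivation for training networks with swarm- or colony-style optimizers.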
25
Toptaş B, Toptaş M, Hanbay D. Detection of Optic Disc Localization from Retinal Fundus Image Using Optimized Color Space. J Digit Imaging 2022; 35:302-319. [PMID: 35018540 PMCID: PMC8921449 DOI: 10.1007/s10278-021-00566-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Revised: 11/25/2021] [Accepted: 12/06/2021] [Indexed: 10/19/2022] Open
Abstract
Optic disc localization offers an important clue for detecting other retinal components such as the macula, fovea, and retinal vessels. With the correct detection of this area, sudden vision loss caused by diseases such as age-related macular degeneration and diabetic retinopathy can be prevented, so computer-aided diagnosis systems in this field are increasing. In this paper, an automated method for optic disc localization is proposed. In the proposed method, the fundus images are moved from the RGB color space to a new color space by using an artificial bee colony algorithm; in the new color space, the localization of the optic disc is clearer than in RGB. A feature matrix is created from the color pixel values of image patches that contain the optic disc and image patches that do not. A conversion matrix is then created with randomly determined initial values, and these two matrices are processed by the artificial bee colony algorithm. Ultimately, the conversion matrix becomes optimal and is applied to the original fundus images, moving them to the new color space. Thresholding is applied to these images, and the optic disc localization is obtained. The proposed method was tested on three public datasets, achieving accuracy rates of 100%, 96.37%, and 94.42% on the DRIVE, DRIONS, and MESSIDOR datasets, respectively.
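The application step of such a pipeline can be sketched as a 3x3 conversion matrix applied to every RGB pixel followed by thresholding one output channel; the matrix below is a placeholder identity, whereas in the paper it is the result of artificial bee colony optimization:

```python
import numpy as np

def convert_and_threshold(img_rgb, M, thresh):
    """Apply a 3x3 color-space conversion matrix to each RGB pixel,
    then threshold one output channel to localize the bright optic disc."""
    converted = img_rgb.reshape(-1, 3) @ M.T
    channel = converted[:, 0].reshape(img_rgb.shape[:2])
    return channel > thresh

# toy 1x2 image: a bright "disc" pixel next to a dark background pixel
img = np.array([[[0.9, 0.8, 0.7], [0.1, 0.1, 0.1]]])
M = np.eye(3)                 # identity stands in for the optimized matrix
mask = convert_and_threshold(img, M, thresh=0.5)
# mask: [[True, False]] — only the bright pixel survives the threshold
```

The optimizer's job is to choose M so that disc and non-disc patches become maximally separable under this simple threshold.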
Affiliation(s)
- Buket Toptaş
- Computer Eng. Dept., Engineering and Natural Science Faculty, Bandırma Onyedi Eylül University, Balıkesir, Turkey
- Murat Toptaş
- Software Eng. Dept., Engineering and Natural Science Faculty, Bandırma Onyedi Eylül University, Balıkesir, Turkey
- Davut Hanbay
- Computer Eng. Dept., Engineering Faculty, Inonu University, 44280 Malatya, Turkey
Collapse
|
26
|
Thainimit S, Chaipayom P, Sa-arnwong N, Gansawat D, Petchyim S, Pongrujikorn S. Robotic process automation support in telemedicine: Glaucoma screening usage case. Inform Med Unlocked 2022. [DOI: 10.1016/j.imu.2022.101001]
27
AlRyalat SA, Al-Ryalat N, Ryalat S. Machine learning in glaucoma: a bibliometric analysis comparing computer science and medical fields' research. Expert Rev Ophthalmol 2021. [DOI: 10.1080/17469899.2021.1964956]
Affiliation(s)
- Nosaiba Al-Ryalat: Department of Radiology and Nuclear Science, The University of Jordan, Amman, Jordan
- Soukaina Ryalat: Department of Maxillofacial Surgery, The University of Jordan, Amman, Jordan

28
Han Y, Li W, Liu M, Wu Z, Zhang F, Liu X, Tao L, Li X, Guo X. Application of an Anomaly Detection Model to Screen for Ocular Diseases Using Color Retinal Fundus Images: Design and Evaluation Study. J Med Internet Res 2021; 23:e27822. [PMID: 34255681] [PMCID: PMC8317033] [DOI: 10.2196/27822]
Abstract
BACKGROUND The supervised deep learning approach provides state-of-the-art performance in a variety of fundus image classification tasks, but it is not applicable for screening tasks with numerous or unknown disease types. The unsupervised anomaly detection (AD) approach, which needs only normal samples to develop a model, may be a workable and cost-saving method of screening for ocular diseases. OBJECTIVE This study aimed to develop and evaluate an AD model for detecting ocular diseases on the basis of color fundus images. METHODS A generative adversarial network-based AD method for detecting possible ocular diseases was developed and evaluated using 90,499 retinal fundus images derived from 4 large-scale real-world data sets. Four other independent external test sets were used for external testing and further analysis of the model's performance in detecting 6 common ocular diseases (diabetic retinopathy [DR], glaucoma, cataract, age-related macular degeneration [AMD], hypertensive retinopathy [HR], and myopia), DR of different severity levels, and 36 categories of abnormal fundus images. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity of the model's performance were calculated and presented. RESULTS Our model achieved an AUC of 0.896 with 82.69% sensitivity and 82.63% specificity in detecting abnormal fundus images in the internal test set, and it achieved an AUC of 0.900 with 83.25% sensitivity and 85.19% specificity in 1 external proprietary data set. In the detection of 6 common ocular diseases, the AUCs for DR, glaucoma, cataract, AMD, HR, and myopia were 0.891, 0.916, 0.912, 0.867, 0.895, and 0.961, respectively. Moreover, the AD model had an AUC of 0.868 for detecting any DR, 0.908 for detecting referable DR, and 0.926 for detecting vision-threatening DR.
CONCLUSIONS The AD approach achieved high sensitivity and specificity in detecting ocular diseases on the basis of fundus images, which implies that this model might be an efficient and economical tool for optimizing current clinical pathways for ophthalmologists. Future studies are required to evaluate the practical applicability of the AD approach in ocular disease screening.
Affiliation(s)
- Yong Han, Weiming Li, Mengmeng Liu, Zhiyuan Wu, Feng Zhang, Xiangtong Liu, Lixin Tao, and Xiuhua Guo: Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China; Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
- Xia Li: Department of Mathematics and Statistics, La Trobe University, Melbourne, Australia

29
Ruamviboonsuk P, Chantra S, Seresirikachorn K, Ruamviboonsuk V, Sangroongruangsri S. Economic Evaluations of Artificial Intelligence in Ophthalmology. Asia Pac J Ophthalmol (Phila) 2021; 10:307-316. [PMID: 34261102] [DOI: 10.1097/apo.0000000000000403]
Abstract
Artificial intelligence (AI) is expected to bring significant quality enhancements and cost savings to ophthalmology. Although studies on AI have grown rapidly in recent years, real-world adoption of AI is still rare. One reason may be that the data derived from economic evaluations of AI in health care, which policy makers use when adopting new technology, have been fragmented and scarce. Most data on the economics of AI in ophthalmology come from diabetic retinopathy (DR) screening. Few studies have classified the costs of AI software, which is considered a medical device, as direct medical costs. These costs comprise initial and maintenance costs: the initial costs may include investment in research and development and costs for validation on different datasets, while the maintenance costs include costs for algorithm upgrades and hardware maintenance in the long run. The cost of AI should be balanced between manufacturing price and reimbursements, since it may pose significant challenges and barriers to providers. Evidence from cost-effectiveness analyses shows that AI, either standalone or used with humans, is more cost-effective than manual DR screening. Notably, the economic evaluation of AI for DR screening can serve as a model for applying AI to other ophthalmic diseases.
Affiliation(s)
- Paisan Ruamviboonsuk, Somporn Chantra, and Kasem Seresirikachorn: Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
- Varis Ruamviboonsuk: Department of Biochemistry, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand
- Sermsiri Sangroongruangsri: Social and Administrative Pharmacy Division, Department of Pharmacy, Faculty of Pharmacy, Mahidol University, Bangkok, Thailand

30
Accurate Diagnosis of Diabetic Retinopathy and Glaucoma Using Retinal Fundus Images Based on Hybrid Features and Genetic Algorithm. Appl Sci (Basel) 2021. [DOI: 10.3390/app11136178]
Abstract
Diabetic retinopathy (DR) and glaucoma can both become incurable if they are not detected early enough. Therefore, ophthalmologists worldwide strive to detect them by personally screening retinal fundus images. However, this procedure is not only tedious, subjective, and labor-intensive, but also error-prone. Worse yet, it may not even be attainable in some countries where ophthalmologists are in short supply. A practical solution to this complicated problem is a computer-aided diagnosis (CAD) system, which is the objective of this work. We propose an accurate system to detect either of the two diseases at once from retinal fundus images. The accuracy stems from two factors. First, we calculate a large set of hybrid features belonging to three groups: first-order statistics (FOS), higher-order statistics (HOS), and histogram of oriented gradients (HOG). These features are then reduced using a genetic algorithm scheme that selects only the most relevant and significant of them. Finally, the selected features are fed to a classifier to detect one of three classes: DR, glaucoma, or normal. Four classifiers are tested for this job: decision tree (DT), naive Bayes (NB), k-nearest neighbor (kNN), and linear discriminant analysis (LDA). The experimental work, conducted on three publicly available datasets (two of them merged into one), shows impressive performance in terms of four standard classification metrics, each computed using k-fold cross-validation for added credibility. The highest accuracy was provided by DT: 96.67% for DR, 100% for glaucoma, and 96.67% for normal.
31
Zheng B, Jiang Q, Lu B, He K, Wu MN, Hao XL, Zhou HX, Zhu SJ, Yang WH. Five-Category Intelligent Auxiliary Diagnosis Model of Common Fundus Diseases Based on Fundus Images. Transl Vis Sci Technol 2021; 10:20. [PMID: 34132760] [PMCID: PMC8212443] [DOI: 10.1167/tvst.10.7.20]
Abstract
Purpose The discrepancy between the number of ophthalmologists and the number of patients in China is large. Retinal vein occlusion (RVO), high myopia, glaucoma, and diabetic retinopathy (DR) are common fundus diseases. Therefore, in this study, a five-category intelligent auxiliary diagnosis model for common fundus diseases is proposed, and the model's area of focus is marked. Methods A total of 2000 fundus images were collected, and 3 different 5-category intelligent auxiliary diagnosis models for common fundus diseases were trained via different transfer learning and image preprocessing techniques. A total of 1134 fundus images were used for testing. The clinical diagnostic results were compared with the models' diagnostic results. The main evaluation indicators included sensitivity, specificity, F1-score, area under the curve (AUC), 95% confidence interval (CI), kappa, and accuracy. Interpretation methods were used to obtain the model's area of focus in the fundus image. Results The accuracy rates of the 3 intelligent auxiliary diagnosis models on the 1134 fundus images were all above 90%, the kappa values were all above 88%, the diagnostic consistency was good, and the AUC approached 0.90. For the 4 common fundus diseases, the best sensitivity, specificity, and F1-scores of the 3 models were 88.27%, 97.12%, and 84.02%; 89.94%, 99.52%, and 93.90%; 95.24%, 96.43%, and 85.11%; and 88.24%, 98.21%, and 89.55%, respectively. Conclusions This study designed a five-category intelligent auxiliary diagnosis model for common fundus diseases. It can be used to obtain the diagnostic category of fundus images and the model's area of focus. Translational Relevance This study will help primary care doctors provide effective services to all ophthalmologic patients.
Affiliation(s)
- Bo Zheng, Bing Lu, Kai He, Mao-Nian Wu, Xiu-Lan Hao, and Shao-Jun Zhu: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang, China
- Qin Jiang and Wei-Hua Yang: Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
- Hong-Xia Zhou: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang, China; College of Computer and Information, Hehai University, Nanjing, Jiangsu, China

32
Liu Y, Yip LWL, Zheng Y, Wang L. Glaucoma screening using an attention-guided stereo ensemble network. Methods 2021; 202:14-21. [PMID: 34153436] [DOI: 10.1016/j.ymeth.2021.06.010]
Abstract
Glaucoma is a chronic eye disease that causes gradual vision loss and eventually blindness. Accurate glaucoma screening at an early stage is critical to mitigate its aggravation. Extracting high-quality features is critical in training classification models. In this paper, we propose a deep ensemble network with an attention mechanism that detects glaucoma using optic nerve head stereo images. The network consists of two main sub-components: a deep Convolutional Neural Network that obtains global information, and an Attention-Guided Network that localizes the optic disc while maintaining beneficial information from other image regions. Both images in a stereo pair are fed into these sub-components, and the outputs are fused together to generate the final prediction. Abundant image features from different views and regions are extracted, providing compensation when one of the stereo images is of poor quality. The attention-based localization method is trained in a weakly supervised manner and requires only image-level annotation, which avoids expensive segmentation labelling. Results from real patient images show that our approach increases recall (sensitivity) from the state-of-the-art 88.89% to 95.48%, while maintaining precision and performance stability. The marked reduction in the false-negative rate can significantly enhance the chance of successful early diagnosis of glaucoma.
Affiliation(s)
- Yuan Liu, Yuanjin Zheng, and Lipo Wang: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

33
Mrad Y, Elloumi Y, Akil M, Bedoui MH. A fast and accurate method for glaucoma screening from smartphone-captured fundus images. Ing Rech Biomed 2021. [DOI: 10.1016/j.irbm.2021.06.004]
34
ECNet: An evolutionary convolutional network for automated glaucoma detection using fundus images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102559]
35
Automated segmentation of optic disc and optic cup for glaucoma assessment using improved UNET++ architecture. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.05.011]
36
Singh LK, Garg H, Khanna M, Bhadoria RS. An enhanced deep image model for glaucoma diagnosis using feature-based detection in retinal fundus. Med Biol Eng Comput 2021; 59:333-353. [PMID: 33439453] [DOI: 10.1007/s11517-020-02307-5]
Abstract
This paper proposes a deep image analysis-based model for glaucoma diagnosis that uses several features to detect the formation of glaucoma in the retinal fundus. These features are combined with commonly extracted parameters such as the inferior, superior, nasal, and temporal region areas and the cup-to-disc ratio, which together form a deep image analysis. The proposed model is used to investigate various aspects related to the prediction of glaucoma in retinal fundus images, helping the ophthalmologist make better decisions for the human eye. The proposed model combines four machine learning algorithms and provides a classification accuracy of 98.60%, while other existing models such as support vector machine (SVM), K-nearest neighbors (KNN), and Naïve Bayes individually provide accuracies of 97.61%, 90.47%, and 95.23%, respectively. These results clearly demonstrate that the proposed model offers the best methodology for an early diagnosis of glaucoma in the retinal fundus.
Affiliation(s)
- Law Kumar Singh: Department of Computer Science and Engineering, School of Engineering and Technology, Sharda University, Knowledge Park III, Greater Noida, India; Department of Computer Science and Engineering, Hindustan College of Science and Technology, Mathura, India
- Hitendra Garg: Department of Computer Engineering and Applications, GLA University, Mathura, India
- Munish Khanna: Department of Computer Science and Engineering, Hindustan College of Science and Technology, Mathura, India
- Robin Singh Bhadoria: Department of Computer Science and Engineering, Birla Institute of Applied Sciences (BIAS), Bhimtal, Uttarakhand, India

37
Noninvasive temporal detection of early retinal vascular changes during diabetes. Sci Rep 2020; 10:17370. [PMID: 33060607] [PMCID: PMC7567079] [DOI: 10.1038/s41598-020-73486-2]
Abstract
Diabetes-associated complications, including diabetic retinopathy and loss of vision, are major health concerns. Early retinal vascular changes during diabetes are not well documented, and only a few studies have addressed this domain. The purpose of this study was to noninvasively evaluate temporal changes in retinal vasculature at very early stages of diabetes using fundus images from preclinical models of diabetes.
Non-diabetic and Akita/+ male mice with different durations of diabetes were subjected to fundus imaging using a Micron III imaging system. The images were obtained from 4-week- (onset of diabetes), 8-week-, 16-week-, and 24-week-old male Akita/+ and non-diabetic mice. In total, 104 fundus images were subjected to analysis for various feature extractions. A combination of the Canny Edge Detector and Angiogenesis Analyzer plug-ins in ImageJ was used to quantify various retinal vascular changes in the fundus images. Statistical analyses were conducted to determine significant differences in the various extracted features between fundus images of diabetic and non-diabetic animals. Our novel image analysis method led to the extraction of over 20 features. These results indicated that some of these features changed significantly with a short duration of diabetes, while others remained the same but changed after a longer duration of diabetes. These patterns likely distinguish acute (protective) and chronic (damaging) changes associated with diabetes. We show that with a combination of various plug-ins, one can extract over 20 features from retinal vasculature fundus images. These features change during diabetes, thus allowing quantification of the quality of retinal vascular architecture as a biomarker for disease progression. In addition, our method was able to identify unique differences among diabetic mice with different durations of diabetes. The ability to noninvasively detect temporal retinal vascular changes during diabetes could lead to identification of specific markers important in the development and progression of diabetes-mediated microvascular changes, evaluation of therapeutic interventions, and eventual reversal of these changes in order to stop or delay disease progression.
38
A Review on the optic disc and optic cup segmentation and classification approaches over retinal fundus images for detection of glaucoma. SN Appl Sci 2020. [DOI: 10.1007/s42452-020-03221-z]
39
Martins J, Cardoso JS, Soares F. Offline computer-aided diagnosis for Glaucoma detection using fundus images targeted at mobile devices. Comput Methods Programs Biomed 2020; 192:105341. [PMID: 32155534] [DOI: 10.1016/j.cmpb.2020.105341]
Abstract
BACKGROUND AND OBJECTIVE Glaucoma, an eye condition that leads to permanent blindness, is typically asymptomatic and therefore difficult to diagnose in time. However, if diagnosed in time, glaucoma can be effectively slowed down with adequate treatment; hence, an early diagnosis is of utmost importance. Nonetheless, conventional approaches to diagnosing glaucoma rely on expensive and bulky equipment that requires qualified experts, making it difficult, costly, and time-consuming to screen large numbers of people. Consequently, new alternatives that overcome these issues should be explored. METHODS This work proposes an interpretable computer-aided diagnosis (CAD) pipeline that is capable of diagnosing glaucoma using fundus images and running offline on mobile devices. Several public datasets of fundus images were merged and used to build Convolutional Neural Networks (CNNs) that perform segmentation and classification tasks. These networks are then used to build a pipeline for glaucoma assessment that outputs a glaucoma confidence level and also provides several morphological features and segmentations of relevant structures, resulting in an interpretable glaucoma diagnosis. To assess the performance of this method in a restricted environment, the pipeline was integrated into a mobile application, and its time and space complexities were assessed. RESULTS On the test set, the developed pipeline achieved an Intersection over Union (IoU) of 0.91 and 0.75 in the optic disc and optic cup segmentation, respectively. With regard to classification, an accuracy of 0.87 with a sensitivity of 0.85 and an AUC of 0.93 were attained. Moreover, the pipeline runs on an average Android smartphone in under two seconds. CONCLUSIONS The results demonstrate the potential of this method to contribute to an early glaucoma diagnosis. The proposed approach achieved similar or slightly better metrics than current CAD systems for glaucoma assessment while running on more restricted devices. The pipeline can, therefore, be used to construct accurate and affordable CAD systems that could enable large glaucoma screenings, contributing to an earlier diagnosis of this condition.
Affiliation(s)
- José Martins and Filipe Soares: Fraunhofer Portugal AICOS, Rua Alfredo Allen 455/461, Porto 4200-135, Portugal
- Jaime S Cardoso: INESC TEC and Faculty of Engineering of the University of Porto, Portugal

40
Channel and Spatial Attention Regression Network for Cup-to-Disc Ratio Estimation. Electronics 2020. [DOI: 10.3390/electronics9060909]
Abstract
The cup-to-disc ratio (CDR) is of great importance in assessing structural changes at the optic nerve head (ONH) and in diagnosing glaucoma. While most efforts have focused on acquiring the CDR through CNN-based segmentation algorithms followed by its calculation, these methods usually focus only on the features within the convolution kernel, which is, after all, a local operation that ignores the contribution of rich global features (such as distant pixels) to the current features. In this paper, a new end-to-end channel and spatial attention regression deep learning network is proposed that deduces the CDR from a regression perspective and combines the self-attention mechanism with the regression network. Our network consists of four modules: the feature extraction module, which extracts deep features expressing the complicated patterns of the optic disc (OD) and optic cup (OC); the attention module, including the channel attention block (CAB) and the spatial attention block (SAB), which improves feature representation by aggregating long-range contextual information; the regression module, which deduces the CDR directly; and the segmentation-auxiliary module, which focuses the model's attention on relevant features instead of the background region. In particular, the CAB selects relatively important feature maps in the channel dimension, shifting the emphasis onto the OD and OC regions, while the SAB learns discriminative feature representations at the pixel level by capturing intra-feature-map relationships. Experimental results on the ORIGA dataset show that our method obtains an absolute CDR error of 0.067 and a Pearson's correlation coefficient of 0.694 in estimating the CDR, indicating great potential for CDR prediction.
41
Barros DMS, Moura JCC, Freire CR, Taleb AC, Valentim RAM, Morais PSG. Machine learning applied to retinal image processing for glaucoma detection: review and perspective. Biomed Eng Online 2020; 19:20. [PMID: 32293466] [PMCID: PMC7160894] [DOI: 10.1186/s12938-020-00767-2]
Abstract
INTRODUCTION This is a systematic review of the main algorithms using machine learning (ML) in retinal image processing for glaucoma diagnosis and detection. ML has proven to be a significant tool for the development of computer-aided technology. Furthermore, secondary research has been widely conducted over the years for ophthalmologists. Such aspects indicate the importance of ML in the context of retinal image processing. METHODS The publications chosen to compose this review were gathered from the Scopus, PubMed, IEEEXplore, and Science Direct databases, and the papers published between 2014 and 2019 were selected. Studies that used the segmented optic disc method were excluded, and only methods that applied the classification process were considered. A systematic analysis was performed on these studies, and the results were summarized. DISCUSSION Among the architectures used for ML in retinal image processing, some studies applied feature extraction and dimensionality reduction to detect and isolate important parts of the analyzed image, while other works utilized a deep convolutional network. Based on the evaluated research, the main difference between the architectures is the number of images demanded for processing and the high computational cost required to use deep learning techniques. CONCLUSIONS All the analyzed publications indicated that it is possible to develop an automated system for glaucoma diagnosis. The disease severity and its high occurrence rates justify the research that has been carried out. Recent computational techniques, such as deep learning, have shown to be promising technologies in fundus imaging. Although such techniques require extensive databases and high computational costs, the studies show that data augmentation and transfer learning have been applied as alternative ways to optimize and reduce network training.
Affiliation(s)
- Daniele M S Barros, Julio C C Moura, Cefas R Freire, Ricardo A M Valentim, and Philippi S G Morais: Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil

42
Murtagh P, Greene G, O'Brien C. Current applications of machine learning in the screening and diagnosis of glaucoma: a systematic review and Meta-analysis. Int J Ophthalmol 2020; 13:149-162. [PMID: 31956584 DOI: 10.18240/ijo.2020.01.22] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2019] [Accepted: 09/23/2019] [Indexed: 12/22/2022] Open
Abstract
AIM To compare the effectiveness of two well-described machine learning modalities, optical coherence tomography (OCT) and fundal photography, in terms of diagnostic accuracy in the screening and diagnosis of glaucoma. METHODS A systematic search of the Embase and PubMed databases was undertaken up to 1 February 2019. Articles were identified alongside their reference lists and relevant studies were aggregated. A meta-analysis of diagnostic accuracy in terms of area under the receiver operating characteristic curve (AUROC) was performed. For studies that did not report an AUROC, reported sensitivity and specificity values were combined to create a summary ROC curve, which was included in the meta-analysis. RESULTS A total of 23 studies were deemed suitable for inclusion in the meta-analysis: 10 papers from the OCT cohort and 13 from the fundal photograph cohort. Random-effects meta-analysis gave a pooled AUROC of 0.957 (95%CI=0.917 to 0.997) for fundal photographs and 0.923 (95%CI=0.889 to 0.957) for the OCT cohort. The slightly higher accuracy of the fundal photograph methods is likely attributable to the much larger database of images used to train the models (59 788 vs 1743). CONCLUSION No demonstrable difference was shown between the diagnostic accuracies of the two modalities. The ease of access and lower cost of fundal photograph acquisition make it the more appealing option for screening on a global scale; however, further studies need to be undertaken, owing largely to the poor study quality associated with the fundal photography cohort.
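Random-effects pooling of per-study estimates, as used in this meta-analysis, can be sketched with the standard DerSimonian-Laird estimator; this is a generic implementation, not the authors' code, and any input values are illustrative:

```python
import numpy as np


def random_effects_pool(estimates, variances):
    """DerSimonian-Laird random-effects pooled estimate and its variance.

    `estimates` are per-study effect sizes (e.g. AUROCs) and `variances`
    their within-study variances.
    """
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)              # fixed-effect pooled mean
    q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    return pooled, 1.0 / np.sum(w_re)
```

When between-study heterogeneity (tau-squared) is zero, the result collapses to the fixed-effect inverse-variance pool; otherwise the wider random-effects weights yield the broader confidence intervals reported above.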
Collapse
Affiliation(s)
- Patrick Murtagh
- Department of Ophthalmology, Mater Misericordiae University Hospital, Eccles Street, Dublin D07 R2WY, Ireland
| | - Garrett Greene
- RCSI Education and Research Centre, Beaumont Hospital, Dublin D05 AT88, Ireland
| | - Colm O'Brien
- Department of Ophthalmology, Mater Misericordiae University Hospital, Eccles Street, Dublin D07 R2WY, Ireland
| |
Collapse
|
43
|
Orlando JI, Fu H, Barbosa Breda J, van Keer K, Bathula DR, Diaz-Pinto A, Fang R, Heng PA, Kim J, Lee J, Lee J, Li X, Liu P, Lu S, Murugesan B, Naranjo V, Phaye SSR, Shankaranarayana SM, Sikka A, Son J, van den Hengel A, Wang S, Wu J, Wu Z, Xu G, Xu Y, Yin P, Li F, Zhang X, Xu Y, Bogunović H. REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med Image Anal 2020; 59:101570. [DOI: 10.1016/j.media.2019.101570] [Citation(s) in RCA: 83] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2019] [Revised: 07/26/2019] [Accepted: 10/01/2019] [Indexed: 01/01/2023]
|
44
|
Luo LJ, Nguyen DD, Lai JY. Benzoic acid derivative-modified chitosan-g-poly(N-isopropylacrylamide): Methoxylation effects and pharmacological treatments of Glaucoma-related neurodegeneration. J Control Release 2020; 317:246-258. [DOI: 10.1016/j.jconrel.2019.11.038] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2019] [Revised: 11/11/2019] [Accepted: 11/28/2019] [Indexed: 01/29/2023]
|
45
|
Automated detection of glaucoma using optical coherence tomography angiogram images. Comput Biol Med 2019; 115:103483. [PMID: 31698235 DOI: 10.1016/j.compbiomed.2019.103483] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2019] [Revised: 09/25/2019] [Accepted: 10/03/2019] [Indexed: 11/24/2022]
Abstract
Glaucoma is a malady that occurs due to the buildup of fluid pressure in the inner eye. Detection of glaucoma at an early stage is crucial, as 111.8 million people are expected to be afflicted with glaucoma globally by 2040. Feature extraction methods have proven promising in the diagnosis of glaucoma. In this study, we used optical coherence tomography angiogram (OCTA) images for automated glaucoma detection. Oculus sinister (OS) images were obtained from the left eye and oculus dexter (OD) images from the right eye of subjects. We used OS macular, OS disc, OD macular and OD disc images. In this work, the local phase quantization (LPQ) technique was applied to extract features; information fusion and principal component analysis (PCA) were used to combine and reduce them. Our method achieved its highest accuracy of 94.3% using LPQ coupled with PCA on right-eye optic disc images with an AdaBoost classifier. The proposed technique can aid clinicians in detecting glaucoma at an early stage. The developed model should be tested with more images before being deployed for clinical application.
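A feature-reduction-plus-boosting pipeline of the kind this study describes can be approximated in scikit-learn; the synthetic 256-dimensional vectors below merely stand in for real LPQ descriptors, and every size and parameter is an illustrative assumption rather than the study's setting:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for LPQ texture descriptors (LPQ yields 256-bin
# histograms); real inputs would come from OCTA disc/macular images.
X, y = make_classification(n_samples=300, n_features=256,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

clf = Pipeline([
    ("pca", PCA(n_components=20)),                             # reduce features
    ("ada", AdaBoostClassifier(n_estimators=100, random_state=0)),
])
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```

The pipeline keeps the PCA projection fitted on training data only, so the held-out score is not inflated by information leakage.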
Collapse
|
46
|
Benzebouchi NE, Azizi N, Ashour AS, Dey N, Sherratt RS. Multi-modal classifier fusion with feature cooperation for glaucoma diagnosis. J EXP THEOR ARTIF IN 2019. [DOI: 10.1080/0952813x.2019.1653383] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Affiliation(s)
- Nacer Eddine Benzebouchi
- Computer Science Department, Labged Laboratory, Badji Mokhtar Annaba University, Annaba, Algeria
| | - Nabiha Azizi
- Computer Science Department, Labged Laboratory, Badji Mokhtar Annaba University, Annaba, Algeria
| | - Amira S. Ashour
- Department of Electronics Engineering and Communication Engineering, Tanta University, Tanta, Egypt
| | - Nilanjan Dey
- Department of Information Technology, Techno India College of Technology, Kolkata, India
| | - R. Simon Sherratt
- Department of Biomedical Engineering, University of Reading, Reading, UK
| |
Collapse
|
47
|
Raghavendra U, Gudigar A, Bhandary SV, Rao TN, Ciaccio EJ, Acharya UR. A Two Layer Sparse Autoencoder for Glaucoma Identification with Fundus Images. J Med Syst 2019; 43:299. [DOI: 10.1007/s10916-019-1427-x] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Accepted: 07/21/2019] [Indexed: 12/12/2022]
|
48
|
Maheshwari S, Kanhangad V, Pachori RB, Bhandary SV, Acharya UR. Automated glaucoma diagnosis using bit-plane slicing and local binary pattern techniques. Comput Biol Med 2019; 105:72-80. [DOI: 10.1016/j.compbiomed.2018.11.028] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2018] [Revised: 11/30/2018] [Accepted: 11/30/2018] [Indexed: 12/18/2022]
|
49
|
Abstract
Automated medical image analysis is an emerging field of research that identifies disease with the help of imaging technology. Diabetic retinopathy (DR) is a retinal disease diagnosed in diabetic patients. Deep neural networks (DNNs) are widely used to classify diabetic retinopathy from fundus images collected from suspected persons. The proposed DR classification system achieves a symmetrically optimized solution through the combination of a Gaussian mixture model (GMM), visual geometry group network (VGGNet), singular value decomposition (SVD) and principal component analysis (PCA), and softmax, for region segmentation, high-dimensional feature extraction, feature selection, and fundus image classification, respectively. The experiments were performed using a standard Kaggle dataset containing 35,126 images. The proposed VGG-19 DNN-based DR model outperformed AlexNet and the scale-invariant feature transform (SIFT) in terms of classification accuracy and computational time. Utilization of PCA and SVD feature selection on the fully connected (FC) layers demonstrated classification accuracies of 92.21%, 98.34%, 97.96%, and 98.13% for FC7-PCA, FC7-SVD, FC8-PCA, and FC8-SVD, respectively.
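The FC-layer feature-selection step (PCA vs. SVD applied to fully connected activations) can be sketched as follows; the random 4096-dimensional matrix stands in for real VGG-19 FC7 activations, and the sample and component counts are assumptions, not the paper's values:

```python
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD

# Stand-in for FC7 activations (VGG-19's FC7 layer has 4096 units);
# in the paper these would come from fundus images pushed through the net.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 4096))            # 100 images x 4096 units

pca = PCA(n_components=50).fit(features)           # centers the data, then rotates
svd = TruncatedSVD(n_components=50).fit(features)  # low-rank SVD, no centering
reduced_pca = pca.transform(features)
reduced_svd = svd.transform(features)
```

The practical difference is that PCA subtracts the per-feature mean before decomposing while TruncatedSVD factors the raw matrix, which is one plausible reason the paper reports the two reductions separately for each FC layer.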
Collapse
|
50
|
Nguyen PA, Jack Li YC. Artificial Intelligence in Clinical Implications. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 166:A1. [PMID: 30415724 DOI: 10.1016/j.cmpb.2018.10.022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Affiliation(s)
- Phung-Anh Nguyen
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei, Taiwan
| | - Yu-Chuan Jack Li
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei, Taiwan; Graduate Institute of Biomedical Informatics, College of Medicine Science and Technology, Taipei Medical University, Taipei, Taiwan; Chair, Dept. of Dermatology, Wan Fang Hospital, Taipei, Taiwan.
| |
Collapse
|