1
Jones A, Vijayan TB, John S. Diagnosing Cataracts in the Digital Age: A Survey on AI, Metaverse, and Digital Twin Applications. Semin Ophthalmol 2024:1-8. [PMID: 39300918] [DOI: 10.1080/08820538.2024.2403436]
Abstract
PURPOSE The study explores the evolving landscape of cataract diagnosis, focusing on both traditional methods and innovative technological integrations. It aims to address challenges with subjectivity in traditional cataract grading and to evaluate how new technologies can enhance diagnostic accuracy and accessibility. METHODS The research introduces and examines the use of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) in automating and improving cataract screening processes. It also explores the role of the Metaverse, Digital Twins, and Teleophthalmology for immersive patient education, real-time virtual replicas of eyes, and remote access to specialized care. RESULTS Various ML and DL techniques demonstrated significant accuracy in cataract detection. The integration of these technologies, along with the Metaverse, Digital Twins, and Teleophthalmology, provides a comprehensive framework for accurate and accessible cataract diagnosis. CONCLUSION There is a notable paradigm shift toward individualized, predictive, and transformative eye care. The advancements in technology address existing diagnostic challenges and mitigate the shortage of ophthalmologists by extending high-quality care to underserved regions. These developments pave the way for improved cataract management and broader accessibility.
Affiliation(s)
- Aida Jones
- Department of ECE, KCG College of Technology, Chennai, India
- Sheila John
- Department of Teleophthalmology, Sankara Nethralaya, Medical Research Foundation, Chennai, India
2
Saqib SM, Iqbal M, Zubair Asghar M, Mazhar T, Almogren A, Ur Rehman A, Hamam H. Cataract and glaucoma detection based on Transfer Learning using MobileNet. Heliyon 2024; 10:e36759. [PMID: 39281545] [PMCID: PMC11402175] [DOI: 10.1016/j.heliyon.2024.e36759]
Abstract
Cataract is a serious eye condition that can cause blindness, and early, accurate detection is the most effective way to reduce risk and avert blindness. Glaucoma is a neurodegenerative condition that damages the optic nerve head. Machine learning and deep learning systems for glaucoma and cataract detection have recently received considerable research attention, and automatic detection of these diseases also draws on transfer learning with deep architectures such as VGGNet, ResNet, and MobileNet. The authors propose models based on MobileNetV1 and MobileNetV2, optimized architectures that build lightweight deep neural networks from depthwise separable convolutions. The experiments used publicly available datasets containing both cataract & normal and glaucoma & normal images, and the results showed that the proposed model achieved the highest accuracy compared to the other models.
Affiliation(s)
- Sheikh Muhammad Saqib
- Department of Computing and Information Technology, Gomal University, D.I.Khan 29050, Pakistan
- Muhammad Iqbal
- Gomal Research Institute of Computing (GRIC), Faculty of Computing, Gomal University, D.I. Khan 29050, Pakistan
- Muhammad Zubair Asghar
- Gomal Research Institute of Computing (GRIC), Faculty of Computing, Gomal University, D.I. Khan 29050, Pakistan
- Tehseen Mazhar
- Department of Computer Science, Virtual University of Pakistan, Lahore, 51000, Pakistan
- Ahmad Almogren
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, 11633, Saudi Arabia
- Ateeq Ur Rehman
- School of Computing, Gachon University, Seongnam, 13120, Republic of Korea
- Habib Hamam
- Faculty of Engineering, Université de Moncton, Moncton, NB, E1A3E9, Canada
- School of Electrical Engineering, University of Johannesburg, Johannesburg, 2006, South Africa
- Hodmas University College, Taleh Area, Mogadishu, Somalia
- Bridges for Academic Excellence, Tunis, Tunisia
3
Quan X, Ou X, Gao L, Yin W, Hou G, Zhang H. SCINet: A Segmentation and Classification Interaction CNN Method for Arteriosclerotic Retinopathy Grading. Interdiscip Sci 2024:10.1007/s12539-024-00650-x. [PMID: 39222258] [DOI: 10.1007/s12539-024-00650-x]
Abstract
Cardiovascular and cerebrovascular diseases are common and pose a grave threat to human health; even with advanced and comprehensive treatment, mortality remains high. Because arteriosclerosis is an important indicator of the severity of cardiovascular and cerebrovascular disease, detecting arteriosclerotic retinopathy is imperative. However, such detection requires expensive and time-consuming manual evaluation, while end-to-end deep learning methods need interpretable designs that highlight task-related features. Given the importance of automatic arteriosclerotic retinopathy grading, we propose a segmentation and classification interaction network (SCINet) with an interactive architecture for grading arteriosclerotic retinopathy. After IterNet segments the retinal vessels from the original fundus images, a backbone feature extractor roughly extracts features from the segmented and original fundus images and further enhances them through a vessel-aware module. A final classifier module generates the fundus arteriosclerosis grade. Specifically, the vessel-aware module uses an attention mechanism to highlight the important vessel features segmented from the original images, thereby achieving information interaction. Under the proposed interactive architecture, the attention mechanism selectively learns vessel features from the segmentation information, reweighting the extracted features and enhancing the significant ones. Extensive experiments confirm the effectiveness of our model: SCINet achieves the best performance on arteriosclerotic retinopathy grading. Moreover, the method is scalable to similar tasks by incorporating segmented images as auxiliary information.
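The vessel-aware interaction described above, attention weights derived from the segmented-vessel branch reweighting the backbone features, might be sketched as follows. This is an illustrative squeeze-and-excitation-style gate; `VesselAwareGate` and its layer sizes are hypothetical, not the published SCINet code:

```python
import torch
import torch.nn as nn

class VesselAwareGate(nn.Module):
    """Illustrative attention gate: features from the segmented-vessel branch
    produce per-channel weights that reweight the backbone features."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze vessel features per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # attention weights in (0, 1)
        )

    def forward(self, backbone_feat: torch.Tensor, vessel_feat: torch.Tensor) -> torch.Tensor:
        return backbone_feat * self.fc(vessel_feat)       # reweight backbone features

gate = VesselAwareGate(channels=64)
out = gate(torch.randn(2, 64, 28, 28), torch.randn(2, 64, 28, 28))
```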
Affiliation(s)
- Xiongwen Quan
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, College of Artificial Intelligence, Nankai University, Tianjin, 300000, China
- Xingyuan Ou
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, College of Artificial Intelligence, Nankai University, Tianjin, 300000, China
- Li Gao
- Ophthalmology, Tianjin Huanhu Hospital, Tianjin, 300000, China
- Wenya Yin
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, College of Artificial Intelligence, Nankai University, Tianjin, 300000, China
- Guangyao Hou
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, College of Artificial Intelligence, Nankai University, Tianjin, 300000, China
- Han Zhang
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, College of Artificial Intelligence, Nankai University, Tianjin, 300000, China
4
Grzybowski A, Jin K, Zhou J, Pan X, Wang M, Ye J, Wong TY. Retina Fundus Photograph-Based Artificial Intelligence Algorithms in Medicine: A Systematic Review. Ophthalmol Ther 2024; 13:2125-2149. [PMID: 38913289] [PMCID: PMC11246322] [DOI: 10.1007/s40123-024-00981-4]
Abstract
We conducted a systematic review of research on artificial intelligence (AI) for retinal fundus photographs. We highlight the use of various AI algorithms, including deep learning (DL) models, in ophthalmic and non-ophthalmic (i.e., systemic) disorders. We found that AI interpretation of retinal images, compared with clinical data and physician experts, represents an innovative solution with demonstrated superior accuracy in identifying many ophthalmic disorders (e.g., diabetic retinopathy (DR), age-related macular degeneration (AMD), optic nerve disorders) and non-ophthalmic disorders (e.g., dementia, cardiovascular disease). A substantial amount of clinical and imaging data is available for this research, supporting the incorporation of AI and DL for automated analysis. AI has the potential to transform healthcare by improving accuracy, speed, and workflow, lowering cost, increasing access, reducing mistakes, and transforming the education and training of healthcare workers.
Affiliation(s)
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznań, Poland
- Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Jingxin Zhou
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Xiangji Pan
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Meizhu Wang
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Juan Ye
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Tien Y Wong
- School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
5
Chen S, Huang L, Li X, Feng Q, Lu H, Mu J. Hotspots and trends of artificial intelligence in the field of cataracts: a bibliometric analysis. Int Ophthalmol 2024; 44:258. [PMID: 38909343] [PMCID: PMC11194187] [DOI: 10.1007/s10792-024-03207-5]
Abstract
PURPOSE To analyze the hotspots and trends in artificial intelligence (AI) research in the field of cataracts. METHODS The Science Citation Index Expanded of the Web of Science Core Collection was used to collect research literature on AI in the field of cataracts, which was analyzed for information such as year, country/region, journal, institution, citations, and keywords. Co-occurrence network visualizations were generated with an online bibliometric analysis platform and the VOSviewer and CiteSpace tools. RESULTS A total of 222 relevant research articles from 41 countries were selected. Since 2019, the number of related articles has increased significantly every year. China (n = 82, 24.92%), the United States (n = 55, 16.72%), and India (n = 26, 7.90%) were the three countries with the most publications, together accounting for 49.54% of the total. The Journal of Cataract and Refractive Surgery (n = 13, 5.86%) and Translational Vision Science & Technology (n = 10, 4.50%) had the most publications. Sun Yat-sen University (n = 25, 11.26%), the Chinese Academy of Sciences (n = 17, 7.66%), and Capital Medical University (n = 16, 7.21%) were the three institutions with the most publications. Keyword analysis showed that cataract, diagnosis, imaging, classification, intraocular lens, and formula are the main topics of current research. CONCLUSIONS This study revealed the hotspots and potential trends of AI in cataract diagnosis and intraocular lens power calculation. AI will become more prevalent in the field of ophthalmology in the future.
Affiliation(s)
- Si Chen
- Department of Ophthalmology, Jinshan Branch of Shanghai Sixth People's Hospital, Shanghai, 201599, China
- Li Huang
- Department of Ophthalmology, Jinshan Branch of Shanghai Sixth People's Hospital, Shanghai, 201599, China
- Xiaoqing Li
- Department of Ophthalmology, Jinshan Branch of Shanghai Sixth People's Hospital, Shanghai, 201599, China
- Qin Feng
- Department of Ophthalmology, Jinshan Branch of Shanghai Sixth People's Hospital, Shanghai, 201599, China
- Huilong Lu
- Department of Ophthalmology, Jinshan Branch of Shanghai Sixth People's Hospital, Shanghai, 201599, China
- Jing Mu
- Department of Ophthalmology, Jinshan Branch of Shanghai Sixth People's Hospital, Shanghai, 201599, China
- Department of Ophthalmology, Shanghai Sixth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200235, China
6
Mai EL, Chen BH, Su TY. Innovative utilization of ultra-wide field fundus images and deep learning algorithms for screening high-risk posterior polar cataract. J Cataract Refract Surg 2024; 50:618-623. [PMID: 38350234] [PMCID: PMC11146186] [DOI: 10.1097/j.jcrs.0000000000001419]
Abstract
PURPOSE To test a cataract shadow projection theory and validate it by developing a deep learning algorithm that enables automatic and stable posterior polar cataract (PPC) screening using fundus images. SETTING Department of Ophthalmology, Far Eastern Memorial Hospital, New Taipei, Taiwan. DESIGN Retrospective chart review. METHODS A deep learning algorithm to automatically detect PPC was developed based on the cataract shadow projection theory. Retrospective data (n = 546) with ultra-wide field fundus images were collected, and various model architectures and fields of view were tested for optimization. RESULTS The final model achieved 80% overall accuracy, with 88.2% sensitivity and 93.4% specificity in PPC screening on a clinical validation dataset (n = 103). CONCLUSIONS This study established a significant relationship between PPC and the projected shadow, which may help surgeons to identify potential PPC risks preoperatively and reduce the incidence of posterior capsular rupture during cataract surgery.
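For reference, the screening metrics quoted above reduce to simple ratios over the confusion matrix; the counts in this sketch are invented for illustration and are not the paper's data:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    # Sensitivity = TP / (TP + FN): fraction of true PPC eyes correctly flagged.
    # Specificity = TN / (TN + FP): fraction of non-PPC eyes correctly passed.
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a 93-eye validation set:
sens, spec = sensitivity_specificity(tp=15, fn=2, tn=71, fp=5)
```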
Affiliation(s)
- Elsa L.C. Mai
- From the Department of Electric Engineering, Yuan-Ze University, Taoyuan City, Taiwan (Mai, Chen, Su); Department of Ophthalmology, Far Eastern Memorial Hospital, Taiwan (Mai); Yuanpei University of Medical Technology, Hsinchu, Taiwan (Mai)
- Bing-Hong Chen
- From the Department of Electric Engineering, Yuan-Ze University, Taoyuan City, Taiwan (Mai, Chen, Su); Department of Ophthalmology, Far Eastern Memorial Hospital, Taiwan (Mai); Yuanpei University of Medical Technology, Hsinchu, Taiwan (Mai)
- Tai-Yuan Su
- From the Department of Electric Engineering, Yuan-Ze University, Taoyuan City, Taiwan (Mai, Chen, Su); Department of Ophthalmology, Far Eastern Memorial Hospital, Taiwan (Mai); Yuanpei University of Medical Technology, Hsinchu, Taiwan (Mai)
7
Arias-Serrano I, Velásquez-López PA, Avila-Briones LN, Laurido-Mora FC, Villalba-Meneses F, Tirado-Espin A, Cruz-Varela J, Almeida-Galárraga D. Artificial intelligence based glaucoma and diabetic retinopathy detection using MATLAB-retrained AlexNet convolutional neural network. F1000Res 2024; 12:14. [PMID: 38826575] [PMCID: PMC11143403] [DOI: 10.12688/f1000research.122288.2]
Abstract
Background Glaucoma and diabetic retinopathy (DR) are the leading causes of irreversible retinal damage and blindness. Early detection of these diseases through regular screening is especially important to prevent progression. Retinal fundus imaging serves as the principal method for diagnosing glaucoma and DR. Consequently, automated detection of eye diseases represents a significant application of retinal image analysis. Compared with classical diagnostic techniques, image classification by convolutional neural networks (CNN) exhibits potential for effective eye disease detection. Methods This paper proposes the use of a MATLAB-retrained AlexNet CNN for computerized identification of eye diseases, particularly glaucoma and diabetic retinopathy, from retinal fundus images. The database was acquired through free-access databases and access upon request. A transfer learning technique was employed to retrain the AlexNet CNN for non-disease (Non_D), glaucoma (Sus_G), and diabetic retinopathy (Sus_R) classification. Moreover, model benchmarking was conducted using ResNet50 and GoogLeNet architectures. A Grad-CAM analysis is also incorporated for each eye condition examined. Results Metrics for validation accuracy, false positives, false negatives, precision, and recall were reported. Validation accuracies for the NetTransfer (I-V) and netAlexNet ranged from 89.7% to 94.3%, demonstrating varied effectiveness in identifying Non_D, Sus_G, and Sus_R categories, with netAlexNet achieving 93.2% accuracy in the model benchmarking against netResNet50 at 93.8% and netGoogLeNet at 90.4%. Conclusions This study demonstrates the efficacy of using a MATLAB-retrained AlexNet CNN for detecting glaucoma and diabetic retinopathy. It emphasizes the need for automated early detection tools, proposing CNNs as accessible solutions without replacing existing technologies.
Affiliation(s)
- Isaac Arias-Serrano
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Paolo A. Velásquez-López
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Laura N. Avila-Briones
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Fanny C. Laurido-Mora
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Fernando Villalba-Meneses
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Department of Design and Manufacturing Engineering, University of Zaragoza, Zaragoza, Aragon, 50018, Spain
- Andrés Tirado-Espin
- School of Mathematical and Computational Sciences, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Jonathan Cruz-Varela
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Diego Almeida-Galárraga
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
8
Arias-Serrano I, Velásquez-López PA, Avila-Briones LN, Laurido-Mora FC, Villalba-Meneses F, Tirado-Espin A, Cruz-Varela J, Almeida-Galárraga D. Artificial intelligence based glaucoma and diabetic retinopathy detection using MATLAB-retrained AlexNet convolutional neural network. F1000Res 2024; 12:14. [PMID: 38826575] [PMCID: PMC11143403] [DOI: 10.12688/f1000research.122288.1]
Abstract
BACKGROUND Glaucoma and diabetic retinopathy (DR) are the leading causes of irreversible retinal damage and blindness. Early detection of these diseases through regular screening is especially important to prevent progression. Retinal fundus imaging serves as the principal method for diagnosing glaucoma and DR. Consequently, automated detection of eye diseases represents a significant application of retinal image analysis. Compared with classical diagnostic techniques, image classification by convolutional neural networks (CNN) exhibits potential for effective eye disease detection. METHODS This paper proposes the use of a MATLAB-retrained AlexNet CNN for computerized identification of eye diseases, particularly glaucoma and diabetic retinopathy, from retinal fundus images. The database was acquired through free-access databases and access upon request. A transfer learning technique was employed to retrain the AlexNet CNN for non-disease (Non_D), glaucoma (Sus_G), and diabetic retinopathy (Sus_R) classification. Moreover, model benchmarking was conducted using ResNet50 and GoogLeNet architectures. A Grad-CAM analysis is also incorporated for each eye condition examined. RESULTS Metrics for validation accuracy, false positives, false negatives, precision, and recall were reported. Validation accuracies for the NetTransfer (I-V) and netAlexNet ranged from 89.7% to 94.3%, demonstrating varied effectiveness in identifying Non_D, Sus_G, and Sus_R categories, with netAlexNet achieving 93.2% accuracy in the model benchmarking against netResNet50 at 93.8% and netGoogLeNet at 90.4%. CONCLUSIONS This study demonstrates the efficacy of using a MATLAB-retrained AlexNet CNN for detecting glaucoma and diabetic retinopathy. It emphasizes the need for automated early detection tools, proposing CNNs as accessible solutions without replacing existing technologies.
Affiliation(s)
- Isaac Arias-Serrano
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Paolo A. Velásquez-López
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Laura N. Avila-Briones
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Fanny C. Laurido-Mora
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Fernando Villalba-Meneses
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Department of Design and Manufacturing Engineering, University of Zaragoza, Zaragoza, Aragon, 50018, Spain
- Andrés Tirado-Espin
- School of Mathematical and Computational Sciences, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Jonathan Cruz-Varela
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
- Diego Almeida-Galárraga
- School of Biological Sciences and Engineering, Universidad Yachay Tech, Urcuquí, Imbabura, 100119, Ecuador
9
Mackenbrock LHB, Labuz G, Baur ID, Yildirim TM, Auffarth GU, Khoramnia R. Cataract Classification Systems: A Review. Klin Monbl Augenheilkd 2024; 241:75-83. [PMID: 38242135] [DOI: 10.1055/a-2003-2369]
Abstract
Cataract is among the leading causes of visual impairment worldwide. Innovations in treatment have drastically improved patient outcomes, but to be properly implemented, it is necessary to have the right diagnostic tools. This review explores the cataract grading systems developed by researchers in recent decades and provides insight into both merits and limitations. To this day, the gold standard for cataract classification is the Lens Opacity Classification System III. Different cataract features are graded according to standard photographs during slit lamp examination. Although widely used in research, its clinical application is rare, and it is limited by its subjective nature. Meanwhile, recent advancements in imaging technology, notably Scheimpflug imaging and optical coherence tomography, have opened the possibility of objective assessment of lens structure. With the use of automatic lens anatomy detection software, researchers demonstrated a good correlation to functional and surgical metrics such as visual acuity, phacoemulsification energy, and surgical time. The development of deep learning networks has further increased the capability of these grading systems by improving interpretability and increasing robustness when applied to norm-deviating cases. These classification systems, which can be used for both screening and preoperative diagnostics, are of value for targeted prospective studies, but still require implementation and validation in everyday clinical practice.
Affiliation(s)
- Lars H B Mackenbrock
- Department of Ophthalmology, Heidelberg University Hospital, Heidelberg, Germany
- Grzegorz Labuz
- Department of Ophthalmology, Heidelberg University Hospital, Heidelberg, Germany
- Isabella D Baur
- Department of Ophthalmology, Heidelberg University Hospital, Heidelberg, Germany
- Timur M Yildirim
- Department of Ophthalmology, Heidelberg University Hospital, Heidelberg, Germany
- Gerd U Auffarth
- Department of Ophthalmology, Heidelberg University Hospital, Heidelberg, Germany
- Ramin Khoramnia
- Department of Ophthalmology, Heidelberg University Hospital, Heidelberg, Germany
10
Elsawy A, Keenan TDL, Chen Q, Thavikulwat AT, Bhandari S, Quek TC, Goh JHL, Tham YC, Cheng CY, Chew EY, Lu Z. A deep network DeepOpacityNet for detection of cataracts from color fundus photographs. Commun Med 2023; 3:184. [PMID: 38104223] [PMCID: PMC10725427] [DOI: 10.1038/s43856-023-00410-w]
Abstract
BACKGROUND Cataract diagnosis typically requires in-person evaluation by an ophthalmologist. However, color fundus photography (CFP) is widely performed outside ophthalmology clinics, which could be exploited to increase the accessibility of cataract screening through automated detection. METHODS DeepOpacityNet was developed to detect cataracts from CFP and highlight the CFP features most relevant to cataracts. We used 17,514 CFPs from 2573 participants curated from the Age-Related Eye Diseases Study 2 (AREDS2) dataset, of which 8681 CFPs were labeled with cataracts. The ground truth labels were transferred from slit-lamp examination of nuclear cataracts and reading center grading of anterior segment photographs for cortical and posterior subcapsular cataracts. DeepOpacityNet was internally validated on an independent test set (20%), compared to three ophthalmologists on a subset of the test set (100 CFPs), externally validated on three datasets obtained from the Singapore Epidemiology of Eye Diseases study (SEED), and visualized to highlight important features. RESULTS Internally, DeepOpacityNet achieved a superior accuracy of 0.66 (95% confidence interval (CI): 0.64-0.68) and an area under the curve (AUC) of 0.72 (95% CI: 0.70-0.74), compared to that of other state-of-the-art methods. DeepOpacityNet achieved an accuracy of 0.75, compared to an accuracy of 0.67 for the best-performing ophthalmologist. Externally, DeepOpacityNet achieved AUC scores of 0.86, 0.88, and 0.89 on the SEED datasets, demonstrating the generalizability of our proposed method. Visualizations show that the visibility of blood vessels could be characteristic of cataract absence, while blurred regions could be characteristic of cataract presence. CONCLUSIONS DeepOpacityNet could detect cataracts from CFPs in AREDS2 with performance superior to that of ophthalmologists and generate interpretable results. The code and models are available at https://github.com/ncbi/DeepOpacityNet (https://doi.org/10.5281/zenodo.10127002).
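The 95% confidence intervals quoted above can be computed in several ways; as one standard choice (not necessarily the authors' method), a Wilson score interval for an accuracy estimate looks like this, with invented counts:

```python
import math

def wilson_ci(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    # Wilson score interval for a binomial proportion such as accuracy;
    # z = 1.96 corresponds to a 95% confidence level.
    p = correct / total
    denom = 1 + z * z / total
    center = (p + z * z / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total)) / denom
    return center - half, center + half

# Hypothetical: 66 of 100 test images classified correctly.
lo, hi = wilson_ci(correct=66, total=100)
```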
Affiliation(s)
- Amr Elsawy
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
- Tiarnan D L Keenan
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
- Qingyu Chen
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
- Alisa T Thavikulwat
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Sanjeeb Bhandari
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Ten Cheer Quek
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Jocelyn Hui Lin Goh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
- Centre for Innovation and Precision Eye Health & Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Zhiyong Lu
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, 20894, USA
11
Wan C, Hua R, Li K, Hong X, Fang D, Yang W. Automatic Diagnosis of Different Types of Retinal Vein Occlusion Based on Fundus Images. Int J Intell Syst 2023; 2023:1-13. [DOI: 10.1155/2023/1587410]
Abstract
Retinal vein occlusion (RVO) is the second most common cause of blindness after diabetic retinopathy. Manual screening of fundus images to detect RVO is time-consuming. Deep learning techniques have been used to screen for RVO owing to their outstanding performance in many applications. However, unlike other images, medical images contain smaller lesions, which require a more elaborate approach. To provide patients with an accurate diagnosis, followed by timely and effective treatment, we developed an intelligent method for automatic RVO screening on fundus images. Like a convolutional neural network, the Swin Transformer learns a hierarchy of low- to high-level features; however, it extracts features from fundus images through attention modules, which emphasize the interrelationships among features. The model is more general, relies less on the data itself, and attends not only to local information but diffuses it from local to global scales. To suppress overfitting, we adopt label smoothing, a regularization strategy that adds noise to the one-hot targets to reduce the weight of the true class when computing the loss function. Five-fold cross-validation on our own datasets for model selection indicates that the Swin Transformer performs best. The accuracy of classifying all datasets is 98.75 ± 0.000, and the accuracies of the proposed method for identifying MRVO, CRVO, BRVO, and normal are 94.49 ± 0.094, 99.98 ± 0.015, 98.88 ± 0.08, and 99.42 ± 0.012, respectively. The method will be useful for diagnosing and grading RVO from fundus images and has the potential to support further diagnosis and treatment.
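The label-smoothing regularization described above can be written in a few lines; a minimal sketch with hypothetical class indices for the four-way (MRVO/CRVO/BRVO/normal) task:

```python
import torch
import torch.nn.functional as F

def smooth_labels(targets: torch.Tensor, num_classes: int, eps: float = 0.1) -> torch.Tensor:
    # Shrink the true-class weight to (1 - eps) and spread eps uniformly over
    # all classes, adding "noise" to the one-hot targets as the paper describes.
    one_hot = F.one_hot(targets, num_classes).float()
    return one_hot * (1.0 - eps) + eps / num_classes

targets = torch.tensor([1, 3])           # hypothetical class indices
soft = smooth_labels(targets, num_classes=4)
# True class weight becomes 0.9 + 0.1/4 = 0.925; every other class gets 0.025.
```

Recent PyTorch releases also expose this directly via `nn.CrossEntropyLoss(label_smoothing=0.1)`, avoiding the explicit soft targets.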
Affiliation(s)
- Cheng Wan
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, China
| | - Rongrong Hua
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, China
| | - Kunke Li
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, China
| | - Xiangqian Hong
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, China
| | - Dong Fang
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, China
| | - Weihua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, China
| |
|
12
|
Xie H, Li Z, Wu C, Zhao Y, Lin C, Wang Z, Wang C, Gu Q, Wang M, Zheng Q, Jiang J, Chen W. Deep learning for detecting visually impaired cataracts using fundus images. Front Cell Dev Biol 2023; 11:1197239. [PMID: 37576595 PMCID: PMC10416247 DOI: 10.3389/fcell.2023.1197239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Accepted: 07/20/2023] [Indexed: 08/15/2023] Open
Abstract
Purpose: To develop a visual function-based deep learning system (DLS) using fundus images to screen for visually impaired cataracts. Materials and methods: A total of 8,395 fundus images (5,245 subjects) with corresponding visual function parameters collected from three clinical centers were used to develop and evaluate a DLS for classifying non-cataracts, mild cataracts, and visually impaired cataracts. Three deep learning algorithms (DenseNet121, Inception V3, and ResNet50) were leveraged to train models to obtain the best one for the system. The performance of the system was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Results: The AUCs of the best algorithm (DenseNet121) on the internal test dataset and the two external test datasets were 0.998 (95% CI, 0.996-0.999) to 0.999 (95% CI, 0.998-1.000), 0.938 (95% CI, 0.924-0.951) to 0.966 (95% CI, 0.946-0.983), and 0.937 (95% CI, 0.918-0.953) to 0.977 (95% CI, 0.962-0.989), respectively. In the comparison between the system and cataract specialists, better performance was observed in the system for detecting visually impaired cataracts (p < 0.05). Conclusion: Our study shows the potential of a function-focused screening tool to identify visually impaired cataracts from fundus images, enabling timely patient referral to tertiary eye hospitals.
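The AUC used to evaluate the DLS is equivalent to the Mann-Whitney rank statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal numpy sketch (function name ours):

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the probability that a positive case outranks a negative one,
    counting ties as 0.5 -- equivalent to the area under the ROC curve."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)
```

This pairwise formulation avoids constructing the ROC curve explicitly and is convenient for quick sanity checks of reported AUCs.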
Affiliation(s)
- He Xie
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
| | - Chengchao Wu
- School of Electronic Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
| | - Yitian Zhao
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Chengmin Lin
- Department of Ophthalmology, Wenzhou Hospital of Integrated Traditional Chinese and Western Medicine, Wenzhou, China
| | - Zhouqian Wang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Chenxi Wang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Qinyi Gu
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Minye Wang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Qinxiang Zheng
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
| | - Jiewei Jiang
- School of Electronic Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
| | - Wei Chen
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
| |
|
13
|
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. [PMID: 37385253 PMCID: PMC10394169 DOI: 10.1016/j.xcrm.2023.101095] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 04/17/2023] [Accepted: 06/07/2023] [Indexed: 07/01/2023]
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with or even better than experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, which calls their real-world value into question. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of the AI systems, and discusses the strategies that may pave the way to the clinical translation of these systems.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
| | - Lei Wang
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Xuefang Wu
- Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
| | - Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
| | - Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - He Xie
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Hongjian Zhou
- Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
| | - Shanjun Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - Yi Shao
- Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China.
| | - Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
| |
|
14
|
Wang S, He J, He X, Liu Y, Lin X, Xu C, Zhu L, Kang J, Wang Y, Li Y, Guo S, Zhang Y, Luo Z, Liu Z. AES-CSFS: an automatic evaluation system for corneal sodium fluorescein staining based on deep learning. Ther Adv Chronic Dis 2023; 14:20406223221148266. [PMID: 36798527 PMCID: PMC9926379 DOI: 10.1177/20406223221148266] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Accepted: 12/13/2022] [Indexed: 02/15/2023] Open
Abstract
Background Corneal fluorescein sodium staining is a valuable diagnostic method for various ocular surface diseases. However, the examination results are highly dependent on the subjective experience of ophthalmologists. Objectives To develop an artificial intelligence system based on deep learning that provides an accurate quantitative assessment of the sodium fluorescein staining score and the size of corneal epithelial patchy defects. Design A prospective study. Methods We propose an artificial intelligence system for automatically evaluating corneal staining scores and accurately measuring patchy corneal epithelial defects based on corneal fluorescein sodium staining images. The design incorporates two segmentation models and a classification model to assess the stained images. We also compare the system's evaluations with those of ophthalmologists of varying expertise. Results For segmentation of the cornea boundary and the corneal epithelial patchy defect area, our method achieves a dice similarity coefficient (DSC) of 0.98/0.97 and a Hausdorff distance (HD) of 3.60/8.39, respectively, against the manually labeled gold standard, significantly outperforming four leading algorithms (Unet, Unet++, Swin-Unet, and TransUnet). For the classification task, our algorithm achieves the best accuracy, recall, and F1-score, at 91.2%, 78.6%, and 79.2%, respectively, exceeding seven different approaches (Inception, ShuffleNet, Xception, EfficientNet_B7, DenseNet, ResNet, and VIT). In addition, three ophthalmologists were selected to rate corneal staining images; the artificial intelligence system significantly outperformed the junior doctors.
Conclusion The system offers a promising automated assessment method for corneal fluorescein staining, decreasing incorrect evaluations caused by ophthalmologists' subjective variance and limited knowledge.
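The DSC and HD used to score the segmentations above can be computed from binary masks as follows; this is a brute-force illustration (fine for small masks), not the authors' implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), 1.0 for a perfect match."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixels of two
    masks: the worst-case nearest-neighbor distance in either direction."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice rewards overlap volume, while Hausdorff penalizes the single worst boundary error, which is why both are commonly reported together.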
Affiliation(s)
| | | | | | | | - Xiang Lin
- Department of Ophthalmology, Xiang’an Hospital of Xiamen University, Xiamen University, Xiamen, China
| | - Changsheng Xu
- Institute of Artificial Intelligence, Xiamen University, Xiamen, China
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Linfangzi Zhu
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Jie Kang
- Department of Ophthalmology, Xiang’an Hospital of Xiamen University, Xiamen University, Xiamen, China
| | - Yuqian Wang
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Yong Li
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Shujia Guo
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Yunuo Zhang
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Zhiming Luo
- Institute of Artificial Intelligence, Xiamen University, Xiamen, China
- School of Informatics, Xiamen University, 422 Siming South Road, Xiamen 361005, Fujian, China
| | | |
|
15
|
Wang K, Xu C, Li G, Zhang Y, Zheng Y, Sun C. Combining convolutional neural networks and self-attention for fundus diseases identification. Sci Rep 2023; 13:76. [PMID: 36593268 PMCID: PMC9807560 DOI: 10.1038/s41598-022-27358-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Accepted: 12/30/2022] [Indexed: 01/03/2023] Open
Abstract
Early detection of lesions is of great significance for treating fundus diseases. Fundus photography is an effective and convenient screening technique by which common fundus diseases can be detected. In this study, we use color fundus images to distinguish among multiple fundus diseases. Existing research on fundus disease classification has achieved some success through deep learning, but models built only on deep convolutional neural network (CNN) architectures have limited global modeling ability and leave room for improvement in evaluation metrics, and the simultaneous diagnosis of multiple fundus diseases still faces great challenges. Therefore, given that a self-attention (SA) model with a global receptive field offers robust global-level feature modeling, we propose MBSaNet, a multistage fundus image classification model that combines a CNN with the SA mechanism. The convolution blocks extract local information from the fundus image, and the SA module further captures complex relationships between different spatial positions, thereby directly detecting one or more fundus diseases in a retinal fundus image. In the initial feature extraction stage, we propose a multiscale feature fusion stem, which uses convolutional kernels of different scales to extract low-level features of the input image and fuses them to improve recognition accuracy. Training and testing were performed on the ODIR-5k dataset. The experimental results show that MBSaNet achieves state-of-the-art performance with fewer parameters. The wide range of diseases and different fundus image collection conditions confirm the applicability of MBSaNet.
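The global modeling step the abstract attributes to self-attention can be sketched as a single attention pass over flattened CNN features; projection matrices and shapes here are illustrative, not the MBSaNet architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, wq, wk, wv):
    """Single-head self-attention over a flattened CNN feature map.
    feat: (H*W, C) spatial tokens; wq/wk/wv: (C, D) projection matrices.
    Every output token is a weighted mix of ALL tokens -- the global
    receptive field a plain convolution lacks."""
    q, k, v = feat @ wq, feat @ wk, feat @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))   # (H*W, H*W) weights
    return attn @ v
```

In a hybrid model like the one described, convolution blocks would produce `feat` from local neighborhoods, and this attention step would then relate distant spatial positions.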
Affiliation(s)
- Keya Wang
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
| | - Chuanyun Xu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China; College of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China
| | - Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
| | - Yang Zhang
- College of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China
| | - Yu Zheng
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
| | - Chengjie Sun
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
| |
|
16
|
An empirical study of preprocessing techniques with convolutional neural networks for accurate detection of chronic ocular diseases using fundus images. APPL INTELL 2023; 53:1548-1566. [PMID: 35528131 PMCID: PMC9059700 DOI: 10.1007/s10489-022-03490-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/08/2022] [Indexed: 01/07/2023]
Abstract
Chronic Ocular Diseases (COD) such as myopia, diabetic retinopathy, age-related macular degeneration, glaucoma, and cataract can affect the eye and may even lead to severe vision impairment or blindness. According to a recent World Health Organization (WHO) report on vision, at least 2.2 billion individuals worldwide suffer from vision impairment. Often, overt signs indicative of COD do not manifest until the disease has progressed to an advanced stage. However, if COD is detected early, vision impairment can be avoided through early intervention and cost-effective treatment. Ophthalmologists are trained to detect COD by examining certain minute changes in the retina, such as microaneurysms, macular edema, hemorrhages, and alterations in the blood vessels. The range of eye conditions is diverse, and each of these conditions requires a unique patient-specific treatment. Convolutional neural networks (CNNs) have demonstrated significant potential in multi-disciplinary fields, including the detection of a variety of eye diseases. In this study, we combined several preprocessing approaches with convolutional neural networks to accurately detect COD in eye fundus images. To the best of our knowledge, this is the first work that provides a qualitative analysis of preprocessing approaches for COD classification using CNN models. Experimental results demonstrate that CNNs trained on region-of-interest segmented images outperform models trained on the original input images by a substantial margin. Additionally, an ensemble of three preprocessing techniques outperformed other state-of-the-art approaches by 30% and 3% in terms of Kappa and F1 scores, respectively. The developed prototype has been extensively tested and can be evaluated on more comprehensive COD datasets for deployment in the clinical setup.
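The Kappa score used above measures agreement beyond chance between predicted and true labels. A minimal sketch, assuming integer class labels (names ours):

```python
import numpy as np

def cohens_kappa(y_true, y_pred, num_classes):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement).
    1.0 is perfect agreement, 0 is chance level, negative is worse than chance."""
    cm = np.zeros((num_classes, num_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2    # agreement expected by chance
    return (po - pe) / (1 - pe)
```

Kappa is preferred over raw accuracy for imbalanced multi-class problems like COD screening, since always predicting the majority class scores near zero rather than deceptively high.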
|
17
|
Wan C, Fang J, Hua X, Chen L, Zhang S, Yang W. Automated detection of myopic maculopathy using five-category models based on vision outlooker for visual recognition. Front Comput Neurosci 2023; 17:1169464. [PMID: 37152298 PMCID: PMC10157024 DOI: 10.3389/fncom.2023.1169464] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2023] [Accepted: 04/06/2023] [Indexed: 05/09/2023] Open
Abstract
Purpose To propose a five-category model for the automatic detection of myopic macular lesions, helping grassroots medical institutions conduct preliminary screening from a limited number of color fundus images. Methods First, 1,750 fundus images of non-myopic retinal lesions and four categories of pathological myopic maculopathy were collected, graded, and labeled. Subsequently, three five-classification models based on Vision Outlooker for Visual Recognition (VOLO), EfficientNetV2, and ResNet50 for detecting myopic maculopathy were trained with data-augmented images, and the diagnostic results of the trained models were compared and analyzed. The main evaluation metrics were sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), area under the curve (AUC), kappa, accuracy, and the receiver operating characteristic (ROC) curve. Results The diagnostic accuracy of the VOLO-D2 model was 96.60% with a kappa value of 95.60%. All indicators for the diagnosis of myopia-free macular degeneration were 100%. The sensitivity, NPV, specificity, and PPV for the diagnosis of leopard fundus were 96.43%, 98.33%, 100%, and 100%, respectively. The sensitivity, specificity, PPV, and NPV for the diagnosis of diffuse chorioretinal atrophy were 96.88%, 98.59%, 93.94%, and 99.29%, respectively. The sensitivity, specificity, PPV, and NPV for the diagnosis of patchy chorioretinal atrophy were 92.31%, 99.26%, 97.30%, and 97.81%, respectively. The sensitivity, specificity, PPV, and NPV for the diagnosis of macular atrophy were 100%, 98.10%, 84.21%, and 100%, respectively. Conclusion The VOLO-D2 model accurately identified myopia-free macular lesions and four pathological myopia-related macular lesions with high sensitivity and specificity. It can be used to screen for pathological myopic macular lesions and can help ophthalmologists and primary medical institutions complete the initial screening diagnosis of patients.
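The per-class sensitivity, specificity, PPV, and NPV reported above follow from a one-vs-rest reading of the multiclass confusion matrix; a hedged sketch (function name ours):

```python
import numpy as np

def per_class_metrics(cm, k):
    """One-vs-rest sensitivity, specificity, PPV, NPV for class k of a
    multiclass confusion matrix cm (rows: true class, cols: predicted)."""
    tp = cm[k, k]
    fn = cm[k].sum() - tp          # class-k cases predicted as something else
    fp = cm[:, k].sum() - tp       # other classes predicted as k
    tn = cm.sum() - tp - fn - fp
    return dict(sensitivity=tp / (tp + fn), specificity=tn / (tn + fp),
                ppv=tp / (tp + fp), npv=tn / (tn + fn))
```

Treating each lesion category in turn as "positive" is what lets a five-class grader report four familiar binary-screening statistics per category.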
Affiliation(s)
- Cheng Wan
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
| | - Jiyi Fang
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
| | - Xiao Hua
- Nanjing Star-mile Technology Co., Ltd., Nanjing, China
| | - Lu Chen
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Shenzhen Eye Institute, Shenzhen, China
| | - Shaochong Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Shenzhen Eye Institute, Shenzhen, China
| | - Weihua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Shenzhen Eye Institute, Shenzhen, China
| |
|
18
|
Zhang X, Xiao Z, Li X, Wu X, Sun H, Yuan J, Higashita R, Liu J. Mixed pyramid attention network for nuclear cataract classification based on anterior segment OCT images. Health Inf Sci Syst 2022; 10:3. [PMID: 35401971 PMCID: PMC8956780 DOI: 10.1007/s13755-022-00170-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Accepted: 03/04/2022] [Indexed: 11/25/2022] Open
Abstract
Nuclear cataract (NC) is a leading cause of blindness and vision impairment globally. NC patients can improve their vision through cataract surgery or slow opacity development with early intervention. Anterior segment optical coherence tomography (AS-OCT) is an emerging ophthalmic imaging modality that can clearly capture the whole lens structure. Recently, clinicians have increasingly studied the correlation between NC severity levels and clinical features of the nucleus region on AS-OCT images, and the results suggest the correlation is strong. However, automatic NC classification based on AS-OCT images has rarely been studied. This paper presents a novel mixed pyramid attention network (MPANet) to automatically classify NC severity levels on AS-OCT images. In the MPANet, we design a novel mixed pyramid attention (MPA) block, which first applies the group convolution method to enhance the feature representation difference of feature maps and then constructs a mixed pyramid pooling structure to extract local-global feature representations and different feature representation types simultaneously. We conduct extensive experiments on a clinical AS-OCT image dataset and a public OCT dataset to evaluate the effectiveness of our method. The results demonstrate that our method achieves competitive classification performance in comparison with state-of-the-art methods and previous works. Moreover, we use the class activation mapping (CAM) technique to improve the interpretability of our method's classification results.
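The pyramid pooling idea behind the MPA block (pooling at several grid sizes, then concatenating local and global descriptors) can be illustrated with a much-simplified average-pooling stand-in; this is a sketch of the general technique, not the paper's block:

```python
import numpy as np

def pyramid_pool(feat, levels=(1, 2, 4)):
    """Average-pool a square (H, H) feature map at several grid sizes and
    concatenate the results: level 1 gives one global average, level 4 gives
    16 local averages, so the output mixes global and local statistics.
    Assumes H is divisible by every level."""
    H = feat.shape[0]
    out = []
    for g in levels:
        s = H // g
        pooled = feat.reshape(g, s, g, s).mean(axis=(1, 3))  # (g, g) grid
        out.append(pooled.ravel())
    return np.concatenate(out)
```

A real MPA block would operate per channel and combine this with attention weights, but the local-to-global pooling pyramid is the core structural idea.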
Affiliation(s)
- Xiaoqing Zhang
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, 518055 China
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055 China
| | - Zunjie Xiao
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055 China
| | - Xiaoling Li
- School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325035 China
| | - Xiao Wu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055 China
| | - Hanxi Sun
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055 China
| | - Jin Yuan
- State Key Laboratory of Ophthalmology, Sun Yat-sen University, Guangzhou, 510060 China
| | - Risa Higashita
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055 China
- Present Address: Tomey Corporation, Nagoya, Japan
| | - Jiang Liu
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, 518055 China
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055 China
- School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325035 China
- Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Southern University of Science and Technology, Shenzhen, 518055 China
| |
|
19
|
Wu X, Xu D, Ma T, Li ZH, Ye Z, Wang F, Gao XY, Wang B, Chen YZ, Wang ZH, Chen JL, Hu YT, Ge ZY, Wang DJ, Zeng Q. Artificial Intelligence Model for Antiinterference Cataract Automatic Diagnosis: A Diagnostic Accuracy Study. Front Cell Dev Biol 2022; 10:906042. [PMID: 35938155 PMCID: PMC9355278 DOI: 10.3389/fcell.2022.906042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 06/21/2022] [Indexed: 11/23/2022] Open
Abstract
Background: Cataract is the leading cause of blindness worldwide. To achieve large-scale cataract screening with remarkable performance, several studies have applied artificial intelligence (AI) to cataract detection based on fundus images. However, the fundus images they used were captured under normal optical conditions, which is impractical given the poor-quality fundus images produced by inappropriate optical conditions in actual scenarios. Furthermore, these poor-quality images are easily mistaken for cataracts because both show fuzzy imaging characteristics, which may degrade the performance of cataract detection. Therefore, we aimed to develop and validate an antiinterference AI model for rapid and efficient diagnosis based on fundus images. Materials and Methods: The datasets (including both cataract and noncataract labels) were derived from the Chinese PLA General Hospital. The antiinterference AI model consists of two AI submodules: a quality recognition model for cataract labeling and a convolutional-neural-network-based model for cataract classification. The quality recognition model distinguishes poor-quality images from normal-quality images and generates pseudo labels related to image quality for noncataract images. Through this, the original binary-class label (cataract and noncataract) was adjusted to three categories (cataract, noncataract with normal-quality images, and noncataract with poor-quality images), which guides the model to distinguish cataract from suspected-cataract fundus images. In the cataract classification stage, the convolutional-neural-network-based model classifies cataracts based on the labels from the previous stage. The performance of the model was internally validated and externally tested in real-world settings; the evaluation indicators included area under the receiver operating curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE).
Results: In the internal and external validation, the antiinterference AI model showed robust performance in cataract diagnosis (three classifications with AUCs >91%, ACCs >84%, SENs >71%, and SPEs >89%). Compared with the model trained on the binary-class label, the antiinterference cataract model improved its performance by 10%. Conclusion: We propose an efficient antiinterference AI model for cataract diagnosis that achieves accurate cataract screening even with the interference of poor-quality images and can help the government formulate a more accurate aid policy.
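The label-expansion step the abstract describes, where a quality model's verdict splits the noncataract class so that blurry-but-healthy images stop being confused with cataract, can be sketched as a tiny function; the class indices and names here are ours, not the paper's:

```python
def expand_labels(binary_label, is_poor_quality):
    """Turn a binary cataract label plus a quality-model verdict into the
    three-category training label: 0 = cataract, 1 = noncataract with
    normal-quality image, 2 = noncataract with poor-quality image."""
    if binary_label == "cataract":
        return 0
    return 2 if is_poor_quality else 1
```

The downstream classifier is then trained on these three categories, so the fuzziness cue is explained by class 2 rather than mislearned as a cataract feature.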
Affiliation(s)
- Xing Wu
- Senior Department of Ophthalmology, The Third Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Di Xu
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Tong Ma
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Zhao Hui Li
- Senior Department of Ophthalmology, The Third Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Zi Ye
- Senior Department of Ophthalmology, The Third Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Fei Wang
- Health Management Institute, The Second Medical Center & National Clinical Research Center for Geriatric Diseases, Chinese PLA General Hospital, Beijing, China
| | - Xiang Yang Gao
- Health Management Institute, The Second Medical Center & National Clinical Research Center for Geriatric Diseases, Chinese PLA General Hospital, Beijing, China
| | - Bin Wang
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | | | - Zhao Hui Wang
- IKang Guobin Healthcare Group Co., Ltd., Beijing, China
| | - Ji Li Chen
- Department of Ophthalmology, Shanghai Shibei Hospital of Jing’an District, Shanghai, China
| | - Yun Tao Hu
- Department of Ophthalmology, Beijing Tsinghua Changgung Hospital, Beijing, China
| | - Zong Yuan Ge
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Da Jiang Wang
- Senior Department of Ophthalmology, The Third Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Qiang Zeng
- Health Management Institute, The Second Medical Center & National Clinical Research Center for Geriatric Diseases, Chinese PLA General Hospital, Beijing, China
| |
|
20
|
Deng Z, Cai Y, Chen L, Gong Z, Bao Q, Yao X, Fang D, Yang W, Zhang S, Ma L. RFormer: Transformer-Based Generative Adversarial Network for Real Fundus Image Restoration on a New Clinical Benchmark. IEEE J Biomed Health Inform 2022; 26:4645-4655. [PMID: 35767498 DOI: 10.1109/jbhi.2022.3187103] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Ophthalmologists use fundus images to screen for and diagnose eye diseases. However, differences in equipment and ophthalmologists introduce large variations in the quality of fundus images. Low-quality (LQ) degraded fundus images easily lead to uncertainty in clinical screening and generally increase the risk of misdiagnosis. Thus, real fundus image restoration is worth studying. Unfortunately, no real clinical benchmark has been established for this task so far. In this paper, we investigate the real clinical fundus image restoration problem. First, we establish a clinical dataset, Real Fundus (RF), comprising 120 low- and high-quality (HQ) image pairs. We then propose a novel Transformer-based Generative Adversarial Network (RFormer) to restore the real degradation of clinical fundus images. The key component of our network is the Window-based Self-Attention Block (WSAB), which captures non-local self-similarity and long-range dependencies. To produce more visually pleasing results, a Transformer-based discriminator is introduced. Extensive experiments on our clinical benchmark show that the proposed RFormer significantly outperforms state-of-the-art (SOTA) methods. In addition, experiments on downstream tasks such as vessel segmentation and optic disc/cup detection demonstrate that RFormer benefits clinical fundus image analysis and applications. The dataset, code, and models will be made publicly available at https://github.com/dengzhuo-AI/Real-Fundus.
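Window-based self-attention of the kind the WSAB uses first partitions the feature map into non-overlapping windows and attends within each one, keeping cost linear in image size. A numpy sketch of the partition step (shapes illustrative; H and W assumed divisible by the window size):

```python
import numpy as np

def window_partition(x, win):
    """Split an (H, W, C) feature map into non-overlapping win x win windows,
    returning (num_windows, win*win, C) token groups -- the units over which
    window-based self-attention is computed."""
    H, W, C = x.shape
    x = x.reshape(H // win, win, W // win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win * win, C)
```

Each returned group of `win*win` tokens would then pass through an ordinary attention layer; shifting the window grid between layers (as in Swin-style models) lets information cross window boundaries.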
|
21
|
Ahn H, Jun I, Seo KY, Kim EK, Kim TI. Artificial Intelligence for the Estimation of Visual Acuity Using Multi-Source Anterior Segment Optical Coherence Tomographic Images in Senile Cataract. Front Med (Lausanne) 2022; 9:871382. [PMID: 35655854 PMCID: PMC9152093 DOI: 10.3389/fmed.2022.871382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 04/04/2022] [Indexed: 12/05/2022] Open
Abstract
Purpose To investigate the performance of an artificial intelligence (AI) model that uses multi-source anterior segment optical coherence tomographic (OCT) images to estimate preoperative best-corrected visual acuity (BCVA) in patients with senile cataract. Design Retrospective, cross-instrument validation study. Subjects A total of 2,332 anterior segment images obtained using swept-source OCT, optical biometry for intraocular lens calculation, and a femtosecond laser platform in patients with senile cataract and postoperative BCVA ≥ 0.0 logMAR were included in the training/validation dataset. A total of 1,002 images obtained using optical biometry and another femtosecond laser platform in patients who underwent cataract surgery in 2021 were used for the test dataset. Methods AI modeling was based on an ensemble of Inception-v4 and ResNet models. The BCVA training/validation dataset was used for model training, and model performance was evaluated on the test dataset. Absolute error (AE), the difference between true and estimated preoperative BCVA, was analyzed as ≥0.1 logMAR (AE≥0.1) or <0.1 logMAR (AE<0.1), and AE≥0.1 cases were classified into underestimation and overestimation groups based on the logMAR scale. Outcome Measurements Mean absolute error (MAE), root mean square error (RMSE), mean percentage error (MPE), and the correlation coefficient between true and estimated preoperative BCVA. Results The test dataset MAE, RMSE, and MPE were 0.050 ± 0.130 logMAR, 0.140 ± 0.134 logMAR, and 1.3 ± 13.9%, respectively. The correlation coefficient was 0.969 (p < 0.001). The percentage of cases with AE≥0.1 was 8.4%. The incidence of postoperative BCVA > 0.1 was 21.4% in the AE≥0.1 group, of which 88.9% were in the underestimation group. The incidence of vision-impairing disease in the underestimation group was 95.7%.
Preoperative corneal astigmatism and lens thickness were higher, and nuclear cataract was more severe, in the AE≥0.1 group than in the AE<0.1 group (p < 0.001, 0.007, and 0.024, respectively). The longer the axial length and the more severe the cortical/posterior subcapsular opacity, the better the estimated BCVA was relative to the true BCVA. Conclusions The AI model achieved high-level visual acuity estimation in patients with senile cataract. This quantification method encompassed both the visual acuity and the cataract severity captured in OCT images, which are the main indications for cataract surgery, showing its potential to objectively evaluate cataract severity.
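The absolute-error analysis described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the sample logMAR values and the under/overestimation convention (estimated logMAR lower than true means vision estimated as better) are assumptions.

```python
import math

def bcva_error_analysis(true_bcva, est_bcva, threshold=0.1):
    # Absolute-error analysis between true and estimated preoperative
    # BCVA on the logMAR scale, as described in the abstract.
    errors = [e - t for t, e in zip(true_bcva, est_bcva)]
    mae = sum(abs(d) for d in errors) / len(errors)
    rmse = math.sqrt(sum(d * d for d in errors) / len(errors))
    # Cases with AE >= 0.1 logMAR, split by error direction. Convention
    # assumed here: a lower estimated logMAR than the true value means
    # vision was estimated as *better*.
    under = sum(1 for t, e in zip(true_bcva, est_bcva)
                if abs(e - t) >= threshold and e < t)
    over = sum(1 for t, e in zip(true_bcva, est_bcva)
               if abs(e - t) >= threshold and e > t)
    return mae, rmse, under, over

# Illustrative values only (logMAR).
true_va = [0.00, 0.10, 0.30, 0.52, 0.20]
est_va = [0.02, 0.08, 0.10, 0.50, 0.40]
mae, rmse, n_under, n_over = bcva_error_analysis(true_va, est_va)
```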
Collapse
Affiliation(s)
- Hyunmin Ahn
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea
| | - Ikhyun Jun
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea.,Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
| | - Kyoung Yul Seo
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea
| | - Eung Kweon Kim
- Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea.,Saevit Eye Hospital, Goyang, South Korea
| | - Tae-Im Kim
- Department of Ophthalmology, Institute of Vision Research, Yonsei University College of Medicine, Seoul, South Korea.,Corneal Dystrophy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
| |
Collapse
|
22
|
Ou X, Gao L, Quan X, Zhang H, Yang J, Li W. BFENet: A two-stream interaction CNN method for multi-label ophthalmic diseases classification with bilateral fundus images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 219:106739. [PMID: 35344766 DOI: 10.1016/j.cmpb.2022.106739] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 02/23/2022] [Accepted: 03/07/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Early fundus screening and timely treatment of ophthalmic diseases can effectively prevent blindness. Previous studies focus on fundus images of a single eye without utilizing the relevant information shared between the left and right eyes, whereas clinical ophthalmologists usually consult binocular fundus images when diagnosing ocular disease. Moreover, previous works usually target only one ocular disease at a time. Considering the importance of patient-level bilateral eye diagnosis and multi-label ophthalmic disease classification, we propose a bilateral feature enhancement network (BFENet) to address these two problems. METHODS We propose a two-stream interactive CNN architecture for multi-label ophthalmic disease classification with bilateral fundus images. First, we design a feature enhancement module that exploits the interaction between bilateral fundus images to strengthen the extracted features. Specifically, an attention mechanism learns the interdependence between local and global information in the two-stream interactive architecture, reweighting these features and recovering more details. To capture more disease characteristics, we further design a novel multiscale module that enriches the feature maps by superimposing feature information extracted from images of different resolutions through dilated convolution. RESULTS In the off-site set, the Kappa, F1, AUC, and final score are 0.535, 0.892, 0.912, and 0.780, respectively. In the on-site set, they are 0.513, 0.886, 0.903, and 0.767, respectively. Compared with existing methods, BFENet achieves the best classification performance. CONCLUSIONS Comprehensive experiments demonstrate the effectiveness of the proposed model. Moreover, our method can be extended to similar tasks in which the correlation between different images is important.
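As a toy illustration of the bilateral-interaction idea (an assumed simplification, not the BFENet architecture), features from each eye's stream can reweight the fellow stream through a softmax attention vector:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bilateral_enhance(left_feat, right_feat):
    # Each stream's features are reweighted by an attention vector derived
    # from the fellow eye, so bilateral cues sharpen unilateral features.
    return left_feat * softmax(right_feat), right_feat * softmax(left_feat)

left = np.array([1.0, 2.0, 3.0])   # toy left-eye feature vector
right = np.array([3.0, 2.0, 1.0])  # toy right-eye feature vector
left_enh, right_enh = bilateral_enhance(left, right)
```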
Collapse
Affiliation(s)
- Xingyuan Ou
- College of Artificial Intelligence, Nankai University, Tianjin, China
| | - Li Gao
- Ophthalmology, Tianjin Huanhu Hospital, Tianjin, China
| | - Xiongwen Quan
- College of Artificial Intelligence, Nankai University, Tianjin, China.
| | - Han Zhang
- College of Artificial Intelligence, Nankai University, Tianjin, China
| | - Jinglong Yang
- College of Artificial Intelligence, Nankai University, Tianjin, China
| | - Wei Li
- College of Artificial Intelligence, Nankai University, Tianjin, China
| |
Collapse
|
23
|
Zhu S, Lu B, Wang C, Wu M, Zheng B, Jiang Q, Wei R, Cao Q, Yang W. Screening of Common Retinal Diseases Using Six-Category Models Based on EfficientNet. Front Med (Lausanne) 2022; 9:808402. [PMID: 35280876 PMCID: PMC8904395 DOI: 10.3389/fmed.2022.808402] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Accepted: 01/12/2022] [Indexed: 11/21/2022] Open
Abstract
Purpose A six-category model of common retinal diseases is proposed to help primary medical institutions in the preliminary screening of five common retinal diseases. Methods A total of 2,400 fundus images of normal fundus and five common retinal diseases were provided by a cooperative hospital. Two six-category deep learning models, based on EfficientNet-B4 and ResNet50, were trained, and their results were compared with those of a five-category ResNet50 model from our previous study. A total of 1,315 fundus images were used to test the models, and the clinical diagnoses were compared with the diagnoses of the two six-category models. The main evaluation indicators were sensitivity, specificity, F1-score, area under the curve (AUC), 95% confidence interval, kappa, and accuracy, and the receiver operating characteristic curves of the two six-category models were compared. Results The diagnostic accuracy of the EfficientNet-B4 model was 95.59% and its kappa value was 94.61%, indicating high diagnostic consistency. The AUCs for normal fundus and the five retinal diseases were all above 0.95. The sensitivity, specificity, and F1-score were 100, 99.9, and 99.83%, respectively, for normal fundus images; 95.68, 98.61, and 93.09% for RVO; 96.1, 99.6, and 97.37% for high myopia; 97.62, 99.07, and 94.62% for glaucoma; 90.76, 99.16, and 93.3% for DR; and 92.27, 98.5, and 91.51% for MD. Conclusion The EfficientNet-B4 model was used to design a six-category model of common retinal diseases.
It can diagnose normal fundus and five common retinal diseases from fundus images, helping primary doctors screen for common retinal diseases and give suitable suggestions and recommendations. Timely referral can improve the efficiency of eye disease diagnosis in rural areas and avoid delayed treatment.
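The per-class indicators reported above (sensitivity, specificity, F1-score) come from one-vs-rest counts on the multi-class confusion matrix. A minimal sketch, using a hypothetical 3-class matrix for illustration:

```python
def one_vs_rest_metrics(confusion, cls):
    # confusion[i][j]: count of samples with true class i predicted as j.
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    tp = confusion[cls][cls]
    fn = sum(confusion[cls]) - tp
    fp = sum(confusion[i][cls] for i in range(n)) - tp
    tn = total - tp - fn - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# Hypothetical 3-class confusion matrix (rows: true, columns: predicted).
conf = [[8, 1, 1], [0, 9, 1], [1, 0, 9]]
sens, spec, f1 = one_vs_rest_metrics(conf, 0)
```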
Collapse
Affiliation(s)
- Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China.,Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
| | - Bing Lu
- School of Information Engineering, Huzhou University, Huzhou, China
| | - Chenghu Wang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Maonian Wu
- School of Information Engineering, Huzhou University, Huzhou, China.,Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
| | - Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, China.,Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
| | - Qin Jiang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Ruili Wei
- Department of Ophthalmology, Shanghai Changzheng Hospital, Huangpu, China
| | - Qixin Cao
- Huzhou Traditional Chinese Medicine Hospital Affiliated to Zhejiang University of Traditional Chinese Medicine, Huzhou, China
| | - Weihua Yang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| |
Collapse
|
24
|
Gutierrez L, Lim JS, Foo LL, Ng WY, Yip M, Lim GYS, Wong MHY, Fong A, Rosman M, Mehta JS, Lin H, Ting DSJ, Ting DSW. Application of artificial intelligence in cataract management: current and future directions. EYE AND VISION (LONDON, ENGLAND) 2022; 9:3. [PMID: 34996524 PMCID: PMC8739505 DOI: 10.1186/s40662-021-00273-z] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Accepted: 12/07/2021] [Indexed: 02/10/2023]
Abstract
The rise of artificial intelligence (AI) has brought breakthroughs in many areas of medicine. In ophthalmology, AI has delivered robust results in the screening and detection of diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity. Cataract management is another field that can benefit from greater AI application. Cataract is the leading cause of reversible visual impairment with a rising global clinical burden. Improved diagnosis, monitoring, and surgical management are necessary to address this challenge. In addition, patients in large developing countries often suffer from limited access to tertiary care, a problem further exacerbated by the ongoing COVID-19 pandemic. AI, on the other hand, can help transform cataract management by improving automation and efficacy and by overcoming geographical barriers. First, AI can be applied as a telediagnostic platform to screen and diagnose patients with cataract using slit-lamp and fundus photographs, utilizing deep-learning convolutional neural networks (CNNs) to detect and classify referable cataracts appropriately. Second, some of the latest intraocular lens formulas have used AI to enhance prediction accuracy, achieving superior postoperative refractive results compared to traditional formulas. Third, AI can be used to augment cataract surgical skill training by identifying different phases of cataract surgery on video and to optimize operating theater workflows by accurately predicting the duration of surgical procedures. Fourth, some AI CNN models are able to effectively predict the progression of posterior capsule opacification and the eventual need for YAG laser capsulotomy. These advances in AI could transform cataract management and enable delivery of efficient ophthalmic services.
The key challenges include ethical management of data, ensuring data security and privacy, demonstrating clinically acceptable performance, improving the generalizability of AI models across heterogeneous populations, and improving the trust of end-users.
Collapse
Affiliation(s)
| | - Jane Sujuan Lim
- Singapore Eye Research Institute, Singapore, Singapore.,Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
| | - Li Lian Foo
- Singapore Eye Research Institute, Singapore, Singapore.,Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
| | - Wei Yan Ng
- Singapore Eye Research Institute, Singapore, Singapore.,Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
| | - Michelle Yip
- Singapore Eye Research Institute, Singapore, Singapore.,Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
| | | | - Melissa Hsing Yi Wong
- Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
| | - Allan Fong
- Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
| | - Mohamad Rosman
- Singapore Eye Research Institute, Singapore, Singapore.,Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
| | - Jodhbir Singh Mehta
- Singapore Eye Research Institute, Singapore, Singapore.,Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
| | - Haotian Lin
- Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Darren Shu Jeng Ting
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, UK
| | - Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore, Singapore. .,Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore.
| |
Collapse
|
25
|
Zheng B, Wu MN, Zhu SJ, Zhou HX, Hao XL, Fei FQ, Jia Y, Wu J, Yang WH, Pan XP. Attitudes of medical workers in China toward artificial intelligence in ophthalmology: a comparative survey. BMC Health Serv Res 2021; 21:1067. [PMID: 34627239 PMCID: PMC8501607 DOI: 10.1186/s12913-021-07044-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 09/17/2021] [Indexed: 12/20/2022] Open
Abstract
Background In the development of artificial intelligence (AI) in ophthalmology, recognition issues around ophthalmic AI are prominent, but there is a lack of research into people's familiarity with and attitudes toward it. This survey assesses medical workers' and other professional technicians' familiarity with, attitudes toward, and concerns about AI in ophthalmology. Methods This is a cross-sectional study. An electronic questionnaire was designed with the app Questionnaire Star and sent to respondents through WeChat, China's equivalent of Facebook or WhatsApp. Participation was voluntary and anonymous. The questionnaire consisted of four parts: the respondents' background, their basic understanding of AI, their attitudes toward AI, and their concerns about AI. A total of 562 valid questionnaires were returned, and the results were compiled in an Excel 2003 spreadsheet. Results A total of 291 medical workers and 271 other professional technicians completed the questionnaire. About one-third of the respondents understood AI and ophthalmic AI; the proportions who understood ophthalmic AI were about 42.6% among medical workers and 15.6% among other professional technicians. About 66.0% of the respondents thought that AI in ophthalmology would partly replace doctors, and about 59.07% reported a relatively high acceptance level of ophthalmic AI. Among those with experience using AI in ophthalmology (30.6%), over 70% held a fully accepting attitude toward it. The respondents expressed medical ethics concerns about AI in ophthalmology, and almost all of those who understood ophthalmic AI said that there was a need for more study of medical ethics issues in the field.
Conclusions The survey results revealed that medical workers had a higher level of understanding of AI in ophthalmology than other professional technicians, making it necessary to popularize ophthalmic AI education among the latter. Most respondents had no experience with ophthalmic AI but generally had a relatively high acceptance level of it, and there is a need to strengthen research into medical ethics issues.
Collapse
Affiliation(s)
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang, China
| | - Mao-Nian Wu
- School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang, China.,College of Computer and Information, Hehai University, Nanjing 210013, Jiangsu, China
| | - Shao-Jun Zhu
- School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang, China
| | - Hong-Xia Zhou
- School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang, China.,College of Computer and Information, Hehai University, Nanjing 210013, Jiangsu, China
| | - Xiu-Lan Hao
- School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang, China
| | - Fang-Qin Fei
- Department of Endocrinology, First Affiliated Hospital of Huzhou University, Huzhou 313000, Zhejiang, China
| | - Yun Jia
- School of Medicine, Huzhou University, Huzhou 313000, Zhejiang, China
| | - Jian Wu
- Zhejiang University Real Doctor AI Research Center, Hangzhou 310000, Zhejiang, China
| | - Wei-Hua Yang
- Affiliated Eye Hospital of Nanjing Medical University, No.138 Hanzhong Road, Gulou District, Nanjing 210029, Jiangsu, China.
| | - Xue-Ping Pan
- First People's Hospital of Huzhou, Huzhou 313000, Zhejiang, China
| |
Collapse
|
26
|
Tognetto D, Giglio R, Vinciguerra AL, Milan S, Rejdak R, Rejdak M, Zaluska-Ogryzek K, Zweifel S, Toro MD. Artificial intelligence applications and cataract management: A systematic review. Surv Ophthalmol 2021; 67:817-829. [PMID: 34606818 DOI: 10.1016/j.survophthal.2021.09.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2021] [Revised: 09/27/2021] [Accepted: 09/27/2021] [Indexed: 11/26/2022]
Abstract
Artificial intelligence (AI)-based applications exhibit the potential to improve the quality and efficiency of patient care in different fields, including cataract management. A systematic review of the different applications of AI-based software on all aspects of a cataract patient's management, from diagnosis to follow-up, was carried out in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. All selected articles were analyzed to assess the level of evidence according to the Oxford Centre for Evidence-Based Medicine 2011 guidelines and the quality of evidence according to the Grading of Recommendations Assessment, Development and Evaluation system. Of the articles analyzed, 49 met the inclusion criteria. No data synthesis was possible because of the heterogeneity of the available data and the design of the available studies. AI-driven diagnosis appeared comparable to, and in selected cases even exceeded, the accuracy of experienced clinicians in classifying disease, supporting operating room scheduling, and managing intraoperative and postoperative complications. Given the heterogeneity of the data analyzed, however, further randomized controlled trials to assess the efficacy and safety of AI applications in cataract management are highly warranted.
Collapse
Affiliation(s)
- Daniele Tognetto
- Eye Clinic, Department of Medicine, Surgery and Health Sciences, University of Trieste, Trieste, Italy
| | - Rosa Giglio
- Eye Clinic, Department of Medicine, Surgery and Health Sciences, University of Trieste, Trieste, Italy.
| | - Alex Lucia Vinciguerra
- Eye Clinic, Department of Medicine, Surgery and Health Sciences, University of Trieste, Trieste, Italy
| | - Serena Milan
- Eye Clinic, Department of Medicine, Surgery and Health Sciences, University of Trieste, Trieste, Italy
| | - Robert Rejdak
- Chair and Department of General and Pediatric Ophthalmology, Medical University of Lublin, Lublin, Poland
| | | | | | | | - Mario Damiano Toro
- Department of Ophthalmology, University of Zurich, Zurich; Department of Medical Sciences, Collegium Medicum, Cardinal Stefan Wyszyński University, Warsaw, Poland
| |
Collapse
|
27
|
Luo X, Li J, Chen M, Yang X, Li X. Ophthalmic Disease Detection via Deep Learning With a Novel Mixture Loss Function. IEEE J Biomed Health Inform 2021; 25:3332-3339. [PMID: 34033552 DOI: 10.1109/jbhi.2021.3083605] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
With the popularization of computer-aided diagnosis (CAD) technologies, more and more deep learning methods are being developed to facilitate the detection of ophthalmic diseases. In this article, deep learning-based detection of some common eye diseases, including cataract, glaucoma, and age-related macular degeneration (AMD), is analyzed. Generally speaking, morphological change in the retina reveals the presence of eye disease. Existing deep learning methods, however, may not give satisfactory performance on this task, since fundus image datasets usually suffer from class imbalance and outliers; effective and robust deep learning algorithms are therefore expected to further improve detection performance. Here, we propose a deep learning model combined with a novel mixture loss function to automatically detect eye diseases through the analysis of retinal fundus color images. Specifically, given the good generalization and robustness of the focal loss and the correntropy-induced loss in addressing complex datasets with class imbalance and outliers, we use a mixture of these two losses in a deep neural network model to improve the classifier's recognition performance on biomedical data. The proposed model is evaluated on a real-life ophthalmic dataset, and its performance is compared with baseline models using accuracy, sensitivity, specificity, Kappa, and area under the receiver operating characteristic curve (AUC) as evaluation metrics. The experimental results verify the effectiveness and robustness of the proposed algorithm.
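A minimal sketch of such a mixture loss, assuming the standard binary focal loss and a Gaussian-kernel correntropy-induced loss; λ, γ, and σ below are illustrative hyperparameters, not values from the paper:

```python
import math

def focal_loss(p, y, gamma=2.0):
    # Binary focal loss: (1 - p_t)^gamma down-weights easy examples,
    # focusing training on hard or minority-class samples.
    pt = p if y == 1 else 1.0 - p
    return -((1.0 - pt) ** gamma) * math.log(pt)

def c_loss(p, y, sigma=1.0):
    # Correntropy-induced loss: bounded above by 1, so a single outlier
    # cannot dominate the objective.
    e = y - p
    return 1.0 - math.exp(-(e * e) / (2.0 * sigma * sigma))

def mixture_loss(p, y, lam=0.5, gamma=2.0, sigma=1.0):
    # Convex combination of the two losses, as a simple mixing scheme.
    return lam * focal_loss(p, y, gamma) + (1.0 - lam) * c_loss(p, y, sigma)
```

The bounded correntropy term caps the contribution of mislabeled or outlier samples, while the focal term keeps gradient signal on the under-represented class.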
Collapse
|
28
|
Gao S, Gao L, Quan X, Zhang H, Bai H, Kang C. Automatic arteriosclerotic retinopathy grading using four-channel with image merging. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106274. [PMID: 34325376 DOI: 10.1016/j.cmpb.2021.106274] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Accepted: 07/01/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Arteriosclerosis can reflect the severity of hypertension, one of the main diseases threatening human life. However, detection of arteriosclerotic retinopathy involves costly and time-consuming manual assessment. To meet the urgent need for automation, this paper develops a novel arteriosclerotic retinopathy grading method based on a convolutional neural network. METHODS First, we propose a scheme for extracting features against the fundus blood-vessel background, using image merging for contour enhancement: the original image is processed with adaptive thresholding to generate a new contour channel, which is merged with the original three-channel image. We then employ a pre-trained convolutional neural network with transfer learning to speed up training, initializing the contour-channel parameters with Kaiming initialization. Moreover, ArcLoss is applied to increase inter-class differences and intra-class similarity, addressing the high similarity between images of different classes in the dataset. RESULTS The grading accuracy achieved by the proposed method is 65.354%, nearly 4% higher than that of existing methods, and its Kappa for arteriosclerotic retinopathy grading is 0.508. CONCLUSIONS An experimental study on multiple metrics demonstrates the superiority of our method, which will be a useful addition to the toolbox for arteriosclerotic retinopathy grading.
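The contour-channel step can be sketched with a naive adaptive threshold followed by channel stacking. This is a simplified stand-in for the paper's preprocessing (in practice a library routine such as OpenCV's `cv2.adaptiveThreshold` would be used); the block size, constant, and toy image below are assumptions:

```python
import numpy as np

def adaptive_threshold(gray, block=3, c=0.0):
    # Naive adaptive threshold: a pixel becomes foreground (255) when it
    # is darker than the mean of its block x block neighbourhood minus c,
    # tracing dark vessel contours against the fundus background.
    pad = block // 2
    padded = np.pad(gray, pad, mode="edge")
    out = np.zeros_like(gray)
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = 255 if gray[i, j] < local_mean - c else 0
    return out

def merge_contour_channel(rgb, contour):
    # Stack the contour map as a fourth channel next to R, G, B.
    return np.dstack([rgb, contour])

gray = np.array([[10.0, 200.0], [200.0, 200.0]])  # toy image: one dark pixel
mask = adaptive_threshold(gray)
four_channel = merge_contour_channel(np.zeros((2, 2, 3)), mask)
```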
Collapse
Affiliation(s)
- Shuo Gao
- College of Artificial Intelligence, Nankai University, Tianjin, China.
| | - Li Gao
- Ophthalmology, Tianjin Huanhu Hospital, Tianjin, China
| | - Xiongwen Quan
- College of Artificial Intelligence, Nankai University, Tianjin, China.
| | - Han Zhang
- College of Artificial Intelligence, Nankai University, Tianjin, China.
| | - Hang Bai
- College of Artificial Intelligence, Nankai University, Tianjin, China.
| | - Chuanze Kang
- College of Computer Science, Nankai University, Tianjin, China.
| |
Collapse
|
29
|
Pratap T, Kokil P. Deep neural network based robust computer-aided cataract diagnosis system using fundus retinal images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102985] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
30
|
Nuzzi R, Boscia G, Marolo P, Ricardi F. The Impact of Artificial Intelligence and Deep Learning in Eye Diseases: A Review. Front Med (Lausanne) 2021; 8:710329. [PMID: 34527682 PMCID: PMC8437147 DOI: 10.3389/fmed.2021.710329] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Accepted: 07/23/2021] [Indexed: 12/21/2022] Open
Abstract
Artificial intelligence (AI) is a subset of computer science dealing with the development and training of algorithms that try to replicate human intelligence. We report a clinical overview of the basic principles of AI that are fundamental to appreciating its application to ophthalmology practice. Here, we review the most common eye diseases, focusing on some of the potential challenges and limitations emerging with the development and application of this new technology in ophthalmology.
Collapse
Affiliation(s)
- Raffaele Nuzzi
- Ophthalmology Unit, A.O.U. City of Health and Science of Turin, Department of Surgical Sciences, University of Turin, Turin, Italy
| | | | | | | |
Collapse
|
31
|
Research on an Intelligent Lightweight-Assisted Pterygium Diagnosis Model Based on Anterior Segment Images. DISEASE MARKERS 2021; 2021:7651462. [PMID: 34367378 PMCID: PMC8342163 DOI: 10.1155/2021/7651462] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Accepted: 07/16/2021] [Indexed: 12/13/2022]
Abstract
Aims The lack of primary ophthalmologists in China leaves basic-level hospitals unable to diagnose pterygium patients. To solve this problem, an intelligent-assisted lightweight pterygium diagnosis model based on anterior segment images is proposed in this study. Methods Pterygium is a common and frequently occurring disease in ophthalmology, and fibrous tissue hyperplasia is both a diagnostic and a surgical biomarker; the model diagnoses pterygium based on this biomarker. First, a total of 436 anterior segment images were collected; then, two intelligent-assisted lightweight pterygium diagnosis models (MobileNet1 and MobileNet2), based on raw data and augmented data respectively, were trained via transfer learning, and their results were compared with clinical results. Classic models (AlexNet, VGG16, and ResNet18) were also trained and tested, and their results were compared with those of the lightweight models. A total of 188 anterior segment images were used for testing. Sensitivity, specificity, F1-score, accuracy, kappa, area under the receiver operating characteristic curve (AUC), 95% CI, model size, and parameter count are the evaluation indicators in this study. Results The 188 anterior segment images were used to test the five intelligent-assisted pterygium diagnosis models, and the MobileNet2 model performed best overall. For normal anterior segment images, its sensitivity, specificity, F1-score, and AUC were 96.72%, 98.43%, 96.72%, and 0.976, respectively; for observation-period pterygium images, 83.7%, 90.48%, 82.54%, and 0.872; and for surgery-period pterygium images, 84.62%, 93.50%, 85.94%, and 0.891.
The kappa value of the MobileNet2 model was 77.64%, its accuracy was 85.11%, its model size was 13.5 MB, and its parameter count was 4.2 M. Conclusion This study used deep learning methods to propose a three-category intelligent lightweight-assisted pterygium diagnosis model. The developed model can be used for initial screening of patients with pterygium, providing reasonable suggestions and timely referrals. It can help primary doctors improve pterygium diagnosis, confer social benefits, and lay the foundation for future models to be embedded in mobile devices.
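Cohen's kappa, one of the indicators above, measures agreement beyond chance between model output and clinical diagnosis. A minimal sketch with a hypothetical 2x2 confusion matrix:

```python
def cohens_kappa(confusion):
    # Cohen's kappa: observed agreement (po) corrected by the agreement
    # expected from chance alone (pe), given the marginal distributions.
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    po = sum(confusion[i][i] for i in range(k)) / n
    pe = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(k)
    )
    return (po - pe) / (1.0 - pe)

# Hypothetical 2x2 matrix: rows = clinical diagnosis, columns = model.
kappa = cohens_kappa([[20, 5], [10, 15]])
```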
Collapse
|
32
|
Han Y, Li W, Liu M, Wu Z, Zhang F, Liu X, Tao L, Li X, Guo X. Application of an Anomaly Detection Model to Screen for Ocular Diseases Using Color Retinal Fundus Images: Design and Evaluation Study. J Med Internet Res 2021; 23:e27822. [PMID: 34255681 PMCID: PMC8317033 DOI: 10.2196/27822] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Revised: 05/07/2021] [Accepted: 05/24/2021] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND The supervised deep learning approach provides state-of-the-art performance in a variety of fundus image classification tasks, but it is not applicable for screening tasks with numerous or unknown disease types. The unsupervised anomaly detection (AD) approach, which needs only normal samples to develop a model, may be a workable and cost-saving method of screening for ocular diseases. OBJECTIVE This study aimed to develop and evaluate an AD model for detecting ocular diseases on the basis of color fundus images. METHODS A generative adversarial network-based AD method for detecting possible ocular diseases was developed and evaluated using 90,499 retinal fundus images derived from 4 large-scale real-world data sets. Four other independent external test sets were used for external testing and further analysis of the model's performance in detecting 6 common ocular diseases (diabetic retinopathy [DR], glaucoma, cataract, age-related macular degeneration [AMD], hypertensive retinopathy [HR], and myopia), DR of different severity levels, and 36 categories of abnormal fundus images. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity of the model's performance were calculated and presented. RESULTS Our model achieved an AUC of 0.896 with 82.69% sensitivity and 82.63% specificity in detecting abnormal fundus images in the internal test set, and it achieved an AUC of 0.900 with 83.25% sensitivity and 85.19% specificity in 1 external proprietary data set. In the detection of 6 common ocular diseases, the AUCs for DR, glaucoma, cataract, AMD, HR, and myopia were 0.891, 0.916, 0.912, 0.867, 0.895, and 0.961, respectively. Moreover, the AD model had an AUC of 0.868 for detecting any DR, 0.908 for detecting referable DR, and 0.926 for detecting vision-threatening DR.
CONCLUSIONS The AD approach achieved high sensitivity and specificity in detecting ocular diseases on the basis of fundus images, which implies that this model might be an efficient and economical tool for optimizing current clinical pathways for ophthalmologists. Future studies are required to evaluate the practical applicability of the AD approach in ocular disease screening.
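The anomaly detection approach described above scores each fundus image by how poorly a model trained only on normal images can reconstruct it. A minimal numpy sketch of the scoring and thresholding step, with a purely illustrative scoring function and toy data standing in for the trained generator (the actual method also uses a discriminator feature term, omitted here):

```python
import numpy as np

def anomaly_score(image, reconstruction, lam=0.9):
    """Residual score in the AnoGAN style: pixel-level reconstruction
    error, weighted by lam (the discriminator feature term used in the
    full method is dropped in this sketch)."""
    residual = np.mean(np.abs(image - reconstruction))
    return lam * residual

def screen(images, reconstructions, threshold):
    """Flag images whose score exceeds a threshold chosen on a
    validation set, e.g. to balance sensitivity and specificity."""
    scores = [anomaly_score(x, r) for x, r in zip(images, reconstructions)]
    return [s > threshold for s in scores]

# Toy data: a "normal" image is reconstructed well by a model trained
# on normal samples; an "abnormal" one is reconstructed poorly.
rng = np.random.default_rng(0)
normal = rng.random((64, 64))
abnormal = rng.random((64, 64))
recon_normal = normal + rng.normal(0, 0.01, (64, 64))  # near-perfect
recon_abnormal = rng.random((64, 64))                  # unrelated
flags = screen([normal, abnormal], [recon_normal, recon_abnormal],
               threshold=0.1)
print(flags)  # → [False, True]
```

The appeal for screening, as the abstract notes, is that only normal images are needed for training, so the model does not have to enumerate every disease type in advance.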
Affiliation(s)
- Yong Han: Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China; Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
- Weiming Li: Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China; Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
- Mengmeng Liu: Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China; Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
- Zhiyuan Wu: Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China; Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
- Feng Zhang: Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China; Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
- Xiangtong Liu: Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China; Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
- Lixin Tao: Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China; Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
- Xia Li: Department of Mathematics and Statistics, La Trobe University, Melbourne, Australia
- Xiuhua Guo: Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, China; Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing, China
33
Ruamviboonsuk P, Chantra S, Seresirikachorn K, Ruamviboonsuk V, Sangroongruangsri S. Economic Evaluations of Artificial Intelligence in Ophthalmology. Asia Pac J Ophthalmol (Phila) 2021; 10:307-316. [PMID: 34261102 DOI: 10.1097/apo.0000000000000403] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Indexed: 02/06/2023]
Abstract
ABSTRACT Artificial intelligence (AI) is expected to bring significant quality enhancements and cost savings to ophthalmology. Although studies on AI have grown rapidly in recent years, real-world adoption of AI remains rare. One reason may be that data from economic evaluations of AI in health care, which policy makers use when deciding whether to adopt a new technology, have been fragmented and scarce. Most data on the economics of AI in ophthalmology come from diabetic retinopathy (DR) screening. Few studies have classified the costs of AI software, which is regarded as a medical device, as direct medical costs. These costs comprise initial and maintenance costs: the initial costs may include investment in research and development and the validation of different datasets, while the maintenance costs include algorithm upgrades and hardware maintenance in the long run. The price of AI should be balanced against reimbursement, since it may pose significant challenges and barriers to providers. Evidence from cost-effectiveness analyses showed that AI, either standalone or used alongside humans, was more cost-effective than manual DR screening. Notably, economic evaluation of AI for DR screening can serve as a model for applying AI to other ophthalmic diseases.
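Cost-effectiveness comparisons of the kind surveyed here typically reduce to an incremental cost-effectiveness ratio (ICER): the extra cost of the new strategy per extra unit of health effect. A minimal sketch with purely hypothetical figures (not taken from any study in this review):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: incremental cost per
    incremental unit of effect (e.g., per QALY gained or per true
    case of referable DR detected)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical programme figures: AI-assisted DR screening vs. manual
# grading over the same screened population.
cost_ai, detected_ai = 40_000.0, 950        # total cost, cases detected
cost_manual, detected_manual = 55_000.0, 900
value = icer(cost_ai, detected_ai, cost_manual, detected_manual)
print(value)  # → -300.0
```

A negative ICER with higher effectiveness, as in this toy example, means the new strategy is "dominant" (cheaper and more effective); otherwise the ICER is compared against a willingness-to-pay threshold.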
Affiliation(s)
- Paisan Ruamviboonsuk: Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
- Somporn Chantra: Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
- Kasem Seresirikachorn: Department of Ophthalmology, Rajavithi Hospital, College of Medicine, Rangsit University, Bangkok, Thailand
- Varis Ruamviboonsuk: Department of Biochemistry, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand
- Sermsiri Sangroongruangsri: Social and Administrative Pharmacy Division, Department of Pharmacy, Faculty of Pharmacy, Mahidol University, Bangkok, Thailand
34
Zheng C, Xie X, Wang Z, Li W, Chen J, Qiao T, Qian Z, Liu H, Liang J, Chen X. Development and validation of deep learning algorithms for automated eye laterality detection with anterior segment photography. Sci Rep 2021; 11:586. [PMID: 33436781 PMCID: PMC7803760 DOI: 10.1038/s41598-020-79809-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 06/06/2020] [Accepted: 12/09/2020] [Indexed: 02/05/2023]
Abstract
This study aimed to develop and validate a deep learning (DL) model for automated detection of eye laterality on anterior segment photographs. Anterior segment photographs for training were collected with a Scheimpflug anterior segment analyzer. We applied transfer learning and fine-tuning of pre-trained deep convolutional neural networks (InceptionV3, VGG16, MobileNetV2) to develop DL models for determining eye laterality. Testing datasets from Scheimpflug and slit-lamp digital camera photography were used to test the DL models, and the results were compared with classifications performed by human experts. Performance was evaluated by accuracy, sensitivity, specificity, receiver operating characteristic curves, and the corresponding area under the curve (AUC) values. A total of 14,468 photographs were collected for model development. After training for 100 epochs, the InceptionV3-based DL model achieved an AUC of 0.998 (95% CI 0.924-0.958) for detecting eye laterality. On the external testing dataset (76 primary gaze photographs taken by a digital camera), the DL model achieved an accuracy of 96.1% (95% CI 91.7%-100%), better than the accuracies of 72.3% (95% CI 62.2%-82.4%), 82.8% (95% CI 78.7%-86.9%), and 86.8% (95% CI 82.5%-91.1%) achieved by human graders. Our study demonstrates that this high-performing DL model can be used for automated labeling of eye laterality, and that it is useful for managing large volumes of anterior segment images captured with slit-lamp cameras in clinical settings.
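The accuracy confidence intervals quoted above are consistent with a normal-approximation (Wald) interval for a binomial proportion, clipped to [0, 1]. A short sketch checking the external-test figure; note that 73 correct out of 76 is an assumption inferred from the reported 96.1%, not a number stated in the abstract:

```python
import math

def wald_ci(correct, n, z=1.96):
    """Normal-approximation 95% CI for a binomial proportion,
    clipped to the [0, 1] range."""
    p = correct / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Assumed 73/76 correct on the 76 external primary-gaze photographs
# (73/76 ≈ 96.1%, matching the reported accuracy).
low, high = wald_ci(73, 76)
print(f"{low:.1%}-{high:.1%}")  # → 91.7%-100.0%
```

This reproduces the reported 91.7%-100% interval; with proportions this close to 1 on small samples, an exact (Clopper-Pearson) or Wilson interval would be a more conservative choice.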
Affiliation(s)
- Ce Zheng: Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaolin Xie: Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Zhilei Wang: Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Wen Li: Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Jili Chen: Department of Ophthalmology, Shibei Hospital, Shanghai, China
- Tong Qiao: Department of Ophthalmology, Shanghai Children's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Zhuyun Qian: Department of Ophthalmology, Shanghai Aier Eye Hospital, No. 1286, Hongqiao Road, Changning District, Shanghai, 200050, China
- Hui Liu: Aier School of Ophthalmology, Central South University, Changsha, Hunan Province, China
- Jianheng Liang: Aier School of Ophthalmology, Central South University, Changsha, Hunan Province, China
- Xu Chen: Department of Ophthalmology, Shanghai Aier Eye Hospital, No. 1286, Hongqiao Road, Changning District, Shanghai, 200050, China; Aier School of Ophthalmology, Central South University, Changsha, Hunan Province, China
35
Ting DSJ, Foo VH, Yang LWY, Sia JT, Ang M, Lin H, Chodosh J, Mehta JS, Ting DSW. Artificial intelligence for anterior segment diseases: Emerging applications in ophthalmology. Br J Ophthalmol 2020; 105:158-168. [PMID: 32532762 DOI: 10.1136/bjophthalmol-2019-315651] [Citation(s) in RCA: 92] [Impact Index Per Article: 23.0] [Received: 12/02/2019] [Revised: 02/21/2020] [Accepted: 03/24/2020] [Indexed: 12/12/2022]
Abstract
With the advancement of computational power, the refinement of learning algorithms and architectures, and the availability of big data, artificial intelligence (AI) technology, particularly machine learning and deep learning, is paving the way for 'intelligent' healthcare systems. AI-related research in ophthalmology previously focused on the screening and diagnosis of posterior segment diseases, particularly diabetic retinopathy, age-related macular degeneration and glaucoma. There is now emerging evidence demonstrating the application of AI to the diagnosis and management of a variety of anterior segment conditions. In this review, we provide an overview of AI applications to anterior segment conditions, including keratoconus, infectious keratitis, refractive surgery, corneal transplant, adult and paediatric cataracts, angle-closure glaucoma and iris tumour, and highlight important clinical considerations for the adoption of AI technologies, potential integration with telemedicine, and future directions.
Affiliation(s)
- Darren Shu Jeng Ting: Academic Ophthalmology, University of Nottingham, Nottingham, UK; Department of Ophthalmology, Queen's Medical Centre, Nottingham, UK; Singapore Eye Research Institute, Singapore
- Josh Tjunrong Sia: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Marcus Ang: Singapore Eye Research Institute, Singapore; Cornea And Ext Disease, Singapore National Eye Centre, Singapore
- Haotian Lin: Sun Yat-Sen University Zhongshan Ophthalmic Center, Guangzhou, China
- James Chodosh: Ophthalmology, Massachusetts Eye and Ear Infirmary, Howe Laboratory, Harvard Medical School, Boston, Massachusetts, USA
- Jodhbir S Mehta: Singapore Eye Research Institute, Singapore; Cornea And Ext Disease, Singapore National Eye Centre, Singapore
- Daniel Shu Wei Ting: Singapore Eye Research Institute, Singapore; Vitreo-retinal Department, Singapore National Eye Center, Singapore