1.
Zhang H, Zhang K, Wang J, Yu S, Li Z, Yin S, Zhu J, Wei W. Quickly diagnosing Bietti crystalline dystrophy with deep learning. iScience 2024;27:110579. PMID: 39220263; PMCID: PMC11365386; DOI: 10.1016/j.isci.2024.110579.
Abstract
Bietti crystalline dystrophy (BCD) is an autosomal recessive inherited retinal disease (IRD), and its early, precise diagnosis remains challenging. This study aimed to diagnose BCD and classify its clinical stage from ultra-wide-field (UWF) color fundus photographs (CFPs) via deep learning (DL). All CFPs were labeled as BCD, retinitis pigmentosa (RP), or normal, and the BCD patients were further divided into three stages. Three DL models (ResNeXt, Wide ResNet, and ResNeSt) were developed, and model performance was evaluated using accuracy and confusion matrices. Diagnostic interpretability was then verified with heatmaps. The models achieved good classification results. Our study established the largest BCD database in a Chinese population to date. We developed a rapid diagnostic method for BCD and evaluated the potential efficacy of an automatic diagnosis and grading DL algorithm based on UWF fundus photography in a Chinese cohort of BCD patients.
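The evaluation described above relies on accuracy and a confusion matrix. A minimal sketch of those two computations follows; the class ordering and toy predictions are assumptions for illustration, not data from the study:

```python
CLASSES = ["normal", "RP", "BCD"]  # assumed ordering, for illustration only

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are the true class, columns the predicted class."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

def accuracy(matrix):
    """Fraction of samples on the matrix diagonal (correct predictions)."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Toy labels: 8 photographs, each an index into CLASSES.
y_true = [0, 0, 1, 1, 2, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 2, 0]
cm = confusion_matrix(y_true, y_pred, len(CLASSES))
print(cm)            # [[1, 1, 0], [0, 2, 0], [1, 0, 3]]
print(accuracy(cm))  # 0.75
```

Off-diagonal cells show which classes are confused with which, which is what makes the confusion matrix more informative than accuracy alone for a three-way task like this.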
Affiliation(s)
- Haihan Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Kai Zhang
- Chongqing Chang’an Industrial Group Co. Ltd, Chongqing, China
- Jinyuan Wang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- School of Clinical Medicine, Tsinghua University, Beijing, China
- Shicheng Yu
- Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
- Zhixi Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou 510060, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Shiyi Yin
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jingyuan Zhu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Wenbin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
2.
Du K, Dong L, Zhang K, Guan M, Chen C, Xie L, Kong W, Li H, Zhang R, Zhou W, Wu H, Dong H, Wei W. Deep learning system for screening AIDS-related cytomegalovirus retinitis with ultra-wide-field fundus images. Heliyon 2024;10:e30881. PMID: 38803983; PMCID: PMC11128864; DOI: 10.1016/j.heliyon.2024.e30881.
Abstract
Background Ophthalmological screening for cytomegalovirus retinitis (CMVR) in HIV/AIDS patients is important to prevent lifelong blindness. Previous studies have shown good performance of automated CMVR screening using digital fundus images. However, the application of a deep learning (DL) system to CMVR with ultra-wide-field (UWF) fundus images has not been studied, and the feasibility and efficiency of this method are uncertain. Methods We developed, internally validated, externally validated, and prospectively validated a DL system to detect AIDS-related CMVR from UWF fundus images drawn from different clinical datasets. Using the InceptionResnetV2 network, we developed and internally validated a DL system for identifying active CMVR, inactive CMVR, and non-CMVR in 6960 UWF fundus images from 862 AIDS patients, and validated the system in a prospective and an external validation dataset using the area under the curve (AUC), accuracy, sensitivity, and specificity. A heat map identified the most important areas (lesions) used by the DL system to differentiate CMVR. Results The DL system showed AUCs of 0.945 (95% confidence interval [CI]: 0.929, 0.962), 0.964 (95% CI: 0.870, 0.999), and 0.968 (95% CI: 0.860, 1.000) for detecting active CMVR from non-CMVR, and 0.923 (95% CI: 0.908, 0.938), 0.902 (0.857, 0.948), and 0.884 (0.851, 0.917) for detecting active CMVR from non-CMVR and inactive CMVR, in the internal cross-validation, external validation, and prospective validation, respectively. Deep learning performed promisingly in screening CMVR: it differentiated active CMVR from non-CMVR and inactive CMVR, and identified active and inactive CMVR from non-CMVR (all AUCs in the three independent datasets >0.900). The heat maps successfully highlighted lesion locations.
Conclusions Our UWF fundus image-based DL system showed reliable performance for screening AIDS-related CMVR, demonstrating its potential for screening CMVR in HIV/AIDS patients, especially where ophthalmic resources are lacking.
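The AUC values reported above can be reproduced from raw model scores with the Mann-Whitney formulation, without any plotting or external library; the score lists below are invented for illustration:

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney formulation of AUC: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative
    case (ties count as 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Made-up model scores for positive (e.g., active CMVR) and negative cases.
print(auc([0.9, 0.8, 0.7], [0.4, 0.8, 0.2]))  # ≈0.833
```

An AUC of 1.0 means every positive outscores every negative; 0.5 is chance level, which is why the three validation AUCs above (all >0.88) indicate strong discrimination.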
Affiliation(s)
- Kuifang Du
- Department of Ophthalmology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Li Dong
- Beijing Tongren Eye Centre, Beijing Key Laboratory of Intraocular Tumour Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Kai Zhang
- Chongqing Chang'an Industrial Group Co. Ltd, Chongqing, China
- Meilin Guan
- Department of Ophthalmology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Chao Chen
- Department of Ophthalmology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Lianyong Xie
- Department of Ophthalmology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Wenjun Kong
- Department of Ophthalmology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Heyan Li
- Beijing Tongren Eye Centre, Beijing Key Laboratory of Intraocular Tumour Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Ruiheng Zhang
- Beijing Tongren Eye Centre, Beijing Key Laboratory of Intraocular Tumour Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Wenda Zhou
- Beijing Tongren Eye Centre, Beijing Key Laboratory of Intraocular Tumour Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Haotian Wu
- Beijing Tongren Eye Centre, Beijing Key Laboratory of Intraocular Tumour Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Hongwei Dong
- Department of Ophthalmology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Wenbin Wei
- Beijing Tongren Eye Centre, Beijing Key Laboratory of Intraocular Tumour Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
3.
Wan C, Mao Y, Xi W, Zhang Z, Wang J, Yang W. DBPF-net: dual-branch structural feature extraction reinforcement network for ocular surface disease image classification. Front Med (Lausanne) 2024;10:1309097. PMID: 38239621; PMCID: PMC10794599; DOI: 10.3389/fmed.2023.1309097.
Abstract
Pterygium and subconjunctival hemorrhage are two common ocular surface diseases that can cause distress and anxiety in patients. In this study, 2855 ocular surface images were collected in four categories: normal ocular surface, subconjunctival hemorrhage, pterygium to be observed, and pterygium requiring surgery. We propose a diagnostic classification model for ocular surface diseases, a dual-branch network reinforced by a PFM block (DBPF-Net), which adopts the Conformer model with its two-branch architecture as the backbone of a four-way classification model. In addition, we propose a block composed of a patch merging layer and an FReLU layer (PFM block) to extract spatial structure features and further strengthen the feature extraction capability of the model. In practice, only an ocular surface image needs to be input into the model, which then discriminates automatically between the disease categories. We also trained the VGG16, ResNet50, EfficientNetB7, and Conformer models, and evaluated and analyzed the results of all models on the test set. The main evaluation indicators were sensitivity, specificity, F1-score, area under the receiver operating characteristic curve (AUC), kappa coefficient, and accuracy. The accuracy and kappa coefficient of the proposed model averaged 0.9789 and 0.9681, respectively, over several experiments. The sensitivity, specificity, F1-score, and AUC were 0.9723, 0.9836, 0.9688, and 0.9869, respectively, for diagnosing pterygium to be observed, and 0.9210, 0.9905, 0.9292, and 0.9776, respectively, for diagnosing pterygium requiring surgery. The proposed method has high clinical reference value for recognizing these four types of ocular surface images.
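One of the evaluation indicators above, the kappa coefficient, can be computed directly from a confusion matrix. A sketch with a toy 2x2 example follows (the paper's task is four-way, but the formula is identical for any square matrix):

```python
def cohens_kappa(matrix):
    """Cohen's kappa from a square confusion matrix
    (rows = true class, columns = predicted class)."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    # Observed agreement: fraction on the diagonal.
    observed = sum(matrix[i][i] for i in range(n)) / total
    # Expected agreement by chance, from the row and column marginals.
    expected = sum(
        sum(matrix[i]) * sum(row[i] for row in matrix) for i in range(n)
    ) / total ** 2
    return (observed - expected) / (1 - expected)

# Toy 2x2 matrix, invented for illustration only.
print(cohens_kappa([[40, 5], [10, 45]]))  # ≈0.7
```

Unlike raw accuracy, kappa discounts chance agreement, which is why it is reported alongside accuracy for imbalanced multi-class tasks like this one.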
Affiliation(s)
- Cheng Wan
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Yulong Mao
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Wenqun Xi
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Zhe Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Jiantao Wang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
4.
Zhang R, Dong L, Li R, Zhang K, Li Y, Zhao H, Shi J, Ge X, Xu X, Jiang L, Shi X, Zhang C, Zhou W, Xu L, Wu H, Li H, Yu C, Li J, Ma J, Wei W. Automatic retinoblastoma screening and surveillance using deep learning. Br J Cancer 2023;129:466-474. PMID: 37344582; PMCID: PMC10403507; DOI: 10.1038/s41416-023-02320-z.
Abstract
BACKGROUND Retinoblastoma is the most common intraocular malignancy in childhood. With advanced management strategies, globe salvage and overall survival have significantly improved, which poses subsequent challenges for long-term surveillance and offspring screening. This study aimed to apply a deep learning algorithm to reduce the burden of follow-up and offspring screening. METHODS This cohort study included retinoblastoma patients who visited Beijing Tongren Hospital from March 2018 to January 2022 for deep learning algorithm development. Clinically suspected and treated retinoblastoma patients from February 2022 to June 2022 were prospectively collected for prospective validation. Images of the posterior pole and peripheral retina were collected, and reference standards were made according to the consensus of the multidisciplinary management team. A deep learning algorithm was trained to identify "normal fundus", "stable retinoblastoma", in which specific treatment is not required, and "active retinoblastoma", in which specific treatment is required. The performance of each classifier was evaluated using sensitivity, specificity, accuracy, and cost-utility. RESULTS A total of 36,623 images were included for developing the Deep Learning Assistant for Retinoblastoma Monitoring (DLA-RB) algorithm. In internal fivefold cross-validation, DLA-RB achieved an area under the curve (AUC) of 0.998 (95% confidence interval [CI] 0.986-1.000) in distinguishing normal fundus from active retinoblastoma, and 0.940 (95% CI 0.851-0.996) in distinguishing stable from active retinoblastoma. From February 2022 to June 2022, 139 eyes of 103 patients were prospectively collected. In identifying active retinoblastoma tumours among all clinically suspected patients and active retinoblastoma among all treated retinoblastoma patients, the AUC of DLA-RB reached 0.991 (95% CI 0.970-1.000) and 0.962 (95% CI 0.915-1.000), respectively.
Combining ophthalmologists with DLA-RB significantly improved the accuracy of both competent ophthalmologists and residents on both binary tasks. Cost-utility analysis revealed that the DLA-RB-based diagnosis mode is cost-effective for both retinoblastoma diagnosis and active retinoblastoma identification. CONCLUSIONS DLA-RB achieved high accuracy and sensitivity in identifying active retinoblastoma from normal fundus and stable retinoblastoma. It can be used to surveil retinoblastoma activity during follow-up and to screen high-risk offspring. Compared with referral procedures to ophthalmologic centres, DLA-RB-based screening and surveillance is cost-effective and can be incorporated within telemedicine programs. CLINICAL TRIAL REGISTRATION This study was registered on ClinicalTrials.gov (NCT05308043).
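The per-classifier metrics named above (sensitivity, specificity, accuracy) reduce to simple ratios over the binary counts. A sketch with invented counts, not the study's data:

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from binary counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on the positive class
        "specificity": tn / (tn + fp),   # recall on the negative class
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Toy counts for an "active vs. not active" screening task.
print(binary_metrics(tp=90, fp=10, fn=10, tn=90))
# {'sensitivity': 0.9, 'specificity': 0.9, 'accuracy': 0.9}
```

For a screening tool, sensitivity is usually the metric to protect, since a missed active tumour costs far more than a false referral.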
Affiliation(s)
- Ruiheng Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Ruyue Li
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Kai Zhang
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Yitong Li
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Hongshu Zhao
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jitong Shi
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xin Ge
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xiaolin Xu
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Libin Jiang
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xuhan Shi
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Chuan Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Wenda Zhou
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Liangyuan Xu
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Haotian Wu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Heyan Li
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Chuyao Yu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jing Li
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jianmin Ma
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China.
- Wenbin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.
5.
Won YK, Lee H, Kim Y, Han G, Chung TY, Ro YM, Lim DH. Deep learning-based classification system of bacterial keratitis and fungal keratitis using anterior segment images. Front Med (Lausanne) 2023;10:1162124. PMID: 37275380; PMCID: PMC10233039; DOI: 10.3389/fmed.2023.1162124.
Abstract
Introduction Infectious keratitis is a vision-threatening disease. Bacterial and fungal keratitis are often confused in the early stages, so correct diagnosis and treatment optimized for the causative organism are crucial. Antibacterial and antifungal medications are completely different, and the prognosis for fungal keratitis is much worse. Because identifying the causative microorganism takes a long time, empirical treatment must be started according to the appearance of the lesion before an accurate diagnosis is available. We therefore developed an automated deep learning (DL) based system for diagnosing bacterial and fungal keratitis from anterior segment photographs, using two proposed modules: a Lesion Guiding Module (LGM) and a Mask Adjusting Module (MAM). Methods We used 684 anterior segment photographs from 107 patients confirmed as bacterial or fungal keratitis by corneal scraping culture. Both broad- and slit-beam images were included in the analysis. We set the baseline classifier to ResNet-50. The LGM was designed to learn the location information of lesions annotated by ophthalmologists, and the slit-beam MAM was applied to extract the correct feature points from the two image types (broad- and slit-beam) during the training phase. Our algorithm was then externally validated using 98 images from Google image search and ophthalmology textbooks. Results A total of 594 images from 88 patients were used for training, and 90 images from 19 patients were used for testing. Compared with the baseline ResNet-50, the proposed method with LGM and MAM showed significantly higher diagnostic accuracy (81.1 vs. 87.8%). The model also achieved a significant improvement on the open-source dataset (64.2 vs. 71.4%). The LGM and MAM modules showed positive effects in an ablation study.
Discussion This study demonstrated the potential of a novel DL-based diagnostic algorithm for bacterial and fungal keratitis using two types of anterior segment photographs. The proposed network containing the LGM and slit-beam MAM is robust in improving diagnostic accuracy and in overcoming the limitations of small training data and multiple image types.
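The Lesion Guiding Module described above steers the network toward ophthalmologist-annotated lesion regions. As a loose caricature of that idea (not the paper's actual mechanism), one can gate a feature map with a binary lesion mask, down-weighting everything outside the annotation; the weighting scheme here is my own assumption for illustration:

```python
def apply_lesion_mask(feature_map, mask, background_weight=0.5):
    """Keep features inside the annotated lesion at full strength and
    down-weight everything outside it."""
    return [
        [f * (1.0 if m else background_weight) for f, m in zip(f_row, m_row)]
        for f_row, m_row in zip(feature_map, mask)
    ]

features = [[1.0, 2.0],
            [3.0, 4.0]]
lesion = [[1, 0],   # 1 = pixel inside the annotated lesion
          [0, 1]]
print(apply_lesion_mask(features, lesion))  # [[1.0, 1.0], [1.5, 4.0]]
```

In a real network this gating would be applied to learned feature tensors during training, so the classifier's gradients concentrate on the lesion rather than on background cues.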
Affiliation(s)
- Yeo Kyoung Won
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Hyebin Lee
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Youngjun Kim
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Gyule Han
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Tae-Young Chung
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Yong Man Ro
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Dong Hui Lim
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Department of Digital Health, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, Seoul, Republic of Korea
6.
Wang N, Zhang Y, Wang W, Ye Z, Chen H, Hu G, Ouyang D. How can machine learning and multiscale modeling benefit ocular drug development? Adv Drug Deliv Rev 2023;196:114772. PMID: 36906232; DOI: 10.1016/j.addr.2023.114772.
Abstract
The eyes possess sophisticated physiological structures, diverse disease targets, limited drug delivery space, distinctive barriers, and complicated biomechanical processes, requiring a more in-depth understanding of the interactions between drug delivery systems and biological systems for ocular formulation development. However, the tiny size of the eyes makes sampling difficult and invasive studies costly and ethically constrained. Developing ocular formulations following conventional trial-and-error formulation and manufacturing process screening procedures is inefficient. Along with the popularity of computational pharmaceutics, non-invasive in silico modeling & simulation offer new opportunities for the paradigm shift of ocular formulation development. The current work first systematically reviews the theoretical underpinnings, advanced applications, and unique advantages of data-driven machine learning and multiscale simulation approaches represented by molecular simulation, mathematical modeling, and pharmacokinetic (PK)/pharmacodynamic (PD) modeling for ocular drug development. Following this, a new computer-driven framework for rational pharmaceutical formulation design is proposed, inspired by the potential of in silico explorations in understanding drug delivery details and facilitating drug formulation design. Lastly, to promote the paradigm shift, integrated in silico methodologies were highlighted, and discussions on data challenges, model practicality, personalized modeling, regulatory science, interdisciplinary collaboration, and talent training were conducted in detail with a view to achieving more efficient objective-oriented pharmaceutical formulation design.
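As a toy instance of the mathematical and pharmacokinetic modeling approaches surveyed above, a one-compartment model with first-order elimination can be integrated numerically; the dose and rate constant below are arbitrary illustrative values, not ocular PK parameters from the review:

```python
def simulate_concentration(dose, ke, dt, t_end):
    """One-compartment model with first-order elimination,
    dC/dt = -ke * C, integrated with explicit Euler steps."""
    c = dose
    series = [c]
    for _ in range(int(t_end / dt)):
        c += -ke * c * dt
        series.append(c)
    return series

# ke = 0.5 per hour, observed hourly for 3 hours (arbitrary toy values).
print(simulate_concentration(100.0, 0.5, 1.0, 3.0))  # [100.0, 50.0, 25.0, 12.5]
```

Real ocular PK models add compartments for tear film, cornea, aqueous humor, and vitreous with transfer rates between them, but each compartment follows the same first-order bookkeeping shown here.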
Affiliation(s)
- Nannan Wang
- State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences (ICMS), University of Macau, Macau, China
- Yunsen Zhang
- State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences (ICMS), University of Macau, Macau, China
- Wei Wang
- State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences (ICMS), University of Macau, Macau, China
- Zhuyifan Ye
- State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences (ICMS), University of Macau, Macau, China
- Hongyu Chen
- State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences (ICMS), University of Macau, Macau, China; Faculty of Science and Technology (FST), University of Macau, Macau, China
- Guanghui Hu
- Faculty of Science and Technology (FST), University of Macau, Macau, China
- Defang Ouyang
- State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences (ICMS), University of Macau, Macau, China; Department of Public Health and Medicinal Administration, Faculty of Health Sciences (FHS), University of Macau, Macau, China.
7.
Zhang Z, Wang Y, Zhang H, Samusak A, Rao H, Xiao C, Abula M, Cao Q, Dai Q. Artificial intelligence-assisted diagnosis of ocular surface diseases. Front Cell Dev Biol 2023;11:1133680. PMID: 36875760; PMCID: PMC9981656; DOI: 10.3389/fcell.2023.1133680.
Abstract
With the rapid development of computer technology, the application of artificial intelligence (AI) in ophthalmology research has gained prominence in modern medicine. AI-related research in ophthalmology previously focused on the screening and diagnosis of fundus diseases, particularly diabetic retinopathy, age-related macular degeneration, and glaucoma. Because fundus imaging is relatively standardized, its standards are easy to unify. AI research related to ocular surface diseases has also increased; its main challenge is that the images involved are complex and span many modalities. This review therefore summarizes current AI research and the technologies used to diagnose ocular surface diseases such as pterygium, keratoconus, infectious keratitis, and dry eye, to identify mature AI models suitable for research on ocular surface diseases and potential algorithms that may be used in the future.
Affiliation(s)
- Zuhui Zhang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Ying Wang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Hongzhen Zhang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Arzigul Samusak
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Huimin Rao
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Chun Xiao
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Muhetaer Abula
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Qixin Cao
- Huzhou Traditional Chinese Medicine Hospital Affiliated to Zhejiang University of Traditional Chinese Medicine, Huzhou, China
- Qi Dai
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
8.
Li J, Wang S, Hu S, Sun Y, Wang Y, Xu P, Ye J. Class-Aware Attention Network for infectious keratitis diagnosis using corneal photographs. Comput Biol Med 2022;151:106301. PMID: 36403354; DOI: 10.1016/j.compbiomed.2022.106301.
Abstract
Infectious keratitis is a common ophthalmic disease and one of the main blinding eye diseases in China; rapid and accurate diagnosis and treatment are therefore essential to prevent disease progression and limit the degree of corneal injury. Unfortunately, manual diagnostic accuracy is often unsatisfactory because the visual features of the different infection types are difficult to distinguish. In this paper, we propose a novel end-to-end fully convolutional network, named Class-Aware Attention Network (CAA-Net), for automatically diagnosing infectious keratitis (normal, viral keratitis, fungal keratitis, and bacterial keratitis) using corneal photographs. In CAA-Net, a class-aware classification module is first trained to learn class-related discriminative features using separate branches for each class. The learned class-aware discriminative features are then fed into the main branch and fused with other feature maps using two attention strategies to improve the final multi-class classification performance. For the experiments, we built a new corneal photograph dataset with 1886 images from 519 patients and conducted comprehensive experiments to verify the effectiveness of the proposed method. The code is available at https://github.com/SWF-hao/CAA-Net_Pytorch.
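The class-aware fusion step described above can be caricatured outside any deep learning framework: per-class branch features are reweighted by attention gates and added to the main-branch features. The NumPy sketch below is only an illustration of that fusion pattern under invented shapes and a global-average stand-in for learned attention; it is not the authors' CAA-Net implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_class_aware(main_feat, branch_feats):
    """Fuse per-class branch features into the main feature map.

    main_feat:    (C, H, W) main-branch feature map
    branch_feats: (K, C, H, W) one feature map per class branch
    Each class branch contributes according to a softmax attention weight
    derived here from its global average activation (a stand-in for the
    learned attention in the paper).
    """
    scores = branch_feats.mean(axis=(1, 2, 3))                    # (K,) one score per branch
    weights = softmax(scores)                                     # (K,) attention gates
    fused_branches = np.tensordot(weights, branch_feats, axes=1)  # (C, H, W) weighted sum
    return main_feat + fused_branches

rng = np.random.default_rng(0)
main = rng.normal(size=(8, 4, 4))
branches = rng.normal(size=(4, 8, 4, 4))  # 4 classes, as in CAA-Net's diagnosis task
fused = fuse_class_aware(main, branches)
print(fused.shape)  # (8, 4, 4)
```

In the real network these gates are learned end-to-end and the fusion happens inside convolutional layers; the point here is only the shape of the computation.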
Affiliation(s)
- Jinhao Li: School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, Shandong, China
- Shuai Wang: School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, 264209, Shandong, China; Suzhou Research Institute of Shandong University, Suzhou, 215123, Jiangsu, China
- Shaodan Hu: Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China
- Yiming Sun: Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China
- Yaqi Wang: College of Media Engineering, Communication University of Zhejiang, Hangzhou, 310018, Zhejiang, China
- Peifang Xu: Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China
- Juan Ye: Eye Center, Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, 310009, Zhejiang, China
9

Fang X, Deshmukh M, Chee ML, Soh ZD, Teo ZL, Thakur S, Goh JHL, Liu YC, Husain R, Mehta JS, Wong TY, Cheng CY, Rim TH, Tham YC. Deep learning algorithms for automatic detection of pterygium using anterior segment photographs from slit-lamp and hand-held cameras. Br J Ophthalmol 2022; 106:1642-1647. PMID: 34244208; PMCID: PMC9685734; DOI: 10.1136/bjophthalmol-2021-318866. Received 01/17/2021; accepted 06/25/2021.
Abstract
BACKGROUND/AIMS To evaluate the performance of deep learning (DL) algorithms for detecting the presence and extent of pterygium, based on colour anterior segment photographs (ASPs) taken with slit-lamp and hand-held cameras. METHODS Referable pterygium was defined as extension from the limbus toward the cornea of >2.50 mm or base width at the limbus of >5.00 mm. 2503 images from the Singapore Epidemiology of Eye Diseases (SEED) study were used as the development set. Algorithms were validated on an internal set from the SEED cohort (629 images; 55.3% pterygium, 8.4% referable pterygium) and tested on two external clinic-based sets (set 1: 2610 slit-lamp ASPs; 2.8% pterygium, 0.7% referable pterygium; set 2: 3701 hand-held ASPs; 2.5% pterygium, 0.9% referable pterygium). RESULTS The algorithm's area under the receiver operating characteristic curve (AUROC) for detection of any pterygium was 99.5% (sensitivity=98.6%; specificity=99.0%) in the internal test set, 99.1% (sensitivity=95.9%; specificity=98.5%) in external test set 1, and 99.7% (sensitivity=100.0%; specificity=88.3%) in external test set 2. For referable pterygium, the algorithm's AUROC was 98.5% (sensitivity=94.0%; specificity=95.3%) in the internal test set, 99.7% (sensitivity=87.2%; specificity=99.4%) in external set 1, and 99.0% (sensitivity=94.3%; specificity=98.0%) in external set 2. CONCLUSION DL algorithms based on ASPs can detect the presence of pterygium and referable-level pterygium with high sensitivity and specificity. These algorithms, particularly if used with a hand-held camera, may serve as a simple screening tool for detecting referable pterygium. Further validation in a community setting is warranted. SYNOPSIS/PRECIS DL algorithms based on ASPs detect pterygium and referable-level pterygium well, and may serve as a simple screening tool for detecting referable pterygium in community screenings.
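The referable-pterygium threshold used in this study is a simple rule on two measurements and can be transcribed directly. The function below is an illustrative encoding of that definition (measurements in millimetres), not part of the authors' pipeline, which predicts referability from images rather than from measured dimensions.

```python
def is_referable_pterygium(corneal_extension_mm: float, base_width_mm: float) -> bool:
    """Referable pterygium per the study definition:
    extension from the limbus toward the cornea > 2.50 mm,
    or base width at the limbus > 5.00 mm."""
    return corneal_extension_mm > 2.50 or base_width_mm > 5.00

assert is_referable_pterygium(2.6, 1.0)      # meets the extension criterion
assert is_referable_pterygium(1.0, 5.2)      # meets the base-width criterion
assert not is_referable_pterygium(2.5, 5.0)  # thresholds are strict inequalities
```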
Affiliation(s)
- Xiaoling Fang: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Department of Ophthalmology, Shanghai Eye Diseases Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China
- Mihir Deshmukh, Miao Li Chee, Zhi-Da Soh, Zhen Ling Teo, Sahil Thakur: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yu-Chi Liu, Rahat Husain, Jodhbir S Mehta, Tyler Hyungtaek Rim, Yih-Chung Tham: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Tien Yin Wong, Ching-Yu Cheng: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore
10

An Image Diagnosis Algorithm for Keratitis Based on Deep Learning. Neural Process Lett 2022. DOI: 10.1007/s11063-021-10716-2.
11

Liao J, Liu L, Duan H, Huang Y, Zhou L, Chen L, Wang C. Using a Convolutional Neural Network and Convolutional Long Short-term Memory to Automatically Detect Aneurysms on 2D Digital Subtraction Angiography Images: Framework Development and Validation. JMIR Med Inform 2022; 10:e28880. PMID: 35294371; PMCID: PMC8968557; DOI: 10.2196/28880. Received 03/23/2021; revised 06/27/2021; accepted 01/16/2022.
Abstract
BACKGROUND It is hard to distinguish cerebral aneurysms from overlapping vessels in 2D digital subtraction angiography (DSA) images because these images lack spatial information. OBJECTIVE The aims of this study were to (1) construct a deep learning diagnostic system to improve the ability to detect posterior communicating artery aneurysms on 2D DSA images and (2) validate the efficiency of that system in 2D DSA aneurysm detection. METHODS We proposed a 2-stage detection system. First, we established the region localization stage to automatically locate specific detection regions of raw 2D DSA sequences. Second, in the intracranial aneurysm detection stage, we constructed a bi-input+RetinaNet+convolutional long short-term memory (C-LSTM) framework and compared its performance for aneurysm detection with that of 3 existing frameworks. Each framework was evaluated with a 5-fold cross-validation scheme. The receiver operating characteristic curve, the area under the curve (AUC) value, mean average precision, sensitivity, specificity, and accuracy were used to assess the abilities of the different frameworks. RESULTS A total of 255 patients with posterior communicating artery aneurysms and 20 patients without aneurysms were included in this study. The best AUC values of the RetinaNet, RetinaNet+C-LSTM, bi-input+RetinaNet, and bi-input+RetinaNet+C-LSTM frameworks were 0.95, 0.96, 0.92, and 0.97, respectively. The mean sensitivities of these four frameworks and human experts were 89% (range 67.02%-98.43%), 88% (range 65.76%-98.06%), 87% (range 64.53%-97.66%), 89% (range 67.02%-98.43%), and 90% (range 68.30%-98.77%), respectively.
The mean specificities of the RetinaNet, RetinaNet+C-LSTM, bi-input+RetinaNet, and bi-input+RetinaNet+C-LSTM frameworks and human experts were 80% (range 56.34%-94.27%), 89% (range 67.02%-98.43%), 86% (range 63.31%-97.24%), 93% (range 72.30%-99.56%), and 90% (range 68.30%-98.77%), respectively. The mean accuracies of the RetinaNet, RetinaNet+C-LSTM, bi-input+RetinaNet, and bi-input+RetinaNet+C-LSTM frameworks and human experts were 84.50% (range 69.57%-93.97%), 88.50% (range 74.44%-96.39%), 86.50% (range 71.97%-95.22%), 91% (range 77.63%-97.72%), and 90% (range 76.34%-97.21%), respectively. CONCLUSIONS According to our results, more spatial and temporal information can help improve the performance of the frameworks. Therefore, the bi-input+RetinaNet+C-LSTM framework had the best performance when compared to that of the other frameworks. Our study demonstrates that our system can assist physicians in detecting intracranial aneurysms on 2D DSA images.
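The evaluation protocol here, 5-fold cross-validation with per-fold AUC, is standard and can be sketched with scikit-learn. The classifier and the synthetic features below are placeholders standing in for the detection frameworks and DSA-derived features in the paper; only the cross-validation/AUC scaffolding is the point.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for per-image features and aneurysm labels.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

aucs = []
for train_idx, test_idx in StratifiedKFold(
    n_splits=5, shuffle=True, random_state=0
).split(X, y):
    # A placeholder model; the study compared RetinaNet-based frameworks here.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"mean AUC over 5 folds: {np.mean(aucs):.3f}")
```

Stratified folds keep the class ratio stable per fold, which matters with imbalanced data such as 255 aneurysm vs 20 non-aneurysm patients.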
Affiliation(s)
- JunHua Liao: Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China; College of Computer Science, Sichuan University, Chengdu, China
- LunXin Liu: Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- HaiHan Duan: School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China
- YunZhi Huang: School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
- LiangXue Zhou: Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- LiangYin Chen: College of Computer Science, Sichuan University, Chengdu, China
- ChaoHua Wang: Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
12

Luo J, Chen Y, Yang Y, Zhang K, Liu Y, Zhao H, Dong L, Xu J, Li Y, Wei W. Prognosis Prediction of Uveal Melanoma After Plaque Brachytherapy Based on Ultrasound With Machine Learning. Front Med (Lausanne) 2022; 8:777142. PMID: 35127747; PMCID: PMC8816318; DOI: 10.3389/fmed.2021.777142. Received 09/14/2021; accepted 12/22/2021.
Abstract
INTRODUCTION Uveal melanoma (UM) is the most common intraocular malignancy in adults. Plaque brachytherapy remains the dominant eyeball-conserving therapy for UM, and tumor regression after plaque brachytherapy has been reported as a valuable prognostic factor. The present study aimed to develop an accurate machine-learning model to predict the 4-year risk of metastasis and death in UM based on ocular ultrasound data. MATERIAL AND METHODS A total of 454 patients with UM were enrolled in this retrospective, single-center study. All patients were followed up for at least 4 years after plaque brachytherapy and underwent ophthalmologic evaluations before the therapy. B-scan ultrasonography was used to measure the basal diameters and thickness of tumors preoperatively and postoperatively. The Random Forest (RF) algorithm was used to construct two prediction models: whether a patient will survive for more than 4 years, and whether the tumor will metastasize within 4 years after treatment. RESULTS Our predictive model achieved an area under the receiver operating characteristic curve (AUC) of 0.708 for predicting death using only a single follow-up record; including the data from two additional follow-ups increased the AUC to 0.883. For predicting metastasis, we attained AUCs of 0.730 and 0.846 with data from one and three follow-ups, respectively. Thus, the amount of postoperative follow-up data substantially improved the accuracy of death and metastasis prediction. Furthermore, we divided tumor treatment response into four patterns; the D (decrease) and S (stable) patterns are associated with a significantly better prognosis than the I (increase) and O (other) patterns. CONCLUSIONS The present study developed an RF model to predict the risk of metastasis and death from UM within 4 years based on ultrasound follow-up records following plaque brachytherapy. We intend to further validate our model on prospective datasets, enabling timely and efficient treatment.
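The modelling setup, a Random Forest over tabular ultrasound follow-up measurements predicting a binary 4-year outcome, can be sketched as follows. The feature layout and the synthetic outcome rule are invented for illustration; the real study used B-scan basal diameter and thickness measurements from 454 patients.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400
# Hypothetical per-patient features: tumour thickness and basal diameter (mm)
# at up to three follow-ups, flattened into 6 columns.
X = rng.normal(loc=[8, 12, 7, 11, 6, 10], scale=2.0, size=(n, 6))
# Toy outcome: higher residual thickness at the last follow-up -> higher risk.
y = (X[:, 4] + rng.normal(scale=2.0, size=n) > 6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
risk = rf.predict_proba(X_te)[:, 1]  # per-patient predicted risk in [0, 1]
print(f"AUC on held-out patients: {roc_auc_score(y_te, risk):.3f}")
```

Appending more follow-up columns per patient mirrors how the study's AUC rose from one follow-up record to three.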
Affiliation(s)
- Jingting Luo, Yuning Chen, Yuhang Yang, Yueming Liu, Hanqing Zhao, Li Dong, Jie Xu, Yang Li, Wenbin Wei: Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Kai Zhang: InferVision Healthcare Science and Technology Limited Company, Shanghai, China
13

Xu W, Jin L, Zhu PZ, He K, Yang WH, Wu MN. Implementation and Application of an Intelligent Pterygium Diagnosis System Based on Deep Learning. Front Psychol 2021; 12:759229. PMID: 34744935; PMCID: PMC8569253; DOI: 10.3389/fpsyg.2021.759229. Received 08/16/2021; accepted 10/04/2021.
Abstract
Objective: This study aims to implement and investigate the application of an intelligent diagnostic system based on deep learning for the diagnosis of pterygium using anterior segment photographs. Methods: A total of 1,220 anterior segment photographs of normal eyes and pterygium patients were collected for training (750 images) and testing (470 images) to develop an intelligent pterygium diagnostic model. The images were classified into three categories by experts and by the intelligent pterygium diagnosis system: (i) the normal group, (ii) the observation group of pterygium, and (iii) the operation group of pterygium. The intelligent diagnostic results were compared with those of the expert diagnosis. Indicators including accuracy, sensitivity, specificity, kappa value, the area under the receiver operating characteristic curve (AUC), 95% confidence interval (CI), and F1-score were evaluated. Results: The accuracy of the intelligent diagnosis system on the 470 testing photographs was 94.68%; the diagnostic consistency was high, with kappa values of all three groups above 85%. Additionally, the AUC values approached 100% in group 1 and 95% in the other two groups. The best results generated by the proposed system for sensitivity, specificity, and F1-score were 100, 99.64, and 99.74% in group 1; 90.06, 97.32, and 92.49% in group 2; and 92.73, 95.56, and 89.47% in group 3, respectively. Conclusion: The intelligent pterygium diagnosis system based on deep learning can not only judge the presence of pterygium but also classify its severity. This study is expected to provide a new screening tool for pterygium and to benefit patients in areas lacking medical resources.
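All of the indicators reported above (accuracy, kappa, per-class sensitivity/specificity, F1) derive from the 3-class confusion matrix. A minimal scikit-learn sketch on a tiny invented label set shows the computation; the labels are not from the study.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, f1_score)

# 0 = normal, 1 = observation-group pterygium, 2 = operation-group pterygium
y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 1])

cm = confusion_matrix(y_true, y_pred)        # 3x3: rows = truth, cols = prediction
acc = accuracy_score(y_true, y_pred)         # overall fraction correct
kappa = cohen_kappa_score(y_true, y_pred)    # chance-corrected agreement
f1 = f1_score(y_true, y_pred, average=None)  # one F1 per class

# Per-class sensitivity (recall) and specificity, read off the confusion matrix.
tp = np.diag(cm)
sensitivity = tp / cm.sum(axis=1)                       # TP / (TP + FN)
tn = cm.sum() - cm.sum(axis=0) - cm.sum(axis=1) + tp
specificity = tn / (tn + cm.sum(axis=0) - tp)           # TN / (TN + FP)

print(acc, round(kappa, 3))
```

One-vs-rest specificity for a multi-class problem treats each class in turn as "positive", which is how per-group specificities like those above are obtained.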
Affiliation(s)
- Wei Xu: Department of Optometry, Jinling Institute of Technology, Nanjing, China; Nanjing Key Laboratory of Optometric Materials and Application Technology, Nanjing, China
- Ling Jin: Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Peng-Zhi Zhu: Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Kai He: School of Information Engineering, Huzhou University, Huzhou, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou, China
- Wei-Hua Yang: Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Mao-Nian Wu: School of Information Engineering, Huzhou University, Huzhou, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou, China
14

Zhang H, Liu Y, Zhang K, Hui S, Feng Y, Luo J, Li Y, Wei W. Validation of the Relationship Between Iris Color and Uveal Melanoma Using Artificial Intelligence With Multiple Paths in a Large Chinese Population. Front Cell Dev Biol 2021; 9:713209. PMID: 34490264; PMCID: PMC8417124; DOI: 10.3389/fcell.2021.713209. Received 05/22/2021; accepted 07/23/2021.
Abstract
Previous studies have shown that light iris color is a predisposing factor for the development of uveal melanoma (UM) in populations of Caucasian ancestry. However, in all these studies a remarkably low percentage of patients had brown eyes, so we applied deep learning methods to investigate the correlation between iris color and the prevalence of UM in the Chinese population. All anterior segment photos were automatically segmented with U-NET, and only the iris regions were retained. The iris was then analyzed with machine learning methods (random forests and convolutional neural networks) to obtain the corresponding iris color spectra (classification probabilities). We obtained satisfactory segmentation results, highly consistent with those from experts. The iris color spectrum is consistent with the raters' assessments but shows no significant correlation with UM incidence.
Affiliation(s)
- Haihan Zhang, Yueming Liu, Shiqi Hui, Yu Feng, Jingting Luo, Yang Li, Wenbin Wei: Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Kai Zhang: SenseTime Group Ltd., Shanghai, China
15

Pan Q, Zhang K, He L, Dong Z, Zhang L, Wu X, Wu Y, Gao Y. Automatically Diagnosing Disk Bulge and Disk Herniation With Lumbar Magnetic Resonance Images by Using Deep Convolutional Neural Networks: Method Development Study. JMIR Med Inform 2021; 9:e14755. PMID: 34018488; PMCID: PMC8178733; DOI: 10.2196/14755. Received 05/18/2019; revised 10/27/2020; accepted 04/15/2021.
Abstract
Background Disk herniation and disk bulge are two common disorders of lumbar intervertebral disks (IVDs) that often result in numbness, pain in the lower limbs, and lower back pain. Magnetic resonance (MR) imaging is one of the most efficient techniques for detecting lumbar diseases and is widely used for clinical diagnosis at hospitals. However, efficient tools for interpreting the massive volume of MR images quickly enough to meet radiologists' needs are lacking. Objective The aim of this study was to present an automatic system for diagnosing disk bulge and herniation that saves time and significantly reduces the workload of radiologists. Methods The diagnosis of lumbar vertebral disorders is highly dependent on medical images, so we chose the two most common diseases, disk bulge and herniation, as research subjects. The study mainly concerns identifying the position of IVDs (lumbar vertebra [L] 1 to L2, L2-L3, L3-L4, L4-L5, and L5 to sacral vertebra [S] 1) by analyzing the geometrical relationship between sagittal and axial images, and classifying axial lumbar disk MR images via deep convolutional neural networks. Results The system involved 4 steps. In the first step, it automatically located vertebral bodies (L1, L2, L3, L4, L5, and S1) in sagittal images by using the faster region-based convolutional neural network; our fourfold cross-validation showed 100% accuracy. In the second step, it automatically identified the corresponding disk in each axial lumbar disk MR image, with 100% accuracy. In the third step, the accuracy for automatically locating the intervertebral disk region of interest in axial MR images was 100%. In the fourth step, the 3-class classification (normal disk, disk bulge, and disk herniation) accuracies for the L1-L2, L2-L3, L3-L4, L4-L5, and L5-S1 IVDs were 92.7%, 84.4%, 92.1%, 90.4%, and 84.2%, respectively.
Conclusions The automatic diagnosis system was successfully built, and it could classify images of normal disks, disk bulge, and disk herniation. This system provided a web-based test for interpreting lumbar disk MR images that could significantly improve diagnostic efficiency and standardized diagnosis reports. This system can also be used to detect other lumbar abnormalities and cervical spondylosis.
Affiliation(s)
- Qiong Pan: School of Telecommunications Engineering, Xidian University, Xi'an, China; College of Science, Northwest A&F University, Yangling, China
- Kai Zhang: School of Computer Science and Technology, Xidian University, Xi'an, China; SenseTime Group Limited, Shanghai, China
- Lin He: School of Computer Science and Technology, Xidian University, Xi'an, China
- Zhou Dong: School of Computer Science, Northwestern Polytechnical University, Xi'an, China
- Lei Zhang: School of Computer Science and Technology, Xidian University, Xi'an, China
- Xiaohang Wu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yi Wu: Medical Imaging Department, The Affiliated Hospital of Northwest University Xi'an Number 3 Hospital, Xi'an, China
- Yanjun Gao: Xi'an Key Laboratory of Cardiovascular and Cerebrovascular Diseases, The Affiliated Hospital of Northwest University Xi'an Number 3 Hospital, Xi'an, China
16

Li Z, Jiang J, Chen K, Zheng Q, Liu X, Weng H, Wu S, Chen W. Development of a deep learning-based image quality control system to detect and filter out ineligible slit-lamp images: A multicenter study. Comput Methods Programs Biomed 2021; 203:106048. PMID: 33765481; DOI: 10.1016/j.cmpb.2021.106048. Received 12/15/2020; accepted 03/08/2021.
Abstract
BACKGROUND AND OBJECTIVE Previous studies developed artificial intelligence (AI) diagnostic systems using only eligible slit-lamp images for detecting corneal diseases. However, images of ineligible quality (poor-field, defocused, and poor-location images), which are inevitable in the real world, can cause diagnostic information loss and thus affect downstream AI-based image analysis. Manual evaluation of slit-lamp image eligibility often requires an ophthalmologist and is time-consuming and labor-intensive at scale. Here, we aimed to develop a deep learning-based image quality control system (DLIQCS) to automatically detect and filter out ineligible slit-lamp images (poor-field, defocused, and poor-location images). METHODS We developed and externally evaluated the DLIQCS based on 48,530 slit-lamp images (19,890 individuals) derived from 4 independent institutions using different types of digital slit-lamp cameras. To find the best deep learning model for the DLIQCS, we trained models with 3 algorithms (AlexNet, DenseNet121, and InceptionV3). The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were used to assess each algorithm's performance in classifying poor-field, defocused, poor-location, and eligible images. RESULTS In an internal test dataset, the best algorithm, DenseNet121, had AUCs of 0.999, 1.000, 1.000, and 1.000 in the detection of poor-field, defocused, poor-location, and eligible images, respectively. In external test datasets, the AUCs of DenseNet121 for identifying poor-field, defocused, poor-location, and eligible images ranged from 0.997 to 0.997, 0.983 to 0.995, 0.995 to 0.998, and 0.999 to 0.999, respectively. CONCLUSIONS Our DLIQCS can accurately detect poor-field, defocused, poor-location, and eligible slit-lamp images in an automated fashion. This system may serve as a prescreening tool to filter out ineligible images and ensure that only eligible images are transferred to subsequent AI diagnostic systems.
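Downstream, a quality-control model like this acts as a gate in front of a diagnostic model: only images predicted "eligible" with sufficient confidence are forwarded. The routing sketch below operates on precomputed class probabilities; the 4-class ordering and the 0.5 confidence threshold are assumptions for illustration, not taken from the paper.

```python
import numpy as np

CLASSES = ("poor_field", "defocused", "poor_location", "eligible")

def route_images(probs: np.ndarray, min_conf: float = 0.5):
    """Split a batch by predicted quality class.

    probs: (N, 4) per-image class probabilities from the QC model.
    Returns indices of images forwarded to diagnosis and indices filtered out.
    An image is forwarded only if 'eligible' is both the argmax class and
    above the confidence threshold.
    """
    pred = probs.argmax(axis=1)
    eligible_idx = CLASSES.index("eligible")
    forward = np.where((pred == eligible_idx) & (probs[:, eligible_idx] >= min_conf))[0]
    filtered = np.setdiff1d(np.arange(len(probs)), forward)
    return forward, filtered

probs = np.array([
    [0.05, 0.05, 0.05, 0.85],  # clearly eligible   -> forwarded
    [0.70, 0.10, 0.10, 0.10],  # poor field         -> filtered
    [0.20, 0.20, 0.20, 0.40],  # low confidence     -> filtered
])
fwd, filt = route_images(probs)
print(fwd.tolist(), filt.tolist())  # [0] [1, 2]
```

Raising `min_conf` trades throughput for purity of the forwarded set, mirroring the sensitivity/specificity trade-off reported for the QC classifier.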
Affiliation(s)
- Zhongwen Li: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jiewei Jiang: School of Electronics Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
- Kuan Chen: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Qinxiang Zheng: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Xiaotian Liu: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Hongfei Weng: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Shanjun Wu: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Wei Chen: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
17
Li Y, Zhou D, Liu TT, Shen XZ. Application of deep learning in image recognition and diagnosis of gastric cancer. Artif Intell Gastrointest Endosc 2021; 2:12-24. [DOI: 10.37126/aige.v2.i2.12] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 03/30/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
In recent years, artificial intelligence has been extensively applied to the diagnosis of gastric cancer based on medical imaging, and deep learning in particular, as one of the mainstream approaches in image processing, has made remarkable progress. In this paper, we provide a comprehensive literature survey using four electronic databases: PubMed, EMBASE, Web of Science, and Cochrane. The literature search was performed through November 2020. This article summarizes the existing image recognition algorithms, reviews the available datasets used in gastric cancer diagnosis and the current trends in applying deep learning to image recognition of gastric cancer, and covers the theory of deep learning for endoscopic image recognition. We further evaluate the advantages and disadvantages of the current algorithms, summarize the characteristics of the existing image datasets, and, drawing on the latest progress in deep learning theory, propose suggestions for applying optimization algorithms. Based on existing research and applications, the labels, quantity, size, resolution, and other aspects of the image datasets are also discussed. The future development of this field is analyzed from two perspectives, algorithm optimization and data support, with the aim of improving diagnostic accuracy and reducing the risk of misdiagnosis.
Affiliation(s)
- Yu Li
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
| | - Da Zhou
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
| | - Tao-Tao Liu
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
| | - Xi-Zhong Shen
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
18
Owais M, Arsalan M, Mahmood T, Kang JK, Park KR. Automated Diagnosis of Various Gastrointestinal Lesions Using a Deep Learning-Based Classification and Retrieval Framework With a Large Endoscopic Database: Model Development and Validation. J Med Internet Res 2020; 22:e18563. [PMID: 33242010 PMCID: PMC7728528 DOI: 10.2196/18563] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 09/16/2020] [Accepted: 11/11/2020] [Indexed: 12/14/2022] Open
Abstract
Background The early diagnosis of various gastrointestinal diseases can lead to effective treatment and reduce the risk of many life-threatening conditions. Unfortunately, various small gastrointestinal lesions are undetectable during early-stage examination by medical experts. In previous studies, various deep learning–based computer-aided diagnosis tools have been used to make a significant contribution to the effective diagnosis and treatment of gastrointestinal diseases. However, most of these methods were designed to detect a limited number of gastrointestinal diseases, such as polyps, tumors, or cancers, in a specific part of the human gastrointestinal tract. Objective This study aimed to develop a comprehensive computer-aided diagnosis tool to assist medical experts in diagnosing various types of gastrointestinal diseases. Methods Our proposed framework comprises a deep learning–based classification network followed by a retrieval method. In the first step, the classification network predicts the disease type for the current medical condition. Then, the retrieval part of the framework shows the relevant cases (endoscopic images) from the previous database. These past cases help the medical expert validate the current computer prediction subjectively, which ultimately results in better diagnosis and treatment. Results All the experiments were performed using 2 endoscopic data sets with a total of 52,471 frames and 37 different classes. The optimal performances obtained by our proposed method in accuracy, F1 score, mean average precision, and mean average recall were 96.19%, 96.99%, 98.18%, and 95.86%, respectively. The overall performance of our proposed diagnostic framework substantially outperformed state-of-the-art methods. Conclusions This study provides a comprehensive computer-aided diagnosis framework for identifying various types of gastrointestinal diseases. 
The results show the superiority of our proposed method over various other recent methods and illustrate its potential for clinical diagnosis and treatment. Our proposed network can also be applied to other classification domains in medical imaging, such as computed tomography scans, magnetic resonance imaging, and ultrasound sequences.
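The retrieval step described above, showing relevant past endoscopic cases alongside a new prediction, is typically a nearest-neighbor search over learned feature embeddings. A minimal sketch assuming cosine similarity over CNN feature vectors (the toy 3-D embeddings and case IDs below are invented for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_emb, database, k=2):
    """Return the k database cases most similar to the query embedding."""
    ranked = sorted(database.items(),
                    key=lambda kv: cosine(query_emb, kv[1]),
                    reverse=True)
    return [case_id for case_id, _ in ranked[:k]]

# toy 3-D embeddings standing in for high-dimensional CNN features
db = {"case_A": [1.0, 0.0, 0.0],
      "case_B": [0.9, 0.1, 0.0],
      "case_C": [0.0, 1.0, 0.0]}
print(retrieve([1.0, 0.05, 0.0], db, k=2))  # ['case_A', 'case_B']
```

Real systems use embeddings hundreds of dimensions wide and an approximate-nearest-neighbor index for speed, but the ranking logic is the same.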
Affiliation(s)
- Muhammad Owais
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
| | - Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
| | - Tahir Mahmood
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
| | - Jin Kyu Kang
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
| | - Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, Seoul, Republic of Korea
19
What does it mean to provide decision support to a responsible and competent expert? EURO JOURNAL ON DECISION PROCESSES 2020. [DOI: 10.1007/s40070-020-00116-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
20
Bi S, Chen R, Zhang K, Xiang Y, Wang R, Lin H, Yang H. Differentiate cavernous hemangioma from schwannoma with artificial intelligence (AI). ANNALS OF TRANSLATIONAL MEDICINE 2020; 8:710. [PMID: 32617330 PMCID: PMC7327353 DOI: 10.21037/atm.2020.03.150] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
Background Cavernous hemangioma and schwannoma are tumors that both occur in the orbit. Because the treatment strategies for these two tumors differ, it is necessary to distinguish them at treatment initiation. Magnetic resonance imaging (MRI) is typically used to differentiate these two tumor types; however, they present similar features in MRI images, which increases the difficulty of differential diagnosis. This study aims to devise and develop an artificial intelligence framework that improves the accuracy of clinicians' diagnoses and enables more effective treatment decisions by automatically distinguishing cavernous hemangioma from schwannoma. Methods Material: As the study materials, we chose MRI images representing patients from diverse areas in China who had been referred to our center from more than 45 different hospitals. All images were initially acquired on film, which we scanned into digital versions and recut. Finally, 11,489 images of cavernous hemangioma (from 33 different hospitals) and 3,478 images of schwannoma (from 16 different hospitals) were collected. Labeling: All images were labeled using standard anatomical knowledge and pathological diagnosis. Training: Three types of models were trained in sequence (96 models in total), with each model including a specific improvement. The first two model groups were eye- and tumor-positioning models designed to reduce the identification scope, while the third model group consisted of classification models trained to make the final diagnosis. Results First, internal four-fold cross-validation was conducted for all the models. During validation of the first group, the 32 eye-positioning models were able to localize the position of the eyes with an average precision of 100%. In the second group, the 28 tumor-positioning models reached an average precision above 90%. 
Subsequently, in the third group, the accuracy of all 32 tumor classification models reached nearly 90%. Next, external validation of the 32 tumor classification models was conducted. The results showed that the model for the transverse T1-weighted contrast-enhanced sequence reached an accuracy of 91.13%; the accuracy of the remaining models was significantly lower. Conclusions The findings of this retrospective study show that an artificial intelligence framework can achieve high accuracy, sensitivity, and specificity in the automated differential diagnosis of cavernous hemangioma and schwannoma in a real-world setting, which can help doctors determine appropriate treatments.
Affiliation(s)
- Shaowei Bi
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Rongxin Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Kai Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.,School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Yifan Xiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Ruixin Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.,Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
| | - Huasheng Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
21
Lv J, Zhang K, Chen Q, Chen Q, Huang W, Cui L, Li M, Li J, Chen L, Shen C, Yang Z, Bei Y, Li L, Wu X, Zeng S, Xu F, Lin H. Deep learning-based automated diagnosis of fungal keratitis with in vivo confocal microscopy images. ANNALS OF TRANSLATIONAL MEDICINE 2020; 8:706. [PMID: 32617326 PMCID: PMC7327373 DOI: 10.21037/atm.2020.03.134] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
Background The aim of this study was to develop an intelligent system based on a deep learning algorithm for automatically diagnosing fungal keratitis (FK) in in vivo confocal microscopy (IVCM) images. Methods A total of 2,088 IVCM images were included in the training dataset. The positive group consisted of 688 images with fungal hyphae, and the negative group included 1,400 images without fungal hyphae. The testing dataset comprised 535 images, none of which were included in the training dataset. Deep Residual Learning for Image Recognition (ResNet) was used to build the intelligent system for diagnosing FK automatically. The system was verified by external validation on the testing dataset using the area under the receiver operating characteristic curve (AUC), accuracy, specificity, and sensitivity. Results In the testing dataset, 515 images were diagnosed correctly and 20 were misdiagnosed (including 6 with fungal hyphae and 14 without). The system achieved an AUC of 0.9875 with an accuracy of 0.9626 in detecting fungal hyphae. The sensitivity of the system was 0.9186, with a specificity of 0.9834. When 349 diabetic patients were included in the training dataset, 501 images were diagnosed correctly and 34 were misdiagnosed (including 4 with fungal hyphae and 30 without). The AUC of the system was 0.9769. The accuracy, specificity, and sensitivity were 0.9364, 0.9889, and 0.8256, respectively. Conclusions The intelligent system based on a deep learning algorithm exhibited satisfactory diagnostic performance and effectively classified FK in various IVCM images. The approach of this deep learning-based automated diagnostic system can be extended to other types of keratitis.
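The accuracy, sensitivity, and specificity figures reported in entries like this one all derive from a binary confusion matrix. A small sketch of the arithmetic (the counts below are illustrative, not the study's actual test-set counts):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on positives), and specificity
    from the four cells of a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# illustrative counts: 100 hyphae-positive and 100 negative images
acc, sens, spec = confusion_metrics(tp=90, fp=5, tn=95, fn=10)
print(acc, sens, spec)  # 0.925 0.9 0.95
```

Sensitivity and specificity depend only on their own class, which is why a system can score high accuracy while still missing a clinically important fraction of positives.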
Affiliation(s)
- Jian Lv
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Kai Zhang
- Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China.,School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Qing Chen
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Qi Chen
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Wei Huang
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Ling Cui
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Min Li
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Jianyin Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Lifei Chen
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Chaolan Shen
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Zhao Yang
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Yixuan Bei
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Lanjian Li
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Siming Zeng
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Fan Xu
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, China
| | - Haotian Lin
- Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China.,State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
22
Wu X, Liu L, Zhao L, Guo C, Li R, Wang T, Yang X, Xie P, Liu Y, Lin H. Application of artificial intelligence in anterior segment ophthalmic diseases: diversity and standardization. ANNALS OF TRANSLATIONAL MEDICINE 2020; 8:714. [PMID: 32617334 PMCID: PMC7327317 DOI: 10.21037/atm-20-976] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
Artificial intelligence (AI) based on machine learning (ML) and deep learning (DL) techniques has gained tremendous global interest in recent years. Recent studies have demonstrated the potential of AI systems to provide improved capability in various tasks, especially in the field of image recognition. As an image-centric subspecialty, ophthalmology has become one of the frontiers of AI research. Trained on optical coherence tomography, slit-lamp images, and even ordinary eye images, AI can achieve robust performance in the detection of glaucoma, corneal arcus, and cataracts. Moreover, AI models based on other forms of data have also performed satisfactorily. Nevertheless, several challenges to AI application in ophthalmology have arisen, including the standardization of datasets, the validation and applicability of AI models, and ethical issues. In this review, we provide a summary of state-of-the-art AI applications in anterior segment ophthalmic diseases, potential challenges to clinical implementation, and our prospects.
Affiliation(s)
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Ting Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiaonan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Peichen Xie
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
| | - Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.,Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
23
Zhang Y, Li F, Yuan F, Zhang K, Huo L, Dong Z, Lang Y, Zhang Y, Wang M, Gao Z, Qin Z, Shen L. Diagnosing chronic atrophic gastritis by gastroscopy using artificial intelligence. Dig Liver Dis 2020; 52:566-572. [PMID: 32061504 DOI: 10.1016/j.dld.2019.12.146] [Citation(s) in RCA: 66] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/21/2019] [Revised: 12/28/2019] [Accepted: 12/31/2019] [Indexed: 12/11/2022]
Abstract
BACKGROUND The sensitivity of endoscopy in diagnosing chronic atrophic gastritis is only 42%, and multipoint biopsy, despite being more accurate, is not always available. AIMS This study aimed to construct a convolutional neural network to improve the diagnostic rate of chronic atrophic gastritis. METHODS We collected 5470 images of the gastric antrums of 1699 patients and labeled them with their pathological findings. Of these, 3042 images depicted atrophic gastritis and 2428 did not. We designed and trained a convolutional neural network-chronic atrophic gastritis (CNN-CAG) model to diagnose atrophic gastritis accurately, verified by five-fold cross-validation. Moreover, the diagnoses of the deep learning model were compared with those of three experts. RESULTS The diagnostic accuracy, sensitivity, and specificity of the CNN-CAG model in diagnosing atrophic gastritis were 0.942, 0.945, and 0.940, respectively, which were higher than those of the experts. The detection rates of mild, moderate, and severe atrophic gastritis were 93%, 95%, and 99%, respectively. CONCLUSION Chronic atrophic gastritis can be diagnosed from gastroscopic images using the CNN-CAG model. This may greatly reduce the burden on endoscopy physicians, simplify diagnostic routines, and reduce costs for doctors and patients.
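The five-fold cross-validation used for verification partitions the images into five disjoint folds, each serving once as the held-out test set while the other four are used for training. A minimal index-level sketch (the helper names are illustrative; no actual model training is shown):

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Shuffle n sample indices and split them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(n, k=5):
    """Yield (train_indices, test_indices) pairs, one per fold."""
    folds = k_fold_indices(n, k)
    for test_fold in folds:
        train = [j for f in folds if f is not test_fold for j in f]
        # a model would be trained on `train` and scored on `test_fold` here
        yield train, test_fold

# the 5470 antrum images in the study split into 5 folds of 1094 each
folds = k_fold_indices(5470, 5)
print([len(f) for f in folds])  # [1094, 1094, 1094, 1094, 1094]
```

Averaging the per-fold metrics gives a less optimistic estimate of generalization than a single train/test split, since every image is tested exactly once.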
Affiliation(s)
- Yaqiong Zhang
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Fengxia Li
- Department of Gastroenterology, Shanxi Provincial People's Hospital, Taiyuan, China.
| | - Fuqiang Yuan
- Baidu Online Network Technology (Beijing) Corporation, Beijing, China
| | - Kai Zhang
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Lijuan Huo
- Department of Gastroenterology, The First Hospital of Shanxi Medical University, Taiyuan, China
| | - Zichen Dong
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Yiming Lang
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Yapeng Zhang
- Fenyang College of Shanxi Medical University, Fenyang, China
| | - Meihong Wang
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Zenghui Gao
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Zhenzhen Qin
- Department of Gastroenterology, Shanxi Provincial People's Hospital of Shanxi Medical University, Taiyuan, China
| | - Leixue Shen
- School of Computer Science and Technology, Xidian University, Xi'an, China
24
A human-in-the-loop deep learning paradigm for synergic visual evaluation in children. Neural Netw 2020; 122:163-173. [DOI: 10.1016/j.neunet.2019.10.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2019] [Revised: 08/11/2019] [Accepted: 10/01/2019] [Indexed: 11/20/2022]
25
Yang J, Zhang K, Fan H, Huang Z, Xiang Y, Yang J, He L, Zhang L, Yang Y, Li R, Zhu Y, Chen C, Liu F, Yang H, Deng Y, Tan W, Deng N, Yu X, Xuan X, Xie X, Liu X, Lin H. Development and validation of deep learning algorithms for scoliosis screening using back images. Commun Biol 2019; 2:390. [PMID: 31667364 PMCID: PMC6814825 DOI: 10.1038/s42003-019-0635-8] [Citation(s) in RCA: 53] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2019] [Accepted: 09/24/2019] [Indexed: 02/08/2023] Open
Abstract
Adolescent idiopathic scoliosis is the most common spinal disorder in adolescents, with a prevalence of 0.5-5.2% worldwide. The traditional methods for scoliosis screening are easily accessible but lead to unnecessary referrals and radiography exposure due to their low positive predictive values. The application of deep learning algorithms has the potential to reduce unnecessary referrals and costs in scoliosis screening. Here, we developed and validated deep learning algorithms for automated scoliosis screening using unclothed back images. The accuracies of the algorithms were superior to those of human specialists in detecting scoliosis, detecting cases with a curve ≥20°, and grading severity, for both the binary classifications and the four-class classification. Our approach can potentially be applied in routine scoliosis screening and periodic follow-ups of pretreatment cases without radiation exposure.
Affiliation(s)
- Junlin Yang
- Spine Center, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Kai Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
- School of Computer Science and Technology, Xidian University, Xi’an, Shanxi China
| | - Hengwei Fan
- Spine Center, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Zifang Huang
- Department of Spine Surgery, the 1st Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong China
| | - Yifan Xiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
| | - Jingfan Yang
- Spine Center, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Lin He
- School of Computer Science and Technology, Xidian University, Xi’an, Shanxi China
| | - Lei Zhang
- School of Computer Science and Technology, Xidian University, Xi’an, Shanxi China
| | - Yahan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
| | - Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
| | - Yi Zhu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL USA
| | - Chuan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL USA
| | - Fan Liu
- School of Computer Science and Technology, Xidian University, Xi’an, Shanxi China
| | - Haoqing Yang
- School of Computer Science and Technology, Xidian University, Xi’an, Shanxi China
| | - Yaolong Deng
- Spine Center, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Weiqing Tan
- Health Promotion Centre for Primary and Secondary Schools of Guangzhou Municipality, Guangzhou, Guangdong China
| | - Nali Deng
- Health Promotion Centre for Primary and Secondary Schools of Guangzhou Municipality, Guangzhou, Guangdong China
| | - Xuexiang Yu
- Department of Sports and Arts, Guangzhou Sport University, Guangzhou, Guangdong China
| | - Xiaoling Xuan
- Xinmiao Scoliosis Prevention of Guangdong Province, Guangzhou, Guangdong China
| | - Xiaofeng Xie
- Xinmiao Scoliosis Prevention of Guangdong Province, Guangzhou, Guangdong China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, Xi’an, Shanxi China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong China
- Center for Precision Medicine, Sun Yat-sen University, Guangzhou, Guangdong China
26
Tobore I, Li J, Yuhang L, Al-Handarish Y, Kandwal A, Nie Z, Wang L. Deep Learning Intervention for Health Care Challenges: Some Biomedical Domain Considerations. JMIR Mhealth Uhealth 2019; 7:e11966. [PMID: 31376272 PMCID: PMC6696854 DOI: 10.2196/11966] [Citation(s) in RCA: 58] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Revised: 04/14/2019] [Accepted: 06/12/2019] [Indexed: 01/10/2023] Open
Abstract
The use of deep learning (DL) for the analysis and diagnosis of biomedical and health care problems has received unprecedented attention in the last decade. The technique has recorded a number of achievements in unearthing meaningful features and accomplishing tasks that were hitherto difficult to solve by other methods and human experts. Currently, biological and medical devices, treatments, and applications are capable of generating large volumes of data in the form of images, sounds, text, graphs, and signals, creating the concept of big data. DL is a developing trend in the wake of big data for data representation and analysis. DL is a type of machine learning algorithm in which deeper (or more) hidden layers of similar function are cascaded into the network, giving it the capability to extract meaning from medical big data. Personalized health care delivery, a current driver of transformation, will be made possible with the use of mobile health (mHealth), and DL can provide the analysis for the deluge of data generated by mHealth apps. This paper reviews the fundamentals of DL methods and presents a general view of the trends in DL by surveying publications from PubMed and the Institute of Electrical and Electronics Engineers database that implement different variants of DL. We highlight the implementation of DL in health care, which we categorize into biological systems, electronic health records, medical images, and physiological signals. In addition, we discuss some inherent challenges of DL affecting the biomedical and health domains, as well as prospective research directions that focus on improving health management by promoting the application of physiological signals and modern internet technology.
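The "hidden layers cascaded into the network" can be made concrete with a toy forward pass: each fully connected layer applies a linear map and a nonlinearity, and its output feeds the next layer. A dependency-free sketch with fixed toy weights (illustrative only, not a trainable model):

```python
def relu(x):
    """Elementwise rectified linear unit."""
    return [max(0.0, v) for v in x]

def dense(x, W, b):
    """One fully connected layer: y = Wx + b."""
    return [sum(w * v for w, v in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def forward(x, layers):
    """Cascade the layers: each hidden layer's activation feeds the
    next layer, which is the 'deep' in deep learning."""
    for W, b in layers[:-1]:
        x = relu(dense(x, W, b))
    W, b = layers[-1]
    return dense(x, W, b)  # linear output layer

# a tiny 2-2-1 network with hand-picked weights
layers = [([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),
          ([[1.0, 1.0]], [0.5])]
print(forward([2.0, 1.0], layers))  # [3.0]
```

Training (backpropagation through these same layers) and the convolutional variants used on medical images add machinery, but the cascaded-layer structure is exactly this.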
Affiliation(s)
- Igbe Tobore
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China.,Graduate University, Chinese Academy of Sciences, Beijing, China
| | - Jingzhen Li
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Liu Yuhang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yousef Al-Handarish
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Abhishek Kandwal
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Zedong Nie
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Lei Wang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
27
Zhang K, Liu X, Jiang J, Li W, Wang S, Liu L, Zhou X, Wang L. Prediction of postoperative complications of pediatric cataract patients using data mining. J Transl Med 2019; 17:2. [PMID: 30602368 PMCID: PMC6317183 DOI: 10.1186/s12967-018-1758-2] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Accepted: 12/21/2018] [Indexed: 12/31/2022] Open
Abstract
BACKGROUND The common treatment for pediatric cataracts is to replace the cloudy lens with an artificial one. However, patients may suffer complications (severe lens proliferation into the visual axis and abnormally high intraocular pressure; SLPVA and AHIP) within 1 year after surgery, and the factors causing these complications are unknown. METHODS The Apriori algorithm is employed to find association rules related to the complications. We use random forest (RF) and naïve Bayes (NB) classifiers to predict the complications, with datasets preprocessed by the synthetic minority oversampling technique (SMOTE). Genetic feature selection is used to find the features truly related to the complications. RESULTS Average classification accuracies in the three binary classification problems exceed 75%. Second, the relationship between classification performance and the number of trees in the random forest is studied. The results show that all attributes except gender and age at surgery (AS) are related to the complications; all attributes except secondary IOL placement, operation mode, AS, and area of cataracts are related to SLPVA; and all attributes except gender, operation mode, and laterality are related to AHIP. Next, the association rules related to the complications are mined. An additional 50 records were then used to test the performance of RF and NB, both of which achieved accuracies of over 65% on the three classification problems. Finally, we developed a web server to assist doctors. CONCLUSIONS The postoperative complications of pediatric cataract patients can be predicted, the factors related to the complications can be identified, and the mined association rules can serve as a reference for doctors.
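The Apriori algorithm named in the Methods mines association rules from frequent itemsets: a set of attribute values is only examined at size k+1 if all of its size-k subsets were already frequent. A minimal stdlib-only frequent-itemset sketch (the transaction labels are invented for illustration, and the full subset-pruning step is omitted for brevity):

```python
def apriori(transactions, min_support):
    """Level-wise frequent-itemset mining: a (k+1)-itemset can only be
    frequent if its k-subsets are frequent (the Apriori property).
    Returns a dict mapping each frequent itemset to its count."""
    items = sorted({i for t in transactions for i in t})
    freq = {}
    current = [frozenset([i]) for i in items]
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: n for c, n in counts.items()
                 if n / len(transactions) >= min_support}
        freq.update(level)
        # join step: candidate (k+1)-itemsets from unions of frequent k-itemsets
        current = sorted({a | b for a in level for b in level
                          if len(a | b) == len(a) + 1}, key=sorted)
    return freq

# toy transactions: each patient record as a set of attribute labels
tx = [frozenset(t) for t in
      [{"AS<2y", "SLPVA"}, {"AS<2y", "SLPVA"}, {"AS<2y"}, {"AHIP"}]]
result = apriori(tx, min_support=0.5)
print(sorted(tuple(sorted(k)) for k in result))
# [('AS<2y',), ('AS<2y', 'SLPVA'), ('SLPVA',)]
```

Association rules such as "AS<2y → SLPVA" are then scored by confidence, the support of the combined itemset divided by the support of the antecedent.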
Affiliation(s)
- Kai Zhang
- School of Computer Science and Technology, Xidian University, No.2 South Taibai Rd, Xi'an, 710071, China.,State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, No.2 South Taibai Rd, Xi'an, 710071, China. .,Institute of Software Engineering, Xidian University, Xi'an, 710071, China. .,School of Software, Xidian University, Xi'an, 710071, China.
| | - Jiewei Jiang
- School of Computer Science and Technology, Xidian University, No.2 South Taibai Rd, Xi'an, 710071, China.,State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
| | - Wangting Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
| | - Shuai Wang
- School of Software, Xidian University, Xi'an, 710071, China
| | - Lin Liu
- School of Computer Science and Technology, Xidian University, No.2 South Taibai Rd, Xi'an, 710071, China
| | - Xiaojing Zhou
- School of Computer Science, Northwestern Polytechnical University, Xi'an, 710072, China
| | - Liming Wang
- School of Computer Science and Technology, Xidian University, No.2 South Taibai Rd, Xi'an, 710071, China.,Institute of Software Engineering, Xidian University, Xi'an, 710071, China.,School of Software, Xidian University, Xi'an, 710071, China