1
Zhu S, Liu X, Lu Y, Zheng B, Wu M, Yao X, Yang W, Gong Y. Application and visualization study of an intelligence-assisted classification model for common eye diseases using B-mode ultrasound images. Front Neurosci 2024; 18:1339075. [PMID: 38808029] [PMCID: PMC11130417] [DOI: 10.3389/fnins.2024.1339075]
Abstract
Aim Conventional approaches to diagnosing common eye diseases using B-mode ultrasonography are labor-intensive and time-consuming, requiring expert intervention for accuracy. This study addresses these challenges by proposing an intelligence-assisted five-class classification model for diagnosing common eye diseases using B-mode ultrasound images. Methods This research utilized 2064 B-mode ultrasound images of the eye to train a novel model integrating artificial intelligence technology. Results The ConvNeXt-L model achieved outstanding performance, with an accuracy of 84.3% and a Kappa value of 80.3%. Across the five classes (no obvious abnormality, vitreous opacity, posterior vitreous detachment, retinal detachment, and choroidal detachment), the model demonstrated sensitivity values of 93.2%, 67.6%, 86.1%, 89.4%, and 81.4%, respectively, and specificity values ranging from 94.6% to 98.1%. F1 scores ranged from 71% to 92%, and AUC values ranged from 89.7% to 97.8%. Conclusion Among the models compared, ConvNeXt-L exhibited superior performance. It effectively categorizes and visualizes pathological changes, providing essential assistive information for ophthalmologists and enhancing diagnostic accuracy and efficiency.
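The per-class sensitivity and specificity values reported above follow from a one-vs-rest reading of the multiclass confusion matrix. A minimal sketch of that computation (the 3x3 matrix below is an illustrative toy, not data from the study):

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class sensitivity and specificity from a KxK confusion matrix.
    Rows are true classes, columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)                  # correctly predicted cases per class
    fn = cm.sum(axis=1) - tp          # missed cases of each class
    fp = cm.sum(axis=0) - tp          # other classes predicted as this one
    tn = total - tp - fn - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Toy 3-class confusion matrix, not the study's data
cm = [[50, 3, 2],
      [4, 40, 6],
      [1, 5, 44]]
sens, spec = per_class_metrics(cm)
```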
Affiliation(s)
- Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China
- Xiangjun Liu
- School of Information Engineering, Huzhou University, Huzhou, China
- Ying Lu
- School of Information Engineering, Huzhou University, Huzhou, China
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, China
- Maonian Wu
- School of Information Engineering, Huzhou University, Huzhou, China
- Xue Yao
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Yan Gong
- Department of Ophthalmology, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
2
Zhu S, Zhan H, Yan Z, Wu M, Zheng B, Xu S, Jiang Q, Yang W. Prediction of spherical equivalent refraction and axial length in children based on machine learning. Indian J Ophthalmol 2023; 71:2115-2131. [PMID: 37203092] [PMCID: PMC10391375] [DOI: 10.4103/ijo.ijo_2989_22]
Abstract
Purpose The proportion of patients with high myopia has shown a continuously growing trend in recent years, with onset shifting toward younger age groups. This study aimed to predict changes in spherical equivalent refraction (SER) and axial length (AL) in children using machine learning methods. Methods This was a retrospective study. Data on 179 sets of childhood myopia examinations, including AL and SER from grades 1 to 6, were collected from the study's cooperating ophthalmology hospital. Six machine learning models were used to predict AL and SER from these data, and six evaluation indicators were used to assess the prediction results. Results For predicting SER in grades 6, 5, 4, 3, and 2, the best results were obtained with the multilayer perceptron (MLP), MLP, orthogonal matching pursuit (OMP), OMP, and OMP algorithms, respectively; the R² values of the five models were 0.8997, 0.7839, 0.7177, 0.5118, and 0.1758. For predicting AL in grades 6, 5, 4, 3, and 2, the best results were obtained with the Extra Tree (ET), MLP, kernel ridge (KR), KR, and MLP algorithms, respectively; the R² values of the five models were 0.7546, 0.5456, 0.8755, 0.9072, and 0.8534. Conclusion In predicting SER, the OMP model performed better than the other models in most experiments; in predicting AL, the KR and MLP models did.
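The R² values quoted for each grade are the ordinary coefficient of determination, which can be computed without any ML library. A minimal sketch (the axial-length values below are hypothetical, not the study's measurements):

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    1.0 is a perfect fit; 0.0 is no better than predicting the mean."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical axial lengths (mm) for illustration only
observed_al = [23.0, 24.0, 24.5, 25.0]
predicted_al = [23.1, 23.9, 24.4, 25.2]
print(r2_score(observed_al, predicted_al))
```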
Affiliation(s)
- Shaojun Zhu
- School of Information Engineering, Huzhou University; Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Haodong Zhan
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhipeng Yan
- Eye Hospital, Nanjing Medical University, Nanjing, China
- Maonian Wu
- School of Information Engineering, Huzhou University; Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Bo Zheng
- School of Information Engineering, Huzhou University; Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Shanshan Xu
- Eye Hospital, Nanjing Medical University, Nanjing, China
- Qin Jiang
- Eye Hospital, Nanjing Medical University, Nanjing, China
- Weihua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
3
Wan C, Fang J, Hua X, Chen L, Zhang S, Yang W. Automated detection of myopic maculopathy using five-category models based on vision outlooker for visual recognition. Front Comput Neurosci 2023; 17:1169464. [PMID: 37152298] [PMCID: PMC10157024] [DOI: 10.3389/fncom.2023.1169464]
Abstract
Purpose To propose a five-category model for the automatic detection of myopic macular lesions, helping grassroots medical institutions conduct preliminary screening from a limited number of color fundus images. Methods First, 1,750 fundus images, comprising images without myopic maculopathy and four categories of pathological myopic maculopathy, were collected, graded, and labeled. Subsequently, three five-category models for detecting myopic maculopathy, based on Vision Outlooker for Visual Recognition (VOLO), EfficientNetV2, and ResNet50, were trained with data-augmented images, and the diagnostic results of the trained models were compared and analyzed. The main evaluation metrics were sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), area under the curve (AUC), kappa, accuracy, and the receiver operating characteristic (ROC) curve. Results The diagnostic accuracy of the VOLO-D2 model was 96.60%, with a kappa value of 95.60%. All indicators for the diagnosis of fundus images without myopic maculopathy were 100%. The sensitivity, NPV, specificity, and PPV for the diagnosis of leopard (tessellated) fundus were 96.43%, 98.33%, 100%, and 100%, respectively. The sensitivity, specificity, PPV, and NPV were 96.88%, 98.59%, 93.94%, and 99.29% for diffuse chorioretinal atrophy; 92.31%, 99.26%, 97.30%, and 97.81% for patchy chorioretinal atrophy; and 100%, 98.10%, 84.21%, and 100% for macular atrophy, respectively. Conclusion The VOLO-D2 model accurately identified fundus images without myopic maculopathy and four categories of pathological myopia-related macular lesions with high sensitivity and specificity. It can be used to screen for pathological myopic macular lesions and can help ophthalmologists and primary medical institutions complete the initial screening and diagnosis of patients.
Affiliation(s)
- Cheng Wan
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jiyi Fang
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Xiao Hua
- Nanjing Star-mile Technology Co., Ltd., Nanjing, China
- Lu Chen
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Shenzhen Eye Institute, Shenzhen, China
- Shaochong Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Shenzhen Eye Institute, Shenzhen, China
- Weihua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Shenzhen Eye Institute, Shenzhen, China
- Correspondence: Weihua Yang
4
Wu M, Lu Y, Hong X, Zhang J, Zheng B, Zhu S, Chen N, Zhu Z, Yang W. Classification of dry and wet macular degeneration based on the ConvNeXT model. Front Comput Neurosci 2022; 16:1079155. [PMID: 36568576] [PMCID: PMC9773079] [DOI: 10.3389/fncom.2022.1079155]
Abstract
Purpose To assess the value of an automated classification model for dry and wet macular degeneration based on the ConvNeXT model. Methods A total of 672 fundus images of normal fundus, dry macular degeneration, and wet macular degeneration were collected from the Affiliated Eye Hospital of Nanjing Medical University, and the set of dry macular degeneration images was expanded. The ConvNeXT three-category model was trained on the original and expanded datasets and compared with VGG16, ResNet18, ResNet50, EfficientNetB7, and RegNet three-category models. A total of 289 fundus images were used to test the models, and their classification results on the different datasets were compared. The main evaluation indicators were sensitivity, specificity, F1-score, area under the curve (AUC), accuracy, and kappa. Results On the 289 test images, the ConvNeXT model trained on the expanded dataset was the most effective, with a diagnostic accuracy of 96.89%, a kappa value of 94.99%, and high diagnostic consistency. The sensitivity, specificity, F1-score, and AUC values were 100.00%, 99.41%, 99.59%, and 99.80% for normal fundus images; 87.50%, 98.76%, 90.32%, and 97.10% for dry macular degeneration; and 97.52%, 97.02%, 96.72%, and 99.10% for wet macular degeneration, respectively. Conclusion The ConvNeXT-based three-category model automatically identified dry and wet macular degeneration, aiding rapid and accurate clinical diagnosis.
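The abstract states that the dry macular degeneration images were "expanded" before training but does not say how. A hedged sketch of one common, label-preserving way to expand an image set (the specific transforms here are assumptions, not the paper's method):

```python
import numpy as np

def expand_images(images):
    """Expand an image set 4x with label-preserving transforms:
    identity, horizontal flip, vertical flip, and 180-degree rotation.
    The transforms actually used in the study are not stated in the
    abstract; these are illustrative choices only."""
    expanded = []
    for img in images:
        expanded.extend([img, np.fliplr(img), np.flipud(img), np.rot90(img, 2)])
    return expanded

# A dummy 4x4 single-channel array stands in for a fundus photograph
dummy = np.arange(16).reshape(4, 4)
out = expand_images([dummy])
```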
Affiliation(s)
- Maonian Wu
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Ying Lu
- School of Information Engineering, Huzhou University, Huzhou, China
- Xiangqian Hong
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Jie Zhang
- Advanced Ophthalmology Laboratory, Brightview Medical Technologies (Nanjing) Co., Ltd., Nanjing, China
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Naimei Chen
- Department of Ophthalmology, Huaian Hospital of Huaian City, Huaian, China
- Zhentao Zhu
- Department of Ophthalmology, Huaian Hospital of Huaian City, Huaian, China
- Correspondence: Zhentao Zhu
- Weihua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Correspondence: Weihua Yang
5
Shi X, Wang L, Li Y, Wu J, Huang H. GCLDNet: Gastric cancer lesion detection network combining level feature aggregation and attention feature fusion. Front Oncol 2022; 12:901475. [PMID: 36106104] [PMCID: PMC9464831] [DOI: 10.3389/fonc.2022.901475]
Abstract
Background Analysis of histopathological slices of gastric cancer is the gold standard for diagnosing gastric cancer, but manual identification is time-consuming and relies heavily on the experience of pathologists. Artificial intelligence methods, particularly deep learning, can assist pathologists in finding cancerous tissue and enable automated detection. However, because gastric cancer lesions vary in shape and size and many interfering factors are present, gastric cancer histopathological images (GCHIs) are highly complex, making it difficult to locate the lesion region accurately. Traditional deep learning methods cannot effectively extract discriminative features because of their simple decoding, so they cannot detect lesions accurately, and little research has been dedicated to detecting gastric cancer lesions. Methods We propose a gastric cancer lesion detection network (GCLDNet). First, GCLDNet uses a level feature aggregation structure in the decoder, which effectively fuses deep and shallow features of GCHIs. Second, an attention feature fusion module is introduced to locate the lesion area accurately; it merges attention features of different scales and obtains rich discriminative information focused on the lesion. Finally, focal Tversky loss (FTL) is employed as the loss function to suppress false-negative predictions and mine difficult samples. Results On two GCHI datasets, SEED and BOT, the Dice similarity coefficients (DSCs) of GCLDNet are 0.8265 and 0.8991, accuracies (ACCs) are 0.8827 and 0.8949, Jaccard indices (JIs) are 0.7092 and 0.8182, and precisions (PREs) are 0.7820 and 0.8763, respectively. Conclusions The experimental results demonstrate the effectiveness of GCLDNet in detecting gastric cancer lesions. Compared with other state-of-the-art (SOTA) detection methods, GCLDNet achieves more satisfactory performance. This research can provide good auxiliary support for pathologists in clinical diagnosis.
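The focal Tversky loss (FTL) named in the Methods penalizes false negatives more than false positives when alpha > beta, and the focal exponent gamma emphasizes hard, high-loss examples. A minimal NumPy sketch for binary masks (the hyperparameter values are common defaults from the FTL literature, not necessarily those used for GCLDNet):

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for a binary mask.
    alpha weights false negatives, beta weights false positives;
    gamma < 1 amplifies the loss on hard (poorly segmented) samples."""
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.asarray(y_pred, dtype=float).ravel()
    tp = (y_true * y_pred).sum()
    fn = (y_true * (1 - y_pred)).sum()
    fp = ((1 - y_true) * y_pred).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma
```

With alpha = 0.7 > beta = 0.3, missing a lesion pixel (a false negative) costs more than flagging a background pixel (a false positive), which matches the stated goal of suppressing false-negative predictions.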
Affiliation(s)
- Xu Shi
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Long Wang
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Yu Li
- Department of Pathology, Chongqing University Cancer Hospital and Chongqing Cancer Institute and Chongqing Cancer Hospital, Chongqing, China
- Jian Wu
- Head and Neck Cancer Center, Chongqing University Cancer Hospital and Chongqing Cancer Institute and Chongqing Cancer Hospital, Chongqing, China
- Hong Huang
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Correspondence: Jian Wu; Hong Huang
6
Zheng B, Shen Y, Luo Y, Fang X, Zhu S, Zhang J, Wu M, Jin L, Yang W, Wang C. Automated measurement of the disc-fovea angle based on DeepLabv3. Front Neurol 2022; 13:949805. [PMID: 35968300] [PMCID: PMC9363794] [DOI: 10.3389/fneur.2022.949805]
Abstract
Purpose To assess the value of automatic disc-fovea angle (DFA) measurement using the DeepLabv3+ segmentation model. Methods A total of 682 normal fundus image datasets were collected from the Eye Hospital of Nanjing Medical University. The following parts of the images were labeled and subsequently reviewed by ophthalmologists: optic disc center, macular center, optic disc area, and virtual macular area. A total of 477 normal fundus images were used to train the DeepLabv3+, U-Net, and PSPNet models, which were used to obtain the optic disc area and virtual macular area. The coordinates of the optic disc center and macular center were then obtained using the minimum enclosing circle technique, and finally the DFA was calculated. Results In this study, 205 normal fundus images were used to test the models. The experimental results showed that the errors in automatic DFA measurement using the DeepLabv3+, U-Net, and PSPNet segmentation models were 0.76°, 1.4°, and 2.12°, respectively. The mean intersection over union (MIoU), mean pixel accuracy (MPA), average error in the optic disc center, and average error in the virtual macular center obtained using the DeepLabv3+ model were 94.77%, 97.32%, 10.94 pixels, and 13.44 pixels, respectively. Because automatic DFA measurement with DeepLabv3+ yielded smaller errors than the other segmentation models, the DeepLabv3+ model was chosen for the automatic measurement. Conclusions DeepLabv3+-based automatic segmentation can produce accurate and rapid DFA measurements.
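Once the optic disc center and macular center have been located, the DFA reduces to the inclination of the line joining the two points. A minimal sketch of that final step (the pixel coordinates are hypothetical, and the usual image convention with y increasing downward is assumed):

```python
import math

def disc_fovea_angle(disc_center, macula_center):
    """DFA in degrees: the angle between the horizontal and the line
    from the optic disc center to the macular (foveal) center."""
    dx = macula_center[0] - disc_center[0]
    dy = macula_center[1] - disc_center[1]
    # Absolute values give the unsigned inclination, regardless of
    # whether the image is of a left or right eye.
    return math.degrees(math.atan2(abs(dy), abs(dx)))

# Hypothetical pixel coordinates, not taken from the paper
print(disc_fovea_angle((820, 400), (500, 428)))
```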
Affiliation(s)
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Yifan Shen
- School of Information Engineering, Huzhou University, Huzhou, China
- Yuxin Luo
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Xinwen Fang
- School of Information Engineering, Huzhou University, Huzhou, China
- Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Jie Zhang
- Advanced Ophthalmology Laboratory (AOL), Robotrak Technologies, Nanjing, China
- Maonian Wu
- School of Information Engineering, Huzhou University, Huzhou, China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Ling Jin
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Chenghu Wang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
7
A dark and bright channel prior guided deep network for retinal image quality assessment. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.06.002]