1
Li L, Xiao K, Shang X, Hu W, Yusufu M, Chen R, Wang Y, Liu J, Lai T, Guo L, Zou J, van Wijngaarden P, Ge Z, He M, Zhu Z. Advances in artificial intelligence for meibomian gland evaluation: A comprehensive review. Surv Ophthalmol 2024; 69:945-956. PMID: 39025239. DOI: 10.1016/j.survophthal.2024.07.005. Received 03/14/2024; revised 07/14/2024; accepted 07/15/2024.
Abstract
Meibomian gland dysfunction (MGD) is increasingly recognized as a critical contributor to evaporative dry eye, significantly impacting visual quality. With a global prevalence estimated at 35.8%, it presents substantial challenges for clinicians. Conventional manual evaluation techniques for MGD face limitations characterized by inefficiencies, high subjectivity, limited big data processing capabilities, and a dearth of quantitative analytical tools. With rapidly advancing artificial intelligence (AI) techniques revolutionizing ophthalmology, studies are now leveraging sophisticated AI methodologies (including computer vision, unsupervised learning, and supervised learning) to facilitate comprehensive analyses of meibomian gland (MG) evaluations. These evaluations employ various techniques, including slit lamp examination, infrared imaging, confocal microscopy, and optical coherence tomography. This paradigm shift promises enhanced accuracy and consistency in disease evaluation and severity classification. While AI has achieved preliminary strides in meibomian gland evaluation, ongoing advancements in system development and clinical validation are imperative. We review the evolution of MG evaluation, juxtapose AI-driven methods with traditional approaches, elucidate the specific roles of diverse AI technologies, and explore their practical applications using various evaluation techniques. Moreover, we delve into critical considerations for the clinical deployment of AI technologies and envision future prospects, providing novel insights into MG evaluation and fostering technological and clinical progress in this arena.
Affiliation(s)
- Li Li
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia; Shengli Clinical Medical College of Fujian Medical University, Fuzhou University Affiliated Provincial Hospital, Fuzhou, China
- Kunhong Xiao
- Department of Ophthalmology and Optometry, Fujian Medical University, Fuzhou, China
- Xianwen Shang
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Wenyi Hu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Mayinuer Yusufu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Ruiye Chen
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Yujie Wang
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Jiahao Liu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Taichen Lai
- Department of Clinical Medicine, Fujian Medical University, Fuzhou, China
- Linling Guo
- Department of Clinical Medicine, Fujian Medical University, Fuzhou, China
- Jing Zou
- Department of Clinical Medicine, Fujian Medical University, Fuzhou, China
- Peter van Wijngaarden
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Zongyuan Ge
- The AIM for Health Lab, Faculty of IT, Monash University, Australia
- Mingguang He
- School of Optometry, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China; Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong Special Administrative Region of China.
- Zhuoting Zhu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia.
2
Huan J, Yuan J, Zhang H, Xu X, Shi B, Zheng Y, Li X, Zhang C, Hu Q, Fan Y, Lv J, Zhou L. Identification of agricultural surface source pollution in plain river network areas based on 3D-EEMs and convolutional neural networks. Water Sci Technol 2024; 89:1961-1980. PMID: 38678402. DOI: 10.2166/wst.2024.122. Received 11/20/2023; accepted 04/02/2024.
Abstract
Agricultural non-point sources, as major sources of organic pollution, continue to flow into the river network area of the Jiangnan Plain, posing a serious threat to the quality of water bodies, the ecological environment, and human health. There is therefore an urgent need for a method that can accurately identify various types of agricultural organic pollution, to protect the region's aquatic ecosystems from significant organic pollution. In this study, a network model called RA-GoogLeNet is proposed for accurately identifying agricultural organic pollution in the river network area of the Jiangnan Plain. RA-GoogLeNet uses fluorescence spectral data of agricultural non-point source water quality in the Changzhou Changdang Lake Basin. It is based on the GoogLeNet architecture and adds an efficient channel attention (ECA) mechanism to its A-Inception module, which enables the model to automatically learn the importance of independent channel features; residual (ResNet-style) connections link the A-Inception modules. The experimental results show that RA-GoogLeNet performs well in fluorescence spectral classification of water quality, with an accuracy of 96.3%, which is 1.2% higher than the baseline model, and has good recall and F1 score. This study provides powerful technical support for the traceability of agricultural organic pollution.
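The ECA mechanism described above can be sketched as follows: global average pooling produces one descriptor per channel, a small 1D convolution across neighbouring channels (with no dimensionality reduction) produces a weight per channel, and a sigmoid gate rescales the feature map. This is a minimal NumPy illustration, not the authors' implementation; the uniform 1D kernel is a fixed stand-in for learned weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def eca(feature_map, kernel_size=3):
    """Efficient channel attention over a (C, H, W) feature map.

    Each channel is summarized by global average pooling; a 1D
    convolution across adjacent channel descriptors, followed by a
    sigmoid, yields per-channel attention weights.
    """
    c, _, _ = feature_map.shape
    descriptor = feature_map.mean(axis=(1, 2))        # (C,) channel descriptors
    pad = kernel_size // 2
    padded = np.pad(descriptor, pad)                  # zero-pad the channel axis
    kernel = np.full(kernel_size, 1.0 / kernel_size)  # stand-in for a learned kernel
    conv = np.array([np.dot(padded[i:i + kernel_size], kernel) for i in range(c)])
    weights = sigmoid(conv)                           # (C,) weights in (0, 1)
    return feature_map * weights[:, None, None]       # channel-wise re-weighting
```

A residual connection between modules, as in the paper, would then be an element-wise sum such as `x + eca(block(x))` when shapes match.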
Affiliation(s)
- Juan Huan
- School of Computer and Artificial Intelligence, School of Alibaba Cloud Big Data, School of Software, Changzhou University, Changzhou 213100, China
- Jialong Yuan
- School of Computer and Artificial Intelligence, School of Alibaba Cloud Big Data, School of Software, Changzhou University, Changzhou 213100, China
- Hao Zhang
- School of Computer and Artificial Intelligence, School of Alibaba Cloud Big Data, School of Software, Changzhou University, Changzhou 213100, China
- Xiangen Xu
- Changzhou Environmental Science Research Institute, Changzhou 213002, China
- Bing Shi
- School of Computer and Artificial Intelligence, School of Alibaba Cloud Big Data, School of Software, Changzhou University, Changzhou 213100, China
- Yongchun Zheng
- School of Computer and Artificial Intelligence, School of Alibaba Cloud Big Data, School of Software, Changzhou University, Changzhou 213100, China
- Xincheng Li
- School of Computer and Artificial Intelligence, School of Alibaba Cloud Big Data, School of Software, Changzhou University, Changzhou 213100, China
- Chen Zhang
- School of Computer and Artificial Intelligence, School of Alibaba Cloud Big Data, School of Software, Changzhou University, Changzhou 213100, China
- Qucheng Hu
- School of Computer and Artificial Intelligence, School of Alibaba Cloud Big Data, School of Software, Changzhou University, Changzhou 213100, China
- Yixiong Fan
- School of Computer and Artificial Intelligence, School of Alibaba Cloud Big Data, School of Software, Changzhou University, Changzhou 213100, China
- Jiapeng Lv
- School of Computer and Artificial Intelligence, School of Alibaba Cloud Big Data, School of Software, Changzhou University, Changzhou 213100, China
- Liwan Zhou
- Changzhou Environmental Science Research Institute, Changzhou 213002, China
3
A Deep Learning Model for Evaluating Meibomian Glands Morphology from Meibography. J Clin Med 2023; 12:1053. PMID: 36769701. PMCID: PMC9918190. DOI: 10.3390/jcm12031053. Received 12/09/2022; revised 01/03/2023; accepted 01/20/2023. Open access.
Abstract
To develop a deep learning model for automatically segmenting tarsus and meibomian gland areas on meibography, we included 1087 meibography images from dry eye patients. The contour of the tarsus and each meibomian gland was labeled manually by human experts. The dataset was divided into training, validation, and test sets. We built a convolutional neural network-based U-net and trained the model to segment the tarsus and meibomian gland area. Accuracy, sensitivity, specificity, and the receiver operating characteristic (ROC) curve were calculated to evaluate the model. The area under the curve (AUC) values for models segmenting the tarsus and meibomian gland area were 0.985 and 0.938, respectively. The deep learning model achieved a sensitivity and specificity of 0.975 and 0.99, respectively, with an accuracy of 0.985 for segmenting the tarsus area. For meibomian gland area segmentation, the model obtained a high specificity of 0.96, with high accuracy of 0.937 and a moderate sensitivity of 0.751. The present research trained a deep learning model to automatically segment the tarsus and meibomian gland area from infrared meibography, and the model demonstrated outstanding accuracy in segmentation. With further improvement, the model could potentially be applied to meibomian gland assessment, facilitating dry eye evaluation in various clinical and research scenarios.
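The pixel-wise metrics reported above (accuracy, sensitivity, specificity) can be computed directly from a predicted binary mask and a ground-truth mask. A minimal NumPy sketch, not taken from the paper's code:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise accuracy, sensitivity, and specificity for binary masks.

    `pred` and `truth` are arrays of the same shape where nonzero
    pixels mark the segmented region (tarsus or gland area).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # region pixels correctly predicted
    tn = np.sum(~pred & ~truth)  # background pixels correctly predicted
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / pred.size
    sensitivity = tp / (tp + fn)  # recall on region pixels
    specificity = tn / (tn + fp)  # recall on background pixels
    return accuracy, sensitivity, specificity
```

Sweeping the model's output threshold and plotting sensitivity against (1 - specificity) would trace the ROC curve whose AUC the abstract reports.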
4
Escottá ÁT, Beccaro W, Ramírez MA. Evaluation of 1D and 2D Deep Convolutional Neural Networks for Driving Event Recognition. Sensors 2022; 22:4226. PMID: 35684848. PMCID: PMC9185469. DOI: 10.3390/s22114226. Received 05/04/2022; revised 05/26/2022; accepted 05/30/2022.
Abstract
Driving event detection and driver behavior recognition have been widely explored for many purposes, including detecting distractions, classifying driver actions, detecting kidnappings, pricing vehicle insurance, evaluating eco-driving, and managing shared and leased vehicles. Some systems can recognize the main driving events (e.g., accelerating, braking, and turning) by using in-vehicle devices, such as inertial measurement unit (IMU) sensors. In general, feature extraction is a commonly used technique to obtain robust and meaningful information from the sensor signals to guarantee the effectiveness of the subsequent classification algorithm. However, a general assessment of deep neural networks merits further investigation, particularly regarding end-to-end models based on Convolutional Neural Networks (CNNs), which combine feature extraction and classification in a single model. This paper primarily explores supervised deep-learning models based on 1D and 2D CNNs to classify driving events from the signals of linear acceleration and angular velocity obtained with the IMU sensors of a smartphone placed in the instrument panel of the vehicle. Aggressive and non-aggressive behaviors can be recognized by monitoring driving events, such as accelerating, braking, lane changing, and turning. The experimental results are promising: the best classification model achieved an accuracy of up to 82.40% and macro- and micro-average F1 scores of 75.36% and 82.40%, respectively, demonstrating high performance in the classification of driving events.
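The macro- and micro-average F1 scores quoted above are computed differently: macro-F1 averages the per-class F1 scores (so rare event classes weigh as much as common ones), while micro-F1 pools true/false positives and negatives across classes. A minimal NumPy sketch, unrelated to the authors' code:

```python
import numpy as np

def f1_scores(y_true, y_pred, n_classes):
    """Macro- and micro-averaged F1 for single-label multiclass predictions."""
    per_class_f1 = []
    tp_all = fp_all = fn_all = 0
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
        denom = 2 * tp + fp + fn
        per_class_f1.append(2 * tp / denom if denom else 0.0)
    macro = float(np.mean(per_class_f1))          # unweighted mean over classes
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)  # pooled counts
    return macro, micro
```

For single-label multiclass problems every false positive for one class is a false negative for another, so micro-F1 equals accuracy, consistent with the abstract's identical 82.40% figures.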