1
Inouye K, Petrosyan A, Moskalensky L, Thankam FG. Artificial intelligence in therapeutic management of hyperlipidemic ocular pathology. Exp Eye Res 2024; 245:109954. [PMID: 38838975] [DOI: 10.1016/j.exer.2024.109954]
Abstract
Hyperlipidemia has many ocular manifestations, the most prevalent being retinal vascular occlusion (RVO). Hyperlipidemic lesions and occlusions of the vessels supplying the retina can result in permanent blindness, necessitating prompt detection and treatment. Retinal vascular occlusion is diagnosed using different imaging modalities, including optical coherence tomography angiography. These diagnostic techniques produce images representing blood flow through the retinal vessels, providing an opportunity for AI to use image recognition to detect blockages and abnormalities before patients present with symptoms. AI is already being used as a non-invasive method to detect retinal vascular occlusions and other vascular pathology, as well as to predict treatment outcomes. As providers see an increase in patients presenting with new retinal vascular occlusions, the use of AI to detect and treat these conditions has the potential to improve patient outcomes and reduce the financial burden on the healthcare system. This article examines the implications of AI for current management strategies of RVO in hyperlipidemia and recent developments in AI technology for the management of ocular diseases.
Affiliation(s)
- Keiko Inouye
- Department of Translational Research, College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, USA
- Aelita Petrosyan
- Department of Translational Research, College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, USA
- Liana Moskalensky
- Department of Translational Research, College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, USA
- Finosh G Thankam
- Department of Translational Research, College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, USA
2
Zhang L, Huang Y, Chen J, Xu X, Xu F, Yao J. Multimodal deep transfer learning to predict retinal vein occlusion macular edema recurrence after anti-VEGF therapy. Heliyon 2024; 10:e29334. [PMID: 38655307] [PMCID: PMC11036002] [DOI: 10.1016/j.heliyon.2024.e29334]
Abstract
Purpose To develop a multimodal deep transfer learning (DTL) fusion model using optical coherence tomography angiography (OCTA) images to predict the recurrence of retinal vein occlusion (RVO) macular edema (ME) after three consecutive anti-VEGF injections. Methods This retrospective cross-sectional study comprised 2800 B-scan OCTA macular images collected from 140 patients with RVO-ME. A central macular thickness (CMT) > 250 μm during the three-month follow-up after three anti-VEGF injections was used as the criterion for recurrence. Preprocessing of the qualified OCTA images and segmentation of the lesion areas were performed by senior ophthalmologists. We developed and validated clinical, DTL, and multimodal fusion models based on clinical features and extracted OCTA imaging features. Model performance and expert predictions were evaluated using several metrics, including the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. Results The DTL models exhibited higher predictive efficacy than the clinical models and the experts' predictions. Among the DTL models, the Vgg19 model performed better than the other models, with an AUC of 0.968 (95% CI, 0.943-0.994), accuracy of 0.913, sensitivity of 0.922, and specificity of 0.902 in the validation cohort. Moreover, the fusion Vgg19 model showed the highest predictive efficacy of all the models, with an AUC of 0.972 (95% CI, 0.946-0.997), accuracy of 0.935, sensitivity of 0.935, and specificity of 0.934 in the validation cohort. Conclusions Multimodal fusion DTL models showed robust performance in predicting RVO-ME recurrence and may be applied to assist clinicians in determining patients' follow-up time after anti-VEGF therapy.
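The AUC, accuracy, sensitivity, and specificity reported in this abstract can all be derived from predicted scores and ground-truth recurrence labels. A minimal pure-Python sketch of these metrics (illustrative only, not the authors' code; labels use 1 = recurrence):

```python
def confusion_counts(y_true, y_pred):
    """Count TP, FP, TN, FN for binary labels (1 = recurrence)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def sensitivity_specificity_accuracy(y_true, y_pred):
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    sens = tp / (tp + fn)            # recall on recurrent cases
    spec = tn / (tn + fp)            # recall on non-recurrent cases
    acc = (tp + tn) / len(y_true)
    return sens, spec, acc

def auc(y_true, scores):
    """AUC via the Mann-Whitney rank statistic: the probability that a
    randomly chosen positive case scores higher than a negative one."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The rank-statistic form of AUC avoids explicitly sweeping thresholds and matches what ROC-curve integration computes.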
Affiliation(s)
- Laihe Zhang
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Ying Huang
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Jiaqin Chen
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Xiangzhong Xu
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Fan Xu
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Jin Yao
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
3
Chato L, Regentova E. Survey of Transfer Learning Approaches in the Machine Learning of Digital Health Sensing Data. J Pers Med 2023; 13:1703. [PMID: 38138930] [PMCID: PMC10744730] [DOI: 10.3390/jpm13121703]
Abstract
Machine learning and digital health sensing data have led to numerous research achievements aimed at improving digital health technology. However, using machine learning in digital health poses challenges related to data availability, such as incomplete, unstructured, and fragmented data, as well as issues related to data privacy, security, and data format standardization. Furthermore, there is a risk of bias and discrimination in machine learning models. Thus, developing an accurate prediction model from scratch can be an expensive and complicated task that often requires extensive experiments and complex computations. Transfer learning methods have emerged as a feasible solution to address these issues by transferring knowledge from a previously trained task to develop high-performance prediction models for a new task. This survey paper provides a comprehensive study of the effectiveness of transfer learning for digital health applications to enhance the accuracy and efficiency of diagnoses and prognoses, as well as to improve healthcare services. The first part of this survey paper presents and discusses the most common digital health sensing technologies as valuable data resources for machine learning applications, including transfer learning. The second part discusses the meaning of transfer learning, clarifying the categories and types of knowledge transfer. It also explains transfer learning methods and strategies, and their role in addressing the challenges in developing accurate machine learning models, specifically on digital health sensing data. These methods include feature extraction, fine-tuning, domain adaptation, multitask learning, federated learning, and few-/single-/zero-shot learning. This survey paper highlights the key features of each transfer learning method and strategy, and discusses the limitations and challenges of using transfer learning for digital health applications. 
Overall, this paper is a comprehensive survey of transfer learning methods for digital health sensing data. It aims to help researchers gain knowledge of transfer learning approaches and their applications in digital health, enhance the current transfer learning approaches used in digital health, develop new transfer learning strategies that overcome the current limitations, and apply these strategies to a variety of digital health technologies.
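The survey's core distinction among transfer strategies comes down to which pretrained parameters are allowed to update: feature extraction freezes the whole backbone and trains only a new task head, while fine-tuning also unfreezes the top backbone layers. A framework-agnostic sketch (layer names are hypothetical):

```python
def set_trainable(layers, strategy, n_finetune=2):
    """Mark which pretrained layers may update during transfer.

    layers: ordered list of backbone layer names, input side first.
    strategy: 'feature_extraction' freezes the whole backbone and
    trains only a new task head; 'fine_tuning' additionally unfreezes
    the last n_finetune backbone layers.
    """
    frozen = set(layers)  # start with every pretrained layer frozen
    if strategy == "fine_tuning":
        for name in layers[-n_finetune:]:
            frozen.discard(name)
    elif strategy != "feature_extraction":
        raise ValueError(f"unknown strategy: {strategy}")
    # a freshly initialized head is always trainable in both strategies
    return [n for n in layers if n not in frozen] + ["new_task_head"]

backbone = ["conv1", "conv2", "conv3", "conv4", "conv5"]
print(set_trainable(backbone, "feature_extraction"))  # ['new_task_head']
print(set_trainable(backbone, "fine_tuning"))  # ['conv4', 'conv5', 'new_task_head']
```

In a real framework the same decision is expressed by toggling per-layer trainability flags (or excluding parameters from the optimizer) rather than returning a name list.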
Affiliation(s)
- Lina Chato
- Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, NV 89154, USA
4
Vali M, Nazari B, Sadri S, Pour EK, Riazi-Esfahani H, Faghihi H, Ebrahimiadib N, Azizkhani M, Innes W, Steel DH, Hurlbert A, Read JCA, Kafieh R. CNV-Net: Segmentation, Classification and Activity Score Measurement of Choroidal Neovascularization (CNV) Using Optical Coherence Tomography Angiography (OCTA). Diagnostics (Basel) 2023; 13:1309. [PMID: 37046527] [PMCID: PMC10093691] [DOI: 10.3390/diagnostics13071309]
Abstract
This paper presents an artificial intelligence-based algorithm for the automated segmentation of Choroidal Neovascularization (CNV) areas and for identifying the presence or absence of CNV activity criteria (branching, peripheral arcade, dark halo, shape, and loop/anastomoses) in OCTA images. Methods: This retrospective, cross-sectional study includes 130 OCTA images from 101 patients with treatment-naïve CNV. At baseline, OCTA volumes of 6 × 6 mm2 were obtained to develop an AI-based algorithm that evaluates CNV activity based on five activity criteria, including tiny branching vessels, anastomoses and loops, peripheral arcades, and perilesional hypointense halos. The proposed algorithm comprises two blocks. The first covers the pre-processing and segmentation of CNVs in OCTA images using a modified U-Net network. The second consists of five binary classification networks, each implemented both with models trained from scratch and with transfer learning from pre-trained networks. Results: The proposed segmentation network yielded an average Dice coefficient of 0.86. The individual classifiers for the five activity criteria (branching, peripheral arcade, dark halo, shape, and loop/anastomoses) achieved accuracies of 0.84, 0.81, 0.86, 0.85, and 0.82, respectively. The AI-based algorithm potentially allows the reliable detection and segmentation of CNV from OCTA alone, without the need for imaging with contrast agents. The evaluation of the activity criteria in CNV lesions obtains acceptable results, and this algorithm could enable the objective, repeatable assessment of CNV features.
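The Dice coefficient used to score the segmentation network measures the overlap between a predicted mask and a reference mask. A minimal sketch on flattened binary masks (illustrative only, not the CNV-Net implementation):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as flat
    sequences of 0/1 pixels; 1.0 means perfect overlap, 0.0 none."""
    assert len(pred) == len(truth), "masks must have the same size"
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:  # both masks empty: conventionally perfect agreement
        return 1.0
    return 2.0 * intersection / total
```

In practice the masks are 2D arrays that are flattened before scoring, and the per-image Dice values are averaged over the test set, which is how a single figure such as 0.86 arises.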
5
Optical Coherence Tomography Angiography of the Intestine: How to Prevent Motion Artifacts in Open and Laparoscopic Surgery? Life (Basel) 2023; 13:705. [PMID: 36983861] [PMCID: PMC10055682] [DOI: 10.3390/life13030705]
Abstract
(1) Introduction. The intraoperative use of OCTA for intestinal circulation diagnostics is limited by the low informative value of OCTA images that contain too many motion artifacts. The aim of this study was to evaluate, in an experimental setting, the efficiency and safety of a unit developed to prevent motion artifacts in OCTA images of the intestine in both open and laparoscopic surgery. (2) Methods. A high-speed spectral-domain multimodal optical coherence tomograph (IAP RAS, Russia) operating at a wavelength of 1310 nm with a spectral width of 100 nm and a power of 2 mW was used. The developed unit was tested in two groups of experimental animals: minipigs (group I, n = 10, open abdomen) and rabbits (group II, n = 10, laparoscopy). Acute mesenteric ischemia was modeled, and 1 h later the small intestine underwent OCTA evaluation. A total of 400 OCTA images of the intact and ischemic small intestine were obtained and analyzed. The quality of the obtained OCTA images was evaluated using the score proposed in 2020 by the group of Magnin M. (3) Results. Without stabilization, OCTA images of the intestinal tissues were informative in only 32–44% of cases in open surgery and 14–22% of cases in laparoscopic surgery. A vacuum bowel stabilizer with a pressure deficit of 22–25 mm Hg significantly reduced the number of motion artifacts. As a result, the proportion of informative OCTA images increased to 86.5% in open surgery (χ2 = 200.2, p = 0.001) and to 60% in laparoscopy (χ2 = 148.3, p = 0.001). (4) Conclusions. The vacuum tissue stabilizer enabled a significant increase in the proportion of informative OCTA images by substantially reducing motion artifacts.
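The reported χ2 values compare the proportions of informative images with and without stabilization; for a 2 × 2 table of counts this is the standard Pearson statistic. A sketch of that computation (the example counts below are illustrative, not the study's raw data):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]], e.g. rows = with/without
    stabilizer, columns = informative/non-informative images."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative: 173/200 informative with the stabilizer vs 76/200 without.
stat = chi_square_2x2(173, 27, 76, 124)
```

A statistic this large against one degree of freedom corresponds to p well below 0.001, consistent with the significance levels reported above.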
6
Arrigo A, Aragona E, Battaglia Parodi M, Bandello F. Quantitative approaches in multimodal fundus imaging: State of the art and future perspectives. Prog Retin Eye Res 2023; 92:101111. [PMID: 35933313] [DOI: 10.1016/j.preteyeres.2022.101111]
Abstract
When it first appeared, multimodal fundus imaging revolutionized the diagnostic workup and provided extremely useful new insights into the pathogenesis of fundus diseases. The recent addition of quantitative approaches has further expanded the amount of information that can be obtained. In spite of the growing interest in advanced quantitative metrics, the scientific community has not reached a stable consensus on repeatable, standardized quantitative techniques to process and analyze the images. Furthermore, imaging artifacts may considerably affect the processing and interpretation of quantitative data, potentially affecting their reliability. The aim of this survey is to provide a comprehensive summary of the main multimodal imaging techniques, covering their limitations as well as their strengths. We also offer a thorough analysis of current quantitative imaging metrics, looking into their technical features, limitations, and interpretation. In addition, we describe the main imaging artifacts and their potential impact on imaging quality and reliability. The prospect of increasing reliance on artificial intelligence-based analyses suggests there is a need to develop more sophisticated quantitative metrics and to improve imaging technologies, incorporating clear, standardized, post-processing procedures. These measures are becoming urgent if these analyses are to cross the threshold from a research context to real-life clinical practice.
Affiliation(s)
- Alessandro Arrigo
- Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, via Olgettina 60, 20132, Milan, Italy.
- Emanuela Aragona
- Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, via Olgettina 60, 20132, Milan, Italy
- Maurizio Battaglia Parodi
- Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, via Olgettina 60, 20132, Milan, Italy
- Francesco Bandello
- Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, via Olgettina 60, 20132, Milan, Italy
7
Amygdalos I, Hachgenei E, Burkl L, Vargas D, Goßmann P, Wolff LI, Druzenko M, Frye M, König N, Schmitt RH, Chrysos A, Jöchle K, Ulmer TF, Lambertz A, Knüchel-Clarke R, Neumann UP, Lang SA. Optical coherence tomography and convolutional neural networks can differentiate colorectal liver metastases from liver parenchyma ex vivo. J Cancer Res Clin Oncol 2022:10.1007/s00432-022-04263-z. [PMID: 35960377] [DOI: 10.1007/s00432-022-04263-z]
Abstract
PURPOSE Optical coherence tomography (OCT) is an imaging technology based on low-coherence interferometry, which provides non-invasive, high-resolution cross-sectional images of biological tissues. A potential clinical application is the intraoperative examination of resection margins, as a real-time adjunct to histological examination. In this ex vivo study, we investigated the ability of OCT to differentiate colorectal liver metastases (CRLM) from healthy liver parenchyma when combined with convolutional neural networks (CNN). METHODS Between June and August 2020, consecutive adult patients undergoing elective liver resections for CRLM were included in this study. Fresh resection specimens were scanned ex vivo, before fixation in formalin, using a table-top OCT device at 1310 nm wavelength. Scanned areas were marked and histologically examined. A pre-trained CNN (Xception) was used to match OCT scans to their corresponding histological diagnoses. To validate the results, stratified k-fold cross-validation (CV) was carried out. RESULTS A total of 26 scans (containing approx. 26,500 images in total) were obtained from 15 patients. Of these, 13 were of normal liver parenchyma and 13 of CRLM. The CNN distinguished CRLM from healthy liver parenchyma with an F1-score of 0.93 (0.03), and a sensitivity and specificity of 0.94 (0.04) and 0.93 (0.04), respectively. CONCLUSION Optical coherence tomography combined with CNN can distinguish between healthy liver and CRLM with high accuracy ex vivo. Further studies are needed to improve upon these results and to develop in vivo diagnostic technologies, such as intraoperative scanning of resection margins.
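The stratified k-fold cross-validation used to validate the CNN splits the scans so that each fold preserves the CRLM/parenchyma balance. A minimal index-splitting sketch (illustrative; the study presumably used a library implementation such as scikit-learn's):

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Return k lists of sample indices; each class's indices are dealt
    round-robin across folds so every fold preserves the class balance."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return folds

# e.g. the 13 + 13 scans of this study split into 5 near-balanced folds
labels = ["tumor"] * 13 + ["parenchyma"] * 13
folds = stratified_kfold(labels, 5)
```

Each fold then serves once as the validation set while the model trains on the rest, and metrics such as the F1-score above are reported as mean (standard deviation) across folds. A real pipeline would also shuffle indices before dealing; the round-robin here keeps the sketch deterministic.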
Affiliation(s)
- Iakovos Amygdalos
- Department of General, Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Enno Hachgenei
- Department of Production Metrology, Fraunhofer Institute for Production Technology IPT, Aachen, Germany
- Luisa Burkl
- Department of Production Metrology, Fraunhofer Institute for Production Technology IPT, Aachen, Germany
- David Vargas
- Institute for Histopathology, University Hospital RWTH Aachen, Aachen, Germany
- Paul Goßmann
- Department of General, Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Laura I Wolff
- Department of General, Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Mariia Druzenko
- Department of General, Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Maik Frye
- Department of Production Metrology, Fraunhofer Institute for Production Technology IPT, Aachen, Germany
- Niels König
- Department of Production Metrology, Fraunhofer Institute for Production Technology IPT, Aachen, Germany
- Robert H Schmitt
- Department of Production Metrology, Fraunhofer Institute for Production Technology IPT, Aachen, Germany
- Laboratory for Machine Tools and Production Engineering (WZL), RWTH Aachen University, Aachen, Germany
- Alexandros Chrysos
- Department of General, Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Katharina Jöchle
- Department of General, Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Tom F Ulmer
- Department of General, Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Andreas Lambertz
- Department of General, Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Ruth Knüchel-Clarke
- Institute for Histopathology, University Hospital RWTH Aachen, Aachen, Germany
- Ulf P Neumann
- Department of General, Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Sven A Lang
- Department of General, Visceral and Transplantation Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
8
Diabetic Retinopathy Detection from Fundus Images of the Eye Using Hybrid Deep Learning Features. Diagnostics (Basel) 2022; 12:1607. [PMID: 35885512] [PMCID: PMC9324358] [DOI: 10.3390/diagnostics12071607]
Abstract
Diabetic Retinopathy (DR) is a medical condition present in patients suffering from long-term diabetes. If it is not diagnosed at an early stage, it can lead to vision impairment. High blood sugar in diabetic patients is the main cause of DR, which affects the blood vessels within the retina. Manual detection of DR is a difficult task, since the disease causes structural changes in the retina such as Microaneurysms (MAs), Exudates (EXs), Hemorrhages (HMs), and extra blood vessel growth. In this work, a hybrid technique for the detection and classification of Diabetic Retinopathy in fundus images of the eye is proposed. Transfer learning (TL) is applied to pre-trained Convolutional Neural Network (CNN) models to extract features, which are combined to generate a hybrid feature vector. This feature vector is passed to various classifiers for binary and multiclass classification of fundus images. System performance is measured using various metrics, and the results are compared with recent approaches to DR detection. The proposed method provides a significant performance improvement in DR detection from fundus images, achieving the highest accuracies of 97.8% for binary classification and 89.29% for multiclass classification.
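The hybrid feature vector described above is formed by concatenating the features extracted by several pre-trained CNN backbones before classification. A schematic sketch with stand-in extractor functions (hypothetical; real extractors would be truncated pre-trained CNNs returning long embedding vectors):

```python
def hybrid_features(image, extractors):
    """Concatenate the feature vectors produced by each pre-trained
    backbone (transfer-learning feature extraction) into a single
    hybrid vector that is handed to a downstream classifier."""
    hybrid = []
    for extract in extractors:
        hybrid.extend(extract(image))
    return hybrid

# stand-in extractors over a toy "image" (a flat list of pixel values)
fake_vgg = lambda img: [sum(img), max(img)]
fake_resnet = lambda img: [min(img)]
vector = hybrid_features([3, 1, 2], [fake_vgg, fake_resnet])  # [6, 3, 1]
```

The downstream classifier (e.g. an SVM or a small dense network) then sees one vector whose dimensionality is the sum of the individual embedding sizes, which is what lets complementary backbones improve accuracy over any single one.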
9
Jiao S, Jia Y, Yao X. Emerging imaging developments in experimental vision sciences and ophthalmology. Exp Biol Med (Maywood) 2021; 246:2137-2139. [PMID: 34404253] [PMCID: PMC8718248] [DOI: 10.1177/15353702211038891]
Affiliation(s)
- Shuliang Jiao
- Department of Biomedical Engineering, Florida International University, Miami, FL 33174, USA
- Yali Jia
- Casey Eye Institute, Oregon Health &amp; Science University, Portland, OR 97239, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA