1
Milad D, Antaki F, Mikhail D, Farah A, El-Khoury J, Touma S, Durr GM, Nayman T, Playout C, Keane PA, Duval R. Code-Free Deep Learning Glaucoma Detection on Color Fundus Images. Ophthalmol Sci 2025; 5:100721. PMID: 40182983; PMCID: PMC11964632; DOI: 10.1016/j.xops.2025.100721. Received 06/10/2024; revised 01/04/2025; accepted 01/23/2025.
Abstract
Objective: Code-free deep learning (CFDL) allows clinicians with no coding experience to build their own artificial intelligence (AI) models. This study assesses the performance of CFDL in glaucoma detection from fundus images in comparison to expert-designed models.
Design: Deep learning model development, testing, and validation.
Subjects: A total of 101 442 labeled fundus images from the Rotterdam EyePACS Artificial Intelligence for Robust Glaucoma Screening (AIROGS) dataset were included.
Methods: Ophthalmology trainees without coding experience designed a CFDL binary classification model on the Rotterdam EyePACS AIROGS dataset to differentiate glaucomatous from normal optic nerves. We compared our results with bespoke models from the literature, then externally validated our model on two datasets, the Retinal Fundus Glaucoma Challenge (REFUGE) and the Glaucoma grading from Multi-Modality imAges (GAMMA), at 0.1, 0.3, and 0.5 confidence thresholds.
Main Outcome Measures: Area under the precision-recall curve (AuPRC), sensitivity at 95% specificity (SE@95SP), accuracy, area under the receiver operating characteristic curve (AUC), and positive predictive value (PPV).
Results: The CFDL model showed high performance metrics comparable to the bespoke deep learning models. Our single-label classification model had an AuPRC of 0.988, an SE@95SP of 95%, and an accuracy of 91% (compared with 85% SE@95SP for the top bespoke models). On external validation with the REFUGE dataset, our model had an SE@95SP of 83%, an AUC of 0.960, a PPV of 73% to 94%, and an accuracy of 95% to 98% across the 0.1, 0.3, and 0.5 confidence threshold cutoffs. On external validation with the GAMMA dataset at the same cutoffs, it had an SE@95SP of 98%, an AUC of 0.994, a PPV of 94% to 96%, and an accuracy of 94% to 97%.
Conclusion: The capacity of CFDL models to perform glaucoma screening using fundus images presents a compelling proof of concept, empowering clinicians to explore innovative model designs for broad glaucoma screening in the near future.
Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
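For readers unfamiliar with the metrics reported above, sensitivity at 95% specificity (SE@95SP) and the area under the precision-recall curve (AuPRC) can both be computed from a model's predicted scores. The sketch below is not from the paper; it uses synthetic labels and scores with scikit-learn to show one way these quantities are obtained:

```python
import numpy as np
from sklearn.metrics import roc_curve, average_precision_score

rng = np.random.default_rng(0)
# Synthetic ground truth (1 = glaucoma) and model scores; stand-ins for real data.
y_true = rng.integers(0, 2, size=1000)
scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=1000), 0, 1)

# roc_curve yields (FPR, TPR) pairs over all thresholds; specificity = 1 - FPR.
fpr, tpr, _ = roc_curve(y_true, scores)
se_at_95sp = tpr[fpr <= 0.05].max()  # best sensitivity while specificity >= 95%

# Average precision is the standard estimator of the area under the PR curve.
auprc = average_precision_score(y_true, scores)
print(f"SE@95SP={se_at_95sp:.2f}  AuPRC={auprc:.3f}")
```

Sweeping the operating threshold this way is also how the 0.1/0.3/0.5 confidence cutoffs in the abstract trade sensitivity against PPV.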
Affiliation(s)
- Daniel Milad
  - Department of Ophthalmology, Université de Montréal, Montreal, Quebec, Canada
  - Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l’Est-de-l’Île-de-Montréal, Montreal, Quebec, Canada
  - Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
- Fares Antaki
  - Department of Ophthalmology, Université de Montréal, Montreal, Quebec, Canada
  - Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l’Est-de-l’Île-de-Montréal, Montreal, Quebec, Canada
  - Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
  - The CHUM School of Artificial Intelligence in Healthcare (SAIH), Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
  - Institute of Ophthalmology, University College London, London, UK
- David Mikhail
  - Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Andrew Farah
  - Department of Medicine, McGill University, Montreal, Quebec, Canada
- Jonathan El-Khoury
  - Department of Ophthalmology, Université de Montréal, Montreal, Quebec, Canada
  - Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l’Est-de-l’Île-de-Montréal, Montreal, Quebec, Canada
  - Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
- Samir Touma
  - Department of Ophthalmology, Université de Montréal, Montreal, Quebec, Canada
  - Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l’Est-de-l’Île-de-Montréal, Montreal, Quebec, Canada
  - Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
- Georges M. Durr
  - Department of Ophthalmology, Université de Montréal, Montreal, Quebec, Canada
  - Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Quebec, Canada
- Taylor Nayman
  - Department of Ophthalmology, Université de Montréal, Montreal, Quebec, Canada
  - Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l’Est-de-l’Île-de-Montréal, Montreal, Quebec, Canada
- Clément Playout
  - Department of Ophthalmology, Université de Montréal, Montreal, Quebec, Canada
  - Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l’Est-de-l’Île-de-Montréal, Montreal, Quebec, Canada
- Pearse A. Keane
  - Institute of Ophthalmology, University College London, London, UK
  - NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Renaud Duval
  - Department of Ophthalmology, Université de Montréal, Montreal, Quebec, Canada
  - Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de l’Est-de-l’Île-de-Montréal, Montreal, Quebec, Canada
2
Li ZQ, Liu W, Luo WL, Chen SQ, Deng YP. Artificial intelligence software for assessing brain ischemic penumbra/core infarction on computed tomography perfusion: A real-world accuracy study. World J Radiol 2024; 16:329-336. PMID: 39239246; PMCID: PMC11372548; DOI: 10.4329/wjr.v16.i8.329. Received 01/30/2024; revised 07/22/2024; accepted 08/05/2024.
Abstract
BACKGROUND: With the increasingly extensive application of artificial intelligence (AI) in medical systems, the real-world accuracy of AI in medical diagnosis deserves attention and objective evaluation.
AIM: To investigate the accuracy of AI diagnostic software (Shukun) in assessing ischemic penumbra/core infarction in acute ischemic stroke patients with large vessel occlusion.
METHODS: From November 2021 to March 2022, consecutive acute stroke patients with large vessel occlusion who underwent mechanical thrombectomy (MT) after Shukun AI penumbra assessment were included. Computed tomography angiography (CTA) and perfusion exams were analyzed by the AI and reviewed by three senior neurointerventional experts; in cases of divergence, the experts discussed until reaching a final conclusion. When the AI results were inconsistent with the experts' diagnosis, the AI diagnosis was considered inaccurate.
RESULTS: A total of 22 patients were included. The vascular recanalization rate was 90.9%, and 63.6% of patients had modified Rankin scale scores of 0-2 at the 3-month follow-up. The computed tomography (CT) perfusion diagnosis by Shukun (AI) was invalid in 3 patients (inaccuracy rate: 13.6%).
CONCLUSION: AI (Shukun) has limits in assessing ischemic penumbra. Integrating clinical and imaging data (CT, CTA, and even magnetic resonance imaging) is crucial for MT decision-making.
Affiliation(s)
- Zhu-Qin Li
  - Department of Neurology, Huizhou Central People’s Hospital, Huizhou 516001, Guangdong Province, China
- Wu Liu
  - Department of Neurology, Huizhou Central People’s Hospital, Huizhou 516001, Guangdong Province, China
- Wei-Liang Luo
  - Department of Neurology, Huizhou Central People’s Hospital, Huizhou 516001, Guangdong Province, China
- Su-Qin Chen
  - Department of Neurology, Huizhou Central People’s Hospital, Huizhou 516001, Guangdong Province, China
- Yu-Ping Deng
  - Department of Neurology, Huizhou Central People’s Hospital, Huizhou 516001, Guangdong Province, China
3
Wu CW, Huang TY, Liou YC, Chen SH, Wu KY, Tseng HY. Recognition of Glaucomatous Fundus Images Using Machine Learning Methods Based on Optic Nerve Head Topographic Features. J Glaucoma 2024; 33:601-606. PMID: 38546234; DOI: 10.1097/ijg.0000000000002379. Received 08/11/2023; accepted 02/29/2024.
Abstract
PRECIS: Machine learning classifiers based on optic nerve head topographic features are an effective and straightforward approach to detecting glaucomatous fundus images.
STUDY DESIGN: Retrospective case-control study.
OBJECTIVE: To compare the effectiveness of clinical discriminant rules and machine learning classifiers in identifying glaucomatous fundus images based on optic disc topographic features.
METHODS: The study used a total of 800 fundus images, half glaucomatous and half non-glaucomatous, obtained from an open database and clinical work. The images were randomly divided into training and testing sets with equal numbers of glaucomatous and non-glaucomatous images. An ophthalmologist framed the edges of the optic cup and disc, and a program calculated five features (the vertical cup-to-disc ratio and the width of the optic rim, in pixels, in each of four quadrants) that were used to build machine learning classifiers. The discriminative ability of these classifiers was compared with that of clinical discriminant rules.
RESULTS: The machine learning classifiers outperformed the clinical discriminant rules, with the extreme gradient boosting method showing the best performance in identifying glaucomatous fundus images. Decision tree analysis revealed that the cup-to-disc ratio was the most important feature for identifying glaucomatous fundus images, while the temporal width of the optic rim was the least important.
CONCLUSIONS: Machine learning classifiers are an effective approach to detecting glaucomatous fundus images based on optic disc topographic features; integration with an automated program for framing and calculating the required parameters would make the approach straightforward in practice.
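The setup described, five optic disc features feeding a gradient-boosted tree classifier, can be sketched generically. The code below uses synthetic feature values and scikit-learn's GradientBoostingClassifier as a stand-in for the extreme gradient boosting (XGBoost) method the authors used; the feature ranges, the toy labeling rule, and the split are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 800  # same image count as the study, but values here are synthetic

# Five topographic features: vertical cup-to-disc ratio (CDR) and rim widths
# (pixels) in the superior, inferior, nasal, and temporal quadrants.
cdr = rng.uniform(0.2, 0.95, n)
rims = rng.uniform(20, 120, (n, 4)) * (1.2 - cdr)[:, None]  # thinner rim as cup enlarges
X = np.column_stack([cdr, rims])
y = (cdr > 0.6).astype(int)  # toy label: large vertical CDR -> "glaucomatous"

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)  # equal class balance per split
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("test accuracy:", clf.score(X_te, y_te))
print("feature importances:", clf.feature_importances_.round(3))
```

With labels driven by the CDR, the fitted model's feature importances concentrate on that first column, mirroring the study's finding that the cup-to-disc ratio dominated the decision trees.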
Affiliation(s)
- Chao-Wei Wu
  - Department of Ophthalmology, Kaohsiung Medical University Hospital
- Tzu-Yu Huang
  - Department of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, Kaohsiung City, Taiwan
- Yeong-Cheng Liou
  - Department of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, Kaohsiung City, Taiwan
- Shih-Hsin Chen
  - Department of Computer Science and Information Engineering, Tamkang University, New Taipei, Taiwan (R.O.C.)
- Kwou-Yeung Wu
  - Department of Ophthalmology, Kaohsiung Medical University Hospital
- Han-Yi Tseng
  - Department of Ophthalmology, Kaohsiung Medical University Hospital
4
Mathkor DM, Mathkor N, Bassfar Z, Bantun F, Slama P, Ahmad F, Haque S. Multirole of the internet of medical things (IoMT) in biomedical systems for managing smart healthcare systems: An overview of current and future innovative trends. J Infect Public Health 2024; 17:559-572. PMID: 38367570; DOI: 10.1016/j.jiph.2024.01.013. Received 04/06/2023; revised 01/16/2024; accepted 01/18/2024.
Abstract
The Internet of Medical Things (IoMT), an emerging subset of the Internet of Things (IoT) often called IoT in healthcare, refers to medical devices and applications with internet connectivity. It is rapidly gaining researchers' attention owing to its wide-ranging applicability in biomedical systems for smart healthcare. IoMT facilitates remote healthcare and plays a crucial role within the healthcare industry in enhancing the precision, reliability, consistency, and productivity of electronic devices used for various healthcare purposes. It comprises a conceptualized architecture providing information-retrieval strategies that extract data from patient records using sensors for biomedical analysis and diagnostics against manifold diseases, offering cost-effective medical solutions, quick hospital treatments, and personalized healthcare. This article provides a comprehensive overview of IoMT, with special emphasis on current and future trends in biomedical systems, such as deep learning, machine learning, blockchain, artificial intelligence, radio-frequency identification, and Industry 5.0.
Affiliation(s)
- Darin Mansor Mathkor
  - Research and Scientific Studies Unit, Department of Nursing, College of Nursing and Health Sciences, Jazan University, Jazan 45142, Saudi Arabia
- Noof Mathkor
  - Department of Pathology, Ministry of National Guard Health Affairs (MNGHA), Riyadh, Saudi Arabia
- Zaid Bassfar
  - Department of Information Technology, Faculty of Computers and Information Technology, University of Tabuk, Tabuk, Saudi Arabia
- Farkad Bantun
  - Department of Microbiology, Faculty of Medicine, Umm Al-Qura University, Makkah, Saudi Arabia
- Petr Slama
  - Laboratory of Animal Immunology and Biotechnology, Department of Animal Morphology, Physiology and Genetics, Mendel University in Brno, 61300 Brno, Czech Republic
- Faraz Ahmad
  - Department of Biotechnology, School of Bio Sciences and Technology, Vellore Institute of Technology, Vellore 632014, India
- Shafiul Haque
  - Research and Scientific Studies Unit, Department of Nursing, College of Nursing and Health Sciences, Jazan University, Jazan 45142, Saudi Arabia
  - Gilbert and Rose-Marie Chagoury School of Medicine, Lebanese American University, Beirut, Lebanon
  - Centre of Medical and Bio-Allied Health Sciences Research, Ajman University, Ajman, United Arab Emirates
5
Tettey F, Parupelli SK, Desai S. A Review of Biomedical Devices: Classification, Regulatory Guidelines, Human Factors, Software as a Medical Device, and Cybersecurity. Biomed Mater Devices 2024; 2:316-341. DOI: 10.1007/s44174-023-00113-9. Received 05/26/2023; accepted 06/29/2023.
6
Carou-Senra P, Rodríguez-Pombo L, Awad A, Basit AW, Alvarez-Lorenzo C, Goyanes A. Inkjet Printing of Pharmaceuticals. Adv Mater 2024; 36:e2309164. PMID: 37946604; DOI: 10.1002/adma.202309164. Received 09/06/2023; revised 10/23/2023.
Abstract
Inkjet printing (IJP) is an additive manufacturing process that selectively deposits ink materials, layer by layer, to create 3D objects or 2D patterns with precise control over their structure and composition. This technology has emerged as an attractive and versatile approach to addressing the ever-evolving demands of personalized medicine in the healthcare industry. Although originally developed for non-healthcare applications, IJP harnesses the potential of pharma-inks: meticulously formulated inks containing drugs and pharmaceutical excipients. Delving into the formulation and components of pharma-inks, the review unravels the key to the precise and adaptable material deposition enabled by IJP. It extends its focus to substrate materials, including paper, films, foams, lenses, and 3D-printed materials, showcasing their diverse advantages while exploring a wide spectrum of therapeutic applications. Additionally, the potential benefits of hardware and software improvements, along with artificial intelligence integration, are discussed as routes to enhance IJP's precision and efficiency. Embracing these advancements, IJP holds immense potential to reshape traditional medicine-manufacturing processes, ushering in an era of medical precision. However, further exploration and optimization are needed to fully realize IJP's healthcare capabilities. As researchers push the boundaries of IJP, the vision of patient-specific treatment is on the horizon of becoming a tangible reality.
Affiliation(s)
- Paola Carou-Senra
  - Departamento de Farmacología, Farmacia y Tecnología Farmacéutica, I+D Farma Group (GI-1645), Facultad de Farmacia, Instituto de Materiales (iMATUS) and Health Research Institute of Santiago de Compostela (IDIS), Universidade de Santiago de Compostela, Santiago de Compostela, 15782, Spain
- Lucía Rodríguez-Pombo
  - Departamento de Farmacología, Farmacia y Tecnología Farmacéutica, I+D Farma Group (GI-1645), Facultad de Farmacia, Instituto de Materiales (iMATUS) and Health Research Institute of Santiago de Compostela (IDIS), Universidade de Santiago de Compostela, Santiago de Compostela, 15782, Spain
- Atheer Awad
  - Department of Clinical, Pharmaceutical and Biological Sciences, University of Hertfordshire, College Lane, Hatfield, AL10 9AB, UK
- Abdul W Basit
  - Department of Pharmaceutics, UCL School of Pharmacy, University College London, 29-39 Brunswick Square, London, WC1N 1AX, UK
  - FABRX Ltd., Henwood House, Henwood, Ashford, Kent, TN24 8DH, UK
  - FABRX Artificial Intelligence, Carretera de Escairón 14, Currelos (O Saviñao), CP 27543, Spain
- Carmen Alvarez-Lorenzo
  - Departamento de Farmacología, Farmacia y Tecnología Farmacéutica, I+D Farma Group (GI-1645), Facultad de Farmacia, Instituto de Materiales (iMATUS) and Health Research Institute of Santiago de Compostela (IDIS), Universidade de Santiago de Compostela, Santiago de Compostela, 15782, Spain
- Alvaro Goyanes
  - Departamento de Farmacología, Farmacia y Tecnología Farmacéutica, I+D Farma Group (GI-1645), Facultad de Farmacia, Instituto de Materiales (iMATUS) and Health Research Institute of Santiago de Compostela (IDIS), Universidade de Santiago de Compostela, Santiago de Compostela, 15782, Spain
  - Department of Pharmaceutics, UCL School of Pharmacy, University College London, 29-39 Brunswick Square, London, WC1N 1AX, UK
  - FABRX Ltd., Henwood House, Henwood, Ashford, Kent, TN24 8DH, UK
  - FABRX Artificial Intelligence, Carretera de Escairón 14, Currelos (O Saviñao), CP 27543, Spain
7
Pucchio A, Krance S, Pur DR, Bassi A, Miranda R, Felfeli T. The role of artificial intelligence in analysis of biofluid markers for diagnosis and management of glaucoma: A systematic review. Eur J Ophthalmol 2023; 33:1816-1833. PMID: 36426575; PMCID: PMC10469503; DOI: 10.1177/11206721221140948. Received 04/14/2022; accepted 11/01/2022.
Abstract
PURPOSE: This review focuses on the utility of artificial intelligence (AI) in the analysis of biofluid markers in glaucoma. We detail the accuracy and validity of AI in exploring biomarkers that provide insight into glaucoma pathogenesis.
METHODS: A comprehensive search was conducted across five electronic databases: Embase, Medline, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Web of Science. Studies pertaining to biofluid marker analysis using AI or bioinformatics in glaucoma were included. Identified studies were critically appraised and assessed for risk of bias using the Joanna Briggs Institute Critical Appraisal tools.
RESULTS: A total of 10,258 studies were screened and 39 met the inclusion criteria: 23 cross-sectional studies (59%), 9 prospective cohort studies (23%), 6 retrospective cohort studies (15%), and 1 case-control study (3%). Primary open-angle glaucoma (POAG) was the most commonly studied subtype (55% of included studies). Twenty-four studies examined disease characteristics, 10 explored treatment decisions, and 5 provided diagnostic clarification. While studies examined entire metabolomic or proteomic profiles to determine changes in POAG, the data were heterogeneous, with over 175 unique differentially expressed biomarkers reported. Discriminant analysis and artificial neural network predictive models displayed strong ability to differentiate glaucoma patients from controls, although these tools were untested in a clinical context.
CONCLUSION: AI models could inform glaucoma diagnosis with high sensitivity and specificity. While insight into differentially expressed biomarkers is valuable for pathogenic exploration, no clear pathogenic mechanism in glaucoma has emerged.
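The discriminant-analysis models mentioned in the results can be illustrated generically: given a biomarker matrix for two groups, linear discriminant analysis finds the projection that best separates them. The sketch below uses synthetic "biomarker" values and scikit-learn, not the reviewed studies' data; the group sizes and mean shift are arbitrary assumptions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
# Synthetic biomarker matrix: 60 controls and 60 glaucoma samples, 5 markers,
# with a mean shift in the first two markers for the glaucoma group.
controls = rng.normal(0.0, 1.0, (60, 5))
glaucoma = rng.normal(0.0, 1.0, (60, 5))
glaucoma[:, :2] += 1.5  # differentially "expressed" markers

X = np.vstack([controls, glaucoma])
y = np.array([0] * 60 + [1] * 60)  # 0 = control, 1 = glaucoma

# Fit the linear discriminant and report separation on the training data.
lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```

As the review cautions, strong separation on a study's own samples does not establish clinical performance; that requires held-out validation.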
Affiliation(s)
- Aidan Pucchio
  - School of Medicine, Queen's University, Kingston, Ontario, Canada
- Saffire Krance
  - Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Daiana R Pur
  - Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Arshpreet Bassi
  - Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Rafael Miranda
  - Toronto Health Economics and Technology Assessment Collaborative, University of Toronto, Toronto, Ontario, Canada
  - The Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Tina Felfeli
  - Toronto Health Economics and Technology Assessment Collaborative, University of Toronto, Toronto, Ontario, Canada
  - Department of Ophthalmology and Visual Sciences, University of Toronto, Toronto, Ontario, Canada
  - The Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
8
Ren X, Feng W, Ran R, Gao Y, Lin Y, Fu X, Tao Y, Wang T, Wang B, Ju L, Chen Y, He L, Xi W, Liu X, Ge Z, Zhang M. Artificial intelligence to distinguish retinal vein occlusion patients using color fundus photographs. Eye (Lond) 2023; 37:2026-2032. PMID: 36302974; PMCID: PMC10333217; DOI: 10.1038/s41433-022-02239-4. Received 03/12/2022; revised 08/04/2022; accepted 09/02/2022.
Abstract
PURPOSE: To establish an artificial intelligence (AI) model for distinguishing color fundus photographs (CFP) of retinal vein occlusion (RVO) patients from those of normal individuals.
METHODS: The training dataset included 2013 CFP from fellow eyes of RVO patients and 8536 age- and gender-matched normal CFP. Model performance was assessed in two independent testing datasets using the area under the receiver operating characteristic curve (AUC), accuracy, precision, specificity, sensitivity, and confusion matrices. We further explored the probable clinical relevance of the AI by extracting and comparing features of the retinal images.
RESULTS: In the training dataset, the model achieved an average AUC of 0.9866 (95% CI: 0.9805-0.9918), accuracy of 0.9534 (95% CI: 0.9421-0.9639), precision of 0.9123 (95% CI: 0.8784-0.9453), specificity of 0.9810 (95% CI: 0.9729-0.9884), and sensitivity of 0.8367 (95% CI: 0.7953-0.8756) for identifying fundus images of RVO patients. In independent external dataset 1, the model achieved an AUC of 0.8102 (95% CI: 0.7979-0.8226), accuracy of 0.7752 (95% CI: 0.7633-0.7875), precision of 0.7041 (95% CI: 0.6873-0.7211), specificity of 0.6499 (95% CI: 0.6305-0.6679), and sensitivity of 0.9124 (95% CI: 0.9004-0.9241) for the RVO group. In the training dataset, there were significant differences between the two groups in retinal arteriovenous ratio, cup-to-disc ratio, and optic disc tilt angle (p = 0.001, p = 0.0001, and p = 0.0001, respectively).
CONCLUSION: We trained an AI model to classify color fundus photographs of RVO patients with stable performance in both internal and external datasets. This may be of great importance for risk prediction in patients with RVO.
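The 95% confidence intervals reported alongside each metric above are commonly obtained by bootstrap resampling of the test set. A minimal sketch of the percentile-bootstrap approach, using synthetic labels and scores (not the study's data) with NumPy and scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Synthetic labels/scores standing in for a test set of fundus-image predictions.
y = rng.integers(0, 2, 500)
s = np.clip(0.5 * y + rng.normal(0.25, 0.2, 500), 0, 1)

aucs = []
for _ in range(2000):                       # resample the test set with replacement
    idx = rng.integers(0, len(y), len(y))
    if y[idx].min() == y[idx].max():        # skip degenerate one-class resamples
        continue
    aucs.append(roc_auc_score(y[idx], s[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])   # percentile-method 95% CI
print(f"AUC = {roc_auc_score(y, s):.4f} (95% CI: {lo:.4f}-{hi:.4f})")
```

The same loop yields intervals for accuracy, precision, specificity, or sensitivity by swapping in the corresponding metric function.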
Affiliation(s)
- Xiang Ren
  - Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
  - Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Wei Feng
  - Beijing Airdoc Technology Co Ltd, Beijing, China
- Ruijin Ran
  - Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
  - Minda Hospital of Hubei Minzu University, Enshi, China
- Yunxia Gao
  - Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Yu Lin
  - Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
  - Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Xiangyu Fu
  - Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
  - Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Yunhan Tao
  - Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Ting Wang
  - Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
  - Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Bin Wang
  - Beijing Airdoc Technology Co Ltd, Beijing, China
- Lie Ju
  - Beijing Airdoc Technology Co Ltd, Beijing, China
  - ECSE, Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- Yuzhong Chen
  - Beijing Airdoc Technology Co Ltd, Beijing, China
- Lanqing He
  - Beijing Airdoc Technology Co Ltd, Beijing, China
- Wu Xi
  - Chengdu Ikangguobin Health Examination Center Ltd, Chengdu, China
- Xiaorong Liu
  - Chengdu Ikangguobin Health Examination Center Ltd, Chengdu, China
- Zongyuan Ge
  - ECSE, Faculty of Engineering, Monash University, Melbourne, VIC, Australia
  - eResearch Centre, Monash University, Melbourne, VIC, Australia
- Ming Zhang
  - Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
9
Ahmed AA, Brychcy A, Abouzid M, Witt M, Kaczmarek E. Perception of Pathologists in Poland of Artificial Intelligence and Machine Learning in Medical Diagnosis - A Cross-Sectional Study. J Pers Med 2023; 13:962. PMID: 37373951; DOI: 10.3390/jpm13060962. Received 04/18/2023; revised 05/31/2023; accepted 06/04/2023.
Abstract
BACKGROUND: In the past two decades, several artificial intelligence (AI) and machine learning (ML) models have been developed to assist in medical diagnosis, decision-making, and the design of treatment protocols. The number of active pathologists in Poland is low, prolonging tumor patients' diagnosis and treatment journey; applying AI and ML may aid this process. Our study therefore investigates pathologists' knowledge of the clinical use of AI and ML methods in Poland. To our knowledge, no similar study has been conducted.
METHODS: We conducted a cross-sectional study targeting pathologists in Poland from June to July 2022. The questionnaire included self-reported information on AI or ML knowledge, experience, specialization, personal views, and level of agreement with different aspects of AI and ML in medical diagnosis. Data were analyzed using IBM SPSS Statistics v26, PQStat Software v1.8.2.238, and RStudio Build 351.
RESULTS: Overall, 68 pathologists in Poland participated in our study. Their average age and years of experience were 38.92 ± 8.88 and 12.78 ± 9.48 years, respectively. Approximately 42% used AI or ML methods, and there was a significant knowledge gap between users and those who had never used them (OR = 17.9, 95% CI = 3.57-89.79, p < 0.001). Additionally, AI users had higher odds of reporting satisfaction with the speed of AI in the medical diagnosis process (OR = 4.66, 95% CI = 1.05-20.78, p = 0.043). Finally, significant differences (p = 0.003) were observed regarding liability for legal issues arising from the use of AI and ML methods.
CONCLUSION: Most pathologists in this study did not use AI or ML models, highlighting the importance of raising awareness and of educational programs on applying AI and ML in medical diagnosis.
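The odds ratios and 95% confidence intervals reported in these results are standard 2×2-table statistics. A generic sketch of the log-OR (Woolf) method follows, with illustrative counts that are assumptions for demonstration, not the study's raw data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table [[a, b], [c, d]] via the log-OR (Woolf) method."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts: exposed cases/non-cases vs. unexposed cases/non-cases.
or_, lo, hi = odds_ratio_ci(20, 8, 10, 30)
print(f"OR = {or_:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```

Note the interval is asymmetric around the OR because the normal approximation is applied on the log scale, which matches the shape of the intervals quoted above (e.g., 3.57-89.79 around 17.9).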
Affiliation(s)
- Alhassan Ali Ahmed
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 61-806 Poznan, Poland
- Doctoral School, Poznan University of Medical Sciences, 61-806 Poznan, Poland
| | - Agnieszka Brychcy
- Department of Clinical Patomorphology, Heliodor Swiecicki Clinical Hospital of the Poznan University of Medical Sciences, 61-806 Poznan, Poland
| | - Mohamed Abouzid
- Doctoral School, Poznan University of Medical Sciences, 61-806 Poznan, Poland
- Department of Physical Pharmacy and Pharmacokinetics, Poznan University of Medical Sciences, 60-806 Poznan, Poland
| | - Martin Witt
- Department of Anatomy, Rostock University Medical Centre, 18057 Rostock, Germany
- Department of Anatomy, Technische Universität Dresden, 01307 Dresden, Germany
| | - Elżbieta Kaczmarek
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 61-806 Poznan, Poland
| |
10
Jadhav C, Yadav KS. Formulation and evaluation of polymer-coated bimatoprost-chitosan matrix ocular inserts for sustained lowering of IOP in rabbits. J Drug Deliv Sci Technol 2022. DOI: 10.1016/j.jddst.2022.103885.
11
Hamamoto R, Koyama T, Kouno N, Yasuda T, Yui S, Sudo K, Hirata M, Sunami K, Kubo T, Takasawa K, Takahashi S, Machino H, Kobayashi K, Asada K, Komatsu M, Kaneko S, Yatabe Y, Yamamoto N. Introducing AI to the molecular tumor board: one direction toward the establishment of precision medicine using large-scale cancer clinical and biological information. Exp Hematol Oncol 2022; 11:82. PMID: 36316731; PMCID: PMC9620610; DOI: 10.1186/s40164-022-00333-7. Received 08/31/2022; accepted 10/05/2022.
Abstract
Since U.S. President Barack Obama announced the Precision Medicine Initiative in his State of the Union address in 2015, the establishment of precision medicine systems has been emphasized worldwide, particularly in the field of oncology. With the advent of next-generation sequencers in particular, genome analysis technology has made remarkable progress, and there are active efforts to apply genomic information to diagnosis and treatment. Generally, in the process of feeding back the results of next-generation sequencing analysis to patients, a molecular tumor board (MTB), consisting of experts in clinical oncology, genetic medicine, and related fields, is established to discuss the results. At present, however, an MTB involves a large amount of work, with humans searching through vast databases and literature, selecting the best drug candidates, and manually confirming the status of available clinical trials. In addition, as personalized medicine advances, the burden on MTB members is expected to increase. Under these circumstances, introducing cutting-edge artificial intelligence (AI) and information and communication technology into MTBs, reducing the burden on MTB members, and building a platform that enables more accurate and personalized medical care would be of great benefit to patients. In this review, we introduce the latest status of elemental technologies with potential for AI utilization in MTBs and discuss issues that may arise as AI implementation progresses.
Affiliation(s)
- Ryuji Hamamoto
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
| | - Takafumi Koyama
- Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
| | - Nobuji Kouno
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Department of Surgery, Graduate School of Medicine, Kyoto University, Yoshida-konoe-cho, Sakyo-ku, Kyoto, 606-8303 Japan
| | - Tomohiro Yasuda
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Research and Development Group, Hitachi, Ltd., 1-280 Higashi-koigakubo, Kokubunji, Tokyo, 185-8601 Japan
| | - Shuntaro Yui
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Research and Development Group, Hitachi, Ltd., 1-280 Higashi-koigakubo, Kokubunji, Tokyo, 185-8601 Japan
| | - Kazuki Sudo
- Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Department of Medical Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
| | - Makoto Hirata
- Department of Genetic Medicine and Services, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
| | - Kuniko Sunami
- Department of Laboratory Medicine, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
| | - Takashi Kubo
- Department of Laboratory Medicine, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
| | - Ken Takasawa
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
| | - Satoshi Takahashi
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
| | - Hidenori Machino
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
| | - Kazuma Kobayashi
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
| | - Ken Asada
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
| | - Masaaki Komatsu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
| | - Syuzo Kaneko
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027 Japan
| | - Yasushi Yatabe
- Department of Diagnostic Pathology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
- Division of Molecular Pathology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
| | - Noboru Yamamoto
- Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo, 104-0045 Japan
| |
|
12
|
Chen X, Xu H, Qi Q, Sun C, Jin J, Zhao H, Wang X, Weng W, Wang S, Sui X, Wang Z, Dai C, Peng M, Wang D, Hao Z, Huang Y, Wang X, Duan L, Zhu Y, Hong N, Yang F. AI-based chest CT semantic segmentation algorithm enables semi-automated lung cancer surgery planning by recognizing anatomical variants of pulmonary vessels. Front Oncol 2022; 12:1021084. [PMID: 36324583 PMCID: PMC9621115 DOI: 10.3389/fonc.2022.1021084] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Accepted: 09/26/2022] [Indexed: 11/16/2022] Open
Abstract
Background The recognition of anatomical variants is essential in preoperative planning for lung cancer surgery. Although three-dimensional (3-D) reconstruction provides an intuitive demonstration of the anatomical structure, the recognition process remains fully manual. To render a semiautomated approach for surgery planning, we developed an artificial intelligence (AI)-based chest CT semantic segmentation algorithm that recognizes pulmonary vessels at the lobular or segmental level. Here, we present a retrospective validation of the algorithm against surgeons' performance. Methods The semantic segmentation algorithm to be validated was trained on non-contrast CT scans from a single center. A retrospective pilot study was performed. An independent validation dataset was constituted by an arbitrary selection from patients who underwent lobectomy or segmentectomy in three institutions between April 2020 and June 2021. The gold standard of anatomical variants for each enrolled case was obtained via expert surgeons' judgments based on chest CT, 3-D reconstruction, and surgical observation. The performance of the algorithm was compared against that of two junior thoracic surgery attendings based on chest CT. Results A total of 27 cases were included in this study. The overall case-wise accuracy of the AI model was 82.8% for pulmonary vessels, compared to 78.8% and 77.0% for the two surgeons, respectively. Segmental artery accuracy was 79.7%, 73.6%, and 72.7%; lobular vein accuracy was 96.3%, 96.3%, and 92.6% for the AI model and the two surgeons, respectively. No statistical significance was found. In subgroup analysis, the anatomic structure-wise analysis of the AI algorithm showed a significant difference in accuracy between lobes (p = 0.012), with higher AI accuracy for the right-upper lobe (RUL) and left-lower lobe (LLL) arteries. A trend of better performance on non-contrast CT was also detected. Most recognition errors by the algorithm were misclassifications of LA1+2 and LA3. Radiological parameters did not exhibit a significant impact on the performance of either the AI or the surgeons. Conclusion The semantic segmentation algorithm achieves recognition of the segmental pulmonary artery and the lobular pulmonary vein, with performance approximating that of junior thoracic surgery attendings. Our work provides a novel semiautomated surgery planning approach that is potentially beneficial to lung cancer patients.
Affiliation(s)
- Xiuyuan Chen
- Department of Thoracic Surgery, Peking University People’s Hospital, Beijing, China
- Thoracic Oncology Institute, Peking University People’s Hospital, Beijing, China
| | - Hao Xu
- Department of Thoracic Surgery, Peking University People’s Hospital, Beijing, China
- Thoracic Oncology Institute, Peking University People’s Hospital, Beijing, China
| | - Qingyi Qi
- Department of Radiology, Peking University People’s Hospital, Beijing, China
| | - Chao Sun
- Department of Radiology, Peking University People’s Hospital, Beijing, China
| | - Jian Jin
- Department of Thoracic Surgery, Peking University People’s Hospital, Beijing, China
- Thoracic Oncology Institute, Peking University People’s Hospital, Beijing, China
| | - Heng Zhao
- Department of Thoracic Surgery, Peking University People’s Hospital, Beijing, China
- Thoracic Oncology Institute, Peking University People’s Hospital, Beijing, China
| | - Xun Wang
- Department of Thoracic Surgery, Peking University People’s Hospital, Beijing, China
- Thoracic Oncology Institute, Peking University People’s Hospital, Beijing, China
| | - Wenhan Weng
- Department of Thoracic Surgery, Peking University People’s Hospital, Beijing, China
- Thoracic Oncology Institute, Peking University People’s Hospital, Beijing, China
| | - Shaodong Wang
- Department of Thoracic Surgery, Peking University People’s Hospital, Beijing, China
- Thoracic Oncology Institute, Peking University People’s Hospital, Beijing, China
| | - Xizhao Sui
- Department of Thoracic Surgery, Peking University People’s Hospital, Beijing, China
- Thoracic Oncology Institute, Peking University People’s Hospital, Beijing, China
| | - Zhenfan Wang
- Department of Thoracic Surgery, Peking University People’s Hospital, Beijing, China
- Thoracic Oncology Institute, Peking University People’s Hospital, Beijing, China
| | - Chenyang Dai
- Thoracic Surgery Department, Shanghai Pulmonary Hospital, Shanghai, China
| | - Muyun Peng
- Thoracic Surgery Department, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Dawei Wang
- Institute of Advanced Research, Infervision Medical Technology Co., Ltd, Beijing, China
| | - Zenghao Hao
- Institute of Advanced Research, Infervision Medical Technology Co., Ltd, Beijing, China
| | - Yafen Huang
- Institute of Advanced Research, Infervision Medical Technology Co., Ltd, Beijing, China
| | - Xiang Wang
- Thoracic Surgery Department, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Liang Duan
- Thoracic Surgery Department, Shanghai Pulmonary Hospital, Shanghai, China
| | - Yuming Zhu
- Thoracic Surgery Department, Shanghai Pulmonary Hospital, Shanghai, China
| | - Nan Hong
- Department of Radiology, Peking University People’s Hospital, Beijing, China
| | - Fan Yang
- Department of Thoracic Surgery, Peking University People’s Hospital, Beijing, China
- Thoracic Oncology Institute, Peking University People’s Hospital, Beijing, China
- *Correspondence: Fan Yang,
| |
|
13
|
Alexopoulos P, Madu C, Wollstein G, Schuman JS. The Development and Clinical Application of Innovative Optical Ophthalmic Imaging Techniques. Front Med (Lausanne) 2022; 9:891369. [PMID: 35847772 PMCID: PMC9279625 DOI: 10.3389/fmed.2022.891369] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 05/23/2022] [Indexed: 11/22/2022] Open
Abstract
The field of ophthalmic imaging has grown substantially over recent years. Massive improvements in image processing and computer hardware have allowed the emergence of multiple imaging techniques of the eye that can transform patient care. The purpose of this review is to describe the most recent advances in eye imaging and explain how new technologies and imaging methods can be utilized in a clinical setting. The introduction of optical coherence tomography (OCT) was a revolution in eye imaging and has since become the standard of care for a plethora of conditions. Its most recent iterations, OCT angiography and visible-light OCT, as well as modalities such as fluorescence lifetime imaging ophthalmoscopy, allow a more thorough evaluation of patients and provide additional information on disease processes. Toward that goal, the application of adaptive optics (AO) and full-field scanning to a variety of eye imaging techniques has further allowed the histologic study of single cells in the retina and anterior segment. Toward the goal of remote eye care and more accessible eye imaging, methods such as handheld OCT devices and imaging through smartphones have emerged. Finally, incorporating artificial intelligence (AI) into eye imaging has the potential to become a new milestone for the field while also contributing to the social aspects of eye care.
Affiliation(s)
- Palaiologos Alexopoulos
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
| | - Chisom Madu
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
| | - Gadi Wollstein
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
- Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States
| | - Joel S. Schuman
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
- Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
| |
|
14
|
Movaghar A, Page D, Brilliant M, Mailick M. Advancing artificial intelligence-assisted pre-screening for fragile X syndrome. BMC Med Inform Decis Mak 2022; 22:152. [PMID: 35689224 PMCID: PMC9185893 DOI: 10.1186/s12911-022-01896-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 06/01/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Fragile X syndrome (FXS), the most common inherited cause of intellectual disability and autism, is significantly underdiagnosed in the general population. Diagnosing FXS is challenging due to the heterogeneity of the condition, subtle physical characteristics at birth, and the similarity of its phenotypes to those of other conditions. The medical complexity of FXS underscores an urgent need for more efficient and effective screening methods to identify affected individuals. In this study, we evaluate the effectiveness of using artificial intelligence (AI) and electronic health records (EHRs) to accelerate FXS diagnosis. METHODS The EHRs of 2.1 million patients served by the University of Wisconsin Health System (UW Health) were the main data source for this retrospective study. UW Health includes patients from south central Wisconsin, with approximately 33 years (1988-2021) of digitized health data. We identified all participants who received a code for FXS in the form of the International Classification of Diseases (ICD), Ninth or Tenth Revision (ICD9 = 759.83, ICD10 = Q99.2). Only individuals who received the FXS code on at least two occasions ("Rule of 2") were classified as clinically diagnosed cases. To ensure the availability of sufficient data prior to clinical diagnosis for testing the model, only individuals diagnosed after age 10 were included in the analysis. A supervised random forest classifier was used to create an AI-assisted pre-screening tool to identify FXS cases, based on their medical records, 5 years earlier than the time of clinical diagnosis. The area under the receiver operating characteristic curve (AUROC) is reported; it reflects the level of success in identifying cases and controls (AUROC = 1 represents perfect classification). RESULTS 52 individuals were identified as target cases and matched with 5200 controls. The AI-assisted pre-screening tool identified FXS cases 5 years earlier than the time of clinical diagnosis, with an AUROC of 0.717. A separate model trained and tested on UW Health cases achieved an AUROC of 0.798. CONCLUSIONS These results show the potential utility of our tool in accelerating FXS diagnosis in real clinical settings. Earlier diagnosis can lead to more timely intervention and access to services, with the goal of improving patients' health outcomes.
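The workflow this abstract describes (a supervised random forest over diagnosis-code features, evaluated by AUROC on a heavily imbalanced matched cohort) can be sketched as follows. This is an illustrative sketch only, not the authors' code: the data are synthetic, and the cohort sizes merely mirror the 52-case/5200-control ratio reported above.

```python
# Sketch of random-forest pre-screening on binary diagnosis-code features.
# Synthetic data; feature count and code probabilities are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic cohort: 52 cases matched with 5200 controls (1:100, as in the study).
n_cases, n_controls, n_codes = 52, 5200, 30
X_cases = rng.binomial(1, 0.25, size=(n_cases, n_codes))      # cases carry codes more often
X_controls = rng.binomial(1, 0.10, size=(n_controls, n_codes))
X = np.vstack([X_cases, X_controls]).astype(float)
y = np.concatenate([np.ones(n_cases), np.zeros(n_controls)])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# class_weight="balanced" compensates for the extreme case/control imbalance.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

auroc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUROC: {auroc:.3f}")
```

AUROC is the natural metric here because, at a 1:100 prevalence, raw accuracy would be dominated by the controls.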
Affiliation(s)
- Arezoo Movaghar
- Waisman Center, University of Wisconsin-Madison, 1500 Highland Avenue, Madison, WI, 53705, USA.
| | - David Page
- Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, USA
| | - Murray Brilliant
- Waisman Center, University of Wisconsin-Madison, 1500 Highland Avenue, Madison, WI, 53705, USA
| | - Marsha Mailick
- Waisman Center, University of Wisconsin-Madison, 1500 Highland Avenue, Madison, WI, 53705, USA
| |
|
15
|
Spanos K, Giannoukas AD, Kouvelos G, Tsougos I, Mavroforou A. Artificial Intelligence application in Vascular Diseases. J Vasc Surg 2022; 76:615-619. [PMID: 35661694 DOI: 10.1016/j.jvs.2022.03.895] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Accepted: 03/11/2022] [Indexed: 11/28/2022]
Affiliation(s)
- Konstantinos Spanos
- Department of Vascular Surgery, School of Health Sciences, University of Thessaly, Larissa, Greece.
| | - Athanasios D Giannoukas
- Department of Vascular Surgery, School of Health Sciences, University of Thessaly, Larissa, Greece.
| | - George Kouvelos
- Department of Vascular Surgery, School of Health Sciences, University of Thessaly, Larissa, Greece.
| | - Ioannis Tsougos
- Department of Medical Physics and Informatics, Faculty of Medicine, School of Health Sciences, University of Thessaly, Larissa, Greece.
| | - Anna Mavroforou
- Deontology and Bioethics Lab, Faculty of Nursing, School of Health Sciences, University of Thessaly, Larissa, Greece.
| |
|
16
|
Wu CW, Chen HY, Chen JY, Lee CH. Glaucoma Detection Using Support Vector Machine Method Based on Spectralis OCT. Diagnostics (Basel) 2022; 12:diagnostics12020391. [PMID: 35204482 PMCID: PMC8871188 DOI: 10.3390/diagnostics12020391] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Revised: 01/17/2022] [Accepted: 01/30/2022] [Indexed: 02/01/2023] Open
Abstract
Among the available OCT machines, Spectralis optical coherence tomography (OCT) provides the most detailed parameters in the peripapillary and macular areas, but the enormous amount of information it generates (114 features) is not easy to interpret in glaucoma assessment. Machine learning methodology has been widely applied to glaucoma detection in recent years and can process a large amount of information at once. Here we aimed to analyze the diagnostic capability of Spectralis OCT parameters for glaucoma detection using the Support Vector Machine (SVM) classification method in our population. Our results showed that applying all OCT features with the SVM method had good capability in detecting glaucomatous eyes (area under the curve (AUC) = 0.82), as well as discriminating normal eyes from early, moderate, or severe glaucomatous eyes (AUC = 0.78, 0.89, and 0.93, respectively). Apart from using all OCT features, minimum rim width (MRW) may be a good feature group for discriminating early glaucomatous from normal eyes (AUC = 0.78). The combination of peripapillary and macular parameters, including MRW_temporal inferior (TI), MRW_global (G), ganglion cell layer (GCL)_outer temporal (T2), GCL_inner inferior (I1), peripapillary nerve fiber layer thickness (ppNFLT)_temporal superior (TS), and GCL_inner temporal (T1), provided better results (AUC = 0.84). This study shows promise for glaucoma management in the Taiwanese population. However, further validation is needed to test the performance of our proposed model in the real world.
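The pipeline summarized above (an SVM over 114 OCT-derived features, scored by AUC) can be sketched in a few lines. This is a hedged illustration under stated assumptions, not the study's pipeline: the data are synthetic stand-ins for the Spectralis parameters, and the class separation is invented.

```python
# Sketch of SVM classification of glaucomatous vs. normal eyes on 114
# OCT-style features. Synthetic data; means and sizes are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_per_class, n_features = 100, 114
normal = rng.normal(0.0, 1.0, size=(n_per_class, n_features))
glaucoma = rng.normal(0.6, 1.0, size=(n_per_class, n_features))  # shifted, e.g. thinner MRW/RNFL
X = np.vstack([normal, glaucoma])
y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1
)

# Scale features before the RBF kernel; probability=True gives scores for AUC.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=1))
svm.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")
```

Standardizing the features matters for SVMs because OCT thickness parameters come in very different numeric ranges; without scaling, a few large-valued features would dominate the kernel.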
Affiliation(s)
- Chao-Wei Wu
- Graduate Institute of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung City 807378, Taiwan;
- Department of Ophthalmology, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung City 807378, Taiwan
| | - Hsin-Yi Chen
- Department of Ophthalmology, Fu Jen Catholic University Hospital, New Taipei City 24352, Taiwan
- School of Medicine, College of Medicine, Fu Jen Catholic University, New Taipei City 242062, Taiwan
- Correspondence: (H.-Y.C.); (C.-H.L.)
| | - Jui-Yu Chen
- Institute of Electrical and Control Engineering, National Yang Ming Chiao Tung University, Hsinchu City 30010, Taiwan;
| | - Ching-Hung Lee
- Department of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City 30010, Taiwan
- Correspondence: (H.-Y.C.); (C.-H.L.)
| |
|
17
|
|
18
|
Eleftheriadis GK, Genina N, Boetker J, Rantanen J. Modular design principle based on compartmental drug delivery systems. Adv Drug Deliv Rev 2021; 178:113921. [PMID: 34390776 DOI: 10.1016/j.addr.2021.113921] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Revised: 07/21/2021] [Accepted: 08/09/2021] [Indexed: 12/28/2022]
Abstract
The current manufacturing solutions for oral solid dosage forms are fundamentally based on technologies from the 19th century. This approach is well suited for mass production of one-size-fits-all products; however, it does not allow for straightforward personalization and mass customization of the pharmaceutical end-product. In order to provide better therapies to patients, the need for innovative manufacturing concepts and product design principles has been growing. Additive manufacturing opens up a possibility for compartmentalization of drug products, including the design of spatially separated multidrug and functional excipient compartments. This compartmentalized solution can be further expanded to modular design thinking. Modular design refers to the combination of building blocks containing a given amount of drug compound(s) and related functional excipients into a larger final product. Implementation of modular design principles is paving the way for realizing the emerging personalization potential within the health sciences by designing compartmental and reactive product structures that can be manufactured based on the individual needs of each patient. This review will introduce the existing compartmentalized product design principles and discuss their integration into edible electronics, allowing for innovative control of drug release.
Affiliation(s)
| | - Natalja Genina
- Department of Pharmacy, Faculty of Health and Medical Sciences, University of Copenhagen, DK-2100 Copenhagen, Denmark
| | - Johan Boetker
- Department of Pharmacy, Faculty of Health and Medical Sciences, University of Copenhagen, DK-2100 Copenhagen, Denmark
| | - Jukka Rantanen
- Department of Pharmacy, Faculty of Health and Medical Sciences, University of Copenhagen, DK-2100 Copenhagen, Denmark.
| |
|
19
|
Wu Y, Szymanska M, Hu Y, Fazal MI, Jiang N, Yetisen AK, Cordeiro MF. Measures of disease activity in glaucoma. Biosens Bioelectron 2021; 196:113700. [PMID: 34653715 DOI: 10.1016/j.bios.2021.113700] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Revised: 10/01/2021] [Accepted: 10/08/2021] [Indexed: 12/13/2022]
Abstract
Glaucoma is the leading cause of irreversible blindness globally; it significantly affects quality of life and has a substantial economic impact. Effective detection methods are necessary to identify glaucoma as early as possible. Regular eye examinations are important for detecting the disease early and preventing deterioration of vision and quality of life. Current methods of measuring disease activity are powerful in describing the functional and structural changes in glaucomatous eyes. However, there is still a need for a novel tool to detect glaucoma earlier and more accurately. Tear fluid biomarker analysis and new imaging technology provide novel surrogate endpoints of glaucoma. Artificial intelligence is a post-diagnostic tool that can analyse ophthalmic test results. A detailed review of currently used clinical tests in glaucoma, including intraocular pressure testing, visual field testing, and optical coherence tomography, is presented. Advanced technologies for glaucoma measurement that can identify specific disease characteristics, as well as the mechanisms, performance, and future perspectives of these devices, are highlighted. Applications of AI in glaucoma diagnosis and prediction are discussed. With the development of imaging tools, sensor technologies, and artificial intelligence, the diagnostic evaluation of glaucoma must assess more variables to facilitate earlier diagnosis and management in the future.
Affiliation(s)
- Yue Wu
- Department of Surgery and Cancer, Imperial College London, South Kensington, London, United Kingdom; Department of Chemical Engineering, Imperial College London, South Kensington, London, United Kingdom
| | - Maja Szymanska
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, United Kingdom
| | - Yubing Hu
- Department of Chemical Engineering, Imperial College London, South Kensington, London, United Kingdom.
| | - M Ihsan Fazal
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, United Kingdom
| | - Nan Jiang
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, China
| | - Ali K Yetisen
- Department of Chemical Engineering, Imperial College London, South Kensington, London, United Kingdom
| | - M Francesca Cordeiro
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, United Kingdom; The Western Eye Hospital, Imperial College Healthcare NHS Trust (ICHNT), London, United Kingdom; Glaucoma and Retinal Neurodegeneration Group, Department of Visual Neuroscience, UCL Institute of Ophthalmology, London, United Kingdom.
| |
|
20
|
MasPA: A Machine Learning Application to Predict Risk of Mastitis in Cattle from AMS Sensor Data. AGRIENGINEERING 2021. [DOI: 10.3390/agriengineering3030037] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
Mastitis is a common disease in cattle, caused mainly by environmental pathogens; it is also the most expensive disease for cattle in dairy farms. Several prevention and treatment methods are available, although most of these options are quite expensive, especially for small farms. In this study, we utilized a dataset of 6600 cattle along with several of their sensory parameters (collected via inexpensive sensors) and their mastitis status. Supervised machine learning approaches were deployed to determine the most effective parameters for predicting the risk of mastitis in cattle. To achieve this goal, 26 classification models were built, among which the best-performing model (the highest accuracy in the shortest time) was selected. Hyperparameter tuning and K-fold cross-validation were applied to further boost the top model's performance while avoiding bias and overfitting. The model was then used to build a GUI application that can be used online as a web application. The application can predict the risk of mastitis in cattle from the inhale and exhale limits of their udder and their temperature with an accuracy of 98.1% and a sensitivity and specificity of 99.4% and 98.8%, respectively. The full potential of this application can be realized via the standalone version, which can easily be integrated into an automatic milking system to detect the risk of mastitis in real time.
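The model-selection loop described above (building many classifiers, scoring each with K-fold cross-validation, and keeping the most accurate) can be sketched as below. This is an illustrative sketch, not the MasPA code: the data are synthetic, only three of the 26 model families are shown, and the sensor feature names are hypothetical.

```python
# Sketch of classifier comparison with stratified K-fold CV and best-model
# selection. Synthetic data; features and coefficients are invented.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 600
# Hypothetical sensor features: udder inhale limit, exhale limit, temperature.
X = rng.normal(size=(n, 3))
risk = X @ np.array([1.5, -1.0, 2.0]) + rng.normal(scale=0.5, size=n)
y = (risk > 0).astype(int)  # 1 = at risk of mastitis

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=2),
    "forest": RandomForestClassifier(n_estimators=100, random_state=2),
}
# Stratified folds keep the class balance identical across splits,
# which guards against optimistic or pessimistic fold accuracies.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)
scores = {name: cross_val_score(m, X, y, cv=cv, scoring="accuracy").mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

In a fuller version, each candidate would additionally be timed and hyperparameter-tuned (e.g. with `GridSearchCV`) before the accuracy-vs-speed trade-off is made, as the abstract describes.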
|
21
|
Zippel C, Bohnet-Joschko S. Rise of Clinical Studies in the Field of Machine Learning: A Review of Data Registered in ClinicalTrials.gov. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:5072. [PMID: 34064827 PMCID: PMC8151906 DOI: 10.3390/ijerph18105072] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Revised: 05/06/2021] [Accepted: 05/07/2021] [Indexed: 12/29/2022]
Abstract
Although advances in machine-learning healthcare applications promise great potential for innovative medical care, few data are available on the translational status of these new technologies. We aimed to provide a comprehensive characterization of the development and status quo of clinical studies in the field of machine learning. For this purpose, we performed a registry-based analysis of machine-learning-related studies that were published and first available in the ClinicalTrials.gov database until 2020, using the database's study classification. In total, n = 358 eligible studies could be included in the analysis. Of these, 82% were initiated by academic institutions/university (hospitals) and 18% by industry sponsors. A total of 96% were national and 4% international. About half of the studies (47%) had at least one recruiting location in a country in North America, followed by Europe (37%) and Asia (15%). Most of the studies reported were initiated in the medical field of imaging (12%), followed by cardiology, psychiatry, anesthesia/intensive care medicine (all 11%) and neurology (10%). Although the majority of the clinical studies were still initiated in an academic research context, the first industry-financed projects on machine-learning-based algorithms are becoming visible. The number of clinical studies with machine-learning-related applications and the variety of medical challenges addressed serve to indicate their increasing importance in future clinical care. Finally, they also set a time frame for the adjustment of medical device-related regulation and governance.
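The registry analysis above boils down to tabulating categorical shares (sponsor type, medical field) over study records. A minimal sketch of that tabulation is shown here with toy records and hypothetical field names, not the actual ClinicalTrials.gov export:

```python
# Sketch of tabulating sponsor-type and medical-field percentages over
# registry records. Toy data; column names are hypothetical.
import pandas as pd

studies = pd.DataFrame({
    "sponsor": ["academic", "academic", "industry", "academic", "industry"],
    "field": ["imaging", "cardiology", "imaging", "psychiatry", "neurology"],
})

# value_counts(normalize=True) yields fractions; scale to percentages.
sponsor_pct = studies["sponsor"].value_counts(normalize=True).mul(100).round(1)
field_pct = studies["field"].value_counts(normalize=True).mul(100).round(1)
print(sponsor_pct.to_dict())  # e.g. academic vs. industry share
print(field_pct.to_dict())    # e.g. share of imaging studies
```

On the real export, the same two lines would reproduce the 82%/18% sponsor split and the per-specialty percentages reported in the abstract.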
Affiliation(s)
| | - Sabine Bohnet-Joschko
- Chair of Management and Innovation in Health Care, Faculty of Management, Economics and Society, Witten/Herdecke University, 58448 Witten, Germany;
| |
|