1
Bartolotta TV, Militello C, Prinzi F, Ferraro F, Rundo L, Zarcaro C, Dimarco M, Orlando AAM, Matranga D, Vitabile S. Artificial intelligence-based, semi-automated segmentation for the extraction of ultrasound-derived radiomics features in breast cancer: a prospective multicenter study. La Radiologia Medica 2024; 129:977-988. PMID: 38724697; PMCID: PMC11252191; DOI: 10.1007/s11547-024-01826-7.
Abstract
PURPOSE To investigate the feasibility of artificial intelligence (AI)-based semi-automated segmentation for the extraction of ultrasound (US)-derived radiomics features in the characterization of focal breast lesions (FBLs). MATERIAL AND METHODS Two expert radiologists classified 352 FBLs detected in 352 patients (237 at Center A and 115 at Center B) according to US BI-RADS criteria. AI-based semi-automated segmentation was used to build a machine learning (ML) model on B-mode US images of 237 patients (Center A), which was then validated on an external cohort of B-mode US images of 115 patients (Center B). RESULTS A total of 202 of 352 (57.4%) FBLs were benign and 150 of 352 (42.6%) were malignant. The AI-based semi-automated segmentation achieved a success rate of 95.7% for one reviewer and 96% for the other, with no significant difference (p = 0.839). A total of 15 (4.3%) and 14 (4%) of the 352 semi-automated segmentations were not accepted due to posterior acoustic shadowing at B-mode US, and 13 and 10 of these, respectively, corresponded to malignant lesions. In the validation cohort, characterization by the expert radiologist yielded sensitivity, specificity, PPV and NPV of 0.933, 0.9, 0.857 and 0.955, respectively. The ML model obtained sensitivity, specificity, PPV and NPV of 0.544, 0.6, 0.416 and 0.628, respectively. The combined assessment of radiologists and the ML model yielded sensitivity, specificity, PPV and NPV of 0.756, 0.928, 0.872 and 0.855, respectively. CONCLUSION AI-based semi-automated segmentation is feasible, allowing instantaneous and reproducible extraction of US-derived radiomics features of FBLs. The combination of radiomics and US BI-RADS classification led to a potential decrease in unnecessary biopsies, but at the expense of a non-negligible increase in potentially missed cancers.
Affiliation(s)
- Tommaso Vincenzo Bartolotta
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
- Carmelo Militello
- Institute for High-Performance Computing and Networking (ICAR-CNR), Italian National Research Council, Palermo, Italy
- Francesco Prinzi
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
- Department of Computer Science and Technology, University of Cambridge, Cambridge, UK
- Fabiola Ferraro
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
- Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, Fisciano, SA, Italy
- Calogero Zarcaro
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
- Domenica Matranga
- Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties (ProMISE), University of Palermo, Palermo, Italy
- Salvatore Vitabile
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
2
Smith H, Downer J, Ives J. Clinicians and AI use: where is the professional guidance? Journal of Medical Ethics 2024; 50:437-441. PMID: 37607805; PMCID: PMC11228205; DOI: 10.1136/jme-2022-108831.
Abstract
With the introduction of artificial intelligence (AI) to healthcare, there is also a need for professional guidance to support its use. New (2022) reports from the National Health Service AI Lab and Health Education England focus on healthcare workers' understanding of and confidence in AI clinical decision support systems (AI-CDSSs), and are concerned with developing trust in, and the trustworthiness of, these systems. While they offer guidance to aid developers and purchasers of such systems, they offer little specific guidance for the clinical users who will be required to use them in patient care. This paper argues that clinical, professional and reputational safety will be put at risk if this deficit of professional guidance for clinical users of AI-CDSSs is not redressed. We argue that it is not enough to develop training for clinical users without first establishing professional guidance regarding the rights and expectations of clinical users. We conclude with a call to action for clinical regulators: to unite in drafting guidance for users of AI-CDSSs that helps manage clinical, professional and reputational risks. We further suggest that this exercise offers an opportunity to address fundamental issues in the use of AI-CDSSs, regarding, for example, the fair burden of responsibility for outcomes.
Affiliation(s)
- Helen Smith
- Centre for Ethics in Medicine, Population Health Sciences, University of Bristol, Bristol, UK
- John Downer
- School of Sociology, Politics and International Studies, University of Bristol, Bristol, UK
- Jonathan Ives
- Centre for Ethics in Medicine, Population Health Sciences, University of Bristol, Bristol, UK
3
Lawton T, Morgan P, Porter Z, Hickey S, Cunningham A, Hughes N, Iacovides I, Jia Y, Sharma V, Habli I. Clinicians risk becoming 'liability sinks' for artificial intelligence. Future Healthc J 2024; 11:100007. PMID: 38646041; PMCID: PMC11025047; DOI: 10.1016/j.fhj.2024.100007.
Affiliation(s)
- Tom Lawton
- Improvement Academy, Bradford Institute for Health Research, Bradford Royal Infirmary, Duckworth Lane, Bradford BD9 6RJ, UK
- Assuring Autonomy International Programme, University of York, Heslington, York YO10 5DD, UK
- Phillip Morgan
- York Law School, University of York, Heslington, York YO10 5DD, UK
- Zoe Porter
- Assuring Autonomy International Programme, University of York, Heslington, York YO10 5DD, UK
- Shireen Hickey
- Improvement Academy, Bradford Institute for Health Research, Bradford Royal Infirmary, Duckworth Lane, Bradford BD9 6RJ, UK
- Alice Cunningham
- Improvement Academy, Bradford Institute for Health Research, Bradford Royal Infirmary, Duckworth Lane, Bradford BD9 6RJ, UK
- Nathan Hughes
- Assuring Autonomy International Programme, University of York, Heslington, York YO10 5DD, UK
- Ioanna Iacovides
- Department of Computer Science, University of York, Heslington, York YO10 5DD, UK
- Yan Jia
- Assuring Autonomy International Programme, University of York, Heslington, York YO10 5DD, UK
- Vishal Sharma
- Improvement Academy, Bradford Institute for Health Research, Bradford Royal Infirmary, Duckworth Lane, Bradford BD9 6RJ, UK
- Ibrahim Habli
- Assuring Autonomy International Programme, University of York, Heslington, York YO10 5DD, UK
4
Wu CC, Islam MM, Poly TN, Weng YC. Artificial Intelligence in Kidney Disease: A Comprehensive Study and Directions for Future Research. Diagnostics (Basel) 2024; 14:397. PMID: 38396436; PMCID: PMC10887584; DOI: 10.3390/diagnostics14040397.
Abstract
Artificial intelligence (AI) has emerged as a promising tool in healthcare, with an increasing number of research articles evaluating its applications in the domain of kidney disease. To comprehend the evolving landscape of AI research in kidney disease, a bibliometric analysis is essential. The purposes of this study are to systematically analyze and quantify the scientific output, research trends, and collaborative networks in the application of AI to kidney disease. This study collected AI-related articles published between 2012 and 20 November 2023 from the Web of Science. Descriptive analyses of research trends were used to determine the growth rate of publications by author, journal, institution, and country. Visualization network maps of country collaborations and author-keyword co-occurrences were generated to show the hotspots and research trends in AI research on kidney disease. The initial search yielded 673 articles, of which 631 were included in the analyses. Our findings reveal a noteworthy exponential growth trend in annual publications on AI applications in kidney disease. Nephrology Dialysis Transplantation emerged as the leading journal, accounting for 4.12% (26 of 631 papers), followed by the American Journal of Transplantation at 3.01% (19/631) and Scientific Reports at 2.69% (17/631). The primary contributors were predominantly from the United States (n = 164, 25.99%), followed by China (n = 156, 24.72%) and India (n = 62, 9.83%). In terms of institutions, Mayo Clinic led with 27 contributions (4.27%), while Harvard University (n = 19, 3.01%) and Sun Yat-Sen University (n = 16, 2.53%) secured the second and third positions, respectively. This study summarized AI research trends in the field of kidney disease through statistical analysis and network visualization. The findings show that the field of AI in kidney disease is dynamic and rapidly progressing, and they provide valuable information for recognizing emerging patterns, technological shifts, and interdisciplinary collaborations that contribute to the advancement of knowledge in this critical domain.
Affiliation(s)
- Chieh-Chen Wu
- Department of Healthcare Information and Management, School of Health and Medical Engineering, Ming Chuan University, Taipei 111, Taiwan
- Md. Mohaimenul Islam
- Outcomes and Translational Sciences, College of Pharmacy, The Ohio State University, Columbus, OH 43210, USA
- Tahmina Nasrin Poly
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- Yung-Ching Weng
- Department of Healthcare Information and Management, School of Health and Medical Engineering, Ming Chuan University, Taipei 111, Taiwan
5
Stephenson C, Jagayat J, Kumar A, Khamooshi P, Eadie J, Pannu A, Meartsi D, Danaee E, Gutierrez G, Khan F, Gizzarelli T, Patel C, Moghimi E, Yang M, Shirazi A, Omrani M, Patel A, Alavi N. Comparing clinical decision-making of AI technology to a multi-professional care team in an electronic cognitive behavioural therapy program for depression: protocol. Front Psychiatry 2023; 14:1220607. PMID: 38188047; PMCID: PMC10768033; DOI: 10.3389/fpsyt.2023.1220607.
Abstract
Introduction Depression is a leading cause of disability worldwide, affecting up to 300 million people globally. Despite its high prevalence and debilitating effects, only one-third of patients newly diagnosed with depression initiate treatment. Electronic cognitive behavioural therapy (e-CBT) is an effective treatment for depression and a feasible way to make mental health care more accessible. Due to its online format, e-CBT can be combined with variable therapist engagement to address different care needs. Typically, a multi-professional care team determines which combination of therapies most benefits the patient; however, this process can add to the costs of these programs. Artificial intelligence (AI) has been proposed to offset these costs. Methods This study is a double-blinded randomized controlled trial recruiting individuals experiencing depression. The intensity of care a participant receives will be decided at random by either: (1) a machine learning algorithm, or (2) an assessment made by a group of healthcare professionals. Subsequently, participants will receive depression-specific e-CBT treatment through a secure online platform. There will be three available intensities of therapist interaction: (1) e-CBT; (2) e-CBT with a 15-20-min phone/video call; and (3) e-CBT with pharmacotherapy. This approach aims to allocate care tailored to each patient's needs, allowing for more efficient use of resources. Discussion Artificial intelligence and the provision of varying intensities of care can increase the efficiency of mental health care services. This study aims to determine a cost-effective method of decreasing depressive symptoms and increasing treatment adherence to online psychotherapy by allocating the correct intensity of therapist care for individuals diagnosed with depression, comparing a decision-making machine learning algorithm with a multi-professional care team. Ethics The study received ethics approval and began participant recruitment in December 2022. Recruitment has been conducted through targeted advertisements and physician referrals. Complete data collection and analysis are expected to conclude by August 2024. Clinical trial registration ClinicalTrials.gov, identifier NCT04747873.
Affiliation(s)
- Callum Stephenson
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Jasleen Jagayat
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Centre for Neuroscience Studies, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Anchan Kumar
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Paniz Khamooshi
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Jazmin Eadie
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Department of Psychology, Faculty of Arts and Sciences, Queen’s University, Kingston, ON, Canada
- Amrita Pannu
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Dekel Meartsi
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Eileen Danaee
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Gilmar Gutierrez
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Ferwa Khan
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Tessa Gizzarelli
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Charmy Patel
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Elnaz Moghimi
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Megan Yang
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Mohsen Omrani
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- OPTT Inc., Toronto, ON, Canada
- Archana Patel
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Nazanin Alavi
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Centre for Neuroscience Studies, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
6
Bottomley D, Thaldar D. Liability for harm caused by AI in healthcare: an overview of the core legal concepts. Front Pharmacol 2023; 14:1297353. PMID: 38161692; PMCID: PMC10755877; DOI: 10.3389/fphar.2023.1297353.
Abstract
The integration of artificial intelligence (AI) into healthcare in Africa presents transformative opportunities but also raises profound legal challenges, especially concerning liability. As AI becomes more autonomous, determining who or what is responsible when things go wrong becomes ambiguous. This article reviews the legal concepts relevant to liability for harm caused by AI in healthcare. While some suggest attributing legal personhood to AI as a potential solution, its feasibility remains controversial. The principal-agent relationship, in which the physician is held responsible for AI decisions, risks reducing the adoption of AI tools due to potential liabilities. Similarly, using product law to establish liability is problematic because of the dynamic learning nature of AI, which deviates from static products. This fluidity complicates traditional definitions of product defects and, by extension, where responsibility lies. Among the alternatives, risk-based determinations of liability, which focus on potential hazards rather than on specific fault assignments, emerge as a potential pathway, though these too present challenges in assigning accountability. Strict liability has been proposed as another avenue: it can simplify the compensation process for victims by focusing on the harm rather than on the fault. Yet concerns arise over the economic impact on stakeholders, the potential for unjust reputational damage, and the feasibility of global application. As an alternative to liability-based approaches, reconciliation, facilitated by regulatory sandboxes, holds much promise. In conclusion, while the integration of AI systems into healthcare holds vast potential, it necessitates a re-evaluation of our legal frameworks. The central challenge is how to adapt traditional concepts of liability to the novel and unpredictable nature of AI, or how to move away from liability towards reconciliation. Future discussions and research must navigate these complex waters and seek solutions that ensure both progress and protection.
Affiliation(s)
- Donrich Thaldar
- School of Law, University of KwaZulu-Natal, Durban, South Africa
7
Balsano C, Burra P, Duvoux C, Alisi A, Piscaglia F, Gerussi A. Artificial Intelligence and liver: Opportunities and barriers. Dig Liver Dis 2023; 55:1455-1461. PMID: 37718227; DOI: 10.1016/j.dld.2023.08.048.
Abstract
Artificial Intelligence (AI) has recently been shown to be an excellent tool for the study of the liver; however, many obstacles still have to be overcome before real-world hepatology can be digitalized. The authors present an overview of the current state of the art on the use of innovative technologies in different areas (big data, translational hepatology, imaging, and the transplant setting). In clinical practice, physicians must integrate a vast array of data modalities (medical history, clinical data, laboratory tests, imaging, and pathology slides) to reach a diagnostic or therapeutic decision. Unfortunately, machine learning and deep learning are still far from truly supporting clinicians in real-world practice; indeed, the accuracy of any technological support has no value in medicine without the support of clinicians. To make better use of new technologies, it is essential to improve clinicians' knowledge of them. To this end, the authors propose that collaborative networks for multidisciplinary approaches will speed the implementation of AI systems for developing disease-customized, AI-powered clinical decision support tools. The authors also discuss the ethical, educational, and legal challenges that must be overcome to build robust bridges and deploy potentially effective AI in real-world clinical settings.
Affiliation(s)
- Clara Balsano
- Department of Life, Health and Environmental Sciences-MESVA, School of Emergency-Urgency Medicine, University of L'Aquila, Piazzale Salvatore Tommasi 1, Coppito, L'Aquila 67100, Italy
- Patrizia Burra
- Multivisceral Transplant Unit, Gastroenterology, Department of Surgery, Oncology and Gastroenterology, Padua University Hospital, Padua, Italy
- Christophe Duvoux
- Department of Hepatology, Medical Liver Transplant Unit, Hospital Henri Mondor AP-HP, University of Paris-Est Créteil (UPEC), France
- Anna Alisi
- Research Unit of Molecular Genetics of Complex Phenotypes, Bambino Gesù Children's Hospital, IRCCS, Rome, Italy
- Fabio Piscaglia
- Department of Medical and Surgical Sciences, University of Bologna, Bologna, Italy
- Alessio Gerussi
- Division of Gastroenterology, Center for Autoimmune Liver Diseases, Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy; European Reference Network on Hepatological Diseases (ERN RARE-LIVER), Fondazione IRCCS San Gerardo dei Tintori, Monza, Italy
8
Drabiak K, Kyzer S, Nemov V, El Naqa I. AI and machine learning ethics, law, diversity, and global impact. Br J Radiol 2023; 96:20220934. PMID: 37191072; PMCID: PMC10546451; DOI: 10.1259/bjr.20220934.
Abstract
Artificial intelligence (AI) and its machine learning (ML) algorithms offer new promise for personalized biomedicine and more cost-effective healthcare, with an impressive technical capability to mimic human cognition. However, widespread application of this promising technology has been limited in the medical domain, and expectations have been tempered by ethical challenges and concerns regarding patient privacy, legal responsibility, trustworthiness, and fairness. To balance technical innovation with ethical applications of AI/ML, developers must demonstrate that the AI functions as intended and adopt strategies to minimize the risks of failure or bias. This review describes the new ethical challenges created by AI/ML for clinical care and identifies specific considerations for its practice in medicine. We provide an overview of regulatory and legal issues applicable in Europe and the United States, a description of technical aspects to consider, and recommendations for trustworthy AI/ML that promote transparency, minimize risks of bias or error, and protect patient well-being.
Affiliation(s)
- Katherine Drabiak
- Colleges of Public Health and Medicine, University of South Florida, Tampa, FL, USA
- Skylar Kyzer
- Colleges of Public Health and Medicine, University of South Florida, Tampa, FL, USA
- Valerie Nemov
- Colleges of Public Health and Medicine, University of South Florida, Tampa, FL, USA
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, USA
9
Patsakis C, Lykousas N. Man vs the machine in the struggle for effective text anonymisation in the age of large language models. Sci Rep 2023; 13:16026. PMID: 37749217; PMCID: PMC10519977; DOI: 10.1038/s41598-023-42977-3.
Abstract
The collection and use of personal data are becoming more common in today's data-driven culture. While this has many advantages, including better decision-making and service delivery, it also poses significant ethical issues around confidentiality and privacy. Text anonymisation tries to prune and/or mask identifiable information from a text while keeping the remaining content intact, to alleviate privacy concerns. Text anonymisation is especially important in fields such as healthcare, law, and research, where sensitive and personal information is collected, processed, and exchanged under high legal and ethical standards. Although text anonymisation is widely adopted in practice, it continues to face considerable challenges. The most significant challenge is striking a balance between removing information to protect individuals' privacy and maintaining the text's usability for future purposes. The question is whether these anonymisation methods sufficiently reduce the risk of re-identification, in which an individual can be identified from the information remaining in the text. In this work, we challenge the effectiveness of these methods and how we perceive identifiers. We assess the efficacy of these methods against the elephant in the room: the use of AI over big data. While most research focuses on identifying and removing personal information, there is limited discussion of whether the remaining information is sufficient to deanonymise individuals and, more precisely, of who can do it. To this end, we conduct an experiment using GPT over anonymised texts of famous people to determine whether such trained networks can deanonymise them. This allows us to revise these methods and introduce a novel methodology that employs Large Language Models to improve the anonymity of texts.
Affiliation(s)
- Constantinos Patsakis
- Department of Informatics, University of Piraeus, 80 Karaoli & Dimitriou str, 18534, Piraeus, Greece
- Management Systems Institute of Athena Research Centre, Marousi, Greece
- Nikolaos Lykousas
- Management Systems Institute of Athena Research Centre, Marousi, Greece
- Data Centric Services, Bucharest, Romania
10
Redrup Hill E, Mitchell C, Brigden T, Hall A. Ethical and legal considerations influencing human involvement in the implementation of artificial intelligence in a clinical pathway: A multi-stakeholder perspective. Front Digit Health 2023; 5:1139210. PMID: 36999168; PMCID: PMC10043985; DOI: 10.3389/fdgth.2023.1139210.
Abstract
Introduction Ethical and legal factors will have an important bearing on when and whether automation is appropriate in healthcare. There is a developing literature on the ethics of artificial intelligence (AI) in health, including specific legal or regulatory questions such as whether there is a right to an explanation of AI decision-making. However, there has been limited consideration of the specific ethical and legal factors that influence when, and in what form, human involvement may be required in the implementation of AI in a clinical pathway, and of the views of the wide range of stakeholders involved. To address this question, we chose the exemplar of the pathway for the early detection of Barrett's Oesophagus (BE) and oesophageal adenocarcinoma, where Gehrung and colleagues have developed a “semi-automated”, deep-learning system to analyse samples from the Cytosponge™ TFF3 test (a minimally invasive alternative to endoscopy), and where AI promises to mitigate increasing demands on pathologists' time and input. Methods We gathered a multidisciplinary group of stakeholders, including developers, patients, healthcare professionals and regulators, to obtain their perspectives on the ethical and legal issues that may arise using this exemplar. Results The findings are grouped under six general themes: risk and potential harms; impacts on human experts; equity and bias; transparency and oversight; patient information and choice; and accountability, moral responsibility and liability for error. Within these themes, a range of subtle and context-specific elements emerged, highlighting the importance of pre-implementation, interdisciplinary discussions and appreciation of pathway-specific considerations. Discussion To evaluate these findings, we draw on the well-established principles of biomedical ethics identified by Beauchamp and Childress as a lens through which to view these results and their implications for personalised medicine. Our findings are not only relevant to this context but have implications for AI in digital pathology and healthcare more broadly.
11
Amann J, Vayena E, Ormond KE, Frey D, Madai VI, Blasimme A. Expectations and attitudes towards medical artificial intelligence: A qualitative study in the field of stroke. PLoS One 2023; 18:e0279088. PMID: 36630325; PMCID: PMC9833517; DOI: 10.1371/journal.pone.0279088.
Abstract
INTRODUCTION Artificial intelligence (AI) has the potential to transform clinical decision-making as we know it. Powered by sophisticated machine learning algorithms, clinical decision support systems (CDSS) can generate unprecedented amounts of predictive information about individuals' health. Yet, despite the potential of these systems to promote proactive decision-making and improve health outcomes, their utility and impact remain poorly understood due to their still-rare application in clinical practice. Taking the example of AI-powered CDSS in stroke medicine as a case in point, this paper provides a nuanced account of stroke survivors', family members', and healthcare professionals' expectations of and attitudes towards medical AI. METHODS We followed a qualitative research design informed by the sociology of expectations, which recognizes the generative role of individuals' expectations in shaping scientific and technological change. Semi-structured interviews were conducted with stroke survivors, family members, and healthcare professionals specialized in stroke based in Germany and Switzerland. Data was analyzed using a combination of inductive and deductive thematic analysis. RESULTS Based on the participants' deliberations, we identified four presumed roles that medical AI could play in stroke medicine: an administrative, an assistive, an advisory, and an autonomous role. While most participants held positive attitudes towards medical AI and its potential to increase accuracy, speed, and efficiency in medical decision-making, they also cautioned that it is not a stand-alone solution and may even lead to new problems. Participants particularly emphasized the importance of relational aspects and raised questions regarding the impact of AI on roles and responsibilities and on patients' rights to information and decision-making. These findings shed light on the potential impact of medical AI on professional identities, role perceptions, and the doctor-patient relationship. CONCLUSION Our findings highlight the need for a more differentiated approach to identifying and tackling pertinent ethical and legal issues in the context of medical AI. We advocate for stakeholder and public involvement in the development of AI and of AI governance to ensure that medical AI offers solutions to the most pressing challenges patients and clinicians face in clinical care.
Affiliation(s)
- Julia Amann
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Effy Vayena
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Kelly E. Ormond
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Dietmar Frey
- CLAIM—Charité Lab for AI in Medicine, Charité Universitätsmedizin Berlin, Berlin, Germany
- Vince I. Madai
- CLAIM—Charité Lab for AI in Medicine, Charité Universitätsmedizin Berlin, Berlin, Germany
- QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Berlin, Germany
- School of Computing and Digital Technology, Faculty of Computing, Engineering and the Built Environment, Birmingham City University, Birmingham, United Kingdom
- Alessandro Blasimme
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
12
Kerasidou CX, Kerasidou A, Buscher M, Wilkinson S. Before and beyond trust: reliance in medical AI. JOURNAL OF MEDICAL ETHICS 2022; 48:852-856. [PMID: 34426519 PMCID: PMC9626908 DOI: 10.1136/medethics-2020-107095] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/09/2020] [Accepted: 07/02/2021] [Indexed: 06/13/2023]
Abstract
Artificial intelligence (AI) is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, contribute to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. In response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a "public trust deficit". This paper argues that a focus on trust as the basis upon which a relationship between this new technology and the public is built is at best ineffective and at worst inappropriate or even dangerous, as it diverts attention from what is actually needed to actively warrant trust. Instead of agonising about how to facilitate trust, a type of relationship which can leave those trusting vulnerable and exposed, we argue that efforts should be focused on the difficult and dynamic process of ensuring reliance underwritten by strong legal and regulatory frameworks. From there, trust could emerge, not merely as a means to an end but as something to work towards in practice: the deserved result of an ongoing ethical relationship in which the appropriate, enforceable and reliable regulatory infrastructure is in place for problems, challenges and power asymmetries to be continuously accounted for and appropriately redressed.
Affiliation(s)
- Angeliki Kerasidou
- The Ethox Centre, Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Monika Buscher
- Department of Sociology, Lancaster University, Lancaster, UK
- Stephen Wilkinson
- Department of Politics, Philosophy, & Religion, Lancaster University, Lancaster, UK
13
Yadav M, Tanwar M. Impact of COVID-19 on glaucoma management: A review. FRONTIERS IN OPHTHALMOLOGY 2022; 2:1003653. [PMID: 38983512 PMCID: PMC11182257 DOI: 10.3389/fopht.2022.1003653] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Accepted: 08/23/2022] [Indexed: 07/11/2024]
Abstract
Glaucoma is the leading cause of irreversible vision loss and the second leading cause of blindness worldwide. The rapid transmission of the SARS-CoV-2 virus compelled governments to concentrate their efforts on emergency units to treat the large number of cases that arose due to the COVID-19 outbreak. As a result, many chronically ill patients were left without access to medical care. The progression of glaucoma in previously diagnosed cases has been accelerated; due to this, some have lost their vision. One goal of this study was to evaluate the effect of COVID-19 on glaucoma treatment. We used search phrases like "COVID-19," "telemedicine," and "glaucoma" to find published papers on COVID-19 and glaucoma. Artificial intelligence (AI) may be the answer to the unanswered questions that arose due to this pandemic crisis. The benefits and drawbacks of AI in the context of teleglaucoma have been thoroughly examined. These AI-related ideas have been circulating for some time; we hope that the enormous changes forced by COVID-19 will provide the motivation to move forward and significantly improve services. Despite the devastation the pandemic has caused, we are hopeful that eye care services will be better prepared and better equipped to avoid the loss of sight due to glaucoma in the future.
Affiliation(s)
- Mukesh Tanwar
- Department of Genetics, Maharshi Dayanand University, Rohtak, India
14
Hu Z, Hu R, Yau O, Teng M, Wang P, Hu G, Singla R. Tempering Expectations on the Medical Artificial Intelligence Revolution: The Medical Trainee Viewpoint. JMIR Med Inform 2022; 10:e34304. [PMID: 35969464 PMCID: PMC9425164 DOI: 10.2196/34304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2021] [Revised: 07/29/2022] [Accepted: 08/02/2022] [Indexed: 12/02/2022] Open
Abstract
The rapid development of artificial intelligence (AI) in medicine has resulted in an increased number of applications deployed in clinical trials. AI tools have been developed with goals of improving diagnostic accuracy, workflow efficiency through automation, and discovery of novel features in clinical data. There is consequent concern about the role of AI in replacing existing tasks traditionally entrusted to physicians. This has implications for medical trainees, who may make decisions based on the perception of how disruptive AI may be to their future career. This commentary discusses current barriers to AI adoption to moderate concerns about the role of AI in the clinical setting, particularly as a standalone tool that replaces physicians. Technical limitations of AI include generalizability of performance and deficits in existing infrastructure to accommodate data, both of which are less obvious in pilot studies, where high performance is achieved in a controlled data processing environment. Economic limitations include rigorous regulatory requirements to deploy medical devices safely, particularly if AI is to replace human decision-making. Ethical guidelines are also required in the event of dysfunction to identify the responsibility of the developer of the tool, the health care authority, and the patient. The consequences are apparent when identifying the scope of existing AI tools, most of which aim to assist physicians rather than replace them. The combination of these limitations will delay the onset of ubiquitous AI tools that perform standalone clinical tasks. The role of the physician likely remains paramount to clinical decision-making in the near future.
Affiliation(s)
- Zoe Hu
- School of Medicine, Queen's University, Kingston, ON, Canada
- Ricky Hu
- School of Medicine, Queen's University, Kingston, ON, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Olivia Yau
- School of Medicine, University of British Columbia, Vancouver, BC, Canada
- Minnie Teng
- School of Medicine, University of British Columbia, Vancouver, BC, Canada
- Patrick Wang
- School of Medicine, Queen's University, Kingston, ON, Canada
- Grace Hu
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Rohit Singla
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- School of Medicine, University of British Columbia, Vancouver, BC, Canada
15
Banja JD, Hollstein RD, Bruno MA. When Artificial Intelligence Models Surpass Physician Performance: Medical Malpractice Liability in an Era of Advanced Artificial Intelligence. J Am Coll Radiol 2022; 19:816-820. [PMID: 35120881 DOI: 10.1016/j.jacr.2021.11.014] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Accepted: 11/26/2021] [Indexed: 10/19/2022]
Abstract
It seems inevitable that diagnostic and recommender artificial intelligence models will ultimately reach a point when they outperform human clinicians. Just as antibiotics displaced a host of medicinals for treating infections, the superior performance of such models will force their adoption. This article contemplates certain ethical and legal implications bearing on that adoption, especially as they involve a clinician's exposure to allegations of malpractice. The article discusses four relevant considerations: (1) the imperative of using explainable artificial intelligence models in clinical care, (2) specific strategies for diminishing liability when a clinician agrees or disagrees with a model's findings or recommendations but the patient nevertheless experiences a poor outcome, (3) relieving liability through legislation or regulation, and (4) comprehending such models as "persons" and therefore as potential defendants in legal proceedings. We conclude with observations on clinician-vendor relationships and argue that, although advanced artificial intelligence models have not yet arrived, clinicians must begin considering their implications now.
Affiliation(s)
- John D Banja
- Professor and Director of the Center for Ethics, Center for Ethics and Department of Rehabilitation Medicine, Emory University, Atlanta, Georgia
- Rolf Dieter Hollstein
- President, Advanced Radiology Services Foundation; Clinical Assistant Professor, Michigan State University; Advanced Radiology Services, P.C., Grand Rapids, Michigan
- Michael A Bruno
- Professor, Radiology and Medicine; Vice Chair, Quality and Safety; and Chief, Emergency Radiology, Department of Radiology, Penn State Milton S. Hershey Medical Center, Hershey, Pennsylvania
16
Bleher H, Braun M. Diffused responsibility: attributions of responsibility in the use of AI-driven clinical decision support systems. AI AND ETHICS 2022; 2:747-761. [PMID: 35098247 PMCID: PMC8785388 DOI: 10.1007/s43681-022-00135-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 12/31/2021] [Indexed: 01/02/2023]
Abstract
Good decision-making is a complex endeavor, and particularly so in a health context. The possibilities for day-to-day clinical practice opened up by AI-driven clinical decision support systems (AI-CDSS) give rise to fundamental questions around responsibility. In causal, moral, and legal terms, the application of AI-CDSS challenges existing attributions of responsibility. In this context, responsibility gaps are often identified as the main problem. Mapping out the changing dynamics and levels of attributing responsibility, we argue in this article that the application of AI-CDSS causes diffusions of responsibility with respect to a causal, moral, and legal dimension. Responsibility diffusion describes the situation where multiple options and several agents can be considered for attributing responsibility. Using the example of an AI-driven ‘digital tumor board’, we illustrate how clinical decision-making is changed and diffusions of responsibility take place. Rather than denying or attempting to bridge responsibility gaps, we argue that dynamics and ambivalences are inherent in responsibility, which is based on normative considerations such as avoiding experiences of disregard and vulnerability of human life, is inherently accompanied by a moment of uncertainty, and is characterized by openness to revision. Against this background, and to avoid responsibility gaps, the article concludes with suggestions for managing responsibility diffusions in clinical decision-making with AI-CDSS.
Affiliation(s)
- Hannah Bleher
- Friedrich-Alexander University of Erlangen-Nuremberg (FAU), Institute for Systematic Theology, Chair of Systematic Theology II (Ethics), Kochstraße 6, 91054 Erlangen, Germany
- Matthias Braun
- Friedrich-Alexander University of Erlangen-Nuremberg (FAU), Institute for Systematic Theology, Chair of Systematic Theology II (Ethics), Kochstraße 6, 91054 Erlangen, Germany
17
Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2021; 90:101034. [PMID: 34902546 DOI: 10.1016/j.preteyeres.2021.101034] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Revised: 12/03/2021] [Accepted: 12/06/2021] [Indexed: 01/14/2023]
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving close or even superior performance to that of experts, there is a critical gap between development and integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI to close that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not the responsibility of a sole stakeholder. There is an impending necessity for a collaborative approach where the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establishing such multi-stakeholder interaction and identifies the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
18
Tigard DW. Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers. SCIENCE AND ENGINEERING ETHICS 2021; 27:59. [PMID: 34427804 PMCID: PMC8383242 DOI: 10.1007/s11948-021-00334-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Accepted: 08/02/2021] [Indexed: 05/20/2023]
Abstract
Artificial intelligence (AI) and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the 'severance problem'-the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave it off? In particular, the fact that some technologies exhibit behavior that is unclear to us seems to constitute a kind of severance. Building upon contemporary work on moral responsibility, I argue for a mechanism I refer to as 'technological answerability', namely the capacity to recognize human demands for answers and to respond accordingly. By designing select devices-such as robotic assistants and personal AI programs-for increased answerability, we see at least one way of satisfying our demands for answers and thereby retaining our connection to a world increasingly occupied by technology.
Affiliation(s)
- Daniel W Tigard
- Institute for History and Ethics of Medicine, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
19
Jobson D, Mar V, Freckelton I. Legal and ethical considerations of artificial intelligence in skin cancer diagnosis. Australas J Dermatol 2021; 63:e1-e5. [PMID: 34407234 DOI: 10.1111/ajd.13690] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Revised: 07/22/2021] [Accepted: 07/24/2021] [Indexed: 01/08/2023]
Abstract
Artificial intelligence (AI) technology is becoming increasingly accurate and prevalent for the diagnosis of skin cancers. Commercially available AI diagnostic software is entering markets across the world, posing new legal and ethical challenges for both clinicians and software companies. Australia has the highest rates of skin cancer in the world and is poised to be a significant beneficiary and pioneer of the technology. This review describes the legal and ethical considerations raised by the emergence of artificial intelligence in skin cancer diagnosis and proposes recommendations for best practice.
Affiliation(s)
- Dale Jobson
- Victorian Melanoma Service, Alfred Hospital, Melbourne, Victoria, Australia
- Victoria Mar
- Victorian Melanoma Service, Alfred Hospital, Melbourne, Victoria, Australia
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Ian Freckelton
- Victorian Bar, Melbourne, Victoria, Australia
- Law Faculty, Department of Psychiatry, University of Melbourne, Melbourne, Victoria, Australia
20
Kudina O. Regulating AI in Health Care: The Challenges of Informed User Engagement. Hastings Cent Rep 2021; 51:6-7. [PMID: 34159617 DOI: 10.1002/hast.1263] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The European Union's proposed Artificial Intelligence Act is a welcome, ambitious law on the regulation of AI systems. However, it underestimates the responsibilities placed on individual users to navigate the implementation of AI. Focusing on the health care sector, this policy piece examines challenges that the proposed law bypasses. First, effective human-AI collaboration in the diagnostic process hinges on the acknowledgment of AI's mediating role in this process, on forming a diagnostic dialogue between humans and AI. Second, with AI in this mediating role, the meaning of responsibility is changed to accommodate the broadened scope of clinician and patient duties, modified clinical workflows, and emergent medical norms. Finally, the challenge of media literacy concerns both the issues of access to knowledge and the ability to make informed choices regarding human-AI interaction. This policy piece suggests that embracing the complexity of the use practices is essential to achieving an effective human-AI partnership, in the medical sector and at large.
21
Ramessur R, Raja L, Kilduff CLS, Kang S, Li JPO, Thomas PBM, Sim DA. Impact and Challenges of Integrating Artificial Intelligence and Telemedicine into Clinical Ophthalmology. Asia Pac J Ophthalmol (Phila) 2021; 10:317-327. [PMID: 34383722 DOI: 10.1097/apo.0000000000000406] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
Aging populations and the worsening burden of chronic, treatable disease are increasingly creating a global shortfall in ophthalmic care provision. Remote and automated systems carry the promise to expand the scale and potential of health care interventions and to reduce strain on health care services through safe, personalized, efficient, and cost-effective services. However, significant challenges remain. Forward planning in service design is paramount to safeguard patient safety and trust in digital services and to address data privacy, medico-legal implications, and digital exclusion. We explore the impact and challenges facing patients and clinicians in integrating AI and telemedicine into ophthalmic care, and how these may influence its direction.
Affiliation(s)
- Rishi Ramessur
- Royal Free Hospital, Royal Free London NHS Foundation Trust, London, United Kingdom
- Laxmi Raja
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Caroline L S Kilduff
- Central Middlesex Hospital, London North West University Healthcare NHS Trust, London, United Kingdom
- Swan Kang
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Ji-Peng Olivia Li
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Peter B M Thomas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Dawn A Sim
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom