1
Bratan T, Schneider D, Funer F, Heyen NB, Klausen A, Liedtke W, Lipprandt M, Salloch S, Langanke M. [Supporting medical and nursing activities with AI: recommendations for responsible design and use]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2024. PMID: 39017712. DOI: 10.1007/s00103-024-03918-1.
Abstract
Clinical decision support systems (CDSS) based on artificial intelligence (AI) are complex socio-technical innovations and are increasingly being used in medicine and nursing to improve the overall quality and efficiency of care, while also addressing limited financial and human resources. However, in addition to such intended clinical and organisational effects, far-reaching ethical, social and legal implications of AI-based CDSS on patient care and nursing are to be expected. To date, these normative-social implications have not been sufficiently investigated. The BMBF-funded project DESIREE (DEcision Support In Routine and Emergency HEalth Care: Ethical and Social Implications) has developed recommendations for the responsible design and use of clinical decision support systems. This article focuses primarily on ethical and social aspects of AI-based CDSS that could have a negative impact on patient health. Our recommendations are intended as additions to existing recommendations and are divided into the following action fields with relevance across all stakeholder groups: development, clinical use, information and consent, education and training, and (accompanying) research.
Affiliation(s)
- Tanja Bratan
- Competence Center Neue Technologien, Fraunhofer-Institut für System- und Innovationsforschung ISI, Breslauer Straße 48, 76139, Karlsruhe, Deutschland
- Diana Schneider
- Competence Center Neue Technologien, Fraunhofer-Institut für System- und Innovationsforschung ISI, Breslauer Straße 48, 76139, Karlsruhe, Deutschland
- Florian Funer
- Institut für Ethik, Geschichte und Philosophie der Medizin, Medizinische Hochschule Hannover (MHH), Hannover, Deutschland
- Institut für Ethik und Geschichte der Medizin, Eberhard Karls Universität Tübingen, Tübingen, Deutschland
- Nils B Heyen
- Competence Center Neue Technologien, Fraunhofer-Institut für System- und Innovationsforschung ISI, Breslauer Straße 48, 76139, Karlsruhe, Deutschland
- Andrea Klausen
- Uniklinik RWTH Aachen, Institut für Medizinische Informatik, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Aachen, Deutschland
- Wenke Liedtke
- Theologische Fakultät, Universität Greifswald, Greifswald, Deutschland
- Myriam Lipprandt
- Uniklinik RWTH Aachen, Institut für Medizinische Informatik, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Aachen, Deutschland
- Sabine Salloch
- Institut für Ethik, Geschichte und Philosophie der Medizin, Medizinische Hochschule Hannover (MHH), Hannover, Deutschland
- Martin Langanke
- Angewandte Ethik/Fachbereich Soziale Arbeit, Evangelische Hochschule Rheinland-Westfalen-Lippe, Bochum, Deutschland
2
Vandemeulebroucke T. The ethics of artificial intelligence systems in healthcare and medicine: from a local to a global perspective, and back. Pflugers Arch 2024. PMID: 38969841. DOI: 10.1007/s00424-024-02984-3.
Abstract
Artificial intelligence systems (ai-systems), such as machine learning and generative artificial intelligence, have been received in healthcare and medicine with hopes of better care quality, more efficiency, lower care costs, etc. Simultaneously, these systems have been met with reservations regarding their impacts on stakeholders' privacy, on changing power dynamics, on systemic biases, etc. Fortunately, healthcare and medicine have been guided by a multitude of ethical principles, frameworks, or approaches, which also guide the use of ai-systems in healthcare and medicine, in one form or another. Nevertheless, in this article, I argue that most of these approaches are inspired by a local isolationist view on ai-systems, here exemplified by the principlist approach. Despite positive contributions to laying out the ethical landscape of ai-systems in healthcare and medicine, such ethics approaches are too focused on a specific local healthcare and medical setting, be it a particular care relationship, a particular care organisation, or a particular society or region. In doing so, they lose sight of the global impacts ai-systems have, especially environmental impacts and related social impacts, such as increased health risks. To address this gap, this article presents a global approach to the ethics of ai-systems in healthcare and medicine which consists of five levels of ethical impacts and analysis: individual-relational, organisational, societal, global, and historical. As such, this global approach incorporates the local isolationist view by integrating it into a wider landscape of ethical consideration, so as to ensure ai-systems meet the needs of everyone everywhere.
Affiliation(s)
- Tijs Vandemeulebroucke
- Bonn Sustainable AI Lab, Institut für Wissenschaft und Ethik, Universität Bonn-University of Bonn, Bonner Talweg 57, 53113, Bonn, Germany
3
Yu L, Zhai X. Use of artificial intelligence to address health disparities in low- and middle-income countries: a thematic analysis of ethical issues. Public Health 2024; 234:77-83. PMID: 38964129. DOI: 10.1016/j.puhe.2024.05.029.
Abstract
OBJECTIVES Artificial intelligence (AI) is reshaping health and medicine, especially through its potential to address health disparities in low- and middle-income countries (LMICs). However, several issues associated with the use of AI may reduce its impact and potentially exacerbate global health disparities. This study presents the key issues in AI deployment faced by LMICs. STUDY DESIGN Thematic analysis. METHODS PubMed, Scopus, Embase and the Web of Science databases were searched, from the date of their inception until September 2023, using the terms "artificial intelligence", "LMICs", "ethic*" and "global health". Additional searches were conducted by snowballing references before and after the primary search. The final studies were chosen based on their relevance to the topic of this article. RESULTS After reviewing 378 articles, 14 studies were included in the final analysis. A concept named the 'AI Deployment Paradox' was introduced to capture the challenges of using AI to address health disparities in LMICs, and the following three categories were identified: (1) data poverty and contextual shifts; (2) cost-effectiveness and health equity; and (3) new technological colonisation and potential exploitation. CONCLUSIONS The relationship between global health, AI and ethical considerations is an area that requires systematic investigation. Relying on health data with inherent structural biases and deploying AI without systematic ethical consideration may exacerbate global health inequalities. Addressing these challenges requires nuanced socio-political comprehension, localised stakeholder engagement, and well-considered ethical and regulatory frameworks.
Affiliation(s)
- Lanyi Yu
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China; Center for Bioethics, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Xiaomei Zhai
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China; Center for Bioethics, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
4
Law S, Oldfield B, Yang W. ChatGPT/GPT-4 (large language models): Opportunities and challenges of perspective in bariatric healthcare professionals. Obes Rev 2024; 25:e13746. PMID: 38613164. DOI: 10.1111/obr.13746.
Abstract
ChatGPT/GPT-4 is a conversational large language model (LLM) based on artificial intelligence (AI). The potential application of LLMs as virtual assistants for bariatric healthcare professionals in education and practice may be promising if the relevant issues are actively examined and addressed. In general medical terms, it is possible that AI models like ChatGPT/GPT-4 will be deeply integrated into medical scenarios, improving medical efficiency and quality and allowing doctors more time to communicate with patients and implement personalized health management. Chatbots based on AI have great potential in bariatric healthcare and may play an important role in predicting and intervening in weight loss and obesity-related complications. However, given its potential limitations, we should carefully consider the medical, legal, ethical, data security, privacy, and liability issues arising from medical errors caused by ChatGPT/GPT-4. This concern also extends to ChatGPT/GPT-4's ability to justify wrong decisions, and there is an urgent need for appropriate guidelines and regulations to ensure its safe and responsible use.
Affiliation(s)
- Saikam Law
- Department of Metabolic and Bariatric Surgery, The First Affiliated Hospital of Jinan University, Guangzhou, China
- School of Medicine, Jinan University, Guangzhou, China
- Brian Oldfield
- Department of Physiology, Monash Biomedicine Discovery Institute, Monash University, Melbourne, Australia
- Wah Yang
- Department of Metabolic and Bariatric Surgery, The First Affiliated Hospital of Jinan University, Guangzhou, China
5
Yao J, Lim J, Lim GYS, Ong JCL, Ke Y, Tan TF, Tan TE, Vujosevic S, Ting DSW. Novel artificial intelligence algorithms for diabetic retinopathy and diabetic macular edema. Eye Vis (Lond) 2024; 11:23. PMID: 38880890. PMCID: PMC11181581. DOI: 10.1186/s40662-024-00389-y.
Abstract
BACKGROUND Diabetic retinopathy (DR) and diabetic macular edema (DME) are major causes of visual impairment that challenge global vision health. New strategies are needed to tackle these growing global health problems, and the integration of artificial intelligence (AI) into ophthalmology has the potential to revolutionize DR and DME management to meet these challenges. MAIN TEXT This review discusses the latest AI-driven methodologies in the context of DR and DME in terms of disease identification, patient-specific disease profiling, and short-term and long-term management. This includes current screening and diagnostic systems and their real-world implementation, lesion detection and analysis, disease progression prediction, and treatment response models. It also highlights the technical advancements that have been made in these areas. Despite these advancements, there are obstacles to the widespread adoption of these technologies in clinical settings, including regulatory and privacy concerns, the need for extensive validation, and integration with existing healthcare systems. We also explore the disparity between the potential of AI models and their actual effectiveness in real-world applications. CONCLUSION AI has the potential to revolutionize the management of DR and DME, offering more efficient and precise tools for healthcare professionals. However, overcoming challenges in deployment, regulatory compliance, and patient privacy is essential for these technologies to realize their full potential. Future research should aim to bridge the gap between technological innovation and clinical application, ensuring AI tools integrate seamlessly into healthcare workflows to enhance patient outcomes.
Affiliation(s)
- Jie Yao
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Joshua Lim
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Gilbert Yong San Lim
- Duke-NUS Medical School, Singapore, Singapore
- SingHealth AI Health Program, Singapore, Singapore
- Jasmine Chiat Ling Ong
- Duke-NUS Medical School, Singapore, Singapore
- Division of Pharmacy, Singapore General Hospital, Singapore, Singapore
- Yuhe Ke
- Department of Anesthesiology and Perioperative Science, Singapore General Hospital, Singapore, Singapore
- Ting Fang Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Tien-En Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Stela Vujosevic
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy
- Eye Clinic, IRCCS MultiMedica, Milan, Italy
- Daniel Shu Wei Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- SingHealth AI Health Program, Singapore, Singapore
6
Olver IN. Ethics of artificial intelligence in supportive care in cancer. Med J Aust 2024; 220:499-501. PMID: 38714360. DOI: 10.5694/mja2.52297.
7
Tretter M. Equipping AI-decision-support-systems with emotional capabilities? Ethical perspectives. Front Artif Intell 2024; 7:1398395. PMID: 38881951. PMCID: PMC11177341. DOI: 10.3389/frai.2024.1398395.
Abstract
It is important to accompany the research on Emotional Artificial Intelligence with ethical oversight. Previous publications on the ethics of Emotional Artificial Intelligence emphasize the importance of subjecting every (possible) type of Emotional Artificial Intelligence to separate ethical considerations. In this contribution, I therefore focus on a particular subset of AI systems, AI-driven Decision-Support Systems (AI-DSS), and ask whether it would be advisable from an ethical perspective to equip these AI systems with emotional capacities. I show that, on the one hand, equipping AI-DSS with emotional capabilities offers great opportunities, as it opens the possibility of preventing emotionally biased decisions, but that, on the other hand, it also amplifies the ethical challenges already posed by emotionally incapable AI-DSS. Yet, if their introduction is accompanied by a broad social discourse and prepared by suitable measures to address these challenges, I argue, nothing should fundamentally stand in the way of equipping AI-DSS with emotional capabilities.
Affiliation(s)
- Max Tretter
- Faculty of Humanities, Social Sciences, and Theology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
8
Kuziemsky CE, Chrimes D, Minshall S, Mannerow M, Lau F. AI Quality Standards in Health Care: Rapid Umbrella Review. J Med Internet Res 2024; 26:e54705. PMID: 38776538. PMCID: PMC11153979. DOI: 10.2196/54705.
Abstract
BACKGROUND In recent years, there has been an upwelling of artificial intelligence (AI) studies in the health care literature. During this period, there has been an increasing number of proposed standards to evaluate the quality of health care AI studies. OBJECTIVE This rapid umbrella review examines the use of AI quality standards in a sample of health care AI systematic review articles published over a 36-month period. METHODS We used a modified version of the Joanna Briggs Institute umbrella review method. Our rapid approach was informed by the practical guide by Tricco and colleagues for conducting rapid reviews. Our search was focused on the MEDLINE database supplemented with Google Scholar. The inclusion criteria were English-language systematic reviews regardless of review type, with mention of AI and health in the abstract, published during a 36-month period. For the synthesis, we summarized the AI quality standards used and issues noted in these reviews drawing on a set of published health care AI standards, harmonized the terms used, and offered guidance to improve the quality of future health care AI studies. RESULTS We selected 33 review articles published between 2020 and 2022 in our synthesis. The reviews covered a wide range of objectives, topics, settings, designs, and results. Over 60 AI approaches across different domains were identified with varying levels of detail spanning different AI life cycle stages, making comparisons difficult. Health care AI quality standards were applied in only 39% (13/33) of the reviews and in 14% (25/178) of the original studies from the reviews examined, mostly to appraise their methodological or reporting quality. Only a handful mentioned the transparency, explainability, trustworthiness, ethics, and privacy aspects. A total of 23 AI quality standard-related issues were identified in the reviews. There was a recognized need to standardize the planning, conduct, and reporting of health care AI studies and address their broader societal, ethical, and regulatory implications. CONCLUSIONS Despite the growing number of AI standards to assess the quality of health care AI studies, they are seldom applied in practice. With increasing desire to adopt AI in different health topics, domains, and settings, practitioners and researchers must stay abreast of and adapt to the evolving landscape of health care AI quality standards and apply these standards to improve the quality of their AI studies.
Affiliation(s)
- Dillon Chrimes
- School of Health Information Science, University of Victoria, Victoria, BC, Canada
- Simon Minshall
- School of Health Information Science, University of Victoria, Victoria, BC, Canada
- Francis Lau
- School of Health Information Science, University of Victoria, Victoria, BC, Canada
9
Mohammad-Rahimi H, Khoury ZH, Alamdari MI, Rokhshad R, Motie P, Parsa A, Tavares T, Sciubba JJ, Price JB, Sultan AS. Performance of AI chatbots on controversial topics in oral medicine, pathology, and radiology. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137:508-514. PMID: 38553304. DOI: 10.1016/j.oooo.2024.01.015.
Abstract
OBJECTIVES In this study, we assessed the responses of 6 different artificial intelligence (AI) chatbots (Bing, GPT-3.5, GPT-4, Google Bard, Claude, Sage) to controversial and difficult questions in oral pathology, oral medicine, and oral radiology. STUDY DESIGN The chatbots' answers were evaluated by board-certified specialists using a modified version of the global quality score on a 5-point Likert scale. The quality and validity of chatbot citations were also evaluated. RESULTS Claude had the highest mean score (4.341 ± 0.582) for oral pathology and medicine; Bing had the lowest (3.447 ± 0.566). In oral radiology, GPT-4 had the highest mean score (3.621 ± 1.009) and Bing the lowest (2.379 ± 0.978). GPT-4 achieved the highest mean score (4.066 ± 0.825) for performance across all disciplines. Of the citations generated by the chatbots, 82 of 349 (23.5%) were fabricated. CONCLUSIONS GPT-4 was the strongest chatbot at providing high-quality information on controversial topics across the dental disciplines examined. Although the majority of chatbots performed well, given the relatively high number of fabricated citations, developers of AI medical chatbots should incorporate scientific citation authenticators to validate the outputted citations.
Affiliation(s)
- Hossein Mohammad-Rahimi
- Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, MD, USA; Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
- Zaid H Khoury
- Department of Oral Diagnostic Sciences and Research, Meharry Medical College School of Dentistry, Nashville, TN, USA
- Mina Iranparvar Alamdari
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Rata Rokhshad
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
- Parisa Motie
- Medical Image and Signal Processing Research Center, Medical University of Isfahan, Isfahan, Iran
- Azin Parsa
- Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, Baltimore, MD, USA
- Tiffany Tavares
- Department of Comprehensive Dentistry, UT Health San Antonio School of Dentistry, San Antonio, TX, USA
- James J Sciubba
- Department of Otolaryngology, Head & Neck Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Jeffery B Price
- Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, MD, USA; Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, Baltimore, MD, USA
- Ahmed S Sultan
- Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, MD, USA; Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, Baltimore, MD, USA; University of Maryland Marlene and Stewart Greenebaum Comprehensive Cancer Center, Baltimore, MD, USA
10
Lee YM, Kim S, Lee YH, Kim HS, Seo SW, Kim H, Kim KJ. Defining Medical AI Competencies for Medical School Graduates: Outcomes of a Delphi Survey and Medical Student/Educator Questionnaire of South Korean Medical Schools. Acad Med 2024; 99:524-533. PMID: 38207056. DOI: 10.1097/acm.0000000000005618.
Abstract
PURPOSE Given the increasing significance and potential impact of artificial intelligence (AI) technology on health care delivery, there is an increasing demand to integrate AI into medical school curricula. This study aimed to define medical AI competencies and identify the essential competencies for medical graduates in South Korea. METHOD An initial Delphi survey conducted in 2022 involving 4 groups of medical AI experts (n = 28) yielded 42 competency items. Subsequently, an online questionnaire survey was carried out with 1,955 participants (1,174 students and 781 professors) from medical schools across South Korea, utilizing the list of 42 competencies developed from the first Delphi round. A subsequent Delphi survey was conducted with 33 medical educators from 21 medical schools to differentiate the essential AI competencies from the optional ones. RESULTS The study identified 6 domains encompassing 36 AI competencies essential for medical graduates: (1) understanding digital health and changes driven by AI; (2) fundamental knowledge and skills in medical AI; (3) ethics and legal aspects in the use of medical AI; (4) medical AI application in clinical practice; (5) processing, analyzing, and evaluating medical data; and (6) research and development of medical AI, as well as subcompetencies within each domain. While numerous competencies within the first 4 domains were deemed essential, a higher percentage of experts indicated that competencies in the last 2 domains, data science and medical AI research and development, were optional. CONCLUSIONS This medical AI framework of 6 competencies and their subcompetencies for medical graduates exhibits promising potential for guiding the integration of AI into medical curricula. Further studies conducted in diverse contexts and countries are necessary to validate and confirm the applicability of these findings. Additional research is imperative for developing specific and feasible educational models to integrate these proposed competencies into pre-existing curricula.
11
Marco-Ruiz L, Hernández MÁT, Ngo PD, Makhlysheva A, Svenning TO, Dyb K, Chomutare T, Llatas CF, Muñoz-Gama J, Tayefi M. A multinational study on artificial intelligence adoption: Clinical implementers' perspectives. Int J Med Inform 2024; 184:105377. PMID: 38377725. DOI: 10.1016/j.ijmedinf.2024.105377.
Abstract
BACKGROUND Despite substantial progress in AI research for healthcare, translating research achievements into AI systems in clinical settings is challenging and, in many cases, unsatisfactory. As a result, many AI investments have stalled at the prototype level, never reaching clinical settings. OBJECTIVE To improve the chances of future AI implementation projects succeeding, we analyzed the experiences of clinical AI system implementers to better understand the challenges and success factors in their implementations. METHODS Thirty-seven implementers of clinical AI from European and North and South American countries were interviewed. Semi-structured interviews were transcribed and analyzed qualitatively with the framework method, identifying the success factors and the reasons for challenges as well as documenting proposals from implementers to improve AI adoption in clinical settings. RESULTS We gathered the implementers' requirements for facilitating AI adoption in the clinical setting. The main findings include 1) the lesser importance of AI explainability in favor of proper clinical validation studies, 2) the need to actively involve clinical practitioners, and not only clinical researchers, in the inception of AI research projects, 3) the need for better information structures and processes to manage data access and the ethical approval of AI projects, 4) the need for better support for regulatory compliance and avoidance of duplications in data management approval bodies, 5) the need to increase both clinicians' and citizens' literacy with respect to the benefits and limitations of AI, and 6) the need for better funding schemes to support the implementation, embedding, and validation of AI in the clinical workflow, beyond pilots. CONCLUSION Participants in the interviews are positive about the future of AI in clinical settings. At the same time, they propose numerous measures to transfer research advances into implementations that will benefit healthcare personnel. Transferring AI research into benefits for healthcare workers and patients requires adjustments in regulations, data access procedures, education, funding schemes, and validation of AI systems.
Affiliation(s)
- Luis Marco-Ruiz
- Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Phuong Dinh Ngo
- Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Alexandra Makhlysheva
- Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Therese Olsen Svenning
- Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Kari Dyb
- Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Taridzo Chomutare
- Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Carlos Fernández Llatas
- Instituto de las Tecnologías de la Información y las Comunicaciones (ITACA), Universitat Politècnica de València (UPV), Valencia, Spain
- Jorge Muñoz-Gama
- Department of Computer Science, Pontificia Universidad Católica de Chile, Santiago, Chile
- Maryam Tayefi
- Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
12
Leuenberger M. Track Thyself? The Value and Ethics of Self-knowledge Through Technology. Philos Technol 2024; 37:13. PMID: 38288051. PMCID: PMC10821817. DOI: 10.1007/s13347-024-00704-4.
Abstract
Novel technological devices, applications, and algorithms can provide us with a vast amount of personal information about ourselves. Given that we have ethical and practical reasons to pursue self-knowledge, should we use technology to increase our self-knowledge? And which ethical issues arise from the pursuit of technologically sourced self-knowledge? In this paper, I explore these questions in relation to bioinformation technologies (health and activity trackers, DTC genetic testing, and DTC neurotechnologies) and algorithmic profiling used for recommender systems, targeted advertising, and technologically supported decision-making. First, I distinguish between impersonal, critical, and relational self-knowledge. Relational self-knowledge is a so far neglected dimension of self-knowledge which is introduced in this paper. Next, I investigate the contribution of these technologies to the three types of self-knowledge and uncover the connected ethical concerns. Technology can provide a great deal of impersonal self-knowledge, but we should focus on the quality of the information, which tends to be particularly insufficient for marginalized groups. In terms of critical self-knowledge, the nature of technologically sourced personal information typically impedes critical engagement. The value of relational self-knowledge speaks in favour of transparency in information technology, notably for algorithms that are involved in decision-making about individuals. Moreover, bioinformation technologies and digital profiling shape the concepts and norms that define us. We should ensure they serve not only commercial interests but also our identity and self-knowledge interests.
Affiliation(s)
- Muriel Leuenberger
- Center for Ethics, University of Zurich, Zollikerstrasse 117, 8008 Zurich, Switzerland

13
Wang B, Asan O, Zhang Y. Shaping the future of chronic disease management: Insights into patient needs for AI-based homecare systems. Int J Med Inform 2024; 181:105301. [PMID: 38029700 DOI: 10.1016/j.ijmedinf.2023.105301] [Citation(s) in RCA: 0] [Received: 06/06/2023] [Revised: 11/02/2023] [Accepted: 11/16/2023] [Indexed: 12/01/2023]
Abstract
BACKGROUND The rising demand for healthcare resources, especially in chronic disease management, has elevated the importance of Artificial Intelligence (AI) in healthcare. While AI-based homecare systems are being developed, the perspectives of chronic patients, who are one of the primary beneficiaries and risk bearers of these technologies, remain largely under-researched. While recent research has highlighted the importance of AI-based homecare systems, the current understanding of patients' desired designs and features is still limited. OBJECTIVE This paper explores chronic patients' perspectives regarding AI-based homecare systems, an area currently underrepresented in research. We aim to identify the factors influencing their decision to use such systems, elucidate the potential roles of government and other concerned authorities, and provide feedback to AI developers to enhance adoption, system design, and usability and improve the overall healthcare experiences of chronic patients. METHOD A web-based open-ended questionnaire was designed to gather the perspectives of chronic patients about AI-based homecare systems. In total, responses from 181 participants were collected. Using Krippendorff's clustering technique, an inductive thematic analysis was performed to identify the main themes and their respective subthemes. RESULT Through rigorous coding and thematic analysis of the collected responses, we identified four major themes further segmented into thirteen subthemes. 
These four primary themes were: 1) "Personalized Design", emphasizing the need for patients to manage their health condition better through personalized and educational resources and user-friendly interfaces; 2) "Emotional & Social Support", underscoring the desire for AI systems to facilitate social connectivity and provide emotional support to improve the well-being of chronic patients at home; 3) "System Integration & Proactive Care", addressing the importance of seamless communication, proactive patient monitoring and integration with existing healthcare platforms; and 4) "Ethics & Regulation", prioritizing ethical guidelines, regulatory compliance, and affordability in the design. CONCLUSION This study has offered significant insights into the needs and expectations of chronic patients regarding AI-based home care systems. The findings highlight the importance of personalized and accessible care, emotional and social support, seamless system integration, proactive care, and ethical considerations in designing and implementing such systems. By aligning the design and operation of these systems with the lived experiences and expectations of patients, we can better ensure their acceptance and effectiveness.
Affiliation(s)
- Bijun Wang
- Department of Business Analytics and Data Science, Florida Polytechnic University, Lakeland, FL 33805, USA
- Onur Asan
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ 07047, USA
- Yiqi Zhang
- Department of Industrial and Manufacturing Engineering, Penn State University, State College, PA 16801, USA

14
Funer F, Liedtke W, Tinnemeyer S, Klausen AD, Schneider D, Zacharias HU, Langanke M, Salloch S. Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals' preferences and concerns. J Med Ethics 2023; 50:6-11. [PMID: 37217277 PMCID: PMC10803986 DOI: 10.1136/jme-2022-108814] [Citation(s) in RCA: 0] [Received: 11/25/2022] [Accepted: 03/11/2023] [Indexed: 05/24/2023]
Abstract
Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals' attitudes to potential changes of responsibility and decision-making authority when using ML-CDSS. Twenty-seven semistructured interviews were conducted with German medical students and nursing trainees. The data were analysed based on qualitative content analysis according to Kuckartz. Interviewees' reflections are presented under three themes that the interviewees describe as closely related: (self-)attribution of responsibility, decision-making authority and need for (professional) experience. The results illustrate the conceptual interconnectedness of professional responsibility and its structural and epistemic preconditions for fulfilling clinicians' responsibility in a meaningful manner. The study also sheds light on the four relata of responsibility understood as a relational concept. The article closes with concrete suggestions for the ethically sound clinical implementation of ML-CDSS.
Affiliation(s)
- Florian Funer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Institute of Ethics and History of Medicine, Eberhard Karls University Tübingen, Tübingen, Germany
- Wenke Liedtke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sara Tinnemeyer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Diana Schneider
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Helena U Zacharias
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Hannover Medical School, Hannover, Germany
- Martin Langanke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sabine Salloch
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany

15
Yoshiyasu Y, Wu F, Dhanda AK, Gorelik D, Takashima M, Ahmed OG. GPT-4 accuracy and completeness against International Consensus Statement on Allergy and Rhinology: Rhinosinusitis. Int Forum Allergy Rhinol 2023; 13:2231-2234. [PMID: 37260081 DOI: 10.1002/alr.23201] [Citation(s) in RCA: 6] [Received: 05/02/2023] [Revised: 05/14/2023] [Accepted: 05/22/2023] [Indexed: 06/02/2023]
Abstract
KEY POINTS GPT-4 is an AI language model that can answer basic questions about rhinologic disease. Vetting is needed before AI models can be safely integrated into otolaryngologic patient care.
Affiliation(s)
- Yuki Yoshiyasu
- Department of Otolaryngology-Head and Neck Surgery, University of Texas Medical Branch, Galveston, Texas, USA
- Franklin Wu
- Department of Otolaryngology-Head and Neck Surgery, Houston Methodist Hospital, Houston, Texas, USA
- Aatin K Dhanda
- Department of Otolaryngology-Head and Neck Surgery, Houston Methodist Hospital, Houston, Texas, USA
- Daniel Gorelik
- Department of Otolaryngology-Head and Neck Surgery, Houston Methodist Hospital, Houston, Texas, USA
- Masayoshi Takashima
- Department of Otolaryngology-Head and Neck Surgery, Houston Methodist Hospital, Houston, Texas, USA
- Omar G Ahmed
- Department of Otolaryngology-Head and Neck Surgery, Houston Methodist Hospital, Houston, Texas, USA

16
Au K, Yang W. Auxiliary use of ChatGPT in surgical diagnosis and treatment. Int J Surg 2023; 109:3940-3943. [PMID: 37678271 PMCID: PMC10720849 DOI: 10.1097/js9.0000000000000686] [Citation(s) in RCA: 8] [Received: 05/24/2023] [Accepted: 08/09/2023] [Indexed: 09/09/2023]
Abstract
ChatGPT can be used as an auxiliary tool in surgical diagnosis and treatment in several ways. One of its most valuable capabilities is quickly processing large amounts of data and providing relatively accurate information to healthcare workers. Owing to its high accuracy and ability to process big data, ChatGPT has been widely used in the healthcare industry for tasks such as assisting medical diagnosis, predicting some diseases, and analyzing medical cases. In surgical diagnosis and treatment, it can serve as an auxiliary tool that helps healthcare professionals process large amounts of medical data, provides real-time guidance and feedback, and increases the overall speed and quality of healthcare. Although it has gained wide acceptance, it still faces issues concerning ethics, patient privacy, data security, law, trustworthiness, and accuracy. This study aimed to explore the auxiliary use of ChatGPT in surgical diagnosis and treatment.
Affiliation(s)
- Kahei Au
- School of Medicine, Jinan University
- Wah Yang
- Department of Metabolic and Bariatric Surgery, The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong Province, People’s Republic of China

17
Kashani KB, Awdishu L, Bagshaw SM, Barreto EF, Claure-Del Granado R, Evans BJ, Forni LG, Ghosh E, Goldstein SL, Kane-Gill SL, Koola J, Koyner JL, Liu M, Murugan R, Nadkarni GN, Neyra JA, Ninan J, Ostermann M, Pannu N, Rashidi P, Ronco C, Rosner MH, Selby NM, Shickel B, Singh K, Soranno DE, Sutherland SM, Bihorac A, Mehta RL. Digital health and acute kidney injury: consensus report of the 27th Acute Disease Quality Initiative workgroup. Nat Rev Nephrol 2023; 19:807-818. [PMID: 37580570 DOI: 10.1038/s41581-023-00744-7] [Citation(s) in RCA: 5] [Accepted: 07/06/2023] [Indexed: 08/16/2023]
Abstract
Acute kidney injury (AKI), which is a common complication of acute illnesses, affects the health of individuals in community, acute care and post-acute care settings. Although the recognition, prevention and management of AKI have advanced over the past decades, its incidence and related morbidity, mortality and health care burden remain overwhelming. The rapid growth of digital technologies has provided a new platform to improve patient care, and reports show demonstrable benefits in care processes and, in some instances, in patient outcomes. However, despite great progress, the potential benefits of using digital technology to manage AKI have not yet been fully explored or implemented in clinical practice. Digital health studies in AKI have shown variable evidence of benefits, and the digital divide means that access to digital technologies is not equitable. Upstream research and development costs, limited stakeholder participation and acceptance, and poor scalability of digital health solutions have hindered their widespread implementation and use. Here, we provide recommendations from the Acute Disease Quality Initiative consensus meeting, which involved experts in adult and paediatric nephrology, critical care, pharmacy and data science, at which the use of digital health for risk prediction, prevention, identification and management of AKI and its consequences was discussed.
Affiliation(s)
- Kianoush B Kashani
- Division of Nephrology and Hypertension, Division of Pulmonary and Critical Care Medicine, Department of Medicine, Mayo Clinic, Rochester, MN, USA
- Linda Awdishu
- Clinical Pharmacy, San Diego Skaggs School of Pharmacy and Pharmaceutical Sciences, University of California San Diego, La Jolla, CA, USA
- Sean M Bagshaw
- Department of Critical Care Medicine, Faculty of Medicine and Dentistry, University of Alberta and Alberta Health Services, Edmonton, Canada
- Rolando Claure-Del Granado
- Division of Nephrology, Hospital Obrero No 2 - CNS, Cochabamba, Bolivia
- Universidad Mayor de San Simon, School of Medicine, Cochabamba, Bolivia
- Barbara J Evans
- Intelligent Critical Care Center, University of Florida, Gainesville, FL, USA
- Lui G Forni
- Department of Critical Care, Royal Surrey Hospital NHS Foundation Trust & Department of Clinical & Experimental Medicine, University of Surrey, Guildford, UK
- Erina Ghosh
- Philips Research North America, Cambridge, MA, USA
- Stuart L Goldstein
- Center for Acute Care Nephrology, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Sandra L Kane-Gill
- Biomedical Informatics and Clinical Translational Sciences, University of Pittsburgh, Pittsburgh, PA, USA
- Jejo Koola
- UC San Diego Health Department of Biomedical Informatics, Department of Medicine, La Jolla, CA, USA
- Jay L Koyner
- Section of Nephrology, Department of Medicine, University of Chicago, Chicago, IL, USA
- Mei Liu
- Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, USA
- Raghavan Murugan
- The Program for Critical Care Nephrology, Department of Critical Care Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- The Clinical Research, Investigation, and Systems Modelling of Acute Illness Center, Department of Critical Care Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Girish N Nadkarni
- Division of Data-Driven and Digital Medicine (D3M), Department of Medicine, Icahn School of Medicine at Mount Sinai; Mount Sinai Clinical Intelligence Center, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Javier A Neyra
- Division of Nephrology, Department of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA
- Jacob Ninan
- Division of Pulmonary, Critical Care and Sleep Medicine, Mayo Clinic, Rochester, MN, USA
- Marlies Ostermann
- Department of Critical Care, King's College London, Guy's & St Thomas' Hospital, London, UK
- Neesh Pannu
- Division of Nephrology, University of Alberta, Edmonton, Canada
- Parisa Rashidi
- Intelligent Critical Care Center, University of Florida, Gainesville, FL, USA
- Claudio Ronco
- Università di Padova; Scientific Director Foundation IRRIV; International Renal Research Institute; San Bortolo Hospital, Vicenza, Italy
- Mitchell H Rosner
- Department of Medicine, University of Virginia Health, Charlottesville, VA, USA
- Nicholas M Selby
- Centre for Kidney Research and Innovation, Academic Unit of Translational Medical Sciences, University of Nottingham, Nottingham, UK
- Department of Renal Medicine, Royal Derby Hospital, Derby, UK
- Benjamin Shickel
- Intelligent Critical Care Center, University of Florida, Gainesville, FL, USA
- Karandeep Singh
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI, USA
- Danielle E Soranno
- Section of Nephrology, Department of Pediatrics, Indiana University, Riley Hospital for Children, Indianapolis, IN, USA
- Scott M Sutherland
- Division of Nephrology, Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
- Azra Bihorac
- Intelligent Critical Care Center, University of Florida, Gainesville, FL, USA
- Ravindra L Mehta
- Division of Nephrology-Hypertension, Department of Medicine, University of California San Diego, La Jolla, CA, USA

18
Wang J, Xu Y, Zhang X, Pan H. Ethical predicaments and countermeasures in nursing informatics. Nurs Ethics 2023:9697330231215962. [PMID: 37976551 DOI: 10.1177/09697330231215962] [Citation(s) in RCA: 0] [Indexed: 11/19/2023]
Abstract
With the increasing use of technology in nursing, the way nurses deliver care has changed, inevitably leading to ethical concerns that differ from the original ethical norms of nursing. Studies have focused on ethical issues in health informatics from clinicians' or patients' perspectives, but nurses' perspective is also needed. This paper presents a theoretical study of the ethical predicaments that arise in nursing informatics from nurses' perspectives and elaborates why and how these predicaments emerge. It also offers countermeasures for realistic contexts from the technique, education, and leadership aspects. Collaboration between governments, administrators, educators, technicians, and nurses is needed to step out of these predicaments.
Affiliation(s)
- Yihong Xu
- Zhejiang University School of Medicine

19
Wang B, Asan O, Mansouri M. Perspectives of Patients With Chronic Diseases on Future Acceptance of AI-Based Home Care Systems: Cross-Sectional Web-Based Survey Study. JMIR Hum Factors 2023; 10:e49788. [PMID: 37930780 PMCID: PMC10660233 DOI: 10.2196/49788] [Citation(s) in RCA: 0] [Received: 06/08/2023] [Revised: 08/18/2023] [Accepted: 10/05/2023] [Indexed: 11/07/2023]
Abstract
BACKGROUND Artificial intelligence (AI)-based home care systems and devices are being gradually integrated into health care delivery to benefit patients with chronic diseases. However, existing research mainly focuses on the technical and clinical aspects of AI application, with an insufficient investigation of patients' motivation and intention to adopt such systems. OBJECTIVE This study aimed to examine the factors that affect the motivation of patients with chronic diseases to adopt AI-based home care systems and provide empirical evidence for the proposed research hypotheses. METHODS We conducted a cross-sectional web-based survey with 222 patients with chronic diseases based on a hypothetical scenario. RESULTS The results indicated that patients have an overall positive perception of AI-based home care systems. Their attitudes toward the technology, perceived usefulness, and comfortability were found to be significant factors encouraging adoption, with a clear understanding of accountability being a particularly influential factor in shaping patients' attitudes toward their motivation to use these systems. However, privacy concerns persist as an indirect factor, affecting the perceived usefulness and comfortability, hence influencing patients' attitudes. CONCLUSIONS This study is one of the first to examine the motivation of patients with chronic diseases to adopt AI-based home care systems, offering practical insights for policy makers, care or technology providers, and patients. This understanding can facilitate effective policy formulation, product design, and informed patient decision-making, potentially improving the overall health status of patients with chronic diseases.
Affiliation(s)
- Bijun Wang
- Department of Business Analytics and Data Science, Florida Polytechnic University, Lakeland, FL, United States
- Onur Asan
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, United States
- Mo Mansouri
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, United States

20
Ramachandran M, Brinton C, Wiljer D, Upshur R, Gray CS. The impact of eHealth on relationships and trust in primary care: a review of reviews. BMC Primary Care 2023; 24:228. [PMID: 37919688 PMCID: PMC10623772 DOI: 10.1186/s12875-023-02176-5] [Citation(s) in RCA: 0] [Received: 02/18/2023] [Accepted: 10/11/2023] [Indexed: 11/04/2023]
Abstract
BACKGROUND Given the increasing integration of digital health technologies in team-based primary care, this review aimed at understanding the impact of eHealth on patient-provider and provider-provider relationships. METHODS A review of reviews was conducted on three databases to identify papers published in English from 2008 onwards. The impact of different types of eHealth on relationships and trust and the factors influencing the impact were thematically analyzed. RESULTS A total of 79 reviews were included. Patient-provider relationships were discussed more frequently as compared to provider-provider relationships. Communication systems like telemedicine were the most discussed type of technology. eHealth was found to have both positive and negative impacts on relationships and/or trust. This impact was influenced by a range of patient-related, provider-related, technology-related, and organizational factors, such as patient sociodemographics, provider communication skills, technology design, and organizational technology implementation, respectively. CONCLUSIONS Recommendations are provided for effective and equitable technology selection, application, and training to optimize the impact of eHealth on relationships and trust. The review findings can inform providers' and policymakers' decision-making around the use of eHealth in primary care delivery to facilitate relationship-building.
Affiliation(s)
- Meena Ramachandran
- Bridgepoint Collaboratory for Research and Innovation, Lunenfeld-Tanenbaum Research Institute, Sinai Health, 1 Bridgepoint Dr, Toronto, ON, M4M 2B5, Canada
- School of Physical and Occupational Therapy, McGill University, 3654 Promenade Sir-William-Osler, Montreal, QC, H3G 1Y5, Canada
- Christopher Brinton
- Bridgepoint Collaboratory for Research and Innovation, Lunenfeld-Tanenbaum Research Institute, Sinai Health, 1 Bridgepoint Dr, Toronto, ON, M4M 2B5, Canada
- Michael G. DeGroote School of Medicine, McMaster University, 1280 Main Street West, Hamilton, ON, L8S 4L8, Canada
- David Wiljer
- Education Technology Innovation, University Health Network, 190 Elizabeth St, Toronto, ON, M5G 2C4, Canada
- Department of Psychiatry, University of Toronto, 155 College St, Toronto, ON, M5T 3M6, Canada
- Institute for Health Policy, Management and Evaluation, University of Toronto, 155 College St, Toronto, ON, M5T 3M6, Canada
- Centre for Addiction and Mental Health, 1000 Queen St W, Toronto, ON, M6J 1H4, Canada
- Ross Upshur
- Bridgepoint Collaboratory for Research and Innovation, Lunenfeld-Tanenbaum Research Institute, Sinai Health, 1 Bridgepoint Dr, Toronto, ON, M4M 2B5, Canada
- Dalla Lana School of Public Health, University of Toronto, 155 College St, Toronto, ON, M5T 3M6, Canada
- Carolyn Steele Gray
- Bridgepoint Collaboratory for Research and Innovation, Lunenfeld-Tanenbaum Research Institute, Sinai Health, 1 Bridgepoint Dr, Toronto, ON, M4M 2B5, Canada
- Institute for Health Policy, Management and Evaluation, University of Toronto, 155 College St, Toronto, ON, M5T 3M6, Canada

21
Rajesh AE, Davidson OQ, Lee CS, Lee AY. Artificial Intelligence and Diabetic Retinopathy: AI Framework, Prospective Studies, Head-to-head Validation, and Cost-effectiveness. Diabetes Care 2023; 46:1728-1739. [PMID: 37729502 PMCID: PMC10516248 DOI: 10.2337/dci23-0032] [Citation(s) in RCA: 1] [Received: 04/25/2023] [Accepted: 07/15/2023] [Indexed: 09/22/2023]
Abstract
Current guidelines recommend that individuals with diabetes receive yearly eye exams for detection of referable diabetic retinopathy (DR), one of the leading causes of new-onset blindness. To address the immense screening burden, artificial intelligence (AI) algorithms have been developed to autonomously screen for DR from fundus photography without human input. Over the last 10 years, many AI algorithms have achieved good sensitivity and specificity (>85%) for detection of referable DR compared with human graders; however, many questions still remain. In this narrative review on AI in DR screening, we discuss key concepts in AI algorithm development as a background for understanding the algorithms. We present the AI algorithms that have been prospectively validated against human graders and demonstrate the variability of reference standards and cohort demographics. We review the limited head-to-head validation studies where investigators attempt to directly compare the available algorithms. Next, we discuss the literature regarding cost-effectiveness, equity and bias, and medicolegal considerations, all of which play a role in the implementation of these AI algorithms in clinical practice. Lastly, we highlight ongoing efforts to bridge gaps in AI model data sets to pursue equitable development and delivery.
Affiliation(s)
- Anand E. Rajesh
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
- Oliver Q. Davidson
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
- Cecilia S. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
- Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA

22
Momenaei B, Wakabayashi T, Shahlaee A, Durrani AF, Pandit SA, Wang K, Mansour HA, Abishek RM, Xu D, Sridhar J, Yonekawa Y, Kuriyan AE. Reply. Ophthalmol Retina 2023; 7:e15-e16. [PMID: 37379883 DOI: 10.1016/j.oret.2023.06.017] [Citation(s) in RCA: 0] [Received: 06/20/2023] [Accepted: 06/22/2023] [Indexed: 06/30/2023]
Affiliation(s)
- Bita Momenaei
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Taku Wakabayashi
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Abtin Shahlaee
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Asad F Durrani
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Saagar A Pandit
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Kristine Wang
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Hana A Mansour
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Robert M Abishek
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- David Xu
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Jayanth Sridhar
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Yoshihiro Yonekawa
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Ajay E Kuriyan
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania

23
Wang Y, Song Y, Ma Z, Han X. Multidisciplinary considerations of fairness in medical AI: A scoping review. Int J Med Inform 2023; 178:105175. [PMID: 37595374 DOI: 10.1016/j.ijmedinf.2023.105175] [Citation(s) in RCA: 0] [Received: 06/01/2023] [Revised: 08/02/2023] [Accepted: 08/04/2023] [Indexed: 08/20/2023]
Abstract
INTRODUCTION Artificial intelligence (AI) technology has developed significantly in recent years. The fairness of medical AI is of great concern because it bears directly on human life and health. This review analyses the existing research literature on fairness in medical AI from the perspectives of computer science, medical science, and social science (including law and ethics). The objective of the review is to examine the similarities and differences in the understanding of fairness, explore influencing factors, and investigate potential measures to implement fairness in medical AI across the English and Chinese literature. METHODS This study employed a scoping review methodology, searching the following databases for work on fairness issues in medical AI through February 2023: Web of Science, MEDLINE, PubMed, OVID, CNKI, WANFANG Data, etc. The search was conducted using keywords such as "artificial intelligence," "machine learning," "medical," "algorithm," "fairness," "decision-making," and "bias." The collected data were charted, synthesized, and subjected to descriptive and thematic analysis. RESULTS After 468 English papers and 356 Chinese papers were reviewed, 53 and 42, respectively, were included in the final analysis. Our results show that the three disciplines differ significantly in their research on the core issues. In addition to algorithmic bias and human bias, data is the foundation that affects fairness in medical AI. Legal, ethical, and technological measures all promote the implementation of fairness in medical AI. CONCLUSIONS Our review indicates a consensus across the multidisciplinary perspectives regarding the importance of data fairness as the foundation for achieving fairness in medical AI. However, there are substantial discrepancies in core aspects such as the concept, influencing factors, and implementation measures of fairness in medical AI. Consequently, future research should facilitate interdisciplinary discussions to bridge the cognitive gaps between different fields and enhance the practical implementation of fairness in medical AI.
Affiliation(s)
- Yue Wang
- School of Law, Xi'an Jiaotong University, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China
- Yaxin Song
- School of Law, Xi'an Jiaotong University, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China
- Zhuo Ma
- School of Law, Xi'an Jiaotong University, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China
- Xiaoxue Han
- Xi'an Jiaotong University Library, No.28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China

24
Borges do Nascimento IJ, Abdulazeem H, Vasanthan LT, Martinez EZ, Zucoloto ML, Østengaard L, Azzopardi-Muscat N, Zapata T, Novillo-Ortiz D. Barriers and facilitators to utilizing digital health technologies by healthcare professionals. NPJ Digit Med 2023; 6:161. [PMID: 37723240 PMCID: PMC10507089 DOI: 10.1038/s41746-023-00899-4] [Citation(s) in RCA: 3] [Received: 11/04/2022] [Accepted: 08/01/2023] [Indexed: 09/20/2023]
Abstract
Digital technologies are changing the healthcare environment, and several studies have suggested barriers to and facilitators of the use of digital interventions by healthcare professionals (HPs). We consolidated the evidence from existing systematic reviews on barriers and facilitators for the use of digital health technologies by HPs. Electronic searches were performed in five databases (Cochrane Database of Systematic Reviews, Embase®, Epistemonikos, MEDLINE®, and Scopus) from inception to March 2023. We included reviews that reported barrier or facilitator factors for the use of technology solutions among HPs. Data abstraction, methodological assessment, and appraisal of the certainty of the evidence were performed by at least two authors. Overall, we included 108 reviews involving physicians, pharmacists, and nurses. High-quality evidence suggested that infrastructure and technical barriers (Relative Frequency of Occurrence [RFO] 6.4% [95% CI 2.9-14.1]), psychological and personal issues (RFO 5.3% [95% CI 2.2-12.7]), and concerns about increased working hours or workload (RFO 3.9% [95% CI 1.5-10.1]) were common concerns reported by HPs. Likewise, high-quality evidence supports that training/educational programs, multisector incentives, and the perception of technology effectiveness facilitate the adoption of digital technologies by HPs (RFO 3.8% [95% CI 1.8-7.9]). Our findings showed that infrastructure and technical issues, psychological barriers, and workload-related concerns are relevant barriers to the comprehensive and holistic adoption of digital health technologies by HPs. Conversely, deploying training, evaluating HPs' perceptions of usefulness and willingness to use, and multi-stakeholder incentives are vital enablers of HP adoption of digital interventions.
Collapse
Affiliation(s)
- Israel Júnior Borges do Nascimento
- Division of Country Health Policies and Systems (CPS), World Health Organization Regional Office for Europe, Copenhagen, 2100, Denmark
- Pathology and Laboratory Medicine, Medical College of Wisconsin, Milwaukee, WI, 53226-3522, USA
| | - Hebatullah Abdulazeem
- Department of Sport and Health Science, Technische Universität München, Munich, 80333, Germany
| | - Lenny Thinagaran Vasanthan
- Physical Medicine and Rehabilitation Department, Christian Medical College, Vellore, Tamil Nadu, 632004, India
| | - Edson Zangiacomi Martinez
- Department of Social Medicine and Biostatistics, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, São Paulo, 14049-900, Brazil
| | - Miriane Lucindo Zucoloto
- Department of Social Medicine and Biostatistics, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, São Paulo, 14049-900, Brazil
| | - Lasse Østengaard
- Centre for Evidence-Based Medicine Odense (CEBMO) and Cochrane Denmark, Department of Clinical Research, University Library of Southern Denmark, Odense, 5230, Denmark
| | - Natasha Azzopardi-Muscat
- Division of Country Health Policies and Systems (CPS), World Health Organization Regional Office for Europe, Copenhagen, 2100, Denmark
| | - Tomas Zapata
- Division of Country Health Policies and Systems (CPS), World Health Organization Regional Office for Europe, Copenhagen, 2100, Denmark
| | - David Novillo-Ortiz
- Division of Country Health Policies and Systems (CPS), World Health Organization Regional Office for Europe, Copenhagen, 2100, Denmark.
| |
Collapse
|
25
|
Steerling E, Siira E, Nilsen P, Svedberg P, Nygren J. Implementing AI in healthcare-the relevance of trust: a scoping review. FRONTIERS IN HEALTH SERVICES 2023; 3:1211150. [PMID: 37693234 PMCID: PMC10484529 DOI: 10.3389/frhs.2023.1211150] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Accepted: 08/11/2023] [Indexed: 09/12/2023]
Abstract
Background Despite its rapid development, the translation of AI and its potential benefits into practice in healthcare services has been slow. Trust in AI is an important aspect of implementation processes. Without a clear understanding of trust, effective implementation strategies cannot be developed, nor will AI advance despite the significant investments and possibilities. Objective This study aimed to explore the scientific literature regarding how trust in AI in relation to implementation in healthcare is conceptualized and what influences it. Methods This scoping review searched five scientific databases to identify publications related to the study aims. Articles were included if they were peer-reviewed, published in English, and published after 2012. Two independent reviewers conducted abstract and full-text review and carried out a thematic analysis with an inductive approach to address the study aims. The review was reported in accordance with the PRISMA-ScR guidelines. Results A total of eight studies were included in the final review. We found that trust was conceptualized in different ways. Most empirical studies took an individual perspective in which trust was directed toward the technology's capability. Two studies focused on trust as relational between people in the context of the AI application rather than as trust in the technology itself. Trust was also understood through its determinants and as having a mediating role, positioned between characteristics and AI use. The thematic analysis yielded three themes of influences on trust in AI in relation to implementation in healthcare: individual characteristics, AI characteristics, and contextual characteristics. Conclusions Findings showed that the conceptualization of trust in AI differed between the studies, as did the determinants they accounted for as influencing trust. 
Few studies looked beyond individual characteristics and AI characteristics. Future empirical research addressing trust in AI in relation to implementation in healthcare should have a more holistic view of the concept to be able to manage the many challenges, uncertainties, and perceived risks.
Collapse
Affiliation(s)
- Emilie Steerling
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
| | - Elin Siira
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
| | - Per Nilsen
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
| | - Petra Svedberg
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
| | - Jens Nygren
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
| |
Collapse
|
26
|
Asif A, Rajpoot K, Graham S, Snead D, Minhas F, Rajpoot N. Unleashing the potential of AI for pathology: challenges and recommendations. J Pathol 2023; 260:564-577. [PMID: 37550878 PMCID: PMC10952719 DOI: 10.1002/path.6168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Revised: 06/21/2023] [Accepted: 06/22/2023] [Indexed: 08/09/2023]
Abstract
Computational pathology is currently witnessing a surge in the development of AI techniques, offering promise for achieving breakthroughs and significantly impacting the practices of pathology and oncology. These AI methods bring with them the potential to revolutionize diagnostic pipelines as well as treatment planning and overall patient care. Numerous peer-reviewed studies reporting remarkable performance across diverse tasks serve as testimony to the potential of AI in the field. However, widespread adoption of these methods in clinical and pre-clinical settings remains a challenge. In this review article, we present a detailed analysis of the major obstacles encountered during the development of effective models and their deployment in practice. We aim to provide readers with an overview of the latest developments, insights to help identify specific challenges that may require resolution, and recommendations and potential future research directions. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Collapse
Affiliation(s)
- Amina Asif
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
| | - Kashif Rajpoot
- School of Computer Science, University of Birmingham, Birmingham, UK
| | - Simon Graham
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
| | - David Snead
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, UK
| | - Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
| | - Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- The Alan Turing Institute, London, UK
| |
Collapse
|
27
|
Borges do Nascimento IJ, Abdulazeem HM, Vasanthan LT, Martinez EZ, Zucoloto ML, Østengaard L, Azzopardi-Muscat N, Zapata T, Novillo-Ortiz D. The global effect of digital health technologies on health workers' competencies and health workplace: an umbrella review of systematic reviews and lexical-based and sentence-based meta-analysis. Lancet Digit Health 2023; 5:e534-e544. [PMID: 37507197 PMCID: PMC10397356 DOI: 10.1016/s2589-7500(23)00092-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 03/31/2023] [Accepted: 04/29/2023] [Indexed: 07/30/2023]
Abstract
Systematic reviews have quantified the effectiveness, feasibility, acceptability, and cost-effectiveness of digital health technologies (DHTs) used by health-care workers. We aimed to collate available evidence on technologies' effect on health-care workers' competencies and performance. We searched the Cochrane Database of Systematic Reviews, Embase, MEDLINE, Epistemonikos, and Scopus for reviews published from database inception to March 1, 2023. Studies assessing the effects of DHTs on the organisational, socioeconomic, clinical, and epidemiological levels within the workplace, and on health-care workers' performance parameters, were included. Data were extracted and clustered into 25 domains using vote counting based on the direction of effect. The relative frequency of occurrence (RFO) of each domain was estimated using R software. AMSTAR-2 tool was used to appraise the quality of reporting, and the Confidence in the Evidence from Reviews of Qualitative research approach developed by Grading of Recommendations Assessment, Development and Evaluation was used to analyse the certainty of evidence among included studies. The 12 794 screened reviews generated 132 eligible records for assessment. Top-ranked RFO identifiers showed associations of DHT with the enhancement of health-care workers' performance (10·9% [95% CI 5·3-22·5]), improvement of clinical practice and management (9·8% [3·9-24·2]), and improvement of care delivery and access to care (9·2% [4·1-20·9]). Our overview found that DHTs positively influence the daily practice of health-care workers in various medical specialties. However, poor reporting in crucial domains is widely prevalent in reviews of DHT, hindering our findings' generalisability and interpretation. Likewise, most of the included reviews reported substantially more data from high-income countries. 
Improving the reporting of future studies and focusing on low-income and middle-income countries might elucidate and answer current knowledge gaps.
Collapse
Affiliation(s)
- Israel Júnior Borges do Nascimento
- Pathology and Laboratory Medicine, Medical College of Wisconsin, Milwaukee, WI, USA; School of Medicine and University Hospital, Federal University of Minas Gerais, Belo Horizonte, Brazil; Division of Country Health Policies and Systems, World Health Organization Regional Office for Europe, Copenhagen, Denmark
| | | | | | - Edson Zangiacomi Martinez
- Department of Social Medicine-Biostatistics, Ribeirão Preto Medical School, University of São Paulo, São Paulo, Brazil
| | - Miriane Lucindo Zucoloto
- Department of Social Medicine-Biostatistics, Ribeirão Preto Medical School, University of São Paulo, São Paulo, Brazil
| | - Lasse Østengaard
- Centre for Evidence-Based Medicine Odense and Cochrane Denmark, Department of Clinical Research, University of Southern Denmark, Odense, Denmark; University Library of Southern Denmark, University of Southern Denmark, Odense, Denmark
| | - Natasha Azzopardi-Muscat
- Division of Country Health Policies and Systems, World Health Organization Regional Office for Europe, Copenhagen, Denmark
| | - Tomas Zapata
- Division of Country Health Policies and Systems, World Health Organization Regional Office for Europe, Copenhagen, Denmark
| | - David Novillo-Ortiz
- Division of Country Health Policies and Systems, World Health Organization Regional Office for Europe, Copenhagen, Denmark.
| |
Collapse
|
28
|
Kallweit U, Marson AG. Neurology beyond big data - the ninth Congress of the EAN. Nat Rev Neurol 2023:10.1038/s41582-023-00837-8. [PMID: 37393314 DOI: 10.1038/s41582-023-00837-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/03/2023]
Affiliation(s)
- Ulf Kallweit
- Department of Health, University of Witten/Herdecke, Witten, Germany
| | - Anthony G Marson
- Department of Pharmacology and Therapeutics, University of Liverpool, Liverpool, UK.
- The Walton Centre, NHS Foundation Trust, Liverpool, UK.
| |
Collapse
|
29
|
Banda JM, Shah NH, Periyakoil VS. Characterizing subgroup performance of probabilistic phenotype algorithms within older adults: a case study for dementia, mild cognitive impairment, and Alzheimer's and Parkinson's diseases. JAMIA Open 2023; 6:ooad043. [PMID: 37397506 PMCID: PMC10307941 DOI: 10.1093/jamiaopen/ooad043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 06/06/2023] [Accepted: 06/22/2023] [Indexed: 07/04/2023] Open
Abstract
Objective Biases within probabilistic electronic phenotyping algorithms are largely unexplored. In this work, we characterize differences in subgroup performance of phenotyping algorithms for Alzheimer's disease and related dementias (ADRD) in older adults. Materials and methods We created an experimental framework to characterize the performance of probabilistic phenotyping algorithms under different racial distributions, allowing us to identify which algorithms may have differential performance, by how much, and under what conditions. We relied on rule-based phenotype definitions as a reference to evaluate probabilistic phenotype algorithms created using the Automated PHenotype Routine for Observational Definition, Identification, Training and Evaluation framework. Results We demonstrate that some algorithms have performance variations of anywhere from 3% to 30% for different populations, even when not using race as an input variable. We show that while performance differences in subgroups are not present for all phenotypes, they do affect some phenotypes and groups more disproportionately than others. Discussion Our analysis establishes the need for a robust evaluation framework for subgroup differences. The underlying patient populations for the algorithms showing subgroup performance differences vary greatly in model features compared with the phenotypes showing little to no difference. Conclusion We have created a framework to identify systematic differences in the performance of probabilistic phenotyping algorithms, specifically in the context of ADRD as a use case. Differences in subgroup performance of probabilistic phenotyping algorithms are neither widespread nor consistent. This highlights the great need for careful ongoing monitoring to evaluate, measure, and try to mitigate such differences.
Collapse
Affiliation(s)
- Juan M Banda
- Corresponding Author: Juan M. Banda, PhD, Department of Computer Science, College of Arts and Sciences, Georgia State University, 25 Park Place, Suite 752, Atlanta, GA 30303, USA;
| | - Nigam H Shah
- Stanford Center for Biomedical Informatics Research, Stanford University School of Medicine, Stanford, California, USA
| | - Vyjeyanthi S Periyakoil
- Stanford Department of Medicine, Palo Alto, California, USA
- VA Palo Alto Health Care System, Palo Alto, California, USA
| |
Collapse
|
30
|
Sun K, Zheng X, Liu W. Increasing clinical medical service satisfaction: An investigation into the impacts of Physicians' use of clinical decision-making support AI on patients' service satisfaction. Int J Med Inform 2023; 176:105107. [PMID: 37257235 DOI: 10.1016/j.ijmedinf.2023.105107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Revised: 04/12/2023] [Accepted: 05/19/2023] [Indexed: 06/02/2023]
Abstract
BACKGROUND The medical industry is a key industry for the application of artificial intelligence (AI). Although it is believed that the combination of clinical decision support systems (CDSS) and physicians could improve medical services, there are still many concerns about the use of CDSS. Based on these concerns, few studies have answered the question of whether, when a physician makes a decision independently or with AI's help, there are differences in patients' satisfaction with the medical service. METHODS This study uses service fairness theory as a theoretical lens and employs three vignette experiments to address this research gap. A total of 740 subjects were recruited to participate in the three experiments. Group comparison methods and a structural equation model were used to verify the hypotheses. RESULTS The experimental results reveal that: (1) physicians using AI can reduce patients' service satisfaction (Mdifference = 0.404, p = 0.004); (2) the negative relationship between AI use and service satisfaction is partially mediated by distributive fairness and procedural fairness; (3) physicians actively informing their patients about the use of AI can help mitigate the reduction in service satisfaction (Mdifference = 0.400, p = 0.003) and in the three types of fairness (Mdifference, distributive = 0.307, p = 0.042; Mdifference, procedural = 0.483, p < 0.001; Mdifference, interactional = 0.253, p = 0.027). CONCLUSION This study investigates the effect of physicians' use of decision-making support AI on their patients' service satisfaction. These results contribute to the existing literature on AI and fairness theory and help in formulating practical suggestions for medical staff and AI development companies.
Collapse
Affiliation(s)
- Kai Sun
- School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan, China.
| | - Xiangwei Zheng
- School of Information Science and Engineering, Shandong Normal University, Jinan, China
| | - Weilong Liu
- School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan, China
| |
Collapse
|
31
|
Neri E, Aghakhanyan G, Zerunian M, Gandolfo N, Grassi R, Miele V, Giovagnoni A, Laghi A. Explainable AI in radiology: a white paper of the Italian Society of Medical and Interventional Radiology. LA RADIOLOGIA MEDICA 2023:10.1007/s11547-023-01634-5. [PMID: 37155000 DOI: 10.1007/s11547-023-01634-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Accepted: 04/19/2023] [Indexed: 05/10/2023]
Abstract
The term explainable artificial intelligence (xAI) groups together the scientific body of knowledge developed in the search for methods to explain the inner logic behind an AI algorithm and its model inference based on knowledge-based interpretability. xAI is now generally recognized as a core area of AI. A variety of xAI methods are currently available to researchers; nonetheless, a comprehensive classification of xAI methods is still lacking. In addition, there is no consensus among researchers as to what exactly an explanation is and which salient properties must be considered to make it understandable for every end user. The SIRM presents this xAI white paper, which is intended to aid radiologists, medical practitioners, and scientists in understanding the emerging field of xAI, the black-box problem behind the success of AI, the xAI methods that turn the black box into a glass box, and the role and responsibilities of radiologists in the appropriate use of AI technology. Because AI is changing and evolving rapidly, a definitive conclusion or solution is far from being defined. However, one of our greatest responsibilities is to keep up with change in a critical manner. In fact, ignoring and discrediting the advent of AI a priori will not curb its use but could result in its application without awareness. Therefore, learning and increasing our knowledge about this very important technological change will allow us to put AI at our service and at the service of patients in a conscious way, pushing this paradigm shift as far as it will benefit us.
Collapse
Affiliation(s)
- Emanuele Neri
- Academic Radiology, Department of Translational Research and of New Surgical and Medical Technology, University of Pisa, Pisa, Italy
| | - Gayane Aghakhanyan
- Academic Radiology, Department of Translational Research and of New Surgical and Medical Technology, University of Pisa, Pisa, Italy.
| | - Marta Zerunian
- Medical-Surgical Sciences and Translational Medicine, Sapienza University of Rome, Sant'Andrea Hospital, Rome, Italy
| | - Nicoletta Gandolfo
- Diagnostic Imaging Department, VillaScassi Hospital-ASL 3, Corso Scassi 1, Genoa, Italy
| | - Roberto Grassi
- Radiology Unit, Università Degli Studi Della Campania Luigi Vanvitelli, Naples, Italy
| | - Vittorio Miele
- Department of Radiology, Careggi University Hospital, Florence, Italy
| | - Andrea Giovagnoni
- Department of Radiological Sciences, Radiology Clinic, Azienda Ospedaliera Universitaria, Ospedali Riuniti Di Ancona, Ancona, Italy
| | - Andrea Laghi
- Medical-Surgical Sciences and Translational Medicine, Sapienza University of Rome, Sant'Andrea Hospital, Rome, Italy
| |
Collapse
|
32
|
Jung J, Lee H, Jung H, Kim H. Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon 2023; 9:e16110. [PMID: 37234618 PMCID: PMC10205582 DOI: 10.1016/j.heliyon.2023.e16110] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2022] [Revised: 03/26/2023] [Accepted: 05/05/2023] [Indexed: 05/28/2023] Open
Abstract
Background Significant advancements in the field of information technology have influenced the creation of trustworthy explainable artificial intelligence (XAI) in healthcare. Despite improved performance of XAI, XAI techniques have not yet been integrated into real-time patient care. Objective The aim of this systematic review is to understand the trends and gaps in research on XAI through an assessment of the essential properties of XAI and an evaluation of explanation effectiveness in the healthcare field. Methods A search of PubMed and Embase databases for relevant peer-reviewed articles on development of an XAI model using clinical data and evaluating explanation effectiveness published between January 1, 2011, and April 30, 2022, was conducted. All retrieved papers were screened independently by the two authors. Relevant papers were also reviewed for identification of the essential properties of XAI (e.g., stakeholders and objectives of XAI, quality of personalized explanations) and the measures of explanation effectiveness (e.g., mental model, user satisfaction, trust assessment, task performance, and correctability). Results Six out of 882 articles met the criteria for eligibility. Artificial Intelligence (AI) users were the most frequently described stakeholders. XAI served various purposes, including evaluation, justification, improvement, and learning from AI. Evaluation of the quality of personalized explanations was based on fidelity, explanatory power, interpretability, and plausibility. User satisfaction was the most frequently used measure of explanation effectiveness, followed by trust assessment, correctability, and task performance. The methods of assessing these measures also varied. Conclusion XAI research should address the lack of a comprehensive and agreed-upon framework for explaining XAI and standardized approaches for evaluating the effectiveness of the explanation that XAI provides to diverse AI stakeholders.
Collapse
Affiliation(s)
- Jinsun Jung
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Center for Human-Caring Nurse Leaders for the Future by Brain Korea 21 (BK 21) Four Project, College of Nursing, Seoul National University, Seoul, Republic of Korea
| | - Hyungbok Lee
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Emergency Nursing Department, Seoul National University Hospital, Seoul, Republic of Korea
| | - Hyunggu Jung
- Department of Computer Science and Engineering, University of Seoul, Seoul, Republic of Korea
- Department of Artificial Intelligence, University of Seoul, Seoul, Republic of Korea
| | - Hyeoneui Kim
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Research Institute of Nursing Science, College of Nursing, Seoul National University, Seoul, Republic of Korea
| |
Collapse
|
33
|
Avila FR, Boczar D, Spaulding AC, Quest DJ, Samanta A, Torres-Guzman RA, Maita KC, Garcia JP, Eldaly AS, Forte AJ. High Satisfaction With a Virtual Assistant for Plastic Surgery Frequently Asked Questions. Aesthet Surg J 2023; 43:494-503. [PMID: 36353923 DOI: 10.1093/asj/sjac290] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Revised: 11/03/2022] [Accepted: 11/03/2022] [Indexed: 11/11/2022] Open
Abstract
BACKGROUND Most of a surgeon's office time is dedicated to patient education, which limits the development of an appropriate patient-physician relationship. Telephone-accessed artificial intelligence virtual assistants (AIVAs) that simulate a human conversation and answer preoperative frequently asked questions (FAQs) can be an effective solution to this problem. An AIVA capable of answering preoperative plastic surgery-related FAQs has previously been described by the authors. OBJECTIVES The aim of this paper was to determine patients' perception of and satisfaction with an AIVA. METHODS Twenty-six adult patients from a plastic surgery service answered a 3-part survey consisting of: (1) an evaluation of the answers' correctness, (2) their agreement with the feasibility, usefulness, and future uses of the AIVA, and (3) a section for comments. The first part made it possible to measure the system's accuracy, and the second to evaluate perception and satisfaction. The data were analyzed with Microsoft Excel 2010 (Microsoft Corporation, Redmond, WA). RESULTS The AIVA correctly answered the patients' questions 98.5% of the time, and the topic with the lowest accuracy was "nausea." Additionally, 88% of patients agreed with the statements in the second part of the survey. Thus, the patients' perception was positive and overall satisfaction with the AIVA was high. Patients agreed the least with using the AIVA to select their surgical procedure. The comments provided areas of improvement for subsequent stages of the project. CONCLUSIONS The results show that patients were satisfied and reported a positive experience using the AIVA to answer plastic surgery FAQs before surgery. The system is also highly accurate.
Collapse
|
34
|
Bajgain B, Lorenzetti D, Lee J, Sauro K. Determinants of implementing artificial intelligence-based clinical decision support tools in healthcare: a scoping review protocol. BMJ Open 2023; 13:e068373. [PMID: 36822813 PMCID: PMC9950925 DOI: 10.1136/bmjopen-2022-068373] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/25/2023] Open
Abstract
INTRODUCTION Artificial intelligence (AI), the simulation of human intelligence processes by machines, is being increasingly leveraged to facilitate clinical decision-making. AI-based clinical decision support (CDS) tools can improve the quality of care and appropriate use of healthcare resources, and decrease healthcare provider burnout. Understanding the determinants of implementing AI-based CDS tools in healthcare delivery is vital to reap the benefits of these tools. The objective of this scoping review is to map and synthesise determinants (barriers and facilitators) to implementing AI-based CDS tools in healthcare. METHODS AND ANALYSIS This scoping review will follow the Joanna Briggs Institute methodology and the Preferred Reporting Items for Systematic reviews and Meta-Analysis extension for Scoping Reviews checklist. The search terms will be tailored to each database, which includes MEDLINE, Embase, CINAHL, APA PsycINFO and the Cochrane Library. Grey literature and references of included studies will also be searched. The search will include studies published from database inception until 10 May 2022. We will not limit searches by study design or language. Studies that either report determinants or describe the implementation of AI-based CDS tools in clinical practice or/and healthcare settings will be included. The identified determinants (barriers and facilitators) will be described by synthesising the themes using the Theoretical Domains Framework. The outcome variables measured will be mapped and the measures of effectiveness will be summarised using descriptive statistics. ETHICS AND DISSEMINATION Ethics approval is not required because all data for this study have been previously published. The findings of this review will be published in a peer-reviewed journal and presented at academic conferences. 
Importantly, the findings of this scoping review will be widely presented to decision-makers, health system administrators, healthcare providers, and patients and family/caregivers as part of an implementation study of an AI-based CDS for the treatment of coronary artery disease.
Collapse
Affiliation(s)
- Bishnu Bajgain
- Department of Community Health Sciences, University of Calgary, Calgary, Alberta, Canada
- Diane Lorenzetti
- Department of Community Health Sciences, University of Calgary, Calgary, Alberta, Canada
- Joon Lee
- Department of Community Health Sciences, University of Calgary, Calgary, Alberta, Canada
- Department of Cardiac Sciences, University of Calgary, Calgary, Alberta, Canada
- Khara Sauro
- Departments of Community Health Sciences, Surgery & Oncology, University of Calgary, Calgary, Alberta, Canada
|
35
|
Siddiqui MF, Alam A, Kalmatov R, Mouna A, Villela R, Mitalipova A, Mrad YN, Rahat SAA, Magarde BK, Muhammad W, Sherbaevna SR, Tashmatova N, Islamovna UG, Abuassi MA, Parween Z. Leveraging Healthcare System with Nature-Inspired Computing Techniques: An Overview and Future Perspective. NATURE-INSPIRED INTELLIGENT COMPUTING TECHNIQUES IN BIOINFORMATICS 2023:19-42. [DOI: 10.1007/978-981-19-6379-7_2] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/24/2024]
|
36
|
Čartolovni A, Malešević A, Poslon L. Critical analysis of the AI impact on the patient-physician relationship: A multi-stakeholder qualitative study. Digit Health 2023; 9:20552076231220833. [PMID: 38130798 PMCID: PMC10734361 DOI: 10.1177/20552076231220833] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Accepted: 11/29/2023] [Indexed: 12/23/2023] Open
Abstract
Objective This qualitative study aims to present the aspirations, expectations and critical analysis of the potential for artificial intelligence (AI) to transform the patient-physician relationship, according to multi-stakeholder insight. Methods This study was conducted from June to December 2021, using an anticipatory ethics approach and the sociology of expectations as its theoretical frameworks. It focused on three groups of stakeholders directly involved in the adoption of AI in medicine (n = 38): physicians (n = 12), patients (n = 15) and healthcare managers (n = 11). Results Interviews were conducted with the patients (15/38, 40% of the sample), the physicians (12/38, 31%) and the health managers (11/38, 29%). The findings highlight the following: (1) the impact of AI on fundamental aspects of the patient-physician relationship and the underlying importance of a synergistic relationship between the physician and AI; (2) the potential for AI to alleviate workload and reduce administrative burden by saving time and putting the patient at the centre of the caring process; and (3) the potential risk to the holistic approach posed by neglecting humanness in healthcare. Conclusions This multi-stakeholder qualitative study, which focused on the micro-level of healthcare decision-making, sheds new light on the impact of AI on healthcare and the potential transformation of the patient-physician relationship. The results highlight the need for a critically aware approach to the implementation of AI in healthcare, applying critical thinking and reasoning. It is important not to rely solely upon the recommendations of AI while neglecting clinical reasoning and physicians' knowledge of best clinical practices.
Instead, it is vital that the core values of the existing patient-physician relationship - such as trust and honesty, conveyed through open and sincere communication - are preserved.
Affiliation(s)
- Anto Čartolovni
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
- School of Medicine, Catholic University of Croatia, Zagreb, Croatia
- Anamaria Malešević
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
- Luka Poslon
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
|
37
|
Liu CF, Chen ZC, Kuo SC, Lin TC. Does AI explainability affect physicians’ intention to use AI? Int J Med Inform 2022; 168:104884. [DOI: 10.1016/j.ijmedinf.2022.104884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Revised: 09/24/2022] [Accepted: 09/30/2022] [Indexed: 11/06/2022]
|
38
|
Ethical Risk Factors and Mechanisms in Artificial Intelligence Decision Making. Behav Sci (Basel) 2022; 12:bs12090343. [PMID: 36135147 PMCID: PMC9495402 DOI: 10.3390/bs12090343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2022] [Revised: 09/12/2022] [Accepted: 09/14/2022] [Indexed: 11/29/2022] Open
Abstract
While artificial intelligence (AI) technology can enhance social wellbeing and progress, it also generates ethical decision-making dilemmas such as algorithmic discrimination, data bias and unclear accountability. In this paper, we identify the ethical risk factors of AI decision making through qualitative research, construct a risk-factor model of the ethical risks of AI decision making using grounded theory, and explore the mechanisms by which these risks interact using system dynamics, on which basis risk management strategies are proposed. We find that technological uncertainty, incomplete data and management errors are the main sources of ethical risk in AI decision making, and that the intervention of risk governance elements can effectively block the social risks arising from algorithmic, technological and data risks. Accordingly, we propose strategies for governing the ethical risks of AI decision making from the perspectives of management, research and development.
|
39
|
Mullins M, Himly M, Llopis IR, Furxhi I, Hofer S, Hofstätter N, Wick P, Romeo D, Kühnel D, Siivola K, Catalán J, Hund-Rinke K, Xiarchos I, Linehan S, Schuurbiers D, Bilbao AG, Barruetabeña L, Drobne D. (Re)Conceptualizing decision-making tools in a risk governance framework for emerging technologies - the case of nanomaterials. ENVIRONMENT SYSTEMS & DECISIONS 2022; 43:3-15. [PMID: 35912374 PMCID: PMC9309004 DOI: 10.1007/s10669-022-09870-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Accepted: 07/06/2022] [Indexed: 12/03/2022]
Abstract
The utility of decision-making tools for the risk governance of nanotechnology is at the core of this paper. Those working in nanotechnology risk management have been prolific in creating such tools, many derived from European FP7- and H2020-funded projects. What is less clear is how such tools might assist the overarching ambition of creating a fair system of risk governance. In this paper, we reflect upon the role that tools might, and should, play in any system of risk governance. With many tools designed for the risk governance of this emerging technology falling into disuse, this paper provides an overview of extant tools and addresses their potential shortcomings. We also posit the need for a data readiness tool. With the EU's NMP13 family of research consortia about to report to the Commission on ways forward in terms of risk governance of this domain, this is a timely intervention on an important element of any risk governance system.
Affiliation(s)
- Martin Mullins
- Transgero Limited, Cullinagh, Newcastle West, Co. Limerick, Ireland
- Department of Accounting and Finance, Kemmy Business School, University of Limerick, Limerick, Ireland
- Martin Himly
- Department of Biosciences, Paris Lodron University of Salzburg (PLUS), 5020 Salzburg, Austria
- Isabel Rodríguez Llopis
- GAIKER Technology Centre, Basque Research and Technology Alliance (BRTA), Gipuzkoa, Spain
- Irini Furxhi
- Transgero Limited, Cullinagh, Newcastle West, Co. Limerick, Ireland
- Department of Accounting and Finance, Kemmy Business School, University of Limerick, Limerick, Ireland
- Sabine Hofer
- Department of Biosciences, Paris Lodron University of Salzburg (PLUS), 5020 Salzburg, Austria
- Norbert Hofstätter
- Department of Biosciences, Paris Lodron University of Salzburg (PLUS), 5020 Salzburg, Austria
- Peter Wick
- Particles-Biology Interactions Laboratory, Empa, Swiss Federal Laboratories for Materials Science and Technology, Lerchenfeldstrasse 5, 9014 St. Gallen, Switzerland
- Daina Romeo
- Particles-Biology Interactions Laboratory, Empa, Swiss Federal Laboratories for Materials Science and Technology, Lerchenfeldstrasse 5, 9014 St. Gallen, Switzerland
- Dana Kühnel
- Department Bioanalytical Ecotoxicology (BIOTOX), Helmholtz Centre for Environmental Research - UFZ, Permoserstraße 15, 04318 Leipzig, Germany
- Kirsi Siivola
- Finnish Institute of Occupational Health, Työterveyslaitos, Box 40, 00032 Helsinki, Finland
- Julia Catalán
- Finnish Institute of Occupational Health, Työterveyslaitos, Box 40, 00032 Helsinki, Finland
- Department of Anatomy, Embryology and Genetics, University of Zaragoza, Saragossa, Spain
- Kerstin Hund-Rinke
- Fraunhofer Institute for Molecular Biology and Applied Ecology IME, Auf dem Aberg 1, 57392 Schmallenberg, Germany
- Ioannis Xiarchos
- Research Lab of Advanced Composite, Nanomaterials, and Nanotechnology (R-NanoLab), School of Chemical Engineering, National Technical University of Athens, 9 Heroon Polytechniou str, 15780 Zographos, Athens, Greece
- Shona Linehan
- Management, Cairnes School of Business and Economics, National University of Ireland Galway, Galway, Ireland
- Daan Schuurbiers
- De Proeffabriek, Josef Israelslaan 63, NL-6813 JB Arnhem, The Netherlands
- Amaia García Bilbao
- GAIKER Technology Centre, Basque Research and Technology Alliance (BRTA), Gipuzkoa, Spain
- Leire Barruetabeña
- GAIKER Technology Centre, Basque Research and Technology Alliance (BRTA), Gipuzkoa, Spain
- Damjana Drobne
- Department of Biology, Biotechnical Faculty, University of Ljubljana, Ljubljana, Slovenia
|
40
|
Schmitz-Luhn B, Chandler J. Ethical and Legal Aspects of Technology-Assisted Care in Neurodegenerative Disease. J Pers Med 2022; 12:jpm12061011. [PMID: 35743795 PMCID: PMC9225587 DOI: 10.3390/jpm12061011] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 06/17/2022] [Accepted: 06/18/2022] [Indexed: 11/16/2022] Open
Abstract
Technological solutions are increasingly seen as a way to respond to the demands of managing complex chronic conditions, especially neurodegenerative diseases such as Parkinson's Disease. These new possibilities offer a variety of opportunities to improve the lives of affected persons and their families, friends and caregivers. However, a number of challenges should also be considered in order to safeguard the interests of affected persons. In this article, we discuss the ethical and legal considerations associated with the use of technology-assisted care in the context of neurodegenerative conditions.
Affiliation(s)
- Bjoern Schmitz-Luhn
- Center for Life Ethics, Bonn University, 53113 Bonn, Germany
- Correspondence: ; Tel.: +49-228-73-66100
- Jennifer Chandler
- Bertram Loeb Research Chair, Centre for Health Law, Policy and Ethics, University of Ottawa, Ottawa, ON K1N 6N5, Canada
|
41
|
An Evolution Gaining Momentum—The Growing Role of Artificial Intelligence in the Diagnosis and Treatment of Spinal Diseases. Diagnostics (Basel) 2022; 12:diagnostics12040836. [PMID: 35453884 PMCID: PMC9025301 DOI: 10.3390/diagnostics12040836] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Revised: 03/23/2022] [Accepted: 03/28/2022] [Indexed: 11/17/2022] Open
Abstract
In recent years, applications using artificial intelligence have been gaining importance in the diagnosis and treatment of spinal diseases. In our review, we describe the basic features of artificial intelligence currently applied in the field of spine diagnosis and treatment, and we provide an orientation to recent technical developments and their applications. Furthermore, we point out possible limitations and challenges in dealing with such technological advances. Despite current limitations in practical application, artificial intelligence is gaining ground in the field of spine treatment. Practising physicians should therefore engage with it in order to benefit from these advances in the interest of the patient and to prevent such applications from being misused by non-medical parties.
|
42
|
Castor D, Saidu R, Boa R, Mbatani N, Mutsvangwa TEM, Moodley J, Denny L, Kuhn L. Assessment of the implementation context in preparation for a clinical study of machine-learning algorithms to automate the classification of digital cervical images for cervical cancer screening in resource-constrained settings. FRONTIERS IN HEALTH SERVICES 2022; 2:1000150. [PMID: 36925850 PMCID: PMC10012690 DOI: 10.3389/frhs.2022.1000150] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Accepted: 08/23/2022] [Indexed: 11/13/2022]
Abstract
Introduction We assessed the implementation context and image quality in preparation for a clinical study evaluating the effectiveness of automated visual assessment devices within cervical cancer screening of women living without and with HIV. Methods We developed a semi-structured questionnaire based on three Consolidated Framework for Implementation Research (CFIR) domains: intervention characteristics, inner setting, and process, in Cape Town, South Africa. Between December 1, 2020, and August 6, 2021, we evaluated two devices: a MobileODT handheld colposcope and a commercially available cell phone (Samsung A21ST). Colposcopists visually inspected cervical images for technical adequacy. Descriptive analyses were tabulated for quantitative variables, and narrative responses were summarized in the text. Results Two colposcopists described the devices as easy to operate, without data loss. The clinical workspace and gynecological workflow were modified to incorporate the devices and manage images. Providers believed either device would likely perform better than cytology under most circumstances unless the squamocolumnar junction (SCJ) was not visible, in which case cytology was expected to be better. Image quality (N = 75) from the MobileODT device and the cell phone was comparable in terms of achieving good focus (81% vs. 84%), visibility of the squamocolumnar junction (88% vs. 97%), avoiding occlusion (79% vs. 87%), and detecting lesions whose range includes the upper limit (63% vs. 53%), but differed in taking photographs free of glare (100% vs. 24%). Conclusion Novel application of the CFIR early in the conduct of the clinical study, including assessment of image quality, highlights real-world factors about intervention characteristics, the inner clinical setting, and workflow process that may affect both the clinical study findings and the ultimate pace of translation to clinical practice.
The application and augmentation of the CFIR in this study context highlighted adaptations needed for the framework to better measure factors relevant to implementing digital interventions.
Affiliation(s)
- Delivette Castor
- Division of Infectious Diseases, Vagelos College of Physicians and Surgeons, Columbia University Irving Medical Center, New York, NY, United States
- Department of Epidemiology, Mailman School of Public Health, Columbia University Irving Medical Center, New York, NY, United States
- Rakiya Saidu
- Department of Obstetrics and Gynaecology, University of Cape Town, Cape Town, South Africa
- Groote Schuur Hospital and South African Medical Research Council, Gynaecology Cancer Research Centre, University of Cape Town, Cape Town, South Africa
- Rosalind Boa
- Department of Obstetrics and Gynaecology, University of Cape Town, Cape Town, South Africa
- Groote Schuur Hospital and South African Medical Research Council, Gynaecology Cancer Research Centre, University of Cape Town, Cape Town, South Africa
- Nomonde Mbatani
- Department of Obstetrics and Gynaecology, University of Cape Town, Cape Town, South Africa
- Groote Schuur Hospital and South African Medical Research Council, Gynaecology Cancer Research Centre, University of Cape Town, Cape Town, South Africa
- Tinashe E M Mutsvangwa
- Division of Biomedical Engineering, Department of Human Biology, University of Cape Town, Cape Town, South Africa
- Jennifer Moodley
- Groote Schuur Hospital and South African Medical Research Council, Gynaecology Cancer Research Centre, University of Cape Town, Cape Town, South Africa
- Women's Health Research Unit, School of Public Health and Family Medicine, University of Cape Town, Cape Town, South Africa
- Lynette Denny
- Department of Obstetrics and Gynaecology, University of Cape Town, Cape Town, South Africa
- Groote Schuur Hospital and South African Medical Research Council, Gynaecology Cancer Research Centre, University of Cape Town, Cape Town, South Africa
- Louise Kuhn
- Department of Epidemiology, Mailman School of Public Health, Columbia University Irving Medical Center, New York, NY, United States
- Gertrude H. Sergievsky Center, Vagelos College of Physicians and Surgeons, Columbia University Irving Medical Center, New York, NY, United States
|