1. Mooghali M, Stroud AM, Yoo DW, Barry BA, Grimshaw AA, Ross JS, Zhu X, Miller JE. Trustworthy and ethical AI-enabled cardiovascular care: a rapid review. BMC Med Inform Decis Mak 2024;24:247. PMID: 39232725; PMCID: PMC11373417; DOI: 10.1186/s12911-024-02653-6.
Abstract
BACKGROUND Artificial intelligence (AI) is increasingly used for the prevention, diagnosis, monitoring, and treatment of cardiovascular diseases. Despite AI's potential to improve care, ethical concerns and mistrust of AI-enabled healthcare persist among the public and the medical community. Given the rapid, transformative growth of AI in cardiovascular care, we conducted a literature review to identify key ethical and trust barriers and facilitators from patients' and healthcare providers' perspectives, with the aim of informing practice guidelines and regulatory policies that support ethical and trustworthy use of AI in medicine. METHODS In this rapid literature review, we searched six bibliographic databases to identify publications discussing transparency, trust, or ethical concerns (outcomes of interest) associated with AI-based medical devices (interventions of interest) in the context of cardiovascular care, from the perspectives of patients, caregivers, or healthcare providers. The search was completed on May 24, 2022, and was not limited by date or study design. RESULTS After reviewing 7,925 papers from six databases and 3,603 papers identified through citation chasing, 145 articles were included. Key ethical concerns included privacy, security, or confidentiality issues (n = 59, 40.7%); risk of healthcare inequity or disparity (n = 36, 24.8%); risk of patient harm (n = 24, 16.6%); accountability and responsibility concerns (n = 19, 13.1%); problematic informed consent and potential loss of patient autonomy (n = 17, 11.7%); and issues related to data ownership (n = 11, 7.6%).
Major trust barriers included data privacy and security concerns, potential risk of patient harm, perceived lack of transparency about AI-enabled medical devices, concerns about AI replacing human aspects of care, concerns about prioritizing profits over patients' interests, and lack of robust evidence on the accuracy and limitations of AI-based medical devices. Ethical and trust facilitators included ensuring data privacy and data validation, conducting clinical trials in diverse cohorts, providing appropriate training and resources to patients and healthcare providers and improving their engagement in different phases of AI implementation, and establishing further regulatory oversight. CONCLUSION This review revealed key ethical concerns, as well as barriers to and facilitators of trust, in AI-enabled medical devices from patients' and healthcare providers' perspectives. Successful integration of AI into cardiovascular care requires mitigation strategies focused on enhancing regulatory oversight of the use of patient data and promoting transparency around the use of AI in patient care.
Affiliation(s)
- Maryam Mooghali
  - Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
  - Yale Center for Outcomes Research and Evaluation (CORE), 195 Church Street, New Haven, CT, 06510, USA
- Austin M Stroud
  - Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN, USA
- Dong Whi Yoo
  - School of Information, Kent State University, Kent, OH, USA
- Barbara A Barry
  - Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA
  - Division of Health Care Delivery Research, Mayo Clinic, Rochester, MN, USA
- Alyssa A Grimshaw
  - Harvey Cushing/John Hay Whitney Medical Library, Yale University, New Haven, CT, USA
- Joseph S Ross
  - Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
  - Department of Health Policy and Management, Yale School of Public Health, New Haven, CT, USA
- Xuan Zhu
  - Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA
- Jennifer E Miller
  - Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
2. Sachdeva M, Datchoua AM, Yakam VF, Kenfack B, Jonnalagedda-Cattin M, Thiran JP, Petignat P, Schmidt NC. Acceptability of artificial intelligence for cervical cancer screening in Dschang, Cameroon: a qualitative study on patient perspectives. Reprod Health 2024;21:92. PMID: 38937771; PMCID: PMC11212410; DOI: 10.1186/s12978-024-01828-8.
Abstract
BACKGROUND Cervical cancer is the fourth most frequent cancer among women, and 90% of cervical cancer-related deaths occur in low- and middle-income countries such as Cameroon. Visual inspection with acetic acid is often used to screen for cervical cancer in low-resource settings; however, its accuracy can be limited. To address this issue, the Swiss Federal Institute of Technology Lausanne and the University Hospitals of Geneva are collaborating to develop an automated smartphone-based image classifier that serves as a computer-aided diagnosis tool for cancerous lesions. The primary objective of this study is to explore the acceptability and perspectives of women in Dschang regarding a cervical cancer screening tool that relies on artificial intelligence. A secondary objective is to understand the form and type of information women would prefer to receive about this artificial intelligence-based screening tool. METHODS A qualitative methodology was employed to gain insight into the women's perspectives. Participants, aged between 30 and 49, were invited from both rural and urban regions, and semi-structured interviews were conducted using a pre-tested interview guide. The focus groups were divided on the basis of level of education as well as HPV status. The interviews were audio-recorded, transcribed, and coded using the ATLAS.ti software. RESULTS A total of 32 participants took part in the six focus groups, and 38% of participants had a primary level of education. The perspectives identified were classified using an adapted version of the Technology Acceptance Model. Key factors influencing the acceptability of artificial intelligence included privacy concerns, perceived usefulness, trust in the competence of providers, the accuracy of the tool, and the potential negative impact of smartphones.
CONCLUSION The results suggest that an artificial intelligence-based screening tool for cervical cancer is largely acceptable to the women in Dschang. By ensuring patient confidentiality and providing clear explanations, acceptance can be fostered in the community and uptake of cervical cancer screening can be improved. TRIAL REGISTRATION Ethical Cantonal Board of Geneva, Switzerland (CCER, N°2017-0110 and CER-amendment N°4) and Cameroonian National Ethics Committee for Human Health Research (N°2022/12/1518/CE/CNERSH/SP). ClinicalTrials.gov: NCT03757299.
Affiliation(s)
- Malika Sachdeva
  - Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Alida Moukam Datchoua
  - Department of Gynaecology and Obstetrics, Dschang Regional Annex Hospital, Dschang, Cameroon
  - Institute of Global Health, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Virginie Flore Yakam
  - Department of Gynaecology and Obstetrics, Dschang Regional Annex Hospital, Dschang, Cameroon
- Bruno Kenfack
  - Department of Gynaecology and Obstetrics, Dschang Regional Annex Hospital, Dschang, Cameroon
- Magali Jonnalagedda-Cattin
  - Signal Processing Laboratory LTS5, School of Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  - EssentialTech Centre, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Jean-Philippe Thiran
  - Signal Processing Laboratory LTS5, School of Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Patrick Petignat
  - Gynaecology Division, Department of Paediatrics, Gynaecology and Obstetrics, University Hospitals of Geneva, Geneva, Switzerland
- Nicole Christine Schmidt
  - Gynaecology Division, Department of Paediatrics, Gynaecology and Obstetrics, University Hospitals of Geneva, Geneva, Switzerland
  - Faculty of Social Science, Catholic University of Applied Science, Munich, Germany
3. Frost EK, Bosward R, Aquino YSJ, Braunack-Mayer A, Carter SM. Facilitating public involvement in research about healthcare AI: A scoping review of empirical methods. Int J Med Inform 2024;186:105417. PMID: 38564959; DOI: 10.1016/j.ijmedinf.2024.105417.
Abstract
OBJECTIVE With the recent increase in research into public views on healthcare artificial intelligence (HCAI), the objective of this review is to examine the methods of empirical studies on public views on HCAI. We map how studies provided participants with information about HCAI, and we examine the extent to which studies framed publics as active contributors to HCAI governance. MATERIALS AND METHODS We searched 5 academic databases and Google Advanced for empirical studies investigating public views on HCAI. We extracted information including study aims, research instruments, and recommendations. RESULTS Sixty-two studies were included. Most were quantitative (N = 42). Most (N = 47) reported providing participants with background information about HCAI. Despite this, studies often reported participants' lack of prior knowledge about HCAI as a limitation. Over three quarters (N = 48) of the studies made recommendations that envisaged public views being used to guide governance of AI. DISCUSSION Provision of background information is an important component of facilitating research with publics on HCAI. The high proportion of studies reporting participants' lack of knowledge about HCAI as a limitation reflects the need for more guidance on how information should be presented. A minority of studies adopted technocratic positions that construed publics as passive beneficiaries of AI, rather than as active stakeholders in HCAI design and implementation. CONCLUSION This review draws attention to how public roles in HCAI governance are constructed in empirical studies. To facilitate active participation, we recommend that research with publics on HCAI consider methodological designs that expose participants to diverse information sources.
Affiliation(s)
- Emma Kellie Frost
  - Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Rebecca Bosward
  - Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Yves Saint James Aquino
  - Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Annette Braunack-Mayer
  - Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Stacy M Carter
  - Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
4. Viberg Johansson J, Dembrower K, Strand F, Grauman Å. Women's perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study. BMJ Open 2024;14:e084014. PMID: 38355190; PMCID: PMC10868248; DOI: 10.1136/bmjopen-2024-084014.
Abstract
BACKGROUND Understanding women's perspectives can help create an effective and acceptable artificial intelligence (AI) implementation for triaging mammograms, ensuring a high proportion of screening-detected cancer. This study aimed to explore Swedish women's perceptions of and attitudes towards the use of AI in mammography. METHODS Semistructured interviews were conducted with 16 women recruited in the spring of 2023 at Capio S:t Görans Hospital, Sweden, during an ongoing clinical trial of AI in screening (ScreenTrustCAD, NCT04778670) with Philips equipment. The interview transcripts were analysed using inductive thematic content analysis. RESULTS In general, the women viewed AI as an excellent complementary tool to help radiologists in their decision-making rather than a complete replacement of their expertise. To trust AI, the women requested a thorough evaluation, transparency about AI usage in healthcare, and the involvement of a radiologist in the assessment. They would rather be called in more often for scans, and worry more as a result, than risk having a sign of cancer overlooked. They expressed substantial trust in the healthcare system if the implementation of AI were to become standard practice. CONCLUSION The findings suggest that the interviewed women generally hold a positive attitude towards the implementation of AI in mammography; nonetheless, they expect and demand more from AI than from a radiologist. Effective communication regarding the role and limitations of AI is crucial to ensure that patients understand the purpose and potential outcomes of AI-assisted healthcare.
Affiliation(s)
- Jennifer Viberg Johansson
  - Centre for Research Ethics & Bioethics (CRB), Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden
- Karin Dembrower
  - Capio S:t Görans Hospital, Stockholm, Sweden
  - Department of Oncology-Pathology, Karolinska Institute, Stockholm, Sweden
- Fredrik Strand
  - Department of Oncology-Pathology, Karolinska Institute, Stockholm, Sweden
- Åsa Grauman
  - Centre for Research Ethics & Bioethics (CRB), Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden
5. Racine N, Chow C, Hamwi L, Bucsea O, Cheng C, Du H, Fabrizi L, Jasim S, Johannsson L, Jones L, Laudiano-Dray MP, Meek J, Mistry N, Shah V, Stedman I, Wang X, Riddell RP. Health Care Professionals' and Parents' Perspectives on the Use of AI for Pain Monitoring in the Neonatal Intensive Care Unit: Multisite Qualitative Study. JMIR AI 2024;3:e51535. PMID: 38875686; PMCID: PMC11041412; DOI: 10.2196/51535.
Abstract
BACKGROUND The use of artificial intelligence (AI) for pain assessment has the potential to address historical challenges in infant pain assessment. There is a dearth of information on the perceived benefits and barriers to the implementation of AI for neonatal pain monitoring in the neonatal intensive care unit (NICU) from the perspective of health care professionals (HCPs) and parents. This qualitative analysis provides novel data obtained from 2 large tertiary care hospitals in Canada and the United Kingdom. OBJECTIVE The aim of the study is to explore the perspectives of HCPs and parents regarding the use of AI for pain assessment in the NICU. METHODS In total, 20 HCPs and 20 parents of preterm infants were recruited and consented to participate from February 2020 to October 2022 in interviews asking about AI use for pain assessment in the NICU, potential benefits of the technology, and potential barriers to use. RESULTS The 40 participants included 20 HCPs (17 women and 3 men) with an average of 19.4 (SD 10.69) years of experience in the NICU and 20 parents (mean age 34.4, SD 5.42 years) of preterm infants who were on average 43 (SD 30.34) days old. Six themes from the perspective of HCPs were identified: regular use of technology in the NICU, concerns with regard to AI integration, the potential to improve patient care, requirements for implementation, AI as a tool for pain assessment, and ethical considerations. Seven parent themes included the potential for improved care, increased parental distress, support for parents regarding AI, the impact on parent engagement, the importance of human care, requirements for integration, and the desire for choice in its use. A consistent theme was the importance of AI as a tool to inform clinical decision-making and not replace it. CONCLUSIONS HCPs and parents expressed generally positive sentiments about the potential use of AI for pain assessment in the NICU, with HCPs highlighting important ethical considerations. 
This study identifies critical methodological and ethical perspectives from key stakeholders that should be noted by any team considering the creation and implementation of AI for pain monitoring in the NICU.
Affiliation(s)
- Nicole Racine
  - School of Psychology, University of Ottawa, Children's Hospital of Eastern Ontario Research Institute, Ottawa, ON, Canada
- Cheryl Chow
  - Department of Psychology, York University, Toronto, ON, Canada
- Lojain Hamwi
  - Department of Psychology, York University, Toronto, ON, Canada
- Oana Bucsea
  - Department of Psychology, York University, Toronto, ON, Canada
- Carol Cheng
  - Department of Nursing, Mount Sinai Hospital, Toronto, ON, Canada
- Hang Du
  - Department of Mathematics and Statistics, York University, Toronto, ON, Canada
- Lorenzo Fabrizi
  - Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Sara Jasim
  - Department of Psychology, York University, Toronto, ON, Canada
- Laura Jones
  - Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Maria Pureza Laudiano-Dray
  - Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Judith Meek
  - Neonatal Care Unit, University College London Hospitals, London, United Kingdom
- Neelum Mistry
  - Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Vibhuti Shah
  - Department of Pediatrics, Mount Sinai Hospital, Toronto, ON, Canada
- Ian Stedman
  - School of Public Policy and Administration, York University, Toronto, ON, Canada
- Xiaogang Wang
  - Department of Mathematics and Statistics, York University, Toronto, ON, Canada
6. Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023;338:116357. PMID: 37949020; DOI: 10.1016/j.socscimed.2023.116357.
Abstract
INTRODUCTION Despite the proliferation of artificial intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal, and social questions. We systematically reviewed the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public, and health professionals to understand these issues from multiple viewpoints. METHODOLOGY A search for original research articles using qualitative, quantitative, and mixed methods, published between 1 Jan 2001 and 24 Aug 2021, was conducted on six bibliographic databases. Data were extracted and classified into themes representing views on: (i) knowledge and familiarity of AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the human-AI relationship. RESULTS The final search identified 7,490 records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public, and health professionals generally had a positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout from the need to acquire AI-related skills. While patients had mixed opinions on whether AI would cause job losses among healthcare workers, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinician involvement in the development of AI was emphasised.
To help implement AI successfully in healthcare, most participants envisioned that investment in training and education campaigns would be necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, concerns remained about sharing data with insurance or technology companies. One key question under this theme was who should be held accountable in the case of adverse events arising from the use of AI. CONCLUSIONS While attitudes and preferences toward AI use in healthcare remain broadly positive, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues and to look at how legislation and guidelines translate into practice to ensure fairness, accountability, transparency, and ethics in AI.
Affiliation(s)
- Vinh Vo
  - Centre for Health Economics, Monash University, Australia
- Gang Chen
  - Centre for Health Economics, Monash University, Australia
- Yves Saint James Aquino
  - Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Stacy M Carter
  - Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Quynh Nga Do
  - Department of Economics, Monash University, Australia
- Maame Esi Woode
  - Centre for Health Economics, Monash University, Australia
  - Monash Data Futures Research Institute, Australia
7. Miró Catalina Q, Femenia J, Fuster-Casanovas A, Marin-Gomez FX, Escalé-Besa A, Solé-Casals J, Vidal-Alaball J. Knowledge and Perception of the Use of AI and its Implementation in the Field of Radiology: Cross-Sectional Study. J Med Internet Res 2023;25:e50728. PMID: 37831495; PMCID: PMC10612005; DOI: 10.2196/50728.
Abstract
BACKGROUND Artificial intelligence (AI) has been developing for decades, but in recent years its use in healthcare has increased exponentially. There is now little doubt that these tools are transforming clinical practice, so it is important to know how the population perceives their implementation, in order to propose strategies for acceptance and adoption and to address or prevent problems arising from future applications. OBJECTIVE This study aims to describe the population's perception and knowledge of the use of AI as a health support tool and its application to radiology through a validated questionnaire, in order to develop strategies aimed at increasing acceptance of AI use, reducing possible resistance to change, and identifying possible sociodemographic factors related to perception and knowledge. METHODS A cross-sectional observational study was conducted using an anonymous, voluntary, validated questionnaire aimed at the entire population of Catalonia aged 18 years or older. The survey addresses 4 dimensions defined to describe users' perception of the use of AI in radiology: (1) "distrust and accountability," (2) "personal interaction," (3) "efficiency," and (4) "being informed," all with questions in a Likert scale format. Results closer to 5 indicate a negative perception of the use of AI, while results closer to 1 express a positive perception. Univariate and bivariate analyses were performed to assess possible associations between the 4 dimensions and sociodemographic characteristics. RESULTS A total of 379 users responded to the survey, with an average age of 43.9 (SD 17.52) years; 59.8% (n=226) identified as female, and 89.8% (n=335) of respondents indicated that they understood the concept of AI. Of the 4 dimensions analyzed, "distrust and accountability" obtained a mean score of 3.37 (SD 0.53), "personal interaction" 4.37 (SD 0.60), "efficiency" 3.06 (SD 0.73), and "being informed" 3.67 (SD 0.57). On the "distrust and accountability" dimension, women, people older than 65 years, the group with university studies, and those who indicated not understanding the concept of AI showed significantly more distrust in the use of AI. On the "being informed" dimension, the group with university studies rated access to information more positively, and those who indicated not understanding the concept of AI rated it more negatively. CONCLUSIONS The majority of the sample reported being familiar with the concept of AI, with varying degrees of acceptance of its implementation in radiology. The most contentious dimension is clearly "personal interaction," whereas "efficiency" shows the greatest acceptance and the best expectations for the implementation of AI in radiology.
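The scoring scheme this abstract describes (four named dimensions, each rated on a 1-5 Likert scale, with values closer to 5 indicating a more negative perception) can be sketched in a few lines. The dimension names come from the abstract; the response values below are made up for illustration and are not the study's data or the authors' analysis code.

```python
# Illustrative sketch only: per-dimension mean and SD for 1-5 Likert ratings,
# where higher values indicate a more negative perception of AI in radiology.
from statistics import mean, stdev

# Hypothetical responses: one 1-5 rating per respondent, per dimension.
responses = {
    "distrust and accountability": [3, 4, 3, 4, 3],
    "personal interaction": [5, 4, 4, 5, 4],
    "efficiency": [3, 3, 2, 4, 3],
    "being informed": [4, 3, 4, 4, 3],
}

for dimension, ratings in responses.items():
    # Sample standard deviation, as commonly reported alongside survey means.
    print(f"{dimension}: mean={mean(ratings):.2f} (SD {stdev(ratings):.2f})")
```

With this convention, a dimension whose mean sits above the scale midpoint of 3 (as "personal interaction" does in the study) reads as a net-negative perception.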
Affiliation(s)
- Queralt Miró Catalina
  - Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
  - Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
- Joaquim Femenia
  - Faculty of Medicine, University of Vic-Central University of Catalonia, Vic, Spain
- Aïna Fuster-Casanovas
  - Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
- Francesc X Marin-Gomez
  - Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
  - Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
- Anna Escalé-Besa
  - Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
  - Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
  - Faculty of Medicine, University of Vic-Central University of Catalonia, Vic, Spain
- Jordi Solé-Casals
  - Data and Signal Processing Group, Faculty of Science, Technology and Engineering, University of Vic-Central University of Catalonia, Vic, Spain
  - Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
- Josep Vidal-Alaball
  - Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
  - Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
  - Faculty of Medicine, University of Vic-Central University of Catalonia, Vic, Spain
8. Clermont G. The Learning Electronic Health Record. Crit Care Clin 2023;39:689-700. PMID: 37704334; DOI: 10.1016/j.ccc.2023.03.004.
Abstract
Electronic medical records (EMRs) constitute the electronic version of all medical information included in a patient's paper chart. Electronic health record (EHR) technology has expanded massively in developed countries, and to a lesser extent in underresourced countries, over the last 2 decades. We review the factors leading to this expansion; how the emergence of EHRs is affecting health-care stakeholders; some of the growing pains associated with EHRs, with particular emphasis on the delivery of care to the critically ill; and ongoing developments on the path to improving the quality of research, health-care delivery, and stakeholder satisfaction.
Affiliation(s)
- Gilles Clermont
  - VA Pittsburgh Medical Center, 1054 Aliquippa Street, Pittsburgh, PA 15104, USA
  - Critical Care Medicine, University of Pittsburgh, 200 Lothrop Street, Pittsburgh, PA 15061, USA
9. Gould DJ, Dowsey MM, Glanville-Hearst M, Spelman T, Bailey JA, Choong PFM, Bunzli S. Patients' Views on AI for Risk Prediction in Shared Decision-Making for Knee Replacement Surgery: Qualitative Interview Study. J Med Internet Res 2023;25:e43632. PMID: 37721797; PMCID: PMC10546266; DOI: 10.2196/43632.
Abstract
BACKGROUND The use of artificial intelligence (AI) in decision-making around knee replacement surgery is increasing, and this technology holds promise to improve the prediction of patient outcomes. Ambiguity surrounds the definition of AI, and there are mixed views on its application in clinical settings. OBJECTIVE In this study, we aimed to explore the understanding and attitudes of patients who underwent knee replacement surgery regarding AI in the context of risk prediction for shared clinical decision-making. METHODS This qualitative study involved patients who underwent knee replacement surgery at a tertiary referral center for joint replacement surgery. The participants were selected based on their age and sex. Semistructured interviews explored the participants' understanding of AI and their opinions on its use in shared clinical decision-making. Data collection and reflexive thematic analyses were conducted concurrently. Recruitment continued until thematic saturation was achieved. RESULTS Thematic saturation was achieved with 19 interviews and confirmed with 1 additional interview, resulting in 20 participants being interviewed (female participants: n=11, 55%; male participants: n=9, 45%; median age: 66 years). A total of 11 (55%) participants had a substantial postoperative complication. Three themes captured the participants' understanding of AI and their perceptions of its use in shared clinical decision-making. The theme Expectations captured the participants' views of themselves as individuals with the right to self-determination as they sought therapeutic solutions tailored to their circumstances, needs, and desires, including whether to use AI at all. The theme Empowerment highlighted the potential of AI to enable patients to develop realistic expectations and equip them with personalized risk information to discuss in shared decision-making conversations with the surgeon. 
The theme Partnership captured the importance of symbiosis between AI and clinicians because AI has varied levels of interpretability and understanding of human emotions and empathy. CONCLUSIONS Patients who underwent knee replacement surgery in this study had varied levels of familiarity with AI and diverse conceptualizations of its definitions and capabilities. Educating patients about AI through nontechnical explanations and illustrative scenarios could help inform their decision to use it for risk prediction in the shared decision-making process with their surgeon. These findings could be used in the process of developing a questionnaire to ascertain the views of patients undergoing knee replacement surgery on the acceptability of AI in shared clinical decision-making. Future work could investigate the accuracy of this patient group's understanding of AI, beyond their familiarity with it, and how this influences their acceptance of its use. Surgeons may play a key role in finding a place for AI in the clinical setting as the uptake of this technology in health care continues to grow.
Affiliation(s)
- Daniel J Gould
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Michelle M Dowsey
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
- Tim Spelman
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- James A Bailey
- School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
- Peter F M Choong
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
- Samantha Bunzli
- School of Health Sciences and Social Work, Griffith University, Brisbane, Australia

10
Ibba S, Tancredi C, Fantesini A, Cellina M, Presta R, Montanari R, Papa S, Alì M. How do patients perceive the AI-radiologists interaction? Results of a survey on 2119 responders. Eur J Radiol 2023; 165:110917. [PMID: 37327548] [DOI: 10.1016/j.ejrad.2023.110917]
Abstract
PURPOSE In this study, we investigated how patients perceive the interaction between artificial intelligence (AI) and radiologists by means of a survey. METHOD We created a survey focused on the application of AI in radiology, consisting of 20 questions distributed across three sections. Only completed questionnaires were considered for analysis. RESULTS A total of 2119 subjects completed the survey. Among them, 1216 respondents were over 60 years old, showing interest in AI even though they were not digital natives. Although >45% of the respondents reported a high level of education, only 3% said they were AI experts. Overall, 87% of respondents favored using AI to support diagnosis but would like to be informed of its use. Only 10% would consult another specialist if their doctor used AI support. Most respondents (76%) said they would not feel comfortable if the diagnosis were made by the AI alone, highlighting the importance of the physician's role in the emotional management of the patient. Finally, 36% of respondents were willing to discuss the topic further in a focus group. CONCLUSION Patients' perception of the use of AI in radiology was positive, although still strictly linked to the supervision of the radiologist. Respondents showed interest and willingness to learn more about AI in the medical field, confirming that patients' confidence in AI technology and its acceptance are central to its widespread use in clinical practice.
Affiliation(s)
- Simona Ibba
- Unit of Diagnostic Imaging and Stereotactic Radiosurgery, CDI Centro Diagnostico Italiano S.p.A., Via Simone Saint Bon 20, 20147 Milan, Italy.
- Chiara Tancredi
- Suor Orsola Benincasa University, Corso Vittorio Emanuele 292, 80135 Naples, Italy.
- Arianna Fantesini
- Suor Orsola Benincasa University, Corso Vittorio Emanuele 292, 80135 Naples, Italy; RE:LAB s.r.l., Via Tamburini, 5, 42122 Reggio Emilia, Italy.
- Michaela Cellina
- Radiology Department, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy.
- Roberta Presta
- Suor Orsola Benincasa University, Corso Vittorio Emanuele 292, 80135 Naples, Italy.
- Roberto Montanari
- Suor Orsola Benincasa University, Corso Vittorio Emanuele 292, 80135 Naples, Italy; RE:LAB s.r.l., Via Tamburini, 5, 42122 Reggio Emilia, Italy.
- Sergio Papa
- Unit of Diagnostic Imaging and Stereotactic Radiosurgery, CDI Centro Diagnostico Italiano S.p.A., Via Simone Saint Bon 20, 20147 Milan, Italy.
- Marco Alì
- Unit of Diagnostic Imaging and Stereotactic Radiosurgery, CDI Centro Diagnostico Italiano S.p.A., Via Simone Saint Bon 20, 20147 Milan, Italy; Bracco Imaging S.p.A., Via Egidio Folli, 50, 20134 Milan, Italy.

11
Thai K, Tsiandoulas KH, Stephenson EA, Menna-Dack D, Zlotnik Shaul R, Anderson JA, Shinewald AR, Ampofo A, McCradden MD. Perspectives of Youths on the Ethical Use of Artificial Intelligence in Health Care Research and Clinical Care. JAMA Netw Open 2023; 6:e2310659. [PMID: 37126349] [PMCID: PMC10152306] [DOI: 10.1001/jamanetworkopen.2023.10659]
Abstract
Importance Understanding the views and values of patients is of substantial importance to developing the ethical parameters of artificial intelligence (AI) use in medicine. Thus far, there is limited study on the views of children and youths. Their perspectives contribute meaningfully to the integration of AI in medicine. Objective To explore the moral attitudes and views of children and youths regarding research and clinical care involving health AI at the point of care. Design, Setting, and Participants This qualitative study recruited participants younger than 18 years during a 1-year period (October 2021 to March 2022) at a large urban pediatric hospital. A total of 44 individuals who were receiving or had previously received care at a hospital or rehabilitation clinic contacted the research team, but 15 were found to be ineligible. Of the 29 who consented to participate, 1 was lost to follow-up, resulting in 28 participants who completed the interview. Exposures Participants were interviewed using vignettes on 3 main themes: (1) health data research, (2) clinical AI trials, and (3) clinical use of AI. Main Outcomes and Measures Thematic description of values surrounding health data research, interventional AI research, and clinical use of AI. Results The 28 participants included 6 children (ages, 10-12 years) and 22 youths (ages, 13-17 years) (16 female, 10 male, and 3 trans/nonbinary/gender diverse). Mean (SD) age was 15 (2) years. Participants were highly engaged and quite knowledgeable about AI. They expressed a positive view of research intended to help others and had strong feelings about the uses of their health data for AI. Participants expressed appreciation for the vulnerability of potential participants in interventional AI trials and reinforced the importance of respect for their preferences regardless of their decisional capacity. 
A strong theme for the prospective use of clinical AI was the desire to maintain bedside interaction between the patient and their physician. Conclusions and Relevance In this study, children and youths reported generally positive views of AI, expressing strong interest and advocacy for their involvement in AI research and inclusion of their voices for shared decision-making with AI in clinical care. These findings suggest the need for more engagement of children and youths in health care AI research and integration.
Affiliation(s)
- Kelly Thai
- Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
- Genetics & Genome Biology, Peter Gilgan Centre for Research & Learning, Toronto, Ontario, Canada
- Kate H Tsiandoulas
- Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
- Elizabeth A Stephenson
- Labatt Family Heart Centre, The Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Paediatrics, University of Toronto, Toronto, Ontario, Canada
- Dolly Menna-Dack
- Holland Bloorview Kids Rehabilitation Hospital, Toronto, Ontario, Canada
- Randi Zlotnik Shaul
- Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Paediatrics, University of Toronto, Toronto, Ontario, Canada
- James A Anderson
- Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
- Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Melissa D McCradden
- Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
- Genetics & Genome Biology, Peter Gilgan Centre for Research & Learning, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada

12
Kelly BS, Kirwan A, Quinn MS, Kelly AM, Mathur P, Lawlor A, Killeen RP. The ethical matrix as a method for involving people living with disease and the wider public (PPI) in near-term artificial intelligence research. Radiography (Lond) 2023; 29 Suppl 1:S103-S111. [PMID: 37062673] [DOI: 10.1016/j.radi.2023.03.009]
Abstract
INTRODUCTION The rapid pace of research in the field of artificial intelligence (AI) in medicine carries associated risks for near-term AI. Ethical considerations of the use of AI in medicine remain a subject of much debate. Concurrently, the involvement of people living with disease and the public (PPI) in research is becoming mandatory in the EU and UK. The goal of this research was to elucidate the values important to the relevant stakeholders (people with MS, radiologists, neurologists, registered healthcare practitioners, and computer scientists) concerning AI in radiology, and to synthesize these in an ethical matrix. METHODS An ethical matrix workshop was co-designed with a patient expert. The workshop yielded a survey, which was disseminated to the professional societies of the relevant stakeholders. Quantitative data were analysed using the Pingouin 0.53 Python package. Qualitative data were examined with word frequency analysis and analysed for themes using grounded theory together with a patient expert. RESULTS 184 participants were recruited (54, 60, 17, 12, and 41 per stakeholder group, respectively). There were significant (p < 0.00001) differences in age, gender, and ethnicity between groups. Key themes emerging from our results were the importance of fast and accurate results, of explanations over model performance, and of maintaining personal connections and choice. These themes were used to construct the ethical matrix. CONCLUSION The ethical matrix is a useful tool for PPI and stakeholder engagement, with particular advantages for near-term AI in the pandemic era. IMPLICATIONS FOR PRACTICE We have produced an ethical matrix that allows for the inclusion of stakeholder opinion in medical AI research design.
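The word frequency analysis this abstract mentions for its qualitative data can be sketched in a few lines. The responses below are invented for illustration (not study data), and Python's standard-library `Counter` stands in for whatever tooling the authors actually used:

```python
from collections import Counter
import re

# Hypothetical free-text survey responses (illustrative only, not study data)
responses = [
    "Fast and accurate results matter most to me",
    "I want an explanation, not just accurate model performance",
    "Keep the personal connection with my own radiologist",
]

# Tokenise, lowercase, and drop very short words before counting
tokens = [
    word
    for response in responses
    for word in re.findall(r"[a-z']+", response.lower())
    if len(word) > 3
]
word_freq = Counter(tokens)

# The most frequent terms hint at candidate themes for closer human coding
print(word_freq.most_common(3))
```

A frequency table like this is only a starting point; in the study itself the themes were derived with grounded theory alongside a patient expert.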
Affiliation(s)
- B S Kelly
- School of Medicine, UCD, Belfield, Dublin 4, Ireland; Department of Radiology, St Vincent's University Hospital, Dublin 4, Ireland; School of Computer Science and Insight Centre, UCD Belfield, Dublin 4, Ireland.
- A Kirwan
- Multiple Sclerosis Ireland National Office, 80 Northumberland Road, Dublin 4, Ireland
- M S Quinn
- School of Computer Science and Insight Centre, UCD Belfield, Dublin 4, Ireland
- A M Kelly
- School of Education, Trinity College Dublin, Dublin 2, Ireland
- P Mathur
- Department of Radiology, St Vincent's University Hospital, Dublin 4, Ireland
- A Lawlor
- Department of Radiology, St Vincent's University Hospital, Dublin 4, Ireland
- R P Killeen
- School of Medicine, UCD, Belfield, Dublin 4, Ireland

13
Cumyn A, Ménard JF, Barton A, Dault R, Lévesque F, Ethier JF. Patients' and Members of the Public's Wishes Regarding Transparency in the Context of Secondary Use of Health Data: Scoping Review. J Med Internet Res 2023; 25:e45002. [PMID: 37052967] [PMCID: PMC10141314] [DOI: 10.2196/45002]
Abstract
BACKGROUND Secondary use of health data has reached unequaled potential to improve health systems governance, knowledge, and clinical care. Transparency regarding this secondary use is frequently cited as necessary to address deficits in trust and conditional support and to increase patient awareness. OBJECTIVE We aimed to review the current published literature to identify different stakeholders' perspectives and recommendations on what information patients and members of the public want to learn about the secondary use of health data for research purposes and how and in which situations. METHODS Using PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines, we conducted a scoping review using Medline, CINAHL, PsycINFO, Scopus, Cochrane Library, and PubMed databases to locate a broad range of studies published in English or French until November 2022. We included articles reporting a stakeholder's perspective or recommendations of what information patients and members of the public want to learn about the secondary use of health data for research purposes and how or in which situations. Data were collected and analyzed with an iterative thematic approach using NVivo. RESULTS Overall, 178 articles were included in this scoping review. The type of information can be divided into generic and specific content. Generic content includes information on governance and regulatory frameworks, technical aspects, and scientific aims. Specific content includes updates on the use of one's data, return of results from individual tests, information on global results, information on data sharing, and how to access one's data. Recommendations on how to communicate the information focused on frequency, use of various supports, formats, and wording. 
Methods for communication generally favored broad approaches such as nationwide publicity campaigns, mainstream and social media for generic content, and mixed approaches for specific content including websites, patient portals, and face-to-face encounters. Content should be tailored to the individual as much as possible with regard to length, avoidance of technical terms, cultural competence, and level of detail. Finally, the review outlined 4 major situations where communication was deemed necessary: before a new use of data, when new test results became available, when global research results were released, and in the advent of a breach in confidentiality. CONCLUSIONS This review highlights how different types of information and approaches to communication efforts may serve as the basis for achieving greater transparency. Governing bodies could use the results: to elaborate or evaluate strategies to educate on the potential benefits; to provide some knowledge and control over data use as a form of reciprocity; and as a condition to engage citizens and build and maintain trust. Future work is needed to assess which strategies achieve the greatest outreach while striking a balance between meeting information needs and use of resources.
Affiliation(s)
- Annabelle Cumyn
- Département de médecine, Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Jean-Frédéric Ménard
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Faculté de droit, Université de Sherbrooke, Sherbrooke, QC, Canada
- Adrien Barton
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Institut de recherche en informatique de Toulouse, Toulouse, France
- Roxanne Dault
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Frédérique Lévesque
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Jean-François Ethier
- Département de médecine, Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada

14
Jeyakumar T, Younus S, Zhang M, Clare M, Charow R, Karsan I, Dhalla A, Al-Mouaswas D, Scandiffio J, Aling J, Salhia M, Lalani N, Overholt S, Wiljer D. Preparing for an Artificial Intelligence-Enabled Future: Patient Perspectives on Engagement and Health Care Professional Training for Adopting Artificial Intelligence Technologies in Health Care Settings. JMIR AI 2023; 2:e40973. [PMID: 38875561] [PMCID: PMC11041489] [DOI: 10.2196/40973]
Abstract
BACKGROUND As new technologies emerge, there is a significant shift in the way care is delivered on a global scale. Artificial intelligence (AI) technologies have been rapidly and inexorably used to optimize patient outcomes, reduce health system costs, improve workflow efficiency, and enhance population health. Despite the widespread adoption of AI technologies, the literature on patient engagement and patients' perspectives on how AI will affect clinical care is scarce. Minimal patient engagement can limit the optimization of these novel technologies and contribute to suboptimal use in care settings. OBJECTIVE We aimed to explore patients' views on what skills they believe health care professionals should have in preparation for this AI-enabled future and how we can better engage patients when adopting and deploying AI technologies in health care settings. METHODS Semistructured interviews were conducted from August 2020 to December 2021 with 12 individuals who had been a patient in a Canadian health care setting. Interviews were conducted until thematic saturation occurred. The thematic analysis approach outlined by Braun and Clarke was used to inductively analyze the data and identify overarching themes. RESULTS Among the 12 patients interviewed, 8 (67%) were from urban settings and 4 (33%) were from rural settings. Half of the participants (n=6, 50%) were very comfortable with technology, and a majority (n=7, 58%) were somewhat familiar with AI. In total, 3 themes emerged: cultivating patients' trust, fostering patient engagement, and establishing data governance and validation of AI technologies. CONCLUSIONS With the rapid surge of AI solutions, there is a critical need to understand patient values in advancing the quality of care and contributing to an equitable health system. Our study demonstrated that health care professionals play a synergetic role in the future of AI and digital technologies. Patient engagement is vital in addressing underlying health inequities and fostering an optimal care experience. Future research is warranted to understand and capture the diverse perspectives of patients from various racial, ethnic, and socioeconomic backgrounds.
Affiliation(s)
- Megan Clare
- Michener Institute of Education, University Health Network, Toronto, ON, Canada
- Rebecca Charow
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Inaara Karsan
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Dalia Al-Mouaswas
- Michener Institute of Education, University Health Network, Toronto, ON, Canada
- Justin Aling
- Patient Partner Program, University Health Network, Toronto, ON, Canada
- Mohammad Salhia
- Michener Institute of Education, University Health Network, Toronto, ON, Canada
- Scott Overholt
- Patient Partner Program, University Health Network, Toronto, ON, Canada
- David Wiljer
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Office of Education, Centre for Addiction and Mental Health, Toronto, ON, Canada

15
Wu C, Xu H, Bai D, Chen X, Gao J, Jiang X. Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis. BMJ Open 2023; 13:e066322. [PMID: 36599634] [PMCID: PMC9815015] [DOI: 10.1136/bmjopen-2022-066322]
Abstract
OBJECTIVES Medical artificial intelligence (AI) has been widely applied in the clinical field due to its convenience and innovation. However, several policy and regulatory issues, such as credibility, sharing of responsibility, and ethics, have raised concerns about the use of AI. It is therefore necessary to understand the general public's views on medical AI. Here, a meta-synthesis was conducted to analyse and summarise the public's understanding of the application of AI in the healthcare field and to provide recommendations for the future use and management of AI in medical practice. DESIGN This was a meta-synthesis of qualitative studies. METHOD A search was performed on the following databases to identify studies published in English and Chinese: MEDLINE, CINAHL, Web of Science, Cochrane Library, Embase, PsycINFO, CNKI, Wanfang and VIP. The search covered the period from database inception to 25 December 2021. The JBI meta-aggregation approach was used to summarise findings from qualitative studies, focusing on the public's perception of the application of AI in healthcare. RESULTS Of the 5128 studies screened, 12 met the inclusion criteria and were incorporated into the analysis. Three synthesised findings formed the basis of our conclusions: the advantages of medical AI from the public's perspective, ethical and legal concerns about medical AI from the public's perspective, and public suggestions on the application of AI in the medical field. CONCLUSION Results showed that the public acknowledges the unique advantages and convenience of medical AI. Meanwhile, several concerns about the application of medical AI were observed, most of which involve ethical and legal issues. The standard application and reasonable supervision of medical AI are key to ensuring its effective utilisation. Based on the public's perspective, this analysis provides insights and suggestions for health managers on how to implement and apply medical AI smoothly, while ensuring safety in healthcare practice. PROSPERO REGISTRATION NUMBER CRD42022315033.
Affiliation(s)
- Chenxi Wu
- West China School of Nursing/West China Hospital, Sichuan University, Chengdu, Sichuan, China
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Huiqiong Xu
- West China School of Nursing, Sichuan University/Abdominal Oncology Ward, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, People's Republic of China
- Dingxi Bai
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Xinyu Chen
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Jing Gao
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Xiaolian Jiang
- West China School of Nursing/West China Hospital, Sichuan University, Chengdu, Sichuan, China

16
Partnering with children and youth to advance artificial intelligence in healthcare. Pediatr Res 2023; 93:284-286. [PMID: 35681090] [DOI: 10.1038/s41390-022-02139-z]
17
Eysenbach G, Leung T, Schneider G, Heinze O. Exploring Stakeholder Requirements to Enable the Research and Development of Artificial Intelligence Algorithms in a Hospital-Based Generic Infrastructure: Protocol for a Multistep Mixed Methods Study. JMIR Res Protoc 2022; 11:e42208. [PMID: 36525300] [PMCID: PMC9804098] [DOI: 10.2196/42208]
Abstract
BACKGROUND In recent years, research and developments in advancing artificial intelligence (AI) in health care and medicine have increased. High expectations surround the use of AI technologies, such as improvements for diagnosis and increases in the quality of care with reductions in health care costs. The successful development and testing of new AI algorithms require large amounts of high-quality data. Academic hospitals could provide the data needed for AI development, but granting legal, controlled, and regulated access to these data for developers and researchers is difficult. Therefore, the German Federal Ministry of Health supports the Protected Artificial Intelligence Innovation Environment for Patient-Oriented Digital Health Solutions for Developing, Testing, and Evidence-Based Evaluation of Clinical Value (pAItient) project, aiming to install the AI Innovation Environment at the Heidelberg University Hospital in Germany. The AI Innovation Environment was designed as a proof-of-concept extension of the already existing Medical Data Integration Center. It will establish a process to support every step of developing and testing AI-based technologies. OBJECTIVE The first part of the pAItient project, as presented in this research protocol, aims to explore stakeholders' requirements for developing AI in partnership with an academic hospital and granting AI experts access to anonymized personal health data. METHODS We planned a multistep mixed methods approach. In the first step, researchers and employees from stakeholder organizations were invited to participate in semistructured interviews. In the following step, questionnaires were developed based on the participants' answers and distributed among the stakeholders' organizations to quantify qualitative findings and discover important aspects that were not mentioned by the interviewees. The questionnaires will be analyzed descriptively. In addition, patients and physicians were interviewed as well. 
No survey questionnaires were developed for this second group of participants. The study was approved by the Ethics Committee of the Heidelberg University Hospital (approval number: S-241/2021). RESULTS Data collection concluded in summer 2022. Data analysis is planned to start in fall 2022. We plan to publish the results in winter 2022 to 2023. CONCLUSIONS The results of our study will help in shaping the AI Innovation Environment at our academic hospital according to stakeholder requirements. With this approach, in turn, we aim to shape an AI environment that is effective and is deemed acceptable by all parties. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) DERR1-10.2196/42208.
Affiliation(s)
- Gerd Schneider
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Oliver Heinze
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany

18
Götzl C, Hiller S, Rauschenberg C, Schick A, Fechtelpeter J, Fischer Abaigar U, Koppe G, Durstewitz D, Reininghaus U, Krumm S. Artificial intelligence-informed mobile mental health apps for young people: a mixed-methods approach on users' and stakeholders' perspectives. Child Adolesc Psychiatry Ment Health 2022; 16:86. [PMID: 36397097] [PMCID: PMC9672578] [DOI: 10.1186/s13034-022-00522-6]
Abstract
BACKGROUND Novel approaches in mobile mental health (mHealth) apps that make use of Artificial Intelligence (AI), Ecological Momentary Assessments, and Ecological Momentary Interventions have the potential to support young people in the achievement of mental health and wellbeing goals. However, little is known about the perspectives of young people and mental health experts on this rapidly advancing technology. This study aims to investigate the subjective needs, attitudes, and preferences of key stakeholders towards an AI-informed mHealth app, including young people and experts on mHealth promotion and prevention in youth. METHODS We used a convergent parallel mixed-methods study design. Two semi-structured online focus groups (n = 8) and expert interviews (n = 5) were conducted to explore users' and stakeholders' perspectives. Furthermore, a representative online survey was completed by young people (n = 666) to investigate attitudes, current use, and preferences towards apps for mental health promotion and prevention. RESULTS Survey results show that more than two-thirds of young people have experience with mHealth apps, and 60% make regular use of 1-2 apps. A minority (17%) reported feeling negative about the application of AI in general, and 19% were negative about the embedding of AI in mHealth apps. This is in line with the qualitative findings, where young people displayed rather positive attitudes towards AI and its integration into mHealth apps. Participants reported pragmatic attitudes towards data sharing and safety practices, implying openness to sharing data if it adds value for users and if the data request is not too intimate; however, they demanded transparency of data usage and control over personalization. Experts perceived AI-informed mHealth apps as a complementary solution to on-site delivered interventions in future health promotion among young people. Experts also emphasized opportunities with regard to low-threshold access through the use of smartphones, and the chance to reach young people in risk situations. CONCLUSIONS The findings of this exploratory study highlight the importance of further participatory development of training components prior to the implementation of a digital mHealth training in the routine practice of mental health promotion and prevention. Our results may help to guide developments based on stakeholders' first recommendations for an AI-informed mHealth app.
Affiliation(s)
- Christian Götzl
- Department of Psychiatry II, University of Ulm and BKH Guenzburg, Lindenallee 2, Guenzburg, 89312 Ulm, Germany
- Department of Forensic Psychiatry and Psychotherapy, University of Ulm and BKH Guenzburg, Ulm, Germany
- Selina Hiller
- Department of Psychiatry II, University of Ulm and BKH Guenzburg, Lindenallee 2, Guenzburg, 89312 Ulm, Germany
- Christian Rauschenberg
- Department of Public Mental Health, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Anita Schick
- Department of Public Mental Health, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Janik Fechtelpeter
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Unai Fischer Abaigar
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Georgia Koppe
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Daniel Durstewitz
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Ulrich Reininghaus
- Department of Public Mental Health, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Centre for Epidemiology and Public Health, Health Service and Population Research Department, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- ESRC Centre for Society and Mental Health, King's College London, London, UK
- Silvia Krumm
- Department of Psychiatry II, University of Ulm and BKH Guenzburg, Lindenallee 2, Guenzburg, 89312 Ulm, Germany
19
Weinert L, Klass M, Schneider G, Heinze O. Exploring Stakeholder Requirements to Enable Research and Development of AI Algorithms in a Hospital-Based Generic Infrastructure: Results of a Multistep Mixed Methods Study. JMIR Form Res 2023; 7:e43958. [PMID: 37071450 PMCID: PMC10155093 DOI: 10.2196/43958]
Abstract
BACKGROUND Legal, controlled, and regulated access to high-quality data from academic hospitals currently poses a barrier to the development and testing of new artificial intelligence (AI) algorithms. To overcome this barrier, the German Federal Ministry of Health supports the "pAItient" (Protected Artificial Intelligence Innovation Environment for Patient Oriented Digital Health Solutions for developing, testing and evidence-based evaluation of clinical value) project, with the goal of establishing an AI Innovation Environment at Heidelberg University Hospital, Germany. It is designed as a proof-of-concept extension to the preexisting Medical Data Integration Center. OBJECTIVE The first part of the pAItient project aims to explore stakeholders' requirements for developing AI in partnership with an academic hospital and granting AI experts access to anonymized personal health data. METHODS We designed a multistep mixed methods approach. First, researchers and employees from stakeholder organizations were invited to participate in semistructured interviews. In the following step, questionnaires were developed based on the participants' answers and distributed among the stakeholders' organizations. In addition, patients and physicians were interviewed. RESULTS The identified requirements covered a wide range and were sometimes conflicting. Relevant patient requirements included adequate provision of the information necessary for data use, a clear medical objective for the research and development activities, trustworthiness of the organization collecting the patient data, and data that cannot be reidentified. Requirements of AI researchers and developers encompassed contact with clinical users, an acceptable user interface (UI) for shared data platforms, a stable connection to the planned infrastructure, relevant use cases, and assistance in dealing with data privacy regulations.
In the next step, a requirements model was developed, which depicts the identified requirements in different layers. This model will be used to communicate stakeholder requirements within the pAItient project consortium. CONCLUSIONS The study led to the identification of necessary requirements for the development, testing, and validation of AI applications within a hospital-based generic infrastructure. A requirements model was developed, which will inform the next steps in the development of an AI innovation environment at our institution. Results from our study replicate previous findings from other contexts and will add to the emerging discussion on the use of routine medical data for the development of AI applications. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) RR2-10.2196/42208.
Affiliation(s)
- Lina Weinert
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Section for Translational Health Economics, Department for Conservative Dentistry, Heidelberg University Hospital, Heidelberg, Germany
- Maximilian Klass
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Gerd Schneider
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Oliver Heinze
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
20
Anderson JA, McCradden MD, Stephenson EA. Response to Open Peer Commentaries: On Social Harms, Big Tech, and Institutional Accountability. Am J Bioeth 2022; 22:W6-W8. [PMID: 35593914 DOI: 10.1080/15265161.2022.2075977]
Affiliation(s)
- Melissa D McCradden
- The Hospital for Sick Children
- Peter Gilgan Centre for Research and Learning
- Dalla Lana School of Public Health
21
McCradden MD, Anderson JA, Stephenson EA, Drysdale E, Erdman L, Goldenberg A, Zlotnik Shaul R. A Research Ethics Framework for the Clinical Translation of Healthcare Machine Learning. Am J Bioeth 2022; 22:8-22. [PMID: 35048782 DOI: 10.1080/15265161.2021.2013977]
Abstract
The application of artificial intelligence and machine learning (ML) technologies in healthcare has immense potential to improve the care of patients. While some emerging practices surrounding responsible ML, as well as regulatory frameworks, exist, the traditional role of research ethics oversight has been relatively unexplored with regard to its relevance for clinical ML. In this paper, we provide a comprehensive research ethics framework that can apply to the systematic inquiry of ML research across its development cycle. The pathway consists of three stages: (1) exploratory, hypothesis-generating data access; (2) silent period evaluation; (3) prospective clinical evaluation. We connect each stage to its literature and ethical justification and suggest adaptations to traditional paradigms to suit ML while maintaining ethical rigor and the protection of individuals. This pathway can accommodate a multitude of research designs, from observational studies to controlled trials, and the stages can apply individually to a variety of ML applications.
Affiliation(s)
- Melissa D McCradden
- Department of Bioethics, The Hospital for Sick Children
- Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning
- Division of Clinical & Public Health, Dalla Lana School of Public Health
- James A Anderson
- Department of Bioethics, The Hospital for Sick Children
- Institute of Health Policy, Management and Evaluation, University of Toronto
- Elizabeth A Stephenson
- Labatt Family Heart Centre, The Hospital for Sick Children
- Department of Pediatrics, The Hospital for Sick Children
- Erik Drysdale
- Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning
- Lauren Erdman
- Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning
- Vector Institute
- Department of Computer Science, University of Toronto
- Anna Goldenberg
- Department of Bioethics, The Hospital for Sick Children
- Vector Institute
- Department of Computer Science, University of Toronto
- CIFAR
- Randi Zlotnik Shaul
- Department of Bioethics, The Hospital for Sick Children
- Department of Pediatrics, The Hospital for Sick Children
- Child Health Evaluative Sciences, The Hospital for Sick Children
22
Romero RA, Young SD. Public perceptions and implementation considerations on the use of artificial intelligence in health. J Eval Clin Pract 2022; 28:75-78. [PMID: 33977613 DOI: 10.1111/jep.13580]
Affiliation(s)
- Romina A Romero
- Department of Emergency Medicine, University of California, Irvine, Irvine, CA, USA
- Sean D Young
- Department of Emergency Medicine, University of California, Irvine, Irvine, CA, USA
- University of California Institute for Prediction Technology, Department of Informatics, University of California, Irvine, Irvine, CA, USA
23
Chew HSJ, Achananuparp P. Perceptions and Needs of Artificial Intelligence in Health Care to Increase Adoption: Scoping Review. J Med Internet Res 2022; 24:e32939. [PMID: 35029538 PMCID: PMC8800095 DOI: 10.2196/32939]
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to improve the efficiency and effectiveness of health care service delivery. However, stakeholders' perceptions of and needs regarding such systems remain elusive, hindering efforts to promote AI adoption in health care. OBJECTIVE This study aims to provide an overview of the perceptions and needs surrounding AI in order to increase its adoption in health care. METHODS A systematic scoping review was conducted according to the 5-stage framework by Arksey and O'Malley. Nine databases (ACM Library, CINAHL, Cochrane Central, Embase, IEEE Xplore, PsycINFO, PubMed, Scopus, and Web of Science) were searched for articles describing the perceptions and needs of AI in health care, published from inception until June 21, 2021. Articles that were not specific to AI, were not research studies, or were not written in English were omitted. RESULTS Of the 3666 articles retrieved, 26 (0.71%) were eligible and included in this review. The mean age of the participants ranged from 30 to 72.6 years, the proportion of men ranged from 0% to 73.4%, and the sample sizes of the primary studies ranged from 11 to 2780. The perceptions and needs of various populations in the use of AI were identified for general, primary, and community health care; chronic disease self-management and self-diagnosis; mental health; and diagnostic procedures. The use of AI was perceived positively because of its availability, ease of use, and potential to improve the efficiency and reduce the cost of health care service delivery. However, concerns were raised regarding the lack of trust in data privacy, patient safety, technological maturity, and the possibility of full automation.
Suggestions for improving the adoption of AI in health care were highlighted: enhancing personalization and customizability; enhancing empathy and personification of AI-enabled chatbots and avatars; enhancing user experience, design, and interconnectedness with other devices; and educating the public on AI capabilities. Several corresponding mitigation strategies were also identified in this study. CONCLUSIONS The perceptions and needs surrounding the use of AI in health care are crucial to improving its adoption by various stakeholders. Future studies and implementations should consider the points highlighted in this study to enhance the acceptability and adoption of AI in health care. This would facilitate greater effectiveness and efficiency of health care service delivery, improving patient outcomes and satisfaction.
Affiliation(s)
- Han Shi Jocelyn Chew
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Palakorn Achananuparp
- Living Analytics Research Centre, Singapore Management University, Singapore, Singapore
24
Fritsch SJ, Blankenheim A, Wahl A, Hetfeld P, Maassen O, Deffge S, Kunze J, Rossaint R, Riedel M, Marx G, Bickenbach J. Attitudes and perception of artificial intelligence in healthcare: A cross-sectional survey among patients. Digit Health 2022; 8:20552076221116772. [PMID: 35983102 PMCID: PMC9380417 DOI: 10.1177/20552076221116772]
Abstract
Objective Attitudes about the usage of artificial intelligence in healthcare are controversial. Unlike the perception of healthcare professionals, the attitudes of patients and their companions have received less attention so far. In this study, we aimed to investigate the perception of artificial intelligence in healthcare among this highly relevant group, along with the influence of digital affinity and sociodemographic factors. Methods We conducted a cross-sectional study using a paper-based questionnaire with patients and their companions at a German tertiary referral hospital from December 2019 to February 2020. The questionnaire consisted of three sections examining (a) the respondents' technical affinity, (b) their perception of different aspects of artificial intelligence in healthcare and (c) sociodemographic characteristics. Results Of a total of 452 participants, more than 90% had already read or heard about artificial intelligence, but only 24% reported good or expert knowledge. Asked about their general perception, 53.18% of the respondents rated the use of artificial intelligence in medicine as positive or very positive, while only 4.77% rated it as negative or very negative. The respondents denied concerns about artificial intelligence but strongly agreed that artificial intelligence must be controlled by a physician. Older patients, women, and persons with lower education and lower technical affinity were more cautious about healthcare-related artificial intelligence usage. Conclusions German patients and their companions are open towards the usage of artificial intelligence in healthcare. Although showing only mediocre knowledge about artificial intelligence, a majority rated artificial intelligence in healthcare as positive. In particular, patients insist that a physician supervise the artificial intelligence and keep ultimate responsibility for diagnosis and therapy.
Affiliation(s)
- Sebastian J Fritsch
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
- Juelich Supercomputing Centre, Forschungszentrum Juelich, Germany
- Andrea Blankenheim
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- Alina Wahl
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- Petra Hetfeld
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
- Oliver Maassen
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
- Saskia Deffge
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
- Julian Kunze
- SMITH Consortium of the German Medical Informatics Initiative, Germany
- Department of Anesthesiology, University Hospital RWTH Aachen, Germany
- Rolf Rossaint
- Department of Anesthesiology, University Hospital RWTH Aachen, Germany
- Morris Riedel
- SMITH Consortium of the German Medical Informatics Initiative, Germany
- Juelich Supercomputing Centre, Forschungszentrum Juelich, Germany
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, University of Iceland, Iceland
- Gernot Marx
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
- Johannes Bickenbach
- Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- SMITH Consortium of the German Medical Informatics Initiative, Germany
25
Aggarwal R, Farag S, Martin G, Ashrafian H, Darzi A. Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-sectional Survey. J Med Internet Res 2021; 23:e26162. [PMID: 34236994 PMCID: PMC8430862 DOI: 10.2196/26162]
Abstract
Background Considerable research is being conducted into how artificial intelligence (AI) can be effectively applied to health care. However, the successful implementation of AI requires large amounts of health data for training and testing algorithms. As such, there is a need to understand the perspectives and viewpoints of patients regarding the use of their health data in AI research. Objective We surveyed a large sample of patients to identify their current awareness of health data research and to obtain their opinions and views on data sharing for AI research purposes and on the use of AI technology on health care data. Methods A cross-sectional survey of patients was conducted at a large multisite teaching hospital in the United Kingdom. Data were collected on patient and public views about sharing health data for research and the use of AI on health data. Results A total of 408 participants completed the survey. The respondents had generally low levels of prior knowledge about AI. Most were comfortable with sharing health data with the National Health Service (NHS) (318/408, 77.9%) or universities (268/408, 65.7%), but far fewer with commercial organizations such as technology companies (108/408, 26.4%). The majority endorsed AI research on health care data (357/408, 87.4%) and health care imaging (353/408, 86.4%) in a university setting, provided that concerns about privacy, reidentification of anonymized health care data, and consent processes were addressed. Conclusions There were significant variations in patient perceptions, levels of support, and understanding of health data research and AI. Greater public engagement and debate are necessary to ensure the acceptability of AI research and its successful integration into clinical practice in the future.
Affiliation(s)
- Ravi Aggarwal
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
- Soma Farag
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
- Guy Martin
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
- Hutan Ashrafian
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
- Ara Darzi
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
26
Saheb T, Saheb T, Carpenter DO. Mapping research strands of ethics of artificial intelligence in healthcare: A bibliometric and content analysis. Comput Biol Med 2021; 135:104660. [PMID: 34346319 DOI: 10.1016/j.compbiomed.2021.104660]
Abstract
The use of artificial intelligence to promote healthcare is growing rapidly. Notwithstanding its promising nature, however, AI in healthcare embodies certain ethical challenges as well. This research aims to delineate the most influential elements of scientific research on AI ethics in healthcare by conducting bibliometric, social network, and cluster-based content analyses of scientific articles. The bibliometric analysis not only identified the most influential authors, countries, institutions, sources, and documents, but also recognized four ethical concerns associated with 12 medical issues. These ethical categories comprise normative concerns, meta-ethics, epistemological concerns, and medical practice. The content analysis complemented this list of ethical categories and distinguished seven more: ethics of relationships, medico-legal concerns, ethics of robots, ethics of ambient intelligence, patients' rights, physicians' rights, and ethics of predictive analytics. The analysis likewise identified 40 general research gaps in the literature and plausible future research strands. This work furthers conversations on the ethics of AI and associated emerging technologies, such as nanotech and biotech, in healthcare, and hence advances convergence research on the ethics of AI in healthcare. Practically, this research provides a map for policymakers, AI engineers, and scientists of which dimensions of AI-based medical interventions require stricter policies and guidelines and robust ethical design and development.
Affiliation(s)
- Tahereh Saheb
- Management Studies Center, Tarbiat Modares University, Tehran, Iran.
- Tayebeh Saheb
- Faculty of Law, Tarbiat Modares University, Tehran, Iran
- David O Carpenter
- Institute for Health and the Environment, School of Public Health, University at Albany, State University of New York, USA
27
Khullar D, Casalino LP, Qian Y, Lu Y, Chang E, Aneja S. Public vs physician views of liability for artificial intelligence in health care. J Am Med Inform Assoc 2021; 28:1574-1577. [PMID: 33871009 DOI: 10.1093/jamia/ocab055]
Abstract
The growing use of artificial intelligence (AI) in health care has raised questions about who should be held liable for medical errors that result from care delivered jointly by physicians and algorithms. In this survey study comparing views of physicians and the U.S. public, we find that the public is significantly more likely to believe that physicians should be held responsible when an error occurs during care delivered with medical AI, though the majority of both physicians and the public hold this view (66.0% vs 57.3%; P = .020). Physicians are more likely than the public to believe that vendors (43.8% vs 32.9%; P = .004) and healthcare organizations should be liable for AI-related medical errors (29.2% vs 22.6%; P = .05). Views of medical liability did not differ by clinical specialty. Among the general public, younger people are more likely to hold nearly all parties liable.
Affiliation(s)
- Dhruv Khullar
- Division of Health Policy and Economics, Department of Population Health Sciences, Weill Cornell Medical College, New York, New York, USA
- Division of General Internal Medicine, Department of Medicine, Weill Cornell Medical College, New York, New York, USA
- Lawrence P Casalino
- Division of Health Policy and Economics, Department of Population Health Sciences, Weill Cornell Medical College, New York, New York, USA
- Yuting Qian
- Division of Health Policy and Economics, Department of Population Health Sciences, Weill Cornell Medical College, New York, New York, USA
- Yuan Lu
- Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, Connecticut, USA
- Enoch Chang
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, Connecticut, USA
- Sanjay Aneja
- Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, Connecticut, USA
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, Connecticut, USA
28
Lennox-Chhugani N, Chen Y, Pearson V, Trzcinski B, James J. Women's attitudes to the use of AI image readers: a case study from a national breast screening programme. BMJ Health Care Inform 2021; 28:e100293. [PMID: 33795236 PMCID: PMC8021737 DOI: 10.1136/bmjhci-2020-100293]
Abstract
Background Researchers and developers are evaluating the use of mammogram readers that use artificial intelligence (AI) in clinical settings. Objectives This study examines the attitudes of women, both current and future users of breast screening, towards the use of AI in mammogram reading. Methods We used a cross-sectional, mixed methods study design with data from survey responses and focus groups. We conducted the research in four National Health Service hospitals in England, where we approached female workers over the age of 18 years and their immediate friends and family. We collected 4096 responses. Results Through descriptive statistical analysis, we found that women of screening age (≥50 years) were less likely than women under screening age to use technology apps for healthcare advice (likelihood ratio=0.85, 95% CI 0.82 to 0.89, p<0.001). They were also less likely than women under screening age to agree that AI can have a positive effect on society (likelihood ratio=0.89, 95% CI 0.84 to 0.95, p<0.001). However, they were more likely to feel positive about AI being used to read mammograms (likelihood ratio=1.09, 95% CI 1.02 to 1.17, p=0.009). Discussion and Conclusions Women of screening age are ready to accept the use of AI in breast screening but are less likely to use other AI-based health applications. A large number of women were undecided or had mixed views about the use of AI generally, and they remain to be convinced that it can be trusted.
Affiliation(s)
- Yan Chen
- School of Medicine, University of Nottingham, Nottingham, UK
- Veronica Pearson
- East Midlands Imaging Network, Nottingham University Hospitals NHS Trust, Nottingham, UK
- Jonathan James
- Nottingham Breast Institute, Nottingham University Hospitals NHS Trust, Nottingham, UK