1
Kaye J, Shah N, Kogetsu A, Coy S, Katirai A, Kuroda M, Li Y, Kato K, Yamamoto BA. Moving beyond Technical Issues to Stakeholder Involvement: Key Areas for Consideration in the Development of Human-Centred and Trusted AI in Healthcare. Asian Bioeth Rev 2024; 16:501-511. PMID: 39022370; PMCID: PMC11250765; DOI: 10.1007/s41649-024-00300-w.
Abstract
Discussion around the increasing use of AI in healthcare tends to focus on the technical aspects of the technology rather than the socio-technical issues associated with implementation. In this paper, we argue for the development of a sustained societal dialogue between stakeholders around the use of AI in healthcare. We contend that a more human-centred approach to AI implementation in healthcare is needed which is inclusive of the views of a range of stakeholders. We identify four key areas to support stakeholder involvement that would enhance the development, implementation, and evaluation of AI in healthcare leading to greater levels of trust. These are as follows: (1) aligning AI development practices with social values, (2) appropriate and proportionate involvement of stakeholders, (3) understanding the importance of building trust in AI, (4) embedding stakeholder-driven governance to support these activities.
Affiliation(s)
- Jane Kaye
- Centre for Health, Law, and Emerging Technologies (HeLEX), Faculty of Law, University of Oxford, Oxford, UK
- Melbourne Law School, University of Melbourne, Melbourne, VIC Australia
- Nisha Shah
- Centre for Health, Law, and Emerging Technologies (HeLEX), Faculty of Law, University of Oxford, Oxford, UK
- Atsushi Kogetsu
- Department of Biomedical Ethics and Public Policy, Graduate School of Medicine, Osaka University, Osaka, Japan
- Sarah Coy
- Centre for Health, Law, and Emerging Technologies (HeLEX), Faculty of Law, University of Oxford, Oxford, UK
- Amelia Katirai
- Research Center on Ethical, Legal, and Social Issues, Osaka University, Osaka, Japan
- Machie Kuroda
- Department of Biomedical Ethics and Public Policy, Graduate School of Medicine, Osaka University, Osaka, Japan
- Yan Li
- Center for Global Initiatives, Osaka University, Osaka, Japan
- Kazuto Kato
- Department of Biomedical Ethics and Public Policy, Graduate School of Medicine, Osaka University, Osaka, Japan
2
Frost EK, Bosward R, Aquino YSJ, Braunack-Mayer A, Carter SM. Facilitating public involvement in research about healthcare AI: A scoping review of empirical methods. Int J Med Inform 2024; 186:105417. PMID: 38564959; DOI: 10.1016/j.ijmedinf.2024.105417.
Abstract
OBJECTIVE With the recent increase in research into public views on healthcare artificial intelligence (HCAI), the objective of this review is to examine the methods of empirical studies on public views on HCAI. We map how studies provided participants with information about HCAI, and we examine the extent to which studies framed publics as active contributors to HCAI governance. MATERIALS AND METHODS We searched 5 academic databases and Google Advanced for empirical studies investigating public views on HCAI. We extracted information including study aims, research instruments, and recommendations. RESULTS Sixty-two studies were included. Most were quantitative (N = 42). Most (N = 47) reported providing participants with background information about HCAI. Despite this, studies often reported participants' lack of prior knowledge about HCAI as a limitation. Over three quarters (N = 48) of the studies made recommendations that envisaged public views being used to guide governance of AI. DISCUSSION Provision of background information is an important component of facilitating research with publics on HCAI. The high proportion of studies reporting participants' lack of knowledge about HCAI as a limitation reflects the need for more guidance on how information should be presented. A minority of studies adopted technocratic positions that construed publics as passive beneficiaries of AI, rather than as active stakeholders in HCAI design and implementation. CONCLUSION This review draws attention to how public roles in HCAI governance are constructed in empirical studies. To facilitate active participation, we recommend that research with publics on HCAI consider methodological designs that expose participants to diverse information sources.
Affiliation(s)
- Emma Kellie Frost
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Rebecca Bosward
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
3
Moy S, Irannejad M, Manning SJ, Farahani M, Ahmed Y, Gao E, Prabhune R, Lorenz S, Mirza R, Klinger C. Patient Perspectives on the Use of Artificial Intelligence in Health Care: A Scoping Review. J Patient Cent Res Rev 2024; 11:51-62. PMID: 38596349; PMCID: PMC11000703; DOI: 10.17294/2330-0698.2029.
Abstract
Purpose Artificial intelligence (AI) technology is being rapidly adopted into many different branches of medicine. Although research has started to highlight the impact of AI on health care, research focusing on patient perspectives of AI remains scarce. This scoping review aimed to explore the literature on adult patients' perspectives on the use of an array of AI technologies in the health care setting, to inform design and deployment. Methods This scoping review followed Arksey and O'Malley's framework and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). To evaluate patient perspectives, we conducted a comprehensive literature search using eight interdisciplinary electronic databases, including grey literature. Articles published from 2015 to 2022 that focused on patient views regarding AI technology in health care were included. Thematic analysis was performed on the extracted articles. Results Of the 10,571 imported studies, 37 articles were included and extracted. From the 33 peer-reviewed and 4 grey literature articles, the following themes on AI emerged: (i) Patient attitudes, (ii) Influences on patient attitudes, (iii) Considerations for design, and (iv) Considerations for use. Conclusions Patients are key stakeholders essential to the uptake of AI in health care. The findings indicate that patients' needs and expectations are not fully considered in the application of AI in health care. Therefore, there is a need for patient voices in the development of AI in health care.
Affiliation(s)
- Sally Moy
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Mona Irannejad
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Mehrdad Farahani
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Yomna Ahmed
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Ellis Gao
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Radhika Prabhune
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Suzan Lorenz
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Raza Mirza
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Christopher Klinger
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- National Initiative for the Care of the Elderly, Toronto, Canada
4
Viberg Johansson J, Dembrower K, Strand F, Grauman Å. Women's perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study. BMJ Open 2024; 14:e084014. PMID: 38355190; PMCID: PMC10868248; DOI: 10.1136/bmjopen-2024-084014.
Abstract
BACKGROUND Understanding women's perspectives can help to create an effective and acceptable artificial intelligence (AI) implementation for triaging mammograms, ensuring a high proportion of screening-detected cancer. This study aimed to explore Swedish women's perceptions and attitudes towards the use of AI in mammography. METHOD Semistructured interviews were conducted with 16 women recruited in the spring of 2023 at Capio S:t Görans Hospital, Sweden, during an ongoing clinical trial of AI in screening (ScreenTrustCAD, NCT04778670) with Philips equipment. The interview transcripts were analysed using inductive thematic content analysis. RESULTS In general, women viewed AI as an excellent complementary tool to help radiologists in their decision-making, rather than a complete replacement of their expertise. To trust the AI, the women requested a thorough evaluation, transparency about AI usage in healthcare, and the involvement of a radiologist in the assessment. They would rather accept the worry of being called in more often for scans than risk having a sign of cancer overlooked. They expressed substantial trust in the healthcare system if the implementation of AI were to become standard practice. CONCLUSION The findings suggest that the interviewed women, in general, hold a positive attitude towards the implementation of AI in mammography; nonetheless, they expect and demand more from an AI than from a radiologist. Effective communication regarding the role and limitations of AI is crucial to ensure that patients understand the purpose and potential outcomes of AI-assisted healthcare.
Affiliation(s)
- Jennifer Viberg Johansson
- Centre for Research Ethics & Bioethics (CRB), Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden
- Karin Dembrower
- Capio S:t Görans Hospital, Stockholm, Sweden
- Department of Oncology-Pathology, Karolinska Institute, Stockholm, Sweden
- Fredrik Strand
- Department of Oncology-Pathology, Karolinska Institute, Stockholm, Sweden
- Åsa Grauman
- Centre for Research Ethics & Bioethics (CRB), Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden
5
Shevtsova D, Ahmed A, Boot IWA, Sanges C, Hudecek M, Jacobs JJL, Hort S, Vrijhoef HJM. Trust in and Acceptance of Artificial Intelligence Applications in Medicine: Mixed Methods Study. JMIR Hum Factors 2024; 11:e47031. PMID: 38231544; PMCID: PMC10831593; DOI: 10.2196/47031.
Abstract
BACKGROUND Artificial intelligence (AI)-powered technologies are being increasingly used in almost all fields, including medicine. However, to implement medical AI applications successfully, ensuring trust in and acceptance of such technologies is crucial for their spread and timely adoption worldwide. Although AI applications in medicine provide advantages to the current health care system, there are also various associated challenges regarding, for instance, data privacy, accountability, and equity and fairness, which could hinder their implementation. OBJECTIVE The aim of this study was to identify factors related to trust in and acceptance of novel AI-powered medical technologies and to assess the relevance of those factors among relevant stakeholders. METHODS This study used a mixed methods design. First, a rapid review of the existing literature was conducted, aiming to identify various factors related to trust in and acceptance of novel AI applications in medicine. Next, an electronic survey including the rapid review-derived factors was disseminated among key stakeholder groups. Participants (N=22) were asked to assess on a 5-point Likert scale (1=irrelevant to 5=relevant) to what extent they thought the various factors (N=19) were relevant to trust in and acceptance of novel AI applications in medicine. RESULTS The rapid review (N=32 papers) yielded 110 factors related to trust and 77 factors related to acceptance toward AI technology in medicine. Closely related factors were assigned to 1 of the 19 overarching umbrella factors, which were further grouped into 4 categories: human-related (ie, the type of institution AI professionals originate from), technology-related (ie, the explainability and transparency of AI application processes and outcomes), ethical and legal (ie, data use transparency), and additional factors (ie, AI applications being environment friendly).
The categorized 19 umbrella factors were presented as survey statements, which were evaluated by relevant stakeholders. Survey participants (N=22) represented researchers (n=18, 82%), technology providers (n=5, 23%), hospital staff (n=3, 14%), and policy makers (n=3, 14%). Of the 19 factors, 16 (84%) human-related, technology-related, ethical and legal, and additional factors were considered to be of high relevance to trust in and acceptance of novel AI applications in medicine. The patient's gender, age, and education level were found to be of low relevance (3/19, 16%). CONCLUSIONS The results of this study could help the implementers of medical AI applications to understand what drives trust and acceptance toward AI-powered technologies among key stakeholders in medicine. Consequently, this would allow the implementers to identify strategies that facilitate trust in and acceptance of medical AI applications among key stakeholders and potential users.
Affiliation(s)
- Daria Shevtsova
- Panaxea bv, Den Bosch, Netherlands
- Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Simon Hort
- Fraunhofer Institute for Production Technology, Aachen, Germany
6
Park HJ. Patient perspectives on informed consent for medical AI: A web-based experiment. Digit Health 2024; 10:20552076241247938. PMID: 38698829; PMCID: PMC11064747; DOI: 10.1177/20552076241247938.
Abstract
Objective Despite the increasing use of AI applications as a clinical decision support tool in healthcare, patients are often unaware of their use in the physician's decision-making process. This study aims to determine whether doctors should disclose the use of AI tools in diagnosis and what kind of information should be provided. Methods A survey experiment with 1000 respondents in South Korea was conducted to estimate the patients' perceived importance of information regarding the use of an AI tool in diagnosis in deciding whether to receive the treatment. Results The study found that the use of an AI tool increases the perceived importance of information related to its use, compared with when a physician consults with a human radiologist. Information regarding the AI tool when AI is used was perceived by participants either as more important than or similar to the regularly disclosed information regarding short-term effects when AI is not used. Further analysis revealed that gender, age, and income have a statistically significant effect on the perceived importance of every piece of AI information. Conclusions This study supports the disclosure of AI use in diagnosis during the informed consent process. However, the disclosure should be tailored to the individual patient's needs, as patient preferences for information regarding AI use vary across gender, age and income levels. It is recommended that ethical guidelines be developed for informed consent when using AI in diagnoses that go beyond mere legal requirements.
Affiliation(s)
- Hai Jin Park
- Center for AI and Law, Hanyang University Law School, Seoul, South Korea
7
Grauman Å, Ancillotti M, Veldwijk J, Mascalzoni D. Precision cancer medicine and the doctor-patient relationship: a systematic review and narrative synthesis. BMC Med Inform Decis Mak 2023; 23:286. PMID: 38098034; PMCID: PMC10722840; DOI: 10.1186/s12911-023-02395-x.
Abstract
BACKGROUND The implementation of precision medicine is likely to have a huge impact on clinical cancer care, while the doctor-patient relationship is a crucial aspect of cancer care that needs to be preserved. This systematic review aimed to map out perceptions and concerns regarding how the implementation of precision medicine will impact the doctor-patient relationship in cancer care so that threats against the doctor-patient relationship can be addressed. METHODS Electronic databases (Pubmed, Scopus, Web of Science, Social Science Premium Collection) were searched for articles published from January 2010 to December 2021, including qualitative, quantitative, and theoretical methods. Two reviewers completed title and abstract screening, full-text screening, and data extraction. Findings were summarized and explained using narrative synthesis. RESULTS Four themes were generated from the included articles (n = 35). Providing information addresses issues of information transmission and needs, and of complex concepts such as genetics and uncertainty. Making decisions in a trustful relationship addresses opacity issues, the role of trust, and physicians' attitude towards the role of precision medicine tools in decision-making. Managing negative reactions of non-eligible patients addresses patients' unmet expectations of precision medicine. Conflicting roles in the blurry line between clinic and research addresses issues stemming from physicians' double role as doctors and researchers. CONCLUSIONS Many findings have previously been addressed in doctor-patient communication and clinical genetics. However, precision medicine adds complexity to these fields and further emphasizes the importance of clear communication on specific themes like the distinction between genomic and gene expression and patients' expectations about access, eligibility, effectiveness, and side effects of targeted therapies.
Affiliation(s)
- Å Grauman
- Centre for Research Ethics and Bioethics, Uppsala University, Box 564, Uppsala, SE-751 22, Sweden.
- M Ancillotti
- Centre for Research Ethics and Bioethics, Uppsala University, Box 564, Uppsala, SE-751 22, Sweden
- J Veldwijk
- Erasmus School of Health Policy & Management, Erasmus University Rotterdam, Rotterdam, the Netherlands
- Erasmus Choice Modelling Centre, Erasmus University Rotterdam, Rotterdam, the Netherlands
- D Mascalzoni
- Centre for Research Ethics and Bioethics, Uppsala University, Box 564, Uppsala, SE-751 22, Sweden
- Erasmus Choice Modelling Centre, Erasmus University Rotterdam, Rotterdam, the Netherlands
8
Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023; 338:116357. PMID: 37949020; DOI: 10.1016/j.socscimed.2023.116357.
Abstract
INTRODUCTION Despite the proliferation of Artificial Intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from patients, the general public and health professionals' perspectives to understand these issues from multiple perspectives. METHODOLOGY A search for original research articles using qualitative, quantitative, and mixed methods published between 1 Jan 2001 to 24 Aug 2021 was conducted on six bibliographic databases. Data were extracted and classified into different themes representing views on: (i) knowledge and familiarity of AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) Human - AI relationship. RESULTS The final search identified 7,490 different records of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals generally had a positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers suffer from job loss due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinical involvement in the development of AI was emphasised. 
To help successfully implement AI in health care, most participants envisioned that an investment in training and education campaigns was necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, there remained concerns about sharing data with insurance or technology companies. One key domain under this theme was the question of who should be held accountable in the case of adverse events arising from using AI. CONCLUSIONS While overall positivity persists in attitudes and preferences toward AI use in healthcare, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues to look at the translation of legislation and guidelines into practice to ensure fairness, accountability, transparency, and ethics in AI.
Affiliation(s)
- Vinh Vo
- Centre for Health Economics, Monash University, Australia.
- Gang Chen
- Centre for Health Economics, Monash University, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Quynh Nga Do
- Department of Economics, Monash University, Australia
- Maame Esi Woode
- Centre for Health Economics, Monash University, Australia; Monash Data Futures Research Institute, Australia
9
Willis K, Chaudhry UAR, Chandrasekaran L, Wahlich C, Olvera-Barrios A, Chambers R, Bolter L, Anderson J, Barman SA, Fajtl J, Welikala R, Egan C, Tufail A, Owen CG, Rudnicka A. What are the perceptions and concerns of people living with diabetes and National Health Service staff around the potential implementation of AI-assisted screening for diabetic eye disease? Development and validation of a survey for use in a secondary care screening setting. BMJ Open 2023; 13:e075558. PMID: 37968006; PMCID: PMC10660949; DOI: 10.1136/bmjopen-2023-075558.
Abstract
INTRODUCTION The English National Health Service (NHS) Diabetic Eye Screening Programme (DESP) performs around 2.3 million eye screening appointments annually, generating approximately 13 million retinal images that are graded by humans for the presence or severity of diabetic retinopathy. Previous research has shown that automated retinal image analysis systems, including artificial intelligence (AI), can identify images with no disease from those with diabetic retinopathy as safely and effectively as human graders, and could significantly reduce the workload for human graders. Some algorithms can also determine the level of severity of the retinopathy with similar performance to humans. There is a need to examine perceptions and concerns surrounding AI-assisted eye screening among people living with diabetes and NHS staff, if AI were to be introduced into the DESP, to identify factors that may influence acceptance of this technology. METHODS AND ANALYSIS People living with diabetes and staff from the North East London (NEL) NHS DESP were invited to participate in two respective focus groups to codesign two online surveys exploring their perceptions and concerns around the potential introduction of AI-assisted screening. Focus group participants were representative of the local population in terms of age and ethnicity. Participants' feedback was taken into consideration to update the surveys, which were circulated for further feedback. Surveys will be piloted at the NEL DESP and followed by semistructured interviews to assess accessibility and usability and to validate the surveys. Validated surveys will be distributed by other NHS DESP sites, and also via patient groups on social media, relevant charities, and the British Association of Retinal Screeners. Post-survey evaluative interviews will be undertaken among those who consent to participate in further research.
ETHICS AND DISSEMINATION Ethical approval has been obtained by the NHS Research Ethics Committee (IRAS ID: 316631). Survey results will be shared and discussed with focus groups to facilitate preparation of findings for publication and to inform codesign of outreach activities to address concerns and perceptions identified.
Affiliation(s)
- Kathryn Willis
- Population Health Research Institute, St George's University of London, London, UK
- Umar A R Chaudhry
- Population Health Research Institute, St George's University of London, London, UK
- Charlotte Wahlich
- Population Health Research Institute, St George's University of London, London, UK
- Abraham Olvera-Barrios
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Ryan Chambers
- Diabetes and Endocrinology, Homerton Healthcare NHS Foundation Trust, London, UK
- Louis Bolter
- Diabetes and Endocrinology, Homerton Healthcare NHS Foundation Trust, London, UK
- John Anderson
- Diabetes and Endocrinology, Homerton Healthcare NHS Foundation Trust, London, UK
- S A Barman
- School of Computer Science and Mathematics, Kingston University London, London, UK
- Jiri Fajtl
- School of Computer Science and Mathematics, Kingston University London, London, UK
- Roshan Welikala
- School of Computer Science and Mathematics, Kingston University London, London, UK
- Catherine Egan
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Adnan Tufail
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Christopher G Owen
- Population Health Research Institute, St George's University of London, London, UK
- Alicja Rudnicka
- Population Health Research Institute, St George's University of London, London, UK
10
Rodler S, Kopliku R, Ulrich D, Kaltenhauser A, Casuscelli J, Eismann L, Waidelich R, Buchner A, Butz A, Cacciamani GE, Stief CG, Westhofen T. Patients' Trust in Artificial Intelligence-based Decision-making for Localized Prostate Cancer: Results from a Prospective Trial. Eur Urol Focus 2023:S2405-4569(23)00237-7. PMID: 37923632; DOI: 10.1016/j.euf.2023.10.020.
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to enhance diagnostic accuracy and improve treatment outcomes. However, AI integration into clinical workflows and patient perspectives remain unclear. OBJECTIVE To determine patients' trust in AI and their perception of urologists relying on AI, and future diagnostic and therapeutic AI applications for patients. DESIGN, SETTING, AND PARTICIPANTS A prospective trial was conducted involving patients who received diagnostic or therapeutic interventions for prostate cancer (PC). INTERVENTION Patients were asked to complete a survey before magnetic resonance imaging, prostate biopsy, or radical prostatectomy. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS The primary outcome was patient trust in AI. Secondary outcomes were the choice of AI in treatment settings and traits attributed to AI and urologists. RESULTS AND LIMITATIONS Data for 466 patients were analyzed. The cumulative affinity for technology was positively correlated with trust in AI (correlation coefficient 0.094; p = 0.04), whereas patient age, level of education, and subjective perception of illness were not (p > 0.05). The mean score (± standard deviation) for trust in capability was higher for physicians than for AI for responding in an individualized way when communicating a diagnosis (4.51 ± 0.76 vs 3.38 ± 1.07; mean difference [MD] 1.130, 95% confidence interval [CI] 1.010-1.250; t924 = 18.52, p < 0.001; Cohen's d = 1.040) and for explaining information in an understandable way (4.57 ± vs 3.18 ± 1.09; MD 1.392, 95% CI 1.275-1.509; t921 = 27.27, p < 0.001; Cohen's d = 1.216). Patients stated that they had higher trust in a diagnosis made by AI controlled by a physician versus AI not controlled by a physician (4.31 ± 0.88 vs 1.75 ± 0.93; MD 2.561, 95% CI 2.444-2.678; t925 = 42.89, p < 0.001; Cohen's d = 2.818). 
AI-assisted physicians (66.74%) were preferred over physicians alone (29.61%), physicians controlled by AI (2.36%), and AI alone (0.64%) for treatment in the current clinical scenario. CONCLUSIONS Trust in future diagnostic and therapeutic AI-based treatment relies on optimal integration with urologists as the human-machine interface to leverage human and AI capabilities. PATIENT SUMMARY Artificial intelligence (AI) will play a role in diagnostic decisions in prostate cancer in the future. At present, patients prefer AI-assisted urologists over urologists alone, AI alone, and AI-controlled urologists. Specific traits of AI and urologists could be used to optimize diagnosis and treatment for patients with prostate cancer.
Collapse
Affiliation(s)
- Severin Rodler
- Department of Urology, LMU University Hospital, Munich, Germany; USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA.
| | - Rega Kopliku
- Department of Urology, LMU University Hospital, Munich, Germany
| | - Daniel Ulrich
- Department of Informatics, Ludwig-Maximilian-Universität München, Munich, Germany
| | - Annika Kaltenhauser
- Department of Informatics, Ludwig-Maximilian-Universität München, Munich, Germany
| | | | - Lennert Eismann
- Department of Urology, LMU University Hospital, Munich, Germany
| | | | | | - Andreas Butz
- Department of Informatics, Ludwig-Maximilian-Universität München, Munich, Germany
| | - Giovanni E Cacciamani
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | | | - Thilo Westhofen
- Department of Urology, LMU University Hospital, Munich, Germany
| |
Collapse
|
11
|
Kelly S, Kaye SA, White KM, Oviedo-Trespalacios O. Clearing the way for participatory data stewardship in artificial intelligence development: a mixed methods approach. ERGONOMICS 2023; 66:1782-1799. [PMID: 38054452 DOI: 10.1080/00140139.2023.2289864] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Accepted: 11/28/2023] [Indexed: 12/07/2023]
Abstract
Participatory data stewardship (PDS) empowers individuals to shape and govern their data via responsible collection and use. As artificial intelligence (AI) requires massive amounts of data, research must assess what factors predict consumers' willingness to provide their data to AI. This mixed-methods study applied the extended Technology Acceptance Model (TAM) with additional predictors of trust and subjective norms. Participants' data donation profile was also measured to assess the influence of individuals' social duty, understanding of the purpose, and guilt. Participants (N = 322) completed an experimental survey. Individuals were willing to provide data to AI via PDS when they believed it was their social duty, understood the purpose, and trusted AI. However, the TAM may not be a complete model for assessing user willingness. This study establishes that individuals value trusting AI and comprehending its broader societal impact when providing their data. Practitioner summary: To build responsible and representative AI, individuals are needed to participate in data stewardship. The factors driving willingness to participate in such methods were studied via an online survey. Trust, social duty, and understanding the purpose significantly predicted willingness to provide data to AI via participatory data stewardship.
Collapse
Affiliation(s)
- Sage Kelly
- Centre for Accident Research and Road Safety - Queensland (CARRS-Q), School of Psychology & Counselling, Queensland University of Technology (QUT), Kelvin Grove, Queensland, Australia
| | - Sherrie-Anne Kaye
- Centre for Accident Research and Road Safety - Queensland (CARRS-Q), School of Psychology & Counselling, Queensland University of Technology (QUT), Kelvin Grove, Queensland, Australia
| | - Katherine M White
- Faculty of Health, School of Psychology & Counselling, Queensland University of Technology (QUT), Kelvin Grove, Queensland, Australia
| | - Oscar Oviedo-Trespalacios
- Faculty of Technology, Policy and Management, Delft University of Technology, Delft, the Netherlands
| |
Collapse
|
12
|
Manning F, Mahmoud A, Meertens R. Understanding patient views and acceptability of predictive software in osteoporosis identification. Radiography (Lond) 2023; 29:1046-1053. [PMID: 37734275 DOI: 10.1016/j.radi.2023.08.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Revised: 08/21/2023] [Accepted: 08/28/2023] [Indexed: 09/23/2023]
Abstract
INTRODUCTION Research into patient and public views on predictive software and its use in healthcare is relatively new. This study aimed to understand older adults' acceptability of an opportunistic bone density assessment for osteoporosis diagnosis (IBEX BH), views on its integration into healthcare, and views on predictive software and AI in healthcare. METHODS Focus groups were conducted with participants aged over 50 years, based in South West England. Data were analysed using thematic analysis, informed by the theoretical framework of acceptability. RESULTS Two focus groups were undertaken with a total of 14 participants. Overall, the participants were generally positive about the IBEX BH software, and predictive software in general, stating 'it sounds like a brilliant idea'. Although participants did not understand the intricacies of the software, they did not feel they needed to. Concerns about IBEX BH focussed more on the clinical indications of the software (e.g. more scans or medications), with participants expressing less trust in results if they indicated medication. Questions were also raised about how the results of this software would be communicated, and by whom. Individual choice was evident in these discussions; however, most participants indicated a preference for spoken communication: 'But I would expect that these results would be given by a human to another human.' CONCLUSIONS Focus group participants were generally accepting of the use of predictive software in healthcare. IMPLICATIONS FOR PRACTICE Thought and care need to be taken when integrating predictive software into practice. A focus on empowering patients and providing information on processes and results is key.
Collapse
Affiliation(s)
- F Manning
- Department of Health and Care Professions, University of Exeter Medical School, University of Exeter, Exeter, UK.
| | - A Mahmoud
- Department of Health and Community Sciences, University of Exeter Medical School, University of Exeter, Exeter, UK.
| | - R Meertens
- Department of Health and Care Professions, University of Exeter Medical School, University of Exeter, Exeter, UK.
| |
Collapse
|
13
|
Grauman Å, Kontro M, Haller K, Nier S, Aakko S, Lang K, Zingaretti C, Meggiolaro E, De Padova S, Marconi G, Martinelli G, Heckman CA, Simonetti G, Bullinger L, Kihlbom U. Personalizing precision medicine: Patients with AML perceptions about treatment decisions. PATIENT EDUCATION AND COUNSELING 2023; 115:107883. [PMID: 37421687 DOI: 10.1016/j.pec.2023.107883] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Revised: 06/27/2023] [Accepted: 07/03/2023] [Indexed: 07/10/2023]
Abstract
BACKGROUND This study aims to explore the perceptions of patients with acute myeloid leukemia about precision medicine and their preferences for involvement in this new area of shared decision-making. METHODS Individual semi-structured interviews were conducted in Finland, Italy, and Germany (n = 16). The study population included patients aged 24-79 years. Interviews were analyzed with thematic content analysis. RESULTS Patients perceived lack of knowledge as a barrier to their involvement in decision-making. Treatment decisions were often made rapidly, based on the patient's intuition and trust in the physician rather than on information, in situations that diminish the patient's decision-making capacity. The patients emphasized that they are in a desperate situation that makes them willing to accept treatment with low probabilities of being cured. CONCLUSIONS The study raised important issues regarding patients' understanding of precision medicine and challenges concerning how to involve patients in medical decision-making. Although technical advances were viewed positively, the role of the physician as an expert and person of trust cannot be replaced. PRACTICE IMPLICATIONS Regardless of patients' preferences for involvement in decision-making, information plays a crucial role in patients' perceived involvement in their care. The concepts related to precision medicine are complex and will pose challenges for patient education.
Collapse
Affiliation(s)
- Åsa Grauman
- Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden.
| | - Mika Kontro
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, Helsinki, Finland; Department of Hematology, Helsinki University, Helsinki, Finland; Foundation for the Finnish Cancer Institute, Helsinki, Finland
| | - Karl Haller
- Department of Hematology, Oncology, and Cancer Immunology, Charité-Universitätsmedizin Berlin, Berlin, Germany
| | | | - Sofia Aakko
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, Helsinki, Finland
| | - Katharina Lang
- Department of Hematology, Oncology, and Cancer Immunology, Charité-Universitätsmedizin Berlin, Berlin, Germany
| | - Chiara Zingaretti
- Unit of Biostatistics and Clinical Trials, IRCCS Istituto Romagnolo per lo Studio dei Tumori (IRST) "Dino Amadori", Meldola, FC, Italy
| | - Elena Meggiolaro
- Psycho-oncology Service, Palliative care, Pain therapy and Integrative Medicine Unit, IRCCS Istituto Romagnolo per lo Studio dei Tumori (IRST) "Dino Amadori", Meldola, FC, Italy
| | - Silvia De Padova
- Psycho-oncology Service, Palliative care, Pain therapy and Integrative Medicine Unit, IRCCS Istituto Romagnolo per lo Studio dei Tumori (IRST) "Dino Amadori", Meldola, FC, Italy
| | - Giovanni Marconi
- Hematology Unit, IRCCS Istituto Romagnolo per lo Studio dei Tumori (IRST) "Dino Amadori", Meldola, FC, Italy
| | - Giovanni Martinelli
- Scientific Directorate, IRCCS Istituto Romagnolo per lo Studio dei Tumori (IRST) "Dino Amadori", Meldola, FC, Italy
| | - Caroline A Heckman
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, Helsinki, Finland
| | - Giorgia Simonetti
- Biosciences Laboratory, IRCCS Istituto Romagnolo per lo Studio dei Tumori (IRST) "Dino Amadori", Meldola, FC, Italy
| | - Lars Bullinger
- Department of Hematology, Oncology, and Cancer Immunology, Charité-Universitätsmedizin Berlin, Berlin, Germany
| | - Ulrik Kihlbom
- Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden; Stockholm Centre for Health Care Ethics (CHE), LIME, Karolinska Institutet, Sweden
| |
Collapse
|
14
|
Katirai A, Yamamoto BA, Kogetsu A, Kato K. Perspectives on artificial intelligence in healthcare from a Patient and Public Involvement Panel in Japan: an exploratory study. Front Digit Health 2023; 5:1229308. [PMID: 37781456 PMCID: PMC10533983 DOI: 10.3389/fdgth.2023.1229308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Accepted: 08/28/2023] [Indexed: 10/03/2023] Open
Abstract
Patients and members of the public are the end users of healthcare, but little is known about their views on the use of artificial intelligence (AI) in healthcare, particularly in the Japanese context. This paper reports on an exploratory two-part workshop conducted with members of a Patient and Public Involvement Panel in Japan, designed to identify their expectations and concerns about the use of AI in healthcare broadly. A total of 55 expectations and 52 concerns were elicited from workshop participants, who were then asked to cluster and title them. Thematic content analysis was used to identify 12 major themes from these data. Participants had notable expectations around improved hospital administration, improved quality of care and patient experience, and positive changes in roles and relationships, as well as reductions in costs and disparities. These were counterbalanced by concerns about problematic changes to healthcare and a potential loss of autonomy, as well as risks around accountability and data management and the possible emergence of new disparities. The findings reflect participants' expectations for AI as a possible solution to long-standing issues in healthcare, though their overall balanced view of AI mirrors findings reported in other contexts. Thus, this paper offers initial, novel insights into perspectives on AI in healthcare from the Japanese context. Moreover, the findings are used to argue for the importance of involving patient and public stakeholders in deliberation on AI in healthcare.
Collapse
Affiliation(s)
- Amelia Katirai
- Research Center on Ethical, Legal, and Social Issues, Osaka University, Suita, Japan
| | | | - Atsushi Kogetsu
- Department of Biomedical Ethics and Public Policy, Graduate School of Medicine, Osaka University, Suita, Japan
| | - Kazuto Kato
- Department of Biomedical Ethics and Public Policy, Graduate School of Medicine, Osaka University, Suita, Japan
| |
Collapse
|
15
|
Macri R, Roberts SL. The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making. Curr Oncol 2023; 30:2178-2186. [PMID: 36826129 PMCID: PMC9955933 DOI: 10.3390/curroncol30020168] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Revised: 01/28/2023] [Accepted: 02/01/2023] [Indexed: 02/12/2023] Open
Abstract
Clinical applications of artificial intelligence (AI) in healthcare, including in the field of oncology, have the potential to advance diagnosis and treatment. The literature suggests that patient values should be considered in decision making when using AI in clinical care; however, there is a lack of practical guidance for clinicians on how to approach these conversations and incorporate patient values into clinical decision making. We provide a practical, values-based guide for clinicians to assist in critical reflection and the incorporation of patient values into shared decision making when deciding to use AI in clinical care. Values that are relevant to patients, identified in the literature, include trust, privacy and confidentiality, non-maleficence, safety, accountability, beneficence, autonomy, transparency, compassion, equity, justice, and fairness. The guide offers questions for clinicians to consider when adopting the potential use of AI in their practice; explores illness understanding between the patient and clinician; encourages open dialogue of patient values; reviews all clinically appropriate options; and makes a shared decision of what option best meets the patient's values. The guide can be used for diverse clinical applications of AI.
Collapse
Affiliation(s)
- Rosanna Macri
- Department of Bioethics, Sinai Health, Toronto, ON M5G 1X5, Canada
- Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON M5T 1P8, Canada
- Department of Radiation Oncology, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5T 1P5, Canada
- Correspondence:
| | - Shannon L. Roberts
- Project-Specific Bioethics Research Volunteer Student, Hennick Bridgepoint Hospital, Sinai Health, Toronto, ON M4M 2B5, Canada
| |
Collapse
|
16
|
Shen KL, Huang CL, Lin YC, Du JK, Chen FL, Kabasawa Y, Chen CC, Huang HL. Effects of Artificial Intelligence (AI)-Assisted Dental Monitoring Intervention in Patients with Periodontitis: A Randomized Controlled Trial. J Clin Periodontol 2022; 49:988-998. [PMID: 35713224 DOI: 10.1111/jcpe.13675] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2022] [Revised: 05/25/2022] [Accepted: 05/30/2022] [Indexed: 11/27/2022]
Abstract
AIM To evaluate the effects of an at-home AI-assisted dental monitoring application on treatment outcomes in patients with periodontitis. MATERIALS AND METHODS Participants with periodontitis were recruited and randomly assigned to an AI (AI; n = 16), AI and human counseling (AIHC; n = 17), or control (CG; n = 20) group. All participants received nonsurgical periodontal treatment. We employed DENTAL MONITORING® (DM), a new AI-assisted monitoring product that utilizes smartphone cameras for intraoral scanning and assessment. Patients in the AI and AIHC groups respectively received additional (a) DM or (b) DM with real-person counseling over three months. Periodontal parameters were collected at baseline and follow-ups. A mixed-design model analyzed the follow-up effects over time. RESULTS The AI and AIHC groups respectively exhibited greater improvement in probing pocket depth [Mean diff = -0.9±0.4 and -1.4±0.3, effect size (ES) = 0.76 and 1.98], clinical attachment level (Mean diff = -0.8±0.3 and -1.4±0.3, ES = 0.84 and 1.77), and plaque index (Mean diff = -0.5±0.2 and -0.7±0.2, ES = 0.93 and 1.81) at the 3-month follow-up than the CG did. The AIHC group had a greater reduction in probing pocket depth (ES = 0.46) and clinical attachment level (ES = 0.64) at the 3-month follow-up compared with the AI group. CONCLUSION Using AI monitoring at home had a positive effect on treatment outcomes for patients with periodontitis. Patients with AI-assisted health counseling exhibited better treatment outcomes than did patients who used AI monitoring alone.
Collapse
Affiliation(s)
- Kang-Ling Shen
- Department of Oral Hygiene, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung City, Taiwan
| | - Chiung-Lin Huang
- Division of Periodontics, Department of Dentistry, Kaohsiung Medical University Hospital, Kaohsiung City, Taiwan
| | - Ying-Chun Lin
- Department of Oral Hygiene, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung City, Taiwan; Department of Dentistry, Kaohsiung Medical University Hospital, Kaohsiung City, Taiwan
| | - Je-Kang Du
- Department of Dentistry, Kaohsiung Medical University Hospital, Kaohsiung City, Taiwan; School of Dentistry, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung City, Taiwan; Division of Prosthodontics, Department of Dentistry, Kaohsiung Medical University Hospital, Kaohsiung City, Taiwan
| | - Fu-Li Chen
- Department of Public Health, Fu Jen Catholic University, New Taipei City, Taiwan
| | - Yuji Kabasawa
- Oral Care for Systemic Health Support, Faculty of Dentistry, School of Oral Health Care Sciences, Graduate School, Tokyo Medical and Dental University, Tokyo, Japan
| | - Chih-Chang Chen
- Department of Oral Hygiene, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung City, Taiwan
| | - Hsiao-Ling Huang
- Department of Oral Hygiene, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung City, Taiwan
| |
Collapse
|
17
|
Yap A, Wilkinson B, Chen E, Han L, Vaghefi E, Galloway C, Squirrell D. Patients Perceptions of Artificial Intelligence in Diabetic Eye Screening. Asia Pac J Ophthalmol (Phila) 2022; 11:287-293. [PMID: 35772087 DOI: 10.1097/apo.0000000000000525] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023] Open
Abstract
PURPOSE Artificial intelligence (AI) technology is poised to revolutionize modern delivery of health care services. We set out to evaluate the patient perspective on AI use in diabetic retinal screening. DESIGN Survey. METHODS Four hundred thirty-eight patients undergoing diabetic retinal screening across New Zealand participated in a survey about their opinion of AI technology in retinal screening. The survey consisted of 13 questions covering topics of awareness, trust, and receptivity toward AI systems. RESULTS The mean age was 59 years. The majority of participants identified as New Zealand European (50%), followed by Asian (31%), Pacific Islander (10%), and Maori (5%). Whilst 73% of participants were aware of AI, only 58% had heard of it being implemented in health care. Overall, 78% of respondents were comfortable with AI use in their care, with 53% saying they would trust an AI-assisted screening program as much as a health professional. Despite having a higher awareness of AI, younger participants had lower trust in AI systems. A higher proportion of Maori and Pacific participants indicated a preference toward human-led screening. The main perceived benefits of AI included faster diagnostic speeds and greater accuracy. CONCLUSIONS There is low awareness of clinical AI applications among our participants. Despite this, most are receptive toward the implementation of AI in diabetic eye screening. Overall, there was a strong preference toward continual involvement of clinicians in the screening process. We offer key recommendations to enhance public receptivity toward the incorporation of AI into retinal screening programs.
Collapse
Affiliation(s)
- Aaron Yap
- Department of Ophthalmology, Auckland, New Zealand
| | - Benjamin Wilkinson
- Department of Ophthalmology, University of Auckland, Auckland, New Zealand
| | - Eileen Chen
- School of Optometry and Vision Science, Auckland, New Zealand
| | - Lydia Han
- School of Optometry and Vision Science, Auckland, New Zealand
| | - Ehsan Vaghefi
- School of Optometry and Vision Science, Auckland, New Zealand
- Toku Eyes, Auckland, New Zealand
| | - Chris Galloway
- School of Communication, Journalism and Marketing, Massey Business School, New Zealand
| | - David Squirrell
- Department of Ophthalmology, Auckland, New Zealand
- Toku Eyes, Auckland, New Zealand
| |
Collapse
|
18
|
Isbanner S, O'Shaughnessy P, Steel D, Wilcock S, Carter S. The Australian Values and Attitudes on AI (AVA-AI) Study: Methodologically Innovative National Survey about Adopting Artificial Intelligence in Healthcare and Social Services. J Med Internet Res 2022; 24:e37611. [PMID: 35994331 PMCID: PMC9446139 DOI: 10.2196/37611] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 05/25/2022] [Accepted: 07/19/2022] [Indexed: 11/13/2022] Open
Affiliation(s)
- Sebastian Isbanner
- Social Marketing @ Griffith, Griffith Business School, Griffith University, Brisbane, Australia
| | - Pauline O'Shaughnessy
- School of Mathematics and Applied Statistics, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, Australia
| | - David Steel
- School of Mathematics and Applied Statistics, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, Australia
| | - Scarlet Wilcock
- Australian Research Council Centre of Excellence for Automated Decision-Making and Society, The University of Sydney Law School, The University of Sydney, Sydney, Australia
| | - Stacy Carter
- Australian Centre for Health Engagement Evidence and Values, Faculty of the Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, Australia
| |
Collapse
|
19
|
Khanijahani A, Iezadi S, Dudley S, Goettler M, Kroetsch P, Wise J. Organizational, professional, and patient characteristics associated with artificial intelligence adoption in healthcare: A systematic review. HEALTH POLICY AND TECHNOLOGY 2022. [DOI: 10.1016/j.hlpt.2022.100602] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
|
20
|
Ploug T, Sundby A, Moeslund TB, Holm S. Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey. J Med Internet Res 2021; 23:e26611. [PMID: 34898454 PMCID: PMC8713089 DOI: 10.2196/26611] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Revised: 05/31/2021] [Accepted: 11/11/2021] [Indexed: 01/04/2023] Open
Abstract
BACKGROUND Certain types of artificial intelligence (AI), that is, deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making, but transparency/explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interests in high performance as well as in transparency/explainability, and such a policy should consider the wider public's interests in these features of AI. OBJECTIVE This study elicited the public's preferences for the performance and explainability of AI decision making in health care and determined whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI. METHODS We conducted a choice-based conjoint survey of public preferences for attributes of AI decision making in health care in a representative sample of the adult Danish population. Initial focus group interviews yielded 6 attributes playing a role in the respondents' views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. In total, 100 unique choice sets were developed using fractional factorial design. In a 12-task survey, respondents were asked about their preference for AI system use in hospitals in relation to 3 different scenarios. RESULTS Of the 1678 potential respondents, 1027 (61.2%) participated.
The respondents considered the physician having final responsibility for treatment decisions to be the most important attribute, accounting for 46.8% of the total weight of attributes, followed by explainability of the decision (27.3%) and whether the system had been tested for discrimination (14.8%). Other factors, such as gender, age, level of education, whether respondents live rurally or in towns, respondents' trust in health and technology, and respondents' fears and hopes regarding AI, did not play a significant role in the majority of cases. CONCLUSIONS The 3 factors most important to the public are, in descending order of importance, (1) that physicians are ultimately responsible for diagnostics and treatment planning, (2) that the AI decision support is explainable, and (3) that the AI system has been tested for discrimination. Public policy on AI system use in health care should give priority to AI systems with these features and ensure that patients are provided with information.
Collapse
Affiliation(s)
- Thomas Ploug
- Department of Communication and Psychology, Aalborg University, Copenhagen, Denmark
| | - Anna Sundby
- Department of Communication and Psychology, Aalborg University, Copenhagen, Denmark
| | - Thomas B Moeslund
- Visual Analysis and Perception Lab, Aalborg University, Aalborg, Denmark
| | - Søren Holm
- Centre for Social Ethics and Policy, University of Manchester, Manchester, United Kingdom
| |
Collapse
|
21
|
Young AT, Amara D, Bhattacharya A, Wei ML. Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review. LANCET DIGITAL HEALTH 2021; 3:e599-e611. [PMID: 34446266 DOI: 10.1016/s2589-7500(21)00132-1] [Citation(s) in RCA: 58] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 06/15/2021] [Accepted: 06/17/2021] [Indexed: 12/14/2022]
Abstract
Artificial intelligence (AI) promises to change health care, with some studies showing proof of concept of provider-level performance in various medical specialties. However, there are many barriers to implementing AI, including patient acceptance and understanding of AI. Patients' attitudes toward AI are not well understood. We systematically reviewed the literature on patient and general public attitudes toward clinical AI (either hypothetical or realised), including quantitative, qualitative, and mixed-methods original research articles. We searched biomedical and computational databases from Jan 1, 2000, to Sept 28, 2020, and screened 2590 articles, 23 of which met our inclusion criteria. Studies were heterogeneous regarding the study population, study design, and the field and type of AI under study. Six (26%) studies assessed currently available or soon-to-be available AI tools, whereas 17 (74%) assessed hypothetical or broadly defined AI. The methodological quality of these studies was mixed, with selection bias a frequent issue. Overall, patients and the general public conveyed positive attitudes toward AI but had many reservations and preferred human supervision. We summarise our findings in six themes: AI concept, AI acceptability, AI relationship with humans, AI development and implementation, AI strengths and benefits, and AI weaknesses and risks. We suggest guidance for future studies, with the goal of supporting the safe, equitable, and patient-centred implementation of clinical AI.
Collapse
Affiliation(s)
- Albert T Young
- School of Medicine, University of California, San Francisco, San Francisco, CA, USA
| | - Dominic Amara
- School of Medicine, University of California, San Francisco, San Francisco, CA, USA
| | | | - Maria L Wei
- Department of Dermatology, University of California, San Francisco, San Francisco, CA, USA; Dermatology Service, San Francisco Veterans Affairs Medical Center, San Francisco, CA, USA.
| |
Collapse
|
22
|
Lennartz S, Dratsch T, Zopfs D, Persigehl T, Maintz D, Große Hokamp N, Pinto Dos Santos D. Use and Control of Artificial Intelligence in Patients Across the Medical Workflow: Single-Center Questionnaire Study of Patient Perspectives. J Med Internet Res 2021; 23:e24221. [PMID: 33595451 PMCID: PMC7929746 DOI: 10.2196/24221] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Revised: 11/01/2020] [Accepted: 11/30/2020] [Indexed: 12/12/2022] Open
Abstract
BACKGROUND Artificial intelligence (AI) is gaining increasing importance in many medical specialties, yet data on patients' opinions on the use of AI in medicine are scarce. OBJECTIVE This study aimed to investigate patients' opinions on the use of AI in different aspects of the medical workflow and the level of control and supervision under which they would deem the application of AI in medicine acceptable. METHODS Patients scheduled for computed tomography or magnetic resonance imaging voluntarily participated in an anonymized questionnaire between February 10, 2020, and May 24, 2020. Patient information, confidence in physicians vs AI in different clinical tasks, opinions on the control of AI, preference in cases of disagreement between AI and physicians, and acceptance of the use of AI for diagnosing and treating diseases of different severity were recorded. RESULTS In total, 229 patients participated. Patients favored physicians over AI for all clinical tasks except for treatment planning based on current scientific evidence. In case of disagreement between physicians and AI regarding diagnosis and treatment planning, most patients preferred the physician's opinion to AI (96.2% [153/159] vs 3.8% [6/159] and 94.8% [146/154] vs 5.2% [8/154], respectively; P=.001). AI supervised by a physician was considered more acceptable than AI without physician supervision at diagnosis (confidence rating 3.90 [SD 1.20] vs 1.64 [SD 1.03], respectively; P=.001) and therapy (3.77 [SD 1.18] vs 1.57 [SD 0.96], respectively; P=.001). CONCLUSIONS Patients favored physicians over AI in most clinical tasks and strongly preferred an application of AI with physician supervision. However, patients acknowledged that AI could help physicians integrate the most recent scientific evidence into medical care. Application of AI in medicine should be disclosed and controlled to protect patient interests and meet ethical standards.
Collapse
Affiliation(s)
- Simon Lennartz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Thomas Dratsch
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - David Zopfs
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Thorsten Persigehl
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - David Maintz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Nils Große Hokamp
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Daniel Pinto Dos Santos
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| |
Collapse
|
23
|
Vaishya R, Javaid M, Haleem A, Khan I, Vaish A. Extending capabilities of artificial intelligence for decision-making and healthcare education. APOLLO MEDICINE 2020. [DOI: 10.4103/am.am_10_20] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
|