1
Liao X, Yao C, Jin F, Zhang J, Liu L. Barriers and facilitators to implementing imaging-based diagnostic artificial intelligence-assisted decision-making software in hospitals in China: a qualitative study using the updated Consolidated Framework for Implementation Research. BMJ Open 2024; 14:e084398. PMID: 39260855; PMCID: PMC11409362; DOI: 10.1136/bmjopen-2024-084398.
Abstract
OBJECTIVES To identify the barriers and facilitators to the successful implementation of imaging-based diagnostic artificial intelligence (AI)-assisted decision-making software in China, using the updated Consolidated Framework for Implementation Research (CFIR) as a theoretical basis to develop strategies that promote effective implementation. DESIGN This qualitative study involved semistructured interviews with key stakeholders from both clinical settings and industry. Interview guide development, coding, analysis and reporting of findings were thoroughly informed by the updated CFIR. SETTING Four healthcare institutions in Beijing and Shanghai and two vendors of AI-assisted decision-making software for lung nodule detection and diabetic retinopathy screening were selected based on purposive sampling. PARTICIPANTS A total of 23 healthcare practitioners, 6 hospital informatics specialists, 4 hospital administrators and 7 vendors of the selected AI-assisted decision-making software were included in the study. RESULTS Within the 5 CFIR domains, 10 constructs were identified as barriers, 8 as facilitators and 3 as both barriers and facilitators. Major barriers included unsatisfactory clinical performance (innovation); lack of a collaborative network between primary and tertiary hospitals, and lack of information security measures and certification (outer setting); suboptimal data quality and misalignment between software functions and the goals of healthcare institutions (inner setting); and unmet clinical needs (individuals). Key facilitators were strong empirical evidence of effectiveness and improved clinical efficiency (innovation); national guidelines related to AI and deployment of AI software in peer hospitals (outer setting); integration of AI software into existing hospital systems (inner setting); and involvement of clinicians (implementation process).
CONCLUSIONS The study findings contributed to the ongoing exploration of AI integration in healthcare from the perspective of China, emphasising the need for a comprehensive approach considering both innovation-specific factors and the broader organisational and contextual dynamics. As China and other developing countries continue to advance in adopting AI technologies, the derived insights could further inform healthcare practitioners, industry stakeholders and policy-makers, guiding policies and practices that promote the successful implementation of imaging-based diagnostic AI-assisted decision-making software in healthcare for optimal patient care.
Affiliation(s)
- Xiwen Liao
- Peking University First Hospital, Beijing, China
- Clinical Research Institute, Institute of Advanced Clinical Medicine, Peking University, Beijing, China
- Chen Yao
- Peking University First Hospital, Beijing, China
- Clinical Research Institute, Institute of Advanced Clinical Medicine, Peking University, Beijing, China
- Feifei Jin
- Trauma Medicine Center, Peking University People's Hospital, Beijing, China
- Key Laboratory of Trauma treatment and Neural Regeneration, Peking University, Ministry of Education, Beijing, China
- Jun Zhang
- MSD R&D (China) Co., Ltd, Beijing, China
- Larry Liu
- Merck & Co Inc, Rahway, New Jersey, USA
- Weill Cornell Medical College, New York City, New York, USA
2
Barnes AJ, Zhang Y, Valenzuela A. AI and culture: Culturally dependent responses to AI systems. Curr Opin Psychol 2024; 58:101838. PMID: 39002473; DOI: 10.1016/j.copsyc.2024.101838.
Abstract
This article synthesizes recent research on how cultural identity can determine responses to artificial intelligence. National differences in AI adoption imply that culturally driven psychological differences may offer a more nuanced understanding and inform interventions. Our review suggests that cultural identity shapes how individuals include AI in constructing the self in relation to others and determines the effect of AI on key decision-making processes. Individualists may be more prone to view AI as external to the self and to interpret AI features as infringing upon their uniqueness, autonomy, and privacy. In contrast, collectivists may be more prone to view AI as an extension of the self and to interpret AI features as facilitating conformity to consensus, responsiveness to their environment, and privacy protection.
Affiliation(s)
- Aaron J Barnes
- University of Louisville, 110 W Brandeis Ave., Louisville, KY 40208, USA.
- Ana Valenzuela
- ESADE-Ramon Llul, Barcelona, Spain; Baruch College, City University of New York, USA
3
Sobieski M, Grata-Borkowska U, Bujnowska-Fedak MM. Implementing an Early Detection Program for Autism Spectrum Disorders in the Polish Primary Healthcare Setting-Possible Obstacles and Experiences from Online ASD Screening. Brain Sci 2024; 14:388. PMID: 38672037; PMCID: PMC11047999; DOI: 10.3390/brainsci14040388.
Abstract
A screening questionnaire for autism symptoms is not yet available in Poland, and there are no recommendations regarding screening for developmental disorders in Polish primary healthcare. The aim of this study was to assess the opinions of parents and physicians on the legitimacy and necessity of screening for autism spectrum disorders (ASD), to identify potential barriers to the implementation of a screening program, and to evaluate and present the process of online ASD screening, which was part of the validation program for the Polish version of one of the screening tools. This study involved 418 parents whose children were screened online and 95 primary care physicians who expressed their opinions in prepared surveys. The results indicate that both parents and doctors perceive the need to screen children for ASD in the general population, without a clear preference as to the screening method (online or in person). Moreover, respondents consider online screening a satisfactory diagnostic method. Online screening may therefore help to at least partially overcome the obstacles indicated by participants, including systemic difficulties such as time constraints, the lack of specialists experienced in developmental disorders and the organisational difficulties of healthcare systems.
Affiliation(s)
- Mateusz Sobieski
- Department of Family Medicine, Wroclaw Medical University, Syrokomli 1, 51-141 Wroclaw, Poland; (U.G.-B.); (M.M.B.-F.)
4
Kim K. Maximizers' Reactance to Algorithm-Recommended Options: The Moderating Role of Autotelic vs. Instrumental Choices. Behav Sci (Basel) 2023; 13:938. PMID: 37998684; PMCID: PMC10669481; DOI: 10.3390/bs13110938.
Abstract
The previous literature has provided mixed findings regarding whether consumers appreciate or are opposed to algorithms. The primary goal of this paper is to address these inconsistencies by identifying the maximizing tendency as a critical moderating variable. In Study 1, it was found that maximizers, individuals who strive for the best possible outcomes, exhibit greater reactance toward algorithm-recommended choices than satisficers, those who are satisfied with a good-enough option. This increased reactance also resulted in decreased algorithm adoption intention. Study 2 replicated and extended the findings from Study 1 by identifying the moderating role of choice goals. Maximizers are more likely to experience reactance to algorithm-recommended options when the act of choosing itself is intrinsically motivating and meaningful (i.e., autotelic choices) compared to when the decision is merely a means to an end (i.e., instrumental choices). The results of this research contribute to a nuanced understanding of how consumers with different decision-making styles navigate the landscape of choice in the digital age. Furthermore, it offers practical insights for firms that utilize algorithmic recommendations in their businesses.
Affiliation(s)
- Kaeun Kim
- Department of Business Administration, Dong-A University, Busan 49236, Republic of Korea
5
De Freitas J, Agarwal S, Schmitt B, Haslam N. Psychological factors underlying attitudes toward AI tools. Nat Hum Behav 2023; 7:1845-1854. PMID: 37985913; DOI: 10.1038/s41562-023-01734-2.
Abstract
What are the psychological factors driving attitudes toward artificial intelligence (AI) tools, and how can resistance to AI systems be overcome when they are beneficial? Here we first organize the main sources of resistance into five main categories: opacity, emotionlessness, rigidity, autonomy and group membership. We relate each of these barriers to fundamental aspects of cognition, then cover empirical studies providing correlational or causal evidence for how the barrier influences attitudes toward AI tools. Second, we separate each of the five barriers into AI-related and user-related factors, which is of practical relevance in developing interventions towards the adoption of beneficial AI tools. Third, we highlight potential risks arising from these well-intentioned interventions. Fourth, we explain how the current Perspective applies to various stakeholders, including how to approach interventions that carry known risks, and point to outstanding questions for future work.
Affiliation(s)
- Stuti Agarwal
- Marketing Unit, Harvard Business School, Boston, MA, USA
- Bernd Schmitt
- Marketing Division, Columbia Business School, New York, NY, USA
- Nick Haslam
- School of Psychological Sciences, University of Melbourne, Parkville, Victoria, Australia
6
Liefooghe B, Min E, Aarts H. The effects of social presence on cooperative trust with algorithms. Sci Rep 2023; 13:17463. PMID: 37838816; PMCID: PMC10576745; DOI: 10.1038/s41598-023-44354-6.
Abstract
Algorithms support many processes in modern society. Research using trust games frequently reports that people are less inclined to cooperate when they believe they are playing against an algorithm. Trust is, however, malleable by contextual factors, and social presence can increase the willingness to collaborate. We investigated whether situating cooperation with an algorithm in the presence of another person increases cooperative trust. Three groups of participants played a trust game against a pre-programmed algorithm in an online web-hosted experiment. The first group was told they played against another person who was present online. The second group was told they played against an algorithm. The third group was told they played against an algorithm while another person was present online. More cooperative responses were observed in the first group than in the second group, a difference that replicates previous findings. In addition, cooperative trust dropped more over the course of the trust game when participants interacted with an algorithm in the absence of another person compared with the other two groups. This latter finding suggests that social presence can mitigate distrust in interacting with an algorithm. We discuss the cognitive mechanisms that may mediate this effect.
Affiliation(s)
- Ebelien Min
- Utrecht University, Utrecht, The Netherlands
- Henk Aarts
- Utrecht University, Utrecht, The Netherlands
7
Sun K, Zheng X, Liu W. Increasing clinical medical service satisfaction: An investigation into the impacts of physicians' use of clinical decision-making support AI on patients' service satisfaction. Int J Med Inform 2023; 176:105107. PMID: 37257235; DOI: 10.1016/j.ijmedinf.2023.105107.
Abstract
BACKGROUND The medical industry is one of the key industries for the application of artificial intelligence (AI). Although it is believed that combining clinical decision support systems (CDSS) with physicians could improve medical service, many concerns remain about the use of CDSS. Against this background, few studies have examined whether patients' satisfaction with a medical service differs when a physician makes decisions independently versus with AI assistance. METHODS This study uses service fairness theory as a theoretical lens and employs three vignette experiments to address this research gap. A total of 740 subjects were recruited across the three experiments. Group comparison methods and structural equation modelling were used to test the hypotheses. RESULTS The experimental results reveal that: (1) physicians using AI can reduce patients' service satisfaction (Mdifference = 0.404, p = 0.004); (2) the negative relationship between AI usage and service satisfaction is partially mediated by distributive fairness and procedural fairness; (3) physicians actively informing their patients about the use of AI can help mitigate the reduction in service satisfaction (Mdifference = 0.400, p = 0.003) and in the three types of fairness (distributive: Mdifference = 0.307, p = 0.042; procedural: Mdifference = 0.483, p < 0.001; interactional: Mdifference = 0.253, p = 0.027). CONCLUSION This study investigates the effect of physicians using decision-making support AI on their patients' service satisfaction. The results contribute to the existing literature on AI and fairness theory and help formulate practical suggestions for medical staff and AI development companies.
Affiliation(s)
- Kai Sun
- School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan, China.
- Xiangwei Zheng
- School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Weilong Liu
- School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan, China
8
Robertson C, Woods A, Bergstrand K, Findley J, Balser C, Slepian MJ. Diverse patients' attitudes towards Artificial Intelligence (AI) in diagnosis. PLOS Digit Health 2023; 2:e0000237. PMID: 37205713; DOI: 10.1371/journal.pdig.0000237.
Abstract
Artificial intelligence (AI) has the potential to improve diagnostic accuracy. Yet people are often reluctant to trust automated systems, and some patient populations may be particularly distrusting. We sought to determine how diverse patient populations feel about the use of AI diagnostic tools, and whether framing and informing the choice affects uptake. To construct and pretest our materials, we conducted structured interviews with a diverse set of actual patients. We then conducted a pre-registered (osf.io/9y26x), randomized, blinded survey experiment in factorial design. A survey firm provided n = 2675 responses, oversampling minoritized populations. Clinical vignettes were randomly manipulated in eight variables with two levels each: disease severity (leukemia versus sleep apnea), whether AI is proven more accurate than human specialists, whether the AI clinic is personalized to the patient through listening and/or tailoring, whether the AI clinic avoids racial and/or financial biases, whether the Primary Care Physician (PCP) promises to explain and incorporate the advice, and whether the PCP nudges the patient towards AI as the established, recommended, and easy choice. Our main outcome measure was selection of AI clinic or human physician specialist clinic (binary, "AI uptake"). We found that with weighting representative to the U.S. population, respondents were almost evenly split (52.9% chose human doctor and 47.1% chose AI clinic). In unweighted experimental contrasts of respondents who met pre-registered criteria for engagement, a PCP's explanation that AI has proven superior accuracy increased uptake (OR = 1.48, CI 1.24-1.77, p < .001), as did a PCP's nudge towards AI as the established choice (OR = 1.25, CI: 1.05-1.50, p = .013), as did reassurance that the AI clinic had trained counselors to listen to the patient's unique perspectives (OR = 1.27, CI: 1.07-1.52, p = .008). 
Disease severity (leukemia versus sleep apnea) and the other manipulations did not affect AI uptake significantly. Compared to White respondents, Black respondents selected AI less often (OR = .73, CI: .55-.96, p = .023) and Native Americans selected it more often (OR: 1.37, CI: 1.01-1.87, p = .041). Older respondents were less likely to choose AI (OR: .99, CI: .987-.999, p = .03), as were those who identified as politically conservative (OR: .65, CI: .52-.81, p < .001) or viewed religion as important (OR: .64, CI: .52-.77, p < .001). For each unit increase in education, the odds of selecting an AI provider were 1.10 times greater (OR: 1.10, CI: 1.03-1.18, p = .004). While many patients appear resistant to the use of AI, accuracy information, nudges and a listening patient experience may help increase acceptance. To ensure that the benefits of AI are secured in clinical practice, future research on the best methods of physician incorporation and patient decision making is required.
Affiliation(s)
- Christopher Robertson
- University of Arizona, Tucson, Arizona, United States of America
- Boston University, Boston, Massachusetts, United States of America
- Andrew Woods
- University of Arizona, Tucson, Arizona, United States of America
- Kelly Bergstrand
- University of Texas at Arlington, Arlington, Texas, United States of America
- Jess Findley
- University of Arizona, Tucson, Arizona, United States of America
- Cayley Balser
- University of Arizona, Tucson, Arizona, United States of America
- Marvin J Slepian
- University of Arizona, Tucson, Arizona, United States of America
9
Gain-loss separability in human- but not computer-based changes of mind. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2023.107712.
10
Li J, Huang J, Liu J, Zheng T. Human-AI cooperation: Modes and their effects on attitudes. Telematics and Informatics 2022. DOI: 10.1016/j.tele.2022.101862.
11
The psychological and ethological antecedents of human consent to techno-empowerment of autonomous office assistants. AI & Society 2022. DOI: 10.1007/s00146-022-01534-8.
12
Agudo U, Arrese M, Liberal KG, Matute H. Assessing Emotion and Sensitivity of AI Artwork. Front Psychol 2022; 13:879088. PMID: 35478752; PMCID: PMC9037325; DOI: 10.3389/fpsyg.2022.879088.
Abstract
Artificial Intelligence (AI) is currently present in areas that were, until recently, reserved for humans, such as, for instance, art. However, to the best of our knowledge, there is not much empirical evidence on how people perceive the skills of AI in these domains. In Experiment 1, participants were exposed to AI-generated audiovisual artwork and were asked to evaluate it. We told half of the participants that the artist was a human and we confessed to the other half that it was an AI. Although all of them were exposed to the same artwork, the results showed that people attributed lower sensitivity, lower ability to evoke their emotions, and lower quality to the artwork when they thought the artist was AI as compared to when they believed the artist was human. Experiment 2 reproduced these results and extended them to a slightly different setting, a different piece of (exclusively auditory) artwork, and added some additional measures. The results show that the evaluation of art seems to be modulated, at least in part, by prior stereotypes and biases about the creative skills of AI. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/3r7xg/. Experiment 2 was preregistered at AsPredicted: https://aspredicted.org/fh2u2.pdf.
Affiliation(s)
- Ujué Agudo
- Departamento de Psicología, Universidad de Deusto, Bilbao, Spain; Laboratorio de intervención, Bikolabs/Biko, Pamplona, Spain
- Miren Arrese
- Laboratorio de intervención, Bikolabs/Biko, Pamplona, Spain
- Helena Matute
- Departamento de Psicología, Universidad de Deusto, Bilbao, Spain
13
Jussupow E, Spohrer K, Heinzl A. Radiologists' Usage of Diagnostic AI Systems. Business & Information Systems Engineering 2022. DOI: 10.1007/s12599-022-00750-2.
Abstract
While diagnostic AI systems are implemented in medical practice, it is still unclear how physicians embed them in diagnostic decision making. This study examines how radiologists come to use diagnostic AI systems in different ways and what role AI assessments play in this process if they confirm or disconfirm radiologists' own judgment. The study draws on rich qualitative data from a revelatory case study of an AI system for stroke diagnosis at a university hospital to elaborate how three sensemaking processes revolve around confirming and disconfirming AI assessments. Through context-specific sensedemanding, sensegiving, and sensebreaking, radiologists develop distinct usage patterns of AI systems. The study reveals that diagnostic self-efficacy influences which of the three sensemaking processes radiologists engage in. In deriving six propositions, the account of sensemaking and usage of diagnostic AI systems in medical practice paves the way for future research.
14
Liang G, Sloane JF, Donkin C, Newell BR. Adapting to the algorithm: how accuracy comparisons promote the use of a decision aid. Cogn Res Princ Implic 2022; 7:14. PMID: 35133521; PMCID: PMC8825899; DOI: 10.1186/s41235-022-00364-y.
Abstract
In three experiments, we sought to understand when and why people use an algorithm decision aid. Distinct from recent approaches, we explicitly enumerate the algorithm’s accuracy while also providing summary feedback and training that allowed participants to assess their own skills. Our results highlight that such direct performance comparisons between the algorithm and the individual encourages a strategy of selective reliance on the decision aid; individuals ignored the algorithm when the task was easier and relied on the algorithm when the task was harder. Our systematic investigation of summary feedback, training experience, and strategy hint manipulations shows that further opportunities to learn about the algorithm encourage not only increased reliance on the algorithm but also engagement in experimentation and verification of its recommendations. Together, our findings emphasize the decision-maker’s capacity to learn about the algorithm providing insights for how we can improve the use of decision aids.
Affiliation(s)
- Garston Liang
- School of Psychology, The University of New South Wales, Sydney, Kensington, NSW, 2052, Australia.
- Jennifer F Sloane
- School of Psychology, The University of New South Wales, Sydney, Kensington, NSW, 2052, Australia
- Christopher Donkin
- School of Psychology, The University of New South Wales, Sydney, Kensington, NSW, 2052, Australia
- Ben R Newell
- School of Psychology, The University of New South Wales, Sydney, Kensington, NSW, 2052, Australia
15
Gaczek P, Leszczyński G, Zieliński M. Is AI Augmenting or Substituting Humans? International Journal of Technology and Human Interaction 2022. DOI: 10.4018/ijthi.293193.
Abstract
In this paper, the authors focus on Artificial Intelligence as a tangible technology that is designed to sense, comprehend, act, and learn. There are two manifestations of AI in the medical service: an algorithm that analyzes and interprets the test result and a virtual assistant that communicates the result to the patient. The aim of this paper is to consider how AI can substitute for a doctor in measuring human health and how interaction with a virtual assistant affects one's visual attention processes. Theoretically, the article draws on the following research strands: Human-Computer Interaction, technology in services, implementation of AI in the medical sector, and behavioral economics. An eye-tracking experimental study demonstrates that the perception of the medical diagnosis does not differ across experimental groups (human vs. AI). However, participants exposed to the AI-based assistant focused more on the button allowing them to contact a real doctor.
Affiliation(s)
- Piotr Gaczek
- Poznan University of Economics and Business, Poland
16
Kane PB, Broomell SB. Investigating lay evaluations of models. Thinking & Reasoning 2021. DOI: 10.1080/13546783.2021.1999327.
Affiliation(s)
- Patrick Bodilly Kane
- Biomedical Ethics Unit in the School of Social Studies of Medicine, McGill University, Montreal, Quebec, Canada
- Stephen B. Broomell
- Social and Decision Sciences, Carnegie Mellon University, Pittsburgh, PA, USA
17
Rebitschek FG, Gigerenzer G, Wagner GG. People underestimate the errors made by algorithms for credit scoring and recidivism prediction but accept even fewer errors. Sci Rep 2021; 11:20171. PMID: 34635779; PMCID: PMC8505498; DOI: 10.1038/s41598-021-99802-y.
Abstract
This study provides the first representative analysis of error estimations and willingness to accept errors in a Western country (Germany) with regard to algorithmic decision-making systems (ADM). We examine people's expectations about the accuracy of algorithms that predict credit default, recidivism of an offender, suitability of a job applicant, and health behavior. We also ask whether expectations about algorithm errors vary between these domains and how they differ from expectations about errors made by human experts. In a nationwide representative study (N = 3086), we find that most respondents underestimated the actual errors made by algorithms and are willing to accept even fewer errors than estimated. Error estimates and error acceptance did not differ consistently for predictions made by algorithms or human experts, but people's living conditions (e.g. unemployment, household income) affected domain-specific acceptance (job suitability, credit defaulting) of misses and false alarms. We conclude that people have unwarranted expectations about the performance of ADM systems and evaluate errors in terms of potential personal consequences. Given the general public's low willingness to accept errors, we further conclude that acceptance of ADM appears to be conditional on strict accuracy requirements.
Affiliation(s)
- Felix G Rebitschek
- Harding Center for Risk Literacy, Faculty of Health Sciences Brandenburg, University of Potsdam, Potsdam, Germany.
- Max Planck Institute for Human Development, Berlin, Germany.
- Gerd Gigerenzer
- Harding Center for Risk Literacy, Faculty of Health Sciences Brandenburg, University of Potsdam, Potsdam, Germany
- Max Planck Institute for Human Development, Berlin, Germany
- Gert G Wagner
- Harding Center for Risk Literacy, Faculty of Health Sciences Brandenburg, University of Potsdam, Potsdam, Germany
- Max Planck Institute for Human Development, Berlin, Germany
- German Socio-Economic Panel Study (SOEP), Berlin, Germany
18
Langer M, Landers RN. The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. Computers in Human Behavior 2021. DOI: 10.1016/j.chb.2021.106878.
19
Soellner M, Koenigstorfer J. Compliance with medical recommendations depending on the use of artificial intelligence as a diagnostic method. BMC Med Inform Decis Mak 2021; 21:236. PMID: 34362359; PMCID: PMC8344186; DOI: 10.1186/s12911-021-01596-6.
Abstract
Background Advanced analytics, such as artificial intelligence (AI), are increasingly gaining relevance in medicine. However, patients' responses to the involvement of AI in the care process remain largely unclear. The study aims to explore whether individuals were more likely to follow a recommendation when a physician used AI in the diagnostic process for a highly (vs. less) severe disease, compared to when the physician did not use AI or when AI fully replaced the physician. Methods Participants from the USA (n = 452) were randomly assigned to a hypothetical scenario in which they imagined receiving a treatment recommendation after a skin cancer diagnosis (high vs. low severity) from a physician, a physician using AI, or an automated AI tool. They then indicated their intention to follow the recommendation. Regression analyses were used to test hypotheses. Beta coefficients (β) describe the nature and strength of relationships between predictors and outcome variables; confidence intervals [CI] excluding zero indicate significant mediation effects. Results The total effects reveal the inferiority of automated AI (β = .47, p = .001 vs. physician; β = .49, p = .001 vs. physician using AI). Two pathways increase intention to follow the recommendation. When a physician performs the assessment (vs. automated AI), the perception that the physician is real and present (a concept called social presence) is high, which increases intention to follow the recommendation (β = .22, 95% CI [.09; .39]). When AI performs the assessment (vs. physician only), perceived innovativeness of the method is high, which increases intention to follow the recommendation (β = .15, 95% CI [−.28; −.04]). When physicians use AI, social presence does not decrease and perceived innovativeness increases.
Conclusion Pairing AI with a physician in medical diagnosis and treatment in a hypothetical scenario using topical therapy and oral medication as treatment recommendations leads to a higher intention to follow the recommendation than AI on its own. The findings might help develop practice guidelines for cases where the benefits of AI involvement outweigh the risks, such as using AI in pathology and radiology, to enable augmented human intelligence and inform physicians about diagnoses and treatments. Supplementary Information The online version contains supplementary material available at 10.1186/s12911-021-01596-6.
Affiliation(s)
- Michaela Soellner: Chair of Sport and Health Management, Technical University of Munich, Campus D - Uptown Munich, Georg-Brauchle-Ring 60/62, 80992, Munich, Germany
- Joerg Koenigstorfer: Chair of Sport and Health Management, Technical University of Munich, Campus D - Uptown Munich, Georg-Brauchle-Ring 60/62, 80992, Munich, Germany
20. Pezzo MV, Nash BED, Vieux P, Foster-Grammer HW. Effect of Having, but Not Consulting, a Computerized Diagnostic Aid. Med Decis Making 2021; 42:94-104. PMID: 33966519; DOI: 10.1177/0272989x211011160.
Abstract
Previous research has described physicians' reluctance to use computerized diagnostic aids (CDAs) but has never experimentally examined the effects of not consulting an aid that was readily available. Experiment 1. Participants read about a diagnosis made either by a physician or an auto mechanic (to control for perceived expertise). Half read that a CDA was available but never actually consulted; no mention of a CDA was made for the remaining half. For the physician, failure to consult the CDA had no significant effect on competence ratings for either the positive or negative outcome. For the auto mechanic, failure to consult the CDA actually increased competence ratings following a negative but not a positive outcome. Negligence judgments were greater for the mechanic than for the physician overall. Experiment 2. Using only a negative outcome, we included 2 different reasons for not consulting the aid and provided accuracy information highlighting the superiority of the CDA over the physician. In neither condition was the physician rated lower than when no aid was mentioned. Ratings were lower when the physician did not trust the CDA and, surprisingly, higher when the physician believed he or she already knew what the CDA would say. Finally, consistent with our previous research, ratings were also high when the physician consulted and then followed the advice of a CDA and low when the CDA was consulted but ignored. Individual differences in numeracy did not qualify these results. Implications for the literature on algorithm aversion and clinical practice are discussed.
Affiliation(s)
- Mark V Pezzo: University of South Florida, St. Petersburg, FL, USA
- Pierre Vieux: University of South Florida, St. Petersburg, FL, USA
21. Yin J, Ngiam KY, Teo HH. Role of Artificial Intelligence Applications in Real-Life Clinical Practice: Systematic Review. J Med Internet Res 2021; 23:e25759. PMID: 33885365; PMCID: PMC8103304; DOI: 10.2196/25759.
Abstract
BACKGROUND Artificial intelligence (AI) applications are growing at an unprecedented pace in health care, including disease diagnosis, triage or screening, risk analysis, surgical operations, and so forth. Despite a great deal of research in the development and validation of health care AI, only a few applications have actually been implemented at the frontlines of clinical practice. OBJECTIVE The objective of this study was to systematically review AI applications that have been implemented in real-life clinical practice. METHODS We conducted a literature search in PubMed, Embase, Cochrane Central, and CINAHL to identify relevant articles published between January 2010 and May 2020. We also hand searched premier computer science journals and conferences as well as registered clinical trials. Studies were included if they reported AI applications that had been implemented in real-world clinical settings. RESULTS We identified 51 relevant studies that reported the implementation and evaluation of AI applications in clinical practice, of which 13 adopted a randomized controlled trial design and 8 adopted an experimental design. The AI applications targeted various clinical tasks, such as screening or triage (n=16), disease diagnosis (n=16), risk analysis (n=14), and treatment (n=7). The most commonly addressed diseases and conditions were sepsis (n=6), breast cancer (n=5), diabetic retinopathy (n=4), and polyp and adenoma (n=4). Regarding the evaluation outcomes, we found that 26 studies examined the performance of AI applications in clinical settings, 33 studies examined the effect of AI applications on clinician outcomes, 14 studies examined the effect on patient outcomes, and one study examined the economic impact associated with AI implementation. CONCLUSIONS This review indicates that research on the clinical implementation of AI applications is still at an early stage despite its great potential. More research is needed to assess the benefits and challenges associated with clinical AI applications through more rigorous methodology.
Affiliation(s)
- Jiamin Yin: Department of Information Systems and Analytics, School of Computing, National University of Singapore, Singapore, Singapore
- Kee Yuan Ngiam: Department of Surgery, National University Hospital, Singapore, Singapore
- Hock Hai Teo: Department of Information Systems and Analytics, School of Computing, National University of Singapore, Singapore, Singapore
22. Fenneman A, Sickmann J, Pitz T, Sanfey AG. Two distinct and separable processes underlie individual differences in algorithm adherence: Differences in predictions and differences in trust thresholds. PLoS One 2021; 16:e0247084. PMID: 33630894; PMCID: PMC7906384; DOI: 10.1371/journal.pone.0247084.
Abstract
Algorithms play an increasingly ubiquitous and vitally important role in modern society. However, recent findings suggest substantial individual variability in the degree to which people make use of such algorithmic systems, with some users preferring the advice of algorithms whereas others selectively avoid algorithmic systems. The mechanisms that give rise to these individual differences are currently poorly understood. Previous studies have suggested two possible effects that may underlie this variability: users may differ in their predictions of the efficacy of algorithmic systems, and/or in the relative thresholds they hold to place trust in these systems. Based on a novel judgment task with a large number of within-subject repetitions, here we report evidence that both mechanisms exert an effect on participants' degree of algorithm adherence, but, importantly, that these two mechanisms are independent of each other. Furthermore, participants are more likely to place their trust in an algorithmically managed fund if their first exposure to the task was with an algorithmic manager. These findings open the door for future research into the mechanisms driving individual differences in algorithm adherence, and allow for novel interventions to increase adherence to algorithms.
Affiliation(s)
- Achiel Fenneman: Institute for Management Research, Radboud University, Nijmegen, Netherlands; Faculty of Society and Economics, Rhein-Waal University of Applied Sciences, Kleve, Germany
- Joern Sickmann: Faculty of Society and Economics, Rhein-Waal University of Applied Sciences, Kleve, Germany
- Thomas Pitz: Faculty of Society and Economics, Rhein-Waal University of Applied Sciences, Kleve, Germany
- Alan G. Sanfey: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
23. Findley J, Woods A, Robertson C, Slepian M. Keeping the Patient at the Center of Machine Learning in Healthcare. Am J Bioeth 2020; 20:54-56. PMID: 33103979; DOI: 10.1080/15265161.2020.1820100.
24. Hamilton JG, Genoff Garzon M, Westerman JS, Shuk E, Hay JL, Walters C, Elkin E, Bertelsen C, Cho J, Daly B, Gucalp A, Seidman AD, Zauderer MG, Epstein AS, Kris MG. "A Tool, Not a Crutch": Patient Perspectives About IBM Watson for Oncology Trained by Memorial Sloan Kettering. J Oncol Pract 2019; 15:e277-e288. PMID: 30689492; PMCID: PMC6494242; DOI: 10.1200/jop.18.00417.
Abstract
PURPOSE IBM Watson for Oncology trained by Memorial Sloan Kettering (WFO) is a clinical decision support tool designed to assist physicians in choosing therapies for patients with cancer. Although substantial technical and clinical expertise has guided the development of WFO, patients' perspectives of this technology have not been examined. To facilitate the optimal delivery and implementation of this tool, we solicited patients' perceptions and preferences about WFO. METHODS We conducted nine focus groups with 46 patients with breast, lung, or colorectal cancer with various treatment experiences: neoadjuvant/adjuvant chemotherapy, chemotherapy for metastatic disease, or systemic therapy through a clinical trial. In-depth qualitative and quantitative data were collected and analyzed to describe patients' attitudes and perspectives concerning WFO and how it may be used in clinical care. RESULTS Analysis of the qualitative data identified three main themes: patient acceptance of WFO, physician competence and the physician-patient relationship, and practical and logistic aspects of WFO. Overall, participant feedback suggested high levels of patient interest, perceived value, and acceptance of WFO, as long as it was used as a supplementary tool to inform their physicians' decision making. Participants also described important concerns, including the need for strict processes to guarantee the integrity and completeness of the data presented and the possibility of physician overreliance on WFO. CONCLUSION Participants generally reacted favorably to the prospect of WFO being integrated into the cancer treatment decision-making process, but with caveats regarding the comprehensiveness and accuracy of the data powering the system and the potential for giving WFO excessive emphasis in the decision-making process. Addressing patients' perspectives will be critical to ensuring the smooth integration of WFO into cancer care.
Affiliation(s)
- Jada G. Hamilton, Margaux Genoff Garzon, Joy S. Westerman, Elyse Shuk, Jennifer L. Hay, Chasity Walters, Elena Elkin, Corinna Bertelsen, Jessica Cho, Bobby Daly, Ayca Gucalp, Andrew D. Seidman, Marjorie G. Zauderer, Andrew S. Epstein, Mark G. Kris: Memorial Sloan Kettering Cancer Center and Weill Cornell Medical College, New York, NY
25. den Bakker CM, Schaafsma FG, Huirne JAF, Consten ECJ, Stockmann HBAC, Rodenburg CJ, de Klerk GJ, Bonjer HJ, Anema JR. Cancer survivors' needs during various treatment phases after multimodal treatment for colon cancer - is there a role for eHealth? BMC Cancer 2018; 18:1207. PMID: 30514325; PMCID: PMC6278104; DOI: 10.1186/s12885-018-5105-z.
Abstract
BACKGROUND More colon cancer patients are expected to fully recover after treatment due to earlier detection of cancer and improvements in general health care and cancer care. The objective of this study was to gather participants' experiences with full recovery in the different phases of multimodal treatment and to identify their needs during these phases. The second aim was to propose and evaluate possible solutions for unmet needs through the introduction of eHealth. METHODS A qualitative study based on two focus group discussions with 22 participants was performed. The validated Supportive Care Needs Survey and the Cancer Treatment Survey were used to form the topic list. The verbatim transcripts were analyzed with Atlas.ti (7th version), comprising open, axial and selective coding. The guidelines of the consolidated criteria for reporting qualitative research (COREQ) were used. RESULTS Experiences with the treatment for colon cancer were in general positive. The most important unmet needs were 'receiving information about the total duration of side effects', 'receiving information about the minimum amount of chemotherapy needed for overall survival' and 'receiving a longer aftercare period (with additional attention to psychological guidance)'. More online provision of information, a chat function with the oncological nurse specialist via a website, and access to scientific articles regarding the optimal dose of chemotherapy were often mentioned as worthwhile additions to current colon cancer care. CONCLUSIONS Many of the unmet needs of colon cancer survivors occur during the adjuvant treatment phase and thereafter. To further optimize recovery and cancer care, more focus on these unmet needs is necessary: identifying patients' problems and side effects during chemotherapy, and identifying their supportive care needs after finishing chemotherapy. For some of these needs, eHealth in the form of blended care may be a possible solution.
Affiliation(s)
- C M den Bakker: Department of Occupational and Public Health, VU University Medical Center, Amsterdam Public Health Institute, Van der Boechorststraat 7, 1081 BT, Amsterdam, The Netherlands; Department of Surgery, VU University Medical Center, Amsterdam, The Netherlands
- F G Schaafsma: Department of Occupational and Public Health, VU University Medical Center, Amsterdam Public Health Institute, Van der Boechorststraat 7, 1081 BT, Amsterdam, The Netherlands
- J A F Huirne: Department of Gynecology, VU University Medical Center, Amsterdam, The Netherlands
- E C J Consten: Department of Surgery, Meander Medical Center, Amersfoort, The Netherlands
- C J Rodenburg: Department of Medical Oncology, Meander Medical Center, Amersfoort, The Netherlands
- G J de Klerk: Department of Medical Oncology, Spaarne Gasthuis, Hoofddorp, The Netherlands
- H J Bonjer: Department of Surgery, VU University Medical Center, Amsterdam, The Netherlands
- J R Anema: Department of Occupational and Public Health, VU University Medical Center, Amsterdam Public Health Institute, Van der Boechorststraat 7, 1081 BT, Amsterdam, The Netherlands
26. Porat T, Delaney B, Kostopoulou O. The impact of a diagnostic decision support system on the consultation: perceptions of GPs and patients. BMC Med Inform Decis Mak 2017; 17:79. PMID: 28576145; PMCID: PMC5457602; DOI: 10.1186/s12911-017-0477-6.
Abstract
Background Clinical decision support systems (DSS) aimed at supporting diagnosis are not widely used. This is mainly due to usability issues and lack of integration into clinical work and the electronic health record (EHR). In this study we examined the usability and acceptability of a diagnostic DSS prototype integrated with the EHR, in comparison with the EHR alone. Methods Thirty-four General Practitioners (GPs) consulted with 6 standardised patients (SPs) using only their EHR system (baseline session); on another day, they consulted with 6 different SPs, matched for difficulty, using the EHR with the integrated DSS prototype (DSS session). GPs were interviewed twice (at the end of each session) and completed the Post-Study System Usability Questionnaire at the end of the DSS session. The SPs completed the Consultation Satisfaction Questionnaire after each consultation. Results The majority of GPs (74%) found the DSS useful: it helped them consider more diagnoses and ask more targeted questions. They considered three user interface features to be the most useful: (1) integration with the EHR; (2) suggested diagnoses to consider at the start of the consultation; and (3) the checklist of symptoms and signs in relation to each suggested diagnosis. There were also criticisms: half of the GPs felt that the DSS changed their consultation style, by requiring them to code symptoms and signs while interacting with the patient. SPs sometimes commented that GPs were looking at their computer more than at them; this comment was made more often in the DSS session (15%) than in the baseline session (3%). Nevertheless, SP ratings on the satisfaction questionnaire did not differ between the two sessions. Conclusions To use the DSS effectively, GPs would need to adapt their consultation style, so that they code more information during rather than at the end of the consultation. This presents a potential barrier to adoption. Training GPs to use the system in a patient-centred way, as well as improvement of the DSS interface itself, could facilitate coding. To enhance patient acceptability, patients should be informed about the potential of the DSS to improve diagnostic accuracy. Electronic supplementary material The online version of this article (doi:10.1186/s12911-017-0477-6) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Talya Porat: Department of Primary Care and Public Health Sciences, King's College London, 3rd floor Addison House, Guy's Campus, London, SE1 3QD, UK
- Brendan Delaney: Department of Surgery and Cancer, Imperial College London, London, UK
- Olga Kostopoulou: Department of Surgery and Cancer, Imperial College London, London, UK
27. Meyer AND, Singh H. Calibrating how doctors think and seek information to minimise errors in diagnosis. BMJ Qual Saf 2016; 26:436-438. PMID: 27672123; DOI: 10.1136/bmjqs-2016-006071.
Affiliation(s)
- Ashley N D Meyer: Houston Veterans Affairs Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey VA Medical Center, Houston, Texas, USA; Section of Health Services Research, Department of Medicine, Baylor College of Medicine, Houston, Texas, USA
- Hardeep Singh: Houston Veterans Affairs Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey VA Medical Center, Houston, Texas, USA; Section of Health Services Research, Department of Medicine, Baylor College of Medicine, Houston, Texas, USA
28. Han PK, Dieckmann NF, Holt C, Gutheil C, Peters E. Factors Affecting Physicians' Intentions to Communicate Personalized Prognostic Information to Cancer Patients at the End of Life: An Experimental Vignette Study. Med Decis Making 2016; 36:703-13. PMID: 26985015; PMCID: PMC4930679; DOI: 10.1177/0272989x16638321.
Abstract
PURPOSE To explore the effects of personalized prognostic information on physicians' intentions to communicate prognosis to cancer patients at the end of life, and to identify factors that moderate these effects. METHODS A factorial experiment was conducted in which 93 family medicine physicians were presented with a hypothetical vignette depicting an end-stage gastric cancer patient seeking prognostic information. Physicians' intentions to communicate prognosis were assessed before and after provision of personalized prognostic information, while emotional distress of the patient and ambiguity (imprecision) of the prognostic estimate were varied between subjects. General linear models were used to test the effects of personalized prognostic information, patient distress, and ambiguity on prognostic communication intentions, and potential moderating effects of 1) perceived patient distress, 2) perceived credibility of prognostic models, 3) physician numeracy (objective and subjective), and 4) physician aversion to risk and ambiguity. RESULTS Provision of personalized prognostic information increased prognostic communication intentions (P < 0.001, η² = 0.38), although experimentally manipulated patient distress and prognostic ambiguity had no effects. Greater change in communication intentions was positively associated with higher perceived credibility of prognostic models (P = 0.007, η² = 0.10), higher objective numeracy (P = 0.01, η² = 0.09), female sex (P = 0.01, η² = 0.08), and lower perceived patient distress (P = 0.02, η² = 0.07). Intentions to communicate available personalized prognostic information were positively associated with higher perceived credibility of prognostic models (P = 0.02, η² = 0.09), higher subjective numeracy (P = 0.02, η² = 0.08), and lower ambiguity aversion (P = 0.06, η² = 0.04).
CONCLUSIONS Provision of personalized prognostic information increases physicians' prognostic communication intentions to a hypothetical end-stage cancer patient, and situational and physician characteristics moderate this effect. More research is needed to confirm these findings and elucidate the determinants of prognostic communication at the end of life.
Affiliation(s)
- Paul K.J. Han: Center for Outcomes Research and Evaluation, Maine Medical Center, Portland, ME; Tufts University Clinical and Translational Sciences Institute, Boston, MA
- Nathan F. Dieckmann: School of Nursing & School of Medicine, Oregon Health & Science University, Portland, OR; Decision Research, Eugene, OR
- Christina Holt: Department of Family Medicine, Maine Medical Center, Portland, ME
- Caitlin Gutheil: Center for Outcomes Research and Evaluation, Maine Medical Center, Portland, ME
- Ellen Peters: Department of Psychology, Ohio State University, Columbus, OH
29. Yang Q, Zimmerman J, Steinfeld A, Carey L, Antaki JF. Investigating the Heart Pump Implant Decision Process: Opportunities for Decision Support Tools to Help. ACM Transactions on Computer-Human Interaction 2016; 2016:4477-4488. PMID: 27833397; DOI: 10.1145/2858036.2858373.
Abstract
Clinical decision support tools (DSTs) are computational systems that aid healthcare decision-making. While effective in labs, almost all of these systems have failed when moved into clinical practice. Healthcare researchers have speculated that this is most likely due to a lack of user-centered HCI considerations in the design of these systems. This paper describes a field study investigating how clinicians make a heart pump implant decision, with a focus on how best to integrate an intelligent DST into their work process. Our findings reveal a lack of perceived need for and trust of machine intelligence, as well as many barriers to computer use at the point of clinical decision-making. These findings suggest an alternative perspective to the traditional use models, in which clinicians engage with DSTs at the point of making a decision. We identify situations across patients' healthcare trajectories when decision support would help, and we discuss new forms it might take in these situations.
Affiliation(s)
- Qian Yang: School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
- John Zimmerman: School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
- Aaron Steinfeld: School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
- Lisa Carey: School of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- James F Antaki: School of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
30. Attitudes and Behaviours to Antimicrobial Prescribing following Introduction of a Smartphone App. PLoS One 2016; 11:e0154202. PMID: 27111775; PMCID: PMC4844117; DOI: 10.1371/journal.pone.0154202.
Abstract
OBJECTIVES Our hospital replaced the format for delivering portable antimicrobial prescribing guidance from a paper-based pocket guide to a smartphone application (app). We used this opportunity to assess the relationship between its use and the attitudes and behaviours of antimicrobial prescribers. METHODS We used 2 structured cross-sectional questionnaires issued just prior to and 3 months following the launch of the smartphone app. Ordinal Likert scale responses to both frequency-of-use and agreement statements permitted quantitative assessment of the relationship between variables. RESULTS The smartphone app was used more frequently than the pocket guide it replaced (p < 0.01), and its increased use was associated with sentiments that the app was useful, easy to navigate and relevant in content. Users who used the app more frequently were more likely to agree that the app encouraged them to challenge inappropriate prescribing by their colleagues (p = 0.001) and were more aware of the importance of antimicrobial stewardship (p = 0.005). Reduced use of the app was associated with agreement that senior physicians' preferences for antimicrobial prescribing would overrule guideline recommendations regardless (p = 0.0002). CONCLUSIONS Smartphone apps are an effective and acceptable format for delivering guidance on antimicrobial prescribing. Our findings suggest that they may empower users to challenge incorrect prescribing, breaking well-established behaviours, and thus supporting vital stewardship efforts in an era of increased antimicrobial resistance. Future work will need to focus on the direct impact on drug prescriptions as well as identifying barriers to implementing smartphone apps in other clinical settings.
31. Patel R, Green W, Shahzad MW, Larkin C. Use of Mobile Clinical Decision Support Software by Junior Doctors at a UK Teaching Hospital: Identification and Evaluation of Barriers to Engagement. JMIR Mhealth Uhealth 2015; 3:e80. PMID: 26272411; PMCID: PMC4705011; DOI: 10.2196/mhealth.4388.
Abstract
Background Clinical decision support (CDS) tools improve clinical diagnostic decision making and patient safety. The availability of CDS to health care professionals has grown in line with the increased prevalence of apps and smart mobile devices. Despite these benefits, patients may have safety concerns about the use of mobile devices around medical equipment. Objective This research explored the engagement of junior doctors (JDs) with CDS and the perceptions of patients about its use. There were three objectives for this research: (1) to measure the actual usage of CDS tools on mobile devices (mCDS) by JDs, (2) to explore the perceptions of JDs about the drivers and barriers to using mCDS, and (3) to explore the perceptions of patients about the use of mCDS. Methods This study used a mixed-methods approach to study the engagement of JDs with CDS accessed through mobile devices. Usage data were collected on the number of interactions by JDs with mCDS. The perceived drivers and barriers for JDs to using CDS were then explored through interviews. Finally, these findings were contrasted with the perceptions of patients about the use of mCDS by JDs. Results Nine of the 16 JDs made a total of 142 recorded interactions with the mCDS over a 4-month period. Only 27 of the 114 interactions (24%) that could be categorized as on-shift or off-shift occurred on-shift. Eight individual, institutional, and cultural barriers to engagement emerged from interviews with the user group. In contrast to reported cautions and concerns about the impact of clinicians' use of mobile phones on patient health and safety, patients had positive perceptions about the use of mCDS. Conclusions Patients reported positive perceptions toward mCDS. The usage of mCDS to support clinical decision making was considered to be positive as part of everyday clinical practice. The degree of engagement was found to be limited by a number of individual, institutional, and cultural barriers.
The majority of mCDS engagement occurred outside of the workplace. Further research is required to verify these findings and assess their implications for future policy and practice.
Affiliation(s)
- Rakesh Patel
- University of Leicester, Department of Medical & Social Care Education, Leicester, United Kingdom.
32
Abstract
Although much research examines how physicians perceive their patients, here we study how patients perceive physicians. We propose that patients regard their physicians as personally emotionless “empty vessels”: the higher individuals’ need for care, the less they value physicians’ traits related to their personal lives (e.g., self-focused emotions), but the more they value physicians’ traits related to patients (e.g., patient-focused emotions). In an initial study, participants recalled fewer personal facts (e.g., marital status) about physicians who seemed more important to their health. In subsequent experiments, participants with a higher need for care believed physicians have fewer personal emotions. Although higher-need individuals, such as patients in a clinic, perceived their physicians to be personally emotionless, they wanted the clinic to hire physicians who displayed patient-focused emotion. We discuss the implications of perceiving physicians as empty vessels for health care.
33
Lubberding S, van Uden-Kraan CF, Te Velde EA, Cuijpers P, Leemans CR, Verdonck-de Leeuw IM. Improving access to supportive cancer care through an eHealth application: a qualitative needs assessment among cancer survivors. J Clin Nurs 2015; 24:1367-79. [PMID: 25677218 DOI: 10.1111/jocn.12753] [Citation(s) in RCA: 79] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/14/2014] [Indexed: 11/26/2022]
Affiliation(s)
- Sanne Lubberding
- Department of Otolaryngology/Head and Neck Surgery, VU University Medical Center, Amsterdam, The Netherlands
- Cornelia F van Uden-Kraan
- Department of Otolaryngology/Head and Neck Surgery, VU University Medical Center, Amsterdam, The Netherlands
- Department of Clinical Psychology, VU University, Amsterdam, The Netherlands
- Pim Cuijpers
- Department of Clinical Psychology, VU University, Amsterdam, The Netherlands
- C René Leemans
- Department of Otolaryngology/Head and Neck Surgery, VU University Medical Center, Amsterdam, The Netherlands
- Irma M Verdonck-de Leeuw
- Department of Otolaryngology/Head and Neck Surgery, VU University Medical Center, Amsterdam, The Netherlands
- Department of Clinical Psychology, VU University, Amsterdam, The Netherlands
34
Bouaud J, Lamy JB. A 2014 medical informatics perspective on clinical decision support systems: do we hit the ceiling of effectiveness? Yearb Med Inform 2014; 9:163-6. [PMID: 25123737 DOI: 10.15265/iy-2014-0036] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
OBJECTIVE To summarize recent research and propose a selection of best papers published in 2013 in the field of computer-based decision support in health care. METHOD The two section editors performed literature reviews of bibliographic databases, focusing on clinical decision support systems (CDSSs) and computerized provider order entry, to select a list of candidate best papers for peer review by external reviewers. RESULTS The full review process highlighted three papers illustrating current trends in the domain of clinical decision support. The first trend is the development of theoretical approaches for CDSSs, exemplified by a paper proposing the integration of family histories and pedigrees into a CDSS. The second trend is illustrated by well-designed CDSSs that show good theoretical performance and acceptance while failing to show a clinical impact; an example is a paper reporting on scorecards aimed at reducing adverse drug events. The third trend is represented by research that tries to understand the limits of CDSS use, for instance by analyzing interactions between general practitioners, patients, and a CDSS. CONCLUSIONS CDSSs can achieve good theoretical results in terms of sensitivity and specificity, as well as good acceptance, but evaluations often fail to demonstrate a clinical impact. Future research is needed to better understand the causes of this observation and to devise new, effective solutions for CDSS implementation.
35
Hamm RM, Beasley WH, Johnson WJ. A balance beam aid for instruction in clinical diagnostic reasoning. Med Decis Making 2014; 34:854-62. [PMID: 24739532 DOI: 10.1177/0272989x14529623] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
We describe a balance beam aid for instruction in diagnosis (BBAID) and demonstrate its potential use in supplementing the training of medical students to diagnose acute chest pain. We suggest the BBAID helps students understand the process of diagnosis because the impact of tokens (weights and helium balloons) attached to a beam at different distances from the fulcrum is analogous to the impact of evidence on the relative support for 2 diseases. The BBAID presents a list of potential findings and allows students to specify whether each is present, absent, or unknown. It displays the likelihood ratios corresponding to a positive (LR+) or negative (LR-) observation for each symptom, for any pair of diseases. For each specified finding, a token is placed on the beam at a location whose distance from the fulcrum is proportional to the finding's log(LR): a downward force (a weight) if the finding is present and a lifting force (a balloon) if it is absent. Combining the physical torques of multiple tokens is mathematically identical to applying Bayes' theorem to multiple independent findings, so the balance beam is a high-fidelity metaphor. Seven first-year medical students and 3 faculty members consulted the BBAID while diagnosing brief patient case vignettes. Student comments indicated the program is usable, helpful for understanding the usefulness of pertinent positive and negative findings in particular situations, and welcome as a reference or self-test. All students attended to the effect of the tokens on the beam, although some stated they did not use the numerical statistics. Faculty noted the BBAID might be particularly helpful in reminding students of diseases that should not be missed and in identifying pertinent findings to ask about.
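The torque-summing analogy the abstract describes can be sketched in a few lines of Python: placing each token at a distance proportional to log(LR) and summing torques is equivalent to multiplying likelihood ratios, i.e., applying Bayes' theorem in odds form to conditionally independent findings. The likelihood-ratio values below are illustrative, not taken from the paper.

```python
import math

def posterior_odds(prior_odds, findings):
    """Combine independent findings in odds form via log-likelihood ratios.

    findings: list of (lr_plus, lr_minus, present) tuples for one
    disease pair. A present finding adds log(LR+) (a weight pushing
    the beam down); an absent finding adds log(LR-) (a balloon lifting
    it). Unknown findings are simply omitted from the list.
    """
    log_odds = math.log(prior_odds)
    for lr_plus, lr_minus, present in findings:
        log_odds += math.log(lr_plus if present else lr_minus)
    return math.exp(log_odds)

# Hypothetical example: even prior odds; finding 1 present (LR+ = 4.0),
# finding 2 absent (LR- = 0.5).
odds = posterior_odds(1.0, [(4.0, 0.25, True), (2.0, 0.5, False)])
# Summing logs gives the same result as multiplying the ratios directly:
# 1.0 * 4.0 * 0.5 = 2.0, i.e., 2:1 odds favoring the first disease.
```

Because log(LR) sums and LR products are interchangeable, the physical beam position encodes the full Bayesian update, which is why the abstract calls it a high-fidelity metaphor.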
Affiliation(s)
- Robert M Hamm
- University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA (RMH, WHB)
- William Howard Beasley
- University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA (RMH, WHB); Howard Live Oak, LLC, Norman, OK, USA (WHB)
36
Charani E, Castro-Sánchez E, Moore LSP, Holmes A. Do smartphone applications in healthcare require a governance and legal framework? It depends on the application! BMC Med 2014; 12:29. [PMID: 24524344 PMCID: PMC3929845 DOI: 10.1186/1741-7015-12-29] [Citation(s) in RCA: 83] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/08/2013] [Accepted: 01/14/2014] [Indexed: 11/19/2022] Open
Abstract
The fast pace of technological improvement and the rapid development and adoption of healthcare applications present crucial challenges for clinicians, users and policy makers. Some of the most pressing dilemmas include the need to ensure the safety of applications and establish their cost-effectiveness while engaging patients and users to optimize their integration into health decision-making. Healthcare organizations need to consider the risk of fragmenting clinical practice within the organization as a result of too many apps being developed or used, as well as mechanisms for integrating apps into the wider electronic health record through the development of a governance framework for their use. The impact of app use on the interactions between clinicians and patients needs to be explored, together with the skills required for both groups to benefit from the use of apps. Although healthcare and academic institutions should support the improvements offered by technological advances, they must strive to do so within robust governance frameworks, after sound evaluation of clinical outcomes and examination of potential unintended consequences.
Affiliation(s)
- Esmita Charani
- Centre for Infection Prevention and Management, Imperial College London, Du Cane Road, London W12 0NN, UK.