1. Benda N, Desai P, Reza Z, Zheng A, Kumar S, Harkins S, Hermann A, Zhang Y, Joly R, Kim J, Pathak J, Reading Turchioe M. Patient Perspectives on AI for Mental Health Care: Cross-Sectional Survey Study. JMIR Ment Health 2024;11:e58462. PMID: 39293056. DOI: 10.2196/58462.
Abstract
BACKGROUND The application of artificial intelligence (AI) to health and health care is rapidly increasing. Several studies have assessed the attitudes of health professionals, but far fewer studies have explored the perspectives of patients or the general public. Studies investigating patient perspectives have focused on somatic issues, including those related to radiology, perinatal health, and general applications. Patient feedback has been elicited in the development of specific mental health care solutions, but broader perspectives toward AI for mental health care have been underexplored. OBJECTIVE This study aims to understand public perceptions regarding potential benefits of AI, concerns about AI, comfort with AI accomplishing various tasks, and values related to AI, all pertaining to mental health care. METHODS We conducted a 1-time cross-sectional survey with a nationally representative sample of 500 US-based adults. Participants provided structured responses on their perceived benefits, concerns, comfort, and values regarding AI for mental health care. They could also add free-text responses to elaborate on their concerns and values. RESULTS A plurality of participants (245/497, 49.3%) believed AI may be beneficial for mental health care, but this perspective differed based on sociodemographic variables (all P<.05). Specifically, Black participants (odds ratio [OR] 1.76, 95% CI 1.03-3.05) and those with lower health literacy (OR 2.16, 95% CI 1.29-3.78) perceived AI to be more beneficial, and women (OR 0.68, 95% CI 0.46-0.99) perceived AI to be less beneficial. Participants endorsed concerns about accuracy, possible unintended consequences such as misdiagnosis, the confidentiality of their information, and the loss of connection with their health professional when AI is used for mental health care. A majority of participants (80.4%, 402/500) valued being able to understand individual factors driving their risk, confidentiality, and autonomy as it pertained to the use of AI for their mental health. When asked who was responsible for the misdiagnosis of mental health conditions using AI, 81.6% (408/500) of participants found the health professional to be responsible. Qualitative results revealed similar concerns related to the accuracy of AI and how its use may impact the confidentiality of patients' information. CONCLUSIONS Future work involving the use of AI for mental health care should investigate strategies for conveying the level of AI's accuracy, factors that drive patients' mental health risks, and how data are used confidentially so that patients can determine with their health professionals when AI may be beneficial. It will also be important in a mental health care context to ensure the patient-health professional relationship is preserved when AI is used.
Affiliation(s)
- Natalie Benda: School of Nursing, Columbia University, New York, NY, United States
- Pooja Desai: Department of Biomedical Informatics, Columbia University, New York, NY, United States
- Zayan Reza: Mailman School of Public Health, Columbia University, New York, NY, United States
- Anna Zheng: Stuyvesant High School, New York, NY, United States
- Shiveen Kumar: College of Agriculture and Life Science, Cornell University, Ithaca, NY, United States
- Sarah Harkins: School of Nursing, Columbia University, New York, NY, United States
- Alison Hermann: Department of Psychiatry, Weill Cornell Medicine, New York, NY, United States
- Yiye Zhang: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Rochelle Joly: Department of Obstetrics and Gynecology, Weill Cornell Medicine, New York, NY, United States
- Jessica Kim: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Jyotishman Pathak: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
2. Nong P, Adler-Milstein J, Kardia S, Platt J. Public perspectives on the use of different data types for prediction in healthcare. J Am Med Inform Assoc 2024;31:893-900. PMID: 38302616. PMCID: PMC10990535. DOI: 10.1093/jamia/ocae009.
Abstract
OBJECTIVE Understand public comfort with the use of different data types for predictive models. MATERIALS AND METHODS We analyzed data from a national survey of US adults (n = 1436) fielded from November to December 2021. For three categories of data (identified using factor analysis), we use descriptive statistics to capture comfort level. RESULTS Public comfort with data use for prediction is low. For 13 of 15 data types, most respondents were uncomfortable with that data being used for prediction. In factor analysis, 15 types of data grouped into three categories based on public comfort: (1) personal characteristic data, (2) health-related data, and (3) sensitive data. Mean comfort was highest for health-related data (2.45, SD 0.84, range 1-4), followed by personal characteristic data (2.36, SD 0.94), and sensitive data (1.88, SD 0.77). Across these categories, we observe a statistically significant positive relationship between trust in health systems' use of patient information and comfort with data use for prediction. DISCUSSION Although public trust is recognized as important for the sustainable expansion of predictive tools, current policy does not reflect public concerns. Low comfort with data use for prediction should be addressed in order to prevent potential negative impacts on trust in healthcare. CONCLUSION Our results provide empirical evidence on public perspectives, which are important for shaping the use of predictive models. Findings demonstrate a need for realignment of policy around the sensitivity of non-clinical data categories.
Affiliation(s)
- Paige Nong: Division of Health Policy and Management, University of Minnesota School of Public Health, Minneapolis, MN 55455, United States
- Julia Adler-Milstein: Division of Clinical Informatics and Digital Transformation, University of California San Francisco Department of Medicine, San Francisco, CA 94143, United States
- Sharon Kardia: Department of Epidemiology, University of Michigan School of Public Health, Ann Arbor, MI 48109, United States
- Jodyn Platt: Department of Learning Health Sciences, Michigan Medicine, Ann Arbor, MI 48109, United States
3. Fritz BA, Pugazenthi S, Budelier TP, Tellor Pennington BR, King CR, Avidan MS, Abraham J. User-Centered Design of a Machine Learning Dashboard for Prediction of Postoperative Complications. Anesth Analg 2024;138:804-813. PMID: 37339083. PMCID: PMC10730770. DOI: 10.1213/ane.0000000000006577.
Abstract
BACKGROUND Machine learning models can help anesthesiology clinicians assess patients and make clinical and operational decisions, but well-designed human-computer interfaces are necessary for machine learning model predictions to result in clinician actions that help patients. Therefore, the goal of this study was to apply a user-centered design framework to create a user interface for displaying machine learning model predictions of postoperative complications to anesthesiology clinicians. METHODS Twenty-five anesthesiology clinicians (attending anesthesiologists, resident physicians, and certified registered nurse anesthetists) participated in a 3-phase study that included (phase 1) semistructured focus group interviews and a card sorting activity to characterize user workflows and needs; (phase 2) simulated patient evaluation incorporating a low-fidelity static prototype display interface followed by a semistructured interview; and (phase 3) simulated patient evaluation with concurrent think-aloud incorporating a high-fidelity prototype display interface in the electronic health record. In each phase, data analysis included open coding of session transcripts and thematic analysis. RESULTS During the needs assessment phase (phase 1), participants voiced that (a) identifying preventable risk related to modifiable risk factors is more important than nonpreventable risk, (b) comprehensive patient evaluation follows a systematic approach that relies heavily on the electronic health record, and (c) an easy-to-use display interface should have a simple layout that uses color and graphs to minimize time and energy spent reading it. When performing simulations using the low-fidelity prototype (phase 2), participants reported that (a) the machine learning predictions helped them to evaluate patient risk, (b) additional information about how to act on the risk estimate would be useful, and (c) correctable problems related to textual content existed. When performing simulations using the high-fidelity prototype (phase 3), usability problems predominantly related to the presentation of information and functionality. Despite the usability problems, participants rated the system highly on the System Usability Scale (mean score, 82.5; standard deviation, 10.5). CONCLUSIONS Incorporating user needs and preferences into the design of a machine learning dashboard results in a display interface that clinicians rate as highly usable. Because the system demonstrates usability, evaluation of the effects of implementation on both process and clinical outcomes is warranted.
Affiliation(s)
- Joanna Abraham: Department of Anesthesiology and Institute for Informatics, Washington University School of Medicine, St. Louis, Missouri
4. Ewals LJS, Heesterbeek LJJ, Yu B, van der Wulp K, Mavroeidis D, Funk M, Snijders CCP, Jacobs I, Nederend J, Pluyter JR. The Impact of Expectation Management and Model Transparency on Radiologists' Trust and Utilization of AI Recommendations for Lung Nodule Assessment on Computed Tomography: Simulated Use Study. JMIR AI 2024;3:e52211. PMID: 38875574. PMCID: PMC11041414. DOI: 10.2196/52211.
Abstract
BACKGROUND Many promising artificial intelligence (AI) and computer-aided detection and diagnosis systems have been developed, but few have been successfully integrated into clinical practice. This is partially owing to a lack of user-centered design of AI-based computer-aided detection or diagnosis (AI-CAD) systems. OBJECTIVE We aimed to assess the impact of different onboarding tutorials and levels of AI model explainability on radiologists' trust in AI and the use of AI recommendations in lung nodule assessment on computed tomography (CT) scans. METHODS In total, 20 radiologists from 7 Dutch medical centers performed lung nodule assessment on CT scans under different conditions in a simulated use study as part of a 2×2 repeated-measures quasi-experimental design. Two types of AI onboarding tutorials (reflective vs informative) and 2 levels of AI output (black box vs explainable) were designed. The radiologists first received an onboarding tutorial that was either informative or reflective. Subsequently, each radiologist assessed 7 CT scans, first without AI recommendations. AI recommendations were shown to the radiologist, and they could adjust their initial assessment. Half of the participants received the recommendations via black box AI output and half received explainable AI output. Mental model and psychological trust were measured before onboarding, after onboarding, and after assessing the 7 CT scans. We recorded whether radiologists changed their assessment on found nodules, malignancy prediction, and follow-up advice for each CT assessment. In addition, we analyzed whether radiologists' trust in their assessments had changed based on the AI recommendations. RESULTS Both variations of onboarding tutorials resulted in a significantly improved mental model of the AI-CAD system (informative P=.01 and reflective P=.01). After using AI-CAD, psychological trust significantly decreased for the group with explainable AI output (P=.02). On the basis of the AI recommendations, radiologists changed the number of reported nodules in 27 of 140 assessments, malignancy prediction in 32 of 140 assessments, and follow-up advice in 12 of 140 assessments. The changes were mostly an increased number of reported nodules, a higher estimated probability of malignancy, and earlier follow-up. The radiologists' confidence in their found nodules changed in 82 of 140 assessments, in their estimated probability of malignancy in 50 of 140 assessments, and in their follow-up advice in 28 of 140 assessments. These changes were predominantly increases in confidence. The number of changed assessments and radiologists' confidence did not significantly differ between the groups that received different onboarding tutorials and AI outputs. CONCLUSIONS Onboarding tutorials help radiologists gain a better understanding of AI-CAD and facilitate the formation of a correct mental model. If AI explanations do not consistently substantiate the probability of malignancy across patient cases, radiologists' trust in the AI-CAD system can be impaired. Radiologists' confidence in their assessments was improved by using the AI recommendations.
Affiliation(s)
- Lotte J S Ewals: Catharina Cancer Institute, Catharina Hospital Eindhoven, Eindhoven, Netherlands
- Bin Yu: Research Center for Marketing and Supply Chain Management, Nyenrode Business University, Breukelen, Netherlands
- Kasper van der Wulp: Catharina Cancer Institute, Catharina Hospital Eindhoven, Eindhoven, Netherlands
- Mathias Funk: Department of Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands
- Chris C P Snijders: Department of Human Technology Interaction, Eindhoven University of Technology, Eindhoven, Netherlands
- Igor Jacobs: Department of Hospital Services and Informatics, Philips Research, Eindhoven, Netherlands
- Joost Nederend: Catharina Cancer Institute, Catharina Hospital Eindhoven, Eindhoven, Netherlands
- Jon R Pluyter: Department of Experience Design, Royal Philips, Eindhoven, Netherlands
5. Evans RP, Bryant LD, Russell G, Absolom K. Trust and acceptability of data-driven clinical recommendations in everyday practice: A scoping review. Int J Med Inform 2024;183:105342. PMID: 38266426. DOI: 10.1016/j.ijmedinf.2024.105342.
Abstract
BACKGROUND Increasing attention is being given to the analysis of large health datasets to derive new clinical decision support systems (CDSS). However, few data-driven CDSS are being adopted into clinical practice. Trust in these tools is believed to be fundamental for acceptance and uptake, but to date little attention has been given to defining or evaluating trust in clinical settings. OBJECTIVES A scoping review was conducted to explore how and where the acceptability and trustworthiness of data-driven CDSS have been assessed from the health professional's perspective. METHODS Medline, Embase, PsycInfo, Web of Science, Scopus, ACM Digital, IEEE Xplore and Google Scholar were searched in March 2022 using terms expanded from: "data-driven" AND "clinical decision support" AND "acceptability". Included studies focused on healthcare practitioner-facing, data-driven CDSS relating directly to clinical care, and included trust, or a proxy for it, as an outcome or in the discussion. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist was followed in reporting this review. RESULTS 3291 papers were screened, with 85 primary research studies eligible for inclusion. Studies covered a diverse range of clinical specialisms and intended contexts, but hypothetical systems (24) outnumbered those in clinical use (18). Twenty-five studies measured trust via a wide variety of quantitative, qualitative and mixed methods. A further 24 discussed themes of trust without it being explicitly evaluated, and from these, themes of transparency, explainability, and supporting evidence were identified as factors influencing healthcare practitioner trust in data-driven CDSS. CONCLUSION There is a growing body of research on data-driven CDSS, but few studies have explored stakeholder perceptions in depth, with limited focused research on trustworthiness. Further research on healthcare practitioner acceptance, including requirements for transparency and explainability, should inform clinical implementation.
Affiliation(s)
- Ruth P Evans: University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK
- Gregor Russell: Bradford District Care Trust, New Mill, Victoria Rd, Bradford BD18 3LD, UK
- Kate Absolom: University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK
6. Bergquist M, Rolandsson B, Gryska E, Laesser M, Hoefling N, Heckemann R, Schneiderman JF, Björkman-Burtscher IM. Trust and stakeholder perspectives on the implementation of AI tools in clinical radiology. Eur Radiol 2024;34:338-347. PMID: 37505245. PMCID: PMC10791850. DOI: 10.1007/s00330-023-09967-5.
Abstract
OBJECTIVES To define requirements that condition trust in artificial intelligence (AI) as clinical decision support in radiology from the perspective of various stakeholders, and to explore ways to fulfil these requirements. METHODS Semi-structured interviews were conducted with twenty-five respondents: nineteen directly involved in the development, implementation, or use of AI applications in radiology and six working with AI in other areas of healthcare. We designed the questions to explore three themes: development and use of AI, professional decision-making, and management and organizational procedures connected to AI. The transcribed interviews were analysed in an iterative coding process from open coding to theoretically informed thematic coding. RESULTS We identified four aspects of trust that relate to reliability, transparency, quality verification, and inter-organizational compatibility. These aspects fall under the categories of substantial and procedural requirements. CONCLUSIONS Development of appropriate levels of trust in AI in healthcare is complex and encompasses multiple dimensions of requirements. Various stakeholders will have to be involved in developing AI solutions for healthcare and radiology to fulfil these requirements. CLINICAL RELEVANCE STATEMENT For AI to achieve advances in radiology, it must be given the opportunity to support, rather than replace, human expertise. Support requires trust. Identification of the aspects and conditions for trust allows the development of AI implementation strategies that facilitate advancing the field. KEY POINTS • Dimensions of procedural and substantial demands that need to be fulfilled to foster appropriate levels of trust in AI in healthcare are conditioned on aspects related to reliability, transparency, quality verification, and inter-organizational compatibility. • Creating the conditions for trust to emerge requires the involvement of various stakeholders, who will have to compensate for the problem's inherent complexity by finding and promoting well-defined solutions.
Affiliation(s)
- Magnus Bergquist: School of Information Technology, Halmstad University, Halmstad, Sweden
- Bertil Rolandsson: Department of Sociology and Work Science, University of Gothenburg, Gothenburg, Sweden; Department of Sociology, Lund University, Lund, Sweden
- Emilia Gryska: Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Mats Laesser: Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- Nickoleta Hoefling: Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- Rolf Heckemann: Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Justin F Schneiderman: Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Isabella M Björkman-Burtscher: Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
7. Fischer A, Rietveld A, Teunissen P, Hoogendoorn M, Bakker P. What is the future of artificial intelligence in obstetrics? A qualitative study among healthcare professionals. BMJ Open 2023;13:e076017. PMID: 37879682. PMCID: PMC10603416. DOI: 10.1136/bmjopen-2023-076017.
Abstract
OBJECTIVE This work explores the perceptions of obstetrical clinicians about artificial intelligence (AI) in order to bridge the gap in uptake of AI between research and medical practice. Identifying potential areas where AI can contribute to clinical practice enables AI research to align with the needs of clinicians and, ultimately, patients. DESIGN Qualitative interview study. SETTING A national study conducted in the Netherlands between November 2022 and February 2023. PARTICIPANTS Dutch clinicians working in obstetrics with varying relevant work experience, gender and age. ANALYSIS Thematic analysis of qualitative interview transcripts. RESULTS Thirteen gynaecologists were interviewed about hypothetical scenarios of an implemented AI model. Thematic analysis identified two major themes: perceived usefulness and trust. Usefulness involved AI extending human brain capacity in complex pattern recognition and information processing, reducing contextual influence and saving time. Trust required validation, explainability and successful personal experience. This result reveals two paradoxes: first, AI is expected to provide added value by surpassing human capabilities, yet participants also expressed a need to understand the parameters and their influence on predictions before trusting and adopting the model. Second, participants recognised the value of incorporating numerous parameters into a model, but they also believed that certain contextual factors should only be considered by humans, as it would be undesirable for AI models to use that information. CONCLUSIONS Obstetricians' opinions on the potential value of AI highlight the need for clinician-AI researcher collaboration. Trust can be built through conventional means like randomised controlled trials and guidelines. Holistic impact metrics, such as changes in workflow, not just clinical outcomes, should guide AI model development. Further research is needed for evaluating evolving AI systems beyond traditional validation methods.
Affiliation(s)
- Anne Fischer: Department of Obstetrics and Gynecology, Amsterdam UMC Location VUmc, Amsterdam, The Netherlands; Department of Computer Science, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Amsterdam Reproduction and Development Research Institute, Amsterdam, The Netherlands
- Anna Rietveld: Department of Obstetrics and Gynecology, Amsterdam UMC Location VUmc, Amsterdam, The Netherlands; Amsterdam Reproduction and Development Research Institute, Amsterdam, The Netherlands
- Pim Teunissen: School of Health Professions Education, Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands; Department of Gynaecology & Obstetrics, Maastricht UMC, Maastricht, The Netherlands
- Mark Hoogendoorn: Department of Computer Science, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Petra Bakker: Department of Obstetrics and Gynecology, Amsterdam UMC Location VUmc, Amsterdam, The Netherlands; Amsterdam Reproduction and Development Research Institute, Amsterdam, The Netherlands
8. Ulloa M, Rothrock B, Ahmad FS, Jacobs M. Invisible clinical labor driving the successful integration of AI in healthcare. Front Comput Sci 2022. DOI: 10.3389/fcomp.2022.1045704.
Abstract
Artificial Intelligence and Machine Learning (AI/ML) tools are changing the landscape of healthcare decision-making. Vast amounts of data can lead to efficient triage and diagnosis of patients with the assistance of ML methodologies. However, more research has focused on the technological challenges of developing AI than on integrating it into clinical systems. As a result, clinical teams' role in developing and deploying these tools has been overlooked. We look to three case studies from our research to describe the often invisible work that clinical teams do in driving the successful integration of clinical AI tools. Namely, clinical teams support data labeling, identify algorithmic errors and account for workflow exceptions, translate algorithmic output to clinical next steps in care, and develop team awareness of how the tool is used once deployed. We call for detailed and extensive documentation strategies (of clinical labor, workflows, and team structures) to ensure this labor is valued and to promote sharing of sociotechnical implementation strategies.
9. Barry B, Zhu X, Behnken E, Inselman J, Schaepe K, McCoy R, Rushlow D, Noseworthy P, Richardson J, Curtis S, Sharp R, Misra A, Akfaly A, Molling P, Bernard M, Yao X. Provider Perspectives on Artificial Intelligence-Guided Screening for Low Ejection Fraction in Primary Care: Qualitative Study. JMIR AI 2022;1:e41940. PMID: 38875550. PMCID: PMC11041436. DOI: 10.2196/41940.
Abstract
BACKGROUND The promise of artificial intelligence (AI) to transform health care is threatened by a tangle of challenges that emerge as new AI tools are introduced into clinical practice. AI tools with high accuracy, especially those that detect asymptomatic cases, may be hindered by barriers to adoption. Understanding provider needs and concerns is critical to inform implementation strategies that improve provider buy-in and adoption of AI tools in medicine. OBJECTIVE This study aimed to describe provider perspectives on the adoption of an AI-enabled screening tool in primary care to inform effective integration and sustained use. METHODS A qualitative study was conducted between December 2019 and February 2020 as part of a pragmatic randomized controlled trial at a large academic medical center in the United States. In all, 29 primary care providers were purposively sampled using a positive deviance approach for participation in semistructured focus groups after their use of the AI tool in the randomized controlled trial was complete. Focus group data were analyzed using a grounded theory approach; iterative analysis was conducted to identify codes and themes, which were synthesized into findings. RESULTS Our findings revealed that providers understood the purpose and functionality of the AI tool and saw potential value for more accurate and faster diagnoses. However, successful adoption into routine patient care requires the smooth integration of the tool with clinical decision-making and existing workflow to address provider needs and preferences during implementation. To fulfill the AI tool's promise of clinical value, providers identified areas for improvement including integration with clinical decision-making, cost-effectiveness and resource allocation, provider training, workflow integration, care pathway coordination, and provider-patient communication. CONCLUSIONS The implementation of AI-enabled tools in medicine can benefit from sensitivity to the nuanced context of care and provider needs to enable the useful adoption of AI tools at the point of care. TRIAL REGISTRATION ClinicalTrials.gov NCT04000087; https://clinicaltrials.gov/ct2/show/NCT04000087.
Affiliation(s)
- Barbara Barry: Division of Health Care Delivery Research, Mayo Clinic, Rochester, MN, United States; Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Xuan Zhu: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Emma Behnken: Knowledge and Evaluation Research Unit, Mayo Clinic, Rochester, MN, United States
- Jonathan Inselman: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Karen Schaepe: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Rozalina McCoy: Department of Quantitative Health Sciences, Mayo Clinic, Rochester, MN, United States
- David Rushlow: Department of Family Medicine, Mayo Clinic, Rochester, MN, United States
- Peter Noseworthy: Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, United States
- Jordan Richardson: Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN, United States
- Susan Curtis: Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN, United States
- Richard Sharp: Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN, United States
- Artika Misra: Department of Family Medicine, Mayo Clinic Health System, Mankato, MN, United States
- Abdulla Akfaly: Department of Community Internal Medicine, Mayo Clinic Health System, Eau Claire, WI, United States
- Paul Molling: Department of Family Medicine, Mayo Clinic Health System, Onalaska, WI, United States
- Matthew Bernard: Department of Family Medicine, Mayo Clinic, Rochester, MN, United States
- Xiaoxi Yao: Division of Health Care Delivery Research, Mayo Clinic, Rochester, MN, United States; Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States