1. Park SH, Pinto-Powell R, Thesen T, Lindqwister A, Levy J, Chacko R, Gonzalez D, Bridges C, Schwendt A, Byrum T, Fong J, Shasavari S, Hassanpour S. Preparing healthcare leaders of the digital age with an integrative artificial intelligence curriculum: a pilot study. Med Educ Online 2024; 29:2315684. [PMID: 38351737] [PMCID: PMC10868429] [DOI: 10.1080/10872981.2024.2315684]
Abstract
Artificial intelligence (AI) is rapidly being introduced into the clinical workflow of many specialties. Despite the need to train physicians who understand the utility and implications of AI and to mitigate a growing skills gap, no established consensus exists on how best to introduce AI concepts to medical students during preclinical training. This study examined the effectiveness of a pilot Digital Health Scholars (DHS) non-credit enrichment elective that paralleled the Dartmouth Geisel School of Medicine's first-year preclinical curriculum, with a focus on introducing AI algorithms and their applications in the concurrently occurring systems blocks. From September 2022 to March 2023, ten self-selected first-year students enrolled in the elective curriculum, run in parallel with four existing curricular blocks (Immunology, Hematology, Cardiology, and Pulmonology). Each DHS block consisted of a journal club, a live-coding demonstration, and an integration session led by a researcher in that field. Students' confidence in explaining the content objectives (high-level knowledge, implications, and limitations of AI) was measured before and after each block and compared using Mann-Whitney U tests. Students reported significant increases in confidence in describing the content objectives after all four blocks (Immunology: U = 4.5, p = 0.030; Hematology: U = 1.0, p = 0.009; Cardiology: U = 4.0, p = 0.019; Pulmonology: U = 4.0, p = 0.030), as well as an average overall satisfaction rating of 4.29/5 for the curriculum content. Our study demonstrates that a digital health enrichment elective that runs in parallel to an institution's preclinical curriculum and embeds AI concepts into relevant clinical topics can enhance students' confidence in describing content objectives that pertain to high-level algorithmic understanding, implications, and limitations of the studied models. Building on this elective curricular design, further studies with larger enrollment can help determine the most effective approach to preparing future physicians for the AI-enhanced clinical workflow.
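The Mann-Whitney U statistic used in this study to compare pre- and post-block confidence ratings can be computed directly from rank sums. A minimal pure-Python sketch, using illustrative Likert-style values rather than the study's data:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic via rank sums; ties receive average ranks."""
    pooled = sorted(a + b)
    # Assign each distinct value the mean of its 1-based rank range.
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    r_a = sum(ranks[v] for v in a)            # rank sum of sample a
    u_a = r_a - len(a) * (len(a) + 1) / 2     # U for sample a
    u_b = len(a) * len(b) - u_a               # U for sample b
    return min(u_a, u_b)                      # report the smaller U

# Hypothetical pre/post confidence ratings on a 1-5 scale (not the study's data)
pre  = [2, 3, 2, 3, 3]
post = [4, 5, 4, 4, 5]
print(mann_whitney_u(pre, post))  # -> 0.0 (complete separation of the groups)
```

A small U (relative to the maximum of len(a) * len(b)) indicates little overlap between the two samples, which is what drives the low p-values reported for each block.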
Affiliation(s)
- Soo Hwan Park
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Thomas Thesen
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Joshua Levy
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Rachael Chacko
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Connor Bridges
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Adam Schwendt
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Travis Byrum
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Justin Fong
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
2. Witkowski K, Okhai R, Neely SR. Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care. BMC Med Ethics 2024; 25:74. [PMID: 38909180] [PMCID: PMC11193174] [DOI: 10.1186/s12910-024-01066-4]
Abstract
BACKGROUND In an effort to improve the quality of medical care, the philosophy of patient-centered care has become integrated into almost every aspect of the medical community. Despite its widespread acceptance among patients and practitioners, there are concerns that rapid advancements in artificial intelligence may threaten elements of patient-centered care, such as personal relationships with care providers and patient-driven choices. This study explores the extent to which patients are confident in and comfortable with the use of these technologies when it comes to their own individual care, and identifies areas that may align with or threaten elements of patient-centered care. METHODS An exploratory, mixed-methods approach was used to analyze survey data from 600 US-based adults in the State of Florida. The survey was administered through a leading market research provider (August 10-21, 2023), and responses were collected to be representative of the state's population based on age, gender, race/ethnicity, and political affiliation. RESULTS Respondents were more comfortable with the use of AI in health-related tasks not associated with doctor-patient relationships, such as scheduling patient appointments or follow-ups (84.2%). Fear of losing the 'human touch' associated with doctors was a common theme in the qualitative coding, suggesting a potential conflict between the implementation of AI and patient-centered care. In addition, decision self-efficacy was associated with higher levels of comfort with AI, but there were also concerns about losing decision-making control, workforce changes, and costs. A small majority of participants mentioned that AI could be useful for doctors and lead to more equitable care, but only when used within limits. CONCLUSION The application of AI in medical care is rapidly advancing, but oversight, regulation, and guidance addressing critical aspects of patient-centered care are lacking. While there is no evidence at this time that AI will undermine patient-physician relationships, patients are concerned about the application of AI in medical care, specifically as it relates to their interactions with physicians. Medical guidance on incorporating AI while adhering to the principles of patient-centered care is needed to clarify how AI will augment medical care.
3. Frost EK, Bosward R, Aquino YSJ, Braunack-Mayer A, Carter SM. Facilitating public involvement in research about healthcare AI: A scoping review of empirical methods. Int J Med Inform 2024; 186:105417. [PMID: 38564959] [DOI: 10.1016/j.ijmedinf.2024.105417]
Abstract
OBJECTIVE With the recent increase in research into public views on healthcare artificial intelligence (HCAI), the objective of this review is to examine the methods of empirical studies on public views on HCAI. We map how studies provided participants with information about HCAI, and we examine the extent to which studies framed publics as active contributors to HCAI governance. MATERIALS AND METHODS We searched 5 academic databases and Google Advanced for empirical studies investigating public views on HCAI. We extracted information including study aims, research instruments, and recommendations. RESULTS Sixty-two studies were included. Most were quantitative (N = 42). Most (N = 47) reported providing participants with background information about HCAI. Despite this, studies often reported participants' lack of prior knowledge about HCAI as a limitation. Over three quarters (N = 48) of the studies made recommendations that envisaged public views being used to guide governance of AI. DISCUSSION Provision of background information is an important component of facilitating research with publics on HCAI. The high proportion of studies reporting participants' lack of knowledge about HCAI as a limitation reflects the need for more guidance on how information should be presented. A minority of studies adopted technocratic positions that construed publics as passive beneficiaries of AI, rather than as active stakeholders in HCAI design and implementation. CONCLUSION This review draws attention to how public roles in HCAI governance are constructed in empirical studies. To facilitate active participation, we recommend that research with publics on HCAI consider methodological designs that expose participants to diverse information sources.
Affiliation(s)
- Emma Kellie Frost
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Rebecca Bosward
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
4. Conradsen S, Vardinghus-Nielsen H, Skirbekk H. Patient Knowledge and Trust in Health Care. A Theoretical Discussion on the Relationship Between Patients' Knowledge and Their Trust in Health Care Personnel in High Modernity. Health Care Anal 2024; 32:73-87. [PMID: 37807014] [PMCID: PMC11133163] [DOI: 10.1007/s10728-023-00467-7]
Abstract
In this paper, we aim to discuss a theoretical explanation for the positive relationship between patients' knowledge and their trust in healthcare personnel. Our approach is based on John Dewey's notion of continuity. This notion entails that the individual's experiences are interpreted as interrelated, and that knowledge is related to future experience, not merely a record of the past. Furthermore, we apply Niklas Luhmann's theory of trust as a way of reducing complexity and enabling action. Anthony Giddens' description and analysis of high-modern society provides a frame for discussing the preconditions for patient-healthcare personnel interaction. High modernity is dominated by expert systems and demands trust in them. We conclude that patient knowledge and trust in healthcare personnel are related because both knowledge and trust are future- and action-oriented concepts. The traits of high modernity provide both opportunities and challenges, as personnel can and must exercise discretion. This discretion must be exercised in a context where knowledge is considered uncertain and preliminary.
Affiliation(s)
- Stein Conradsen
- Department of Education, Faculty of Humanities and Education, Volda University College, Volda, Norway
- Henrik Vardinghus-Nielsen
- Department of Health Science and Technology, The Faculty of Medicine, Aalborg University, Aalborg, Denmark
- Helge Skirbekk
- Department of Nursing and Health Promotion, Faculty of Health Sciences, OsloMet, Oslo, Norway
5. Zondag AGM, Rozestraten R, Grimmelikhuijsen SG, Jongsma KR, van Solinge WW, Bots ML, Vernooij RWM, Haitjema S. The Effect of Artificial Intelligence on Patient-Physician Trust: Cross-Sectional Vignette Study. J Med Internet Res 2024; 26:e50853. [PMID: 38805702] [PMCID: PMC11167322] [DOI: 10.2196/50853]
Abstract
BACKGROUND Clinical decision support systems (CDSSs) based on routine care data, using artificial intelligence (AI), are increasingly being developed. Previous studies focused largely on the technical aspects of using AI, but the acceptability of these technologies to patients remains unclear. OBJECTIVE We aimed to investigate whether patient-physician trust is affected when medical decision-making is supported by a CDSS. METHODS We conducted a vignette study among the patient panel (N=860) of the University Medical Center Utrecht, the Netherlands. Patients were randomly assigned to 4 groups: the intervention or control group of a high-risk or a low-risk case. In both the high-risk and low-risk case groups, a physician made a treatment decision with (intervention groups) or without (control groups) the support of a CDSS. Using a questionnaire with a 7-point Likert scale, with 1 indicating "strongly disagree" and 7 indicating "strongly agree," we collected data on patient-physician trust in 3 dimensions: competence, integrity, and benevolence. We assessed differences in patient-physician trust between the control and intervention groups per case using Mann-Whitney U tests, and potential effect modification by the participant's sex, age, education level, general trust in health care, and general trust in technology using multivariate analyses of (co)variance. RESULTS In total, 398 patients participated. In the high-risk case, median perceived competence and integrity were lower in the intervention group than in the control group, though the differences were not statistically significant (5.8 vs 5.6; P=.16 and 6.3 vs 6.0; P=.06, respectively). However, the effect of a CDSS application on the perceived competence of the physician depended on the participant's sex (P=.03). Although no between-group differences were found in men, in women the perception of the physician's competence and integrity was significantly lower in the intervention group than in the control group (P=.009 and P=.01, respectively). In the low-risk case, no differences in trust between the groups were found. However, increased trust in technology positively influenced perceived benevolence and integrity in the low-risk case (P=.009 and P=.04, respectively). CONCLUSIONS We found that, in general, patient-physician trust was high. However, our findings indicate a potentially negative effect of AI applications on the patient-physician relationship, especially among women and in high-risk situations. General trust in technology might increase the likelihood that patients will embrace the use of CDSSs by treating professionals.
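The effect modification reported in this study (a CDSS lowering perceived trust in women but not men) amounts to comparing groups within strata of a covariate. A toy sketch of such a stratified comparison, with hypothetical 7-point trust ratings rather than the study's data:

```python
from statistics import median

# Hypothetical 7-point trust ratings keyed by (sex, group); not the study's data.
ratings = {
    ("female", "control"):      [6, 6, 7, 5, 6],
    ("female", "intervention"): [5, 4, 5, 5, 4],
    ("male",   "control"):      [6, 5, 6, 6, 5],
    ("male",   "intervention"): [6, 5, 6, 5, 6],
}

def stratified_medians(data):
    """Median trust per (sex, group) stratum, to inspect effect modification."""
    return {stratum: median(scores) for stratum, scores in data.items()}

for (sex, group), med in sorted(stratified_medians(ratings).items()):
    print(f"{sex:6s} {group:12s} median trust = {med}")
```

In this made-up example the intervention lowers the median only in the female stratum, mirroring the pattern the study formally tests with multivariate analyses of (co)variance.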
Affiliation(s)
- Anna G M Zondag
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Raoul Rozestraten
- Utrecht University School of Governance, Utrecht University, Utrecht, Netherlands
- Karin R Jongsma
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Wouter W van Solinge
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Michiel L Bots
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Robin W M Vernooij
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Department of Nephrology and Hypertension, University Medical Center Utrecht, Utrecht, Netherlands
- Saskia Haitjema
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
6. Gordon ER, Trager MH, Kontos D, Weng C, Geskin LJ, Dugdale LS, Samie FH. Ethical considerations for artificial intelligence in dermatology: a scoping review. Br J Dermatol 2024; 190:789-797. [PMID: 38330217] [DOI: 10.1093/bjd/ljae040]
Abstract
The field of dermatology is experiencing the rapid deployment of artificial intelligence (AI), from mobile applications (apps) for skin cancer detection to large language models like ChatGPT that can answer generalist or specialist questions about skin diagnoses. With these new applications, ethical concerns have emerged. In this scoping review, we aimed to identify the applications of AI in the field of dermatology and to understand their ethical implications. We used a multifaceted search approach, searching PubMed, MEDLINE, Cochrane Library and Google Scholar for primary literature, following the PRISMA Extension for Scoping Reviews guidance. Our advanced query included terms related to dermatology, AI and ethical considerations. Our search yielded 202 papers. After initial screening, 68 studies were included. Thirty-two were related to clinical image analysis and raised ethical concerns about misdiagnosis, data security, privacy violations and the replacement of dermatologist jobs. Seventeen discussed the limited representation of skin of colour in datasets, which could lead to misdiagnosis in the general population. Nine articles about teledermatology raised ethical concerns, including the exacerbation of health disparities, a lack of standardized regulations, informed consent for AI use and privacy challenges. Seven addressed inaccuracies in the responses of large language models. Seven examined attitudes toward and trust in AI, with most patients requesting supplemental assessment by a physician to ensure reliability and accountability. Benefits of AI integration into clinical practice include increased patient access, improved clinical decision-making and greater efficiency, among others. However, safeguards must be put in place to ensure the ethical application of AI.
Affiliation(s)
- Emily R Gordon
- Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA
- Megan H Trager
- Columbia University Irving Medical Center, Departments of Dermatology
- Despina Kontos
- University of Pennsylvania, Perelman School of Medicine, Department of Radiology, Philadelphia, PA, USA
- Radiology
- Larisa J Geskin
- Columbia University Irving Medical Center, Departments of Dermatology
- Lydia S Dugdale
- Columbia University Vagelos College of Physicians and Surgeons, Department of Medicine, Center for Clinical Medical Ethics, New York, NY, USA
- Faramarz H Samie
- Columbia University Irving Medical Center, Departments of Dermatology
7. St John A, Cooper L, Kavic SM. The Role of Artificial Intelligence in Surgery: What do General Surgery Residents Think? Am Surg 2024; 90:541-549. [PMID: 37863479] [DOI: 10.1177/00031348231209524]
Abstract
BACKGROUND Artificial intelligence (AI) holds significant potential in medical education and patient care, but its rapid emergence presents ethical and practical challenges. This study explored the perspectives of surgical residents on AI's role in medicine. METHODS We performed a cross-sectional study surveying general surgery residents at a university-affiliated teaching hospital about their views on AI in medicine and surgical training. The survey covered demographics, residents' understanding of AI, its integration into medical practice, and use of AI tools like ChatGPT. The survey design was inspired by a recent national survey and underwent pretesting before deployment. RESULTS Of the 31 participants surveyed, 24% identified diagnostics as AI's top application, 12% favored its use in identifying anatomical structures in surgeries, and 20% endorsed AI integration into EMRs for predictive models. Attitudes toward AI varied based on its intended application: 77.41% expressed concern about AI making life decisions, and 70.97% felt excited about its application to repetitive tasks. A significant 67.74% believed AI could enhance the understanding of medical knowledge. Perception of AI integration varied with AI familiarity (P = .01), with more knowledgeable respondents expressing more positive views. Moreover, familiarity influenced the perceived academic use of ChatGPT (P = .039) and attitudes toward AI in operating rooms (P = .032). CONCLUSION This study provides insights into surgery residents' perceptions of AI in medical practice and training. These findings can inform future research, shape policy decisions, and guide AI development, promoting a harmonious collaboration between AI and surgeons to improve both training and patient care.
Affiliation(s)
- Ace St John
- University of Maryland Medical Center, Baltimore, MD, USA
- Laura Cooper
- University of Maryland Medical Center, Baltimore, MD, USA
- Stephen M Kavic
- University of Maryland School of Medicine, Baltimore, MD, USA
8. Tan TF, Thirunavukarasu AJ, Campbell JP, Keane PA, Pasquale LR, Abramoff MD, Kalpathy-Cramer J, Lum F, Kim JE, Baxter SL, Ting DSW. Generative Artificial Intelligence Through ChatGPT and Other Large Language Models in Ophthalmology: Clinical Applications and Challenges. Ophthalmol Sci 2023; 3:100394. [PMID: 37885755] [PMCID: PMC10598525] [DOI: 10.1016/j.xops.2023.100394]
Abstract
The rapid progress of large language models (LLMs) driving generative artificial intelligence applications heralds significant opportunities in health care. We conducted a review up to April 2023 on Google Scholar, Embase, MEDLINE, and Scopus using the following terms, selected for relevance to this review: "large language models," "generative artificial intelligence," "ophthalmology," "ChatGPT," and "eye." From a clinical viewpoint specific to ophthalmologists, we explore, from the perspectives of different stakeholders including patients, physicians, and policymakers, the potential LLM applications in the education, research, and clinical domains of ophthalmology. We also highlight the foreseeable challenges of LLM implementation in clinical practice, including concerns about accuracy, interpretability, the perpetuation of bias, and data security. As LLMs continue to mature, it is essential for stakeholders to jointly establish standards for best practices to safeguard patient safety.
Affiliation(s)
- Ting Fang Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Arun James Thirunavukarasu
- University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Corpus Christi College, University of Cambridge, Cambridge, United Kingdom
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon
- Pearse A. Keane
- Moorfields Eye Hospital, University College London, London, United Kingdom
- Louis R. Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York City, New York
- Michael D. Abramoff
- American Medical Association's Digital Medicine Payment Advisory Group (DMPAG) Artificial Intelligence Workgroup, American Medical Association, Chicago, Illinois
- Department of Ophthalmology, University of Iowa, Iowa City, Iowa
- Digital Diagnostics, Inc, Coralville, Iowa
- Flora Lum
- American Academy of Ophthalmology, San Francisco, California
- Judy E. Kim
- Department of Ophthalmology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Sally L. Baxter
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, La Jolla, California
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Byers Eye Institute, Stanford University, Stanford, California
9. Sengupta D. Artificial Intelligence in Diagnostic Dermatology: Challenges and the Way Forward. Indian Dermatol Online J 2023; 14:782-787. [PMID: 38099026] [PMCID: PMC10718130] [DOI: 10.4103/idoj.idoj_462_23]
Abstract
Artificial Intelligence (AI) has emerged as a transformative force in the field of diagnostic dermatology, offering unprecedented capabilities in image recognition and data analysis. Despite its promise, the integration of AI into clinical practice faces multifaceted challenges that span technical, ethical, and regulatory domains. This article provides a narrative overview of the current state of AI in dermatology, tracing its historical evolution from early diagnostic tools to contemporary hybrid supervised models. We identify and categorize six critical challenges: data quality and quantity, algorithmic development and explainability, ethical considerations, clinical workflow integration, regulatory frameworks, and stakeholder collaboration. Each challenge is dissected from the perspectives of academia, industry, and healthcare providers, offering actionable recommendations for future research and implementation. We also highlight the paradigm shift in AI research, emphasizing the potential of transformer architectures in revolutionizing diagnostic methodologies. By addressing the challenges and harnessing the latest advancements, AI has the potential to significantly impact diagnostic accuracy and patient outcomes in dermatology.
Affiliation(s)
- Dipayan Sengupta
- Consultant Dermatologist, Euro Skin Cliniq, Kolkata, West Bengal, India
10. Chen Y, Wu Z, Wang P, Xie L, Yan M, Jiang M, Yang Z, Zheng J, Zhang J, Zhu J. Radiology Residents' Perceptions of Artificial Intelligence: Nationwide Cross-Sectional Survey Study. J Med Internet Res 2023; 25:e48249. [PMID: 37856181] [PMCID: PMC10623237] [DOI: 10.2196/48249]
Abstract
BACKGROUND Artificial intelligence (AI) is transforming various fields, with health care, especially diagnostic specialties such as radiology, being a key but controversial battleground. However, there is limited research systematically examining the response of "human intelligence" to AI. OBJECTIVE This study aims to comprehend radiologists' perceptions regarding AI, including their views on its potential to replace them, its usefulness, and their willingness to accept it. We examine the influence of various factors, encompassing demographic characteristics, working status, psychosocial aspects, personal experience, and contextual factors. METHODS Between December 1, 2020, and April 30, 2021, a cross-sectional survey was completed by 3666 radiology residents in China. We used multivariable logistic regression models to examine factors and associations, reporting odds ratios (ORs) and 95% CIs. RESULTS In summary, radiology residents generally hold a positive attitude toward AI, with 29.90% (1096/3666) agreeing that AI may reduce the demand for radiologists, 72.80% (2669/3666) believing AI improves disease diagnosis, and 78.18% (2866/3666) feeling that radiologists should embrace AI. Several associated factors, including age, gender, education, region, eye strain, working hours, time spent on medical images, resilience, burnout, AI experience, and perceptions of residency support and stress, significantly influence AI attitudes. For instance, burnout symptoms were associated with greater concerns about AI replacement (OR 1.89; P<.001), less favorable views on AI usefulness (OR 0.77; P=.005), and reduced willingness to use AI (OR 0.71; P<.001). Moreover, after adjusting for all other factors, perceived AI replacement (OR 0.81; P<.001) and AI usefulness (OR 5.97; P<.001) were shown to significantly impact the intention to use AI. CONCLUSIONS This study profiles radiology residents who are accepting of AI. 
Our comprehensive findings provide insights for a multidimensional approach to help physicians adapt to AI. Targeted policies, such as digital health care initiatives and medical education, can be developed accordingly.
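The odds ratios this study reports come from multivariable logistic regression; a fitted log-odds coefficient and its standard error convert directly to an OR with a 95% CI. A short sketch where the coefficient is chosen to reproduce the reported burnout OR of 1.89 and the standard error is hypothetical:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient (log-odds) and its SE
    into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),          # point estimate
            math.exp(beta - z * se), # lower CI bound
            math.exp(beta + z * se)) # upper CI bound

# Illustrative values: beta ~0.636 corresponds to the OR of 1.89 reported
# for burnout vs. replacement concerns; the SE of 0.12 is made up.
or_, lo, hi = odds_ratio_ci(0.636, 0.12)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Because exponentiation is monotonic, a CI that excludes 1 on the OR scale corresponds to a coefficient CI that excludes 0, which is what the reported P<.001 values reflect.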
Affiliation(s)
- Yanhua Chen
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Ziye Wu
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Peicheng Wang
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Linbo Xie
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Mengsha Yan
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Maoqing Jiang
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jianjun Zheng
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jingfeng Zhang
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jiming Zhu
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Institute for Healthy China, Tsinghua University, Beijing, China
11. Rojas JC, Teran M, Umscheid CA. Clinician Trust in Artificial Intelligence: What is Known and How Trust Can Be Facilitated. Crit Care Clin 2023; 39:769-782. [PMID: 37704339] [DOI: 10.1016/j.ccc.2023.02.004]
Abstract
Predictive analytics based on artificial intelligence (AI) offer clinicians the opportunity to leverage the big data available in electronic health records (EHRs) to improve clinical decision-making, and thus patient outcomes. Despite this, many barriers exist to facilitating trust between clinicians and AI-based tools, limiting their current impact. Potential solutions are available at both the local and national levels. It will take a broad and diverse coalition of stakeholders, from health care systems, EHR vendors, and clinical educators to regulators, researchers, and the patient community, to help facilitate this trust so that the promise of AI in health care can be realized.
Affiliation(s)
- Juan C Rojas: Department of Internal Medicine, Rush University, 1725 West Harrison Street, Suite 010, Chicago, IL 60612, USA
- Mario Teran: Agency for Healthcare Research and Quality, 5600 Fishers Lane, Mail Stop 06E53A, Rockville, MD 20857, USA
- Craig A Umscheid: Agency for Healthcare Research and Quality, 5600 Fishers Lane, Mail Stop 06E53A, Rockville, MD 20857, USA
12
Gould DJ, Dowsey MM, Glanville-Hearst M, Spelman T, Bailey JA, Choong PFM, Bunzli S. Patients' Views on AI for Risk Prediction in Shared Decision-Making for Knee Replacement Surgery: Qualitative Interview Study. J Med Internet Res 2023; 25:e43632. PMID: 37721797; PMCID: PMC10546266. DOI: 10.2196/43632.
Abstract
BACKGROUND The use of artificial intelligence (AI) in decision-making around knee replacement surgery is increasing, and this technology holds promise to improve the prediction of patient outcomes. Ambiguity surrounds the definition of AI, and there are mixed views on its application in clinical settings. OBJECTIVE In this study, we aimed to explore the understanding and attitudes of patients who underwent knee replacement surgery regarding AI in the context of risk prediction for shared clinical decision-making. METHODS This qualitative study involved patients who underwent knee replacement surgery at a tertiary referral center for joint replacement surgery. The participants were selected based on their age and sex. Semistructured interviews explored the participants' understanding of AI and their opinions on its use in shared clinical decision-making. Data collection and reflexive thematic analyses were conducted concurrently. Recruitment continued until thematic saturation was achieved. RESULTS Thematic saturation was achieved with 19 interviews and confirmed with 1 additional interview, resulting in 20 participants being interviewed (female participants: n=11, 55%; male participants: n=9, 45%; median age: 66 years). A total of 11 (55%) participants had a substantial postoperative complication. Three themes captured the participants' understanding of AI and their perceptions of its use in shared clinical decision-making. The theme Expectations captured the participants' views of themselves as individuals with the right to self-determination as they sought therapeutic solutions tailored to their circumstances, needs, and desires, including whether to use AI at all. The theme Empowerment highlighted the potential of AI to enable patients to develop realistic expectations and equip them with personalized risk information to discuss in shared decision-making conversations with the surgeon. 
The theme Partnership captured the importance of symbiosis between AI and clinicians because AI has varied levels of interpretability and understanding of human emotions and empathy. CONCLUSIONS Patients who underwent knee replacement surgery in this study had varied levels of familiarity with AI and diverse conceptualizations of its definitions and capabilities. Educating patients about AI through nontechnical explanations and illustrative scenarios could help inform their decision to use it for risk prediction in the shared decision-making process with their surgeon. These findings could be used in the process of developing a questionnaire to ascertain the views of patients undergoing knee replacement surgery on the acceptability of AI in shared clinical decision-making. Future work could investigate the accuracy of this patient group's understanding of AI, beyond their familiarity with it, and how this influences their acceptance of its use. Surgeons may play a key role in finding a place for AI in the clinical setting as the uptake of this technology in health care continues to grow.
Affiliation(s)
- Daniel J Gould: St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Michelle M Dowsey: St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia; Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
- Tim Spelman: St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- James A Bailey: School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
- Peter F M Choong: St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia; Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
- Samantha Bunzli: School of Health Sciences and Social Work, Griffith University, Brisbane, Australia
13
Ramgopal S, Kapes J, Alpern ER, Carroll MS, Heffernan M, Simon NJE, Florin TA, Macy ML. Perceptions of Artificial Intelligence-Assisted Care for Children With a Respiratory Complaint. Hosp Pediatr 2023; 13:802-810. PMID: 37593809. DOI: 10.1542/hpeds.2022-007066.
Abstract
OBJECTIVES To evaluate caregiver opinions on the use of artificial intelligence (AI)-assisted medical decision-making for children with a respiratory complaint in the emergency department (ED). METHODS We surveyed a sample of caregivers of children presenting to a pediatric ED with a respiratory complaint. We assessed caregiver opinions with respect to AI, defined as "specialized computer programs" that "help make decisions about the best way to care for children." We performed multivariable logistic regression to identify factors associated with discomfort with AI-assisted decision-making. RESULTS Of 279 caregivers who were approached, 254 (91.0%) participated. Most indicated they would want to know if AI was being used for their child's health care (93.5%) and were extremely or somewhat comfortable with the use of AI in deciding the need for blood (87.9%) and viral testing (87.6%), interpreting chest radiography (84.6%), and determining need for hospitalization (78.9%). In multivariable analysis, caregiver age of 30 to 37 years (adjusted odds ratio [aOR] 3.67, 95% confidence interval [CI] 1.43-9.38; relative to 18-29 years) and a diagnosis of bronchospasm (aOR 5.77, 95% CI 1.24-30.28 relative to asthma) were associated with greater discomfort with AI. Caregivers with children being admitted to the hospital (aOR 0.23, 95% CI 0.09-0.50) had less discomfort with AI. CONCLUSIONS Caregivers were receptive toward the use of AI-assisted decision-making. Some subgroups (caregivers aged 30-37 years with children discharged from the ED) demonstrated greater discomfort with AI. Engaging with these subgroups should be considered when developing AI applications for acute care.
Affiliation(s)
- Sriram Ramgopal: Division of Emergency Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois; Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Illinois
- Jack Kapes: Division of Emergency Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois
- Elizabeth R Alpern: Division of Emergency Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois
- Michael S Carroll: Data Analytics and Reporting; Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Illinois
- Marie Heffernan: Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Illinois; Mary Ann & J. Milburn Smith Child Health Outcomes, Research, and Evaluation Center, Stanley Manne Children's Research Institute, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Illinois
- Norma-Jean E Simon: Division of Emergency Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois; Mary Ann & J. Milburn Smith Child Health Outcomes, Research, and Evaluation Center, Stanley Manne Children's Research Institute, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Illinois
- Todd A Florin: Division of Emergency Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois; Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Illinois
- Michelle L Macy: Division of Emergency Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois; Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Illinois; Mary Ann & J. Milburn Smith Child Health Outcomes, Research, and Evaluation Center, Stanley Manne Children's Research Institute, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Illinois
14
Borondy Kitts A. Patient Perspectives on Artificial Intelligence in Radiology. J Am Coll Radiol 2023; 20:863-867. PMID: 37453601. DOI: 10.1016/j.jacr.2023.05.017.
Abstract
There are two major areas for patient engagement in radiology artificial intelligence (AI). One is the sharing of data for AI development; the second is the use of AI in patient care. In general, individuals support sharing deidentified data if it is used for the common good, to help others with similar health conditions, or for research. However, there is concern about risks to privacy, including reidentification and use for purposes other than those intended. Lack of trust is mentioned as a barrier to data sharing. Individuals want to be involved in the data-sharing process. In the use of AI in medical care, patients generally support AI as an assist to the radiologist but lack trust in unsupervised AI. Patients worry about liability in case of bad outcomes. Patients are concerned about loss of the human connection and the loss of empathy during a vulnerable time in their lives. Patients expressed concern about the risk of discrimination due to bias in AI algorithms. Building trust in AI requires transparency, explainability, security, and privacy protection. Radiologists can take action to prepare their patients to become more trusting of AI. Developing and implementing data-sharing agreements allows patients to voluntarily help in the algorithm development process. Developing AI disclosure guidelines and having AI use disclosure discussions with patients will help them understand the use of AI in their care. As the use of AI increases, there is an opportunity for radiologists to develop and maintain close relationships with their patients and to become more involved in their care.
15
Steerling E, Siira E, Nilsen P, Svedberg P, Nygren J. Implementing AI in healthcare-the relevance of trust: a scoping review. Front Health Serv 2023; 3:1211150. PMID: 37693234; PMCID: PMC10484529. DOI: 10.3389/frhs.2023.1211150.
Abstract
Background Despite its rapid development, the process of translating AI and its potential benefits into practice in healthcare services has been slow. Trust in AI in relation to implementation processes is an important aspect. Without a clear understanding of it, effective implementation strategies cannot be developed, nor will AI advance despite the significant investments and possibilities. Objective This study aimed to explore the scientific literature regarding how trust in AI in relation to implementation in healthcare is conceptualized and what influences trust in AI in relation to implementation in healthcare. Methods This scoping review included five scientific databases. These were searched to identify publications related to the study aims. Articles were included if they were published in English, after 2012, and peer-reviewed. Two independent reviewers conducted an abstract and full-text review, as well as carrying out a thematic analysis with an inductive approach to address the study aims. The review was reported in accordance with the PRISMA-ScR guidelines. Results A total of eight studies were included in the final review. We found that trust was conceptualized in different ways. Most empirical studies had an individual perspective where trust was directed toward the technology's capability. Two studies focused on trust as relational between people in the context of the AI application rather than as having trust in the technology itself. Trust was also understood by its determinants and as having a mediating role, positioned between characteristics and AI use. The thematic analysis yielded three themes: individual characteristics, AI characteristics, and contextual characteristics, which influence trust in AI in relation to implementation in healthcare. Conclusions Findings showed that the conceptualization of trust in AI differed between the studies, as well as which determinants they accounted for as influencing trust.
Few studies looked beyond individual characteristics and AI characteristics. Future empirical research addressing trust in AI in relation to implementation in healthcare should have a more holistic view of the concept to be able to manage the many challenges, uncertainties, and perceived risks.
Affiliation(s)
- Emilie Steerling: School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Elin Siira: School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Per Nilsen: School of Health and Welfare, Halmstad University, Halmstad, Sweden; Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Petra Svedberg: School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens Nygren: School of Health and Welfare, Halmstad University, Halmstad, Sweden
16
Temple S, Rowbottom C, Simpson J. Patient views on the implementation of artificial intelligence in radiotherapy. Radiography (Lond) 2023; 29 Suppl 1:S112-S116. PMID: 36964044. DOI: 10.1016/j.radi.2023.03.006.
Abstract
PURPOSE/OBJECTIVE To date there has been limited research looking at patient views on the implementation of artificial intelligence (AI) in radiotherapy. The aim of this study is to adapt and utilise a validated patient questionnaire to develop an understanding of current patient views on the use of AI in radiotherapy. MATERIALS/METHODS An existing questionnaire, developed to assess understanding of patients' views on the implementation of AI in radiology, was adapted to the field of radiotherapy. The questionnaire was distributed to cancer patients receiving radiotherapy treatment between November 2021 and March 2022. Completed questionnaires were analysed to assess patient levels of positivity or negativity towards AI. Results were grouped into five factors, representing underlying patient perspectives, and correlation of factors with demographic variables was assessed. RESULTS In total, 95 patients participated. Overall, there was a moderately negative patient view towards the use of AI in radiotherapy. Certain factors drew a more negative response than others, for example patients desire significant personal interaction with healthcare professionals during the course of their treatment. No significant correlation was found between the demographics of age and gender and the strength of views towards the use of AI in radiotherapy. CONCLUSION This study has found that there are clear patient concerns around the use of AI in radiotherapy. As the use of AI in this field increases in future years, it will therefore be extremely important to educate and involve patients in the future direction of this technology.
Affiliation(s)
- S Temple: The Clatterbridge Cancer Centre NHS Foundation Trust, 65 Pembroke Place, Liverpool L7 8YA, UK
- C Rowbottom: The Clatterbridge Cancer Centre NHS Foundation Trust, 65 Pembroke Place, Liverpool L7 8YA, UK
- J Simpson: The Clatterbridge Cancer Centre NHS Foundation Trust, 65 Pembroke Place, Liverpool L7 8YA, UK
17
Vodanović M, Subašić M, Milošević D, Savić Pavičin I. Artificial Intelligence in Medicine and Dentistry. Acta Stomatol Croat 2023; 57:70-84. PMID: 37288152; PMCID: PMC10243707. DOI: 10.15644/asc57/1/8.
Abstract
INTRODUCTION Artificial intelligence has been applied in various fields throughout history, but its integration into daily life is more recent. The first applications of AI were primarily in academia and government research institutions, but as technology has advanced, AI has also been applied in industry, commerce, medicine and dentistry. OBJECTIVE Considering that the possibilities of applying artificial intelligence are developing rapidly and that this field is one of the areas with the greatest increase in the number of newly published articles, the aim of this paper was to provide an overview of the literature and to give an insight into the possibilities of applying artificial intelligence in medicine and dentistry. In addition, the aim was to discuss its advantages and disadvantages. CONCLUSION The possibilities of applying artificial intelligence to medicine and dentistry are just being discovered. Artificial intelligence will greatly contribute to developments in medicine and dentistry, as it is a tool that enables development and progress, especially in terms of personalized healthcare that will lead to much better treatment outcomes.
Affiliation(s)
- Marin Vodanović: Department of Dental Anthropology, School of Dental Medicine, University of Zagreb, Croatia; University Hospital Centre Zagreb, Croatia
- Marko Subašić: Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia
- Denis Milošević: Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia
- Ivana Savić Pavičin: Department of Dental Anthropology, School of Dental Medicine, University of Zagreb, Croatia; University Hospital Centre Zagreb, Croatia
18
Ramgopal S, Sanchez-Pinto LN, Horvat CM, Carroll MS, Luo Y, Florin TA. Artificial intelligence-based clinical decision support in pediatrics. Pediatr Res 2023; 93:334-341. PMID: 35906317; PMCID: PMC9668209. DOI: 10.1038/s41390-022-02226-1.
Abstract
Machine learning models may be integrated into clinical decision support (CDS) systems to identify children at risk of specific diagnoses or clinical deterioration to provide evidence-based recommendations. This use of artificial intelligence models in clinical decision support (AI-CDS) may have several advantages over traditional "rule-based" CDS models in pediatric care through increased model accuracy, with fewer false alerts and missed patients. AI-CDS tools must be appropriately developed, provide insight into the rationale behind decisions, be seamlessly integrated into care pathways, be intuitive to use, answer clinically relevant questions, respect the content expertise of the healthcare provider, and be scientifically sound. While numerous machine learning models have been reported in pediatric care, their integration into AI-CDS remains incompletely realized to date. Important challenges in the application of AI models in pediatric care include the relatively lower rates of clinically significant outcomes compared to adults, and the lack of sufficiently large datasets necessary for the development of machine learning models. In this review article, we summarize key concepts related to AI-CDS, its current application to pediatric care, and its potential benefits and risks. IMPACT: The performance of clinical decision support may be enhanced by the utilization of machine learning-based algorithms to improve the predictive performance of underlying models. Artificial intelligence-based clinical decision support (AI-CDS) uses models that are experientially improved through training and are particularly well suited toward high-dimensional data. The application of AI-CDS toward pediatric care remains limited currently but represents an important area of future research.
Affiliation(s)
- Sriram Ramgopal: Division of Emergency Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- L. Nelson Sanchez-Pinto: Division of Critical Care Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, IL, USA; Department of Preventive Medicine (Health and Biomedical Informatics), Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Christopher M. Horvat: Department of Critical Care Medicine, UPMC Children's Hospital of Pittsburgh, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Michael S. Carroll: Data Analytics and Reporting, Ann & Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Yuan Luo: Department of Preventive Medicine (Health and Biomedical Informatics), Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Todd A. Florin: Division of Emergency Medicine, Ann & Robert H. Lurie Children's Hospital of Chicago, Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
19
Ramgopal S, Heffernan ME, Bendelow A, Davis MM, Carroll MS, Florin TA, Alpern ER, Macy ML. Parental Perceptions on Use of Artificial Intelligence in Pediatric Acute Care. Acad Pediatr 2023; 23:140-147. PMID: 35577283. DOI: 10.1016/j.acap.2022.05.006.
Abstract
BACKGROUND Family engagement is critical in the implementation of artificial intelligence (AI)-based clinical decision support tools, which will play an increasing role in health care in the future. We sought to understand parental perceptions of computer-assisted health care of children in the emergency department (ED). METHODS We conducted a population-weighted household panel survey of parents with minor children in their home in a large US city to evaluate perceptions of the use of computer programs for the care of children with respiratory illness. We identified demographics associated with discomfort with AI using survey-weighted logistic regression. RESULTS Surveys were completed by 1620 parents (panel response rate = 49.7%). Most respondents were comfortable with the use of computer programs to determine the need for antibiotics (77.6%) or bloodwork (76.5%), and to interpret radiographs (77.5%). In multivariable analysis, Black non-Hispanic parents reported greater discomfort with AI relative to White non-Hispanic parents (odds ratio [OR] 1.67, 95% confidence interval [CI] 1.03-2.70) as did younger parents (18-25 years) relative to parents ≥46 years (OR 2.48, 95% CI 1.31-4.67). The greatest perceived benefits of computer programs were finding something a human would miss (64.2%, 95% CI 60.9%-67.4%) and obtaining a more rapid diagnosis (59.6%; 56.2%-62.9%). Areas of greatest concern were diagnostic errors (63.0%, 95% CI 59.6%-66.4%), and recommending incorrect treatment (58.9%, 95% CI 55.5%-62.3%). CONCLUSIONS Parents were generally receptive to computer-assisted management of children with respiratory illnesses in the ED, though reservations emerged. Black non-Hispanic and younger parents were more likely to express discomfort about AI.
Affiliation(s)
- Sriram Ramgopal: Division of Emergency Medicine, Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Ill.
- Marie E Heffernan: Mary Ann & J. Milburn Smith Child Health Outcomes, Research, and Evaluation Center, Stanley Manne Children's Research Institute, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill.; Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Ill.
- Anne Bendelow: Data Analytics and Reporting, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Ill.
- Matthew M Davis: Mary Ann & J. Milburn Smith Child Health Outcomes, Research, and Evaluation Center, Stanley Manne Children's Research Institute, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill.; Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Ill.
- Michael S Carroll: Mary Ann & J. Milburn Smith Child Health Outcomes, Research, and Evaluation Center, Stanley Manne Children's Research Institute, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill.; Data Analytics and Reporting, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Ill.
- Todd A Florin: Division of Emergency Medicine, Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Ill.
- Elizabeth R Alpern: Division of Emergency Medicine, Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Ill.
- Michelle L Macy: Division of Emergency Medicine, Department of Pediatrics, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Ill.; Mary Ann & J. Milburn Smith Child Health Outcomes, Research, and Evaluation Center, Stanley Manne Children's Research Institute, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill.
20
van der Zander QEW, van der Ende-van Loon MCM, Janssen JMM, Winkens B, van der Sommen F, Masclee AAM, Schoon EJ. Artificial intelligence in (gastrointestinal) healthcare: patients' and physicians' perspectives. Sci Rep 2022; 12:16779. PMID: 36202957; PMCID: PMC9537305. DOI: 10.1038/s41598-022-20958-2.
Abstract
Artificial intelligence (AI) is entering into daily life and has the potential to play a significant role in healthcare. The aim was to investigate the perspectives (knowledge, experience, and opinion) on AI in healthcare among patients with gastrointestinal (GI) disorders, gastroenterologists, and GI-fellows. In this prospective questionnaire study, 377 GI-patients, 35 gastroenterologists, and 45 GI-fellows participated. Of GI-patients, 62.5% reported being familiar with AI, and 25.0% of GI-physicians had work-related experience with AI. GI-patients preferred their physicians to use AI (mean 3.9) and GI-physicians were willing to use AI (mean 4.4, on a 5-point Likert scale). More GI-physicians believed in an increase in quality of care (81.3%) than GI-patients (64.9%, χ2(2) = 8.2, p = 0.017). GI-fellows expected AI implementation within 6.0 years, gastroenterologists within 4.2 years (t(76) = − 2.6, p = 0.011), and GI-patients within 6.1 years (t(193) = − 2.0, p = 0.047). GI-patients and GI-physicians agreed on the most important advantages of AI in healthcare: improving quality of care, time saving, faster diagnostics, and shorter waiting times. The most important disadvantage for GI-patients was the potential loss of personal contact; for GI-physicians it was insufficiently developed IT infrastructures. GI-patients and GI-physicians hold positive perspectives towards AI in healthcare. Patients were significantly more reserved compared with GI-fellows, and GI-fellows were more reserved compared with gastroenterologists.
Collapse
Affiliation(s)
- Quirine E W van der Zander
- Division of Gastroenterology and Hepatology, Maastricht University Medical Center, Maastricht, The Netherlands. .,GROW, School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands.
| | | | - Janneke M M Janssen
- GROW, School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
| | - Bjorn Winkens
- Department of Methodology and Statistics, Maastricht University, Maastricht, The Netherlands.,CAPHRI, Care and Public Health Research Institute, Maastricht University, Maastricht, The Netherlands
| | - Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
| | - Ad A M Masclee
- Division of Gastroenterology and Hepatology, Maastricht University Medical Center, Maastricht, The Netherlands
- Erik J Schoon
- GROW, School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Division of Gastroenterology and Hepatology, Catharina Hospital Eindhoven, Eindhoven, The Netherlands
21
Rabaan AA, Alhumaid S, Mutair AA, Garout M, Abulhamayel Y, Halwani MA, Alestad JH, Bshabshe AA, Sulaiman T, AlFonaisan MK, Almusawi T, Albayat H, Alsaeed M, Alfaresi M, Alotaibi S, Alhashem YN, Temsah MH, Ali U, Ahmed N. Application of Artificial Intelligence in Combating High Antimicrobial Resistance Rates. Antibiotics (Basel) 2022; 11:antibiotics11060784. [PMID: 35740190 PMCID: PMC9220767 DOI: 10.3390/antibiotics11060784] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2022] [Revised: 05/31/2022] [Accepted: 06/07/2022] [Indexed: 11/16/2022] Open
Abstract
Artificial intelligence (AI) is a branch of science and engineering that focuses on the computational understanding of intelligent behavior. Many human professions, including clinical diagnosis and prognosis, benefit greatly from AI. Antimicrobial resistance (AMR) is among the most critical challenges facing Pakistan and the rest of the world. The rising incidence of AMR has become a significant issue, and authorities must take measures to curb the overuse and incorrect use of antibiotics in order to combat rising resistance rates. The widespread use of antibiotics in clinical practice has not only resulted in drug resistance but has also increased the threat of the emergence of super-resistant bacteria. As AMR rises, clinicians find it more difficult to treat many bacterial infections in a timely manner, and therapy becomes prohibitively costly for patients. To combat the rise in AMR rates, it is critical to implement an institutional antibiotic stewardship program that monitors correct antibiotic use, controls antibiotics, and generates antibiograms. Furthermore, such tools may aid in treating patients in a medical emergency, when a physician cannot wait for bacterial culture results. AI's applications in healthcare might be unlimited, reducing the time it takes to discover new antimicrobial drugs, improving diagnostic and treatment accuracy, and lowering expenses at the same time. Most proposed AI solutions for AMR are meant not to replace a physician's prescription or opinion but to supplement it, serving as a valuable tool that makes clinicians' work easier. When it comes to infectious diseases, AI has the potential to be a game-changer in the battle against antibiotic resistance. Finally, when selecting antibiotic therapy for infections, data from local antibiotic stewardship programs are critical to ensuring that these bacteria are treated quickly and effectively.
Furthermore, organizations such as the World Health Organization (WHO) have underlined the necessity of selecting the appropriate antibiotic and treating for the shortest time feasible to minimize the spread of resistant and invasive resistant bacterial strains.
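The antibiogram mentioned in the abstract is, at its core, a tabulation of percent susceptibility per organism/antibiotic pair from local culture results. A minimal sketch of that idea, using invented organisms, drugs, and results purely for illustration:

```python
# Hypothetical sketch of an antibiogram: percent of isolates susceptible ("S")
# per (organism, antibiotic) pair. All data below are invented for illustration.
from collections import defaultdict

results = [
    ("E. coli", "ciprofloxacin", "S"),
    ("E. coli", "ciprofloxacin", "R"),
    ("E. coli", "ciprofloxacin", "S"),
    ("E. coli", "ampicillin", "R"),
    ("E. coli", "ampicillin", "R"),
    ("S. aureus", "oxacillin", "S"),
    ("S. aureus", "oxacillin", "S"),
]

counts = defaultdict(lambda: [0, 0])  # (organism, drug) -> [susceptible, total]
for organism, drug, result in results:
    pair = counts[(organism, drug)]
    pair[0] += result == "S"
    pair[1] += 1

antibiogram = {k: round(100 * s / n, 1) for k, (s, n) in counts.items()}
for (organism, drug), pct in sorted(antibiogram.items()):
    print(f"{organism:<10} {drug:<14} {pct:5.1f}% susceptible")
```

Real stewardship programs apply reporting thresholds (e.g., minimum isolate counts) and deduplicate repeat cultures per patient, which this sketch omits.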
Affiliation(s)
- Ali A. Rabaan
- Molecular Diagnostic Laboratory, Johns Hopkins Aramco Healthcare, Dhahran 31311, Saudi Arabia
- College of Medicine, Alfaisal University, Riyadh 11533, Saudi Arabia
- Department of Public Health and Nutrition, The University of Haripur, Haripur 22610, Pakistan
- Correspondence: (A.A.R.); (N.A.)
- Saad Alhumaid
- Administration of Pharmaceutical Care, Al-Ahsa Health Cluster, Ministry of Health, Al-Ahsa 31982, Saudi Arabia;
- Abbas Al Mutair
- Research Center, Almoosa Specialist Hospital, Alhassa, Al-Ahsa 36342, Saudi Arabia;
- Almoosa College of Health Sciences, Alhassa, Al-Ahsa 36342, Saudi Arabia
- School of Nursing, Wollongong University, Wollongong, NSW 2522, Australia
- Nursing Department, Prince Sultan Military College of Health Sciences, Dhahran 34313, Saudi Arabia
- Mohammed Garout
- Department of Community Medicine and Health Care for Pilgrims, Faculty of Medicine, Umm Al-Qura University, Makkah 21955, Saudi Arabia;
- Yem Abulhamayel
- Specialty Internal Medicine Department, Johns Hopkins Aramco Healthcare, Dhahran 34465, Saudi Arabia;
- Muhammad A. Halwani
- Department of Medical Microbiology, Faculty of Medicine, Al Baha University, Al Baha 4781, Saudi Arabia;
- Jeehan H. Alestad
- Immunology and Infectious Microbiology Department, University of Glasgow, Glasgow G1 1XQ, UK;
- Microbiology Department, College of Medicine, Jabriya 46300, Kuwait
- Ali Al Bshabshe
- Adult Critical Care Department of Medicine, Division of Adult Critical Care, College of Medicine, King Khalid University, Abha 62561, Saudi Arabia;
- Tarek Sulaiman
- Infectious Diseases Section, Medical Specialties Department, King Fahad Medical City, Riyadh 12231, Saudi Arabia;
- Tariq Almusawi
- Infectious Disease and Critical Care Medicine Department, Dr. Sulaiman Alhabib Medical Group, Alkhobar 34423, Saudi Arabia;
- Department of Medicine, Royal College of Surgeons in Ireland-Medical University of Bahrain, Manama 15503, Bahrain
- Hawra Albayat
- Infectious Disease Department, King Saud Medical City, Riyadh 7790, Saudi Arabia;
- Mohammed Alsaeed
- Infectious Disease Division, Department of Medicine, Prince Sultan Military Medical City, Riyadh 11159, Saudi Arabia;
- Mubarak Alfaresi
- Department of Pathology and Laboratory Medicine, Sheikh Khalifa General Hospital, Umm Al Quwain 499, United Arab Emirates;
- Department of Pathology, College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai 505055, United Arab Emirates
- Sultan Alotaibi
- Molecular Microbiology Department, King Fahad Medical City, Riyadh 11525, Saudi Arabia;
- Yousef N. Alhashem
- Department of Clinical Laboratory Sciences, Mohammed AlMana College of Health Sciences, Dammam 34222, Saudi Arabia;
- Mohamad-Hani Temsah
- Pediatric Department, College of Medicine, King Saud University, Riyadh 11451, Saudi Arabia;
- Urooj Ali
- Department of Biotechnology, Faculty of Life Sciences, University of Central Punjab, Lahore 54000, Pakistan;
- Naveed Ahmed
- Department of Medical Microbiology and Parasitology, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu 16150, Kelantan, Malaysia
- Correspondence: (A.A.R.); (N.A.)
22
Whicher D, Rapp T. The Value of Artificial Intelligence for Healthcare Decision Making-Lessons Learned. VALUE IN HEALTH : THE JOURNAL OF THE INTERNATIONAL SOCIETY FOR PHARMACOECONOMICS AND OUTCOMES RESEARCH 2022; 25:328-330. [PMID: 35227442 DOI: 10.1016/j.jval.2021.12.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Accepted: 12/29/2021] [Indexed: 06/14/2023]
Affiliation(s)
- Thomas Rapp
- University of Paris, Paris, France; Sciences Po, LIEPP, Paris, France
23
Faria B, Perdigão D, Brás J, Macedo L. The Joint Role of Batch Size and Query Strategy in Active Learning-Based Prediction - A Case Study in the Heart Attack Domain. PROGRESS IN ARTIFICIAL INTELLIGENCE 2022. [DOI: 10.1007/978-3-031-16474-3_38] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]