1. Weber S, Wyszynski M, Godefroid M, Plattfaut R, Niehaves B. How do medical professionals make sense (or not) of AI? A social-media-based computational grounded theory study and an online survey. Comput Struct Biotechnol J 2024;24:146-159. PMID: 38434249; PMCID: PMC10904922; DOI: 10.1016/j.csbj.2024.02.009
Abstract
To investigate the opinions and attitudes of medical professionals towards adopting AI-enabled healthcare technologies in their daily work, we used a mixed-methods approach. Study 1 employed a qualitative computational grounded theory approach to analyze 181 Reddit threads from the subreddit r/medicine. Using an unsupervised machine learning clustering method, we identified three key themes: (1) consequences of AI, (2) the physician-AI relationship, and (3) a proposed way forward. In particular, Reddit posts related to the first two themes indicated that medical professionals' fear of being replaced by AI and skepticism toward AI played a major role in the arguments. Moreover, the results suggest that this fear is driven by low or moderate knowledge about AI. Posts related to the third theme focused on factual discussions about how AI and medicine have to be designed to become broadly adopted in health care. Study 2 quantitatively examined the relationship between fear of AI, knowledge about AI, and medical professionals' intention to use AI-enabled technologies. Results based on a sample of 223 medical professionals who participated in an online survey revealed that the intention to use AI technologies increases with knowledge about AI and that this effect is moderated by the fear of being replaced by AI.
Affiliation(s)
- Sebastian Weber: University of Bremen, Digital Public, Bibliothekstr. 1, 28359 Bremen, Germany
- Marc Wyszynski: University of Bremen, Digital Public, Bibliothekstr. 1, 28359 Bremen, Germany
- Marie Godefroid: University of Siegen, Information Systems, Kohlbettstr. 15, 57072 Siegen, Germany
- Ralf Plattfaut: University of Duisburg-Essen, Information Systems and Transformation Management, Universitätsstr. 9, 45141 Essen, Germany
- Bjoern Niehaves: University of Bremen, Digital Public, Bibliothekstr. 1, 28359 Bremen, Germany
2. Staunton C, Biasiotto R, Tschigg K, Mascalzoni D. Artificial Intelligence Needs Data: Challenges Accessing Italian Databases to Train AI. Asian Bioeth Rev 2024;16:423-435. PMID: 39022381; PMCID: PMC11250977; DOI: 10.1007/s41649-024-00282-9
Abstract
Population biobanks are an increasingly important infrastructure to support research and will be a much-needed resource in the delivery of personalised medicine. Artificial intelligence (AI) systems can process and cross-link very large amounts of data quickly and can be used not only to improve research power but also to help with complex diagnosis and prediction of diseases based on health profiles. AI therefore potentially has a critical role to play in personalised medicine, and biobanks can provide much of the necessary baseline data related to healthy populations that will enable the development of AI tools. Developing these tools requires access to personal data and, in particular, sensitive data, which could be accessed from biobanks. Biobanks are a valuable resource for research, but accessing and using the data they contain raises a host of ethical, legal, and social issues (ELSI). These include appropriate consent to manage the collection, storage, use, and sharing of samples and data, and appropriate governance models that provide oversight of the secondary use of samples and data. Biobanks have developed new consent models and governance tools that address some of these ELSI-related issues. In this paper, we consider whether such governance frameworks can enable access to biobank data to develop AI. Because Italy has one of the most restrictive regulatory frameworks on the use of genetic data in Europe, we examine the Italian regulatory framework; we also look at the proposed changes under the European Health Data Space (EHDS). We conclude by arguing that these regulatory frameworks are currently misaligned and that, unless this is addressed, access to data within Italian biobanks to train AI will be severely limited.
Affiliation(s)
- Ciara Staunton: Institute for Biomedicine, Eurac Research, Bolzano, Italy; School of Law, University of KwaZulu-Natal, Durban, South Africa
- Roberta Biasiotto: Institute for Biomedicine, Eurac Research, Bolzano, Italy; Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Modena, Italy
- Deborah Mascalzoni: Institute for Biomedicine, Eurac Research, Bolzano, Italy; Center for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden
3. Jaber Amin MH, Mohamed Elhassan Elmahi MA, Abdelmonim GA, Fadlalmoula GA, Jaber Amin JH, Khalid Alrabee NH, Awad MH, Mohamed Omer ZY, Abu Dayyeh NTI, Hassan Abdalkareem NA, Meisara Seed Ahmed EMO, Hassan Osman HA, Mohamed HAO, Mohamedtoum Babiker AE, Diab Alnour AA, Mohamed Ahmed EA, Elamin Garban EH, Ali Mohammed NS, Mohamed Ahmed KAH, Beig MA, Shafique MA, Mohamed Elhag MG, Elfakey Omer MM, Abuzaid Ali AA, Mohamed Shatir DH, Ali MohamedElhassan HO, Bin Saleh KHA, Ali MB, Elzber Abdalla SS, Alhaj WM, Khalil Mergani ES, Mohammed HH. Knowledge, attitude, and practice of artificial intelligence among medical students in Sudan: a cross-sectional study. Ann Med Surg (Lond) 2024;86:3917-3923. PMID: 38989161; PMCID: PMC11230734; DOI: 10.1097/ms9.0000000000002070
Abstract
Introduction In this cross-sectional study, the authors explored the knowledge, attitudes, and practices related to artificial intelligence (AI) among medical students in Sudan. With AI increasingly impacting healthcare, understanding its integration into medical education is crucial. The study aimed to assess the current state of AI awareness, perceptions, and practical experience among Sudanese medical students by examining their attitudes toward its application in medicine, to identify the factors influencing knowledge levels, and to explore the practical implementation of AI in the medical field. Method A web-based survey was distributed to medical students in Sudan via social media platforms and e-mail during October 2023. The survey included questions on demographic information, knowledge of AI, attitudes toward its applications, and practical experience. Descriptive statistics, χ2 tests, logistic regression, and correlation analyses were performed using SPSS version 26.0. Results Of the 762 participants, the majority exhibited a basic understanding of AI, but detailed knowledge of its applications was limited. Positive attitudes toward the importance of AI in diagnosis, radiology, and pathology were prevalent. However, practical application was infrequent, with only a minority of participants having hands-on experience. Factors influencing knowledge included the lack of a formal curriculum and gender disparities. Conclusion This study highlights the need for comprehensive AI education in medical training programs in Sudan. While participants displayed positive attitudes, there was a notable gap in practical experience. Addressing this gap through targeted educational interventions is crucial for preparing future healthcare professionals to navigate the evolving landscape of AI in medicine.
Recommendations Policy efforts should focus on integrating AI education into the medical curriculum to ensure readiness for the technological advancements shaping the future of healthcare.
4. Cè M, Ibba S, Cellina M, Tancredi C, Fantesini A, Fazzini D, Fortunati A, Perazzo C, Presta R, Montanari R, Forzenigo L, Carrafiello G, Papa S, Alì M. Radiologists' perceptions on AI integration: An in-depth survey study. Eur J Radiol 2024;177:111590. PMID: 38959557; DOI: 10.1016/j.ejrad.2024.111590
Abstract
PURPOSE To assess the perceptions and attitudes of radiologists toward the adoption of artificial intelligence (AI) in clinical practice. METHODS A survey was conducted among members of the SIRM Lombardy. Radiologists' attitudes were assessed comprehensively, covering satisfaction with AI-based tools, propensity for innovation, and optimism for the future. The questionnaire consisted of two sections: the first gathered demographic and professional information using categorical responses, while the second evaluated radiologists' attitudes toward AI through Likert-type responses ranging from 1 to 5 (with 1 representing extremely negative attitudes, 3 a neutral stance, and 5 extremely positive attitudes). Questionnaire refinement involved an iterative process with expert panels and a pilot phase to enhance consistency and eliminate redundancy. Exploratory data analysis employed descriptive statistics and visual assessment of Likert plots, supported by non-parametric tests for subgroup comparisons. RESULTS The survey yielded 232 valid responses. The findings reveal a generally optimistic outlook on AI adoption, especially among young radiologists (<30) and seasoned professionals (>60, p<0.01). However, while 36.2% (84 of 232) of respondents reported daily use of AI-based tools, only a third of these considered their contribution decisive (30%, 25 of 84). AI literacy varied, with a notable proportion feeling inadequately informed (36%, 84 of 232), particularly among younger radiologists (46%, p<0.01). Positive attitudes toward the potential of AI to improve detection, characterization of anomalies, and workload reduction (positive answers >80%) were consistent across subgroups. Radiologists were more skeptical about the role of AI in enhancing decision-making processes, including the choice of further investigation, and in personalized medicine in general. Overall, respondents recognized AI's significant impact on the radiology profession, viewing it as an opportunity (61%, 141 of 232) rather than a threat (18%, 42 of 232), with a majority expressing belief in AI's relevance to future radiologists' career choices (60%, 139 of 232). However, there were some concerns, particularly among breast radiologists (20 of 232 respondents), regarding the potential impact of AI on the profession. Eighty-four percent of respondents still considered the radiologist's final assessment essential. CONCLUSION Our results indicate an overall positive attitude toward the adoption of AI in radiology, moderated by concerns regarding training and practical efficacy. Addressing AI literacy gaps, especially among younger radiologists, is essential, and proactively adapting to technological advancements is crucial to fully leverage AI's potential benefits. Despite the generally positive outlook among radiologists, significant work remains to enhance the integration and widespread use of AI tools in clinical practice.
Affiliation(s)
- Maurizio Cè: Postgraduation School of Radiodiagnostic, University of Milan, via Festa del Perdono 7, 20122 Milan, Italy
- Simona Ibba: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, CDI Centro Diagnostico Italiano S.p.A., Via Simone Saint Bon 20, 20147 Milan, Italy
- Michaela Cellina: Radiology Department, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy
- Chiara Tancredi: University Suor Orsola Benincasa, corso Vittorio Emanuele 292, 80135 Naples, Italy
- Deborah Fazzini: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, CDI Centro Diagnostico Italiano S.p.A., Via Simone Saint Bon 20, 20147 Milan, Italy
- Alice Fortunati: Postgraduation School of Radiodiagnostic, University of Milan, via Festa del Perdono 7, 20122 Milan, Italy
- Chiara Perazzo: Postgraduation School of Radiodiagnostic, University of Milan, via Festa del Perdono 7, 20122 Milan, Italy
- Roberta Presta: University Suor Orsola Benincasa, corso Vittorio Emanuele 292, 80135 Naples, Italy
- Roberto Montanari: University Suor Orsola Benincasa, corso Vittorio Emanuele 292, 80135 Naples, Italy; RE:LAB s.r.l., Via Tamburini 5, 42122 Reggio Emilia, Italy
- Laura Forzenigo: Radiology Department, Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Via Francesco Sforza 35, 20122 Milan, Italy
- Gianpaolo Carrafiello: Postgraduation School of Radiodiagnostic, University of Milan, via Festa del Perdono 7, 20122 Milan, Italy; Radiology Department, Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Via Francesco Sforza 35, 20122 Milan, Italy; Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Mangiagalli 31, 20133 Milan, Italy
- Sergio Papa: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, CDI Centro Diagnostico Italiano S.p.A., Via Simone Saint Bon 20, 20147 Milan, Italy
- Marco Alì: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, CDI Centro Diagnostico Italiano S.p.A., Via Simone Saint Bon 20, 20147 Milan, Italy; Bracco Imaging SpA, Via Caduti di Marcinelle, 20134 Milan, Italy
5. Witkowski K, Okhai R, Neely SR. Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care. BMC Med Ethics 2024;25:74. PMID: 38909180; PMCID: PMC11193174; DOI: 10.1186/s12910-024-01066-4
Abstract
BACKGROUND In an effort to improve the quality of medical care, the philosophy of patient-centered care has become integrated into almost every aspect of the medical community. Despite its widespread acceptance, there are concerns among patients and practitioners that rapid advancements in artificial intelligence may threaten elements of patient-centered care, such as personal relationships with care providers and patient-driven choices. This study explores the extent to which patients are confident in and comfortable with the use of these technologies in their own care and identifies areas that may align with or threaten elements of patient-centered care. METHODS An exploratory, mixed-method approach was used to analyze survey data from 600 US-based adults in the State of Florida. The survey was administered through a leading market research provider (August 10-21, 2023), and responses were collected to be representative of the state's population based on age, gender, race/ethnicity, and political affiliation. RESULTS Respondents were more comfortable with the use of AI in health-related tasks not associated with doctor-patient relationships, such as scheduling patient appointments or follow-ups (84.2%). Fear of losing the 'human touch' associated with doctors was a common theme in the qualitative coding, suggesting a potential conflict between the implementation of AI and patient-centered care. Decision self-efficacy was associated with higher levels of comfort with AI, but respondents also worried about losing decision-making control, workforce changes, and costs. A small majority of participants mentioned that AI could be useful for doctors and lead to more equitable care, but only when used within limits. CONCLUSION The application of AI in medical care is rapidly advancing, but oversight, regulation, and guidance addressing critical aspects of patient-centered care are lacking. While there is no evidence at this time that AI will undermine patient-physician relationships, patients are concerned about the application of AI within medical care, specifically as it relates to their interactions with physicians. Medical guidance on incorporating AI while adhering to the principles of patient-centered care is needed to clarify how AI will augment medical care.
6. Ghadiri P, Yaffe MJ, Adams AM, Abbasgholizadeh-Rahimi S. Primary care physicians' perceptions of artificial intelligence systems in the care of adolescents' mental health. BMC Prim Care 2024;25:215. PMID: 38872128; PMCID: PMC11170885; DOI: 10.1186/s12875-024-02417-1
Abstract
BACKGROUND Given that mental health problems in adolescence may have lifelong impacts, the role of primary care physicians (PCPs) in identifying and managing these issues is important. Artificial intelligence (AI) may offer solutions to the current challenges involved in mental health care. We therefore explored PCPs' challenges in addressing adolescents' mental health, along with their attitudes towards using AI to assist them in their tasks. METHODS We used purposeful sampling to recruit PCPs for a virtual focus group (FG). The virtual FG lasted 75 minutes and was moderated by two facilitators. A live transcription was produced by the online meeting software. Transcribed data were cleaned, followed by a priori and inductive coding and thematic analysis. RESULTS We reached out to 35 potential participants via email. Seven agreed to participate, and ultimately four took part in the FG. PCPs perceived that AI systems have the potential to be cost-effective, relatively credible, and useful for collecting large amounts of patient data. They envisioned AI assisting with tasks such as diagnosis and establishing treatment plans. However, they feared that reliance on AI might result in a loss of clinical competency. PCPs wanted AI systems to be user-friendly, and they were willing to assist in achieving this goal if it was within their scope of practice and they were compensated for their contribution. They stressed the need for regulatory bodies to deal with the medicolegal and ethical aspects of AI and for clear guidelines to reduce or eliminate the potential for patient harm. CONCLUSION This study provides the groundwork for assessing PCPs' perceptions of AI systems' features and characteristics, potential applications, possible negative aspects, and requirements for using them. A future study of adolescents' perspectives on integrating AI into mental healthcare might contribute a fuller understanding of the potential of AI for this population.
Affiliation(s)
- Pooria Ghadiri: Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada; Mila-Quebec AI Institute, Montréal, QC, Canada
- Mark J Yaffe: Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada; St. Mary's Hospital Center of the Integrated University Centre for Health and Social Services of West Island of Montreal, Montréal, QC, Canada
- Alayne Mary Adams: Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada
- Samira Abbasgholizadeh-Rahimi: Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada; Mila-Quebec AI Institute, Montréal, QC, Canada; Lady Davis Institute for Medical Research (LDI), Jewish General Hospital, Montréal, QC, Canada
7. Estrada Alamo CE, Diatta F, Monsell SE, Lane-Fall MB. Artificial Intelligence in Anesthetic Care: A Survey of Physician Anesthesiologists. Anesth Analg 2024;138:938-950. PMID: 38055624; DOI: 10.1213/ane.0000000000006752
Abstract
BACKGROUND This study explored physician anesthesiologists' knowledge, exposure, and perceptions of artificial intelligence (AI) and their associations with attitudes and expectations regarding its use in clinical practice. Understanding anesthesiologists' perspectives is important for the successful integration of AI into anesthesiology, as AI has the potential to revolutionize the field. METHODS A cross-sectional survey of 27,056 US physician anesthesiologists was conducted to assess their knowledge, perceptions, and expectations regarding the use of AI in clinical practice. The primary outcome was attitude toward the use of AI in clinical practice, with scores of 4 or 5 on a 5-point Likert scale indicating positive attitudes. The anticipated impact of AI on various aspects of professional work was measured on a 3-point Likert scale. Logistic regression was used to explore the relationship between participant responses and attitudes toward the use of AI in clinical practice. RESULTS The 2021 survey received 1086 responses (4% response rate). Most respondents were male (71%) and active clinicians (93%); 34% were under 45. A majority of anesthesiologists (61%) had some knowledge of AI, and 48% had a positive attitude toward using AI in clinical practice. While most respondents believed that AI can improve health care efficiency (79%), timeliness (75%), and effectiveness (69%), they were concerned that its integration in anesthesiology could lead to a decreased demand for anesthesiologists (45%) and decreased earnings (45%). Within a decade, respondents expected AI to outperform them in predicting adverse perioperative events (83%), formulating pain management plans (67%), and conducting airway exams (45%). The absence of algorithmic transparency (60%), an ambiguous malpractice environment (47%), and the possibility of medical errors (47%) were cited as significant barriers to the use of AI in clinical practice. Respondents indicated that their motivation to use AI in clinical practice stemmed from its potential to enhance patient outcomes (81%), lower health care expenditures (54%), reduce bias (55%), and boost productivity (53%). Variables associated with positive attitudes toward AI use in clinical practice included male gender (odds ratio [OR], 1.7; P < .001), 20+ years of experience (OR, 1.8; P < .01), higher AI knowledge (OR, 2.3; P = .01), and greater AI openness (OR, 10.6; P < .01). Anxiety about future earnings was associated with negative attitudes toward AI use in clinical practice (OR, 0.54; P < .01). CONCLUSIONS Understanding anesthesiologists' perspectives on AI is essential for its effective integration into anesthesiology, as AI has the potential to revolutionize the field.
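The odds ratios reported in this abstract (e.g., OR 1.7 for male gender) summarize how much higher the odds of a positive attitude are in one group than another. A minimal sketch of the underlying arithmetic, using hypothetical counts rather than the study's data:

```python
def odds_ratio(exposed_pos, exposed_neg, unexposed_pos, unexposed_neg):
    """Odds ratio from a 2x2 table: odds of a positive attitude in the
    exposed group divided by the odds in the unexposed group."""
    return (exposed_pos / exposed_neg) / (unexposed_pos / unexposed_neg)

# Hypothetical counts (not the study's data): positive vs. negative
# attitudes among respondents with high vs. low AI knowledge.
print(round(odds_ratio(60, 40, 40, 60), 2))  # 2.25
```

An OR above 1 (as for knowledge and experience here) indicates higher odds of a positive attitude; the OR of 0.54 for earnings anxiety indicates lower odds. Note that the ORs in the abstract come from logistic regression, which additionally adjusts each estimate for the other variables.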
Affiliation(s)
- Carlos E Estrada Alamo: Department of Anesthesiology, Virginia Mason Medical Center, Seattle, Washington
- Fortunay Diatta: Division of Plastic and Reconstructive Surgery, Department of Surgery, Yale School of Medicine, New Haven, Connecticut
- Sarah E Monsell: Department of Biostatistics, University of Washington, Hans Rosling Center for Population Health, Seattle, Washington
- Meghan B Lane-Fall: Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, Pennsylvania
8. Waheed MA, Liu L. Perceptions of Family Physicians About Applying AI in Primary Health Care: Case Study From a Premier Health Care Organization. JMIR AI 2024;3:e40781. PMID: 38875531; PMCID: PMC11063883; DOI: 10.2196/40781
Abstract
BACKGROUND The COVID-19 pandemic has led to an unforeseen, rapid proliferation of artificial intelligence (AI). The use of AI in health care settings is increasing, as it proves to be a promising tool for transforming health care systems, improving operational and business processes, and efficiently simplifying health care tasks for family physicians and health care administrators. It is therefore necessary to assess family physicians' perspectives on AI and its impact on their job roles. OBJECTIVE This study aims to determine the impact of AI on the management and practices of Qatar's Primary Health Care Corporation (PHCC) in improving health care tasks and service delivery. Furthermore, it seeks to evaluate the impact of AI on family physicians' job roles, including associated risks and ethical ramifications, from their perspective. METHODS We conducted a cross-sectional survey, sending a web-based questionnaire link to 724 practicing family physicians at the PHCC. In total, we received 102 eligible responses. RESULTS Of the 102 respondents, 72 (70.6%) were men, 94 (92.2%) were aged between 35 and 54 years, and 58 (56.9%) were consultants. Overall, 80 of the 102 respondents (78.4%) were aware of AI, with no difference between genders (P=.06) or age groups (P=.12). AI was perceived to play a positive role in improving health care practices at the PHCC (P<.001), managing health care tasks (P<.001), and health care service delivery (P<.001). Family physicians also perceived that their clinical, administrative, and opportunistic health care management roles were positively influenced by AI (P<.001). Furthermore, family physicians perceived that AI improves operational and human resource management (P<.001), does not undermine patient-physician relationships (P<.001), and is not superior to human physicians in clinical judgment (P<.001). However, its inclusion was believed to decrease patient satisfaction (P<.001). AI decision-making and accountability were recognized as ethical risks, along with data protection and confidentiality. Optimism regarding using AI for future medical decisions was low among family physicians. CONCLUSIONS This study indicated a positive perception among family physicians regarding AI integration into primary care settings. AI demonstrates significant potential for enhancing health care task management and overall service delivery at the PHCC. It augments family physicians' roles without replacing them and proves beneficial for operational efficiency, human resource management, and public health during pandemics. While the implementation of AI is anticipated to bring benefits, careful consideration of ethical, privacy, confidentiality, and patient-centric concerns is essential. These insights provide valuable guidance for the strategic integration of AI into health care systems, with a focus on maintaining high-quality patient care and addressing the multifaceted challenges that arise during this transformative process.
Affiliation(s)
- Lu Liu: Bath Business School, Bath Spa University, Bath, United Kingdom
9. Serbaya SH, Khan AA, Surbaya SH, Alzahrani SM. Knowledge, Attitude and Practice Toward Artificial Intelligence Among Healthcare Workers in Private Polyclinics in Jeddah, Saudi Arabia. Adv Med Educ Pract 2024;15:269-280. PMID: 38596622; PMCID: PMC11001543; DOI: 10.2147/amep.s448422
Abstract
Purpose The objective of our study was to assess awareness, attitudes, and practices regarding artificial intelligence (AI) among healthcare workers in private polyclinics in Jeddah, Saudi Arabia. Methods We conducted a cross-sectional study among healthcare workers in private clinics in Jeddah. Data were collected using a structured, validated questionnaire in Arabic and English on awareness, attitudes, and behaviors regarding AI. Cronbach's alpha for the questionnaire ranged from 0.6 to 0.8. Descriptive and bivariate analyses were performed to assess the scores and the association of various sociodemographic variables with awareness, attitudes, and behaviors regarding AI. Multiple linear regression was performed to predict awareness, attitude, and behavior scores from the sociodemographic variables. Results We recruited 361 participants. Approximately 62% of the healthcare workers were female. The largest group (36%) were nurses, while 25% were physicians. The median awareness, attitude, and behavioral scores were 5/6 (IQR 3-6), 5/8 (IQR 4-7), and 0/3 (IQR 0), respectively. Approximately three-fourths (74%) of the healthcare workers believed that they understood the basic computational principles of AI. Only half of the participants were willing to use AI when making future medical decisions. Male healthcare workers had better knowledge scores regarding AI than female healthcare workers (beta = 0.555, p = 0.010), while administrative employees had more negative attitudes toward AI than other employees (beta = 0.049, p = 0.03). Conclusion Healthcare workers had overall good awareness of and an optimistic attitude toward AI. Despite this, the majority were worried about AI replacing their jobs in the future. There is a dire need to educate and sensitize healthcare workers regarding the potential impact of AI on healthcare.
Affiliation(s)
- Suhail Hasan Serbaya: Department of Industrial Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Kingdom of Saudi Arabia
- Adeel Ahmed Khan: Saudi Board Program of Preventive Medicine, Makkah Healthcare Cluster, Makkah, Kingdom of Saudi Arabia
- Saud Hasan Surbaya: Inter-Professional Training Director Administration, Makkah Healthcare Cluster, Makkah, Kingdom of Saudi Arabia
- Safar Majhood Alzahrani: Inter-professional Training Administration, Makkah Healthcare Cluster, Makkah, Kingdom of Saudi Arabia
10. Fazakarley CA, Breen M, Thompson B, Leeson P, Williamson V. Beliefs, experiences and concerns of using artificial intelligence in healthcare: A qualitative synthesis. Digit Health 2024;10:20552076241230075. PMID: 38347935; PMCID: PMC10860471; DOI: 10.1177/20552076241230075
Abstract
Objective Artificial intelligence (AI) is a developing field in the context of healthcare. As this technology continues to be implemented in patient care, there is a growing need to understand the thoughts and experiences of stakeholders in this area to ensure that future AI development and implementation are successful. The aim of this study was to conduct a literature search of qualitative studies exploring the opinions of stakeholders such as clinicians, patients, and technology experts in order to establish the most common themes and ideas presented in this research. Methods A literature search was conducted of existing qualitative research on stakeholder beliefs about the use of AI in healthcare. Twenty-one papers were selected and analysed, resulting in the development of four key themes relating to patient care, patient-doctor relationships, lack of education and resources, and the need for regulations. Results Overall, patients and healthcare workers are open to the use of AI in care and appear positive about its potential benefits. However, concerns were raised relating to the lack of empathy in interactions with AI tools, and the potential risks arising from the data collection needed for AI use and development. Stakeholders in the healthcare, technology, and business sectors all stressed that there was a lack of appropriate education, funding, and guidelines surrounding AI, and that these concerns needed to be addressed to ensure future implementation is safe and suitable for patient care. Conclusion Ultimately, the results of this study highlight the need for communication between stakeholders so that these concerns can be addressed, potential risks mitigated, and benefits maximised for patients and clinicians alike. The results also identify a need for further qualitative research in this area to better understand stakeholder experiences as AI use continues to develop.
Affiliation(s)
- Paul Leeson
- RDM Division of Cardiovascular Medicine, University of Oxford, John Radcliffe Hospital, Oxford, UK
- Victoria Williamson
- King's Centre for Military Health Research, King's College London, London, UK
11
Khavandi S, Zaghloul F, Higham A, Lim E, de Pennington N, Celi LA. Investigating the Impact of Automation on the Health Care Workforce Through Autonomous Telemedicine in the Cataract Pathway: Protocol for a Multicenter Study. JMIR Res Protoc 2023; 12:e49374. [PMID: 38051569] [PMCID: PMC10731565] [DOI: 10.2196/49374]
Abstract
BACKGROUND While digital health innovations are increasingly being adopted by health care organizations, implementation is often carried out without considering the impacts on the frontline staff who will be using the technology and who will be affected by its introduction. The enthusiasm surrounding the use of artificial intelligence (AI)-enabled digital solutions in health care is tempered by uncertainty around how it will change the working lives and practices of health care professionals. Digital enablement can be viewed as facilitating enhanced effectiveness and efficiency by improving services and automating cognitive labor, yet the implementation of such AI technology comes with challenges related to the changes in work practices brought about by automation. This research explores staff experiences before and after care pathway automation with Dora (Ufonia Ltd), an autonomous clinical conversational assistant that automates routine clinical conversations. OBJECTIVE The primary objective is to examine the impact of AI-enabled automation on clinicians, allied health professionals, and administrators who provide or facilitate health care to patients in high-volume, low-complexity care pathways. As routine tasks in care pathways are automated, staff will increasingly "work at the top of their license." The impact of this fundamental change on the professional identity, well-being, and work practices of the individual is poorly understood at present. METHODS We will adopt a multiple case study approach, combining qualitative and quantitative data collection methods, over 2 distinct phases: phase A (preimplementation) and phase B (postimplementation). RESULTS The analysis is expected to reveal the interrelationship between Dora and those affected by its introduction: how tasks and responsibilities have changed or shifted; current tensions and contradictions; ways of working; and the challenges, benefits, and opportunities perceived by those on the frontlines of the health care system. The findings will enable a better understanding of the resistance or susceptibility of different stakeholders within the health care workforce and encourage managerial awareness of differing needs, demands, and uncertainties. CONCLUSIONS The implementation of AI in the health care sector, as well as the body of research on this topic, remains in its infancy. The project's key contribution will be to understand the impact of AI-enabled automation on the health care workforce and their work practices. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/49374.
Affiliation(s)
- Sarah Khavandi
- Ufonia, Oxford, United Kingdom
- Gloucestershire Hospitals NHS Foundation Trust, Cheltenham, United Kingdom
- Imperial College School of Medicine, Imperial College London, London, United Kingdom
- Fatema Zaghloul
- Operations and Management Science, Healthcare and Innovation, University of Bristol, Bristol, United Kingdom
- Aisling Higham
- Ufonia, Oxford, United Kingdom
- Royal Berkshire NHS Foundation Trust, Reading, United Kingdom
- Ernest Lim
- Ufonia, Oxford, United Kingdom
- Department of Computer Science, University of York, York, United Kingdom
- Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA, United States
- Department of Biostatistics, Harvard TH Chan School of Public Health, Boston, MA, United States
12
Chen JS, Lin MC, Yiu G, Thorne C, Kulasa K, Stewart J, Nudleman E, Freeby M, Han MA, Baxter SL. Barriers to Implementation of Teleretinal Diabetic Retinopathy Screening Programs Across the University of California. Telemed J E Health 2023; 29:1810-1818. [PMID: 37256712] [PMCID: PMC10714257] [DOI: 10.1089/tmj.2022.0489]
Abstract
Aim: To describe barriers to implementation of diabetic retinopathy (DR) teleretinal screening programs and artificial intelligence (AI) integration at the University of California (UC). Methods: Institutional representatives from UC Los Angeles, San Diego, San Francisco, Irvine, and Davis were surveyed for the year of their program's initiation, active status at the time of survey (December 2021), number of primary care clinics involved, screening image quality, types of eye providers, image interpretation turnaround time, and billing codes used. Representatives were asked to rate perceptions of barriers to teleretinal DR screening and AI implementation using a 5-point Likert scale. Results: Four UC campuses had active DR teleretinal screening programs at the time of the survey and screened between 246 and 2,123 patients at 1-6 clinics per campus. Sites reported variation in the proportion of poor-quality photos (<5% to 15%) and in average image interpretation time (1-5 days). Patient education, resource availability, and infrastructural support were identified as barriers to DR teleretinal screening. Cost and integration into existing technology infrastructures were identified as barriers to AI integration in DR screening. Conclusions: Despite the potential to increase access to care, several barriers remain to widespread implementation of DR teleretinal screening. More research is needed to develop best practices to overcome these barriers.
Affiliation(s)
- Jimmy S. Chen
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California, USA
- Mark C. Lin
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California, USA
- Glenn Yiu
- Department of Ophthalmology and Vision Science, University of California Davis Health, Sacramento, California, USA
- Christine Thorne
- Department of Family Medicine and Public Health, University of California San Diego, La Jolla, California, USA
- Kristen Kulasa
- Department of Endocrinology, University of California San Diego, La Jolla, California, USA
- Jay Stewart
- Department of Ophthalmology, University of California, San Francisco, San Francisco, California, USA
- Department of Ophthalmology, Zuckerberg San Francisco General Hospital and Trauma Center, San Francisco, California, USA
- Eric Nudleman
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California, USA
- Matthew Freeby
- Department of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Maria A. Han
- Department of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Sally L. Baxter
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California, USA
- Health Department of Biomedical Informatics, University of California San Diego, La Jolla, California, USA
13
Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023; 338:116357. [PMID: 37949020] [DOI: 10.1016/j.socscimed.2023.116357]
Abstract
INTRODUCTION Despite the proliferation of Artificial Intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public and health professionals to understand these issues from multiple viewpoints. METHODOLOGY A search for original research articles using qualitative, quantitative, and mixed methods published between 1 Jan 2001 and 24 Aug 2021 was conducted on six bibliographic databases. Data were extracted and classified into themes representing views on: (i) knowledge and familiarity of AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the Human-AI relationship. RESULTS The final search identified 7,490 records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals generally had a positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout from the need to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job loss due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinician involvement in the development of AI was emphasised. To help successfully implement AI in health care, most participants envisioned that investment in training and education campaigns was necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, concerns remained about sharing data with insurance or technology companies. One key question under this theme was who should be held accountable in the case of adverse events arising from the use of AI. CONCLUSIONS While attitudes and preferences toward AI use in healthcare remain broadly positive, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues to look at the translation of legislation and guidelines into practice to ensure fairness, accountability, transparency, and ethics in AI.
Affiliation(s)
- Vinh Vo
- Centre for Health Economics, Monash University, Australia
- Gang Chen
- Centre for Health Economics, Monash University, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Quynh Nga Do
- Department of Economics, Monash University, Australia
- Maame Esi Woode
- Centre for Health Economics, Monash University, Australia; Monash Data Futures Research Institute, Australia
14
Willis K, Chaudhry UAR, Chandrasekaran L, Wahlich C, Olvera-Barrios A, Chambers R, Bolter L, Anderson J, Barman SA, Fajtl J, Welikala R, Egan C, Tufail A, Owen CG, Rudnicka A. What are the perceptions and concerns of people living with diabetes and National Health Service staff around the potential implementation of AI-assisted screening for diabetic eye disease? Development and validation of a survey for use in a secondary care screening setting. BMJ Open 2023; 13:e075558. [PMID: 37968006] [PMCID: PMC10660949] [DOI: 10.1136/bmjopen-2023-075558]
Abstract
INTRODUCTION The English National Health Service (NHS) Diabetic Eye Screening Programme (DESP) performs around 2.3 million eye screening appointments annually, generating approximately 13 million retinal images that are graded by humans for the presence or severity of diabetic retinopathy. Previous research has shown that automated retinal image analysis systems, including artificial intelligence (AI), can identify images with no disease from those with diabetic retinopathy as safely and effectively as human graders, and could significantly reduce the workload for human graders. Some algorithms can also determine the level of severity of the retinopathy with similar performance to humans. There is a need to examine perceptions and concerns surrounding AI-assisted eye screening among people living with diabetes and NHS staff, should AI be introduced into the DESP, to identify factors that may influence acceptance of this technology. METHODS AND ANALYSIS People living with diabetes and staff from the North East London (NEL) NHS DESP were invited to participate in two respective focus groups to codesign two online surveys exploring their perceptions and concerns around the potential introduction of AI-assisted screening. Focus group participants were representative of the local population in terms of age and ethnicity. Participants' feedback was taken into consideration to update the surveys, which were circulated for further feedback. Surveys will be piloted at the NEL DESP, followed by semistructured interviews to assess accessibility and usability and to validate the surveys. Validated surveys will be distributed by other NHS DESP sites, and also via patient groups on social media, relevant charities and the British Association of Retinal Screeners. Post-survey evaluative interviews will be undertaken among those who consent to participate in further research.
ETHICS AND DISSEMINATION Ethical approval has been obtained by the NHS Research Ethics Committee (IRAS ID: 316631). Survey results will be shared and discussed with focus groups to facilitate preparation of findings for publication and to inform codesign of outreach activities to address concerns and perceptions identified.
Affiliation(s)
- Kathryn Willis
- Population Health Research Institute, St George's University of London, London, UK
- Umar A R Chaudhry
- Population Health Research Institute, St George's University of London, London, UK
- Charlotte Wahlich
- Population Health Research Institute, St George's University of London, London, UK
- Abraham Olvera-Barrios
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Ryan Chambers
- Diabetes and Endocrinology, Homerton Healthcare NHS Foundation Trust, London, UK
- Louis Bolter
- Diabetes and Endocrinology, Homerton Healthcare NHS Foundation Trust, London, UK
- John Anderson
- Diabetes and Endocrinology, Homerton Healthcare NHS Foundation Trust, London, UK
- S A Barman
- School of Computer Science and Mathematics, Kingston University London, London, UK
- Jiri Fajtl
- School of Computer Science and Mathematics, Kingston University London, London, UK
- Roshan Welikala
- School of Computer Science and Mathematics, Kingston University London, London, UK
- Catherine Egan
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Adnan Tufail
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Christopher G Owen
- Population Health Research Institute, St George's University of London, London, UK
- Alicja Rudnicka
- Population Health Research Institute, St George's University of London, London, UK
15
Li LT, Haley LC, Boyd AK, Bernstam EV. Technical/Algorithm, Stakeholder, and Society (TASS) barriers to the application of artificial intelligence in medicine: A systematic review. J Biomed Inform 2023; 147:104531. [PMID: 37884177] [DOI: 10.1016/j.jbi.2023.104531]
Abstract
INTRODUCTION The use of artificial intelligence (AI), particularly machine learning and predictive analytics, has shown great promise in health care. Despite its strong potential, there has been limited use in health care settings. In this systematic review, we aim to determine the main barriers to successful implementation of AI in healthcare and discuss potential ways to overcome these challenges. METHODS We conducted a literature search in PubMed (1/1/2001-1/1/2023). The search was restricted to publications in the English language and to human study subjects. We excluded articles that did not discuss AI, machine learning, predictive analytics, or barriers to the use of these techniques in health care. Using grounded theory methodology, we abstracted concepts to identify major barriers to AI use in medicine. RESULTS We identified a total of 2,382 articles. After reviewing the 306 included papers, we developed 19 major themes, which we categorized into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). These themes included: Lack of Explainability, Need for Validation Protocols, Need for Standards for Interoperability, Need for Reporting Guidelines, Need for Standardization of Performance Metrics, Lack of Plan for Updating Algorithm, Job Loss, Skills Loss, Workflow Challenges, Loss of Patient Autonomy and Consent, Disturbing the Patient-Clinician Relationship, Lack of Trust in AI, Logistical Challenges, Lack of Strategic Plan, Lack of Cost-Effectiveness Analysis and Proof of Efficacy, Privacy, Liability, Bias and Social Justice, and Education. CONCLUSION We identified 19 major barriers to the use of AI in healthcare and categorized them into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). Future studies should expand on barriers in pediatric care and focus on developing clearly defined protocols to overcome these barriers.
Affiliation(s)
- Linda T Li
- Department of Surgery, Division of Pediatric Surgery, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, United States; McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States
- Lauren C Haley
- McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Alexandra K Boyd
- McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Elmer V Bernstam
- McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States; McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
16
Andrade SM, da Silva-Sauer L, de Carvalho CD, de Araújo ELM, Lima EDO, Fernandes FML, Moreira KLDAF, Camilo ME, Andrade LMMDS, Borges DT, da Silva Filho EM, Lindquist AR, Pegado R, Morya E, Yamauti SY, Alves NT, Fernández-Calvo B, de Souza Neto JMR. Identifying biomarkers for tDCS treatment response in Alzheimer's disease patients: a machine learning approach using resting-state EEG classification. Front Hum Neurosci 2023; 17:1234168. [PMID: 37859768] [PMCID: PMC10582524] [DOI: 10.3389/fnhum.2023.1234168]
Abstract
Background Transcranial direct current stimulation (tDCS) is a promising treatment for Alzheimer's Disease (AD). However, identifying objective biomarkers that can predict brain stimulation efficacy remains a challenge. The primary aim of this investigation was to delineate the cerebral regions implicated in AD treatment response, given the current gap in understanding of these regions. To this end, we employed a supervised machine learning algorithm to predict the neurophysiological outcomes of tDCS therapy combined with cognitive intervention in both responders and non-responders to previous tDCS treatment, stratified on the basis of prior cognitive outcomes. Methods The data were obtained through an interventional trial. The study recorded high-resolution electroencephalography (EEG) in 70 AD patients and analyzed spectral power density during a 6 min resting period with eyes open, focusing on a fixed point. The cognitive response was assessed using the AD Assessment Scale-Cognitive Subscale. Training was carried out with a Random Forest classifier under K-fold cross-validation: the dataset was partitioned into K equally sized subsamples, and the model was trained and evaluated K times, each time using K-1 subsamples for training and the remaining subsample as validation data. Results Clinically discriminating EEG biomarkers (features) were found. The ML model identified the brain regions that best predict the response to tDCS combined with cognitive intervention in AD patients; these regions included the channels FC1, F8, CP5, Oz, and F7. Conclusion These findings suggest that resting-state EEG features can provide valuable information on the likelihood of cognitive response to tDCS plus cognitive intervention in AD patients. The identified brain regions may serve as potential biomarkers for predicting treatment response and may guide a patient-centered strategy.
Clinical Trial Registration https://classic.clinicaltrials.gov/ct2/show/NCT02772185?term=NCT02772185&draw=2&rank=1, identifier ID: NCT02772185.
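The K-fold scheme described in the Methods above can be sketched as follows. This is a stdlib-only illustration: the fold bookkeeping matches the description (K roughly equal subsamples, K-1 for training, one held out for validation), but a trivial majority-class predictor stands in for the study's Random Forest, and the labels are toy data rather than EEG features.

```python
from collections import Counter

def kfold_indices(n_samples, k):
    """Partition sample indices 0..n_samples-1 into k roughly equal folds."""
    fold_size, remainder = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        size = fold_size + (1 if i < remainder else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(y, k=5):
    """Train/evaluate k times: k-1 folds for training, one held out each time."""
    folds = kfold_indices(len(y), k)
    accuracies = []
    for holdout in folds:
        train_idx = [j for f in folds if f is not holdout for j in f]
        # Trivial majority-class stand-in for the study's Random Forest:
        # predict the most common label seen in the training folds.
        majority = Counter(y[j] for j in train_idx).most_common(1)[0][0]
        correct = sum(1 for j in holdout if y[j] == majority)
        accuracies.append(correct / len(holdout))
    return accuracies

# Toy labels for 70 "patients" (the study's sample size): 1 = responder.
y = [1 if i % 3 == 0 else 0 for i in range(70)]
scores = cross_validate(y, k=5)
print(scores)
```

Averaging the per-fold accuracies gives the cross-validated performance estimate; a real pipeline would swap the majority-class stand-in for a Random Forest fitted on the spectral-power features of the training folds.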
Affiliation(s)
- Suellen Marinho Andrade
- Aging and Neuroscience Laboratory, Federal University of Paraíba, João Pessoa, Paraíba, Brazil
- Leandro da Silva-Sauer
- Aging and Neuroscience Laboratory, Federal University of Paraíba, João Pessoa, Paraíba, Brazil
- Eloise de Oliveira Lima
- Aging and Neuroscience Laboratory, Federal University of Paraíba, João Pessoa, Paraíba, Brazil
- Fernanda Maria Lima Fernandes
- Center for Alternative and Renewable Energies (CEAR), Department of Electrical Engineering, Federal University of Paraíba, João Pessoa, Paraíba, Brazil
- Maria Eduarda Camilo
- Laboratory of Ergonomics and Health, Department of Physiotherapy, Federal University of Paraíba, João Pessoa, Paraíba, Brazil
- Daniel Tezoni Borges
- Department of Physiotherapy, Federal University of Rio Grande do Norte, Natal, Rio Grande do Norte, Brazil
- Ana Raquel Lindquist
- Department of Physiotherapy, Federal University of Rio Grande do Norte, Natal, Rio Grande do Norte, Brazil
- Rodrigo Pegado
- Department of Physiotherapy, Federal University of Rio Grande do Norte, Natal, Rio Grande do Norte, Brazil
- Edgard Morya
- Edmond and Lily Safra International Institute of Neurosciences (IIN-ELS), Macaíba, Rio Grande do Norte, Brazil
- Seidi Yonamine Yamauti
- Edmond and Lily Safra International Institute of Neurosciences (IIN-ELS), Macaíba, Rio Grande do Norte, Brazil
- Nelson Torro Alves
- Department of Psychology, Federal University of Paraíba, João Pessoa, Brazil
- Bernardino Fernández-Calvo
- Department of Psychology, Federal University of Paraíba, João Pessoa, Brazil
- Department of Psychology, Faculty of Educational Sciences and Psychology, University of Cordoba, Córdoba, Spain
- Maimonides Biomedical Research Institute of Cordoba (IMIBIC), Córdoba, Spain
- José Maurício Ramos de Souza Neto
- Center for Alternative and Renewable Energies (CEAR), Department of Electrical Engineering, Federal University of Paraíba, João Pessoa, Paraíba, Brazil
17
Ahmed MI, Spooner B, Isherwood J, Lane M, Orrock E, Dennison A. A Systematic Review of the Barriers to the Implementation of Artificial Intelligence in Healthcare. Cureus 2023; 15:e46454. [PMID: 37927664] [PMCID: PMC10623210] [DOI: 10.7759/cureus.46454]
Abstract
Artificial intelligence (AI) is expected to improve healthcare outcomes by facilitating early diagnosis, reducing the medical administrative burden, aiding drug development, personalising medical and oncological management, monitoring healthcare parameters on an individual basis, and allowing clinicians to spend more time with their patients. In a post-pandemic world where there is a drive for efficient delivery of healthcare and a need to manage the long waiting times patients face in accessing care, AI has an important role in supporting clinicians and healthcare systems to streamline care pathways and provide timely, high-quality care for patients. Despite AI technologies having been used in healthcare for some decades, and despite all the theoretical potential of AI, uptake in healthcare has been uneven and slower than anticipated, and a number of barriers, both overt and covert, have limited its incorporation. This literature review highlights barriers in six key areas: ethical, technological, liability and regulatory, workforce, social, and patient safety barriers. Defining and understanding the barriers preventing the acceptance and implementation of AI in healthcare settings will enable clinical staff and healthcare leaders to overcome the identified hurdles and incorporate AI technologies for the benefit of patients and clinical staff.
Affiliation(s)
- Molla Imaduddin Ahmed
- Paediatric Respiratory Medicine, University Hospitals of Leicester NHS Trust, Leicester, GBR
- Brendan Spooner
- Intensive Care and Anaesthesia, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, GBR
- John Isherwood
- Hepatobiliary and Pancreatic Surgery, University Hospitals of Leicester NHS Trust, Leicester, GBR
- Mark Lane
- Ophthalmology, Birmingham and Midland Eye Centre, Birmingham, GBR
- Emma Orrock
- Head of Clinical Senates, East and West Midlands Clinical Senate, Leicester, GBR
- Ashley Dennison
- Hepatobiliary and Pancreatic Surgery, University Hospitals of Leicester NHS Trust, Leicester, GBR
18
Lammons W, Silkens M, Hunter J, Shah S, Stavropoulou C. Centering Public Perceptions on Translating AI Into Clinical Practice: Patient and Public Involvement and Engagement Consultation Focus Group Study. J Med Internet Res 2023; 25:e49303. [PMID: 37751234] [PMCID: PMC10565616] [DOI: 10.2196/49303]
Abstract
BACKGROUND Artificial intelligence (AI) is widely considered to be the new technical advancement capable of a large-scale modernization of health care. Considering AI's potential impact on the clinician-patient relationship, health care provision, and health care systems more widely, patients and the wider public should be a part of the development, implementation, and embedding of AI applications in health care. Failing to establish patient and public involvement and engagement (PPIE) can limit AI's impact. OBJECTIVE This study aims to (1) understand patients' and the public's perceived benefits and challenges for AI and (2) clarify how to best conduct PPIE in projects on translating AI into clinical practice, given public perceptions of AI. METHODS We conducted this qualitative PPIE focus-group consultation in the United Kingdom. A total of 17 public collaborators representing 7 National Institute of Health and Care Research Applied Research Collaborations across England participated in 1 of 3 web-based semistructured focus group discussions. We explored public collaborators' understandings, experiences, and perceptions of AI applications in health care. Transcripts were coanalyzed iteratively with 2 public coauthors using thematic analysis. RESULTS We identified 3 primary deductive themes with 7 corresponding inductive subthemes. Primary theme 1, advantages of implementing AI in health care, had 2 subthemes: system improvements, and improved quality of patient care and shared decision-making. Primary theme 2, challenges of implementing AI in health care, had 3 subthemes: challenges with security, bias, and access; public misunderstanding of AI; and lack of human touch in care and decision-making. Primary theme 3, recommendations on PPIE for AI in health care, had 2 subthemes: experience, empowerment, and raising awareness; and acknowledging and supporting diversity in PPIE. CONCLUSIONS Patients and the public can bring unique perspectives to the development, implementation, and embedding of AI in health care. Early PPIE is therefore crucial, not only to safeguard patients but also to increase the chances of public acceptance of AI and of the impact AI can make in terms of outcomes.
Collapse
Affiliation(s)
- William Lammons
- National Institute of Health and Care Research, Applied Research Collaboration North Thames, Department of Applied Health Research, University College London, London, United Kingdom
| | - Milou Silkens
- Erasmus School of Health Policy and Management, Erasmus University, Rotterdam, Netherlands
- Centre for Healthcare Innovation Research, City University of London, London, United Kingdom
| | - Jamie Hunter
- Public co-author, National Institute of Health and Care Research, Applied Research Collaboration North West Coast, Department of Health Services Research, The University of Liverpool, Liverpool, United Kingdom
| | - Sudhir Shah
- Public co-author, National Institute of Health and Care Research, Applied Research Collaboration North Thames, Department of Applied Health Research, University College London, London, United Kingdom
| | - Charitini Stavropoulou
- Centre for Healthcare Innovation Research, City University of London, London, United Kingdom
| |
Collapse
|
19
|
Akinrinmade AO, Adebile TM, Ezuma-Ebong C, Bolaji K, Ajufo A, Adigun AO, Mohammad M, Dike JC, Okobi OE. Artificial Intelligence in Healthcare: Perception and Reality. Cureus 2023; 15:e45594. [PMID: 37868407 PMCID: PMC10587915 DOI: 10.7759/cureus.45594] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/19/2023] [Indexed: 10/24/2023] Open
Abstract
Artificial intelligence (AI) has birthed the new "big thing" in modern medicine. It promises to bring about safer and improved care that will benefit patients and to become a helpful tool in the hands of a skilled physician. Despite this anticipation, however, the implementation and use of AI are still in their early phases, particularly due to legal and ethical considerations that border on "data." These challenges should not be brushed aside but rather be recognized and resolved to enable acceptance by all relevant stakeholders without prejudice. Once these challenges are overcome, AI will truly revolutionize the field of medicine with improved diagnostic accuracy, a reduction in physician burnout, and enhanced treatment modalities. It is therefore paramount that AI be embraced by physicians and integrated into medical education so that physicians are well prepared for their role in the future of medicine.
Collapse
Affiliation(s)
- Abidemi O Akinrinmade
- Medicine and Surgery, Benjamin S. Carson School of Medicine, Babcock University, Ilishan-Remo, NGA
| | - Temitayo M Adebile
- Public Health, Georgia Southern University, Statesboro, USA
- Nephrology, Boston Medical Center, Malden, USA
| | | | | | | | - Aisha O Adigun
- Infectious Diseases, University of Louisville, Louisville, USA
| | - Majed Mohammad
- Geriatrics, Mount Carmel Grove City Hospital, Grove City, USA
| | - Juliet C Dike
- Internal Medicine, University of Calabar, Calabar, NGA
| | - Okelue E Okobi
- Family Medicine, Larkin Community Hospital Palm Springs Campus, Miami, USA
- Family Medicine, Medficient Health Systems, Laurel, USA
- Family Medicine, Lakeside Medical Center, Belle Glade, USA
| |
Collapse
|
20
|
Neher M, Petersson L, Nygren JM, Svedberg P, Larsson I, Nilsen P. Innovation in healthcare: leadership perceptions about the innovation characteristics of artificial intelligence-a qualitative interview study with healthcare leaders in Sweden. Implement Sci Commun 2023; 4:81. [PMID: 37464420 DOI: 10.1186/s43058-023-00458-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Accepted: 06/17/2023] [Indexed: 07/20/2023] Open
Abstract
BACKGROUND Despite the extensive hopes and expectations for value creation resulting from the implementation of artificial intelligence (AI) applications in healthcare, research has predominantly been technology-centric rather than focused on the many changes that are required in clinical practice for the technology to be successfully implemented. The importance of leaders in the successful implementation of innovations in healthcare is well recognised, yet their perspectives on the specific innovation characteristics of AI are still unknown. The aim of this study was therefore to explore the perceptions of leaders in healthcare concerning the innovation characteristics of AI intended to be implemented into their organisation. METHODS The study had a deductive qualitative design, using constructs from the innovation domain in the Consolidated Framework for Implementation Research (CFIR). Interviews were conducted with 26 leaders in healthcare. RESULTS Participants perceived that AI could provide relative advantages when it came to care management, supporting clinical decisions, and the early detection of disease and risk of disease. The development of AI in the organisation itself was perceived as the main current innovation source. The evidence base behind AI technology was questioned, in relation to its transparency, potential quality improvement, and safety risks. Although the participants acknowledged AI to be superior to human action in terms of effectiveness and precision in some situations, they also expressed uncertainty about the adaptability and trialability of AI. Complexities such as the characteristics of the technology, the lack of conceptual consensus about AI, and the need for a variety of implementation strategies to accomplish transformative change in practice were identified, as were uncertainties about the costs involved in AI implementation. 
CONCLUSION Healthcare leaders not only saw potential in the technology and its use in practice, but also felt that AI's opacity limits its evidence strength and that complexities in relation to AI itself and its implementation influence its current use in healthcare practice. More research is needed based on actual experiences using AI applications in real-world situations and their impact on clinical practice. New theories, models, and frameworks may need to be developed to meet challenges related to the implementation of AI in healthcare.
Collapse
Affiliation(s)
- Margit Neher
- School of Health and Welfare, Halmstad University, Box 823, SE-30118, Halmstad, Sweden.
| | - Lena Petersson
- School of Health and Welfare, Halmstad University, Box 823, SE-30118, Halmstad, Sweden
| | - Jens M Nygren
- School of Health and Welfare, Halmstad University, Box 823, SE-30118, Halmstad, Sweden
| | - Petra Svedberg
- School of Health and Welfare, Halmstad University, Box 823, SE-30118, Halmstad, Sweden
| | - Ingrid Larsson
- School of Health and Welfare, Halmstad University, Box 823, SE-30118, Halmstad, Sweden
| | - Per Nilsen
- School of Health and Welfare, Halmstad University, Box 823, SE-30118, Halmstad, Sweden
- Department of Health, Medicine and Caring Sciences, Division of Public Health, Faculty of Health Sciences, Linköping University, Linköping, Sweden
| |
Collapse
|
21
|
Kamal AH, Zakaria OM, Majzoub RA, Nasir EWF. Artificial intelligence in orthopedics: A qualitative exploration of the surgeon perspective. Medicine (Baltimore) 2023; 102:e34071. [PMID: 37327255 PMCID: PMC10270518 DOI: 10.1097/md.0000000000034071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/04/2023] [Accepted: 06/01/2023] [Indexed: 06/18/2023] Open
Abstract
Artificial intelligence (AI) is currently integrated into many medical services and is utilized in many aspects of orthopedic surgery, with a scope ranging from diagnosis to complex surgery. This study aimed to evaluate the perceptions, attitudes, and interests of Sudanese orthopedic surgeons regarding the different applications of AI in orthopedic surgery. This qualitative questionnaire-based study was conducted through an anonymous electronic survey using Google Forms distributed among Sudanese orthopedic surgeons. The questionnaire entailed 4 sections: the first included the participants' demographic data, and the remaining 3 included questions assessing the surgeons' perception of, attitude toward, and interest in AI. The validity and reliability of the questionnaire were tested and piloted before the final dissemination. One hundred twenty-nine surgeons responded to the questionnaire. Most respondents were unfamiliar with the basic concepts of AI; however, most were aware of its use in spinal and joint replacement surgeries. Most respondents had doubts regarding the safety of AI, yet they were highly interested in utilizing AI in many aspects of orthopedic surgery. Orthopedic surgery is a rapidly evolving branch of surgery that involves the adoption of new technologies. Orthopedic surgeons should therefore be encouraged to enrol in research activities to generate more studies and reviews assessing the usefulness and safety of emerging technologies.
Collapse
Affiliation(s)
- Ahmed Hassan Kamal
- Department of Surgery, College of Medicine, King Faisal University, Al-Ahsa, Saudi Arabia
| | | | - Rabab Abbas Majzoub
- Department of Pediatrics, College of Medicine, King Faisal University, Al-Ahsa, Saudi Arabia
| | - El Walid Fadul Nasir
- Department of Public Health & Biostatics, College of Dentistry, King Faisal University, Al-Ahsa, Saudi Arabia
| |
Collapse
|
22
|
Hallowell N, Badger S, McKay F, Kerasidou A, Nellåker C. Democratising or disrupting diagnosis? Ethical issues raised by the use of AI tools for rare disease diagnosis. SSM. QUALITATIVE RESEARCH IN HEALTH 2023; 3:100240. [PMID: 37426704 PMCID: PMC10323712 DOI: 10.1016/j.ssmqr.2023.100240] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Revised: 02/13/2023] [Accepted: 02/13/2023] [Indexed: 07/11/2023]
Abstract
Computational phenotyping (CP) technology uses facial recognition algorithms to classify and potentially diagnose rare genetic disorders on the basis of digitised facial images. This AI technology has a number of research as well as clinical applications, such as supporting diagnostic decision-making. Using the example of CP, we examine stakeholders' views of the benefits and costs of using AI as a diagnostic tool within the clinic. Through a series of in-depth interviews (n = 20) with clinicians, clinical researchers, data scientists, industry representatives, and support group representatives, we report stakeholder views regarding the adoption of this technology in a clinical setting. While most interviewees were supportive of employing CP as a diagnostic tool in some capacity, we observed ambivalence around the potential for artificial intelligence to overcome diagnostic uncertainty in a clinical context. Thus, while there was widespread agreement amongst interviewees concerning the public benefits of AI-assisted diagnosis, namely its potential to increase diagnostic yield and to enable faster, more objective, and more accurate diagnoses by upskilling non-specialists, thereby enabling access to diagnosis that is potentially lacking, interviewees also raised concerns about ensuring algorithmic reliability, expunging algorithmic bias, and the possibility that the use of AI could result in deskilling the specialist clinical workforce. We conclude that, prior to widespread clinical implementation, ongoing reflection is needed regarding the trade-offs required to determine acceptable levels of bias, and that diagnostic AI tools should only be employed as an assistive technology within the dysmorphology clinic.
Collapse
Affiliation(s)
- Nina Hallowell
- The Ethox Centre and Wellcome Centre for Ethics & Humanities, Nuffield Department of Population Health and Big Data Institute, University of Oxford, UK
| | | | - Francis McKay
- The Ethox Centre and Wellcome Centre for Ethics & Humanities, Nuffield Department of Population Health and Big Data Institute, University of Oxford, UK
| | - Angeliki Kerasidou
- The Ethox Centre and Wellcome Centre for Ethics & Humanities, Nuffield Department of Population Health and Big Data Institute, University of Oxford, UK
| | - Christoffer Nellåker
- Nuffield Department of Women's and Reproductive Health and Big Data Institute, University of Oxford, UK
| |
Collapse
|
23
|
Rajamäki J, Gioulekas F, Rocha PAL, Garcia XDT, Ofem P, Tyni J. ALTAI Tool for Assessing AI-Based Technologies: Lessons Learned and Recommendations from SHAPES Pilots. Healthcare (Basel) 2023; 11:healthcare11101454. [PMID: 37239739 DOI: 10.3390/healthcare11101454] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Revised: 05/13/2023] [Accepted: 05/15/2023] [Indexed: 05/28/2023] Open
Abstract
Across European countries, the SHAPES Project is piloting AI-based technologies that could improve healthcare delivery for older people (over 60 years old). This article aims to present a study developed inside the SHAPES Project to find a theoretical framework focused on AI-assisted technology in healthcare for older people living at home, to assess the SHAPES AI-based technologies using the ALTAI tool, and to derive ethical recommendations regarding AI-based technologies for ageing and healthcare. The study highlighted concerns and reservations about AI-based technologies relating to living at home, mobility, accessibility, data exchange procedures in cross-border cases, interoperability, and security. A list of recommendations is provided not only for the healthcare sector but also for other pilot studies.
Collapse
Affiliation(s)
- Jyri Rajamäki
- Unit W, Laurea University of Applied Sciences, 02650 Espoo, Finland
| | - Fotios Gioulekas
- 5th Regional Health Authority of Thessaly & Sterea, 41110 Larissa, Greece
| | - Pedro Alfonso Lebre Rocha
- CINTESIS@RISE, Department of Behavioural Sciences, School of Medicine and Biomedical Sciences (ICBAS), University of Porto, 4200-450 Porto, Portugal
| | - Xavier Del Toro Garcia
- Computer Architecture and Networks Group, School of Computer Science, University of Castilla-La Mancha, Paseo de la Universidad, 4, 13071 Ciudad Real, Spain
| | - Paulinus Ofem
- Unit W, Laurea University of Applied Sciences, 02650 Espoo, Finland
| | - Jaakko Tyni
- Unit W, Laurea University of Applied Sciences, 02650 Espoo, Finland
| |
Collapse
|
24
|
Cinalioglu K, Elbaz S, Sekhon K, Su CL, Rej S, Sekhon H. Exploring Differential Perceptions of Artificial Intelligence in Health Care Among Younger Versus Older Canadians: Results From the 2021 Canadian Digital Health Survey. J Med Internet Res 2023; 25:e38169. [PMID: 37115588 PMCID: PMC10182456 DOI: 10.2196/38169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 11/14/2022] [Accepted: 12/19/2022] [Indexed: 04/29/2023] Open
Abstract
BACKGROUND The changing landscape of health care has led to the incorporation of powerful new technologies like artificial intelligence (AI) to assist with various services across a hospital. However, despite the potential benefits this tool may provide, little work has examined public opinion regarding its use. OBJECTIVE In this study, we aim to explore differences between younger and older Canadians with regard to the level of comfort and perceptions around the adoption and use of AI in health care settings. METHODS Using data from the 2021 Canadian Digital Health Survey (n=12,052), items related to perceptions about the use of AI as well as previous experience and satisfaction with health care were identified. We conducted Mann-Whitney U tests to compare the level of comfort of younger and older Canadians regarding the use of AI in health care for a variety of purposes. Multinomial logistic regression was used to predict the comfort ratings based on categorical indicators. RESULTS Younger Canadians had greater knowledge of AI, but older Canadians were more comfortable with AI applied to monitoring and predicting health conditions, decision support, diagnostic imaging, precision medicine, drug and vaccine development, disease monitoring at home, tracking epidemics, and optimizing workflow to save time. Additionally, for older respondents, higher satisfaction led to higher comfort ratings. Only 1 interaction effect was identified between previous experience, satisfaction, and comfort with AI for drug and vaccine development. CONCLUSIONS Older Canadians may be more open to various applications of AI within health care than younger Canadians. High satisfaction may be a critical criterion for comfort with AI, especially for older Canadians. Additionally, in the case of drug and vaccine development, previous experience may be an important moderating factor.
We conclude that gaining a greater understanding of the perceptions of all health care users is integral to the implementation and sustainability of new and cutting-edge technologies in health care settings.
Collapse
Affiliation(s)
- Karin Cinalioglu
- Department of Psychiatry, Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, QC, Canada
- Department of Psychiatry, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
| | - Sasha Elbaz
- Department of Psychology, Université du Québec à Montréal (UQAM), Montreal, QC, Canada
| | - Kerman Sekhon
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
| | - Chien-Lin Su
- Department of Psychiatry, Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, QC, Canada
| | - Soham Rej
- Department of Psychiatry, Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, QC, Canada
- Department of Psychiatry, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
| | - Harmehr Sekhon
- Department of Psychiatry, Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, QC, Canada
- Department of Psychiatry, McLean Hospital, Harvard Medical School, Boston, MA, United States
| |
Collapse
|
25
|
Ganapathi S, Duggal S. Exploring the experiences and views of doctors working with Artificial Intelligence in English healthcare; a qualitative study. PLoS One 2023; 18:e0282415. [PMID: 36862694 PMCID: PMC9980725 DOI: 10.1371/journal.pone.0282415] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2022] [Accepted: 02/14/2023] [Indexed: 03/03/2023] Open
Abstract
BACKGROUND The National Health Service (NHS) aspires to be a world leader in Artificial Intelligence (AI) in healthcare; however, there are several barriers to translation and implementation. A key enabler of AI within the NHS is the education and engagement of doctors, yet evidence suggests that there is an overall lack of awareness of and engagement with AI. RESEARCH AIM This qualitative study explores the experiences and views of doctor developers working with AI within the NHS, exploring their role within medical AI discourse, their views on the implementation of AI more widely, and how they consider the engagement of doctors with AI technologies may increase in the future. METHODS This study involved eleven semi-structured, one-to-one interviews conducted with doctors working with AI in English healthcare. Data were subjected to thematic analysis. RESULTS The findings demonstrate that there is an unstructured pathway for doctors to enter the field of AI. The doctors described the various challenges they had experienced during their careers, with many arising from the differing demands of operating in a commercial and technological environment. The perceived awareness and engagement among frontline doctors was low, with two prominent barriers being the hype surrounding AI and a lack of protected time. The engagement of doctors is vital for both the development and adoption of AI. CONCLUSIONS AI offers great potential within the medical field but is still in its infancy. For the NHS to leverage the benefits of AI, it must educate and empower current and future doctors. This can be achieved by providing informative education within the medical undergraduate curriculum, protecting time for current doctors to develop understanding, and providing flexible opportunities for NHS doctors to explore this field.
Collapse
Affiliation(s)
- Shaswath Ganapathi
- University of Birmingham Medical School, Birmingham, United Kingdom
- * E-mail:
| | - Sandhya Duggal
- University of Birmingham Medical School, Birmingham, United Kingdom
- The Strategy Unit, Midlands Lancashire Commissioning Support Unit, Leyland, United Kingdom
| |
Collapse
|
26
|
Winter PD, Carusi A. (De)troubling transparency: artificial intelligence (AI) for clinical applications. MEDICAL HUMANITIES 2023; 49:17-26. [PMID: 35545432 PMCID: PMC9985768 DOI: 10.1136/medhum-2021-012318] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 04/05/2022] [Indexed: 06/15/2023]
Abstract
Artificial intelligence (AI) and machine learning (ML) techniques occupy a prominent role in medical research in terms of the innovation and development of new technologies. However, while many perceive AI as a technology of promise and hope, one that allows for earlier and more accurate diagnosis, the acceptance of AI and ML technologies in hospitals remains low. A major reason for this is the lack of transparency associated with these technologies, in particular epistemic transparency, which results in AI disturbing or troubling established knowledge practices in clinical contexts. In this article, we describe the development process of one AI application for a clinical setting. We show how epistemic transparency is negotiated and co-produced in close collaboration between AI developers and clinicians and biomedical scientists, forming the context in which AI is accepted as an epistemic operator. Drawing on qualitative research with collaborative researchers developing an AI technology for the early diagnosis of a rare respiratory disease (pulmonary hypertension/PH), this paper examines how including clinicians and clinical scientists in the collaborative practices of AI developers de-troubles transparency. Our research shows how de-troubling transparency occurs in three dimensions of AI development relating to PH: querying of data sets, building software, and training the model. The close collaboration results in an AI application that is at once social and technological: it integrates and inscribes into the technology the knowledge processes of the different participants in its development. We suggest that it is a misnomer to call these applications 'artificial' intelligence, and that they would be better developed and implemented if they were reframed as forms of sociotechnical intelligence.
Collapse
Affiliation(s)
- Peter David Winter
- School of Sociology, Politics and International Studies, University of Bristol, Bristol, UK
| | - Annamaria Carusi
- Interchange Research, London, UK
- Department of Science and Technology Studies, University College London, London, London, UK
| |
Collapse
|
27
|
Macri R, Roberts SL. The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making. Curr Oncol 2023; 30:2178-2186. [PMID: 36826129 PMCID: PMC9955933 DOI: 10.3390/curroncol30020168] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Revised: 01/28/2023] [Accepted: 02/01/2023] [Indexed: 02/12/2023] Open
Abstract
Clinical applications of artificial intelligence (AI) in healthcare, including in the field of oncology, have the potential to advance diagnosis and treatment. The literature suggests that patient values should be considered in decision making when using AI in clinical care; however, there is a lack of practical guidance for clinicians on how to approach these conversations and incorporate patient values into clinical decision making. We provide a practical, values-based guide for clinicians to assist in critical reflection and the incorporation of patient values into shared decision making when deciding to use AI in clinical care. Values that are relevant to patients, identified in the literature, include trust, privacy and confidentiality, non-maleficence, safety, accountability, beneficence, autonomy, transparency, compassion, equity, justice, and fairness. The guide offers questions for clinicians to consider when contemplating the potential use of AI in their practice; explores illness understanding between the patient and clinician; encourages open dialogue about patient values; reviews all clinically appropriate options; and supports making a shared decision about which option best meets the patient's values. The guide can be used for diverse clinical applications of AI.
Collapse
Affiliation(s)
- Rosanna Macri
- Department of Bioethics, Sinai Health, Toronto, ON M5G 1X5, Canada
- Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON M5T 1P8, Canada
- Department of Radiation Oncology, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5T 1P5, Canada
- Correspondence:
| | - Shannon L. Roberts
- Project-Specific Bioethics Research Volunteer Student, Hennick Bridgepoint Hospital, Sinai Health, Toronto, ON M4M 2B5, Canada
| |
Collapse
|
28
|
AlZaabi A, AlMaskari S, AalAbdulsalam A. Are physicians and medical students ready for artificial intelligence applications in healthcare? Digit Health 2023; 9:20552076231152167. [PMID: 36762024 PMCID: PMC9903019 DOI: 10.1177/20552076231152167] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Accepted: 01/03/2023] [Indexed: 01/28/2023] Open
Abstract
Background Artificial intelligence (AI) healthcare applications are listed in the national visions of some Gulf Cooperation Council countries. A successful use of AI depends on the attitude and perception of medical experts of its applications. Objective To evaluate physicians' and medical students' attitudes toward and perceptions of AI applications in healthcare. Method A web-based survey was disseminated by email to physicians and medical students. Results A total of 293 individuals (82 physicians and 211 medical students) participated (response rate: 27%). Seven participants (9%) reported knowing nothing about AI, while 208 (69%) were aware that it is an emerging field and would like to learn about it. Concerns about AI's impact on physicians' employability were not prominent. Instead, the majority (n=159) agreed that new positions will be created and the job market for those who embrace AI will grow. Participants reported willingness to adopt AI in practice if it were incorporated into international guidelines (30.5%), published in respected scientific journals (17.1%), or included in formal training (12.2%). Almost two in three participants agreed that dedicated courses would help them to implement AI. The most commonly reported problem of AI was its inability to provide opinions in unexpected scenarios. Half of the participants think that both the manufacturer and physicians should be legally liable for medical errors that occur due to AI-based decision support tools, while more than one-third (36.77%) think that physicians who make the final decision should be legally liable. Senior physicians were found to be less familiar with AI and more concerned about physicians' legal liability in case of a medical error. Conclusion Physicians and medical students showed positive attitudes and willingness to learn about AI applications in healthcare.
Introducing AI learning objectives or short courses into the medical curriculum would help equip physicians with the skills needed for an AI-augmented healthcare system.
Collapse
Affiliation(s)
- Adhari AlZaabi
- Human and Clinical Anatomy Department, College of Medicine and Health Science, Muscat, Sultanate of Oman
- Adhari AlZaabi, Human and Clinical Anatomy Department, College of Medicine and Health Science, Alkhodh, P.O. 123, Muscat, Sultanate of Oman
- Abdulrahman AalAbdulsalam, College of Science, Sultan Qaboos University, Muscat, Sultanate of Oman
| | - Saleh AlMaskari
- Human and Clinical Anatomy Department, College of Medicine and Health Science, Muscat, Sultanate of Oman
| | | |
Collapse
|
29
|
Hogg HDJ, Al-Zubaidy M, Talks J, Denniston AK, Kelly CJ, Malawana J, Papoutsi C, Teare MD, Keane PA, Beyer FR, Maniatopoulos G. Stakeholder Perspectives of Clinical Artificial Intelligence Implementation: Systematic Review of Qualitative Evidence. J Med Internet Res 2023; 25:e39742. [PMID: 36626192 PMCID: PMC9875023 DOI: 10.2196/39742] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 09/28/2022] [Accepted: 11/30/2022] [Indexed: 12/03/2022] Open
Abstract
BACKGROUND The rhetoric surrounding clinical artificial intelligence (AI) often exaggerates its effect on real-world care. Limited understanding of the factors that influence its implementation can perpetuate this. OBJECTIVE In this qualitative systematic review, we aimed to identify key stakeholders, consolidate their perspectives on clinical AI implementation, and characterize the evidence gaps that future qualitative research should target. METHODS Ovid-MEDLINE, EBSCO-CINAHL, ACM Digital Library, Science Citation Index-Web of Science, and Scopus were searched for primary qualitative studies on individuals' perspectives on any application of clinical AI worldwide (January 2014-April 2021). The definition of clinical AI includes both rule-based and machine learning-enabled or non-rule-based decision support tools. The language of the reports was not an exclusion criterion. Two independent reviewers performed title, abstract, and full-text screening with a third arbiter of disagreement. Two reviewers assigned the Joanna Briggs Institute 10-point checklist for qualitative research scores for each study. A single reviewer extracted free-text data relevant to clinical AI implementation, noting the stakeholders contributing to each excerpt. The best-fit framework synthesis used the Nonadoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework. To validate the data and improve accessibility, coauthors representing each emergent stakeholder group codeveloped summaries of the factors most relevant to their respective groups. RESULTS The initial search yielded 4437 deduplicated articles, with 111 (2.5%) eligible for inclusion (median Joanna Briggs Institute 10-point checklist for qualitative research score, 8/10). 
Five distinct stakeholder groups emerged from the data: health care professionals (HCPs); patients, carers, and other members of the public; developers; health care managers and leaders; and regulators or policy makers, contributing 1204 (70%), 196 (11.4%), 133 (7.7%), 129 (7.5%), and 59 (3.4%) of 1721 eligible excerpts, respectively. All stakeholder groups independently identified a breadth of implementation factors, with each producing data that were mapped between 17 and 24 of the 27 adapted Nonadoption, Abandonment, Scale-up, Spread, and Sustainability subdomains. Most of the factors that stakeholders found influential in the implementation of rule-based clinical AI also applied to non-rule-based clinical AI, with the exception of intellectual property, regulation, and sociocultural attitudes. CONCLUSIONS Clinical AI implementation is influenced by many interdependent factors, which are in turn influenced by at least 5 distinct stakeholder groups. This implies that effective research and practice of clinical AI implementation should consider multiple stakeholder perspectives. The current underrepresentation of perspectives from stakeholders other than HCPs in the literature may limit the anticipation and management of the factors that influence successful clinical AI implementation. Future research should not only widen the representation of tools and contexts in qualitative research but also specifically investigate the perspectives of all stakeholder HCPs and emerging aspects of non-rule-based clinical AI implementation. TRIAL REGISTRATION PROSPERO (International Prospective Register of Systematic Reviews) CRD42021256005; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=256005. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) RR2-10.2196/33145.
Collapse
Affiliation(s)
- Henry David Jeffry Hogg
- Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Mohaimen Al-Zubaidy
- Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
| | - James Talks
- Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
| | - Alastair K Denniston
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
| | | | - Johann Malawana
- The Healthcare Leadership Academy, London, United Kingdom
- The Institute of Leadership and Management, Birmingham, United Kingdom
| | - Chrysanthi Papoutsi
- Nuffield Department of Primary Healthcare Sciences, Oxford University, Oxford, United Kingdom
| | - Marion Dawn Teare
- Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Pearse A Keane
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Institute of Ophthalmology, University College London, London, United Kingdom
| | - Fiona R Beyer
- Evidence Synthesis Group, Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Gregory Maniatopoulos
- Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Faculty of Business and Law, Northumbria University, Newcastle upon Tyne, United Kingdom
| |
Collapse
|
30
|
Tang L, Li J, Fantus S. Medical artificial intelligence ethics: A systematic review of empirical studies. Digit Health 2023; 9:20552076231186064. [PMID: 37434728 PMCID: PMC10331228 DOI: 10.1177/20552076231186064] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 06/16/2023] [Indexed: 07/13/2023] Open
Abstract
Background Artificial intelligence (AI) technologies are transforming medicine and healthcare. Scholars and practitioners have debated the philosophical, ethical, legal, and regulatory implications of medical AI, and empirical research on stakeholders' knowledge, attitudes, and practices has started to emerge. This study is a systematic review of published empirical studies of medical AI ethics with the goal of mapping the main approaches, findings, and limitations of the scholarship to inform future practice considerations. Methods We searched seven databases for published peer-reviewed empirical studies on medical AI ethics and evaluated them in terms of types of technologies studied, geographic locations, stakeholders involved, research methods used, ethical principles studied, and major findings. Findings Thirty-six studies were included (published 2013-2022). They typically belonged to one of three topics: exploratory studies of stakeholder knowledge and attitudes toward medical AI, theory-building studies testing hypotheses regarding factors contributing to stakeholders' acceptance of medical AI, and studies identifying and correcting bias in medical AI. Interpretation There is a disconnect between the high-level ethical principles and guidelines developed by ethicists and empirical research on the topic, and a need to embed ethicists, in tandem with AI developers, clinicians, patients, and scholars of innovation and technology adoption, in the study of medical AI ethics.
Collapse
Affiliation(s)
- Lu Tang
- Department of Communication and Journalism, Texas A&M University, College Station, TX, USA
| | - Jinxu Li
- Department of Communication and Journalism, Texas A&M University, College Station, TX, USA
| | - Sophia Fantus
- School of Social Work, University of Texas at Arlington, Arlington, TX, USA
| |
Collapse
|
31
|
Catalina QM, Fuster-Casanovas A, Vidal-Alaball J, Escalé-Besa A, Marin-Gomez FX, Femenia J, Solé-Casals J. Knowledge and perception of primary care healthcare professionals on the use of artificial intelligence as a healthcare tool. Digit Health 2023; 9:20552076231180511. [PMID: 37361442 PMCID: PMC10286543 DOI: 10.1177/20552076231180511] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Accepted: 05/19/2023] [Indexed: 06/28/2023] Open
Abstract
Objective The rapid digitisation of healthcare data and the sheer volume being generated mean that artificial intelligence (AI) is becoming a new reality in the practice of medicine. For this reason, describing the perception of primary care (PC) healthcare professionals on the use of AI as a healthcare tool and its impact in radiology is crucial to ensure its successful implementation. Methods Observational cross-sectional study, using the validated Shinners Artificial Intelligence Perception survey, aimed at all PC medical and nursing professionals in the health region of Central Catalonia. Results The survey was sent to 1068 health professionals, of whom 301 responded; 85.7% indicated that they understood the concept of AI, but there were discrepancies in the use of this tool; 65.8% indicated that they had not received any AI training, and 91.4% that they would like to receive training. The mean score for the professional impact of AI was 3.62 points out of 5 (standard deviation (SD) = 0.72), with a higher score among practitioners who had some prior knowledge of and interest in AI. The mean score for preparedness for AI was 2.76 points out of 5 (SD = 0.70), with higher scores for nursing staff and those who use, or do not know if they use, AI. Conclusions The results of this study show that the majority of professionals understood the concept of AI, perceived its impact positively, and felt prepared for its implementation. In addition, despite being limited to a diagnostic aid, the implementation of AI in radiology was a high priority for these professionals.
Collapse
Affiliation(s)
- Queralt Miró Catalina
- Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
- Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
| | - Aïna Fuster-Casanovas
- Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
- Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
| | - Josep Vidal-Alaball
- Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
- Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
- Faculty of Medicine, University of Vic-Central University of Catalonia, Vic, Spain
| | - Anna Escalé-Besa
- Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
- Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
| | - Francesc X Marin-Gomez
- Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l'Atenció Primària de Salut Jordi Gol i Gurina, Sant Fruitós de Bages, Spain
- Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, Sant Fruitós de Bages, Spain
| | - Joaquim Femenia
- Faculty of Medicine, University of Vic-Central University of Catalonia, Vic, Spain
| | - Jordi Solé-Casals
- Data and Signal Processing group, Faculty of Science, Technology and Engineering, University of Vic-Central University of Catalonia, Vic, Spain
- Department of Psychiatry, University of Cambridge, Cambridge, UK
| |
Collapse
|
32
|
Armero W, Gray KJ, Fields KG, Cole NM, Bates DW, Kovacheva VP. A survey of pregnant patients' perspectives on the implementation of artificial intelligence in clinical care. J Am Med Inform Assoc 2022; 30:46-53. [PMID: 36250788 PMCID: PMC9748543 DOI: 10.1093/jamia/ocac200] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 07/17/2022] [Accepted: 10/04/2022] [Indexed: 12/15/2022] Open
Abstract
OBJECTIVE To evaluate and understand pregnant patients' perspectives on the implementation of artificial intelligence (AI) in clinical care with a focus on opportunities to improve healthcare technologies and healthcare delivery. MATERIALS AND METHODS We developed an anonymous survey and enrolled patients presenting to the labor and delivery unit at a tertiary care center from September 2019 to June 2020. We investigated the role and interplay of patient demographic factors, healthcare literacy, understanding of AI, comfort levels with various AI scenarios, and preferences for AI use in clinical care. RESULTS Of the 349 parturients, 57.6% were between 25 and 34 years of age, 90.1% reported college or graduate education, and 69.2% believed the benefits of AI use in clinical care outweighed the risks. Cluster analysis revealed 2 distinct groups: patients more comfortable with clinical AI use (Pro-AI) and those who preferred physician presence (AI-Cautious). Pro-AI patients had a higher degree of education, were more knowledgeable about AI use in their daily lives, and saw AI use as a significant advancement in medicine. AI-Cautious patients reported a lack of human qualities and low trust in the technology as detriments to AI use. DISCUSSION Patient trust and the preservation of the human physician-patient relationship are critical in moving forward with AI implementation in healthcare. Pregnant individuals are cautiously optimistic about AI use in their care. CONCLUSION Our findings provide insights into the status of AI use in perinatal care and provide a platform for driving patient-centered innovations.
Collapse
Affiliation(s)
- William Armero
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA
- David Geffen School of Medicine at UCLA, Los Angeles, California, USA
| | - Kathryn J Gray
- Division of Maternal-Fetal Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA
| | - Kara G Fields
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA
| | - Naida M Cole
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Department of Anesthesia and Critical Care, The University of Chicago, Chicago, Illinois, USA
| | - David W Bates
- Division of General Internal Medicine and Primary Care, Brigham and Women’s Hospital, Boston, Massachusetts, USA
- Department of Health Care Policy and Management, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA
| | - Vesela P Kovacheva
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA
| |
Collapse
|
33
|
Alsobhi M, Sachdev HS, Chevidikunnan MF, Basuodan R, K U DK, Khan F. Facilitators and Barriers of Artificial Intelligence Applications in Rehabilitation: A Mixed-Method Approach. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:15919. [PMID: 36497993 PMCID: PMC9737928 DOI: 10.3390/ijerph192315919] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 11/24/2022] [Accepted: 11/25/2022] [Indexed: 06/17/2023]
Abstract
Artificial intelligence (AI) has been used in physical therapy diagnosis and management for various impairments. Physical therapists (PTs) need to be able to utilize the latest innovative treatment techniques to improve the quality of care. The study aimed to describe PTs' views on AI and to investigate multiple factors as indicators of AI knowledge, attitude, and adoption among PTs. Moreover, the study aimed to identify the barriers to using AI in rehabilitation. Two hundred and thirty-six PTs participated voluntarily in the study. A concurrent mixed-method design was used to document PTs' opinions regarding AI deployment in rehabilitation. A self-administered survey covering several aspects, including demographics, knowledge, uses, advantages, impacts, and barriers limiting AI utilization in rehabilitation, was used. A total of 63.3% of PTs reported that they had not experienced any kind of AI application at work. The major factors predicting a higher level of AI knowledge among PTs were being a non-academic worker (OR = 1.77 [95% CI: 1.01 to 3.12], p = 0.04), being a senior PT (OR = 2.44 [95% CI: 1.40 to 4.22], p = 0.002), and having a Master's/Doctorate degree (OR = 1.97 [95% CI: 1.11 to 3.50], p = 0.02). However, the cost and resource requirements of AI were the most commonly reported barriers to adopting AI-based technologies. The study highlighted a remarkable dearth of AI knowledge among PTs; knowledge of AI and advanced technologies urgently needs to be transferred to PTs.
Collapse
Affiliation(s)
- Mashael Alsobhi
- Department of Physical Therapy, Faculty of Medical Rehabilitation Sciences, King Abdulaziz University, Jeddah 22252, Saudi Arabia
| | - Harpreet Singh Sachdev
- Department of Neurology, All India Institute of Medical Sciences, New Delhi 110029, India
| | - Mohamed Faisal Chevidikunnan
- Department of Physical Therapy, Faculty of Medical Rehabilitation Sciences, King Abdulaziz University, Jeddah 22252, Saudi Arabia
| | - Reem Basuodan
- Department of Rehabilitation Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
| | - Dhanesh Kumar K U
- Nitte Institute of Physiotherapy, Nitte University, Deralaktte, Mangalore 575022, India
| | - Fayaz Khan
- Department of Physical Therapy, Faculty of Medical Rehabilitation Sciences, King Abdulaziz University, Jeddah 22252, Saudi Arabia
| |
Collapse
|
34
|
Hallowell N, Badger S, Sauerbrei A, Nellåker C, Kerasidou A. “I don’t think people are ready to trust these algorithms at face value”: trust and the use of machine learning algorithms in the diagnosis of rare disease. BMC Med Ethics 2022; 23:112. [DOI: 10.1186/s12910-022-00842-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 10/17/2022] [Indexed: 11/17/2022] Open
Abstract
Abstract
Background
As the use of AI becomes more pervasive, and computerised systems are used in clinical decision-making, the role of trust in, and the trustworthiness of, AI tools will need to be addressed. Using the case of computational phenotyping to support the diagnosis of rare disease in dysmorphology, this paper explores under what conditions we could place trust in medical AI tools, which employ machine learning.
Methods
Semi-structured qualitative interviews (n = 20) with stakeholders (clinical geneticists, data scientists, bioinformaticians, industry and patient support group spokespersons) who design and/or work with computational phenotyping (CP) systems. The method of constant comparison was used to analyse the interview data.
Results
Interviewees emphasized the importance of establishing trust in the use of CP technology in identifying rare diseases. Trust was formulated in two interrelated ways in these data. First, interviewees talked about the importance of using CP tools within the context of a trust relationship, arguing that patients will need to trust clinicians who use AI tools and that clinicians will need to trust AI developers if they are to adopt this technology. Second, they described a need to establish trust in the technology itself, or in the knowledge it provides—epistemic trust. Interviewees suggested CP tools used for the diagnosis of rare diseases might be perceived as more trustworthy if the user is able to vouch for the technology’s reliability and accuracy and if the person using/developing them is trusted.
Conclusion
This study suggests we need to take deliberate and meticulous steps to design reliable or confidence-worthy AI systems for use in healthcare. In addition, we need to devise reliable or confidence-worthy processes that would give rise to reliable systems; these could take the form of RCTs and/or systems of accountability, transparency, and responsibility that would signify the epistemic trustworthiness of these tools.
Collapse
|
35
|
Russell S, Kumar A. Providing Care: Intrinsic Human-Machine Teams and Data. ENTROPY (BASEL, SWITZERLAND) 2022; 24:1369. [PMID: 37420389 DOI: 10.3390/e24101369] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 09/23/2022] [Accepted: 09/25/2022] [Indexed: 07/09/2023]
Abstract
Despite the many successes of artificial intelligence in healthcare applications where human-machine teaming is an intrinsic characteristic of the environment, there is little work proposing methods for combining quantitative health data features with human expertise insights. A method for incorporating qualitative expert perspectives in machine learning training data is proposed. The method implements an entropy-based consensus construct that minimizes the challenges of qualitative-scale data such that they can be combined with quantitative measures in a critical clinical event (CCE) vector. Specifically, the CCE vector minimizes the effects where (a) the sample size is too small, (b) the data may not be normally distributed, or (c) the data are from Likert scales, which are ordinal, so parametric statistics cannot be used. The incorporation of human perspectives in machine learning training data encodes human considerations in the subsequent machine learning model. This encoding provides a basis for increasing explainability, understandability, and ultimately trust in AI-based clinical decision support systems (CDSSs), thereby addressing human-machine teaming concerns. A discussion of applying the CCE vector in a CDSS regime and implications for machine learning are also presented.
Collapse
Affiliation(s)
- Stephen Russell
- Department of Research, Opportunities and Innovation in Data Science, Jackson Health System, Miami, FL 33136, USA
| | - Ashwin Kumar
- Department of Research, Opportunities and Innovation in Data Science, Jackson Health System, Miami, FL 33136, USA
| |
Collapse
|
36
|
Akhtar N, Khan N, Qayyum S, Qureshi MI, Hishan SS. Efficacy and pitfalls of digital technologies in healthcare services: A systematic review of two decades. Front Public Health 2022; 10:869793. [PMID: 36187628 PMCID: PMC9523565 DOI: 10.3389/fpubh.2022.869793] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2022] [Accepted: 08/08/2022] [Indexed: 01/21/2023] Open
Abstract
The use of technology in the healthcare sector and its medical practices, from patient record maintenance to diagnostics, has significantly improved the healthcare emergency management system. Against that backdrop, it is crucial to explore the role and challenges of these technologies in the healthcare sector. Therefore, this study provides a systematic review of the literature on technological developments in the healthcare sector and deduces their pros and cons. We curated the published studies from the Web of Science and Scopus databases using PRISMA 2015 guidelines. After mining the data, we selected 55 studies for the systematic literature review and bibliometric analysis. The study explores four significant classifications of technological development in healthcare: (a) digital technologies, (b) artificial intelligence, (c) blockchain, and (d) the Internet of Things. The novel contribution of the current study indicates that digital technologies have significantly influenced healthcare services: the introduction of the electronic health record began a new era of digital healthcare, while robotic surgery and machine learning algorithms may replace practitioners in the future. However, a considerable number of studies have criticized these technologies in the health sector on the grounds of trust, security, privacy, and accuracy. The study suggests that future research on technological development in healthcare services should take these issues into account for the sustainable development of the healthcare sector.
Collapse
Affiliation(s)
- Nadeem Akhtar
- School of Urban Culture, South China Normal University, Foshan, China
| | - Nohman Khan
- UniKL Business School, Universiti Kuala Lumpur, Kuala Lumpur, Malaysia
| | - Shazia Qayyum
- Institute of Applied Psychology, University of the Punjab, Lahore, Pakistan
| | - Muhammad Imran Qureshi
- Teesside University International Business School, Middlesbrough, United Kingdom
| | - Sanil S. Hishan
- Azman Hashim International Business School, Universiti Teknologi Malaysia, Kuala Lumpur, Malaysia
- Independent Researcher, THRIVE Project, Brisbane, QLD, Australia
| |
Collapse
|
37
|
Terry AL, Kueper JK, Beleno R, Brown JB, Cejic S, Dang J, Leger D, McKay S, Meredith L, Pinto AD, Ryan BL, Stewart M, Zwarenstein M, Lizotte DJ. Is primary health care ready for artificial intelligence? What do primary health care stakeholders say? BMC Med Inform Decis Mak 2022; 22:237. [PMID: 36085203 PMCID: PMC9461192 DOI: 10.1186/s12911-022-01984-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Accepted: 09/02/2022] [Indexed: 11/10/2022] Open
Abstract
Abstract
Background
Effective deployment of AI tools in primary health care requires the engagement of practitioners in the development and testing of these tools, and a match between the resulting AI tools and clinical/system needs in primary health care. To set the stage for these developments, we must gain a more in-depth understanding of the views of practitioners and decision-makers about the use of AI in primary health care. The objective of this study was to identify key issues regarding the use of AI tools in primary health care by exploring the views of primary health care and digital health stakeholders.
Methods
This study utilized a descriptive qualitative approach, including thematic data analysis. Fourteen in-depth interviews were conducted with primary health care and digital health stakeholders in Ontario. NVivo software was utilized in the coding of the interviews.
Results
Five main interconnected themes emerged: (1) Mismatch Between Envisioned Uses and Current Reality—denoting the importance of potential applications of AI in primary health care practice, with a recognition of the current reality characterized by a lack of available tools; (2) Mechanics of AI Don’t Matter: Just Another Tool in the Toolbox—reflecting an interest in what value AI tools could bring to practice, rather than concern with the mechanics of the AI tools themselves; (3) AI in Practice: A Double-Edged Sword—the possible benefits of AI use in primary health care contrasted with fundamental concern about the possible threats posed by AI in terms of clinical skills and capacity, mistakes, and loss of control; (4) The Non-Starters: A Guarded Stance Regarding AI Adoption in Primary Health Care—broader concerns centred on the ethical, legal, and social implications of AI use in primary health care; and (5) Necessary Elements: Facilitators of AI in Primary Health Care—elements required to support the uptake of AI tools, including co-creation, availability and use of high quality data, and the need for evaluation.
Conclusion
The use of AI in primary health care may have a positive impact, but many factors need to be considered regarding its implementation. This study may help to inform the development and deployment of AI tools in primary health care.
Collapse
|
38
|
Li B, de Mestral C, Mamdani M, Al-Omran M. Perceptions of Canadian vascular surgeons toward artificial intelligence and machine learning. J Vasc Surg Cases Innov Tech 2022; 8:466-472. [PMID: 36016703 PMCID: PMC9396444 DOI: 10.1016/j.jvscit.2022.06.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 06/06/2022] [Indexed: 11/16/2022] Open
Abstract
Background Artificial intelligence (AI) and machine learning (ML) are rapidly advancing fields with increasing utility in health care. We conducted a survey to determine the perceptions of Canadian vascular surgeons toward AI/ML. Methods An online questionnaire was distributed to 162 members of the Canadian Society for Vascular Surgery. Self-reported knowledge, attitudes, and perceptions with respect to potential applications, limitations, and facilitators of AI/ML were assessed. Results Overall, 50 of the 162 Canadian vascular surgeons (31%) responded to the survey. Most respondents were aged 30 to 59 years (72%), male (80%), and White (67%) and practiced in academic settings (72%). One half of the participants reported that their knowledge of AI/ML was poor or very poor. Most were excited or very excited about AI/ML (66%) and were interested or very interested in learning more about the field (83.7%). The respondents believed that AI/ML would be useful or very useful for diagnosis (62%), prognosis (72%), patient selection (56%), image analysis (64%), intraoperative guidance (52%), research (88%), and education (80%). The limitations that the participants were most concerned about were errors leading to patient harm (42%), bias based on patient demographics (42%), and lack of clinician knowledge and skills in AI/ML (40%). Most were not concerned or were mildly concerned about job replacement (86%). The factors that were most important to encouraging clinicians to use AI/ML models were improvements in efficiency (88%), accurate predictions (84%), and ease of use (84%). The comments from respondents focused on the pressing need for the implementation of AI/ML in vascular surgery owing to the potential to improve care delivery. Conclusions Canadian vascular surgeons have positive views on AI/ML and believe this technology can be applied to multiple aspects of the specialty to improve patient care, research, and education. 
Current self-reported knowledge is poor, although interest was expressed in learning more about the field. The facilitators and barriers to the effective use of AI/ML identified in the present study can guide future development of these tools in vascular surgery.
Collapse
|
39
|
Choudhury A. Factors influencing clinicians' willingness to use an AI-based clinical decision support system. Front Digit Health 2022; 4:920662. [PMID: 36339516 PMCID: PMC9628998 DOI: 10.3389/fdgth.2022.920662] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 08/01/2022] [Indexed: 11/07/2022] Open
Abstract
Background Given the opportunities created by artificial intelligence (AI) based decision support systems in healthcare, the vital question is whether clinicians are willing to use this technology as an integral part of the clinical workflow. Purpose This study leverages validated questions to formulate an online survey and consequently explore cognitive human factors influencing clinicians' intention to use an AI-based Blood Utilization Calculator (BUC), an AI system embedded in the electronic health record that delivers data-driven personalized recommendations for the number of packed red blood cells to transfuse for a given patient. Method A purposeful sampling strategy was used to exclusively include BUC users who are clinicians in a university hospital in Wisconsin. We recruited 119 BUC users who completed the entire survey. We leveraged structural equation modeling to capture the direct and indirect effects of “AI Perception” and “Expectancy” on clinicians' Intention to use the technology when mediated by “Perceived Risk”. Results The findings indicate a significant negative relationship concerning the direct impact of AI's perception on BUC Risk (β = −0.23, p < 0.001). Similarly, Expectancy had a significant negative effect on Risk (β = −0.49, p < 0.001). We also noted a significant negative impact of Risk on the Intent to use BUC (β = −0.34, p < 0.001). Regarding the indirect effect of Expectancy on the Intent to Use BUC, the findings show a significant positive impact mediated by Risk (β = 0.17, p = 0.004). The study noted a significant positive and indirect effect of AI Perception on the Intent to Use BUC when mediated by risk (β = 0.08, p = 0.027). Overall, this study demonstrated the influences of expectancy, perceived risk, and perception of AI on clinicians' intent to use BUC (an AI system). 
AI developers need to emphasize the benefits of AI technology, ensure ease of use (effort expectancy), clarify the system's potential (performance expectancy), and minimize the risk perceptions by improving the overall design. Conclusion Identifying the factors that determine clinicians' intent to use AI-based decision support systems can help improve technology adoption and use in the healthcare domain. Enhanced and safe adoption of AI can uplift the overall care process and help standardize clinical decisions and procedures. An improved AI adoption in healthcare will help clinicians share their everyday clinical workload and make critical decisions.
Collapse
|
40
|
Frost EK, Bosward R, Aquino YSJ, Braunack-Mayer A, Carter SM. Public views on ethical issues in healthcare artificial intelligence: protocol for a scoping review. Syst Rev 2022; 11:142. [PMID: 35841073 PMCID: PMC9288036 DOI: 10.1186/s13643-022-02012-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Accepted: 06/25/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND In recent years, innovations in artificial intelligence (AI) have led to the development of new healthcare AI (HCAI) technologies. Whilst some of these technologies show promise for improving the patient experience, ethicists have warned that AI can introduce and exacerbate harms and wrongs in healthcare. It is important that HCAI reflects the values that matter to people. However, involving patients and publics in research about AI ethics remains challenging due to relatively limited awareness of HCAI technologies. This scoping review aims to map how the existing literature on publics' views on HCAI addresses key issues in AI ethics and governance. METHODS We developed a search query to conduct a comprehensive search of PubMed, Scopus, Web of Science, CINAHL, and Academic Search Complete from January 2010 onwards. We will include primary research studies which document publics' or patients' views on machine learning HCAI technologies. A coding framework has been designed and will be used to capture qualitative and quantitative data from the articles. Two reviewers will code a proportion of the included articles, and any discrepancies will be discussed amongst the team, with changes made to the coding framework accordingly. Final results will be reported quantitatively and qualitatively, examining how each AI ethics issue has been addressed by the included studies. DISCUSSION Consulting publics and patients about the ethics of HCAI technologies and innovations can offer important insights to those seeking to implement HCAI ethically and legitimately. This review will explore how ethical issues are addressed in the literature examining publics' and patients' views on HCAI, with the aim of determining the extent to which publics' views on HCAI ethics have been addressed in existing research. This has the potential to support the development of implementation processes and regulation for HCAI that incorporate publics' values and perspectives.
Collapse
Affiliation(s)
- Emma Kellie Frost
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Northfields Ave, Wollongong, NSW, 2522, Australia.
| | - Rebecca Bosward
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Northfields Ave, Wollongong, NSW, 2522, Australia
| | - Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Northfields Ave, Wollongong, NSW, 2522, Australia
| | - Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Northfields Ave, Wollongong, NSW, 2522, Australia
| | - Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Northfields Ave, Wollongong, NSW, 2522, Australia
| |
Collapse
|
41
|
Petersson L, Larsson I, Nygren JM, Nilsen P, Neher M, Reed JE, Tyskbo D, Svedberg P. Challenges to implementing artificial intelligence in healthcare: a qualitative interview study with healthcare leaders in Sweden. BMC Health Serv Res 2022; 22:850. [PMID: 35778736 PMCID: PMC9250210 DOI: 10.1186/s12913-022-08215-8] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Accepted: 06/20/2022] [Indexed: 12/23/2022] Open
Abstract
BACKGROUND Artificial intelligence (AI) for healthcare presents potential solutions to some of the challenges faced by health systems around the world. However, it is well established in implementation and innovation research that novel technologies are often resisted by healthcare leaders, which contributes to their slow and variable uptake. Although research on various stakeholders' perspectives on AI implementation has been undertaken, very few studies have investigated leaders' perspectives on the issue of AI implementation in healthcare. It is essential to understand the perspectives of healthcare leaders, because they have a key role in the implementation process of new technologies in healthcare. The aim of this study was to explore challenges perceived by leaders in a regional Swedish healthcare setting concerning the implementation of AI in healthcare. METHODS The study takes an explorative qualitative approach. Individual, semi-structured interviews were conducted from October 2020 to May 2021 with 26 healthcare leaders. The analysis was performed using qualitative content analysis, with an inductive approach. RESULTS The analysis yielded three categories, representing three types of challenge perceived to be linked with the implementation of AI in healthcare: 1) Conditions external to the healthcare system; 2) Capacity for strategic change management; 3) Transformation of healthcare professions and healthcare practice. CONCLUSIONS In conclusion, healthcare leaders highlighted several implementation challenges in relation to AI within and beyond the healthcare system in general and their organisations in particular. The challenges comprised conditions external to the healthcare system, internal capacity for strategic change management, along with transformation of healthcare professions and healthcare practice. 
The results point to the need to develop implementation strategies across healthcare organisations to address challenges to AI-specific capacity building. Laws and policies are needed to regulate the design and execution of effective AI implementation strategies. There is a need to invest time and resources in implementation processes, with collaboration across healthcare, county councils, and industry partnerships.
Collapse
Affiliation(s)
- Lena Petersson
- School of Health and Welfare, Halmstad University, Box 823, 301 18, Halmstad, Sweden.
| | - Ingrid Larsson
- School of Health and Welfare, Halmstad University, Box 823, 301 18, Halmstad, Sweden
| | - Jens M Nygren
- School of Health and Welfare, Halmstad University, Box 823, 301 18, Halmstad, Sweden
| | - Per Nilsen
- School of Health and Welfare, Halmstad University, Box 823, 301 18, Halmstad, Sweden.,Department of Health, Medicine and Caring Sciences, Division of Public Health, Faculty of Health Sciences, Linköping University, Linköping, Sweden
| | - Margit Neher
- School of Health and Welfare, Halmstad University, Box 823, 301 18, Halmstad, Sweden.,Department of Rehabilitation, School of Health Sciences, Jönköping University, Jönköping, Sweden
| | - Julie E Reed
- School of Health and Welfare, Halmstad University, Box 823, 301 18, Halmstad, Sweden
| | - Daniel Tyskbo
- School of Health and Welfare, Halmstad University, Box 823, 301 18, Halmstad, Sweden
| | - Petra Svedberg
- School of Health and Welfare, Halmstad University, Box 823, 301 18, Halmstad, Sweden
| |
Collapse
|
42
|
Cornelissen L, Egher C, van Beek V, Williamson L, Hommes D. The Drivers of Acceptance of Artificial Intelligence–Powered Care Pathways Among Medical Professionals: Web-Based Survey Study. JMIR Form Res 2022; 6:e33368. [PMID: 35727614 PMCID: PMC9384807 DOI: 10.2196/33368] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2021] [Revised: 04/14/2022] [Accepted: 05/02/2022] [Indexed: 11/13/2022] Open
Abstract
Background
Artificial intelligence (AI) has proven beneficial in several health care areas. Nevertheless, the uptake of AI in health care delivery remains poor. Although the acceptance of AI-based technologies among medical professionals is a key barrier to their implementation, knowledge about what informs such attitudes is scarce.
Objective
The aim of this study was to identify and examine factors that influence the acceptability of AI-based technologies among medical professionals.
Methods
A survey was developed based on the Unified Theory of Acceptance and Use of Technology model, which was extended by adding the predictor variables perceived trust, anxiety and innovativeness, and the moderator profession. The web-based survey was completed by 67 medical professionals in the Netherlands. The data were analyzed by performing a multiple linear regression analysis followed by a moderating analysis using the Hayes PROCESS macro (SPSS; version 26.0, IBM Corp).
Results
Multiple linear regression showed that the model explained 75.4% of the variance in the acceptance of AI-powered care pathways (adjusted R2=0.754; F9,57=22.548; P<.001). The variables medical performance expectancy (β=.465; P<.001), effort expectancy (β=–.215; P=.005), perceived trust (β=.221; P=.007), nonmedical performance expectancy (β=.172; P=.08), facilitating conditions (β=–.160; P=.005), and professional identity (β=.156; P=.06) were identified as significant predictors of acceptance. Social influence of patients (β=.042; P=.63), anxiety (β=.021; P=.84), and innovativeness (β=.078; P=.30) were not identified as significant predictors. A moderating effect of gender was found on the relationship between facilitating conditions and acceptance (β=–.406; P=.09).
Conclusions
Medical performance expectancy was the most significant predictor of AI-powered care pathway acceptance among medical professionals. Nonmedical performance expectancy, effort expectancy, perceived trust, and professional identity were also found to significantly influence the acceptance of AI-powered care pathways. These factors should be addressed for successful implementation of AI-powered care pathways in health care delivery. The study was limited to medical professionals in the Netherlands, where uptake of AI technologies is still in an early stage. Follow-up multinational studies should further explore the predictors of acceptance of AI-powered care pathways over time, in different geographies, and with bigger samples.
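The two-step analysis described in this abstract (a multiple linear regression followed by a moderation test) can be sketched as follows. Everything in the sketch is a synthetic illustration: the variable names, coefficients, and data are invented, and this is not the study's dataset or the Hayes PROCESS macro itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 67  # same scale as the study's sample; the data themselves are synthetic

# Hypothetical standardized predictor scores and a 0/1-coded moderator
fac_cond = rng.normal(size=n)                      # "facilitating conditions"
perf_exp = rng.normal(size=n)                      # "performance expectancy"
gender = rng.integers(0, 2, size=n).astype(float)  # illustrative moderator

# Synthetic acceptance outcome with a built-in interaction effect
y = 0.5 * perf_exp - 0.4 * fac_cond * gender + rng.normal(scale=0.5, size=n)

# Design matrix including an interaction term: estimating the interaction
# coefficient is what a moderation analysis boils down to
X = np.column_stack([np.ones(n), perf_exp, fac_cond, gender, fac_cond * gender])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Standard errors and a two-sided p-value for the interaction coefficient
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stat = beta[-1] / se[-1]
p_interaction = 2 * stats.t.sf(abs(t_stat), dof)

# Adjusted R^2, the fit statistic the abstract reports
ss_res = resid @ resid
ss_tot = ((y - y.mean()) ** 2).sum()
r2_adj = 1 - (ss_res / dof) / (ss_tot / (n - 1))
print(f"adjusted R^2 = {r2_adj:.3f}, interaction p = {p_interaction:.4f}")
```

A significant interaction coefficient indicates moderation; the usual follow-up is to report the simple slopes of the predictor at each level of the moderator.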
Collapse
Affiliation(s)
- Lisa Cornelissen
- Faculty of Science, Athena Institute, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
| | - Claudia Egher
- Faculty of Science, Athena Institute, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, Netherlands
| | - Vincent van Beek
- DEARhealth, Amsterdam, Netherlands
- Department of Gastroenterology & Hepatology, Leiden University Medical Center, Leiden, Netherlands
| | | | - Daniel Hommes
- DEARhealth, Amsterdam, Netherlands
- Department of Gastroenterology & Hepatology, Leiden University Medical Center, Leiden, Netherlands
| |
Collapse
|
43
|
Bhatt P, Liu J, Gong Y, Wang J, Guo Y. Emerging Artificial Intelligence–Empowered mHealth: Scoping Review. JMIR Mhealth Uhealth 2022; 10:e35053. [PMID: 35679107 PMCID: PMC9227797 DOI: 10.2196/35053] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2021] [Revised: 01/23/2022] [Accepted: 04/08/2022] [Indexed: 11/13/2022] Open
Abstract
Background
Artificial intelligence (AI) has revolutionized health care delivery in recent years. There is increasing research into advanced AI techniques, such as deep learning, to build predictive models for the early detection of diseases. Such predictive models leverage mobile health (mHealth) data from wearable sensors and smartphones to discover novel ways of detecting and managing chronic diseases and mental health conditions.
Objective
Currently, little is known about the use of AI-powered mHealth (AIM). Therefore, this scoping review aims to map current research on the emerging use of AIM for managing diseases and promoting health. Our objective is to synthesize research on AIM models that have increasingly been used for health care delivery in the last 2 years.
Methods
Using Arksey and O’Malley’s 5-point framework for conducting scoping reviews, we reviewed AIM literature from the past 2 years in the fields of biomedical technology, AI, and information systems. We searched 3 databases (PubsOnline at INFORMS, the e-journal archive at MIS Quarterly, and the Association for Computing Machinery [ACM] Digital Library) using keywords such as “mobile healthcare,” “wearable medical sensors,” “smartphones,” and “AI.” We included AIM articles and excluded technical articles focused only on AI models. We also used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) technique for identifying articles that represent a comprehensive view of current research in the AIM domain.
Results
We screened 108 articles focusing on developing AIM models for ensuring better health care delivery, detecting diseases early, and diagnosing chronic health conditions, and 37 articles were eligible for inclusion, with 31 of the 37 articles (84%) published in the last year. Of the included articles, 9 studied AI models to detect serious mental health issues, such as depression and suicidal tendencies, and chronic health conditions, such as sleep apnea and diabetes. Several articles discussed the application of AIM models for remote patient monitoring and disease management. The primary health concerns considered belonged to 3 categories: mental health, physical health, and health promotion and wellness. Moreover, 14 of the 37 articles used AIM applications to research physical health, representing 38% of the total studies. Finally, 28 out of the 37 (76%) studies used proprietary data sets rather than public data sets. We found a lack of research addressing chronic mental health issues and a lack of publicly available data sets for AIM research.
Conclusions
The application of AIM models for disease detection and management is a growing research domain. These models provide accurate predictions for enabling preventive care on a broader scale in the health care domain. Given the ever-increasing need for remote disease management during the pandemic, recent AI techniques, such as federated learning and explainable AI, can act as a catalyst for increasing the adoption of AIM and enabling secure data sharing across the health care industry.
Collapse
Affiliation(s)
- Paras Bhatt
- Department of Electrical & Computer Engineering, The University of Texas at San Antonio, San Antonio, TX, United States
| | - Jia Liu
- The University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Yanmin Gong
- Department of Electrical & Computer Engineering, The University of Texas at San Antonio, San Antonio, TX, United States
| | - Jing Wang
- Florida State University, Tallahassee, FL, United States
| | - Yuanxiong Guo
- Department of Electrical & Computer Engineering, The University of Texas at San Antonio, San Antonio, TX, United States
| |
Collapse
|
44
|
Boillat T, Nawaz FA, Rivas H. Readiness to Embrace Artificial Intelligence Among Medical Doctors and Students: Questionnaire-Based Study. JMIR MEDICAL EDUCATION 2022; 8:e34973. [PMID: 35412463 PMCID: PMC9044144 DOI: 10.2196/34973] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 02/14/2022] [Accepted: 02/17/2022] [Indexed: 05/04/2023]
Abstract
BACKGROUND Similar to understanding how blood pressure is measured by a sphygmomanometer, physicians will soon have to understand how an artificial intelligence-based application has come to the conclusion that a patient has hypertension, diabetes, or cancer. Although there are an increasing number of use cases where artificial intelligence is or can be applied to improve medical outcomes, the extent to which medical doctors and students are ready to work with and leverage this paradigm is unclear. OBJECTIVE This research aims to capture medical students' and doctors' level of familiarity with artificial intelligence in medicine, as well as the challenges, barriers, and potential risks they associate with the democratization of this new paradigm. METHODS A web-based questionnaire comprising five dimensions (demographics, concepts and definitions, training and education, implementation, and risks) was systematically designed from a literature search. It was completed by 207 participants in total, of which 105 (50.7%) were medical doctors and 102 (49.3%) were medical students, trained across all continents, most of them in Europe, the Middle East, Asia, and North America. RESULTS The results revealed no significant difference in familiarity with artificial intelligence between medical doctors and students (P=.91), except that medical students perceived artificial intelligence in medicine to lead to higher risks for patients and the field of medicine in general (P<.001). We also identified a rather low level of familiarity with artificial intelligence (medical students=2.11/5; medical doctors=2.06/5), as well as low attendance at education or training. Only 2.9% (3/105) of medical doctors had attended a course on artificial intelligence within the previous year, compared with 9.8% (10/102) of medical students. 
The complexity of the field of medicine was considered one of the biggest challenges (medical doctors=3.5/5; medical students=3.8/5), whereas the reduction of physicians' skills was the most important risk (medical doctors=3.3; medical students=3.6; P=.03). CONCLUSIONS The question is not whether artificial intelligence will be used in medicine, but when it will become a standard practice for optimizing health care. The low level of familiarity with artificial intelligence identified in this study calls for the implementation of specific education and training in medical schools and hospitals to ensure that medical professionals can leverage this new paradigm and improve health outcomes.
Collapse
Affiliation(s)
- Thomas Boillat
- Design Lab, College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
| | - Faisal A Nawaz
- College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
| | - Homero Rivas
- Design Lab, College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
| |
Collapse
|
45
|
Nader K, Toprac P, Scott S, Baker S. Public understanding of artificial intelligence through entertainment media. AI & SOCIETY 2022:1-14. [PMID: 35400854 PMCID: PMC8976224 DOI: 10.1007/s00146-022-01427-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2021] [Accepted: 03/01/2022] [Indexed: 11/27/2022]
Abstract
Artificial intelligence (AI) is becoming part of our everyday experience and is expected to be ever more integrated into ordinary life for many years to come. Thus, it is important for those in product development, research, and public policy to understand how the public's perception of AI is shaped. In this study, we conducted focus groups and an online survey to determine the knowledge of AI held by the American public, and to judge whether entertainment media is a major influence on how Americans perceive AI. What we found is that the American public's knowledge of AI is patchy: some have a good understanding of what is and what is not AI, but many do not. When it came to understanding what AI can do, most respondents believe that AI could "replace human jobs" but few thought that it could "feel emotion." Most respondents were optimistic about the future and impact of AI, though about one third were not sure. Most respondents also did not think they could develop an emotional bond with or be comfortable being provided care by an AI. Regarding the influence of entertainment media on perceptions of AI, we found a significant relationship (p < .05) between people's beliefs about AI in entertainment media and their beliefs about AI in reality. Those who believe AI is realistically depicted in entertainment media were more likely to see AIs as potential emotional partners or apocalyptic robots than to imagine AIs taking over jobs or operating as surveillance tools.
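The media-reality association reported in this abstract is the kind of relationship a chi-square test of independence evaluates. The contingency counts below are invented purely for illustration and do not come from the study:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical survey cross-tabulation (rows: believes entertainment media
# depicts AI realistically, yes/no; columns: believes AI can "replace human
# jobs", yes/no). Counts are made up for the sketch.
observed = np.array([
    [120, 40],  # believes media depictions are realistic
    [80, 90],   # does not
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

For 2x2 tables, `chi2_contingency` applies Yates' continuity correction by default; a p-value below the chosen threshold indicates the two beliefs are not independent.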
Collapse
Affiliation(s)
- Karim Nader
- Department of Philosophy, The University of Texas at Austin, Austin, USA
| | - Paul Toprac
- Department of Computer Science, The University of Texas at Austin, Austin, USA
| | - Suzanne Scott
- Department of Radio-Television-Film, The University of Texas at Austin, Austin, USA
| | - Samuel Baker
- Department of English, The University of Texas at Austin, Austin, USA
| |
Collapse
|
46
|
Ahmed Z, Bhinder KK, Tariq A, Tahir MJ, Mehmood Q, Tabassum MS, Malik M, Aslam S, Asghar MS, Yousaf Z. Knowledge, attitude, and practice of artificial intelligence among doctors and medical students in Pakistan: A cross-sectional online survey. Ann Med Surg (Lond) 2022; 76:103493. [PMID: 35308436 PMCID: PMC8928127 DOI: 10.1016/j.amsu.2022.103493] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Revised: 03/05/2022] [Accepted: 03/06/2022] [Indexed: 11/27/2022] Open
Abstract
Background The use of artificial intelligence (AI) has gained popularity during the last few decades, and its use in medicine is increasing globally. Developing countries like Pakistan are lagging in the implementation of AI-based solutions in healthcare. There is a need to incorporate AI in the health system, which may help not only in expediting diagnosis and management but also in judicious resource allocation. Objective To determine the knowledge, attitude, and practice of AI among doctors and medical students in Pakistan. Materials and methods We conducted a cross-sectional study using an online questionnaire-based survey regarding demographic details, knowledge, perception, and practice of AI. A sample of 470 individuals, including doctors and medical students, was selected using the convenience sampling technique. The chi-square test was applied for the comparison of variables. Results Out of 470 individuals, 223 (47.45%) were doctors and 247 (52.55%) were medical students. Among these, 165 (74%) doctors and 170 (68.8%) medical students had a basic knowledge of AI, but only 61 (27.3%) doctors and 48 (19.4%) students were aware of its medical applications. Regarding attitude, 237 (76.7%) individuals supported AI's inclusion in the curriculum, while 368 (78.3%) and 305 (64.9%), 281 (59.8%) and 269 (57.2%) acknowledged its necessity in radiology, pathology, and the COVID-19 pandemic, respectively. Conclusion The majority of doctors and medical students lack knowledge about AI and its applications but had a positive view of AI in the field of medicine and were willing to adopt it.
More resources need to be allocated for the planning and implementation of AI in the medical curriculum.
Collapse
|
47
|
Hung CM, Shi HY, Lee PH, Chang CS, Rau KM, Lee HM, Tseng CH, Pei SN, Tsai KJ, Chiu CC. Potential and role of artificial intelligence in current medical healthcare. Artif Intell Cancer 2022; 3:1-10. [DOI: 10.35713/aic.v3.i1.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Revised: 12/31/2021] [Accepted: 02/20/2022] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) is defined as the ability of a digital computer or computer-controlled robot to mimic the intelligent conduct and critical thinking commonly associated with intelligent beings. The application of AI technology and machine learning in medicine has allowed medical practitioners to provide patients with a better quality of service, and current advancements have led to a dramatic change in the healthcare system. However, many applications are still in their initial stages and need further evaluation to be improved and developed. Clinicians must recognize and acclimate themselves to developments in AI technology to improve their delivery of healthcare services; but for this to be possible, a significant revision of medical education is needed to provide future leaders with the required competencies. This article reviews the potential and limitations of AI in healthcare, as well as current medical application trends, including healthcare administration, clinical decision assistance, patient health monitoring, healthcare resource allocation, medical research, and public health policy development. Future possibilities for clinical and scientific practice are also summarized.
Collapse
Affiliation(s)
- Chao-Ming Hung
- Department of General Surgery, E-Da Cancer Hospital, Kaohsiung 82445, Taiwan
- College of Medicine, I-Shou University, Kaohsiung 82445, Taiwan
| | - Hon-Yi Shi
- Department of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, Kaohsiung 80708, Taiwan
- Department of Business Management, National Sun Yat-Sen University, Kaohsiung 80420, Taiwan
- Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung 80708, Taiwan
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
| | - Po-Huang Lee
- College of Medicine, I-Shou University, Kaohsiung 82445, Taiwan
- Department of Surgery, E-Da Hospital, Kaohsiung 82445, Taiwan
| | - Chao-Sung Chang
- Department of Hematology & Oncology, E-Da Cancer Hospital, Kaohsiung 82445, Taiwan
- School of Medicine for International Students, College of Medicine, I-Shou University, Kaohsiung 82445, Taiwan
| | - Kun-Ming Rau
- Department of Hematology & Oncology, E-Da Cancer Hospital, Kaohsiung 82445, Taiwan
- School of Medicine, College of Medicine, I-Shou University, Kaohsiung 82445, Taiwan
| | - Hui-Ming Lee
- Department of General Surgery, E-Da Cancer Hospital, Kaohsiung 82445, Taiwan
- College of Medicine, I-Shou University, Kaohsiung 82445, Taiwan
| | - Cheng-Hao Tseng
- School of Medicine, College of Medicine, I-Shou University, Kaohsiung 82445, Taiwan
- Department of Gastroenterology and Hepatology, E-Da Cancer Hospital, Kaohsiung 82445, Taiwan
- Department of Gastroenterology and Hepatology, E-Da Hospital, Kaohsiung 82445, Taiwan
| | - Sung-Nan Pei
- Department of Hematology & Oncology, E-Da Cancer Hospital, Kaohsiung 82445, Taiwan
- School of Medicine, College of Medicine, I-Shou University, Kaohsiung 82445, Taiwan
| | - Kuen-Jang Tsai
- Department of General Surgery, E-Da Cancer Hospital, Kaohsiung 82445, Taiwan
| | - Chong-Chi Chiu
- Department of General Surgery, E-Da Cancer Hospital, Kaohsiung 82445, Taiwan
- School of Medicine, College of Medicine, I-Shou University, Kaohsiung 82445, Taiwan
- Department of Medical Education and Research, E-Da Cancer Hospital, Kaohsiung 82445, Taiwan
| |
Collapse
|
48
|
Teng M, Singla R, Yau O, Lamoureux D, Gupta A, Hu Z, Hu R, Aissiou A, Eaton S, Hamm C, Hu S, Kelly D, MacMillan KM, Malik S, Mazzoli V, Teng YW, Laricheva M, Jarus T, Field TS. Health Care Students' Perspectives on Artificial Intelligence: Countrywide Survey in Canada. JMIR MEDICAL EDUCATION 2022; 8:e33390. [PMID: 35099397 PMCID: PMC8845000 DOI: 10.2196/33390] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Revised: 11/29/2021] [Accepted: 12/17/2021] [Indexed: 05/02/2023]
Abstract
BACKGROUND Artificial intelligence (AI) is no longer a futuristic concept; it is increasingly being integrated into health care. As studies on attitudes toward AI have primarily focused on physicians, there is a need to assess the perspectives of students across health care disciplines to inform future curriculum development. OBJECTIVE This study aims to explore and identify gaps in the knowledge that Canadian health care students have regarding AI, capture how health care students in different fields differ in their knowledge and perspectives on AI, and present student-identified ways that AI literacy may be incorporated into the health care curriculum. METHODS The survey was developed from a narrative literature review of topics in attitudinal surveys on AI. The final survey comprised 15 items, including multiple-choice questions, pick-group-rank questions, 11-point Likert scale items, slider scale questions, and narrative questions. We used snowball and convenience sampling methods by distributing an email with a description and a link to the web-based survey to representatives from 18 Canadian schools. RESULTS A total of 2167 students across 10 different health professions from 18 universities across Canada responded to the survey. Overall, 78.77% (1707/2167) predicted that AI technology would affect their careers within the coming decade and 74.5% (1595/2167) reported a positive outlook toward the emerging role of AI in their respective fields. Attitudes toward AI varied by discipline. Students, even those opposed to AI, identified the need to incorporate a basic understanding of AI into their curricula. CONCLUSIONS We performed a nationwide survey of health care students across 10 different health professions in Canada. The findings highlight student-identified topics within AI and preferred delivery formats, which can advance education across different health care professions.
Collapse
Affiliation(s)
- Minnie Teng
- Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- School of Occupational Science and Occupational Therapy, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
| | - Rohit Singla
- Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
| | - Olivia Yau
- Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
| | | | - Aurinjoy Gupta
- Northern Ontario School of Medicine, Thunder Bay, ON, Canada
| | - Zoe Hu
- Queen's University, Kingston, ON, Canada
| | - Ricky Hu
- Queen's University, Kingston, ON, Canada
| | | | | | - Camille Hamm
- Northern Ontario School of Medicine, Thunder Bay, ON, Canada
| | - Sophie Hu
- University of Calgary, Calgary, AB, Canada
| | - Dayton Kelly
- Northern Ontario School of Medicine, Sudbury, ON, Canada
| | | | - Shamir Malik
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
| | - Vienna Mazzoli
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
| | - Yu-Wen Teng
- Vancouver Coastal Health, Vancouver, BC, Canada
| | - Maria Laricheva
- Faculty of Arts, University of British Columbia, Vancouver, BC, Canada
| | - Tal Jarus
- Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- School of Occupational Science and Occupational Therapy, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
| | - Thalia S Field
- Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver Stroke Program, Division of Neurology, University of British Columbia, Vancouver, BC, Canada
| |
Collapse
|
49
|
Chew HSJ, Achananuparp P. Perceptions and Needs of Artificial Intelligence in Health Care to Increase Adoption: Scoping Review. J Med Internet Res 2022; 24:e32939. [PMID: 35029538 PMCID: PMC8800095 DOI: 10.2196/32939] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Revised: 11/08/2021] [Accepted: 12/03/2021] [Indexed: 01/20/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to improve the efficiency and effectiveness of health care service delivery. However, perceptions of and needs regarding such systems remain elusive, hindering efforts to promote AI adoption in health care. OBJECTIVE This study aims to provide an overview of the perceptions and needs of AI to increase its adoption in health care. METHODS A systematic scoping review was conducted according to the 5-stage framework by Arksey and O'Malley. Nine databases (ACM Library, CINAHL, Cochrane Central, Embase, IEEE Xplore, PsycINFO, PubMed, Scopus, and Web of Science) were searched for articles, published from inception until June 21, 2021, that described the perceptions and needs of AI in health care. Articles that were not specific to AI, not research studies, or not written in English were omitted. RESULTS Of the 3666 articles retrieved, 26 (0.71%) were eligible and included in this review. The mean age of the participants ranged from 30 to 72.6 years, the proportion of men ranged from 0% to 73.4%, and the sample sizes for primary studies ranged from 11 to 2780. The perceptions and needs of various populations in the use of AI were identified for general, primary, and community health care; chronic disease self-management and self-diagnosis; mental health; and diagnostic procedures. The use of AI was perceived positively because of its availability, ease of use, and potential to improve efficiency and reduce the cost of health care service delivery. However, concerns were raised regarding the lack of trust in data privacy, patient safety, technological maturity, and the possibility of full automation. 
Suggestions for improving the adoption of AI in health care were highlighted: enhancing personalization and customizability; enhancing empathy and personification of AI-enabled chatbots and avatars; enhancing user experience, design, and interconnectedness with other devices; and educating the public on AI capabilities. Several corresponding mitigation strategies were also identified in this study. CONCLUSIONS The perceptions and needs of AI in its use in health care are crucial in improving its adoption by various stakeholders. Future studies and implementations should consider the points highlighted in this study to enhance the acceptability and adoption of AI in health care. This would facilitate an increase in the effectiveness and efficiency of health care service delivery to improve patient outcomes and satisfaction.
Affiliation(s)
- Han Shi Jocelyn Chew
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Palakorn Achananuparp
- Living Analytics Research Centre, Singapore Management University, Singapore, Singapore
50
Drogt J, Milota M, Vos S, Bredenoord A, Jongsma K. Integrating artificial intelligence in pathology: a qualitative interview study of users' experiences and expectations. Mod Pathol 2022; 35:1540-1550. [PMID: 35927490 PMCID: PMC9596368 DOI: 10.1038/s41379-022-01123-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 05/24/2022] [Accepted: 05/31/2022] [Indexed: 11/24/2022]
Abstract
Recent progress in the development of artificial intelligence (AI) has sparked enthusiasm for its potential use in pathology. As pathology labs are beginning to shift their focus towards AI implementation, a better understanding of how AI tools can be optimally aligned with the medical and social context of daily pathology practice is urgently needed. Strikingly, studies often fail to mention the ways in which AI tools should be integrated into the decision-making processes of pathologists, nor do they address how this can be achieved in an ethically sound way. Moreover, the perspectives of pathologists and other professionals on the integration of AI within pathology remain an underreported topic. This article aims to fill this gap in the literature and presents the first in-depth interview study in which professionals' perspectives on the possibilities, conditions, and prerequisites of AI integration in pathology are explicated. The results of this study have led to three concrete recommendations to support AI integration: (1) foster a pragmatic attitude toward AI development, (2) provide task-sensitive information and training to health care professionals working in pathology departments, and (3) take time to reflect on users' changing roles and responsibilities.
Affiliation(s)
- Jojanneke Drogt
- Department of Medical Humanities, University Medical Center, Utrecht, The Netherlands.
- Megan Milota
- Department of Medical Humanities, University Medical Center, Utrecht, The Netherlands
- Shoko Vos
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Annelien Bredenoord
- Department of Medical Humanities, University Medical Center, Utrecht, The Netherlands
- Karin Jongsma
- Department of Medical Humanities, University Medical Center, Utrecht, The Netherlands