51. Fritsch SJ, Blankenheim A, Wahl A, Hetfeld P, Maassen O, Deffge S, Kunze J, Rossaint R, Riedel M, Marx G, Bickenbach J. Attitudes and perception of artificial intelligence in healthcare: A cross-sectional survey among patients. Digit Health 2022; 8:20552076221116772. PMID: 35983102; PMCID: PMC9380417; DOI: 10.1177/20552076221116772.
Abstract
Objective: Attitudes towards the use of artificial intelligence (AI) in healthcare are controversial. Unlike the perceptions of healthcare professionals, the attitudes of patients and their companions have received little attention so far. In this study, we aimed to investigate the perception of AI in healthcare among this highly relevant group, along with the influence of digital affinity and sociodemographic factors.
Methods: We conducted a cross-sectional study using a paper-based questionnaire with patients and their companions at a German tertiary referral hospital from December 2019 to February 2020. The questionnaire consisted of three sections examining (a) the respondents' technical affinity, (b) their perception of different aspects of AI in healthcare and (c) sociodemographic characteristics.
Results: Of 452 participants, more than 90% had already read or heard about AI, but only 24% reported good or expert knowledge. Asked about their general perception, 53.18% of respondents rated the use of AI in medicine as positive or very positive, and only 4.77% as negative or very negative. Respondents denied concerns about AI but strongly agreed that AI must be controlled by a physician. Older patients, women, and persons with lower education or technical affinity were more cautious about healthcare-related AI use.
Conclusions: German patients and their companions are open towards the use of AI in healthcare. Although showing only mediocre knowledge of AI, a majority rated AI in healthcare as positive. In particular, patients insist that a physician supervise the AI and retain ultimate responsibility for diagnosis and therapy.
Affiliation(s)
- Sebastian J Fritsch: Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany; SMITH Consortium of the German Medical Informatics Initiative, Germany; Juelich Supercomputing Centre, Forschungszentrum Juelich, Germany
- Andrea Blankenheim: Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- Alina Wahl: Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany
- Petra Hetfeld: Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany; SMITH Consortium of the German Medical Informatics Initiative, Germany
- Oliver Maassen: Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany; SMITH Consortium of the German Medical Informatics Initiative, Germany
- Saskia Deffge: Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany; SMITH Consortium of the German Medical Informatics Initiative, Germany
- Julian Kunze: SMITH Consortium of the German Medical Informatics Initiative, Germany; Department of Anesthesiology, University Hospital RWTH Aachen, Germany
- Rolf Rossaint: Department of Anesthesiology, University Hospital RWTH Aachen, Germany
- Morris Riedel: SMITH Consortium of the German Medical Informatics Initiative, Germany; Juelich Supercomputing Centre, Forschungszentrum Juelich, Germany; Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, University of Iceland, Iceland
- Gernot Marx: Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany; SMITH Consortium of the German Medical Informatics Initiative, Germany
- Johannes Bickenbach: Department of Intensive Care Medicine, University Hospital RWTH Aachen, Germany; SMITH Consortium of the German Medical Informatics Initiative, Germany
52. Shinners L, Grace S, Smith S, Stephens A, Aggar C. Exploring healthcare professionals' perceptions of artificial intelligence: Piloting the Shinners Artificial Intelligence Perception tool. Digit Health 2022; 8:20552076221078110. PMID: 35154807; PMCID: PMC8832586; DOI: 10.1177/20552076221078110.
Abstract
Objective: There is an urgent need to prepare the healthcare workforce for the implementation of artificial intelligence (AI) in the healthcare setting. Insights into workforce perceptions of AI could identify potential challenges that an organisation may face when implementing this new technology. The aim of this study was to psychometrically evaluate and pilot the Shinners Artificial Intelligence Perception (SHAIP) questionnaire, which is designed to explore healthcare professionals' perceptions of AI. Instrument validation was achieved through a cross-sectional study of healthcare professionals (n = 252) from a regional health district in Australia.
Methods and Results: Exploratory factor analysis yielded a two-factor solution consisting of 10 items that explained 51.7% of the total variance. Factor one represented perceptions of 'Professional impact of AI' (α = .832) and Factor two represented 'Preparedness for AI' (α = .632). An analysis of variance indicated that 'use of AI' had a significant effect on healthcare professionals' perceptions of both factors. 'Discipline' had a significant effect on Allied Health professionals' perception of Factor one, and low mean scale scores across all disciplines suggest that no discipline perceives itself as prepared for AI.
Conclusions: The results of this study provide preliminary support for the SHAIP tool and a two-factor solution that measures healthcare professionals' perceptions of AI. Further testing is needed to establish the reliability, or guide re-modelling, of Factor two and to assess the overall performance of the SHAIP tool as a global instrument.
Affiliation(s)
- Lucy Shinners: Faculty of Health, Southern Cross University, Australia
- Sandra Grace: Faculty of Health, Southern Cross University, Australia
- Stuart Smith: Faculty of Health, Southern Cross University, Australia
53. Perrier E, Rifai M, Terzic A, Dubois C, Cohen JF. Knowledge, attitudes, and practices towards artificial intelligence among young pediatricians: A nationwide survey in France. Front Pediatr 2022; 10:1065957. PMID: 36619510; PMCID: PMC9816325; DOI: 10.3389/fped.2022.1065957.
Abstract
Objective: To assess the knowledge, attitudes, and practices (KAP) towards artificial intelligence (AI) among young pediatricians in France.
Methods: We invited young French pediatricians to participate in an online survey. Invitees were identified through various email listings and social media. We conducted a descriptive analysis and explored whether survey responses varied according to respondents' previous training in AI and level of clinical experience (i.e., residents vs. experienced doctors).
Results: In total, 165 French pediatricians participated in the study (median age 27 years, women 78%, residents 64%). While 90% of participants declared they understood the term "artificial intelligence", only 40% understood the term "deep learning". Most participants expected AI would lead to improvements in healthcare (e.g., better access to healthcare, 80%; diagnostic assistance, 71%), and 86% declared they would favor implementing AI tools in pediatrics. Fifty-nine percent of respondents declared seeing AI as a threat to medical data security and 35% as a threat to the ethical and human dimensions of medicine. Thirty-nine percent of respondents feared losing clinical skills because of AI, and 6% feared losing their job because of AI. Only 5% of respondents had received specific training in AI, while 87% considered implementing such programs would be necessary. Respondents who had received training in AI had significantly better knowledge and a higher probability of having encountered AI tools in their medical practice (p < 0.05 for both). There was no statistically significant difference between residents' and experienced doctors' responses.
Conclusion: In this survey, most young French pediatricians had favorable views toward AI, but a large proportion expressed concerns regarding the ethical, societal, and professional issues linked with the implementation of AI.
Affiliation(s)
- Emma Perrier: Child Neurological Rehabilitation Unit and Learning Disorders Reference Centre, Assistance Publique-Hôpitaux de Paris, Hôpital Bicêtre, Université Paris-Saclay, Le Kremlin-Bicêtre, France
- Mahmoud Rifai: Pediatric Intensive Care Unit, Assistance Publique-Hôpitaux de Paris, Hôpital Raymond-Poincaré, Université Paris-Saclay, Paris, France
- Arnaud Terzic: Pediatric Intensive Care and Neonatal Medicine, Assistance Publique-Hôpitaux de Paris, Hôpital Bicêtre, Université Paris-Saclay, Le Kremlin-Bicêtre, France
- Constance Dubois: Centre of Research in Epidemiology and Statistics, Inserm UMR 1153, Université Paris Cité, Paris, France
- Jérémie F Cohen: Centre of Research in Epidemiology and Statistics, Inserm UMR 1153, Université Paris Cité, Paris, France; Department of General Pediatrics and Pediatric Infectious Disease, Assistance Publique-Hôpitaux de Paris, Hôpital Necker - Enfants Malades, Université Paris Cité, Paris, France
54. Chaibi A, Zaiem I. Doctor Resistance of Artificial Intelligence in Healthcare. International Journal of Healthcare Information Systems and Informatics 2022. DOI: 10.4018/ijhisi.315618.
Abstract
Artificial intelligence (AI) has revolutionized healthcare by enhancing the quality of patient care. Despite its advantages, doctors are still reluctant to use AI in healthcare. Thus, the authors' main objective was to obtain an in-depth understanding of the barriers to doctors' adoption of AI in healthcare. The authors conducted semi-structured interviews with 11 doctors. Thematic analysis was chosen to identify patterns, using QSR NVivo (version 12). The results showed that the barriers to AI adoption are lack of financial resources, need for special training, performance risk, perceived cost, technology dependency, need for human interaction, and fear of AI replacing human work.
Affiliation(s)
- Asma Chaibi: FSEGT, University of El Manar, Mediterranean School of Business, South Mediterranean University, Tunisia
- Imed Zaiem: Faculty of Economics and Management of Nabeul, University of Carthage, Tunisia
55. Yang K, Nambudiri VE. Anticipating Ambulatory Automation: Potential Applications of Administrative and Clinical Automation in Outpatient Healthcare Delivery. Appl Clin Inform 2021; 12:1157-1160. PMID: 34965607; PMCID: PMC8716189; DOI: 10.1055/s-0041-1740259.
Affiliation(s)
- Kevin Yang: Department of Dermatology, Tufts University School of Medicine, Boston, Massachusetts, United States
- Vinod E. Nambudiri: Department of Dermatology, Brigham and Women's Hospital, Boston, Massachusetts, United States. Address for correspondence: Vinod E. Nambudiri, MD, MBA, Department of Dermatology, Brigham and Women's Hospital, 221 Longwood Avenue, Boston, MA 02115, United States
56. Möllmann NR, Mirbabaie M, Stieglitz S. Is it alright to use artificial intelligence in digital health? A systematic literature review on ethical considerations. Health Informatics J 2021; 27:14604582211052391. PMID: 34935557; DOI: 10.1177/14604582211052391.
Abstract
The application of artificial intelligence (AI) not only yields advantages for healthcare but also raises several ethical questions. Extant research on ethical considerations of AI in digital health is sparse, and a holistic overview is lacking. A systematic literature review searching across 853 peer-reviewed journals and conferences yielded 50 relevant articles, categorized under five major ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. The ethical landscape of AI in digital health is portrayed, including a snapshot to guide future development. The status quo highlights areas with little empirical work where research is still required. Less explored areas with open ethical questions are identified, and scholars' efforts are guided by an overview of the ethical principles addressed and the intensity with which they have been studied, including correlations. The review also helps practitioners understand the novel questions AI raises, which should eventually lead to properly regulated implementations, and to recognize that society is moving from supportive technologies to autonomous decision-making systems.
Affiliation(s)
- Nicholas RJ Möllmann: Research Group Digital Communication and Transformation, University of Duisburg-Essen, Duisburg, Germany
- Milad Mirbabaie: Faculty of Business Administration and Economics, Paderborn University, Paderborn, Germany
- Stefan Stieglitz: Research Group Digital Communication and Transformation, University of Duisburg-Essen, Duisburg, Germany
57. Ploug T, Sundby A, Moeslund TB, Holm S. Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey. J Med Internet Res 2021; 23:e26611. PMID: 34898454; PMCID: PMC8713089; DOI: 10.2196/26611.
Abstract
Background: Certain types of artificial intelligence (AI), that is, deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making, but transparency/explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interest in high performance against the interest in transparency/explainability, and such a policy should consider the wider public's interests in these features of AI.
Objective: This study elicited the public's preferences for the performance and explainability of AI decision making in health care and determined whether these preferences depend on respondent characteristics, including trust in health and technology and fears and hopes regarding AI.
Methods: We conducted a choice-based conjoint survey of public preferences for attributes of AI decision making in health care in a representative sample of the adult Danish population. Initial focus group interviews yielded 6 attributes playing a role in the respondents' views on the use of AI decision support in health care: (1) type of AI decision, (2) level of explanation, (3) performance/accuracy, (4) responsibility for the final decision, (5) possibility of discrimination, and (6) severity of the disease to which the AI is applied. In total, 100 unique choice sets were developed using fractional factorial design. In a 12-task survey, respondents were asked about their preference for AI system use in hospitals in relation to 3 different scenarios.
Results: Of the 1678 potential respondents, 1027 (61.2%) participated. The respondents consider the physician having final responsibility for treatment decisions the most important attribute, carrying 46.8% of the total attribute weight, followed by explainability of the decision (27.3%) and whether the system has been tested for discrimination (14.8%). Other factors, such as gender, age, level of education, whether respondents live rurally or in towns, respondents' trust in health and technology, and respondents' fears and hopes regarding AI, do not play a significant role in the majority of cases.
Conclusions: The 3 factors most important to the public are, in descending order of importance, (1) that physicians are ultimately responsible for diagnostics and treatment planning, (2) that the AI decision support is explainable, and (3) that the AI system has been tested for discrimination. Public policy on AI system use in health care should prioritize systems with these features and ensure that patients are provided with information.
Affiliation(s)
- Thomas Ploug: Department of Communication and Psychology, Aalborg University, Copenhagen, Denmark
- Anna Sundby: Department of Communication and Psychology, Aalborg University, Copenhagen, Denmark
- Thomas B Moeslund: Visual Analysis and Perception Lab, Aalborg University, Aalborg, Denmark
- Søren Holm: Centre for Social Ethics and Policy, University of Manchester, Manchester, United Kingdom
58. Scott IA, Carter SM, Coiera E. Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health Care Inform 2021; 28:bmjhci-2021-100450. PMID: 34887331; PMCID: PMC8663096; DOI: 10.1136/bmjhci-2021-100450.
Abstract
Objectives: Different stakeholders may hold varying attitudes towards artificial intelligence (AI) applications in healthcare, which may constrain their acceptance if AI developers fail to take them into account. We set out to ascertain evidence of the attitudes of clinicians, consumers, managers, researchers, regulators and industry towards AI applications in healthcare.
Methods: We undertook an exploratory analysis of articles whose titles or abstracts contained the terms 'artificial intelligence' or 'AI' and 'medical' or 'healthcare' and 'attitudes', 'perceptions', 'opinions', 'views', 'expectations'. Using a snowballing strategy, we searched PubMed and Google Scholar for articles published 1 January 2010 through 31 May 2021. We selected articles relating to non-robotic, clinician-facing AI applications used to support healthcare-related tasks or decision-making.
Results: Across 27 studies, attitudes towards AI applications in healthcare were generally positive, more so among those with direct experience of AI, but only provided certain safeguards were met. AI applications that automated data interpretation and synthesis were regarded more favourably by clinicians and consumers than those that directly influenced clinical decisions or potentially impacted clinician-patient relationships. Privacy breaches and personal liability for AI-related error worried clinicians, while loss of clinician oversight and inability to fully share in decision-making worried consumers. Both clinicians and consumers wanted AI-generated advice to be trustworthy, while industry groups emphasised AI benefits and wanted more data, funding and regulatory certainty.
Discussion: Certain expectations of AI applications were common to many stakeholder groups, from which a set of dependencies can be defined.
Conclusion: Stakeholders differ in some but not all of their attitudes towards AI. Those developing and implementing applications should consider policies and processes that bridge attitudinal disconnects between different stakeholders.
Affiliation(s)
- Ian A Scott: Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Woolloongabba, Queensland, Australia; School of Clinical Medicine, University of Queensland, Brisbane, Queensland, Australia
- Stacy M Carter: Australian Centre for Health Engagement Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Enrico Coiera: Centre for Clinical Informatics, Macquarie University, Sydney, New South Wales, Australia
59. Morrison K. Artificial intelligence and the NHS: a qualitative exploration of the factors influencing adoption. Future Healthc J 2021; 8:e648-e654. PMID: 34888459; DOI: 10.7861/fhj.2020-0258.
Abstract
Background: Artificial intelligence (AI) has the potential to improve healthcare. However, there is limited research investigating the factors that influence the adoption of AI within a healthcare system.
Research aims: I aimed to use innovation theory to understand the barriers and facilitators that influence AI adoption in the NHS, to explore solutions to overcome these barriers, and to examine these factors particularly within radiology, pathology and general practice.
Methodology: Twelve semi-structured, one-to-one interviews were conducted with key informants. Interview data were analysed using thematic analysis.
Findings: A range of barriers and facilitators to the adoption of AI within the NHS were identified, including IT infrastructure and language clarity. Several solutions to overcome the barriers were proposed by participants, including education strategies and innovation champions.
Conclusion: Future research should explore the importance of IT infrastructure in supporting AI adoption, examine the terminology around AI and explore specialty-specific barriers to AI adoption in greater depth.
Affiliation(s)
- Kirsty Morrison: University of Birmingham College of Medical and Dental Sciences, Birmingham, UK
60. Rainey C, O'Regan T, Matthew J, Skelton E, Woznitza N, Chu KY, Goodman S, McConnell J, Hughes C, Bond R, McFadden S, Malamateniou C. Beauty Is in the AI of the Beholder: Are We Ready for the Clinical Integration of Artificial Intelligence in Radiography? An Exploratory Analysis of Perceived AI Knowledge, Skills, Confidence, and Education Perspectives of UK Radiographers. Front Digit Health 2021; 3:739327. PMID: 34859245; PMCID: PMC8631824; DOI: 10.3389/fdgth.2021.739327.
Abstract
Introduction: The use of artificial intelligence (AI) in medical imaging and radiotherapy has been met with both scepticism and excitement, yet clinical integration of AI is already well underway. Many authors have recently reported on the AI knowledge and perceptions of radiologists, medical staff and students; however, there is a paucity of information regarding radiographers. The published literature agrees that AI is likely to have a significant impact on radiology practice. As radiographers are at the forefront of radiology service delivery, an awareness of their current perceived knowledge, skills, and confidence in AI is essential to identify any educational needs necessary for successful adoption into practice.
Aim: The aim of this survey was to determine the perceived knowledge, skills, and confidence in AI amongst UK radiographers and to highlight priorities for educational provision to support a digital healthcare ecosystem.
Methods: A survey was created on Qualtrics and promoted via social media (Twitter/LinkedIn). The survey was open to all UK radiographers, including students and retired radiographers, who were recruited by convenience, snowball sampling. Demographic information was gathered, as well as data on respondents' perceived, self-reported knowledge, skills, and confidence in AI. Insight into what participants understand by the term "AI" was gained by means of a free-text response. Quantitative analysis was performed using SPSS and qualitative thematic analysis was performed in NVivo.
Results: Four hundred and eleven responses were collected (80% from diagnostic radiography and 20% from a radiotherapy background), broadly representative of the workforce distribution in the UK. Although many respondents stated that they understood the concept of AI in general (78.7% of diagnostic and 52.1% of therapeutic radiography respondents, respectively), there was a notable lack of knowledge of AI principles, understanding of AI terminology, and skills and confidence in the use of AI technology. Many participants, 57% of diagnostic and 49% of radiotherapy respondents, do not feel adequately trained to implement AI in the clinical setting. Furthermore, 52% and 64%, respectively, said they had not developed any skills in AI, whilst 62% and 55%, respectively, stated that there is not enough AI training for radiographers. The majority of respondents indicated an urgent need for further education (77.4% of diagnostic and 73.9% of therapeutic radiographers felt they had not had adequate training in AI), with many stating that they had to educate themselves to gain basic AI skills. Notable correlations between confidence in working with AI and gender, age, and highest qualification were reported.
Conclusion: Knowledge of AI terminology, principles, and applications by healthcare practitioners is necessary for the adoption and integration of AI applications. The results of this survey highlight radiographers' perceived lack of knowledge, skills, and confidence in applying AI solutions, and underline the need for formalised AI education to prepare the current and prospective workforce for the upcoming clinical integration of AI in healthcare and to navigate a digital future safely and efficiently. Focus should be given to the different needs of learners depending on age, gender, and highest qualification to ensure optimal integration.
Affiliation(s)
- Clare Rainey: Faculty of Life and Health Sciences, School of Health Sciences, Ulster University, Newtownabbey, United Kingdom
- Tracy O'Regan: The Society and College of Radiographers, London, United Kingdom
- Jacqueline Matthew: School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, United Kingdom
- Emily Skelton: School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, United Kingdom; Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, University of London, London, United Kingdom
- Nick Woznitza: University College London Hospitals, London, United Kingdom; School of Allied and Public Health Professions, Canterbury Christ Church University, Canterbury, United Kingdom
- Kwun-Ye Chu: Department of Oncology, Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom; Radiotherapy Department, Churchill Hospital, Oxford University Hospitals NHS FT, Oxford, United Kingdom
- Spencer Goodman: The Society and College of Radiographers, London, United Kingdom
- Ciara Hughes: Faculty of Life and Health Sciences, School of Health Sciences, Ulster University, Newtownabbey, United Kingdom
- Raymond Bond: Faculty of Computing, Engineering and the Built Environment, School of Computing, Ulster University, Newtownabbey, United Kingdom
- Sonyia McFadden: Faculty of Life and Health Sciences, School of Health Sciences, Ulster University, Newtownabbey, United Kingdom
- Christina Malamateniou: School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, United Kingdom; Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, University of London, London, United Kingdom
61. Esmaeilzadeh P, Mirzaei T, Dharanikota S. Patients' Perceptions Toward Human-Artificial Intelligence Interaction in Health Care: Experimental Study. J Med Internet Res 2021; 23:e25856. PMID: 34842535; PMCID: PMC8663518; DOI: 10.2196/25856.
Abstract
Background: It is believed that artificial intelligence (AI) will be an integral part of healthcare services in the near future and will be incorporated into several aspects of clinical care, such as prognosis, diagnostics, and care planning. Many technology companies have therefore invested in producing AI clinical applications. Patients are among the most important beneficiaries who will potentially interact with these technologies and applications; thus, patients' perceptions may affect the widespread use of clinical AI. Patients need assurance that AI clinical applications will not harm them and that they will instead benefit from using AI technology for healthcare purposes. Although human-AI interaction can enhance healthcare outcomes, possible dimensions of concern and risk should be addressed before its integration with routine clinical care.
Objective: The main objective of this study was to examine how potential users (patients) perceive the benefits, risks, and use of AI clinical applications for their healthcare purposes, and how their perceptions may differ across three healthcare service encounter scenarios.
Methods: We designed a 2×3 experiment that crossed type of health condition (i.e., acute or chronic) with three different types of clinical encounters between patients and physicians (i.e., AI clinical applications as substituting technology, AI clinical applications as augmenting technology, and no AI as a traditional in-person visit). We used an online survey to collect data from 634 individuals in the United States.
Results: The interactions between the types of healthcare service encounters and health conditions significantly influenced individuals' perceptions of privacy concerns, trust issues, communication barriers, concerns about transparency in regulatory standards, liability risks, benefits, and intention to use across the six scenarios. We found no significant differences among scenarios regarding perceptions of performance risk and social biases.
Conclusions: The results imply that incompatibility with instrumental, technical, ethical, or regulatory values can be a reason for rejecting AI applications in healthcare. Various risks thus remain associated with implementing AI applications in diagnostics and treatment recommendations for patients with both acute and chronic illnesses. Concerns are also evident when the AI applications are used as a recommendation system under physician experience, wisdom, and control. Prior to the widespread rollout of AI, more studies are needed to identify the challenges that may raise concerns for implementing and using AI applications. This study provides researchers and managers with critical insights into the determinants of individuals' intention to use AI clinical applications. Regulatory agencies should establish normative standards and evaluation guidelines for implementing AI in healthcare in cooperation with healthcare institutions. Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethics of AI clinical applications.
Affiliation(s)
- Pouyan Esmaeilzadeh
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
- Tala Mirzaei
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
- Spurthy Dharanikota
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
62
Martinho A, Kroesen M, Chorus C. A healthy debate: Exploring the views of medical doctors on the ethics of artificial intelligence. Artif Intell Med 2021; 121:102190. [PMID: 34763805 DOI: 10.1016/j.artmed.2021.102190] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2021] [Revised: 09/22/2021] [Accepted: 09/29/2021] [Indexed: 12/23/2022]
Abstract
Artificial Intelligence (AI) is moving into the health space. It is generally acknowledged that, while there is great promise in the implementation of AI technologies in healthcare, it also raises important ethical issues. In this study we surveyed medical doctors based in The Netherlands, Portugal, and the U.S., from a diverse mix of medical specializations, about the ethics surrounding Health AI. Four main perspectives emerged from the data, representing different views on this matter. The first perspective (AI is a helpful tool: Let physicians do what they were trained for) highlights the efficiency associated with automation, which will allow doctors the time to focus on expanding their medical knowledge and skills. The second perspective (Rules & Regulations are crucial: Private companies only think about money) shows strong distrust in private tech companies and emphasizes the need for regulatory oversight. The third perspective (Ethics is enough: Private companies can be trusted) puts more trust in private tech companies and maintains that ethics is sufficient to ground these corporations. Finally, the fourth perspective (Explainable AI tools: Learning is necessary and inevitable) emphasizes the importance of the explainability of AI tools to ensure that doctors remain engaged in the technological progress. Each perspective provides valuable and often contrasting insights about ethical issues that should be operationalized and accounted for in the design and development of Health AI.
Affiliation(s)
- Caspar Chorus
- Delft University of Technology, Delft, the Netherlands
63
Weinert L, Müller J, Svensson L, Heinze O. The perspective of IT decision makers on factors influencing adoption and implementation of AI-technologies in 40 German Hospitals: Descriptive Analysis (Preprint). JMIR Med Inform 2021; 10:e34678. [PMID: 35704378 PMCID: PMC9244653 DOI: 10.2196/34678] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2021] [Revised: 02/15/2022] [Accepted: 03/11/2022] [Indexed: 02/06/2023] Open
Abstract
Background New artificial intelligence (AI) tools are being developed at a high speed. However, strategies and practical experiences surrounding the adoption and implementation of AI in health care are lacking. This is likely because of the high implementation complexity of AI, legacy IT infrastructure, and unclear business cases, thus complicating AI adoption. Research has recently started to identify the factors influencing AI readiness of organizations. Objective This study aimed to investigate the factors influencing AI readiness as well as possible barriers to AI adoption and implementation in German hospitals. We also assessed the status quo regarding the dissemination of AI tools in hospitals. We focused on IT decision makers, a seldom studied but highly relevant group. Methods We created a web-based survey based on recent AI readiness and implementation literature. Participants were identified through a publicly accessible database and contacted via email or invitational leaflets sent by mail, in some cases accompanied by a telephonic prenotification. The survey responses were analyzed using descriptive statistics. Results We contacted 609 possible participants, and our database recorded 40 completed surveys. Most participants agreed or rather agreed with the statement that AI would be relevant in the future, both in Germany (37/40, 93%) and in their own hospital (36/40, 90%). Participants were asked whether their hospitals used or planned to use AI technologies. Of the 40 participants, 26 (65%) answered “yes.” Most AI technologies were used or planned for patient care, followed by biomedical research, administration, and logistics and central purchasing. The most important barriers to AI were lack of resources (staff, knowledge, and financial). Relevant possible opportunities for using AI were increase in efficiency owing to time-saving effects, competitive advantages, and increase in quality of care. 
Most AI tools in use or in planning have been developed with external partners. Conclusions Few tools have been implemented in routine care, and many hospitals do not use or plan to use AI in the future. This can likely be explained by missing or unclear business cases or the need for a modern IT infrastructure to integrate AI tools in a usable manner. These shortcomings complicate decision-making and resource attribution. As most AI technologies already in use were developed in cooperation with external partners, these relationships should be fostered. IT decision makers should assess their hospitals’ readiness for AI individually, with a focus on resources. Further research should continue to monitor the dissemination of AI tools and readiness factors to determine whether improvements can be made over time. This monitoring is especially important with regard to government-supported investments in AI technologies that could alleviate financial burdens. Qualitative studies with hospital IT decision makers should be conducted to further explore the reasons for slow AI adoption.
Affiliation(s)
- Lina Weinert
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Julia Müller
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Laura Svensson
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Oliver Heinze
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
64
Moorman LP. Principles for Real-World Implementation of Bedside Predictive Analytics Monitoring. Appl Clin Inform 2021; 12:888-896. [PMID: 34553360 PMCID: PMC8458037 DOI: 10.1055/s-0041-1735183] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
A new development in the practice of medicine is Artificial Intelligence-based predictive analytics that forewarn clinicians of future deterioration of their patients. This proactive opportunity, though, is different from the reactive stance that clinicians traditionally take. Implementing these tools requires new ideas about how to educate clinician users to facilitate trust and adoption and to promote sustained use. Our real-world hospital experience implementing a predictive analytics monitoring system that uses electronic health record and continuous monitoring data has taught us principles that we believe to be applicable to the implementation of other such analytics systems within the health care environment. These principles are listed below:
- To promote trust, the science must be understandable.
- To enhance uptake, the workflow should not be impacted greatly.
- To maximize buy-in, engagement at all levels is important.
- To ensure relevance, the education must be tailored to the clinical role and hospital culture.
- To lead to clinical action, the information must integrate into clinical care.
- To promote sustainability, there should be periodic support interactions after formal implementation.
Affiliation(s)
- Liza Prudente Moorman
- Clinical Implementation Specialist, Advanced Medical Predictive Devices, Diagnostics, and Displays (AMP3D), Charlottesville, Virginia, United States
65
Botwe BO, Antwi WK, Arkoh S, Akudjedu TN. Radiographers' perspectives on the emerging integration of artificial intelligence into diagnostic imaging: The Ghana study. J Med Radiat Sci 2021; 68:260-268. [PMID: 33586361 PMCID: PMC8424310 DOI: 10.1002/jmrs.460] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Accepted: 01/16/2021] [Indexed: 12/19/2022] Open
Abstract
INTRODUCTION The integration of artificial intelligence (AI) systems into medical imaging is advancing practice and patient care, and is thought likely to further revolutionise the entire field in the near future. This study explored Ghanaian radiographers' perspectives on the integration of AI into medical imaging. METHODS A cross-sectional online survey of registered Ghanaian radiographers was conducted over a 3-month period (February-April 2020). The survey sought information relating to demography, general perspectives on AI, and implementation issues. Descriptive and inferential statistics were used for data analyses. RESULTS A response rate of 64.5% (151/234) was achieved. The majority of respondents (n = 122, 80.8%) agreed that AI technology is the future of medical imaging, and most (n = 131, 87.4%) indicated that AI would have an overall positive impact on medical imaging practice. However, many expressed fears about AI-related errors (n = 126, 83.4%), while others expressed concerns relating to job security (n = 35, 23.2%). High equipment cost, lack of knowledge, and fear of cyber threats were identified as factors hindering AI implementation in Ghana. CONCLUSIONS The radiographers who responded to this survey demonstrated a positive attitude towards the integration of AI into medical imaging. However, there were concerns about AI-related errors, job displacement, and salary reduction which need to be addressed. Lack of knowledge, high equipment cost, and cyber threats could impede the implementation of AI in medical imaging in Ghana. These findings are likely comparable to those in most low-resource countries, and we suggest more education to promote the credibility of AI in practice.
Affiliation(s)
- Benard O. Botwe
- Department of Radiography, School of Biomedical and Allied Health Sciences, College of Health Sciences, University of Ghana, Accra, Ghana
- William K. Antwi
- Department of Radiography, School of Biomedical and Allied Health Sciences, College of Health Sciences, University of Ghana, Accra, Ghana
- Samuel Arkoh
- Department of Radiography, School of Biomedical and Allied Health Sciences, College of Health Sciences, University of Ghana, Accra, Ghana
- Theophilus N. Akudjedu
- Department of Medical Science & Public Health, Faculty of Health & Social Sciences, Institute of Medical Imaging & Visualisation, Bournemouth University, Poole, UK
66
Threat of racial and economic inequality increases preference for algorithm decision-making. COMPUTERS IN HUMAN BEHAVIOR 2021. [DOI: 10.1016/j.chb.2021.106859] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
67
Aggarwal R, Farag S, Martin G, Ashrafian H, Darzi A. Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-sectional Survey. J Med Internet Res 2021; 23:e26162. [PMID: 34236994 PMCID: PMC8430862 DOI: 10.2196/26162] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 04/04/2021] [Accepted: 07/05/2021] [Indexed: 12/25/2022] Open
Abstract
Background Considerable research is being conducted into how artificial intelligence (AI) can be effectively applied to health care. However, the successful implementation of AI requires large amounts of health data for training and testing algorithms. As such, there is a need to understand the perspectives and viewpoints of patients regarding the use of their health data in AI research. Objective We surveyed a large sample of patients to identify their current awareness of health data research and to obtain their opinions and views on data sharing for AI research purposes and on the use of AI technology on health care data. Methods A cross-sectional survey of patients was conducted at a large multisite teaching hospital in the United Kingdom. Data were collected on patient and public views about sharing health data for research and the use of AI on health data. Results A total of 408 participants completed the survey. The respondents had generally low levels of prior knowledge about AI. Most were comfortable with sharing health data with the National Health Service (NHS) (318/408, 77.9%) or universities (268/408, 65.7%), but far fewer with commercial organizations such as technology companies (108/408, 26.4%). The majority endorsed AI research on health care data (357/408, 87.4%) and health care imaging (353/408, 86.4%) in a university setting, provided that concerns about privacy, reidentification of anonymized health care data, and consent processes were addressed. Conclusions There were significant variations in patient perceptions, levels of support, and understanding of health data research and AI. Greater public engagement and debate are necessary to ensure the acceptability of AI research and its successful integration into clinical practice in the future.
Affiliation(s)
- Ravi Aggarwal
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
- Soma Farag
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
- Guy Martin
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
- Hutan Ashrafian
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
- Ara Darzi
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
68
Chen Y, Stavropoulou C, Narasinkan R, Baker A, Scarbrough H. Professionals' responses to the introduction of AI innovations in radiology and their implications for future adoption: a qualitative study. BMC Health Serv Res 2021; 21:813. [PMID: 34389014 PMCID: PMC8364018 DOI: 10.1186/s12913-021-06861-y] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Accepted: 08/05/2021] [Indexed: 12/13/2022] Open
Abstract
Background Artificial Intelligence (AI) innovations in radiology offer a potential solution to the increasing demand for imaging tests and the ongoing workforce crisis. Crucial to their adoption is the involvement of different professional groups, namely radiologists and radiographers, who work interdependently but whose perceptions and responses towards AI may differ. We aim to explore the knowledge, awareness and attitudes towards AI amongst professional groups in radiology, and to analyse the implications for the future adoption of these technologies into practice. Methods We conducted 18 semi-structured interviews with 12 radiologists and 6 radiographers from four breast units in National Health Services (NHS) organisations and one focus group with 8 radiographers from a fifth NHS breast unit, between 2018 and 2020. Results We found that radiographers and radiologists vary with respect to their awareness and knowledge around AI. Through their professional networks, conference attendance, and contacts with industry developers, radiologists receive more information and acquire more knowledge of the potential applications of AI. Radiographers instead rely more on localized personal networks for information. Our results also show that although both groups believe AI innovations offer a potential solution to workforce shortages, they differ significantly regarding the impact they believe it will have on their professional roles. Radiologists believe AI has the potential to take on more repetitive tasks and allow them to focus on more interesting and challenging work. They are less concerned that AI technology might constrain their professional role and autonomy. Radiographers showed greater concern about the potential impact that AI technology could have on their roles and skills development. They were less confident of their ability to respond positively to the potential risks and opportunities posed by AI technology. 
Conclusions In summary, our findings suggest that professional responses to AI are linked to existing work roles, but are also mediated by differences in knowledge and attitudes attributable to inter-professional differences in status and identity. These findings question broad-brush assertions about the future deskilling impact of AI which neglect the need for AI innovations in healthcare to be integrated into existing work processes subject to high levels of professional autonomy.
Affiliation(s)
- Yaru Chen
- Centre for Healthcare Innovation Research, City, University of London, London, UK
- Charitini Stavropoulou
- Centre for Healthcare Innovation Research, City, University of London, London, UK; School of Health Sciences, City, University of London, Northampton Square, London, EC1V 0HB, UK
- Radhika Narasinkan
- Centre for Healthcare Innovation Research, City, University of London, London, UK
- Adrian Baker
- Centre for Healthcare Innovation Research, City, University of London, London, UK
- Harry Scarbrough
- Centre for Healthcare Innovation Research, City, University of London, London, UK; Bayes Business School, City, University of London, 106 Bunhill Row, London, EC1Y 8TZ, UK
69
Nasseef OA, Baabdullah AM, Alalwan AA, Lal B, Dwivedi YK. Artificial intelligence-based public healthcare systems: G2G knowledge-based exchange to enhance the decision-making process. GOVERNMENT INFORMATION QUARTERLY 2021. [DOI: 10.1016/j.giq.2021.101618] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
70
Santos JC, Wong JHD, Pallath V, Ng KH. The perceptions of medical physicists towards relevance and impact of artificial intelligence. Phys Eng Sci Med 2021; 44:833-841. [PMID: 34283393 DOI: 10.1007/s13246-021-01036-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 07/13/2021] [Indexed: 01/04/2023]
Abstract
Artificial intelligence (AI) is an innovative tool with the potential to impact medical physicists' clinical practices, research, and the profession. The relevance of AI and its impact on clinical practice and the routine of professionals in medical physics were evaluated by medical physicists and researchers in this field. An online survey questionnaire was designed for distribution to professionals and students in medical physics around the world. In addition to demographic questions, we surveyed opinions on the role of AI in medical physicists' practices, the possibility of AI threatening or disrupting medical physicists' practices and careers, the need for medical physicists to acquire knowledge on AI, and the need for teaching AI in postgraduate medical physics programmes. The level of knowledge of medical physicists on AI was also surveyed. A total of 1019 respondents from 94 countries participated. More than 85% of the respondents agreed that AI would play an essential role in medical physicists' practices, that AI should be taught in postgraduate medical physics programmes, and that more applications, such as quality control (QC) and treatment planning, would be performed by AI. Half of the respondents thought AI would not threaten or disrupt medical physicists' practices. AI knowledge was acquired mainly through self-teaching and work-related activities. Nonetheless, many (40%) reported that they have no skill in AI. The general perception of medical physicists was that AI is here to stay and will influence our practices. Medical physicists should be prepared with education and training for this new reality.
Affiliation(s)
- Josilene C Santos
- Department of Nuclear Physics, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Jeannie Hsiu Ding Wong
- Department of Biomedical Imaging, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia
- Vinod Pallath
- Medical Education and Research Development Unit, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia
- Kwan Hoong Ng
- Department of Biomedical Imaging, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia
71
Scott IA, Abdel-Hafez A, Barras M, Canaris S. What is needed to mainstream artificial intelligence in health care? AUST HEALTH REV 2021; 45:591-596. [PMID: 34162464 DOI: 10.1071/ah21034] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Accepted: 04/27/2021] [Indexed: 11/23/2022]
Abstract
Artificial intelligence (AI) has become a mainstream technology in many industries, but not yet in health care. Although basic research and commercial investment are burgeoning across various clinical disciplines, AI remains relatively non-existent in most healthcare organisations. This is despite hundreds of AI applications having passed proof-of-concept phase, and scores receiving regulatory approval overseas. AI has considerable potential to optimise multiple care processes, maximise workforce capacity, reduce waste and costs, and improve patient outcomes. The current obstacles to wider AI adoption in health care and the pre-requisites for its successful development, evaluation and implementation need to be defined.
Affiliation(s)
- Ian A Scott
- Princess Alexandra Hospital, Ipswich Road, Brisbane, Qld, Australia
- Ahmad Abdel-Hafez
- Division of Clinical Informatics, Metro South Hospital and Health Service, 199 Ipswich Road, Brisbane, Qld, Australia
- Michael Barras
- Princess Alexandra Hospital, Ipswich Road, Brisbane, Qld, Australia
- Stephen Canaris
- Division of Clinical Informatics, Metro South Hospital and Health Service, 199 Ipswich Road, Brisbane, Qld, Australia
72
van der Laak J, Litjens G, Ciompi F. Deep learning in histopathology: the path to the clinic. Nat Med 2021; 27:775-784. [PMID: 33990804 DOI: 10.1038/s41591-021-01343-4] [Citation(s) in RCA: 267] [Impact Index Per Article: 89.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Accepted: 03/31/2021] [Indexed: 02/08/2023]
Abstract
Machine learning techniques have great potential to improve medical diagnostics, offering ways to improve accuracy, reproducibility and speed, and to ease workloads for clinicians. In the field of histopathology, deep learning algorithms have been developed that perform similarly to trained pathologists for tasks such as tumor detection and grading. However, despite these promising results, very few algorithms have reached clinical implementation, challenging the balance between hope and hype for these new techniques. This Review provides an overview of the current state of the field, as well as describing the challenges that still need to be addressed before artificial intelligence in histopathology can achieve clinical value.
Affiliation(s)
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
73
Lennox-Chhugani N, Chen Y, Pearson V, Trzcinski B, James J. Women's attitudes to the use of AI image readers: a case study from a national breast screening programme. BMJ Health Care Inform 2021; 28:bmjhci-2020-100293. [PMID: 33795236 PMCID: PMC8021737 DOI: 10.1136/bmjhci-2020-100293] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2020] [Revised: 03/01/2021] [Accepted: 03/08/2021] [Indexed: 12/11/2022] Open
Abstract
Background Researchers and developers are evaluating the use of mammogram readers that use artificial intelligence (AI) in clinical settings. Objectives This study examines the attitudes of women, both current and future users of breast screening, towards the use of AI in mammogram reading. Methods We used a cross-sectional, mixed methods study design with data from survey responses and focus groups. The research took place in four National Health Service (NHS) hospitals in England, where we approached female workers over the age of 18 years and their immediate friends and family. We collected 4096 responses. Results Through descriptive statistical analysis, we found that women of screening age (≥50 years) were less likely than women under screening age to use technology apps for healthcare advice (likelihood ratio=0.85, 95% CI 0.82 to 0.89, p<0.001). They were also less likely than women under screening age to agree that AI can have a positive effect on society (likelihood ratio=0.89, 95% CI 0.84 to 0.95, p<0.001). However, they were more likely to feel positive about AI being used to read mammograms (likelihood ratio=1.09, 95% CI 1.02 to 1.17, p=0.009). Discussion and Conclusions Women of screening age are ready to accept the use of AI in breast screening but are less likely to use other AI-based health applications. A large number of women were undecided, or had mixed views, about the use of AI generally, and they remain to be convinced that it can be trusted.
Affiliation(s)
- Yan Chen
- School of Medicine, University of Nottingham, Nottingham, UK
- Veronica Pearson
- East Midlands Imaging Network, Nottingham University Hospitals NHS Trust, Nottingham, UK
- Jonathan James
- Nottingham Breast Institute, Nottingham University Hospitals NHS Trust, Nottingham, UK
74
Shinners L, Aggar C, Grace S, Smith S. Exploring healthcare professionals' perceptions of artificial intelligence: Validating a questionnaire using the e-Delphi method. Digit Health 2021; 7:20552076211003433. [PMID: 33815816 PMCID: PMC7995296 DOI: 10.1177/20552076211003433] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Accepted: 02/23/2021] [Indexed: 01/15/2023] Open
Abstract
Objective The aim of this study was to draw upon the collective knowledge of experts in the fields of health and technology to develop a questionnaire measuring healthcare professionals' perceptions of Artificial Intelligence (AI). Methods The panel for this study comprised carefully selected participants who demonstrated an interest and/or involvement in AI from the fields of health or information technology. Recruitment was conducted via email invitations that included study and consent information. Data were collected over three rounds in the form of an online survey, an online group meeting, and email communication. A 75% median threshold was used to define consensus. Results Between January and March 2019, five healthcare professionals and three IT experts participated in three rounds of the study to reach consensus on the structure and content of the questionnaire. In Round 1, panel members identified issues about general understanding of AI and achieved consensus on nine draft questionnaire items. In Round 2, the panel achieved consensus on demographic questions, and comprehensive group discussion resulted in the development of two further questionnaire items for inclusion. In a final e-Delphi round, a draft of the final questionnaire was distributed via email to the panel members for comment. No further amendments were put forward, and 100% consensus was achieved. Conclusion A modified e-Delphi method was used to develop and validate a questionnaire exploring healthcare professionals' perceptions of AI. The e-Delphi method was successful in achieving consensus from an interdisciplinary panel of experts from health and IT. Further research is recommended to test the reliability of this questionnaire.
Affiliation(s)
- Lucy Shinners
- Faculty of Health, Southern Cross University, Gold Coast Airport, Bilinga, Australia
- Christina Aggar
- Faculty of Health, Southern Cross University, Gold Coast Airport, Bilinga, Australia
- Sandra Grace
- Faculty of Health, Southern Cross University, East Lismore, Australia
- Stuart Smith
- Faculty of Health, Southern Cross University, Coffs Harbour, Australia
75
Knop M, Weber S, Mueller M, Niehaves B. Human Factors and Technological Characteristics Influencing the Interaction with AI-enabled Clinical Decision Support Systems: A Literature Review (Preprint). JMIR Hum Factors 2021; 9:e28639. [PMID: 35323118 PMCID: PMC8990344 DOI: 10.2196/28639] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 06/02/2021] [Accepted: 02/07/2022] [Indexed: 01/22/2023] Open
Abstract
Background The digitization and automation of diagnostics and treatments promise to alter the quality of health care and improve patient outcomes, whereas the undersupply of medical personnel, high workload on medical professionals, and medical case complexity increase. Clinical decision support systems (CDSSs) have been proven to help medical professionals in their everyday work through their ability to process vast amounts of patient information. However, comprehensive adoption is partially disrupted by specific technological and personal characteristics. With the rise of artificial intelligence (AI), CDSSs have become an adaptive technology with human-like capabilities and are able to learn and change their characteristics over time. However, research has not reflected on the characteristics and factors essential for effective collaboration between human actors and AI-enabled CDSSs. Objective Our study aims to summarize the factors influencing effective collaboration between medical professionals and AI-enabled CDSSs. These factors are essential for medical professionals, management, and technology designers to reflect on the adoption, implementation, and development of an AI-enabled CDSS. Methods We conducted a literature review including 3 different meta-databases, screening over 1000 articles and including 101 articles for full-text assessment. Of the 101 articles, 7 (6.9%) met our inclusion criteria and were analyzed for our synthesis. Results We identified the technological characteristics and human factors that appear to have an essential effect on the collaboration of medical professionals and AI-enabled CDSSs in accordance with our research objective, namely, training data quality, performance, explainability, adaptability, medical expertise, technological expertise, personality, cognitive biases, and trust. 
Comparing our results with those from research on non-AI CDSSs, some characteristics and factors retain their importance, whereas others gain or lose relevance owing to the uniqueness of human-AI interactions. However, only a few (1/7, 14%) of the studies mentioned the theoretical foundations and patient outcomes related to AI-enabled CDSSs. Conclusions Our study provides a comprehensive overview of the relevant characteristics and factors that influence the interaction and collaboration between medical professionals and AI-enabled CDSSs. The currently limited theoretical foundations hinder the creation of adequate concepts and models to explain and predict the interrelations between these characteristics and factors. For an appropriate evaluation of human-AI collaboration, patient outcomes and the role of patients in the decision-making process should be considered.
Affiliation(s)
- Michael Knop
- Department of Information Systems, University of Siegen, Siegen, Germany
- Sebastian Weber
- Department of Information Systems, University of Siegen, Siegen, Germany
- Marius Mueller
- Department of Information Systems, University of Siegen, Siegen, Germany
- Bjoern Niehaves
- Department of Information Systems, University of Siegen, Siegen, Germany
|
76
|
Data preparation for artificial intelligence in medical imaging: A comprehensive guide to open-access platforms and tools. Phys Med 2021; 83:25-37. [DOI: 10.1016/j.ejmp.2021.02.007] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 01/27/2021] [Accepted: 02/15/2021] [Indexed: 02/06/2023] Open
|
77
|
Artificial Intelligence and the Medical Physicist: Welcome to the Machine. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11041691] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Artificial intelligence (AI) is a branch of computer science dedicated to giving machines or computers the ability to perform human-like cognitive functions, such as learning, problem-solving, and decision-making. Since it shows performance superior to that of well-trained human beings in many areas, such as image classification, object detection, speech recognition, and decision-making, AI is expected to profoundly change every area of science, including healthcare and the clinical application of physics to healthcare, referred to as medical physics. As a result, the Italian Association of Medical Physics (AIFM) has created the “AI for Medical Physics” (AI4MP) group with the aims of coordinating efforts, facilitating communication, and sharing knowledge on AI among the medical physicists (MPs) in Italy. The purpose of this review is to summarize the main applications of AI in medical physics, describe the skills of MPs in research and clinical applications of AI, and define the major challenges of AI in healthcare.
|
78
|
Samuel G, Diedericks H, Derrick G. Population health AI researchers' perceptions of the public portrayal of AI: A pilot study. PUBLIC UNDERSTANDING OF SCIENCE (BRISTOL, ENGLAND) 2021; 30:196-211. [PMID: 33084490 PMCID: PMC7859568 DOI: 10.1177/0963662520965490] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
This article reports how 18 UK and Canadian population health artificial intelligence researchers in Higher Education Institutions perceive the use of artificial intelligence systems in their research, and how this compares with their perceptions of the media portrayal of artificial intelligence systems. This is triangulated with a small scoping analysis of how UK and Canadian news articles portray artificial intelligence systems associated with health research and care. Interviewees had concerns about what they perceived as sensationalist reporting of artificial intelligence systems - a finding reflected in the media analysis. In line with Pickersgill's concept of 'epistemic modesty', they considered artificial intelligence systems better perceived as non-exceptionalist methodological tools that were uncertain and unexciting. Adopting 'epistemic modesty' was sometimes hindered by the stakeholders to whom the research is disseminated, who may be less interested in hearing about the uncertainties of scientific practice, with implications for both research and policy.
Affiliation(s)
- Gabrielle Samuel
- Department of Global Health & Social Medicine, King’s College London, Bush House, 30 Aldwych, London, WC2B 4BG, UK.
|
79
|
Diaz O, Guidi G, Ivashchenko O, Colgan N, Zanca F. Artificial intelligence in the medical physics community: An international survey. Phys Med 2021; 81:141-146. [DOI: 10.1016/j.ejmp.2020.11.037] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Revised: 10/24/2020] [Accepted: 11/30/2020] [Indexed: 12/13/2022] Open
|
80
|
Sandhu S, Lin AL, Brajer N, Sperling J, Ratliff W, Bedoya AD, Balu S, O'Brien C, Sendak MP. Integrating a Machine Learning System Into Clinical Workflows: Qualitative Study. J Med Internet Res 2020; 22:e22421. [PMID: 33211015 PMCID: PMC7714645 DOI: 10.2196/22421] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Revised: 09/16/2020] [Accepted: 10/26/2020] [Indexed: 12/22/2022] Open
Abstract
Background Machine learning models have the potential to improve diagnostic accuracy and management of acute conditions. Despite growing efforts to evaluate and validate such models, little is known about how to best translate and implement these products as part of routine clinical care. Objective This study aims to explore the factors influencing the integration of a machine learning sepsis early warning system (Sepsis Watch) into clinical workflows. Methods We conducted semistructured interviews with 15 frontline emergency department physicians and rapid response team nurses who participated in the Sepsis Watch quality improvement initiative. Interviews were audio recorded and transcribed. We used a modified grounded theory approach to identify key themes and analyze qualitative data. Results A total of 3 dominant themes emerged: perceived utility and trust, implementation of Sepsis Watch processes, and workforce considerations. Participants described their unfamiliarity with machine learning models. As a result, clinician trust was influenced by the perceived accuracy and utility of the model from personal program experience. Implementation of Sepsis Watch was facilitated by the easy-to-use tablet application and communication strategies that were developed by nurses to share model outputs with physicians. Barriers included the flow of information among clinicians and gaps in knowledge about the model itself and broader workflow processes. Conclusions This study generated insights into how frontline clinicians perceived machine learning models and the barriers to integrating them into clinical workflows. These findings can inform future efforts to implement machine learning interventions in real-world settings and maximize the adoption of these interventions.
Affiliation(s)
- Sahil Sandhu
- Trinity College of Arts & Sciences, Duke University, Durham, NC, United States
- Anthony L Lin
- Duke University School of Medicine, Durham, NC, United States
- Nathan Brajer
- Duke University School of Medicine, Durham, NC, United States
- Jessica Sperling
- Social Science Research Institute, Duke University, Durham, NC, United States
- William Ratliff
- Duke Institute for Health Innovation, Durham, NC, United States
- Armando D Bedoya
- Division of Pulmonary, Allergy, and Critical Care Medicine, Duke University School of Medicine, Durham, NC, United States
- Suresh Balu
- Duke Institute for Health Innovation, Durham, NC, United States
- Cara O'Brien
- Department of Medicine, Duke University School of Medicine, Durham, NC, United States
- Mark P Sendak
- Duke Institute for Health Innovation, Durham, NC, United States
|
81
|
Castagno S, Khalifa M. Perceptions of Artificial Intelligence Among Healthcare Staff: A Qualitative Survey Study. Front Artif Intell 2020; 3:578983. [PMID: 33733219 PMCID: PMC7861214 DOI: 10.3389/frai.2020.578983] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Accepted: 09/22/2020] [Indexed: 01/16/2023] Open
Abstract
Objectives: The medical community is in agreement that artificial intelligence (AI) will have a radical impact on patient care in the near future. The purpose of this study is to assess the awareness of AI technologies among health professionals and to investigate their perceptions toward AI applications in medicine. Design: A web-based Google Forms survey was distributed via the Royal Free London NHS Foundation Trust e-newsletter. Setting: Only staff working at the NHS Foundation Trust received an invitation to complete the online questionnaire. Participants: 98 healthcare professionals out of 7,538 (response rate 1.3%; CI 95%; margin of error 9.64%) completed the survey, including medical doctors, nurses, therapists, managers, and others. Primary outcome: To investigate the prior knowledge of health professionals on the subject of AI as well as their attitudes and worries about its current and future applications. Results: 64% of respondents reported never coming across applications of AI in their work and 87% did not know the difference between machine learning and deep learning, although 50% knew at least one of the two terms. Furthermore, only 5% stated using speech recognition or transcription applications on a daily basis, while 63% never utilize them. 80% of participants believed there may be serious privacy issues associated with the use of AI and 40% considered AI to be potentially even more dangerous than nuclear weapons. However, 79% also believed AI could be useful or extremely useful in their field of work and only 10% were worried AI will replace them at their job. Conclusions: Despite agreeing on the usefulness of AI in the medical field, most health professionals lack a full understanding of the principles of AI and are worried about potential consequences of its widespread use in clinical practice. 
The cooperation of healthcare workers is crucial for the integration of AI into clinical practice and without it the NHS may miss out on an exceptionally rewarding opportunity. This highlights the need for better education and clear regulatory frameworks.
Affiliation(s)
- Simone Castagno
- Department of Interventional Radiology, Royal Free Hospital, London, United Kingdom
- Mohamed Khalifa
- Department of Interventional Radiology, Royal Free Hospital, London, United Kingdom
|
82
|
Abstract
PURPOSE OF REVIEW In this article, we review the current state of artificial intelligence applications in retinopathy of prematurity (ROP) and provide insight on challenges as well as strategies for bringing these algorithms to the bedside. RECENT FINDINGS In the past few years, there has been a dramatic shift from machine learning approaches based on feature extraction to 'deep' convolutional neural networks for artificial intelligence applications. Several artificial intelligence approaches for ROP have demonstrated adequate proof-of-concept performance in research studies. The next steps are to determine whether these algorithms are robust to variable clinical and technical parameters in practice. Integration of artificial intelligence into ROP screening and treatment is limited by the generalizability of the algorithms, that is, their ability to maintain performance on unseen data, and by the integration of artificial intelligence technology into new or existing clinical workflows. SUMMARY Real-world implementation of artificial intelligence for ROP diagnosis will require massive efforts targeted at developing standards for data acquisition, true external validation, and demonstration of feasibility. We must now focus on ethical, technical, clinical, regulatory, and financial considerations to bring this technology to the infant bedside and realize the promise it offers to reduce preventable blindness from ROP.
|
83
|
Esmaeilzadeh P. Use of AI-based tools for healthcare purposes: a survey study from consumers' perspectives. BMC Med Inform Decis Mak 2020; 20:170. [PMID: 32698869 PMCID: PMC7376886 DOI: 10.1186/s12911-020-01191-1] [Citation(s) in RCA: 96] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 07/16/2020] [Indexed: 12/31/2022] Open
Abstract
BACKGROUND Several studies highlight the effects of artificial intelligence (AI) systems on healthcare delivery. AI-based tools may improve prognosis, diagnostics, and care planning. It is believed that AI will be an integral part of healthcare services in the near future and will be incorporated into several aspects of clinical care. Thus, many technology companies and governmental projects have invested in producing AI-based clinical tools and medical applications. Patients can be among the most important beneficiaries and users of AI-based applications, and their perceptions may affect the widespread use of AI-based tools. Patients must be assured that they will not be harmed by AI-based devices and that they will instead benefit from using AI technology for healthcare purposes. Although AI can enhance healthcare outcomes, possible dimensions of concern and risk should be addressed before its integration with routine clinical care. METHODS We developed a model based mainly on value perceptions, owing to the specificity of the healthcare field. This study aims to examine the perceived benefits and risks of AI medical devices with clinical decision support (CDS) features from consumers' perspectives. We used an online survey to collect data from 307 individuals in the United States. RESULTS The proposed model identifies the sources of motivation and pressure for patients in the development of AI-based devices. The results show that technological, ethical (trust factors), and regulatory concerns significantly contribute to the perceived risks of using AI applications in healthcare. Of the three categories, technological concerns (i.e., performance and communication features) are the most significant predictors of risk beliefs. CONCLUSIONS This study sheds more light on the factors affecting perceived risks and proposes recommendations on how to practically reduce these concerns.
The findings of this study provide implications for research and practice in the area of AI-based CDS. Regulatory agencies, in cooperation with healthcare institutions, should establish normative standards and evaluation guidelines for the implementation and use of AI in healthcare. Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethical aspects of AI-based services.
Affiliation(s)
- Pouyan Esmaeilzadeh
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, 33199, USA.
|
84
|
Alami H, Lehoux P, Auclair Y, de Guise M, Gagnon MP, Shaw J, Roy D, Fleet R, Ag Ahmed MA, Fortin JP. Artificial Intelligence and Health Technology Assessment: Anticipating a New Level of Complexity. J Med Internet Res 2020; 22:e17707. [PMID: 32406850 PMCID: PMC7380986 DOI: 10.2196/17707] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2020] [Revised: 04/25/2020] [Accepted: 05/13/2020] [Indexed: 12/12/2022] Open
Abstract
Artificial intelligence (AI) is seen as a strategic lever to improve access, quality, and efficiency of care and services and to build learning and value-based health systems. Many studies have examined the technical performance of AI within an experimental context. These studies provide limited insights into the issues that its use in a real-world context of care and services raises. To help decision makers address these issues in a systemic and holistic manner, this viewpoint paper relies on the health technology assessment core model to contrast the expectations of the health sector toward the use of AI with the risks that should be mitigated for its responsible deployment. The analysis adopts the perspective of payers (ie, health system organizations and agencies) because of their central role in regulating, financing, and reimbursing novel technologies. This paper suggests that AI-based systems should be seen as a health system transformation lever, rather than a discrete set of technological devices. Their use could bring significant changes and impacts at several levels: technological, clinical, human and cognitive (patient and clinician), professional and organizational, economic, legal, and ethical. The assessment of AI's value proposition should thus go beyond technical performance and cost logic by performing a holistic analysis of its value in a real-world context of care and services. To guide AI development, generate knowledge, and draw lessons that can be translated into action, the right political, regulatory, organizational, clinical, and technological conditions for innovation should be created as a first step.
Affiliation(s)
- Hassane Alami
- Public Health Research Center, Université de Montréal, Montreal, QC, Canada
- Department of Health Management, Evaluation and Policy, École de santé publique de l'Université de Montréal, Montreal, QC, Canada
- Institut national d'excellence en santé et services sociaux, Montréal, QC, Canada
- Pascale Lehoux
- Public Health Research Center, Université de Montréal, Montreal, QC, Canada
- Department of Health Management, Evaluation and Policy, École de santé publique de l'Université de Montréal, Montreal, QC, Canada
- Yannick Auclair
- Institut national d'excellence en santé et services sociaux, Montréal, QC, Canada
- Michèle de Guise
- Institut national d'excellence en santé et services sociaux, Montréal, QC, Canada
- Marie-Pierre Gagnon
- Research Center on Healthcare and Services in Primary Care, Université Laval, Quebec, QC, Canada
- Faculty of Nursing Science, Université Laval, Quebec, QC, Canada
- James Shaw
- Joint Centre for Bioethics, University of Toronto, Toronto, ON, Canada
- Institute for Health System Solutions and Virtual Care, Women's College Hospital, Toronto, ON, Canada
- Denis Roy
- Institut national d'excellence en santé et services sociaux, Montréal, QC, Canada
- Richard Fleet
- Research Center on Healthcare and Services in Primary Care, Université Laval, Quebec, QC, Canada
- Department of Family Medicine and Emergency Medicine, Faculty of Medicine, Université Laval, Quebec, QC, Canada
- Research Chair in Emergency Medicine, Université Laval - CHAU Hôtel-Dieu de Lévis, Lévis, QC, Canada
- Mohamed Ali Ag Ahmed
- Research Chair on Chronic Diseases in Primary Care, Université de Sherbrooke, Chicoutimi, QC, Canada
- Jean-Paul Fortin
- Research Center on Healthcare and Services in Primary Care, Université Laval, Quebec, QC, Canada
- Department of Social and Preventive Medicine, Faculty of Medicine, Université Laval, Quebec, QC, Canada
|
85
|
Xiong J, Zuo M. Adoption of the mobile platform of medical and senior care in China: An empirical examination of perceived value. JMIR Aging 2020. [DOI: 10.2196/20684] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|