1
Franco D’Souza R, Mathew M, Mishra V, Surapaneni KM. Twelve tips for addressing ethical concerns in the implementation of artificial intelligence in medical education. Med Educ Online 2024; 29:2330250. PMID: 38566608; PMCID: PMC10993743; DOI: 10.1080/10872981.2024.2330250.
Abstract
Artificial Intelligence (AI) holds immense potential for revolutionizing medical education and healthcare. Despite its proven benefits, the full integration of AI faces hurdles, with ethical concerns standing out as a key obstacle. Thus, educators should be equipped to address the ethical issues that arise and ensure the seamless integration and sustainability of AI-based interventions. This article presents twelve essential tips for addressing the major ethical concerns in the use of AI in medical education. These include emphasizing transparency, addressing bias, validating content, prioritizing data protection, obtaining informed consent, fostering collaboration, training educators, empowering students, regularly monitoring, establishing accountability, adhering to standard guidelines, and forming an ethics committee to address the issues that arise in the implementation of AI. By adhering to these tips, medical educators and other stakeholders can foster a responsible and ethical integration of AI in medical education, ensuring its long-term success and positive impact.
Affiliation(s)
- Russell Franco D’Souza
- Department of Education, UNESCO Chair in Bioethics, Melbourne, Australia
- Department of Organisational Psychological Medicine, International Institute of Organisational Psychological Medicine, Melbourne, Australia
- Mary Mathew
- Department of Pathology, Kasturba Medical College, Manipal, Manipal Academy of Higher Education (MAHE), Manipal, India
- Vedprakash Mishra
- School of Higher Education and Research, Datta Meghe Institute of Higher Education and Research (Deemed to be University), Nagpur, India
- Krishna Mohan Surapaneni
- Department of Biochemistry, Panimalar Medical College Hospital & Research Institute, Chennai, India
- Department of Medical Education, Panimalar Medical College Hospital & Research Institute, Chennai, India
2
Ibrahim AM, Abdel-Aziz HR, Mohamed HAH, Zaghamir DEF, Wahba NMI, Hassan GA, Shaban M, El-Nablaway M, Aldughmi ON, Aboelola TH. Balancing confidentiality and care coordination: challenges in patient privacy. BMC Nurs 2024; 23:564. PMID: 39148055; PMCID: PMC11328515; DOI: 10.1186/s12912-024-02231-1.
Abstract
BACKGROUND In the digital age, maintaining patient confidentiality while ensuring effective care coordination poses significant challenges for healthcare providers, particularly nurses. AIM To investigate the challenges and strategies associated with balancing patient confidentiality and effective care coordination in the digital age. METHODS A cross-sectional study was conducted in a general hospital in Egypt to collect data from 150 nurses across various departments with at least six months of experience in patient care. Data were collected using six tools: Demographic Form, HIPAA Compliance Checklist, Privacy Impact Assessment (PIA) Tool, Data Sharing Agreement (DSA) Framework, EHR Privacy and Security Assessment Tool, and NIST Cybersecurity Framework. Validity and Reliability were ensured through pilot testing and factor analysis. RESULTS Participants were primarily aged 31-40 years (45%), with 75% female and 60% staff nurses. High compliance was observed in the HIPAA Compliance Checklist, especially in Administrative Safeguards (3.8 ± 0.5), indicating strong management and training processes, with an overall score of 85 ± 10. The PIA Tool showed robust privacy management, with Project Descriptions scoring 4.5 ± 0.3 and a total score of 30 ± 3. The DSA Framework had a mean total score of 20 ± 2, with Data Protection Measures scoring highest at 4.0 ± 0.4. The EHR assessments revealed high scores in Access Controls (4.4 ± 0.3) and Data Integrity Measures (4.3 ± 0.3), with an overall score of 22 ± 1.5. The NIST Cybersecurity Framework had a total score of 18 ± 2, with the highest scores in Protect (3.8) and lower in Detect (3.6). Strong positive correlations were found between HIPAA Compliance and EHR Privacy (r = 0.70, p < 0.05) and NIST Cybersecurity (r = 0.55, p < 0.05), reflecting effective data protection practices. 
CONCLUSION The study suggests that continuous improvement in privacy practices among healthcare providers, through ongoing training and comprehensive privacy frameworks, is vital for enhancing patient confidentiality and supporting effective care coordination.
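The associations reported in the results above (e.g., r = 0.70 between HIPAA compliance and EHR privacy scores) are Pearson correlation coefficients. As a minimal sketch, using made-up illustrative scores rather than the study's actual data, such a coefficient can be computed with NumPy:

```python
import numpy as np

# Hypothetical paired scores for illustration only (not the study's data):
# overall HIPAA compliance scores and EHR privacy/security scores per nurse.
hipaa_scores = np.array([80, 85, 90, 78, 88, 92, 84, 79])
ehr_scores = np.array([20, 22, 23, 19, 22, 24, 21, 20])

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
# element is the Pearson r between the two score vectors.
r = np.corrcoef(hipaa_scores, ehr_scores)[0, 1]
print(r)  # a value in [-1, 1]; strongly positive for this illustrative data
```

A significance test of r (the p < 0.05 reported in the abstract) would additionally require the sample size, e.g., via `scipy.stats.pearsonr`.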
Affiliation(s)
- Ateya Megahed Ibrahim
- College of Nursing, Prince Sattam Bin Abdulaziz University, Alkarj, Saudi Arabia
- Family and Community Health Nursing Department, Faculty of Nursing, Port Said University, Port Said City, Port Said, 42526, Egypt
- Hassanat Ramadan Abdel-Aziz
- College of Nursing, Prince Sattam Bin Abdulaziz University, Alkarj, Saudi Arabia
- Gerontological Nursing Department, Faculty of Nursing, Zagazig University, Zagazig, Egypt
- Heba Ali Hamed Mohamed
- Community Health Nursing Department, Faculty of Nursing, Mansoura University, Mansoura City, Dakahlia, Egypt
- Donia Elsaid Fathi Zaghamir
- College of Nursing, Prince Sattam Bin Abdulaziz University, Alkarj, Saudi Arabia
- Pediatric Nursing Department, Faculty of Nursing, Port Said University, Port Said City, 42526, Egypt
- Nadia Mohamed Ibrahim Wahba
- College of Nursing, Prince Sattam Bin Abdulaziz University, Alkarj, Saudi Arabia
- Psychiatric Nursing and Mental Health Department, Faculty of Nursing, Port Said University, Port Said, 42526, Egypt
- Ghada A Hassan
- Pediatric Nursing Department, Faculty of Nursing, Menoufia University, Shibin el Kom, Egypt
- Mostafa Shaban
- Community Health Nursing Department, College of Nursing, Jouf University, Sakaka, Al Jouf, 72388, Saudi Arabia
- Mohammad El-Nablaway
- Department of Basic Medical Sciences, College of Medicine, AlMaarefa University, P.O.Box 71666, 11597, Riyadh, Saudi Arabia
- Ohoud Naif Aldughmi
- Department of Medical and Surgical Nursing, Northern Border University, Arar, Saudi Arabia
3
Kinney M, Anastasiadou M, Naranjo-Zolotov M, Santos V. Expectation management in AI: A framework for understanding stakeholder trust and acceptance of artificial intelligence systems. Heliyon 2024; 10:e28562. PMID: 38576546; PMCID: PMC10990870; DOI: 10.1016/j.heliyon.2024.e28562.
Abstract
As artificial intelligence systems gain traction, their trustworthiness becomes paramount to harness their benefits and mitigate risks. This study underscores the pressing need for an expectation management framework to align stakeholder anticipations before any system-related activities, such as data collection, modeling, or implementation. To this end, we introduce a comprehensive framework tailored to capture end-user expectations specifically for trustworthy artificial intelligence systems. To ensure its relevance and robustness, we validated the framework via semi-structured interviews, encompassing questions rooted in the framework's constructs and principles. These interviews engaged fourteen diverse end users across the healthcare and education sectors, including physicians, teachers, and students. Through a meticulous qualitative analysis of the interview transcripts, we unearthed pivotal themes and discerned varying perspectives among the interviewee groups. Ultimately, our framework stands as a pivotal tool, paving the way for in-depth discussions about user expectations, illuminating the significance of various system attributes, and spotlighting potential challenges that might jeopardize the system's efficacy.
Affiliation(s)
- Marjorie Kinney
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312, Lisboa, Portugal
- Maria Anastasiadou
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312, Lisboa, Portugal
- Mijail Naranjo-Zolotov
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312, Lisboa, Portugal
- Vitor Santos
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312, Lisboa, Portugal
4
Shamszare H, Choudhury A. Clinicians' Perceptions of Artificial Intelligence: Focus on Workload, Risk, Trust, Clinical Decision Making, and Clinical Integration. Healthcare (Basel) 2023; 11:2308. PMID: 37628506; PMCID: PMC10454426; DOI: 10.3390/healthcare11162308.
Abstract
Artificial intelligence (AI) offers the potential to revolutionize healthcare, from improving diagnoses to patient safety. However, many healthcare practitioners are hesitant to adopt AI technologies fully. To understand why, this research explored clinicians' views on AI, especially their level of trust, their concerns about potential risks, and how they believe AI might affect their day-to-day workload. We surveyed 265 healthcare professionals from various specialties in the U.S. The survey aimed to understand their perceptions and any concerns they might have about AI in their clinical practice. We further examined how these perceptions might align with three hypothetical approaches to integrating AI into healthcare: no integration, sequential (step-by-step) integration, and parallel (side-by-side with current practices) integration. The results reveal that clinicians who view AI as a workload reducer are more inclined to trust it and are more likely to use it in clinical decision making. However, those perceiving higher risks with AI are less inclined to adopt it in decision making. While the role of clinical experience was found to be statistically insignificant in influencing trust in AI and AI-driven decision making, further research might explore other potential moderating variables, such as technical aptitude, previous exposure to AI, or the specific medical specialty of the clinician. By evaluating three hypothetical scenarios of AI integration in healthcare, our study elucidates the potential pitfalls of sequential AI integration and the comparative advantages of parallel integration. In conclusion, this study underscores the necessity of strategic AI integration into healthcare. AI should be perceived as a supportive tool rather than an intrusive entity, augmenting the clinicians' skills and facilitating their workflow rather than disrupting it. 
As we move towards an increasingly digitized future in healthcare, comprehending the interplay among AI technology, clinician perception, trust, and decision making is fundamental.
Affiliation(s)
- Avishek Choudhury
- Industrial and Management Systems Engineering, West Virginia University, Morgantown, WV 26506, USA
5
King H, Williams B, Treanor D, Randell R. How, for whom, and in what contexts will artificial intelligence be adopted in pathology? A realist interview study. J Am Med Inform Assoc 2023; 30:529-538. PMID: 36565465; PMCID: PMC9933065; DOI: 10.1093/jamia/ocac254.
Abstract
OBJECTIVE There is increasing interest in using artificial intelligence (AI) in pathology to improve accuracy and efficiency. Studies of clinicians' perceptions of AI have found only moderate acceptability, suggesting further research is needed regarding integration into clinical practice. This study aimed to explore stakeholders' theories concerning how and in what contexts AI is likely to become integrated into pathology. MATERIALS AND METHODS A literature review provided tentative theories that were revised through a realist interview study with 20 pathologists and 5 pathology trainees. Questions sought to elicit whether, and in what ways, the tentative theories fitted with interviewees' perceptions and experiences. Analysis focused on identifying the contextual factors that may support or constrain uptake of AI in pathology. RESULTS Interviews highlighted the importance of trust in AI, with interviewees emphasizing evaluation and the opportunity for pathologists to become familiar with AI as means for establishing trust. Interviewees expressed a desire to be involved in design and implementation of AI tools, to ensure such tools address pressing needs, but needs vary by subspecialty. Workflow integration is desired but whether AI tools should work automatically will vary according to the task and the context. CONCLUSIONS It must not be assumed that AI tools that provide benefit in one subspecialty will provide benefit in others. Pathologists should be involved in the decision to introduce AI, with opportunity to assess strengths and weaknesses. Further research is needed concerning the evidence required to satisfy pathologists regarding the benefits of AI.
Affiliation(s)
- Henry King
- School of Medicine, University of Leeds, Leeds, UK
- Bethany Williams
- Department of Pathology, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Darren Treanor
- School of Medicine, University of Leeds, Leeds, UK
- Department of Pathology, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Department of Clinical Pathology, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Rebecca Randell
- Faculty of Health Studies, University of Bradford, Bradford, UK
- Wolfson Centre for Applied Health Research, Bradford, UK
6
Wu C, Xu H, Bai D, Chen X, Gao J, Jiang X. Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis. BMJ Open 2023; 13:e066322. PMID: 36599634; PMCID: PMC9815015; DOI: 10.1136/bmjopen-2022-066322.
Abstract
OBJECTIVES Medical artificial intelligence (AI) has been widely applied in the clinical field due to its convenience and innovation. However, several policy and regulatory issues such as credibility, sharing of responsibility and ethics have raised concerns in the use of AI. It is therefore necessary to understand the general public's views on medical AI. Here, a meta-synthesis was conducted to analyse and summarise the public's understanding of the application of AI in the healthcare field, to provide recommendations for future use and management of AI in medical practice. DESIGN This was a meta-synthesis of qualitative studies. METHOD A search was performed on the following databases to identify studies published in English and Chinese: MEDLINE, CINAHL, Web of Science, Cochrane Library, Embase, PsycINFO, CNKI, Wanfang and VIP. The search was conducted from database inception to 25 December 2021. The meta-aggregation approach of JBI was used to summarise findings from qualitative studies, focusing on the public's perception of the application of AI in healthcare. RESULTS Of the 5128 studies screened, 12 met the inclusion criteria, hence were incorporated into analysis. Three synthesised findings were used as the basis of our conclusions, including advantages of medical AI from the public's perspective, ethical and legal concerns about medical AI from the public's perspective, and public suggestions on the application of AI in the medical field. CONCLUSION Results showed that the public acknowledges the unique advantages and convenience of medical AI. Meanwhile, several concerns about the application of medical AI were observed, most of which involve ethical and legal issues. The standard application and reasonable supervision of medical AI is key to ensuring its effective utilisation.
Based on the public's perspective, this analysis provides insights and suggestions for health managers on how to implement and apply medical AI smoothly, while ensuring safety in healthcare practice. PROSPERO REGISTRATION NUMBER CRD42022315033.
Affiliation(s)
- Chenxi Wu
- West China School of Nursing/West China Hospital, Sichuan University, Chengdu, Sichuan, China
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Huiqiong Xu
- West China School of Nursing, Sichuan University/Abdominal Oncology Ward, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, People's Republic of China
- Dingxi Bai
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Xinyu Chen
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Jing Gao
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Xiaolian Jiang
- West China School of Nursing/West China Hospital, Sichuan University, Chengdu, Sichuan, China
7
Choudhury A, Elkefi S. Acceptance, initial trust formation, and human biases in artificial intelligence: Focus on clinicians. Front Digit Health 2022; 4:966174. PMID: 36082231; PMCID: PMC9445304; DOI: 10.3389/fdgth.2022.966174.
Affiliation(s)
- Avishek Choudhury
- Industrial and Management Systems Engineering, Benjamin M. Statler College of Engineering and Mineral Resources, West Virginia University, Morgantown, WV, United States
- Correspondence: Avishek Choudhury
- Safa Elkefi
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, United States
8
Choudhury A. Factors influencing clinicians' willingness to use an AI-based clinical decision support system. Front Digit Health 2022; 4:920662. PMID: 36339516; PMCID: PMC9628998; DOI: 10.3389/fdgth.2022.920662.
Abstract
Background Given the opportunities created by artificial intelligence (AI) based decision support systems in healthcare, the vital question is whether clinicians are willing to use this technology as an integral part of clinical workflow. Purpose This study leverages validated questions to formulate an online survey and consequently explore cognitive human factors influencing clinicians' intention to use an AI-based Blood Utilization Calculator (BUC), an AI system embedded in the electronic health record that delivers data-driven personalized recommendations for the number of packed red blood cells to transfuse for a given patient. Method A purposeful sampling strategy was used to exclusively include BUC users who are clinicians in a university hospital in Wisconsin. We recruited 119 BUC users who completed the entire survey. We leveraged structural equation modeling to capture the direct and indirect effects of “AI Perception” and “Expectancy” on clinicians' intention to use the technology when mediated by “Perceived Risk”. Results The findings indicate a significant negative relationship concerning the direct impact of AI's perception on BUC Risk (β = −0.23, p < 0.001). Similarly, Expectancy had a significant negative effect on Risk (β = −0.49, p < 0.001). We also noted a significant negative impact of Risk on the Intent to use BUC (β = −0.34, p < 0.001). Regarding the indirect effect of Expectancy on the Intent to Use BUC, the findings show a significant positive impact mediated by Risk (β = 0.17, p = 0.004). The study noted a significant positive and indirect effect of AI Perception on the Intent to Use BUC when mediated by risk (β = 0.08, p = 0.027). Overall, this study demonstrated the influences of expectancy, perceived risk, and perception of AI on clinicians' intent to use BUC (an AI system). AI developers need to emphasize the benefits of AI technology, ensure ease of use (effort expectancy), clarify the system's potential (performance expectancy), and minimize the risk perceptions by improving the overall design. Conclusion Identifying the factors that determine clinicians' intent to use AI-based decision support systems can help improve technology adoption and use in the healthcare domain. Enhanced and safe adoption of AI can uplift the overall care process and help standardize clinical decisions and procedures. An improved AI adoption in healthcare will help clinicians share their everyday clinical workload and make critical decisions.
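The indirect effects quoted in this abstract follow the standard product-of-paths rule for mediation in structural equation modeling: the indirect effect of a predictor on the outcome equals its path to the mediator multiplied by the mediator's path to the outcome. A quick arithmetic check against the coefficients reported above:

```python
# Standardized path coefficients as quoted in the abstract
expectancy_to_risk = -0.49   # Expectancy -> Perceived Risk
perception_to_risk = -0.23   # AI Perception -> Perceived Risk
risk_to_intent = -0.34       # Perceived Risk -> Intent to use BUC

# Indirect effect = product of the two paths (a * b);
# two negative paths yield a positive indirect effect.
indirect_expectancy = expectancy_to_risk * risk_to_intent
indirect_perception = perception_to_risk * risk_to_intent

print(round(indirect_expectancy, 2))  # 0.17, matching the reported value
print(round(indirect_perception, 2))  # 0.08, matching the reported value
```

This confirms the reported indirect effects are internally consistent with the direct paths; the associated p-values, of course, come from the fitted model rather than this arithmetic.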