1
Rogan J, Firth J, Bucci S. Healthcare Professionals' Views on the Use of Passive Sensing and Machine Learning Approaches in Secondary Mental Healthcare: A Qualitative Study. Health Expect 2024; 27:e70116. [PMID: 39587845] [PMCID: PMC11589162] [DOI: 10.1111/hex.70116]
Abstract
INTRODUCTION Globally, many people experience mental health difficulties, and the current workforce capacity is insufficient to meet this demand, with growth not keeping pace with need. Digital devices that passively collect data and utilise machine learning to generate insights could enhance current mental health practices and help service users manage their mental health. However, little is known about mental healthcare professionals' perspectives on these approaches. This study aims to explore mental health professionals' views on using digital devices to passively collect data and apply machine learning in mental healthcare, as well as the potential barriers and facilitators to their implementation in practice. METHODS Qualitative semi-structured interviews were conducted with 15 multidisciplinary staff who work in secondary mental health settings. Interview topics included the use of digital devices for passive sensing, developing machine learning algorithms from these data, the clinician's role, and the barriers and facilitators to their use in practice. Interview data were analysed using reflexive thematic analysis. RESULTS Participants noted that digital devices for healthcare can motivate and empower users, but caution is needed to prevent feelings of abandonment and widening inequalities. Passive sensing can enhance assessment objectivity, but it raises concerns about privacy, data storage, consent and data accuracy. Machine learning algorithms may increase awareness of support needs, yet lack context, risking misdiagnosis. Barriers for service users include access, accessibility and the impact of receiving insights from passively collected data. For staff, barriers involve infrastructure and increased workload. Staff support facilitated service users' adoption of digital systems, while for staff, training, ease of use and feeling supported were key enablers. CONCLUSIONS Several recommendations arise from this study, including ensuring that devices are user-friendly and equitably applied in clinical practice, adopting a blended approach to prevent service users from feeling abandoned, and providing staff with training and access to technology to enhance uptake. PATIENT OR PUBLIC CONTRIBUTION The study design, protocol and topic guide were informed by a lived experience community group that advises on research projects at the authors' affiliation.
Affiliation(s)
- Jessica Rogan
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Joseph Firth
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, UK
- Sandra Bucci
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, UK
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, UK
2
Varghese MA, Sharma P, Patwardhan M. Public Perception on Artificial Intelligence-Driven Mental Health Interventions: Survey Research. JMIR Form Res 2024; 8:e64380. [PMID: 39607994] [PMCID: PMC11638687] [DOI: 10.2196/64380]
Abstract
BACKGROUND Artificial intelligence (AI) has become increasingly important in health care, generating both curiosity and concern. With a doctor-patient ratio of 1:834 in India, AI has the potential to alleviate a significant health care burden. Public perception plays a crucial role in shaping attitudes that can facilitate the adoption of new technologies. Similarly, the acceptance of AI-driven mental health interventions is crucial in determining their effectiveness and widespread adoption. Therefore, it is essential to study public perceptions and usage of existing AI-driven mental health interventions by exploring user experiences and opinions on their future applicability, particularly in comparison to traditional, human-based interventions. OBJECTIVE This study aims to explore the use, perception, and acceptance of AI-driven mental health interventions in comparison to traditional, human-based interventions. METHODS A total of 466 adult participants from India voluntarily completed a 30-item web-based survey on the use and perception of AI-based mental health interventions between November and December 2023. RESULTS Of the 466 respondents, only 163 (35%) had ever consulted a mental health professional. Additionally, 305 (65.5%) reported very low knowledge of AI-driven interventions. In terms of trust, 247 (53%) expressed a moderate level of trust in AI-driven mental health interventions, while only 24 (5.2%) reported a high level of trust. By contrast, 114 (24.5%) reported high trust and 309 (66.3%) reported moderate trust in human-based mental health interventions. A total of 242 (51.9%) participants reported a high level of stigma associated with using human-based interventions, compared with only 50 (10.7%) who expressed concerns about stigma related to AI-driven interventions. Additionally, 162 (34.8%) expressed a positive outlook toward the future use and social acceptance of AI-based interventions. The majority of respondents indicated that AI could be a useful option for providing general mental health tips and conducting initial assessments. The key benefits of AI highlighted by participants were accessibility, cost-effectiveness, 24/7 availability, and reduced stigma. Major concerns included data privacy, security, the lack of human touch, and the potential for misdiagnosis. CONCLUSIONS There is a general lack of awareness about AI-driven mental health interventions. However, AI shows potential as a viable option for prevention, primary assessment, and ongoing mental health maintenance. Currently, people tend to trust traditional mental health practices more, and stigma remains a significant barrier to accessing traditional mental health services. The human touch remains an indispensable aspect of human-based mental health care, one that AI cannot replace; however, integrating AI with human mental health professionals is seen as a compelling model. AI is positively perceived in terms of accessibility, availability, and destigmatization. Knowledge and perceived trustworthiness are key factors influencing the acceptance and effectiveness of AI-driven mental health interventions.
Affiliation(s)
- Mahima Anna Varghese
- Department of Social Science and Language, Vellore Institute of Technology, Vellore, India
- Poonam Sharma
- Department of Social Science and Language, Vellore Institute of Technology, Vellore, India
3
Benda N, Desai P, Reza Z, Zheng A, Kumar S, Harkins S, Hermann A, Zhang Y, Joly R, Kim J, Pathak J, Reading Turchioe M. Patient Perspectives on AI for Mental Health Care: Cross-Sectional Survey Study. JMIR Ment Health 2024; 11:e58462. [PMID: 39293056] [PMCID: PMC11447436] [DOI: 10.2196/58462]
Abstract
BACKGROUND The application of artificial intelligence (AI) to health and health care is rapidly increasing. Several studies have assessed the attitudes of health professionals, but far fewer studies have explored the perspectives of patients or the general public. Studies investigating patient perspectives have focused on somatic issues, including those related to radiology, perinatal health, and general applications. Patient feedback has been elicited in the development of specific mental health care solutions, but broader perspectives toward AI for mental health care have been underexplored. OBJECTIVE This study aims to understand public perceptions regarding potential benefits of AI, concerns about AI, comfort with AI accomplishing various tasks, and values related to AI, all pertaining to mental health care. METHODS We conducted a 1-time cross-sectional survey with a nationally representative sample of 500 US-based adults. Participants provided structured responses on their perceived benefits, concerns, comfort, and values regarding AI for mental health care. They could also add free-text responses to elaborate on their concerns and values. RESULTS A plurality of participants (245/497, 49.3%) believed AI may be beneficial for mental health care, but this perspective differed based on sociodemographic variables (all P<.05). Specifically, Black participants (odds ratio [OR] 1.76, 95% CI 1.03-3.05) and those with lower health literacy (OR 2.16, 95% CI 1.29-3.78) perceived AI to be more beneficial, and women (OR 0.68, 95% CI 0.46-0.99) perceived AI to be less beneficial. Participants endorsed concerns about accuracy, possible unintended consequences such as misdiagnosis, the confidentiality of their information, and the loss of connection with their health professional when AI is used for mental health care. A majority of participants (80.4%, 402/500) valued being able to understand individual factors driving their risk, confidentiality, and autonomy as it pertained to the use of AI for their mental health. When asked who was responsible for the misdiagnosis of mental health conditions using AI, 81.6% (408/500) of participants found the health professional to be responsible. Qualitative results revealed similar concerns related to the accuracy of AI and how its use may impact the confidentiality of patients' information. CONCLUSIONS Future work involving the use of AI for mental health care should investigate strategies for conveying the level of AI's accuracy, factors that drive patients' mental health risks, and how data are used confidentially so that patients can determine with their health professionals when AI may be beneficial. It will also be important in a mental health care context to ensure the patient-health professional relationship is preserved when AI is used.
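As context for the odds ratios reported above: estimates of this kind typically come from a logistic regression, where exponentiating the fitted log-odds coefficients gives ORs and exponentiating the confidence-interval bounds gives the 95% CIs. Below is a minimal sketch in Python with statsmodels, using simulated data and hypothetical variable names; it is not the authors' code or dataset.

```python
# Minimal sketch: deriving odds ratios (OR) and 95% CIs via logistic
# regression. Data and column names are hypothetical illustrations;
# the study's actual model specification may differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Hypothetical respondent-level survey data (NOT the study's dataset).
df = pd.DataFrame({
    "perceives_benefit": rng.binomial(1, 0.49, n),   # 1 = believes AI may be beneficial
    "black": rng.binomial(1, 0.15, n),               # sociodemographic indicators
    "low_health_literacy": rng.binomial(1, 0.20, n),
    "woman": rng.binomial(1, 0.50, n),
})

# Logistic regression of perceived benefit on the covariates.
fit = smf.logit("perceives_benefit ~ black + low_health_literacy + woman",
                data=df).fit(disp=0)

# Exponentiating coefficients (and CI bounds) yields ORs with 95% CIs.
summary = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
})
print(summary)
```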
Affiliation(s)
- Natalie Benda
- School of Nursing, Columbia University, New York, NY, United States
- Pooja Desai
- Department of Biomedical Informatics, Columbia University, New York, NY, United States
- Zayan Reza
- Mailman School of Public Health, Columbia University, New York, NY, United States
- Anna Zheng
- Stuyvesant High School, New York, NY, United States
- Shiveen Kumar
- College of Agriculture and Life Sciences, Cornell University, Ithaca, NY, United States
- Sarah Harkins
- School of Nursing, Columbia University, New York, NY, United States
- Alison Hermann
- Department of Psychiatry, Weill Cornell Medicine, New York, NY, United States
- Yiye Zhang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Rochelle Joly
- Department of Obstetrics and Gynecology, Weill Cornell Medicine, New York, NY, United States
- Jessica Kim
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Jyotishman Pathak
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
4
Rogan J, Bucci S, Firth J. Health Care Professionals' Views on the Use of Passive Sensing, AI, and Machine Learning in Mental Health Care: Systematic Review With Meta-Synthesis. JMIR Ment Health 2024; 11:e49577. [PMID: 38261403] [PMCID: PMC10848143] [DOI: 10.2196/49577]
Abstract
BACKGROUND Mental health difficulties are highly prevalent worldwide. Passive sensing technologies and applied artificial intelligence (AI) methods can provide an innovative means of supporting the management of mental health problems and enhancing the quality of care. However, the views of stakeholders are important in understanding the potential barriers to and facilitators of their implementation. OBJECTIVE This study aims to review, critically appraise, and synthesize qualitative findings relating to the views of mental health care professionals on the use of passive sensing and AI in mental health care. METHODS A systematic search of qualitative studies was performed using 4 databases. A meta-synthesis approach was used, whereby studies were analyzed using an inductive thematic analysis approach within a critical realist epistemological framework. RESULTS Overall, 10 studies met the eligibility criteria. The 3 main themes were uses of passive sensing and AI in clinical practice, barriers to and facilitators of use in practice, and consequences for service users. A total of 5 subthemes were identified: barriers, facilitators, empowerment, risk to well-being, and data privacy and protection issues. CONCLUSIONS Although clinicians are open-minded about the use of passive sensing and AI in mental health care, important factors to consider are service user well-being, clinician workloads, and therapeutic relationships. Service users and clinicians must be involved in the development of digital technologies and systems to ensure ease of use. The development of, and training in, clear policies and guidelines on the use of passive sensing and AI in mental health care, including risk management and data security procedures, will also be key to facilitating clinician engagement. Mechanisms for clinicians and service users to provide feedback on how passive sensing and AI are being received in practice should also be considered. TRIAL REGISTRATION PROSPERO International Prospective Register of Systematic Reviews CRD42022331698; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=331698.
Affiliation(s)
- Jessica Rogan
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
- Sandra Bucci
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
- Joseph Firth
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, United Kingdom
5
Fazakarley CA, Breen M, Thompson B, Leeson P, Williamson V. Beliefs, experiences and concerns of using artificial intelligence in healthcare: A qualitative synthesis. Digit Health 2024; 10:20552076241230075. [PMID: 38347935] [PMCID: PMC10860471] [DOI: 10.1177/20552076241230075]
Abstract
Objective Artificial intelligence (AI) is a developing field in the context of healthcare. As this technology continues to be implemented in patient care, there is a growing need to understand the thoughts and experiences of stakeholders in this area to ensure that future AI development and implementation is successful. The aim of this study was to conduct a literature search of qualitative studies exploring the opinions of stakeholders such as clinicians, patients, and technology experts in order to establish the most common themes and ideas presented in this research. Methods A literature search was conducted of existing qualitative research on stakeholder beliefs about the use of AI in healthcare. Twenty-one papers were selected and analysed, resulting in the development of four key themes relating to patient care, patient-doctor relationships, lack of education and resources, and the need for regulations. Results Overall, patients and healthcare workers are open to the use of AI in care and appear positive about its potential benefits. However, concerns were raised relating to the lack of empathy in interactions with AI tools, and the potential risks that may arise from the data collection needed for AI use and development. Stakeholders in the healthcare, technology, and business sectors all stressed that there is a lack of appropriate education, funding, and guidelines surrounding AI, and that these concerns need to be addressed to ensure future implementation is safe and suitable for patient care. Conclusion Ultimately, the results of this study highlight a need for communication between stakeholders to address these concerns, mitigate potential risks, and maximise benefits for patients and clinicians alike. The results also identify a need for further qualitative research in this area to understand stakeholder experiences as AI use continues to develop.
Affiliation(s)
- Paul Leeson
- RDM Division of Cardiovascular Medicine, University of Oxford, John Radcliffe Hospital, Oxford, UK
- Victoria Williamson
- King's Centre for Military Health Research, King's College London, London, UK
6
Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023; 338:116357. [PMID: 37949020] [DOI: 10.1016/j.socscimed.2023.116357]
Abstract
INTRODUCTION Despite the proliferation of Artificial Intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public and health professionals to understand these issues from multiple viewpoints. METHODOLOGY A search for original research articles using qualitative, quantitative, and mixed methods published between 1 Jan 2001 and 24 Aug 2021 was conducted on six bibliographic databases. Data were extracted and classified into different themes representing views on: (i) knowledge of and familiarity with AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the human-AI relationship. RESULTS The final search identified 7,490 different records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals generally had a positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout from the pressure to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job losses due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinician involvement in the development of AI was emphasised. To help successfully implement AI in healthcare, most participants envisioned that an investment in training and education campaigns was necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, there remained concerns about sharing data with insurance or technology companies. One key question under this theme was who should be held accountable in the case of adverse events arising from the use of AI. CONCLUSIONS While overall positivity persists in attitudes and preferences toward AI use in healthcare, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues to look at the translation of legislation and guidelines into practice to ensure fairness, accountability, transparency, and ethics in AI.
Affiliation(s)
- Vinh Vo
- Centre for Health Economics, Monash University, Australia
- Gang Chen
- Centre for Health Economics, Monash University, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Quynh Nga Do
- Department of Economics, Monash University, Australia
- Maame Esi Woode
- Centre for Health Economics, Monash University, Australia; Monash Data Futures Research Institute, Australia
7
Sharma S, Rawal R, Shah D. Addressing the challenges of AI-based telemedicine: Best practices and lessons learned. J Educ Health Promot 2023; 12:338. [PMID: 38023098] [PMCID: PMC10671014] [DOI: 10.4103/jehp.jehp_402_23]
Abstract
Telemedicine is the use of technology to provide healthcare services and information remotely, without requiring physical proximity between patients and healthcare providers. The coronavirus disease 2019 (COVID-19) pandemic has accelerated the rapid growth of telemedicine worldwide. Integrating artificial intelligence (AI) into telemedicine has the potential to enhance and expand its capabilities in addressing various healthcare needs, such as patient monitoring, healthcare information technology (IT), intelligent diagnosis, and assistance. Despite the potential benefits, implementing AI in telemedicine presents challenges that can be overcome with physician-guided implementation. AI can assist physicians in decision-making, improve healthcare delivery, and automate administrative tasks. To ensure optimal effectiveness, AI-powered telemedicine should comply with existing clinical practices and adhere to a framework adaptable to various technologies. It should also consider technical and scientific factors, including trustworthiness, reproducibility, usability, availability, and cost. Education and training are crucial for the appropriate use of new healthcare technologies such as AI-enabled telemedicine. This article examines the benefits and limitations of AI-based telemedicine in various medical domains and underscores the importance of physician-guided implementation, compliance with existing clinical practices, and appropriate education and training for healthcare providers.
Affiliation(s)
- Sachin Sharma
- Department of Computer Science and Engineering, Indrashil University, Mehsana, Gujarat, India
- Raj Rawal
- Department of Critical Care, Gujarat Pulmonary and Critical Care Medicine, Ahmedabad, Gujarat, India
- Dharmesh Shah
- Department of ICT, Indrashil University, Mehsana, Gujarat, India
8
Wu C, Xu H, Bai D, Chen X, Gao J, Jiang X. Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis. BMJ Open 2023; 13:e066322. [PMID: 36599634] [PMCID: PMC9815015] [DOI: 10.1136/bmjopen-2022-066322]
Abstract
OBJECTIVES Medical artificial intelligence (AI) has been widely applied in the clinical field due to its convenience and innovation. However, several policy and regulatory issues, such as credibility, sharing of responsibility and ethics, have raised concerns over the use of AI. It is therefore necessary to understand the general public's views on medical AI. Here, a meta-synthesis was conducted to analyse and summarise the public's understanding of the application of AI in the healthcare field, to provide recommendations for the future use and management of AI in medical practice. DESIGN This was a meta-synthesis of qualitative studies. METHOD A search was performed on the following databases to identify studies published in English and Chinese: MEDLINE, CINAHL, Web of Science, Cochrane Library, Embase, PsycINFO, CNKI, Wanfang and VIP. The search was conducted from database inception to 25 December 2021. The meta-aggregation approach of JBI was used to summarise findings from qualitative studies, focusing on the public's perception of the application of AI in healthcare. RESULTS Of the 5128 studies screened, 12 met the inclusion criteria and were incorporated into the analysis. Three synthesised findings formed the basis of our conclusions: the advantages of medical AI from the public's perspective, ethical and legal concerns about medical AI from the public's perspective, and public suggestions on the application of AI in the medical field. CONCLUSION Results showed that the public acknowledges the unique advantages and convenience of medical AI. Meanwhile, several concerns about the application of medical AI were observed, most of which involve ethical and legal issues. The standard application and reasonable supervision of medical AI are key to ensuring its effective utilisation. Based on the public's perspective, this analysis provides insights and suggestions for health managers on how to implement and apply medical AI smoothly, while ensuring safety in healthcare practice. PROSPERO REGISTRATION NUMBER CRD42022315033.
Affiliation(s)
- Chenxi Wu
- West China School of Nursing/West China Hospital, Sichuan University, Chengdu, Sichuan, China
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Huiqiong Xu
- West China School of Nursing, Sichuan University / Abdominal Oncology Ward, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Dingxi Bai
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Xinyu Chen
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Jing Gao
- School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Xiaolian Jiang
- West China School of Nursing/West China Hospital, Sichuan University, Chengdu, Sichuan, China
- West China School of Nursing/West China Hospital, Sichuan University, Chengdu, Sichuan, China
9
Tang L, Li J, Fantus S. Medical artificial intelligence ethics: A systematic review of empirical studies. Digit Health 2023; 9:20552076231186064. [PMID: 37434728] [PMCID: PMC10331228] [DOI: 10.1177/20552076231186064]
Abstract
Background Artificial intelligence (AI) technologies are transforming medicine and healthcare. Scholars and practitioners have debated the philosophical, ethical, legal, and regulatory implications of medical AI, and empirical research on stakeholders' knowledge, attitudes, and practices has started to emerge. This study is a systematic review of published empirical studies of medical AI ethics with the goal of mapping the main approaches, findings, and limitations of the scholarship to inform future practice considerations. Methods We searched seven databases for published peer-reviewed empirical studies on medical AI ethics and evaluated them in terms of the types of technologies studied, geographic locations, stakeholders involved, research methods used, ethical principles studied, and major findings. Findings Thirty-six studies were included (published 2013-2022). They typically belonged to one of three topics: exploratory studies of stakeholder knowledge of and attitudes toward medical AI, theory-building studies testing hypotheses regarding factors contributing to stakeholders' acceptance of medical AI, and studies identifying and correcting bias in medical AI. Interpretation There is a disconnect between the high-level ethical principles and guidelines developed by ethicists and empirical research on the topic, and a need to embed ethicists, alongside AI developers, clinicians, patients, and scholars of innovation and technology adoption, in the study of medical AI ethics.
Affiliation(s)
- Lu Tang
- Department of Communication and Journalism, Texas A&M University, College Station, TX, USA
- Jinxu Li
- Department of Communication and Journalism, Texas A&M University, College Station, TX, USA
- Sophia Fantus
- School of Social Work, University of Texas at Arlington, Arlington, TX, USA
10
Ranade K, Kapoor A, Fernandes TN. Mental health law, policy & program in India – A fragmented narrative of change, contradictions and possibilities. SSM Ment Health 2022. [DOI: 10.1016/j.ssmmh.2022.100174]
11
Annamalai A. Functional and Process Model on Big Data, Machine Learning, and Digital Phenotyping in Clinical Psychiatry. Indian J Psychol Med 2022; 44:409-415. [PMID: 35949631] [PMCID: PMC9301750] [DOI: 10.1177/02537176221090793]
Affiliation(s)
- Arunkumar Annamalai
- Clinical Data Science and AI, Deep Medicine Labs, Chennai, Tamil Nadu, India