1. Krysta K, Cullivan R, Brittlebank A, Dragasek J, Hermans M, Strkalj Ivezic S, van Veelen N, Casanova Dias M. Artificial Intelligence in Healthcare and Psychiatry. Acad Psychiatry 2024 (online ahead of print). PMID: 39313674; DOI: 10.1007/s40596-024-02036-z.
Affiliation(s)
- Krzysztof Krysta
- Faculty of Medical Sciences in Katowice, Medical University of Silesia in Katowice, Katowice, Poland
- Rachael Cullivan
- Cavan/Monaghan Mental Health Services Ireland, Monaghan, Ireland
- Andrew Brittlebank
- Cumbria, Northumberland, Tyne and Wear NHS Foundation Trust, Cumbria, UK
- Jozef Dragasek
- Faculty of Medicine, University Hospital of Louis Pasteur and Pavol Jozef Safarik University, Trieda, Kosice, Slovak Republic
- Marc Hermans
- European Union of Medical Specialists, Brussels, Belgium
- Nicoletta van Veelen
- Brain Center, Psychiatry, Diagnostic and Early Psychosis, Universitair Medisch Centrum Utrecht, Utrecht, the Netherlands
2. Ahmed NN, Reagu S, Alkhoori S, Cherchali A, Purushottamahanti P, Siddiqui U. Improving Mental Health Outcomes in Patients with Major Depressive Disorder in the Gulf States: A Review of the Role of Electronic Enablers in Monitoring Residual Symptoms. J Multidiscip Healthc 2024; 17:3341-3354. PMID: 39010931; PMCID: PMC11247372; DOI: 10.2147/jmdh.s475078.
Abstract
Up to 75% of individuals with major depressive disorder (MDD) may have residual symptoms such as amotivation or anhedonia, which prevent full functional recovery and are associated with relapse. Globally and in the Gulf region, primary care physicians (PCPs) have an important role in alleviating stigma and in identifying and monitoring the residual symptoms of depression, as PCPs are the preliminary interface between patients and specialists in the collaborative care model. Therefore, mental healthcare upskilling programmes for PCPs are needed, as are basic instruments to evaluate residual symptoms swiftly and accurately in primary care. Currently, few if any electronic enablers have been designed to specifically monitor residual symptoms in patients with MDD. The objectives of this review are to highlight how accurate evaluation of residual symptoms with an easy-to-use electronic enabler in primary care may improve functional recovery and overall mental health outcomes, and how such an enabler may guide pharmacotherapy selection and positively impact the patient journey. Here, we show the potential advantages of electronic enablers in primary care, which include the possibility for a deeper "dive" into the patient journey and facilitation of treatment optimisation. At the policy and practice levels, electronic enablers endorsed by government agencies and local psychiatric associations may receive greater PCP attention and backing, improve patient involvement in shared clinical decision-making, and help to reduce the general stigma around mental health disorders. In the Gulf region, an easy-to-use electronic enabler in primary care, incorporating aspects of the Hamilton Depression Rating Scale to monitor amotivation, and aspects of the Montgomery-Åsberg Depression Rating Scale to monitor anhedonia, could markedly improve the patient journey from residual symptoms through to full functional recovery in individuals with MDD.
Affiliation(s)
- Nahida Nayaz Ahmed
- SEHA Mental Health & Wellbeing Services, College of Medicine and Health Sciences of the United Arab Emirates University, Abu Dhabi, United Arab Emirates
- Shuja Reagu
- Weill Cornell Medicine, Doha, Qatar; Hamad Medical Corporation, Doha, Qatar
- Samia Alkhoori
- Rashid Hospital, Dubai Health, Dubai, United Arab Emirates
3. Berardi C, Antonini M, Jordan Z, Wechtler H, Paolucci F, Hinwood M. Barriers and facilitators to the implementation of digital technologies in mental health systems: a qualitative systematic review to inform a policy framework. BMC Health Serv Res 2024; 24:243. PMID: 38408938; PMCID: PMC10898174; DOI: 10.1186/s12913-023-10536-1.
Abstract
BACKGROUND Despite the potential for improved population mental health and wellbeing, the integration of digital mental health interventions has been difficult to achieve. In this qualitative systematic review, we aimed to identify barriers and facilitators to the implementation of digital technologies in mental healthcare systems, and to map these to an implementation framework to inform policy development. METHODS We searched Medline, Embase, Scopus, PsycInfo, Web of Science, and Google Scholar for primary research articles published between January 2010 and 2022. Studies were considered eligible if they reported barriers and/or facilitators to the integration of any digital mental healthcare technologies. Data were extracted using EPPI-Reviewer Web and analysed thematically via inductive and deductive cycles. RESULTS Of 12,525 references identified initially, 81 studies were included in the final analysis. Barriers and facilitators were grouped within an implementation (evidence-practice gap) framework across six domains, organised by four levels of mental healthcare systems. Broadly, implementation was hindered by the perception of digital technologies as impersonal tools that add an additional burden of care onto both providers and patients and change relational power asymmetries; by an absence of resources; and by regulatory complexities that impede access to universal coverage. Facilitators included person-centred approaches that consider patients' intersectional features (e.g., gender, class, disability, illness severity); evidence-based training for providers; collaboration among colleagues; appropriate investment in human and financial resources; and policy reforms that tackle universal access to digital health. CONCLUSION It is important to consider the complex and interrelated nature of barriers across different domains and levels of the mental health system. To facilitate the equitable, sustainable, and long-term digital transition of mental health systems, policymakers should consider a systemic approach to collaboration between public and private sectors to inform evidence-based planning and strengthen mental health systems. PROTOCOL REGISTRATION The protocol is registered on PROSPERO, CRD42021276838.
Affiliation(s)
- Chiara Berardi
- Newcastle Business School, The University of Newcastle, Hunter St & Auckland St, 2300, Newcastle, NSW, Australia.
- Marcello Antonini
- School of Medicine and Public Health, The University of Newcastle, Callaghan, NSW, Australia
- Department of Health Policy, London School of Economics and Political Science, London, WC2A 2AE, UK
- Zephanie Jordan
- Newcastle Business School, The University of Newcastle, Hunter St & Auckland St, 2300, Newcastle, NSW, Australia
- Heidi Wechtler
- Newcastle Business School, The University of Newcastle, Hunter St & Auckland St, 2300, Newcastle, NSW, Australia
- Francesco Paolucci
- Newcastle Business School, The University of Newcastle, Hunter St & Auckland St, 2300, Newcastle, NSW, Australia
- Madeleine Hinwood
- School of Medicine and Public Health, The University of Newcastle, Callaghan, NSW, Australia
- Hunter Medical Research Institute, New Lambton Heights, NSW, Australia
4. Rogan J, Bucci S, Firth J. Health Care Professionals' Views on the Use of Passive Sensing, AI, and Machine Learning in Mental Health Care: Systematic Review With Meta-Synthesis. JMIR Ment Health 2024; 11:e49577. PMID: 38261403; PMCID: PMC10848143; DOI: 10.2196/49577.
Abstract
BACKGROUND Mental health difficulties are highly prevalent worldwide. Passive sensing technologies and applied artificial intelligence (AI) methods can provide an innovative means of supporting the management of mental health problems and enhancing the quality of care. However, the views of stakeholders are important in understanding the potential barriers to and facilitators of their implementation. OBJECTIVE This study aims to review, critically appraise, and synthesize qualitative findings relating to the views of mental health care professionals on the use of passive sensing and AI in mental health care. METHODS A systematic search of qualitative studies was performed using 4 databases. A meta-synthesis approach was used, whereby studies were analyzed using an inductive thematic analysis approach within a critical realist epistemological framework. RESULTS Overall, 10 studies met the eligibility criteria. The 3 main themes were uses of passive sensing and AI in clinical practice, barriers to and facilitators of use in practice, and consequences for service users. A total of 5 subthemes were identified: barriers, facilitators, empowerment, risk to well-being, and data privacy and protection issues. CONCLUSIONS Although clinicians are open-minded about the use of passive sensing and AI in mental health care, important factors to consider are service user well-being, clinician workloads, and therapeutic relationships. Service users and clinicians must be involved in the development of digital technologies and systems to ensure ease of use. The development of, and training in, clear policies and guidelines on the use of passive sensing and AI in mental health care, including risk management and data security procedures, will also be key to facilitating clinician engagement. The means for clinicians and service users to provide feedback on how the use of passive sensing and AI in practice is being received should also be considered. TRIAL REGISTRATION PROSPERO International Prospective Register of Systematic Reviews CRD42022331698; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=331698.
Affiliation(s)
- Jessica Rogan
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
- Sandra Bucci
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
- Joseph Firth
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Science Centre, The University of Manchester, Manchester, United Kingdom
5. Nestor BA, Chimoff J, Koike C, Weitzman ER, Riley BL, Uhl K, Kossowsky J. Adolescent and Parent Perspectives on Digital Phenotyping in Youths With Chronic Pain: Cross-Sectional Mixed Methods Survey Study. J Med Internet Res 2024; 26:e47781. PMID: 38206665; PMCID: PMC10811597; DOI: 10.2196/47781.
Abstract
BACKGROUND Digital phenotyping is a promising methodology for capturing moment-to-moment data that can inform individually adapted and timely interventions for youths with chronic pain. OBJECTIVE This study aimed to investigate adolescent and parent endorsement, perceived utility, and concerns related to passive data stream collection through smartphones for digital phenotyping for clinical and research purposes in youths with chronic pain. METHODS Through multiple-choice and open-response survey questions, we assessed the perspectives of patient-parent dyads (103 adolescents receiving treatment for chronic pain at a pediatric hospital, mean age 15.6, SD 1.6 years; and 99 parents, mean age 47.8, SD 6.3 years) on passive data collection from the following 9 smartphone-embedded passive data streams: accelerometer, apps, Bluetooth, SMS text message and call logs, keyboard, microphone, light, screen, and GPS. RESULTS Quantitative and qualitative analyses indicated that adolescent and parent endorsement and perceived utility of digital phenotyping varied by stream, though participants generally endorsed the use of data collected by each passive stream (35%-75.7% adolescent endorsement for clinical use and 37.9%-74.8% for research purposes; 53.5%-81.8% parent endorsement for clinical use and 52.5%-82.8% for research purposes) if a certain level of utility could be provided. For adolescents and parents, adjusted logistic regression results indicated that the perceived utility of each stream significantly predicted the likelihood of endorsement of its use in both clinical practice and research (all P<.05). Adolescents and parents alike identified accelerometer, light, screen, and GPS as the passive data streams with the highest utility (36.9%-47.5% identifying these streams as useful), and apps, Bluetooth, SMS text message and call logs, keyboard, and microphone as the streams with the least utility (18.5%-34.3% identifying these streams as useful). All participants reported primary concerns related to the privacy, accuracy, and validity of the collected data. The passive data streams with the greatest number of total concerns were apps, Bluetooth, call and SMS text message logs, keyboard, and microphone. CONCLUSIONS Findings support the tailored use of digital phenotyping for this population and can help refine this methodology toward an acceptable, feasible, and ethical implementation of real-time symptom monitoring for assessment and intervention in youths with chronic pain.
Affiliation(s)
- Bridget A Nestor
- Department of Anesthesiology, Critical Care, and Pain Medicine, Boston Children's Hospital, Boston, MA, United States
- Department of Anesthesia, Harvard Medical School, Boston, MA, United States
- Justin Chimoff
- Department of Anesthesiology, Critical Care, and Pain Medicine, Boston Children's Hospital, Boston, MA, United States
- Camila Koike
- Department of Anesthesiology, Critical Care, and Pain Medicine, Boston Children's Hospital, Boston, MA, United States
- Elissa R Weitzman
- Division of Adolescent and Young Adult Medicine, Boston Children's Hospital, Boston, MA, United States
- Department of Pediatrics, Harvard Medical School, Boston, MA, United States
- Division of Addiction Medicine, Boston Children's Hospital, Boston, MA, United States
- Bobbie L Riley
- Department of Anesthesiology, Critical Care, and Pain Medicine, Boston Children's Hospital, Boston, MA, United States
- Department of Anesthesia, Harvard Medical School, Boston, MA, United States
- Kristen Uhl
- Department of Psychosocial Oncology and Palliative Care, Dana Farber Cancer Institute, Boston, MA, United States
- Department of Psychiatry, Boston Children's Hospital, Boston, MA, United States
- Joe Kossowsky
- Department of Anesthesiology, Critical Care, and Pain Medicine, Boston Children's Hospital, Boston, MA, United States
- Department of Anesthesia, Harvard Medical School, Boston, MA, United States
- Division of Sleep Medicine, Harvard Medical School, Boston, MA, United States
6. Fazakarley CA, Breen M, Thompson B, Leeson P, Williamson V. Beliefs, experiences and concerns of using artificial intelligence in healthcare: A qualitative synthesis. Digit Health 2024; 10:20552076241230075. PMID: 38347935; PMCID: PMC10860471; DOI: 10.1177/20552076241230075.
Abstract
Objective Artificial intelligence (AI) is a developing field in the context of healthcare. As this technology continues to be implemented in patient care, there is a growing need to understand the thoughts and experiences of stakeholders in this area to ensure that future AI development and implementation is successful. The aim of this study was to conduct a literature search of qualitative studies exploring the opinions of stakeholders such as clinicians, patients, and technology experts in order to establish the most common themes and ideas presented in this research. Methods A literature search was conducted of existing qualitative research on stakeholder beliefs about the use of AI in healthcare. Twenty-one papers were selected and analysed, resulting in the development of four key themes relating to patient care, patient-doctor relationships, lack of education and resources, and the need for regulations. Results Overall, patients and healthcare workers are open to the use of AI in care and appear positive about its potential benefits. However, concerns were raised relating to the lack of empathy in interactions with AI tools, and to potential risks that may arise from the data collection needed for AI use and development. Stakeholders in the healthcare, technology, and business sectors all stressed that there was a lack of appropriate education, funding, and guidelines surrounding AI, and that these concerns needed to be addressed to ensure future implementation is safe and suitable for patient care. Conclusion Ultimately, the results of this study highlight the need for communication between stakeholders in order to address these concerns, mitigate potential risks, and maximise benefits for patients and clinicians alike. The results also identified a need for further qualitative research in this area to better understand stakeholder experiences as AI use continues to develop.
Affiliation(s)
- Paul Leeson
- RDM Division of Cardiovascular Medicine, University of Oxford, John Radcliffe Hospital, Oxford, UK
- Victoria Williamson
- King's Centre for Military Health Research, King's College London, London, UK
7. Fazakarley CA, Breen M, Leeson P, Thompson B, Williamson V. Experiences of using artificial intelligence in healthcare: a qualitative study of UK clinician and key stakeholder perspectives. BMJ Open 2023; 13:e076950. PMID: 38081671; PMCID: PMC10729128; DOI: 10.1136/bmjopen-2023-076950.
Abstract
OBJECTIVES Artificial intelligence (AI) is a rapidly developing field in healthcare, with tools being developed across various specialties to support healthcare professionals and reduce workloads. It is important to understand the experiences of professionals working in healthcare to ensure that future AI tools are acceptable and effectively implemented. The aim of this study was to gain an in-depth understanding of the experiences and perceptions of UK healthcare workers and other key stakeholders about the use of AI in the National Health Service (NHS). DESIGN A qualitative study using semistructured interviews conducted remotely via MS Teams. Thematic analysis was carried out. SETTING NHS and UK higher education institutes. PARTICIPANTS Thirteen participants were recruited, including clinical and non-clinical participants working for the NHS and researchers working to develop AI tools for healthcare settings. RESULTS Four core themes were identified: positive perceptions of AI; potential barriers to using AI in healthcare; concerns regarding AI use; and steps needed to ensure the acceptability of future AI tools. Overall, we found that those working in healthcare were generally open to the use of AI and expected it to have many benefits for patients and to facilitate access to care. However, concerns were raised regarding the security of patient data, the potential for misdiagnosis and the possibility that AI could increase the burden on already strained healthcare staff. CONCLUSION This study found that healthcare staff are willing to engage with AI research and incorporate AI tools into care pathways. Going forward, the NHS and AI developers will need to collaborate closely to ensure that future tools are suitable for their intended use and do not negatively impact workloads or patient trust. Future AI studies should continue to incorporate the views of key stakeholders to improve tool acceptability. TRIAL REGISTRATION NUMBER NCT05028179; ISRCTN15113915; IRAS ref: 293515.
Affiliation(s)
- Maria Breen
- School of Psychology & Clinical Language Sciences, University of Reading, Reading, UK
- Breen Clinical Research, London, UK
- Paul Leeson
- Division of Cardiovascular Medicine, University of Oxford, Oxford, UK
- Victoria Williamson
- King's College London, London, UK
- Experimental Psychology, University of Oxford, Oxford, UK
8. Zhang M, Scandiffio J, Younus S, Jeyakumar T, Karsan I, Charow R, Salhia M, Wiljer D. The Adoption of AI in Mental Health Care-Perspectives From Mental Health Professionals: Qualitative Descriptive Study. JMIR Form Res 2023; 7:e47847. PMID: 38060307; PMCID: PMC10739240; DOI: 10.2196/47847.
Abstract
BACKGROUND Artificial intelligence (AI) is transforming the mental health care environment. AI tools are increasingly accessed by clients and service users. Mental health professionals must be prepared not only to use AI but also to have conversations about it when delivering care. Despite the potential for AI to enable more efficient, reliable, and higher-quality care delivery, there is a persistent gap among mental health professionals in the adoption of AI. OBJECTIVE A needs assessment was conducted among mental health professionals to (1) understand the learning needs of the workforce and their attitudes toward AI and (2) inform the development of AI education curricula and knowledge translation products. METHODS A qualitative descriptive approach was taken to explore the needs of mental health professionals regarding their adoption of AI through semistructured interviews. To achieve maximum variation sampling, mental health professionals (eg, psychiatrists, mental health nurses, educators, scientists, and social workers) in various settings across Ontario (eg, urban and rural, public and private sector, and clinical and research) were recruited. RESULTS A total of 20 individuals were recruited. Participants included practitioners (9/20, 45% social workers and 1/20, 5% mental health nurses), educator scientists (5/20, 25% with dual roles as professors/lecturers and researchers), and practitioner scientists (3/20, 15% with dual roles as researchers and psychiatrists and 2/20, 10% with dual roles as researchers and mental health nurses). Four major themes emerged: (1) fostering practice change and building self-efficacy to integrate AI into patient care; (2) promoting system-level change to accelerate the adoption of AI in mental health; (3) addressing the importance of organizational readiness as a catalyst for AI adoption; and (4) ensuring that mental health professionals have the education, knowledge, and skills to harness AI in optimizing patient care. CONCLUSIONS AI technologies are starting to emerge in mental health care. Although many digital tools, web-based services, and mobile apps are designed using AI algorithms, mental health professionals have generally been slower in the adoption of AI. As indicated by this study's findings, the implications span several levels. At the individual level, digital professionals must see the value in digitally compassionate tools that retain a humanistic approach to care. For mental health professionals, resistance toward AI adoption must be acknowledged through educational initiatives to raise awareness about the relevance, practicality, and benefits of AI. At the organizational level, digital professionals and leaders must collaborate on governance and funding structures to promote employee buy-in. At the societal level, digital and mental health professionals should collaborate in the creation of formal AI training programs specific to mental health to address knowledge gaps. This study promotes the design of relevant and sustainable education programs to support the adoption of AI within the mental health care sphere.
Affiliation(s)
- Tharshini Jeyakumar
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Rebecca Charow
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Mohammad Salhia
- Rotman School of Management, University of Toronto, Toronto, ON, Canada
- David Wiljer
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Department of Medicine, University of Toronto, Toronto, ON, Canada
9. Blease C, Torous J. ChatGPT and mental healthcare: balancing benefits with risks of harms. BMJ Ment Health 2023; 26:e300884. PMID: 37949485; PMCID: PMC10649440; DOI: 10.1136/bmjment-2023-300884.
Abstract
Against the global need for increased access to mental health services, health organisations are looking to technological advances to improve the delivery of care and lower costs. Since November 2022, with the public launch of OpenAI's ChatGPT, the field of generative artificial intelligence (AI) has received expanding attention. Although generative AI itself is not new, technical advances and the increased accessibility of large language models (LLMs) (eg, OpenAI's GPT-4 and Google's Bard) suggest use of these tools could be clinically significant. LLMs are an application of generative AI technology that can summarise and generate content based on training on vast data sets. Unlike search engines, which provide internet links in response to typed entries, chatbots that rely on generative language models can simulate dialogue that resembles human conversations. We examine the potential promise and the risks of using LLMs in mental healthcare today, focusing on their scope to impact mental healthcare, including global equity in the delivery of care. Although we caution that LLMs should not be used to disintermediate mental health clinicians, we signal how, if carefully implemented, in the long term these tools could reap benefits for patients and health professionals.
Affiliation(s)
- Charlotte Blease
- Participatory eHealth and Health Data Research Group, Department of Women's and Children's Health, Uppsala Universitet, Uppsala, Sweden
- Digital Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- John Torous
- Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
10. Sahoo JP, Narayan BN, Santi NS. The future of psychiatry with artificial intelligence: can the man-machine duo redefine the tenets? Consortium Psychiatricum 2023; 4:72-76. PMID: 38249529; PMCID: PMC10795941; DOI: 10.17816/cp13626.
Abstract
As one of the largest contributors to morbidity and mortality, psychiatric disorders are anticipated to triple in prevalence over the coming decade or so. Major obstacles to psychiatric care include stigma, funding constraints, and a dearth of resources and psychiatrists. The main thrust of our present-day discussion has been towards how machine learning and artificial intelligence could influence the way that patients experience care. To better grasp the issues regarding trust, privacy, and autonomy, their societal and ethical ramifications need to be probed. There is always the possibility that the artificial mind could malfunction or exhibit behavioral abnormalities. An in-depth philosophical understanding of these possibilities in both human and artificial intelligence could offer correlational insights into the robotic management of mental disorders in the future. This article looks into the role of artificial intelligence, the different challenges associated with it, as well as the perspectives in the management of such mental illnesses as depression, anxiety, and schizophrenia.
Affiliation(s)
- N Simple Santi
- Veer Surendra Sai Institute of Medical Sciences and Research
11. Orlova IA, Akopyan ZA, Plisyuk AG, Tarasova EV, Borisov EN, Dolgushin GO, Khvatova EI, Grigoryan MA, Gabbasova LA, Kamalov AA. Opinion research among Russian physicians on the application of technologies using artificial intelligence in the field of medicine and health care. BMC Health Serv Res 2023; 23:749. PMID: 37442981; DOI: 10.1186/s12913-023-09493-6.
Abstract
BACKGROUND To date, no opinion survey has been conducted among Russian physicians to study their awareness about artificial intelligence (AI). With a survey, we aimed to evaluate the attitudes of stakeholders to the usage of technologies employing AI in the field of medicine and healthcare and to identify challenges and perspectives to introducing AI. METHODS We conducted a 12-question online survey using Google Forms. The survey consisted of questions related to the recognition of AI and attitudes towards it, the direction of development of AI in medicine, and the possible risks of using AI in medicine. RESULTS 301 doctors took part in the survey. 107 (35.6%) responded that they are familiar with AI. The vast majority of participants considered AI useful in the medical field (85%). The advantage of AI was associated with the ability to analyze huge volumes of clinically relevant data in real time (79%). Respondents highlighted areas where AI would be most useful: organizational optimization (74%), biopharmaceutical research (67%), and disease diagnosis (52%). Among the possible problems when using AI, they noted a lack of flexibility and limited application on controversial issues (64% and 60% of respondents). 56% believe that AI decision making will be difficult if inadequate information is presented for analysis. A third of doctors fear that specialists with little experience took part in the development of AI, and 89% of respondents believe that doctors should participate in the development of AI for medicine and healthcare. Only 20 participants (6.6%) agreed that AI could replace them at work. At the same time, 76% of respondents believe that in the future, doctors using AI will replace those who do not. CONCLUSIONS Russian doctors are in favor of AI in medicine. Most of the respondents believe that AI will not replace them in the future and will become a useful tool, above all for optimizing organizational processes, research, and the diagnosis of diseases. TRIAL REGISTRATION This study was approved by the Local Ethics Committee of the Lomonosov Moscow State University Medical Research and Education Center (IRB00010587).
Collapse
Affiliation(s)
- I A Orlova
- Medical Research and Education Center of Lomonosov, Moscow State University, 27/10 Lomonosov Prospect, Moscow, 119192, Russia
- Faculty of Fundamental Medicine, Lomonosov Moscow State University, 27/1 Lomonosov Prospect, Moscow, 119192, Russia
| | - Zh A Akopyan
- Medical Research and Education Center of Lomonosov, Moscow State University, 27/10 Lomonosov Prospect, Moscow, 119192, Russia
- Faculty of Fundamental Medicine, Lomonosov Moscow State University, 27/1 Lomonosov Prospect, Moscow, 119192, Russia
| | - A G Plisyuk
- Medical Research and Education Center of Lomonosov, Moscow State University, 27/10 Lomonosov Prospect, Moscow, 119192, Russia
- Faculty of Fundamental Medicine, Lomonosov Moscow State University, 27/1 Lomonosov Prospect, Moscow, 119192, Russia
| | - E V Tarasova
- Medical Research and Education Center of Lomonosov, Moscow State University, 27/10 Lomonosov Prospect, Moscow, 119192, Russia
- Faculty of Fundamental Medicine, Lomonosov Moscow State University, 27/1 Lomonosov Prospect, Moscow, 119192, Russia
| | - E N Borisov
- Medical Research and Education Center of Lomonosov, Moscow State University, 27/10 Lomonosov Prospect, Moscow, 119192, Russia
| | - G O Dolgushin
- Faculty of Fundamental Medicine, Lomonosov Moscow State University, 27/1 Lomonosov Prospect, Moscow, 119192, Russia
| | - E I Khvatova
- Faculty of Fundamental Medicine, Lomonosov Moscow State University, 27/1 Lomonosov Prospect, Moscow, 119192, Russia
| | - M A Grigoryan
- Faculty of Fundamental Medicine, Lomonosov Moscow State University, 27/1 Lomonosov Prospect, Moscow, 119192, Russia
| | - L A Gabbasova
- Medical Research and Education Center of Lomonosov, Moscow State University, 27/10 Lomonosov Prospect, Moscow, 119192, Russia
- Faculty of Fundamental Medicine, Lomonosov Moscow State University, 27/1 Lomonosov Prospect, Moscow, 119192, Russia
| | - A A Kamalov
- Medical Research and Education Center of Lomonosov, Moscow State University, 27/10 Lomonosov Prospect, Moscow, 119192, Russia
- Faculty of Fundamental Medicine, Lomonosov Moscow State University, 27/1 Lomonosov Prospect, Moscow, 119192, Russia
| |
Collapse
|
12
|
Blease C, Kharko A, Bernstein M, Bradley C, Houston M, Walsh I, D Mandl K. Computerization of the Work of General Practitioners: Mixed Methods Survey of Final-Year Medical Students in Ireland. JMIR MEDICAL EDUCATION 2023; 9:e42639. [PMID: 36939809 PMCID: PMC10131917 DOI: 10.2196/42639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 12/14/2022] [Accepted: 01/15/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND The potential for digital health technologies, including machine learning (ML)-enabled tools, to disrupt the medical profession is the subject of ongoing debate within biomedical informatics. OBJECTIVE We aimed to describe the opinions of final-year medical students in Ireland regarding the potential of future technology to replace or work alongside general practitioners (GPs) in performing key tasks. METHODS Between March 2019 and April 2020, using a convenience sample, we conducted a mixed methods paper-based survey of final-year medical students. The survey was administered at 4 out of 7 medical schools in Ireland across each of the 4 provinces in the country. Quantitative data were analyzed using descriptive statistics and nonparametric tests. We used thematic content analysis to investigate free-text responses. RESULTS In total, 43.1% (252/585) of the final-year students at 3 medical schools responded; data collection at 1 medical school was terminated due to disruptions associated with the COVID-19 pandemic. With regard to forecasting the potential impact of artificial intelligence (AI)/ML on primary care 25 years from now, around half (127/246, 51.6%) of all surveyed students believed the work of GPs will change minimally or not at all. Notably, students who did not intend to enter primary care predicted that AI/ML will have a great impact on the work of GPs. CONCLUSIONS We caution that, without a firm curricular foundation on advances in AI/ML, students may fall back on extreme perspectives: self-preserving optimism biases that downplay the impact of technological advances on primary care on the one hand, and technohype on the other. Ultimately, these biases may lead to negative consequences in health care. Improvements in medical education could help prepare tomorrow's doctors to optimize and lead the ethical and evidence-based implementation of AI/ML-enabled tools in medicine for enhancing the care of tomorrow's patients.
Collapse
Affiliation(s)
- Charlotte Blease
- General Medicine and Primary Care, Beth Israel Deaconess Medical Center, Boston, MA, United States
| | - Anna Kharko
- Healthcare Sciences and e-Health, Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
- School of Psychology, University of Plymouth, Plymouth, United Kingdom
| | - Michael Bernstein
- Department of Behavioral and Social Sciences, School of Public Health, Brown University, Providence, RI, United States
- Department of Diagnostic Imaging, Warren Alpert Medical School, Brown University, Providence, RI, United States
| | - Colin Bradley
- School of Medicine, University College Cork, Cork, Ireland
| | - Muiris Houston
- School of Medicine, National University of Ireland Galway, Galway, Ireland
- School of Medicine, Trinity College Dublin, Dublin, Ireland
| | - Ian Walsh
- Dentistry and Biomedical Sciences, School of Medicine, Queen's University, Belfast, Ireland
| | - Kenneth D Mandl
- Computational Health Informatics Program, Boston Children's Hospital, Boston, MA, United States
| |
Collapse
|
13
|
Sharma A, Lin IW, Miner AS, Atkins DC, Althoff T. Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. NAT MACH INTELL 2023. [DOI: 10.1038/s42256-022-00593-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
|
14
|
Morrow E, Zidaru T, Ross F, Mason C, Patel KD, Ream M, Stockley R. Artificial intelligence technologies and compassion in healthcare: A systematic scoping review. Front Psychol 2023; 13:971044. [PMID: 36733854 PMCID: PMC9887144 DOI: 10.3389/fpsyg.2022.971044] [Citation(s) in RCA: 21] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Accepted: 12/05/2022] [Indexed: 01/18/2023] Open
Abstract
Background Advances in artificial intelligence (AI) technologies, together with the availability of big data in society, create uncertainties about how these developments will affect healthcare systems worldwide. Compassion is essential for high-quality healthcare, and research shows how prosocial caring behaviors benefit human health and societies. However, the possible association between AI technologies and compassion is underconceptualized and underexplored. Objectives The aim of this scoping review is to provide comprehensive depth and a balanced perspective on the emerging topic of AI technologies and compassion, to inform future research and practice. The review questions were: How is compassion discussed in relation to AI technologies in healthcare? How are AI technologies being used to enhance compassion in healthcare? What are the gaps in current knowledge and unexplored potential? What are the key areas where AI technologies could support compassion in healthcare? Materials and methods A systematic scoping review following the five steps of the Joanna Briggs Institute methodology. Presentation of the scoping review conforms with PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews). Eligibility criteria were defined according to 3 concept constructs (AI technologies, compassion, healthcare) developed from the literature and informed by medical subject headings (MeSH) and key words for the electronic searches. Sources of evidence were the Web of Science and PubMed databases, articles published in English, 2011-2022. Articles were screened by title/abstract using inclusion/exclusion criteria. Data extracted (author, date of publication, type of article, aim/context of healthcare, key relevant findings, country) were charted using data tables. Thematic analysis used an inductive-deductive approach to generate code categories from the review questions and the data.
A multidisciplinary team assessed themes for resonance and relevance to research and practice. Results Searches identified 3,124 articles. A total of 197 were included after screening. The number of articles has increased over 10 years (2011, n = 1 to 2021, n = 47 and from Jan-Aug 2022 n = 35 articles). Overarching themes related to the review questions were: (1) Developments and debates (7 themes) Concerns about AI ethics, healthcare jobs, and loss of empathy; Human-centered design of AI technologies for healthcare; Optimistic speculation AI technologies will address care gaps; Interrogation of what it means to be human and to care; Recognition of future potential for patient monitoring, virtual proximity, and access to healthcare; Calls for curricula development and healthcare professional education; Implementation of AI applications to enhance health and wellbeing of the healthcare workforce. (2) How AI technologies enhance compassion (10 themes) Empathetic awareness; Empathetic response and relational behavior; Communication skills; Health coaching; Therapeutic interventions; Moral development learning; Clinical knowledge and clinical assessment; Healthcare quality assessment; Therapeutic bond and therapeutic alliance; Providing health information and advice. (3) Gaps in knowledge (4 themes) Educational effectiveness of AI-assisted learning; Patient diversity and AI technologies; Implementation of AI technologies in education and practice settings; Safety and clinical effectiveness of AI technologies. (4) Key areas for development (3 themes) Enriching education, learning and clinical practice; Extending healing spaces; Enhancing healing relationships. Conclusion There is an association between AI technologies and compassion in healthcare and interest in this association has grown internationally over the last decade. 
In a range of healthcare contexts, AI technologies are being used to enhance empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; and to provide health information and advice. The findings inform a reconceptualization of compassion as a human-AI system of intelligent caring comprising six elements: (1) Awareness of suffering (e.g., pain, distress, risk, disadvantage); (2) Understanding the suffering (significance, context, rights, responsibilities etc.); (3) Connecting with the suffering (e.g., verbal, physical, signs and symbols); (4) Making a judgment about the suffering (the need to act); (5) Responding with an intention to alleviate the suffering; (6) Attention to the effect and outcomes of the response. These elements can operate at an individual (human or machine) and collective systems level (healthcare organizations or systems) as a cyclical system to alleviate different types of suffering. New and novel approaches to human-AI intelligent caring could enrich education, learning, and clinical practice; extend healing spaces; and enhance healing relationships. Implications In a complex adaptive system such as healthcare, human-AI intelligent caring will need to be implemented, not as an ideology, but through strategic choices, incentives, regulation, professional education, and training, as well as through joined up thinking about human-AI intelligent caring. Research funders can encourage research and development into the topic of AI technologies and compassion as a system of human-AI intelligent caring. Educators, technologists, and health professionals can inform themselves about the system of human-AI intelligent caring.
Collapse
Affiliation(s)
| | - Teodor Zidaru
- Department of Anthropology, London School of Economics and Political Sciences, London, United Kingdom
| | - Fiona Ross
- Faculty of Health, Science, Social Care and Education, Kingston University London, London, United Kingdom
| | - Cindy Mason
- Artificial Intelligence Researcher (Independent), Palo Alto, CA, United States
| | | | - Melissa Ream
- Kent Surrey Sussex Academic Health Science Network (AHSN) and the National AHSN Network Artificial Intelligence (AI) Initiative, Surrey, United Kingdom
| | - Rich Stockley
- Head of Research and Engagement, Surrey Heartlands Health and Care Partnership, Surrey, United Kingdom
| |
Collapse
|
15
|
Shaikh AK, Alhashmi SM, Khalique N, Khedr AM, Raahemifar K, Bukhari S. Bibliometric analysis on the adoption of artificial intelligence applications in the e-health sector. Digit Health 2023; 9:20552076221149296. [PMID: 36683951 PMCID: PMC9850136 DOI: 10.1177/20552076221149296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2022] [Accepted: 12/18/2022] [Indexed: 01/19/2023] Open
Abstract
Artificial intelligence (AI) applications in e-health have evolved considerably in the last 25 years. To track the current research progress in this field, there is a need to analyze the most recent trends in adopting AI applications in e-health. This bibliometric analysis study covers AI applications in e-health. It differs from existing literature reviews in that the journal articles were obtained from the Scopus database from its inception to late 2021 (25 years), which captures the most recent trend of AI in e-health. Bibliometric analysis is employed to provide a statistical and quantitative analysis of the available literature in a specific field of study for a particular period. An extensive global literature review is performed to identify the significant research areas, authors, and their relationships through published articles. It also provides researchers with an overview of the evolution of work in specific research fields. The study's main contribution is to highlight the essential authors, journals, institutes, keywords, and states in the development of the AI field in e-health.
Collapse
Affiliation(s)
| | - Saadat M Alhashmi
- Department of Information Systems, College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates
| | - Nadia Khalique
- College of Economics and Political Science, Sultan Qaboos University, Muscat, Oman
| | - Ahmed M. Khedr
- Department of Information Systems, College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates
| | | | - Sadaf Bukhari
- Beijing Institute of Technology, Beijing, China
| |
Collapse
|
16
|
Pap IA, Oniga S. A Review of Converging Technologies in eHealth Pertaining to Artificial Intelligence. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:11413. [PMID: 36141685 PMCID: PMC9517043 DOI: 10.3390/ijerph191811413] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/23/2022] [Revised: 08/31/2022] [Accepted: 09/06/2022] [Indexed: 06/16/2023]
Abstract
Over the last couple of years, in the context of the COVID-19 pandemic, many healthcare issues have been exacerbated, highlighting the paramount need to provide both reliable and affordable health services to remote locations by using the latest technologies, such as video conferencing, data management, the secure transfer of patient information, and efficient data analysis tools such as machine learning algorithms. In the constant struggle to offer healthcare to everyone, many modern technologies find applicability in eHealth, mHealth, telehealth, or telemedicine. In this paper, we attempt to render an overview of the different technologies used in certain healthcare applications, ranging from remote patient monitoring in the field of cardio-oncology to analyzing EEG signals through machine learning for the prediction of seizures, focusing on the role of artificial intelligence in eHealth.
Collapse
Affiliation(s)
- Iuliu Alexandru Pap
- Department of Electric, Electronic and Computer Engineering, Technical University of Cluj-Napoca, North University Center of Baia Mare, 430083 Baia Mare, Romania
| | - Stefan Oniga
- Department of Electric, Electronic and Computer Engineering, Technical University of Cluj-Napoca, North University Center of Baia Mare, 430083 Baia Mare, Romania
- Department of IT Systems and Networks, Faculty of Informatics, University of Debrecen, 4032 Debrecen, Hungary
| |
Collapse
|
17
|
Ho SM, Liu X, Seraj MS, Dickey S. Social distance "nudge:" a context aware mHealth intervention in response to COVID pandemics. COMPUTATIONAL AND MATHEMATICAL ORGANIZATION THEORY 2022; 29:1-24. [PMID: 36106126 PMCID: PMC9461402 DOI: 10.1007/s10588-022-09365-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 08/17/2022] [Indexed: 06/15/2023]
Abstract
The impact of the COVID pandemic on our society is unprecedented in our time. As the coronavirus mutates, maintaining social distance remains an essential step in defending personal as well as public health. This study conceptualizes the social distance "nudge" and explores the efficacy of an mHealth digital intervention, while developing and validating a choice architecture that aims to influence users' behavior in maintaining social distance for their own self-interest. End-user nudging experiments were conducted via a mobile phone app that was developed as a research artifact. The accuracy of social distance nudging was validated in both the United States and Japan. Future work will consider behavioral studies to better understand the effectiveness of this digital nudging intervention.
Collapse
Affiliation(s)
- Shuyuan Mary Ho
- School of Information, Florida State University, 142 Collegiate Loop, P.O. Box 3062100, Tallahassee, FL 32306-2100 USA
| | - Xiuwen Liu
- Department of Computer Science, Florida State University, 1017 Academy Way, Tallahassee, FL 32304 USA
| | - Md Shamim Seraj
- Department of Computer Science, Florida State University, 1017 Academy Way, Tallahassee, FL 32304 USA
| | - Sabrina Dickey
- College of Nursing, Florida State University, 98 Varsity Way, Tallahassee, FL 32306-4310 USA
| |
Collapse
|
18
|
Chen M, Zhang B, Cai Z, Seery S, Gonzalez MJ, Ali NM, Ren R, Qiao Y, Xue P, Jiang Y. Acceptance of clinical artificial intelligence among physicians and medical students: A systematic review with cross-sectional survey. Front Med (Lausanne) 2022; 9:990604. [PMID: 36117979 PMCID: PMC9472134 DOI: 10.3389/fmed.2022.990604] [Citation(s) in RCA: 39] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2022] [Accepted: 08/01/2022] [Indexed: 11/13/2022] Open
Abstract
Background Artificial intelligence (AI) needs to be accepted and understood by physicians and medical students, but few studies have systematically assessed their attitudes. We investigated clinical AI acceptance among physicians and medical students around the world to provide implementation guidance. Materials and methods We conducted a two-stage study, involving a foundational systematic review of physician and medical student acceptance of clinical AI. This enabled us to design a suitable web-based questionnaire, which was then distributed among practitioners and trainees around the world. Results Sixty studies were included in this systematic review, and 758 respondents from 39 countries completed the online questionnaire. Five (62.50%) of eight studies reported 65% or higher awareness regarding the application of clinical AI. However, only 10–30% had actually used AI, and 26 (74.28%) of 35 studies suggested there was a lack of AI knowledge. Our questionnaire uncovered a 38% awareness rate and a 20% utility rate of clinical AI, although 53% lacked basic knowledge of clinical AI. Forty-five studies mentioned attitudes toward clinical AI, and in 38 (84.44%) of these studies over 60% of respondents were positive about AI, although they were also concerned about the potential for unpredictable, incorrect results. Seventy-seven percent were optimistic about the prospect of clinical AI. The support rate for the statement that AI could replace physicians ranged from 6 to 78% across the 40 studies that mentioned this topic. Five studies recommended that efforts should be made to increase collaboration. Our questionnaire showed 68% disagreed that AI would become a surrogate physician, but believed it should assist in clinical decision-making. Participants with different identities and experience and from different countries held similar but subtly different attitudes.
Conclusion Most physicians and medical students appear aware of the increasing application of clinical AI, but lack practical experience and related knowledge. Overall, participants have positive but reserved attitudes about AI. In spite of the mixed opinions around clinical AI becoming a surrogate physician, there was a consensus that collaborations between the two should be strengthened. Further education should be conducted to alleviate anxieties associated with change and adopting new technologies.
Collapse
Affiliation(s)
- Mingyang Chen
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Bo Zhang
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Ziting Cai
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Samuel Seery
- Faculty of Health and Medicine, Division of Health Research, Lancaster University, Lancaster, United Kingdom
| | | | - Nasra M. Ali
- The First Affiliated Hospital, Dalian Medical University, Dalian, China
| | - Ran Ren
- Global Health Research Center, Dalian Medical University, Dalian, China
| | - Youlin Qiao
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- *Correspondence: Youlin Qiao,
| | - Peng Xue
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Peng Xue,
| | - Yu Jiang
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu Jiang,
| |
Collapse
|
19
|
Çitil ET, Çitil Canbay F. Artificial intelligence and the future of midwifery: What do midwives think about artificial intelligence? A qualitative study. Health Care Women Int 2022; 43:1510-1527. [PMID: 35452353 DOI: 10.1080/07399332.2022.2055760] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
The evidence on how AI will revolutionize midwifery is insufficient. Our aim was to investigate the opinions of midwives on the future of AI and midwifery. Semi-structured interviews were conducted with 18 midwives in Turkey. Three themes were identified: expectations, which included the advantages and conditional acceptance of robotic technology; prejudices, which reflected perceived shortcomings, a lack of human competencies, and trust issues; and concerns about midwifery care and its future. The midwives were overwhelmingly skeptical about the replacement of human capabilities by AI and found the technology's potential limited.
Collapse
Affiliation(s)
- Elif Tuğçe Çitil
- Department of Midwifery, Health Science Faculty, Kütahya Health Science University, Kütahya, Turkey
| | - Funda Çitil Canbay
- Department of Midwifery, Health Science Faculty, Atatürk University, Erzurum, Turkey
| |
Collapse
|
20
|
Jussupow E, Spohrer K, Heinzl A. Identity Threats as a Reason for Resistance to Artificial Intelligence: Survey Study With Medical Students and Professionals. JMIR Form Res 2022; 6:e28750. [PMID: 35319465 PMCID: PMC8987955 DOI: 10.2196/28750] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2021] [Revised: 05/27/2021] [Accepted: 01/03/2022] [Indexed: 01/26/2023] Open
Abstract
Background Information systems based on artificial intelligence (AI) have increasingly spurred controversies among medical professionals as they start to outperform medical experts in tasks that previously required complex human reasoning. Prior research in other contexts has shown that such a technological disruption can result in professional identity threats and provoke negative attitudes and resistance to using technology. However, little is known about how AI systems evoke professional identity threats in medical professionals and under which conditions they actually provoke negative attitudes and resistance. Objective The aim of this study is to investigate how medical professionals’ resistance to AI can be understood because of professional identity threats and temporal perceptions of AI systems. It examines the following two dimensions of medical professional identity threat: threats to physicians’ expert status (professional recognition) and threats to physicians’ role as an autonomous care provider (professional capabilities). This paper assesses whether these professional identity threats predict resistance to AI systems and change in importance under the conditions of varying professional experience and varying perceived temporal relevance of AI systems. Methods We conducted 2 web-based surveys with 164 medical students and 42 experienced physicians across different specialties. The participants were provided with a vignette of a general medical AI system. We measured the experienced identity threats, resistance attitudes, and perceived temporal distance of AI. In a subsample, we collected additional data on the perceived identity enhancement to gain a better understanding of how the participants perceived the upcoming technological change as beyond a mere threat. Qualitative data were coded in a content analysis. Quantitative data were analyzed in regression analyses. 
Results Both threats to professional recognition and threats to professional capabilities contributed to perceived self-threat and resistance to AI. Self-threat was negatively associated with resistance. Threats to professional capabilities directly affected resistance to AI, whereas the effect of threats to professional recognition was fully mediated through self-threat. Medical students experienced stronger identity threats and resistance to AI than medical professionals. The temporal distance of AI changed the importance of professional identity threats. If AI systems were perceived as relevant only in the distant future, the effect of threats to professional capabilities was weaker, whereas the effect of threats to professional recognition was stronger. The effect of threats remained robust after including perceived identity enhancement. The results show that the distinct dimensions of medical professional identity are affected by the upcoming technological change through AI. Conclusions Our findings demonstrate that AI systems can be perceived as a threat to medical professional identity. Both threats to professional recognition and threats to professional capabilities contribute to resistance attitudes toward AI and need to be considered in the implementation of AI systems in clinical practice.
Collapse
Affiliation(s)
| | - Kai Spohrer
- Frankfurt School of Finance & Management, Frankfurt, Germany
| | | |
Collapse
|
21
|
Blease C, Kharko A, Bernstein M, Bradley C, Houston M, Walsh I, Hägglund M, DesRoches C, Mandl KD. Machine learning in medical education: a survey of the experiences and opinions of medical students in Ireland. BMJ Health Care Inform 2022; 29:bmjhci-2021-100480. [PMID: 35105606 PMCID: PMC8808371 DOI: 10.1136/bmjhci-2021-100480] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Affiliation(s)
- Charlotte Blease
- Division of General Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
| | - Anna Kharko
- Faculty of Health and Human Sciences, University of Plymouth, Plymouth, UK
- Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
| | - Michael Bernstein
- School of Public Health, Brown University, Providence, Rhode Island, USA
| | - Colin Bradley
- School of Medicine, University College Cork, Cork, Ireland
| | - Muiris Houston
- School of Medicine, National University of Ireland Galway, Galway, Ireland
- School of Medicine, Trinity College Dublin, Dublin, Ireland
| | - Ian Walsh
- School of Medicine, Dentistry and Biomedical Sciences, Queen's University, Belfast, Belfast, Northern Ireland, UK
| | - Maria Hägglund
- Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
| | - Catherine DesRoches
- Division of General Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
| | - Kenneth D Mandl
- Harvard Medical School, Boston, Massachusetts, USA
- Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts, USA
| |
Collapse
|
22
|
Boucher EM, Harake NR, Ward HE, Stoeckl SE, Vargas J, Minkel J, Parks AC, Zilca R. Artificially intelligent chatbots in digital mental health interventions: a review. Expert Rev Med Devices 2021; 18:37-49. [PMID: 34872429 DOI: 10.1080/17434440.2021.2013200] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
INTRODUCTION Increasing demand for mental health services and the expanding capabilities of artificial intelligence (AI) in recent years have driven the development of digital mental health interventions (DMHIs). To date, AI-based chatbots have been integrated into DMHIs to support diagnostics and screening, symptom management and behavior change, and content delivery. AREAS COVERED We summarize the current landscape of DMHIs, with a focus on AI-based chatbots. Happify Health's AI chatbot, Anna, serves as a case study for discussion of potential challenges and how these might be addressed, and demonstrates the promise of chatbots as effective, usable, and adoptable within DMHIs. Finally, we discuss ways in which future research can advance the field, addressing topics including perceptions of AI, the impact of individual differences, and implications for privacy and ethics. EXPERT OPINION Our discussion concludes with a speculative viewpoint on the future of AI in DMHIs, including the use of chatbots, the evolution of AI, dynamic mental health systems, hyper-personalization, and human-like intervention delivery.
|
23
|
Tran AQ, Nguyen LH, Nguyen HSA, Nguyen CT, Vu LG, Zhang M, Vu TMT, Nguyen SH, Tran BX, Latkin CA, Ho RCM, Ho CSH. Determinants of Intention to Use Artificial Intelligence-Based Diagnosis Support System Among Prospective Physicians. Front Public Health 2021; 9:755644. [PMID: 34900904 PMCID: PMC8661093 DOI: 10.3389/fpubh.2021.755644] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Accepted: 10/19/2021] [Indexed: 12/02/2022] Open
Abstract
Background: This study aimed to develop a theoretical model to explore the behavioral intentions of medical students to adopt an AI-based Diagnosis Support System. Methods: This online cross-sectional survey used the unified theory of acceptance and use of technology (UTAUT) to examine intentions to use an AI-based Diagnosis Support System among 211 undergraduate medical students in Vietnam. Partial least squares (PLS) structural equation modeling was employed to assess the relationships between latent constructs. Results: Effort expectancy (β = 0.201, p < 0.05) and social influence (β = 0.574, p < 0.05) were positively associated with initial trust, while no association was found between performance expectancy and initial trust (p > 0.05). Only social influence (β = 0.527, p < 0.05) was positively related to behavioral intention. Conclusions: This study highlights positive behavioral intentions toward using an AI-based diagnosis support system among prospective Vietnamese physicians, as well as the effect of social influence on this choice. The development of competency-based AI curricula should be considered when reforming medical education in Vietnam.
Affiliation(s)
- Anh Quynh Tran
- Institute for Preventive Medicine and Public Health, Hanoi Medical University, Hanoi, Vietnam
| | - Long Hoang Nguyen
- Department of Global Public Health, Karolinska Institutet, Stockholm, Sweden
| | | | - Cuong Tat Nguyen
- Institute for Global Health Innovations, Duy Tan University, Da Nang, Vietnam.,Faculty of Medicine, Duy Tan University, Da Nang, Vietnam
| | - Linh Gia Vu
- Institute for Global Health Innovations, Duy Tan University, Da Nang, Vietnam.,Faculty of Medicine, Duy Tan University, Da Nang, Vietnam
| | - Melvyn Zhang
- National Addictions Management Service (NAMS), Institute of Mental Health, Singapore, Singapore
| | | | - Son Hoang Nguyen
- Center of Excellence in Evidence-Based Medicine, Nguyen Tat Thanh University, Ho Chi Minh City, Vietnam
| | - Bach Xuan Tran
- Institute for Preventive Medicine and Public Health, Hanoi Medical University, Hanoi, Vietnam.,Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, United States
| | - Carl A Latkin
- Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, United States
| | - Roger C M Ho
- Department of Psychological Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore.,Institute for Health Innovation and Technology (iHealthtech), National University of Singapore, Singapore, Singapore
| | - Cyrus S H Ho
- Department of Psychological Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| |
|
24
|
Blease C, Kharko A, Annoni M, Gaab J, Locher C. Machine Learning in Clinical Psychology and Psychotherapy Education: A Mixed Methods Pilot Survey of Postgraduate Students at a Swiss University. Front Public Health 2021; 9:623088. [PMID: 33898374 PMCID: PMC8064116 DOI: 10.3389/fpubh.2021.623088] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2020] [Accepted: 03/05/2021] [Indexed: 11/13/2022] Open
Abstract
Background: There is increasing use of psychotherapy apps in mental health care. Objective: This mixed-methods pilot study aimed to explore postgraduate clinical psychology students' familiarity with, and formal exposure to, topics related to artificial intelligence and machine learning (AI/ML) during their studies. Methods: In April-June 2020, we conducted a mixed-methods online survey using a convenience sample of 120 clinical psychology students enrolled in a two-year Master's program at a Swiss university. Results: In total, 37 students responded (response rate: 37/120, 31%). Among respondents, 73% (n = 27) intended to enter a mental health profession, and 97% reported that they had heard of the term "machine learning." Students estimated that 0.52% of their program would be spent on AI/ML education. Around half (46%) reported that they intended to learn about AI/ML as it pertained to mental health care. On a 5-point Likert scale, students "moderately agreed" (median = 4) that AI/ML should be part of clinical psychology/psychotherapy education. Qualitative analysis of students' comments resulted in four major themes on the impact of AI/ML on mental healthcare: (1) changes in the quality and understanding of psychotherapy care; (2) impact on patient-therapist interactions; (3) impact on the psychotherapy profession; (4) data management and ethical issues. Conclusions: This pilot study found that postgraduate clinical psychology students held a wide range of opinions but had limited formal education on how AI/ML-enabled tools might impact psychotherapy. The survey raises questions about how curricula could be enhanced to educate clinical psychology/psychotherapy trainees about the scope of AI/ML in mental healthcare.
Affiliation(s)
- Charlotte Blease
- General Medicine and Primary Care, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, United States
| | - Anna Kharko
- Faculty of Health, University of Plymouth, Plymouth, United Kingdom
| | - Marco Annoni
- Interdepartmental Center for Research Ethics and Integrity CNR, Rome, Italy.,Fondazione Umberto Veronesi, Milan, Italy
| | - Jens Gaab
- Department of Clinical Psychology and Psychotherapy, University of Basel, Basel, Switzerland
| | - Cosima Locher
- Faculty of Health, University of Plymouth, Plymouth, United Kingdom.,Department of Clinical Psychology and Psychotherapy, University of Basel, Basel, Switzerland.,Department of Consultation-Liaison Psychiatry and Psychosomatic Medicine, University Hospital Zurich, Zurich, Switzerland
| |
|
25
|
Shinners L, Aggar C, Grace S, Smith S. Exploring healthcare professionals' perceptions of artificial intelligence: Validating a questionnaire using the e-Delphi method. Digit Health 2021; 7:20552076211003433. [PMID: 33815816 PMCID: PMC7995296 DOI: 10.1177/20552076211003433] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Accepted: 02/23/2021] [Indexed: 01/15/2023] Open
Abstract
Objective The aim of this study was to draw upon the collective knowledge of experts in the fields of health and technology to develop a questionnaire measuring healthcare professionals' perceptions of artificial intelligence (AI). Methods The panel for this study comprised carefully selected participants with a demonstrated interest and/or involvement in AI from the fields of health or information technology. Recruitment was accomplished via email, which invited panel members to participate and included study and consent information. Data were collected over three rounds in the form of an online survey, an online group meeting, and email communication. A 75% median threshold was used to define consensus. Results Between January and March 2019, five healthcare professionals and three IT experts participated in three rounds of the study to reach consensus on the structure and content of the questionnaire. In Round 1, panel members identified issues concerning general understanding of AI and achieved consensus on nine draft questionnaire items. In Round 2, the panel achieved consensus on demographic questions, and comprehensive group discussion resulted in the development of two further questionnaire items for inclusion. In a final e-Delphi round, a draft of the final questionnaire was distributed via email to the panel members for comment. No further amendments were put forward, and 100% consensus was achieved. Conclusion A modified e-Delphi method was used to develop and validate a questionnaire exploring healthcare professionals' perceptions of AI. The e-Delphi method was successful in achieving consensus from an interdisciplinary panel of experts from health and IT. Further research is recommended to test the reliability of this questionnaire.
Affiliation(s)
- Lucy Shinners
- Faculty of Health, Southern Cross University, Gold Coast Airport, Bilinga, Australia
| | - Christina Aggar
- Faculty of Health, Southern Cross University, Gold Coast Airport, Bilinga, Australia
| | - Sandra Grace
- Faculty of Health, Southern Cross University, East Lismore, Australia
| | - Stuart Smith
- Faculty of Health, Southern Cross University, Coffs Harbour, Australia
| |
|