1. Rukadikar A, Khandelwal K. Exploring the promises and pitfalls of artificial intelligence interventions in predicting adolescent self-harm and suicide attempts. Gen Hosp Psychiatry 2024; 89:95-96. [PMID: 38438295] [DOI: 10.1016/j.genhosppsych.2024.02.016]
Affiliation(s)
- Aaradhana Rukadikar
- Symbiosis Law School, Symbiosis International (Deemed University) (SIU), Pune, Maharashtra, India
- Komal Khandelwal
- Symbiosis Law School, Symbiosis International (Deemed University) (SIU), Pune, Maharashtra, India.
2. Ghadiri P, Yaffe MJ, Adams AM, Abbasgholizadeh-Rahimi S. Primary care physicians' perceptions of artificial intelligence systems in the care of adolescents' mental health. BMC Prim Care 2024; 25:215. [PMID: 38872128] [PMCID: PMC11170885] [DOI: 10.1186/s12875-024-02417-1]
Abstract
BACKGROUND Given that mental health problems in adolescence may have lifelong impacts, the role of primary care physicians (PCPs) in identifying and managing these issues is important. Artificial intelligence (AI) may offer solutions to the current challenges involved in mental health care. We therefore explored PCPs' challenges in addressing adolescents' mental health, along with their attitudes towards using AI to assist them in their tasks. METHODS We used purposeful sampling to recruit PCPs for a virtual focus group (FG). The virtual FG lasted 75 minutes and was moderated by two facilitators. A live transcription was produced by the online meeting software. Transcribed data were cleaned, followed by a priori and inductive coding and thematic analysis. RESULTS We reached out to 35 potential participants via email. Seven agreed to participate, and ultimately four took part in the FG. PCPs perceived that AI systems have the potential to be cost-effective, relatively credible, and useful in collecting large amounts of patient data. They envisioned AI assisting with tasks such as diagnosis and establishing treatment plans. However, they feared that reliance on AI might result in a loss of clinical competency. PCPs wanted AI systems to be user-friendly, and they were willing to assist in achieving this goal if it was within their scope of practice and they were compensated for their contribution. They stressed the need for regulatory bodies to deal with the medicolegal and ethical aspects of AI, and for clear guidelines to reduce or eliminate the potential for patient harm. CONCLUSION This study provides the groundwork for assessing PCPs' perceptions of AI systems' features and characteristics, potential applications, possible negative aspects, and requirements for using them. A future study of adolescents' perspectives on integrating AI into mental healthcare might contribute a fuller understanding of the potential of AI for this population.
Affiliation(s)
- Pooria Ghadiri
- Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada
- Mila-Quebec AI Institute, Montréal, QC, Canada
- Mark J Yaffe
- Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada
- St. Mary's Hospital Center of the Integrated University Centre for Health and Social Services of West Island of Montreal, Montréal, QC, Canada
- Alayne Mary Adams
- Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada
- Samira Abbasgholizadeh-Rahimi
- Department of Family Medicine and Faculty of Dental Medicine and Oral Health Sciences, McGill University, 5858 Ch. de la Côte-des-Neiges, Montréal, QC, H3S 1Z1, Canada.
- Mila-Quebec AI Institute, Montréal, QC, Canada.
- Lady Davis Institute for Medical Research (LDI), Jewish General Hospital, Montréal, QC, Canada.
3. Sezgin E, McKay I. Behavioral health and generative AI: a perspective on future of therapies and patient care. NPJ Ment Health Res 2024; 3:25. [PMID: 38849499] [PMCID: PMC11161641] [DOI: 10.1038/s44184-024-00067-w]
Affiliation(s)
- Emre Sezgin
- The Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA.
- The Ohio State University College of Medicine, Columbus, OH, USA.
- Ian McKay
- The Ohio State University College of Medicine, Columbus, OH, USA
- Department of Psychiatry and Behavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
4. Alhuwaydi AM. Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and Future Directions - A Narrative Review for a Comprehensive Insight. Risk Manag Healthc Policy 2024; 17:1339-1348. [PMID: 38799612] [PMCID: PMC11127648] [DOI: 10.2147/rmhp.s461562]
Abstract
Mental health is an essential component of the health and well-being of a person and community, and it is critical for the individual, society, and socio-economic development of any country. Mental healthcare is currently in the health sector transformation era, with emerging technologies such as artificial intelligence (AI) reshaping the screening, diagnosis, and treatment of psychiatric illness. The present narrative review discusses the current landscape and the role of AI in mental healthcare, including screening, diagnosis, and treatment. Furthermore, this review highlights the key challenges, limitations, and prospects of AI in providing mental healthcare based on the existing literature. The literature for this narrative review was retrieved from PubMed, the Saudi Digital Library (SDL), Google Scholar, Web of Science, and IEEE Xplore, and we included only English-language articles published in the last five years. Keywords used in combination with Boolean operators ("AND" and "OR") were the following: "Artificial intelligence", "Machine learning", "Deep learning", "Early diagnosis", "Treatment", "Interventions", "Ethical consideration", and "Mental healthcare". Our review revealed that, equipped with predictive analytics capabilities, AI can improve treatment planning by predicting an individual's response to various interventions. Predictive analytics, which uses historical data to formulate preventative interventions, aligns with the move toward individualized and preventive mental healthcare. In the screening and diagnostic domains, subsets of AI such as machine learning and deep learning have been shown to analyze various mental health data sets and predict the patterns associated with various mental health problems. However, few studies have evaluated collaboration between healthcare professionals and AI in delivering mental healthcare, even though these sensitive problems require empathy, human connection, and holistic, personalized, and multidisciplinary approaches. Ethical issues, cybersecurity, a lack of diversity in data analytics, cultural sensitivity, and language barriers remain concerns for implementing this approach in mental healthcare. Therefore, future comparative trials with larger sample sizes and data sets are warranted to evaluate different AI models used in mental healthcare across regions and fill the existing knowledge gaps.
Affiliation(s)
- Ahmed M Alhuwaydi
- Department of Internal Medicine, Division of Psychiatry, College of Medicine, Jouf University, Sakaka, Saudi Arabia
5. Mulpuri RP, Konda N, Gadde ST, Amalakanti S, Valiveti SC. Artificial Intelligence and Machine Learning in Neuroregeneration: A Systematic Review. Cureus 2024; 16:e61400. [PMID: 38953082] [PMCID: PMC11215936] [DOI: 10.7759/cureus.61400]
Abstract
Artificial intelligence (AI) and machine learning (ML) show promise in various medical domains, including medical imaging, precise diagnoses, and pharmaceutical research. In neuroscience and neurosurgery, AI/ML advancements enhance brain-computer interfaces, neuroprosthetics, and surgical planning. They are poised to revolutionize neuroregeneration by unraveling the nervous system's complexities. However, research on AI/ML in neuroregeneration is fragmented, necessitating a comprehensive review. Adhering to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations, 19 English-language papers focusing on AI/ML in neuroregeneration were selected from a total of 247. Two researchers independently conducted data extraction and quality assessment using the Mixed Methods Appraisal Tool (MMAT) 2018. Eight studies were deemed high quality, 10 moderate, and four low. Primary goals included diagnosing neurological disorders (35%), robotic rehabilitation (18%), and drug discovery (12% each). Methods ranged from analyzing imaging data (24%) to animal models (24%) and electronic health records (12%). Deep learning accounted for 41% of AI/ML techniques, while standard ML algorithms constituted 29%. The review underscores the growing interest in AI/ML for neuroregenerative medicine, with increasing publications. These technologies aid in diagnosing diseases and facilitating functional recovery through robotics and targeted stimulation. AI-driven drug discovery holds promise for identifying neuroregenerative therapies. Nonetheless, addressing existing limitations remains crucial in this rapidly evolving field.
Affiliation(s)
- Rajendra P Mulpuri
- General Medicine, All India Institute of Medical Sciences, Mangalagiri, IND
- Nikhitha Konda
- Internal Medicine, Alluri Sitarama Raju Academy of Medical Sciences, Eluru, IND
- Sai T Gadde
- General Medicine, All India Institute of Medical Sciences, Mangalagiri, IND
- Sridhar Amalakanti
- General Medicine, All India Institute of Medical Sciences, Mangalagiri, IND
6. Shahzad MF, Xu S, Lim WM, Yang X, Khan QR. Artificial intelligence and social media on academic performance and mental well-being: Student perceptions of positive impact in the age of smart learning. Heliyon 2024; 10:e29523. [PMID: 38665566] [PMCID: PMC11043955] [DOI: 10.1016/j.heliyon.2024.e29523]
Abstract
The advancement of artificial intelligence (AI) and the ubiquity of social media have become transformative agents in contemporary educational ecosystems. This inquiry focuses on the nexus between AI and social media usage in relation to academic performance and mental well-being, and on the role of smart learning in facilitating these relationships. Using partial least squares structural equation modeling (PLS-SEM) on a sample of 401 Chinese university students, the study reveals that both AI and social media have a positive impact on academic performance and mental well-being among university students. Furthermore, smart learning serves as a positive mediating variable, amplifying the beneficial effects of AI and social media on both academic performance and mental well-being. These revelations contribute to the discourse on technology-enhanced education, showing that embracing AI and social media can have a positive impact on student performance and well-being.
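Full PLS-SEM needs dedicated tooling, but the mediation logic this abstract tests (smart learning carrying an indirect effect, estimated as the product of the two path coefficients) can be sketched with ordinary least squares on synthetic data. Every variable name, coefficient, and the data below are illustrative assumptions, not the study's data or method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 401  # same sample size as the abstract; data entirely synthetic

# Hypothesized paths: AI use -> smart learning (a) -> performance (b),
# plus a direct path from AI use to performance (c').
ai_use = rng.normal(size=n)
smart_learning = 0.6 * ai_use + rng.normal(scale=0.5, size=n)
performance = 0.4 * smart_learning + 0.2 * ai_use + rng.normal(scale=0.5, size=n)

def center(x):
    return x - x.mean()

# Path a: OLS slope of the mediator on the predictor.
a = float(center(ai_use) @ center(smart_learning) / (center(ai_use) @ center(ai_use)))

# Path b and direct effect c': regress the outcome on mediator and predictor jointly.
X = np.column_stack([center(smart_learning), center(ai_use)])
b, c_direct = np.linalg.lstsq(X, center(performance), rcond=None)[0]

indirect = a * b  # the mediated effect attributed to smart learning
```

A positive `indirect` alongside a smaller direct effect is the pattern the abstract describes as smart learning "amplifying" the AI and social media effects.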
Affiliation(s)
- Shuo Xu
- College of Economics and Management, Beijing University of Technology, Beijing, PR China
- Weng Marc Lim
- Sunway Business School, Sunway University, Sunway City, Selangor, Malaysia
- School of Business, Law and Entrepreneurship, Swinburne University of Technology, Hawthorn, Victoria, Australia
- Design and Arts, Swinburne University of Technology, Kuching, Sarawak, Malaysia
- Xingbing Yang
- Beijing Yuchehang Information Technology Co., Ltd, Beijing, 100089, PR China
- Qasim Raza Khan
- Department of Management Sciences, COMSATS University Islamabad, Lahore Campus, Pakistan
7. Adler DA, Stamatis CA, Meyerhoff J, Mohr DC, Wang F, Aranovich GJ, Sen S, Choudhury T. Measuring algorithmic bias to analyze the reliability of AI tools that predict depression risk using smartphone sensed-behavioral data. NPJ Ment Health Res 2024; 3:17. [PMID: 38649446] [PMCID: PMC11035598] [DOI: 10.1038/s44184-024-00057-y]
Abstract
AI tools intend to transform mental healthcare by providing remote estimates of depression risk using behavioral data collected by sensors embedded in smartphones. While these tools accurately predict elevated depression symptoms in small, homogenous populations, recent studies show that these tools are less accurate in larger, more diverse populations. In this work, we show that accuracy is reduced because sensed-behaviors are unreliable predictors of depression across individuals: sensed-behaviors that predict depression risk are inconsistent across demographic and socioeconomic subgroups. We first identified subgroups where a developed AI tool underperformed by measuring algorithmic bias, where subgroups with depression were incorrectly predicted to be at lower risk than healthier subgroups. We then found inconsistencies between sensed-behaviors predictive of depression across these subgroups. Our findings suggest that researchers developing AI tools predicting mental health from sensed-behaviors should think critically about the generalizability of these tools, and consider tailored solutions for targeted populations.
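The bias measurement this abstract describes, checking whether depressed members of some subgroups are systematically predicted to be at lower risk, can be illustrated with a minimal sketch. The function name, record format, and toy numbers are ours, not the authors':

```python
from collections import defaultdict

def false_negative_rate_by_subgroup(records):
    """For each subgroup, the fraction of individuals with depression
    that the tool predicted to be at low risk (missed cases)."""
    missed = defaultdict(int)  # depressed but predicted low-risk
    total = defaultdict(int)   # depressed individuals per subgroup
    for group, depressed, predicted_high_risk in records:
        if depressed:
            total[group] += 1
            if not predicted_high_risk:
                missed[group] += 1
    return {g: missed[g] / total[g] for g in total}

# Toy records: (subgroup, has depression, tool predicted high risk).
data = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]
rates = false_negative_rate_by_subgroup(data)
# Subgroup B's depressed members are missed twice as often as A's,
# the kind of subgroup disparity the paper treats as algorithmic bias.
```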
Affiliation(s)
- Daniel A Adler
- Cornell Tech, Information Science, 2 W Loop Rd, New York, NY, 10044, USA.
- Caitlin A Stamatis
- Northwestern University Feinberg School of Medicine, Center for Behavioral Intervention Technologies, Chicago, IL, 60611, USA
- Jonah Meyerhoff
- Northwestern University Feinberg School of Medicine, Center for Behavioral Intervention Technologies, Chicago, IL, 60611, USA
- David C Mohr
- Northwestern University Feinberg School of Medicine, Center for Behavioral Intervention Technologies, Chicago, IL, 60611, USA
- Fei Wang
- Weill Cornell Medicine, Population Health Sciences, New York, NY, 10065, USA
- Srijan Sen
- Michigan Medicine, Department of Psychiatry, Ann Arbor, MI, 48109, USA
- Tanzeem Choudhury
- Cornell Tech, Information Science, 2 W Loop Rd, New York, NY, 10044, USA
8. Adler DA, Stamatis CA, Meyerhoff J, Mohr DC, Wang F, Aranovich GJ, Sen S, Choudhury T. Measuring algorithmic bias to analyze the reliability of AI tools that predict depression risk using smartphone sensed-behavioral data. Research Square [Preprint] 2024:rs.3.rs-3044613. [PMID: 38746448] [PMCID: PMC11092819] [DOI: 10.21203/rs.3.rs-3044613/v1]
Abstract
AI tools intend to transform mental healthcare by providing remote estimates of depression risk using behavioral data collected by sensors embedded in smartphones. While these tools accurately predict elevated symptoms in small, homogenous populations, recent studies show that these tools are less accurate in larger, more diverse populations. In this work, we show that accuracy is reduced because sensed-behaviors are unreliable predictors of depression across individuals; specifically, the sensed-behaviors that predict depression risk are inconsistent across demographic and socioeconomic subgroups. We first identified subgroups where a developed AI tool underperformed by measuring algorithmic bias, where subgroups with depression were incorrectly predicted to be at lower risk than healthier subgroups. We then found inconsistencies between sensed-behaviors predictive of depression across these subgroups. Our findings suggest that researchers developing AI tools predicting mental health from behavior should think critically about the generalizability of these tools, and consider tailored solutions for targeted populations.
9. Zhang G, Zhang Q, Li F. The impact of spiritual care on the psychological health and quality of life of adults with heart failure: a systematic review of randomized trials. Front Med (Lausanne) 2024; 11:1334920. [PMID: 38695025] [PMCID: PMC11062134] [DOI: 10.3389/fmed.2024.1334920]
Abstract
Background Heart failure (HF) brings not only physical pain but also psychological distress. This systematic review investigated the influence of spiritual care on the psychological well-being and quality of life of adults with HF. Methods We conducted a systematic literature review following PRISMA guidelines, searching seven electronic databases for relevant randomized controlled studies without language or temporal restrictions. The studies were assessed for quality using the Cochrane Risk of Bias tool. Results A total of 13 studies (882 participants) were reviewed, investigating interventions such as religion, meditation, mental health, cognitive interventions, and spiritual support. Key factors influencing the effectiveness of spiritual care implementation included integration into routine care, respect for diversity, patient engagement, intervention quality, and alignment with patient beliefs. The majority of the studies indicated that spiritual care has a potentially beneficial impact on the mental health and quality of life of patients with HF. Conclusion The findings provide valuable insights for healthcare professionals, highlighting the importance of adopting a spiritual care approach to healthcare for this population.
Affiliation(s)
- Guangwei Zhang
- School of Nursing, Jilin University, Changchun, China
- The First Hospital of Jilin University, Changchun, China
- Qiyu Zhang
- The First Hospital of Jilin University, Changchun, China
- Fan Li
- School of Nursing, Jilin University, Changchun, China
- Department of Pathogenobiology, The Key Laboratory of Zoonosis, Chinese Ministry of Education, College of Basic Medicine, Jilin University, Changchun, China
- The Key Laboratory for Bionics Engineering, Ministry of Education, Jilin University, Changchun, China
- Engineering Research Center for Medical Biomaterials of Jilin Province, Jilin University, Changchun, China
- Key Laboratory for Health Biomedical Materials of Jilin Province, Jilin University, Changchun, China
- State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Diseases in Central Asia, Urumqi, Xinjiang, China
10. Singhal A, Neveditsin N, Tanveer H, Mago V. Toward Fairness, Accountability, Transparency, and Ethics in AI for Social Media and Health Care: Scoping Review. JMIR Med Inform 2024; 12:e50048. [PMID: 38568737] [PMCID: PMC11024755] [DOI: 10.2196/50048]
Abstract
BACKGROUND The use of social media for disseminating health care information has become increasingly prevalent, making the expanding role of artificial intelligence (AI) and machine learning in this process both significant and inevitable. This development raises numerous ethical concerns. This study explored the ethical use of AI and machine learning in the context of health care information on social media platforms (SMPs). It critically examined these technologies from the perspectives of fairness, accountability, transparency, and ethics (FATE), emphasizing computational and methodological approaches that ensure their responsible application. OBJECTIVE This study aims to identify, compare, and synthesize existing solutions that address the components of FATE in AI applications in health care on SMPs. Through an in-depth exploration of computational methods, approaches, and evaluation metrics used in various initiatives, we sought to elucidate the current state of the art and identify existing gaps. Furthermore, we assessed the strength of the evidence supporting each identified solution and discussed the implications of our findings for future research and practice. In doing so, we made a unique contribution to the field by highlighting areas that require further exploration and innovation. METHODS Our research methodology involved a comprehensive literature search across PubMed, Web of Science, and Google Scholar. We used strategic searches through specific filters to identify relevant research papers published since 2012 focusing on the intersection and union of different literature sets. The inclusion criteria were centered on studies that primarily addressed FATE in health care discussions on SMPs; those presenting empirical results; and those covering definitions, computational methods, approaches, and evaluation metrics. 
RESULTS Our findings present a nuanced breakdown of the FATE principles, aligning them where applicable with the American Medical Informatics Association ethical guidelines. By dividing these principles into dedicated sections, we detailed specific computational methods and conceptual approaches tailored to enforcing FATE in AI-driven health care on SMPs. This segmentation facilitated a deeper understanding of the intricate relationship among the FATE principles and highlighted the practical challenges encountered in their application. It underscored the pioneering contributions of our study to the discourse on ethical AI in health care on SMPs, emphasizing the complex interplay and the limitations faced in implementing these principles effectively. CONCLUSIONS Despite the existence of diverse approaches and metrics to address FATE issues in AI for health care on SMPs, challenges persist. The application of these approaches often intersects with additional ethical considerations, occasionally leading to conflicts. Our review highlights the lack of a unified, comprehensive solution for fully and effectively integrating FATE principles in this domain. This gap necessitates careful consideration of the ethical trade-offs involved in deploying existing methods and underscores the need for ongoing research.
Affiliation(s)
- Aditya Singhal
- Department of Computer Science, Lakehead University, Thunder Bay, ON, Canada
- Nikita Neveditsin
- Department of Mathematics and Computing Science, Saint Mary's University, Halifax, NS, Canada
- Hasnaat Tanveer
- Faculty of Mathematics, University of Waterloo, Waterloo, ON, Canada
- Vijay Mago
- School of Health Policy and Management, York University, Toronto, ON, Canada
11. Thakkar A, Gupta A, De Sousa A. Artificial intelligence in positive mental health: a narrative review. Front Digit Health 2024; 6:1280235. [PMID: 38562663] [PMCID: PMC10982476] [DOI: 10.3389/fdgth.2024.1280235]
Abstract
This paper reviews the spectrum of Artificial Intelligence (AI) applications in mental health and AI's positive role in the field. AI holds considerable promise for mental health care, and the paper examines its multiple facets. It first defines AI and its scope in mental health, then outlines key approaches such as supervised and unsupervised machine learning. The role of AI in psychiatric conditions such as neurodegenerative disorders, intellectual disability, and seizures is discussed, along with its role in awareness, diagnosis, and intervention for mental health disorders. The role of AI in positive emotional regulation and its impact in schizophrenia, autism spectrum disorders, and mood disorders is also highlighted. The article further discusses the limitations of AI-based approaches and the need for such approaches to be culturally aware, built on structured yet flexible algorithms, and designed with awareness of the biases that can arise in AI. The ethical issues that may arise with the use of AI in mental health are also examined.
12. DelPozo-Banos M, Stewart R, John A. Machine learning in mental health and its relationship with epidemiological practice. Front Psychiatry 2024; 15:1347100. [PMID: 38528983] [PMCID: PMC10961376] [DOI: 10.3389/fpsyt.2024.1347100]
Affiliation(s)
- Robert Stewart
- King’s College London, Institute of Psychiatry, Psychology and Neuroscience, London, United Kingdom
- South London and Maudsley National Health Service (NHS) Foundation Trust, London, United Kingdom
- Ann John
- Swansea University Medical School, Swansea, United Kingdom
13. Chen J, Yuan D, Dong R, Cai J, Ai Z, Zhou S. Artificial intelligence significantly facilitates development in the mental health of college students: a bibliometric analysis. Front Psychol 2024; 15:1375294. [PMID: 38515973] [PMCID: PMC10955080] [DOI: 10.3389/fpsyg.2024.1375294]
Abstract
Objective College students are currently grappling with severe mental health challenges, and research on artificial intelligence (AI) related to college students' mental health, a crucial catalyst for promoting psychological well-being, is rapidly advancing. Employing bibliometric methods, this study aims to analyze and discuss research on AI in college student mental health. Methods Publications pertaining to AI and college student mental health were retrieved from the Web of Science core database. The distribution of publications was analyzed to gauge productivity. Data on countries, authors, journals, and keywords were analyzed using VOSviewer to explore collaboration patterns, disciplinary composition, research hotspots, and trends. Results Spanning 2003 to 2023, the study encompassed 1722 publications and revealed notable insights: (1) a gradual rise in annual publications, peaking in 2022; (2) the Journal of Affective Disorders and Psychiatry Research emerged as the most productive and influential sources in this field, with significant contributions from China, the United States, and their affiliated higher education institutions; (3) the primary mental health issues studied were depression and anxiety, with machine learning and AI having the widest range of applications; (4) an imperative for enhanced international and interdisciplinary collaboration; (5) research hotspots exploring factors influencing college student mental health and AI applications. Conclusion This study provides a succinct yet comprehensive overview of the field, facilitating a nuanced understanding of prospective applications of AI in college student mental health. Professionals can leverage this research to discern the advantages, risks, and potential impacts of AI in this critical field.
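The keyword step this abstract describes (VOSviewer-style maps are built from pairwise keyword co-occurrence counts across publications) can be sketched as follows; the corpus and keyword lists are invented for illustration:

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(papers):
    """Count how often each unordered pair of keywords appears together
    across papers; this raw matrix underlies co-occurrence maps."""
    pairs = Counter()
    for keywords in papers:
        # sort so each unordered pair is counted under a single key
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

# Invented keyword lists standing in for indexed publications.
corpus = [
    ["depression", "machine learning", "college students"],
    ["anxiety", "machine learning", "college students"],
    ["depression", "anxiety", "college students"],
]
links = keyword_cooccurrence(corpus)
# ("college students", "machine learning") co-occurs in 2 papers,
# ("anxiety", "depression") in 1; strong links become map edges.
```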
Affiliation(s)
- Jing Chen
- Wuhan University China Institute of Boundary and Ocean Studies, Wuhan, China
- Dongfeng Yuan
- Faculty of Pharmacy, Hubei University of Chinese Medicine, Wuhan, China
- Ruotong Dong
- Faculty of Pharmacy, Hubei University of Chinese Medicine, Wuhan, China
- Jingyi Cai
- Faculty of Pharmacy, Hubei University of Chinese Medicine, Wuhan, China
- Zhongzhu Ai
- Faculty of Pharmacy, Hubei University of Chinese Medicine, Wuhan, China
- Hubei Shizhen Laboratory, Wuhan, China
- Shanshan Zhou
- Hubei Shizhen Laboratory, Wuhan, China
- The First Clinical Medical School, Hubei University of Chinese Medicine, Wuhan, China
14. Dergaa I, Saad HB, El Omri A, Glenn JM, Clark CCT, Washif JA, Guelmami N, Hammouda O, Al-Horani RA, Reynoso-Sánchez LF, Romdhani M, Paineiras-Domingos LL, Vancini RL, Taheri M, Mataruna-Dos-Santos LJ, Trabelsi K, Chtourou H, Zghibi M, Eken Ö, Swed S, Aissa MB, Shawki HH, El-Seedi HR, Mujika I, Seiler S, Zmijewski P, Pyne DB, Knechtle B, Asif IM, Drezner JA, Sandbakk Ø, Chamari K. Using artificial intelligence for exercise prescription in personalised health promotion: A critical evaluation of OpenAI's GPT-4 model. Biol Sport 2024; 41:221-241. [PMID: 38524814] [PMCID: PMC10955739] [DOI: 10.5114/biolsport.2024.133661]
Abstract
The rise of artificial intelligence (AI) applications in healthcare provides new possibilities for personalized health management. AI-based fitness applications are becoming more common, facilitating the opportunity for individualised exercise prescription. However, the use of AI carries the risk of inadequate expert supervision, and the efficacy and validity of such applications have not been thoroughly investigated, particularly in the context of diverse health conditions. The aim of the study was to critically assess the efficacy of exercise prescriptions generated by OpenAI's Generative Pre-Trained Transformer 4 (GPT-4) model for five example patient profiles with diverse health conditions and fitness goals. Our focus was to assess the model's ability to generate exercise prescriptions based on a singular, initial interaction, akin to a typical user experience. The evaluation was conducted by leading experts in the field of exercise prescription. Five distinct scenarios were formulated, each representing a hypothetical individual with a specific health condition and fitness objective. Upon receiving details of each individual, the GPT-4 model was tasked with generating a 30-day exercise program. These AI-derived exercise programs were subsequently subjected to a thorough evaluation by experts in exercise prescription. The evaluation encompassed adherence to established principles of frequency, intensity, time, and exercise type; integration of perceived exertion levels; consideration for medication intake and the respective medical condition; and the extent of program individualization tailored to each hypothetical profile. The AI model could create general safety-conscious exercise programs for various scenarios. However, the AI-generated exercise prescriptions lacked precision in addressing individual health conditions and goals, often prioritizing excessive safety over the effectiveness of training. 
The AI-based approach aimed to ensure patient improvement through gradual increases in training load and intensity, but the model's capacity to fine-tune its recommendations through ongoing interaction was not fully satisfactory. AI technologies, in their current state, can serve as supplemental tools in exercise prescription, particularly by enhancing accessibility for individuals unable to access often-costly professional advice. However, AI technologies are not yet recommended as a substitute for the personalized, progressive, and health condition-specific prescriptions provided by healthcare and fitness professionals. Further research is needed to explore more interactive use of AI models and the integration of real-time physiological feedback.
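The single-interaction workflow evaluated in this study (submitting a patient profile once and requesting a 30-day program) can be sketched as a chat-completion request. The profile text, prompt wording, and helper name below are illustrative assumptions rather than the study's actual prompts; the request body is only constructed here, not sent.

```python
import json

def build_chat_request(profile: str, model: str = "gpt-4") -> dict:
    """Build a Chat Completions request body asking for a 30-day exercise program."""
    prompt = (
        f"Patient profile: {profile}\n"
        "Generate a 30-day exercise program that follows the FITT principles "
        "(frequency, intensity, time, type) and states perceived exertion targets."
    )
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are an exercise prescription assistant."},
            {"role": "user", "content": prompt},
        ],
    }

# Hypothetical profile; the study's five scenarios are not reproduced here.
profile = ("55-year-old with controlled type 2 diabetes on metformin; "
           "goal: improve cardiovascular fitness; currently sedentary")
body = build_chat_request(profile)
print(json.dumps(body, indent=2)[:120])  # payload for a POST to the chat completions endpoint
```

A single request like this mirrors the one-shot setup the authors tested; the ongoing interaction they found lacking would require appending the model's reply and further user turns to `messages`.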
Affiliation(s)
- Ismail Dergaa
- Primary Health Care Corporation (PHCC), Doha, Qatar
- Research Laboratory Education, Motricité, Sport et Santé (EM2S) LR19JS01, High Institute of Sport and Physical Education of Sfax, University of Sfax, Sfax 3000, Tunisia
- High Institute of Sport and Physical Education of Kef, Jendouba, Kef, Tunisia
| | - Helmi Ben Saad
- University of Sousse, Farhat HACHED hospital, Research Laboratory LR12SP09 «Heart Failure», Sousse, Tunisia
- University of Sousse, Faculty of Medicine of Sousse, laboratory of Physiology, Sousse, Tunisia
| | - Abdelfatteh El Omri
- Surgical Research Section, Department of Surgery, Hamad Medical Corporation, Doha 3050, Qatar
| | | | - Cain C. T. Clark
- College of Life Sciences, Birmingham City University, Birmingham, B15 3TN, UK
- Institute for Health and Wellbeing, Coventry University, Coventry, CV1 5FB, UK
| | - Jad Adrian Washif
- Sports Performance Division, National Sports Institute of Malaysia, Kuala Lumpur, Malaysia
| | - Noomen Guelmami
- High Institute of Sport and Physical Education of Kef, Jendouba, Kef, Tunisia
- Postgraduate School of Public Health, Department of Health Sciences (DISSAL), University of Genoa, Genoa, Italy
| | - Omar Hammouda
- Interdisciplinary Laboratory in Neurosciences, Physiology and Psychology: Physical Activity, Health and Learning (LINP2), UFR STAPS (Faculty of Sport Sciences), UPL, Paris Nanterre University, Nanterre, France
- Research Laboratory, Molecular Bases of Human Pathology, LR19ES13, Faculty of Medicine, University of Sfax, Tunisia
| | | | | | - Mohamed Romdhani
- Interdisciplinary Laboratory in Neurosciences, Physiology and Psychology: Physical Activity, Health and Learning (LINP2), UFR STAPS (Faculty of Sport Sciences), UPL, Paris Nanterre University, Nanterre, France
| | | | - Rodrigo L. Vancini
- Centro de Educação Física e Desportos, Universidade Federal do Espírito Santo, Vitória, Espírito Santo, Brazil
| | - Morteza Taheri
- Department of Motor Behavior, Faculty of Sport Sciences, University of Tehran, Tehran, Iran
| | - Leonardo Jose Mataruna-Dos-Santos
- Department of Creative Industries, Faculty of Communication, Arts and Sciences, Canadian University of Dubai, Dubai, United Arab Emirates
| | - Khaled Trabelsi
- Research Laboratory Education, Motricité, Sport et Santé (EM2S) LR19JS01, High Institute of Sport and Physical Education of Sfax, University of Sfax, Sfax 3000, Tunisia
| | - Hamdi Chtourou
- Research Laboratory Education, Motricité, Sport et Santé (EM2S) LR19JS01, High Institute of Sport and Physical Education of Sfax, University of Sfax, Sfax 3000, Tunisia
| | - Makram Zghibi
- High Institute of Sport and Physical Education of Kef, Jendouba, Kef, Tunisia
| | - Özgür Eken
- Department of Physical Education and Sport Teaching, Inonu University, Malatya 44000, Turkey
| | - Sarya Swed
- Faculty of Medicine, University of Aleppo, Aleppo, Aleppo Governorate, Syria
| | - Mohamed Ben Aissa
- Postgraduate School of Public Health, Department of Health Sciences (DISSAL), University of Genoa, Genoa, Italy
| | - Hossam H. Shawki
- Department of Comparative and Experimental Medicine, Nagoya City University Graduate School of Medical Sciences, Nagoya 467-8601, Japan
| | - Hesham R. El-Seedi
- Department of Chemistry, Faculty of Science, Islamic University of Madinah, Madinah, 42351, Saudi Arabia
- International Research Center for Food Nutrition and Safety, Jiangsu University, Zhenjiang 212013, China
| | - Iñigo Mujika
- Department of Physiology, Faculty of Medicine and Nursing, University of the Basque Country, Leioa, Basque Country
- Exercise Science Laboratory, School of Kinesiology, Faculty of Medicine, Universidad Finis Terrae, Santiago, Chile
| | - Stephen Seiler
- Department of Sport Science and Physical Education, University of Agder, Kristiansand, Norway
| | - Piotr Zmijewski
- Jozef Pilsudski University of Physical Education in Warsaw, Warsaw, Poland
| | - David B. Pyne
- Research Institute for Sport and Exercise, University of Canberra, Canberra, ACT, Australia
| | - Beat Knechtle
- Institute of Primary Care, University of Zurich, Zurich, Switzerland
| | - Irfan M Asif
- Department of Family and Community Medicine, University of Alabama at Birmingham, Birmingham, Alabama, USA
| | - Jonathan A Drezner
- Center for Sports Cardiology, University of Washington, Seattle, Washington, USA
| | - Øyvind Sandbakk
- Center for Elite Sports Research, Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
| | - Karim Chamari
- Higher Institute of Sport and Physical Education, ISSEP Ksar Saïd, Manouba University, Tunisia
| |
|
15
|
Zafar F, Fakhare Alam L, Vivas RR, Wang J, Whei SJ, Mehmood S, Sadeghzadegan A, Lakkimsetti M, Nazir Z. The Role of Artificial Intelligence in Identifying Depression and Anxiety: A Comprehensive Literature Review. Cureus 2024; 16:e56472. [PMID: 38638735 PMCID: PMC11025697 DOI: 10.7759/cureus.56472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/18/2024] [Indexed: 04/20/2024] Open
Abstract
This narrative literature review undertakes a comprehensive examination of this burgeoning field, tracing the development of artificial intelligence (AI)-powered tools for depression and anxiety detection from intricate algorithms to practical applications. Delivering essential mental health care services is now a significant public health priority. In recent years, AI has become a game-changer in the early identification of, and intervention in, these pervasive mental health disorders. AI tools can potentially empower behavioral healthcare services by helping psychiatrists collect objective data on patients' progress and tasks. This study emphasizes the current understanding of AI, the different types of AI, its current use in multiple mental health disorders, its advantages and disadvantages, and its future potential. As technology develops and the digitalization of the modern era increases, the application of artificial intelligence in psychiatry will rise; a comprehensive understanding will therefore be needed. We searched PubMed, Google Scholar, and ScienceDirect using relevant keywords. In a recent review of studies using electronic health record (EHR) data with AI and machine learning techniques for diagnosing clinical conditions, roughly 99 publications were found. Of these, 35 studies concerned mental health disorders across all age groups, and six of them utilized EHR data sources. By critically analyzing prominent scholarly works, we aim to illuminate the current state of this technology, exploring its successes, limitations, and future directions. In doing so, we hope to contribute to a nuanced understanding of AI's potential to revolutionize mental health diagnostics and pave the way for further research and development in this critically important domain.
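Keyword searches of the kind described can be scripted against NCBI's E-utilities ESearch endpoint. A minimal sketch, assuming a hypothetical query string rather than the review's actual search terms; the URL is composed but not fetched:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term: str, retmax: int = 100) -> str:
    """Compose an ESearch URL that returns matching PMIDs as JSON."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return f"{EUTILS}?{urlencode(params)}"

# Hypothetical query string for illustration only.
url = pubmed_search_url('("artificial intelligence") AND (depression OR anxiety)')
print(url)
```

Fetching this URL returns a JSON document whose `esearchresult.idlist` field holds the PMIDs, which can then be passed to EFetch for abstracts.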
Affiliation(s)
- Fabeha Zafar
- Internal Medicine, Dow University of Health Sciences (DUHS), Karachi, PAK
| | | | - Rafael R Vivas
- Nutrition, Food and Exercise Sciences, Florida State University College of Human Sciences, Tallahassee, USA
| | - Jada Wang
- Medicine, St. George's University, Brooklyn, USA
| | - See Jia Whei
- Internal Medicine, Sriwijaya University, Palembang, IDN
| | | | | | | | - Zahra Nazir
- Internal Medicine, Combined Military Hospital, Quetta, PAK
| |
|
16
|
Adibi S, Valizadeh-Haghi S, Khazaal Y, Rahmatizadeh S. Editorial: Mobile health application in addictive disorders therapy. Front Psychiatry 2024; 15:1360744. [PMID: 38370560 PMCID: PMC10869578 DOI: 10.3389/fpsyt.2024.1360744] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/24/2023] [Accepted: 01/23/2024] [Indexed: 02/20/2024] Open
Affiliation(s)
- Sasan Adibi
- School of Information Technology, Deakin University, Geelong, VIC, Australia
| | - Saeideh Valizadeh-Haghi
- Department of Medical Library and Information Science, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Yasser Khazaal
- Department of Psychiatry, Lausanne University Hospital and Lausanne University, Lausanne, Switzerland
| | - Shahabedin Rahmatizadeh
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| |
|
17
|
Rogan J, Bucci S, Firth J. Health Care Professionals' Views on the Use of Passive Sensing, AI, and Machine Learning in Mental Health Care: Systematic Review With Meta-Synthesis. JMIR Ment Health 2024; 11:e49577. [PMID: 38261403 PMCID: PMC10848143 DOI: 10.2196/49577] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/02/2023] [Revised: 10/30/2023] [Accepted: 11/01/2023] [Indexed: 01/24/2024] Open
Abstract
BACKGROUND Mental health difficulties are highly prevalent worldwide. Passive sensing technologies and applied artificial intelligence (AI) methods can provide an innovative means of supporting the management of mental health problems and enhancing the quality of care. However, the views of stakeholders are important in understanding the potential barriers to and facilitators of their implementation. OBJECTIVE This study aims to review, critically appraise, and synthesize qualitative findings relating to the views of mental health care professionals on the use of passive sensing and AI in mental health care. METHODS A systematic search of qualitative studies was performed using 4 databases. A meta-synthesis approach was used, whereby studies were analyzed using an inductive thematic analysis approach within a critical realist epistemological framework. RESULTS Overall, 10 studies met the eligibility criteria. The 3 main themes were uses of passive sensing and AI in clinical practice, barriers to and facilitators of use in practice, and consequences for service users. A total of 5 subthemes were identified: barriers, facilitators, empowerment, risk to well-being, and data privacy and protection issues. CONCLUSIONS Although clinicians are open-minded about the use of passive sensing and AI in mental health care, important factors to consider are service user well-being, clinician workloads, and therapeutic relationships. Service users and clinicians must be involved in the development of digital technologies and systems to ensure ease of use. The development of, and training in, clear policies and guidelines on the use of passive sensing and AI in mental health care, including risk management and data security procedures, will also be key to facilitating clinician engagement. The means for clinicians and service users to provide feedback on how the use of passive sensing and AI in practice is being received should also be considered. 
TRIAL REGISTRATION PROSPERO International Prospective Register of Systematic Reviews CRD42022331698; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=331698.
Affiliation(s)
- Jessica Rogan
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Sciences, The University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
| | - Sandra Bucci
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Sciences, The University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
| | - Joseph Firth
- Division of Psychology and Mental Health, School of Health Sciences, Faculty of Biology, Medicine and Health, Manchester Academic Health Sciences, The University of Manchester, Manchester, United Kingdom
| |
|
18
|
Singh V, Sarkar S, Gaur V, Grover S, Singh OP. Clinical Practice Guidelines on using artificial intelligence and gadgets for mental health and well-being. Indian J Psychiatry 2024; 66:S414-S419. [PMID: 38445270 PMCID: PMC10911327 DOI: 10.4103/indianjpsychiatry.indianjpsychiatry_926_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/06/2023] [Revised: 12/12/2023] [Accepted: 12/18/2023] [Indexed: 03/07/2024] Open
Affiliation(s)
- Vipul Singh
- Department of Psychiatry, Government Medical College, Kannauj, Uttar Pradesh, India
| | - Sharmila Sarkar
- Department of Psychiatry, Calcutta National Medical College, Kolkata, West Bengal, India
| | - Vikas Gaur
- Department of Psychiatry, Jaipur National University Institute for Medical Sciences and Research Centre, Jaipur, Rajasthan, India
| | - Sandeep Grover
- Department of Psychiatry, Post Graduate Institute of Medical Education and Research, Chandigarh, India
| | - Om Prakash Singh
- Department of Psychiatry, Midnapore Medical College, Midnapore, West Bengal, India
| |
|
19
|
Zhang M, Scandiffio J, Younus S, Jeyakumar T, Karsan I, Charow R, Salhia M, Wiljer D. The Adoption of AI in Mental Health Care-Perspectives From Mental Health Professionals: Qualitative Descriptive Study. JMIR Form Res 2023; 7:e47847. [PMID: 38060307 PMCID: PMC10739240 DOI: 10.2196/47847] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Revised: 10/08/2023] [Accepted: 10/11/2023] [Indexed: 12/08/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI) is transforming the mental health care environment. AI tools are increasingly accessed by clients and service users. Mental health professionals must be prepared not only to use AI but also to have conversations about it when delivering care. Despite the potential for AI to enable more efficient, reliable, and higher-quality care delivery, there is a persistent gap among mental health professionals in the adoption of AI. OBJECTIVE A needs assessment was conducted among mental health professionals to (1) understand the learning needs of the workforce and their attitudes toward AI and (2) inform the development of AI education curricula and knowledge translation products. METHODS A qualitative descriptive approach was taken to explore the needs of mental health professionals regarding their adoption of AI through semistructured interviews. To achieve maximum variation sampling, mental health professionals (eg, psychiatrists, mental health nurses, educators, scientists, and social workers) in various settings across Ontario (eg, urban and rural, public and private sector, and clinical and research) were recruited. RESULTS A total of 20 individuals were recruited. Participants included practitioners (9/20, 45% social workers and 1/20, 5% mental health nurses), educator scientists (5/20, 25% with dual roles as professors/lecturers and researchers), and practitioner scientists (3/20, 15% with dual roles as researchers and psychiatrists and 2/20, 10% with dual roles as researchers and mental health nurses). Four major themes emerged: (1) fostering practice change and building self-efficacy to integrate AI into patient care; (2) promoting system-level change to accelerate the adoption of AI in mental health; (3) addressing the importance of organizational readiness as a catalyst for AI adoption; and (4) ensuring that mental health professionals have the education, knowledge, and skills to harness AI in optimizing patient care.
CONCLUSIONS AI technologies are starting to emerge in mental health care. Although many digital tools, web-based services, and mobile apps are designed using AI algorithms, mental health professionals have generally been slower in the adoption of AI. As indicated by this study's findings, the implications are 3-fold. At the individual level, digital professionals must see the value in digitally compassionate tools that retain a humanistic approach to care. For mental health professionals, resistance toward AI adoption must be acknowledged through educational initiatives to raise awareness about the relevance, practicality, and benefits of AI. At the organizational level, digital professionals and leaders must collaborate on governance and funding structures to promote employee buy-in. At the societal level, digital and mental health professionals should collaborate in the creation of formal AI training programs specific to mental health to address knowledge gaps. This study promotes the design of relevant and sustainable education programs to support the adoption of AI within the mental health care sphere.
Affiliation(s)
| | | | | | - Tharshini Jeyakumar
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
| | | | - Rebecca Charow
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
| | - Mohammad Salhia
- Rotman School of Management, University of Toronto, Toronto, ON, Canada
| | - David Wiljer
- University Health Network, Toronto, ON, Canada
- Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
- Department of Medicine, University of Toronto, Toronto, ON, Canada
| |
|
20
|
Pozuelo JR, Moffett BD, Davis M, Stein A, Cohen H, Craske MG, Maritze M, Makhubela P, Nabulumba C, Sikoti D, Kahn K, Sodi T, van Heerden A, O'Mahen HA. User-Centered Design of a Gamified Mental Health App for Adolescents in Sub-Saharan Africa: Multicycle Usability Testing Study. JMIR Form Res 2023; 7:e51423. [PMID: 38032691 PMCID: PMC10722378 DOI: 10.2196/51423] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/29/2023] [Accepted: 10/30/2023] [Indexed: 12/01/2023] Open
Abstract
BACKGROUND There is an urgent need for scalable psychological treatments to address adolescent depression in low-resource settings. Digital mental health interventions have many potential advantages, but few have been specifically designed for or rigorously evaluated with adolescents in sub-Saharan Africa. OBJECTIVE This study had 2 main objectives. The first was to describe the user-centered development of a smartphone app that delivers behavioral activation (BA) to treat depression among adolescents in rural South Africa and Uganda. The second was to summarize the findings from multicycle usability testing. METHODS An iterative user-centered agile design approach was used to co-design the app to ensure that it was engaging, culturally relevant, and usable for the target populations. An array of qualitative methods, including focus group discussions, in-depth individual interviews, participatory workshops, usability testing, and extensive expert consultation, was used to iteratively refine the app throughout each phase of development. RESULTS A total of 160 adolescents from rural South Africa and Uganda were involved in the development process. The app was built to be consistent with the principles of BA and supported by brief weekly phone calls from peer mentors who would help users overcome barriers to engagement. Drawing on the findings of the formative work, we applied a narrative game format to develop the Kuamsha app. This approach taught the principles of BA using storytelling techniques and game design elements. The stories were developed collaboratively with adolescents from the study sites and included decision points that allowed users to shape the narrative, character personalization, in-app points, and notifications. Each story consists of 6 modules ("episodes") played in sequential order, and each covers different BA skills. 
Between modules, users were encouraged to work on weekly activities and report on their progress and mood as they completed these activities. The results of the multicycle usability testing showed that the Kuamsha app was acceptable in terms of usability and engagement. CONCLUSIONS The Kuamsha app uniquely delivered BA for adolescent depression via an interactive narrative game format tailored to the South African and Ugandan contexts. Further studies are currently underway to examine the intervention's feasibility, acceptability, and efficacy in reducing depressive symptoms.
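The module structure described (six sequential episodes, each a story with decision points that branch the narrative) can be modelled with a small data structure. A minimal sketch in Python; all names, scenes, and choices below are hypothetical, not taken from the Kuamsha app:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionPoint:
    prompt: str
    options: dict          # maps a choice label to the id of the next scene

@dataclass
class Module:
    title: str             # one behavioral activation (BA) skill per episode
    scenes: dict = field(default_factory=dict)
    decisions: list = field(default_factory=list)

# Hypothetical first episode; story text and choices are invented for illustration.
episode1 = Module(
    title="Noticing activity and mood",
    scenes={
        "start": "Amara wakes up late and considers skipping school...",
        "walk": "She walks to school with a friend and notes her mood lifting.",
        "stay": "She stays home scrolling her phone and logs how she feels.",
    },
)
episode1.decisions.append(DecisionPoint(
    prompt="What should Amara do?",
    options={"Go for the walk": "walk", "Stay in bed": "stay"},
))

# Episodes are played in sequential order; completing one unlocks the next.
course = [episode1]  # five more episodes, each covering a different BA skill, would follow
print(course[0].decisions[0].options["Go for the walk"])
```

Character personalization, in-app points, and notifications would hang off the same structure, for example as fields on a per-user progress record.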
Affiliation(s)
- Julia R Pozuelo
- Department of Global Health and Social Medicine, Harvard Medical School, Harvard University, Boston, MA, United States
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
| | - Bianca D Moffett
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
| | | | - Alan Stein
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Africa Health Research Institute, KwaZulu Natal, South Africa
| | - Halley Cohen
- Lincoln College, University of Oxford, Oxford, United Kingdom
| | - Michelle G Craske
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, CA, United States
| | - Meriam Maritze
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
| | - Princess Makhubela
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
| | | | | | - Kathleen Kahn
- MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt), School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Umeå Centre for Global Health Research, Division of Epidemiology and Global Health, Department of Public Health and Clinical Medicine, Umeå University, Umeå, Sweden
| | - Tholene Sodi
- SAMRC-DSI/NRF-UL SARChI Research Chair in Mental Health and Society, University of Limpopo, Limpopo, South Africa
| | - Alastair van Heerden
- Center for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- SAMRC/Wits Developmental Pathways for Health Research Unit, Department of Paediatrics, School of Clinical Medicine, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
| | - Heather A O'Mahen
- Mood Disorders Centre, Department of Psychology, University of Exeter, Exeter, United Kingdom
| |
|
21
|
Wilhelmy S, Giupponi G, Groß D, Eisendle K, Conca A. A shift in psychiatry through AI? Ethical challenges. Ann Gen Psychiatry 2023; 22:43. [PMID: 37919759 PMCID: PMC10623776 DOI: 10.1186/s12991-023-00476-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Accepted: 10/24/2023] [Indexed: 11/04/2023] Open
Abstract
The digital transformation has made its way into many areas of society, including medicine. While AI-based systems are widespread in medical disciplines, their use in psychiatry is progressing more slowly. However, they promise to revolutionize psychiatric practice in terms of prevention options, diagnostics, or even therapy. Psychiatry is in the midst of this digital transformation, so the question is no longer "whether" to use technology, but "how" we can use it to achieve goals of progress or improvement. The aim of this article is to argue that this revolution brings not only new opportunities but also new ethical challenges for psychiatry, especially with regard to safety, responsibility, autonomy, or transparency. As an example, the relationship between doctor and patient in psychiatry will be addressed, in which digitization is also leading to ethically relevant changes. Ethical reflection on the use of AI systems offers the opportunity to accompany these changes carefully in order to take advantage of the benefits that this change brings. The focus should therefore always be on balancing what is technically possible with what is ethically necessary.
Affiliation(s)
- Saskia Wilhelmy
- Institute for History, Theory and Ethics in Medicine, University Hospital, RWTH Aachen University, Wendlingweg 2, 52074, Aachen, Germany.
| | - Giancarlo Giupponi
- Academic Teaching Department of Psychiatry, Central Hospital, Sanitary Agency of South Tyrol, Via Lorenz Böhler 5, 39100, Bolzano, Italy
| | - Dominik Groß
- Institute for History, Theory and Ethics in Medicine, University Hospital, RWTH Aachen University, Wendlingweg 2, 52074, Aachen, Germany
| | - Klaus Eisendle
- Institute of General Practice and Public Health, Provincial College for Health Professions Claudiana, Lorenz-Böhler-Straße 13, 39100, Bolzano, Italy
| | - Andreas Conca
- Academic Teaching Department of Psychiatry, Central Hospital, Sanitary Agency of South Tyrol, Via Lorenz Böhler 5, 39100, Bolzano, Italy
| |
|
22
|
Alanzi T, Alotaibi R, Alajmi R, Bukhamsin Z, Fadaq K, AlGhamdi N, Bu Khamsin N, Alzahrani L, Abdullah R, Alsayer R, Al Muarfaj AM, Alanzi N. Barriers and Facilitators of Artificial Intelligence in Family Medicine: An Empirical Study With Physicians in Saudi Arabia. Cureus 2023; 15:e49419. [PMID: 38149160 PMCID: PMC10750222 DOI: 10.7759/cureus.49419] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/23/2023] [Indexed: 12/28/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI) is a novel technology that has been widely acknowledged for its potential to improve process efficiency across industries. However, because of its novelty, its barriers and facilitators in healthcare are not completely understood. STUDY PURPOSE The purpose of this study is to explore the intricate landscape of AI use in family medicine, aiming to uncover the factors that either hinder or enable its successful adoption. METHODS A cross-sectional survey design was adopted in this study. The questionnaire included 10 factors (performance expectancy, effort expectancy, social influence, facilitating conditions, behavioral intention, trust, perceived privacy risk, personal innovativeness, ethical concerns, and facilitators) affecting the acceptance of AI. A total of 157 family physicians participated in the online survey. RESULTS Effort expectancy (μ = 3.85) and facilitating conditions (μ = 3.77) were identified as strong influencing factors. Access to data (μ = 4.33), increased computing power (μ = 3.92), and telemedicine (μ = 3.78) were identified as major facilitators; regulatory support (μ = 2.29) and interoperability standards (μ = 2.71) were identified as barriers, along with privacy and ethical concerns. Younger individuals tended to have more positive attitudes and expectations toward AI-enabled assistants than older participants (p < .05). Perceived privacy risk was negatively correlated with all other factors. CONCLUSION Although there are various barriers and concerns regarding the use of AI in healthcare, the preference for AI use in healthcare, especially family medicine, is increasing.
Affiliation(s)
- Turki Alanzi
- Department of Health Information Management and Technology, College of Public Health, Imam Abdulrahman Bin Faisal University, Dammam, SAU
| | - Raghad Alotaibi
- Department of Family Medicine, King Fahad Medical City, Riyadh, SAU
| | - Rahaf Alajmi
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
| | - Zainab Bukhamsin
- College of Clinical Pharmacy, Imam Abdulrahman Bin Faisal University, Dammam, SAU
| | - Khadija Fadaq
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
| | - Nouf AlGhamdi
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
| | | | | | - Ruya Abdullah
- Faculty of Medicine, Ibn Sina National College, Jeddah, SAU
| | - Razan Alsayer
- College of Medicine, Northern Border University, Arar, SAU
| | - Afrah M Al Muarfaj
- Department of Health Affairs, General Directorate of Health Affairs in Assir Region, Ministry of Health, Abha, SAU
| | - Nouf Alanzi
- Department of Clinical Laboratory Sciences, College of Applied Medical Sciences, Jouf University, Sakakah, SAU
| |
|
23
|
Yu J, Shen N, Conway S, Hiebert M, Lai-Zhao B, McCann M, Mehta RR, Miranda M, Putterman C, Santisteban JA, Thomson N, Young C, Chiuccariello L, Hunter K, Hill S. A holistic approach to integrating patient, family, and lived experience voices in the development of the BrainHealth Databank: a digital learning health system to enable artificial intelligence in the clinic. Front Health Serv 2023; 3:1198195. [PMID: 37927443 PMCID: PMC10625404 DOI: 10.3389/frhs.2023.1198195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Accepted: 10/04/2023] [Indexed: 11/07/2023]
Abstract
Artificial intelligence, machine learning, and digital health innovations have tremendous potential to advance patient-centred, data-driven mental healthcare. To enable the clinical application of such innovations, the Krembil Centre for Neuroinformatics at the Centre for Addiction and Mental Health, Canada's largest mental health hospital, embarked on a journey to co-create a digital learning health system called the BrainHealth Databank (BHDB). Working with clinicians, scientists, and administrators alongside patients, families, and persons with lived experience (PFLE), this hospital-wide team has adopted a systems approach that integrates clinical and research data and practices to improve care and accelerate research. PFLE engagement was intentional and initiated at the conception stage of the BHDB to help ensure the initiative would achieve its goal of understanding the community's needs while improving patient care and experience. The BHDB team implemented an evolving, dynamic strategy to support continuous and active PFLE engagement in all aspects of the BHDB that has impacted, and will continue to impact, patients and families directly. We describe PFLE consultation, co-design, and partnership in various BHDB activities and projects. In all three examples, we discuss the factors contributing to successful PFLE engagement, share lessons learned, and highlight areas for growth and improvement. By sharing how the BHDB navigated and fostered PFLE engagement, we hope to motivate and inspire the health informatics community to collectively chart their paths in PFLE engagement to support advancements in digital health and artificial intelligence.
Affiliation(s)
- Joanna Yu: Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, ON, Canada; Health and Technology, Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Nelson Shen: Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, ON, Canada; Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada; AMS Healthcare, Toronto, ON, Canada
- Susan Conway: Centre for Addiction and Mental Health, Toronto, ON, Canada
- Melissa Hiebert: Centre for Addiction and Mental Health, Toronto, ON, Canada
- Benson Lai-Zhao: Centre for Addiction and Mental Health, Toronto, ON, Canada
- Miriam McCann: Centre for Addiction and Mental Health, Toronto, ON, Canada
- Rohan R. Mehta: Centre for Addiction and Mental Health, Toronto, ON, Canada
- Morena Miranda: Centre for Addiction and Mental Health, Toronto, ON, Canada
- Connie Putterman: Centre for Addiction and Mental Health, Toronto, ON, Canada; CanChild, Hamilton, ON, Canada; CHILD-BRIGHT Network, Montreal, QC, Canada; Kids Brain Health Network, Burnaby, BC, Canada; Province of Ontario Neurodevelopmental (POND) Network, Toronto, ON, Canada
- Jose Arturo Santisteban: Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Nicole Thomson: Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- Courtney Young: Centre for Addiction and Mental Health, Toronto, ON, Canada
- Kimberly Hunter: Centre for Addiction and Mental Health, Toronto, ON, Canada
- Sean Hill: Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, ON, Canada; Health and Technology, Vector Institute for Artificial Intelligence, Toronto, ON, Canada; Department of Psychiatry, University of Toronto, Toronto, ON, Canada
24
Nashwan AJ, Gharib S, Alhadidi M, El-Ashry AM, Alamgir A, Al-Hassan M, Khedr MA, Dawood S, Abufarsakh B. Harnessing Artificial Intelligence: Strategies for Mental Health Nurses in Optimizing Psychiatric Patient Care. Issues Ment Health Nurs 2023; 44:1020-1034. [PMID: 37850937] [DOI: 10.1080/01612840.2023.2263579]
Abstract
This narrative review explores the transformative impact of Artificial Intelligence (AI) on mental health nursing, particularly in enhancing psychiatric patient care. AI technologies present new strategies for early detection, risk assessment, and improving treatment adherence in mental health. They also facilitate remote patient monitoring, bridge geographical gaps, and support clinical decision-making. The evolution of virtual mental health assistants and AI-enhanced therapeutic interventions is also discussed. These technological advancements reshape nurse-patient interactions while ensuring personalized, efficient, and high-quality care. The review also addresses the ethical and responsible use of AI in mental health nursing, emphasizing patient privacy, data security, and the balance between human interaction and AI tools. As AI applications in mental health care continue to evolve, this review encourages continued innovation while advocating for responsible implementation, thereby optimally leveraging the potential of AI in mental health nursing.
Affiliation(s)
- Abdulqadir J Nashwan: Nursing Department, Hamad Medical Corporation, Doha, Qatar; Department of Public Health, College of Health Sciences, QU Health, Qatar University, Doha, Qatar
- Suzan Gharib: Nursing Department, Al-Khaldi Hospital, Amman, Jordan
- Majdi Alhadidi: Psychiatric & Mental Health Nursing, Faculty of Nursing, Al-Zaytoonah University of Jordan, Amman, Jordan
- Shaimaa Dawood: Faculty of Nursing, Alexandria University, Alexandria, Egypt
25
Jin KW, Li Q, Xie Y, Xiao G. Artificial intelligence in mental healthcare: an overview and future perspectives. Br J Radiol 2023; 96:20230213. [PMID: 37698582] [PMCID: PMC10546438] [DOI: 10.1259/bjr.20230213]
Abstract
Artificial intelligence is disrupting the field of mental healthcare through applications in computational psychiatry, which leverages quantitative techniques to inform our understanding, detection, and treatment of mental illnesses. This paper provides an overview of artificial intelligence technologies in modern mental healthcare and surveys recent advances made by researchers, focusing on the nascent field of digital psychiatry. We also consider the ethical implications of artificial intelligence playing a greater role in mental healthcare.
Affiliation(s)
- Qiwei Li: Department of Mathematical Sciences, The University of Texas at Dallas, Richardson, Texas, United States
26
Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, Aldairem A, Alrashed M, Bin Saleh K, Badreldin HA, Al Yami MS, Al Harbi S, Albekairy AM. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ 2023; 23:689. [PMID: 37740191] [PMCID: PMC10517477] [DOI: 10.1186/s12909-023-04698-z]
Abstract
INTRODUCTION Healthcare systems are complex and challenging for all stakeholders, but artificial intelligence (AI) has transformed various fields, including healthcare, with the potential to improve patient care and quality of life. Rapid AI advancements can revolutionize healthcare by integrating it into clinical practice. Reporting AI's role in clinical practice is crucial for successful implementation by equipping healthcare providers with essential knowledge and tools. RESEARCH SIGNIFICANCE This review article provides a comprehensive and up-to-date overview of the current state of AI in clinical practice, including its potential applications in disease diagnosis, treatment recommendations, and patient engagement. It also discusses the associated challenges, covering ethical and legal considerations and the need for human expertise. By doing so, it enhances understanding of AI's significance in healthcare and supports healthcare organizations in effectively adopting AI technologies. MATERIALS AND METHODS The current investigation analyzed the use of AI in the healthcare system with a comprehensive review of relevant indexed literature, such as PubMed/Medline, Scopus, and EMBASE, with no time constraints but limited to articles published in English. The focused question explores the impact of applying AI in healthcare settings and the potential outcomes of this application. RESULTS Integrating AI into healthcare holds excellent potential for improving disease diagnosis, treatment selection, and clinical laboratory testing. AI tools can leverage large datasets and identify patterns to surpass human performance in several healthcare aspects. AI offers increased accuracy, reduced costs, and time savings while minimizing human errors. It can revolutionize personalized medicine, optimize medication dosages, enhance population health management, establish guidelines, provide virtual health assistants, support mental health care, improve patient education, and influence patient-physician trust. CONCLUSION AI can be used to diagnose diseases, develop personalized treatment plans, and assist clinicians with decision-making. Rather than simply automating tasks, AI is about developing technologies that can enhance patient care across healthcare settings. However, challenges related to data privacy, bias, and the need for human expertise must be addressed for the responsible and effective implementation of AI in healthcare.
Affiliation(s)
- Shuroug A Alowais: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Sahar S Alghamdi: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia; Department of Pharmaceutical Sciences, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Nada Alsuhebany: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Tariq Alqahtani: King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia; Department of Pharmaceutical Sciences, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Abdulrahman I Alshaya: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Sumaya N Almohareb: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Atheer Aldairem: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Mohammed Alrashed: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Khalid Bin Saleh: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Hisham A Badreldin: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Majed S Al Yami: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Shmeylan Al Harbi: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
- Abdulkareem M Albekairy: Department of Pharmacy Practice, College of Pharmacy, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, Riyadh, Saudi Arabia; Pharmaceutical Care Department, King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
27
Espejo G, Reiner W, Wenzinger M. Exploring the Role of Artificial Intelligence in Mental Healthcare: Progress, Pitfalls, and Promises. Cureus 2023; 15:e44748. [PMID: 37809254] [PMCID: PMC10556257] [DOI: 10.7759/cureus.44748]
Abstract
The rise of artificial intelligence (AI) heralds a significant revolution in healthcare, particularly in mental health. AI's potential spans diagnostic algorithms, data analysis from diverse sources, and real-time patient monitoring. It is essential for clinicians to remain informed about AI's progress and limitations. The inherent complexity of mental disorders, limited objective data, and retrospective studies pose challenges to the application of AI. Privacy concerns, bias, and the risk of AI replacing human care also loom. Regulatory oversight and physician involvement are needed for equitable AI implementation. AI integration and use in psychotherapy and other services are on the horizon. Patient trust, feasibility, clinical efficacy, and clinician acceptance are prerequisites. In the future, governing bodies must decide on AI ownership, governance, and integration approaches. While AI can enhance clinical decision-making and efficiency, it might also exacerbate moral dilemmas, autonomy loss, and issues regarding the scope of practice. Striking a balance between AI's strengths and limitations involves utilizing AI as a validated clinical supplement under medical supervision, necessitating active clinician involvement in AI research, ethics, and regulation. AI's trajectory must align with optimizing mental health treatment and upholding compassionate care.
Affiliation(s)
- Gemma Espejo: Psychiatry and Behavioral Sciences, University of California, Irvine School of Medicine, Irvine, USA
- Wade Reiner: Psychiatry, University of Washington, Seattle, USA
28
Hadar-Shoval D, Elyoseph Z, Lvovsky M. The plasticity of ChatGPT's mentalizing abilities: personalization for personality structures. Front Psychiatry 2023; 14:1234397. [PMID: 37720897] [PMCID: PMC10503434] [DOI: 10.3389/fpsyt.2023.1234397]
Abstract
This study evaluated the potential of ChatGPT, a large language model, to generate mentalizing-like abilities that are tailored to a specific personality structure and/or psychopathology. Mentalization is the ability to understand and interpret one's own and others' mental states, including thoughts, feelings, and intentions. Borderline Personality Disorder (BPD) and Schizoid Personality Disorder (SPD) are characterized by distinct patterns of emotional regulation: individuals with BPD tend to experience intense and unstable emotions, while individuals with SPD tend to experience flattened or detached emotions. Using ChatGPT's free version 23.3, we assessed the extent to which its responses akin to emotional awareness (EA) were tailored to the distinctive personality structures characteristic of BPD and SPD, employing the Levels of Emotional Awareness Scale (LEAS). ChatGPT was able to accurately describe the emotional reactions of individuals with BPD as more intense, complex, and rich than those of individuals with SPD. This finding suggests that ChatGPT can generate mentalizing-like responses consistent with a range of psychopathologies, in line with clinical and theoretical knowledge. However, the study also raises concerns that stigmas or biases related to mental diagnoses could undermine the validity and usefulness of chatbot-based clinical interventions. We emphasize the need for responsible development and deployment of chatbot-based interventions in mental health that considers diverse theoretical frameworks.
Affiliation(s)
- Dorit Hadar-Shoval: Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Zohar Elyoseph: Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel; Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom; Educational Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Maya Lvovsky: Educational Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
29
Elyoseph Z, Levkovich I. Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment. Front Psychiatry 2023; 14:1213141. [PMID: 37593450] [PMCID: PMC10427505] [DOI: 10.3389/fpsyt.2023.1213141]
Abstract
ChatGPT, an artificial intelligence language model developed by OpenAI, holds the potential for contributing to the field of mental health. Nevertheless, although ChatGPT theoretically shows promise, its clinical abilities in suicide prevention, a significant mental health concern, have yet to be demonstrated. To address this knowledge gap, this study aims to compare ChatGPT's assessments of mental health indicators to those of mental health professionals in a hypothetical case study that focuses on suicide risk assessment. Specifically, ChatGPT was asked to evaluate a text vignette describing a hypothetical patient with varying levels of perceived burdensomeness and thwarted belongingness. The ChatGPT assessments were compared to the norms of mental health professionals. The results indicated that ChatGPT rated the risk of suicide attempts lower than did the mental health professionals in all conditions. Furthermore, ChatGPT rated mental resilience lower than the norms in most conditions. These results imply that gatekeepers, patients or even mental health professionals who rely on ChatGPT for evaluating suicidal risk or as a complementary tool to improve decision-making may receive an inaccurate assessment that underestimates the actual suicide risk.
Affiliation(s)
- Zohar Elyoseph: Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel; Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
- Inbar Levkovich: Faculty of Graduate Studies, Oranim Academic College, Kiryat Tiv'on, Israel
30
Levis M, Levy J, Dufort V, Russ CJ, Shiner B. Dynamic suicide topic modelling: Deriving population-specific, psychosocial and time-sensitive suicide risk variables from Electronic Health Record psychotherapy notes. Clin Psychol Psychother 2023; 30:795-810. [PMID: 36797651] [PMCID: PMC11172400] [DOI: 10.1002/cpp.2842]
Abstract
In the machine learning subfield of natural language processing, a topic model is an unsupervised method used to uncover abstract topics within a corpus of text. Dynamic topic modelling (DTM) captures change in these topics over time. This retrospective study deploys DTM on a corpus of electronic health record psychotherapy notes to examine whether it helps distinguish closely matched patients who did and did not die by suicide. The cohort consists of United States Department of Veterans Affairs (VA) patients diagnosed with Posttraumatic Stress Disorder (PTSD) between 2004 and 2013. Each case (a patient who died by suicide during the year following diagnosis) was matched with five controls (patients who remained alive) who shared psychotherapists and had similar suicide risk based on the VA's suicide prediction algorithm. The cohort was restricted to patients who received psychotherapy for 9+ months after the initial PTSD diagnosis (cases = 77; controls = 362). For cases, psychotherapy notes from diagnosis until death were examined; for controls, notes from diagnosis until the matched case's death date were examined. A Python-based DTM algorithm was utilized. The derived topics identified population-specific themes, including PTSD, psychotherapy, medication, communication and relationships. Control topics changed significantly more over time than case topics, and topic differences highlighted engagement, expressivity and therapeutic alliance. This study strengthens the groundwork for deriving population-specific, psychosocial and time-sensitive suicide risk variables.
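The abstract describes fitting a dynamic topic model over time-binned psychotherapy notes and quantifying how much topics change over time. As a rough, hypothetical sketch (not the authors' actual Python pipeline, whose details are not given here), one can approximate the idea by fitting LDA per time slice over a shared vocabulary and measuring drift between consecutive slices; the note text and parameters below are invented for illustration:

```python
# Simplified sketch of dynamic topic analysis on clinical-style notes.
# True DTM (Blei & Lafferty) chains topics across time slices; this toy
# version fits an independent LDA per slice over a shared vocabulary and
# measures how far each slice's topics drift from the previous slice.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

slices = {
    "2004-2006": ["nightmares and flashbacks discussed in session",
                  "patient reports avoidance of crowds and hypervigilance"],
    "2007-2009": ["medication adjustment reviewed with prescriber",
                  "sleep improved after dose change, fewer nightmares"],
    "2010-2013": ["strong therapeutic alliance, patient engaged in session",
                  "patient communicates openly about relationships"],
}

# Shared vocabulary so topic-word vectors are comparable across slices.
vectorizer = CountVectorizer(stop_words="english")
vectorizer.fit([note for notes in slices.values() for note in notes])

def slice_topics(notes, n_topics=2, seed=0):
    X = vectorizer.transform(notes)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    lda.fit(X)
    # Normalize rows into topic-word probability distributions.
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

def drift(prev, curr):
    # Mean total-variation distance between each previous topic and its
    # closest topic in the current slice (0 = identical, 1 = disjoint).
    dists = [min(0.5 * np.abs(p - q).sum() for q in curr) for p in prev]
    return float(np.mean(dists))

topics = [slice_topics(notes) for notes in slices.values()]
labels = list(slices)
for (a, b), (p, c) in zip(zip(labels, labels[1:]), zip(topics, topics[1:])):
    print(f"{a} -> {b}: topic drift = {drift(p, c):.3f}")
```

In the study's design, a statistic like this drift score would be computed separately for case and control note streams and then compared between the matched groups.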
Affiliation(s)
- Maxwell Levis: White River Junction VA Medical Center, Hartford, Vermont, USA; Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Joshua Levy: Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Vincent Dufort: White River Junction VA Medical Center, Hartford, Vermont, USA
- Carey J. Russ: White River Junction VA Medical Center, Hartford, Vermont, USA; Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Brian Shiner: White River Junction VA Medical Center, Hartford, Vermont, USA; Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA; National Center for PTSD Executive Division, Hartford, Vermont, USA
31
周 德, 金 益, 陈 瑛. [The application scenarios study on the intervention of cognitive decline in elderly population using metaverse technology]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2023; 40:573-581. [PMID: 37380399] [PMCID: PMC10307614] [DOI: 10.7507/1001-5515.202208092]
Abstract
China is facing the peak of an ageing population, and demand for intelligent healthcare services for the elderly is increasing. The metaverse, as a new internet social communication space, has shown great potential for application. This paper focuses on the application of the metaverse in medicine to the intervention of cognitive decline in the elderly population. The problems in assessing and intervening in cognitive decline in the elderly were analyzed, and the basic data required to construct the metaverse in medicine were introduced. Moreover, it is demonstrated that elderly users can conduct self-monitoring and experience immersive self-healing and healthcare through metaverse-in-medicine technology. Furthermore, we propose that the metaverse in medicine has clear advantages in prediction and diagnosis, prevention and rehabilitation, and in assisting patients with cognitive decline; risks of its application are pointed out as well. Metaverse-in-medicine technology addresses the problem of non-face-to-face social communication for elderly users, which may help to reconstruct the social medical system and service mode for the elderly population.
Affiliation(s)
- 德富 周: Suzhou Vocational University, Jiangsu Province Support Software Engineering R&D Center for Modern Information Technology Application in Enterprise, Suzhou, Jiangsu 215104, P. R. China
- 益 金: Suzhou Vocational University, Jiangsu Province Support Software Engineering R&D Center for Modern Information Technology Application in Enterprise, Suzhou, Jiangsu 215104, P. R. China
- 瑛 陈: Suzhou Vocational University, Jiangsu Province Support Software Engineering R&D Center for Modern Information Technology Application in Enterprise, Suzhou, Jiangsu 215104, P. R. China
32
Elyoseph Z, Hadar-Shoval D, Asraf K, Lvovsky M. ChatGPT outperforms humans in emotional awareness evaluations. Front Psychol 2023; 14:1199058. [PMID: 37303897] [PMCID: PMC10254409] [DOI: 10.3389/fpsyg.2023.1199058]
Abstract
The artificial intelligence chatbot, ChatGPT, has gained widespread attention for its ability to perform natural language processing tasks and has the fastest-growing user base in history. Although ChatGPT has successfully generated theoretical information in multiple fields, its ability to identify and describe emotions is still unknown. Emotional awareness (EA), the ability to conceptualize one's own and others' emotions, is considered a transdiagnostic mechanism for psychopathology. This study utilized the Levels of Emotional Awareness Scale (LEAS) as an objective, performance-based test to analyze ChatGPT's responses to twenty scenarios and compared its EA performance with that of the general population norms, as reported by a previous study. A second examination was performed one month later to measure EA improvement over time. Finally, two independent licensed psychologists evaluated the fit-to-context of ChatGPT's EA responses. In the first examination, ChatGPT demonstrated significantly higher performance than the general population on all the LEAS scales (Z score = 2.84). In the second examination, ChatGPT's performance significantly improved, almost reaching the maximum possible LEAS score (Z score = 4.26). Its accuracy levels were also extremely high (9.7/10). The study demonstrated that ChatGPT can generate appropriate EA responses, and that its performance may improve significantly over time. The study has theoretical and clinical implications, as ChatGPT can be used as part of cognitive training for clinical populations with EA impairments. In addition, ChatGPT's EA-like abilities may facilitate psychiatric diagnosis and assessment and be used to enhance emotional language. Further research is warranted to better understand the potential benefits and risks of ChatGPT and refine it to promote mental health.
Affiliation(s)
- Zohar Elyoseph: Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel; Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, England
- Dorit Hadar-Shoval: Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Kfir Asraf: Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Maya Lvovsky: Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
33
Andrew J, Rudra M, Eunice J, Belfin RV. Artificial intelligence in adolescents mental health disorder diagnosis, prognosis, and treatment. Front Public Health 2023; 11:1110088. [PMID: 37064712] [PMCID: PMC10102508] [DOI: 10.3389/fpubh.2023.1110088]
Affiliation(s)
- J. Andrew (corresponding author): Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Madhuria Rudra: Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Jennifer Eunice: Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
- R. V. Belfin: BRIC, School of Medicine, University of North Carolina, Chapel Hill, NC, United States
34
Psychotic disorders as a framework for precision psychiatry. Nat Rev Neurol 2023; 19:221-234. [PMID: 36879033] [DOI: 10.1038/s41582-023-00779-1]
Abstract
People with psychotic disorders can show marked interindividual variations in the onset of illness, responses to treatment and relapse, but they receive broadly similar clinical care. Precision psychiatry is an approach that aims to stratify people with a given disorder according to different clinical outcomes and tailor treatment to their individual needs. At present, interindividual differences in outcomes of psychotic disorders are difficult to predict on the basis of clinical assessment alone. Therefore, current research in psychosis seeks to build models that predict outcomes by integrating clinical information with a range of biological measures. Here, we review recent progress in the application of precision psychiatry to psychotic disorders and consider the challenges associated with implementing this approach in clinical practice.
35
Morrow E, Zidaru T, Ross F, Mason C, Patel KD, Ream M, Stockley R. Artificial intelligence technologies and compassion in healthcare: A systematic scoping review. Front Psychol 2023; 13:971044. [PMID: 36733854] [PMCID: PMC9887144] [DOI: 10.3389/fpsyg.2022.971044]
Abstract
Background Advances in artificial intelligence (AI) technologies, together with the availability of big data in society, create uncertainties about how these developments will affect healthcare systems worldwide. Compassion is essential for high-quality healthcare, and research shows how prosocial caring behaviors benefit human health and societies. However, the possible association between AI technologies and compassion is underconceptualized and underexplored. Objectives The aim of this scoping review is to provide a comprehensive and balanced perspective on the emerging topic of AI technologies and compassion, to inform future research and practice. The review questions were: How is compassion discussed in relation to AI technologies in healthcare? How are AI technologies being used to enhance compassion in healthcare? What are the gaps in current knowledge and unexplored potential? What are the key areas where AI technologies could support compassion in healthcare? Materials and methods A systematic scoping review following the five steps of the Joanna Briggs Institute methodology. Presentation of the scoping review conforms with PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews). Eligibility criteria were defined according to three concept constructs (AI technologies, compassion, healthcare) developed from the literature and informed by medical subject headings (MeSH) and key words for the electronic searches. Sources of evidence were the Web of Science and PubMed databases, covering articles published in English 2011-2022. Articles were screened by title/abstract using inclusion/exclusion criteria. Data extracted (author, date of publication, type of article, aim/context of healthcare, key relevant findings, country) were charted using data tables. Thematic analysis used an inductive-deductive approach to generate code categories from the review questions and the data.
A multidisciplinary team assessed themes for resonance and relevance to research and practice. Results Searches identified 3,124 articles. A total of 197 were included after screening. The number of articles has increased over 10 years (2011, n = 1 to 2021, n = 47 and from Jan-Aug 2022 n = 35 articles). Overarching themes related to the review questions were: (1) Developments and debates (7 themes) Concerns about AI ethics, healthcare jobs, and loss of empathy; Human-centered design of AI technologies for healthcare; Optimistic speculation AI technologies will address care gaps; Interrogation of what it means to be human and to care; Recognition of future potential for patient monitoring, virtual proximity, and access to healthcare; Calls for curricula development and healthcare professional education; Implementation of AI applications to enhance health and wellbeing of the healthcare workforce. (2) How AI technologies enhance compassion (10 themes) Empathetic awareness; Empathetic response and relational behavior; Communication skills; Health coaching; Therapeutic interventions; Moral development learning; Clinical knowledge and clinical assessment; Healthcare quality assessment; Therapeutic bond and therapeutic alliance; Providing health information and advice. (3) Gaps in knowledge (4 themes) Educational effectiveness of AI-assisted learning; Patient diversity and AI technologies; Implementation of AI technologies in education and practice settings; Safety and clinical effectiveness of AI technologies. (4) Key areas for development (3 themes) Enriching education, learning and clinical practice; Extending healing spaces; Enhancing healing relationships. Conclusion There is an association between AI technologies and compassion in healthcare and interest in this association has grown internationally over the last decade. 
In a range of healthcare contexts, AI technologies are being used to enhance empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; and to provide health information and advice. The findings inform a reconceptualization of compassion as a human-AI system of intelligent caring comprising six elements: (1) Awareness of suffering (e.g., pain, distress, risk, disadvantage); (2) Understanding the suffering (significance, context, rights, responsibilities etc.); (3) Connecting with the suffering (e.g., verbal, physical, signs and symbols); (4) Making a judgment about the suffering (the need to act); (5) Responding with an intention to alleviate the suffering; (6) Attention to the effect and outcomes of the response. These elements can operate at an individual (human or machine) and collective systems level (healthcare organizations or systems) as a cyclical system to alleviate different types of suffering. New and novel approaches to human-AI intelligent caring could enrich education, learning, and clinical practice; extend healing spaces; and enhance healing relationships. Implications In a complex adaptive system such as healthcare, human-AI intelligent caring will need to be implemented, not as an ideology, but through strategic choices, incentives, regulation, professional education, and training, as well as through joined up thinking about human-AI intelligent caring. Research funders can encourage research and development into the topic of AI technologies and compassion as a system of human-AI intelligent caring. Educators, technologists, and health professionals can inform themselves about the system of human-AI intelligent caring.
Collapse
Affiliation(s)
| | - Teodor Zidaru
- Department of Anthropology, London School of Economics and Political Sciences, London, United Kingdom
| | - Fiona Ross
- Faculty of Health, Science, Social Care and Education, Kingston University London, London, United Kingdom
| | - Cindy Mason
- Artificial Intelligence Researcher (Independent), Palo Alto, CA, United States
| | | | - Melissa Ream
- Kent Surrey Sussex Academic Health Science Network (AHSN) and the National AHSN Network Artificial Intelligence (AI) Initiative, Surrey, United Kingdom
| | - Rich Stockley
- Head of Research and Engagement, Surrey Heartlands Health and Care Partnership, Surrey, United Kingdom
| |
Collapse
|
36
|
Cao XJ, Liu XQ. Artificial intelligence-assisted psychosis risk screening in adolescents: Practices and challenges. World J Psychiatry 2022; 12:1287-1297. [PMID: 36389087 PMCID: PMC9641379 DOI: 10.5498/wjp.v12.i10.1287] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 08/09/2022] [Accepted: 09/22/2022] [Indexed: 02/05/2023] Open
Abstract
Artificial intelligence-based technologies are gradually being applied to psychiatric research and practice. This paper reviews the primary literature concerning artificial intelligence-assisted psychosis risk screening in adolescents. In terms of the practice of psychosis risk screening, the application of two artificial intelligence-assisted screening methods, chatbots and large-scale social media data analysis, is summarized in detail. Regarding challenges, ethical issues constitute the first hurdle for psychosis risk screening through artificial intelligence, which must comply with the four biomedical ethical principles of respect for autonomy, nonmaleficence, beneficence and impartiality so that the development of artificial intelligence can meet the moral and ethical requirements of human beings. By reviewing the pertinent literature concerning current artificial intelligence-assisted adolescent psychosis risk screening, we propose that, assuming they meet ethical requirements, there are three directions worth considering in the future development of artificial intelligence-assisted psychosis risk screening in adolescents: nonperceptual real-time artificial intelligence-assisted screening, further reducing the cost of artificial intelligence-assisted screening, and improving the ease of use of artificial intelligence-assisted screening techniques and tools.
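As a loose illustration of the large-scale social media screening summarized in this abstract — not the authors' actual method, and far simpler than any clinically validated tool — a keyword-weighted scorer over posts might be sketched as follows (the marker lexicon, weights, and threshold are entirely hypothetical):

```python
from collections import Counter

# Hypothetical marker lexicon; real screening tools rely on validated
# clinical instruments and trained language models, not keyword lists.
RISK_MARKERS = {"voices": 2.0, "watched": 1.5, "paranoid": 2.0,
                "unreal": 1.5, "hopeless": 1.0}

def risk_score(post: str) -> float:
    """Sum marker weights over the tokens of a lowercased post."""
    tokens = Counter(post.lower().split())
    return sum(weight * tokens[term] for term, weight in RISK_MARKERS.items())

def flag_for_followup(posts: list[str], threshold: float = 2.0) -> list[int]:
    """Return indices of posts whose score meets the threshold,
    i.e., candidates for human clinical follow-up."""
    return [i for i, post in enumerate(posts) if risk_score(post) >= threshold]
```

The point of the sketch is only the screen-then-refer pattern the review describes: automated scoring narrows a large stream of text down to cases a clinician reviews.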
Collapse
Affiliation(s)
- Xiao-Jie Cao
- Graduate School of Education, Peking University, Beijing 100871, China
| | - Xin-Qiao Liu
- School of Education, Tianjin University, Tianjin 300350, China
| |
Collapse
|
37
|
Giansanti D. The Regulation of Artificial Intelligence in Digital Radiology in the Scientific Literature: A Narrative Review of Reviews. Healthcare (Basel) 2022; 10:1824. [PMID: 36292270 PMCID: PMC9601605 DOI: 10.3390/healthcare10101824] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 09/14/2022] [Accepted: 09/20/2022] [Indexed: 09/05/2023] Open
Abstract
Today, there is growing interest in artificial intelligence (AI) in the field of digital radiology (DR), due in part to the push the pandemic has given this sector. Many studies are devoted to the challenges of integration in the health domain, one of the most important being regulation. This study conducted a narrative review of reviews on the international approach to the regulation of AI in DR. The design of the study was based on: (I) an overview on Scopus and PubMed; (II) a qualification and eligibility process based on a standardized checklist and a scoring system. The results highlighted an international approach to the regulation of these systems, classified as "software as a medical device (SaMD)", arranged into: ethical issues, the international regulatory framework, and bottlenecks of the legal issues. Several recommendations emerge from the analysis, all based on fundamental pillars: (a) the need to overcome a differentiated approach between countries; (b) the need for greater transparency and publicity of information, both for SaMDs as a whole and for the algorithms and test patterns; (c) the need for an interdisciplinary approach that avoids bias (including demographic bias) in algorithms and test data; (d) the need to reduce some limits/gaps in the scientific literature, which does not fully cover the international approach.
Collapse
|
38
|
Ott T, Dabrock P. Transparent human – (non-) transparent technology? The Janus-faced call for transparency in AI-based health care technologies. Front Genet 2022; 13:902960. [PMID: 36072654 PMCID: PMC9444183 DOI: 10.3389/fgene.2022.902960] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Accepted: 07/15/2022] [Indexed: 11/13/2022] Open
Abstract
The use of Artificial Intelligence and Big Data in health care opens up new opportunities for the measurement of the human. Their application aims not only at gathering more and better data points but also at doing so less invasively. With this change in health care towards its extension to almost all areas of life, and its increasing invisibility and opacity, new questions of transparency arise. While the complex human-machine interactions involved in deploying and using AI tend to become non-transparent, the use of these technologies makes the patient seemingly transparent. Papers on the ethical implementation of AI plead for transparency but neglect the factor of the “transparent patient” as intertwined with AI. Transparency in this regard appears to be Janus-faced: the precondition for receiving help - e.g., treatment advice regarding one's own health - is to become transparent to the digitized health care system; that is, for instance, to donate data and become visible to the AI and its operators. The paper reflects on this entanglement of transparent patients and (non-) transparent technology. It argues that transparency regarding both AI and humans is not an ethical principle per se but an infraethical concept. Further, it is not a sufficient basis for avoiding harm and violations of human dignity. Rather, transparency must be enriched by intelligibility, following Judith Butler’s use of the term. Intelligibility is understood as an epistemological presupposition for recognition and the ensuing humane treatment. Finally, the paper highlights ways to establish intelligibility in dealing with AI in health care ex ante, ex post, and continuously.
Collapse
|
39
|
Zarate D, Stavropoulos V, Ball M, de Sena Collier G, Jacobson NC. Exploring the digital footprint of depression: a PRISMA systematic literature review of the empirical evidence. BMC Psychiatry 2022; 22:421. [PMID: 35733121 PMCID: PMC9214685 DOI: 10.1186/s12888-022-04013-y] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Accepted: 05/17/2022] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND This PRISMA systematic literature review examined the use of digital data collection methods (including ecological momentary assessment [EMA], experience sampling method [ESM], digital biomarkers, passive sensing, mobile sensing, ambulatory assessment, and time-series analysis), with an emphasis on digital phenotyping (DP), to study depression. DP is defined as the use of digital data to profile health information objectively. AIMS Four distinct yet interrelated goals underpin this study: (a) to identify empirical research examining the use of DP to study depression; (b) to describe the different methods and technology employed; (c) to integrate the evidence regarding the efficacy of digital data in the examination, diagnosis, and monitoring of depression; and (d) to clarify DP definitions and digital mental health records terminology. RESULTS Overall, 118 studies were assessed as eligible. Of the terms employed, "EMA", "ESM", and "DP" were the most predominant. A variety of DP data sources were reported, including voice, language, keyboard typing kinematics, mobile phone calls and texts, geocoded activity, actigraphy sensor-related recordings (i.e., steps, sleep, circadian rhythm), and self-reported app information. Reviewed studies employed subjectively and objectively recorded digital data in combination with interviews and psychometric scales. CONCLUSIONS Findings suggest links between a person's digital records and depression. Future research recommendations include (a) deriving consensus regarding the DP definition and (b) expanding the literature to consider a person's broader contextual and developmental circumstances in relation to their digital data/records.
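To make the passive-sensing data sources listed in this abstract concrete, here is a minimal sketch of collapsing per-day sensor records into summary phenotype features (the field names and the chosen summaries are illustrative assumptions, not the review's protocol):

```python
from statistics import mean

def phenotype_features(days: list[dict]) -> dict:
    """Collapse per-day passive-sensing records (hypothetical keys:
    'steps', 'sleep_hours', 'outgoing_texts') into simple summary
    features of the kind the reviewed studies relate to depression."""
    return {
        "mean_steps": mean(d["steps"] for d in days),
        "mean_sleep_hours": mean(d["sleep_hours"] for d in days),
        "texting_rate": mean(d["outgoing_texts"] for d in days),
        # Crude variability proxy: range of sleep duration, since
        # disrupted sleep/circadian rhythm is a commonly reported signal.
        "sleep_range": max(d["sleep_hours"] for d in days)
                       - min(d["sleep_hours"] for d in days),
    }
```

In the studies reviewed, feature vectors like this are then combined with interviews and psychometric scales rather than interpreted on their own.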
Collapse
Affiliation(s)
- Daniel Zarate
- Institute for Health and Sport, Victoria University, Melbourne, Australia.
| | - Vasileios Stavropoulos
- Institute for Health and Sport, Victoria University, Melbourne, Australia; Department of Psychology, University of Athens, Athens, Greece
| | - Michelle Ball
- Institute for Health and Sport, Victoria University, Melbourne, Australia
| | - Gabriel de Sena Collier
- Institute for Health and Sport, Victoria University, Melbourne, Australia
| | - Nicholas C. Jacobson
- Center for Technology and Behavioral Health, Geisel School of Medicine, Dartmouth College, Hanover, USA; Department of Biomedical Data Science, Geisel School of Medicine, Dartmouth College, Hanover, USA; Department of Psychiatry, Geisel School of Medicine, Dartmouth College, Hanover, USA; Quantitative Biomedical Sciences Program, Dartmouth College, Hanover, USA
| |
Collapse
|
40
|
Liu H. Applications of Artificial Intelligence to Popularize Legal Knowledge and Publicize the Impact on Adolescents' Mental Health Status. Front Psychiatry 2022; 13:902456. [PMID: 35722558 PMCID: PMC9199859 DOI: 10.3389/fpsyt.2022.902456] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Accepted: 04/21/2022] [Indexed: 11/13/2022] Open
Abstract
Artificial intelligence (AI) advancements have radically altered human production and daily living. AI's rapid rise has supported the development of China's citizens, but at the same time its shortcomings have raised several concerns regarding regulations and laws. Current investigations of AI applied to legal knowledge have not shown consistent benefits in predicting adolescents' psychological status, performance, and related outcomes. The study's primary purpose is to examine the influence of AI on the legal profession and adolescent mental health using a novel cognitive fuzzy K-nearest neighbor (CF-KNN) approach. Initially, the legal education datasets are gathered and standardized in the pre-processing stage through normalization, removing unwanted noise and outliers. The normalized data are then transformed into numerical features and analyzed using a variational autoencoder (VAE). Multi-gradient ant colony optimization (MG-ACO) is applied to select a proper subset of the features. Tree C4.5 (T-C4.5) and fitness-based logistic regression analysis (F-LRA) techniques assess the adolescents' mental health conditions. Finally, the proposed method's performance is examined and compared with classical techniques to demonstrate its effectiveness. Findings are presented in chart form using the MATLAB tool.
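The paper's full CF-KNN pipeline (VAE features, MG-ACO selection) is beyond a short sketch, but the two steps that anchor it — min-max normalization followed by a nearest-neighbour vote — can be shown in plain form (a generic KNN, not the cognitive fuzzy variant the authors propose):

```python
import math
from collections import Counter

def normalize(rows: list[list[float]]) -> list[list[float]]:
    """Min-max normalize each column to [0, 1], as in the
    pre-processing step described above."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in rows]

def knn_predict(train_x, train_y, x, k=3):
    """Plain k-nearest-neighbour majority vote (Euclidean distance)."""
    nearest = sorted(range(len(train_x)),
                     key=lambda i: math.dist(train_x[i], x))[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

Normalization matters here because KNN's distance computation would otherwise be dominated by whichever raw feature has the largest scale.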
Collapse
Affiliation(s)
- Hao Liu
- School of Law, Chongqing University, Chongqing, China
| |
Collapse
|
41
|
Abstract
Human-computer interaction (HCI) has contributed to the design and development of efficient, user-friendly, cost-effective, and adaptable digital mental health solutions. However, HCI has not been well integrated into these technological developments, resulting in quality and safety concerns. Digital platforms and artificial intelligence (AI) have good potential to improve prediction, identification, coordination, and treatment by mental health care and suicide prevention services. AI drives web-based and smartphone apps; mostly it is used for self-help and guided cognitive behavioral therapy (CBT) for anxiety and depression. Interactive AI may help real-time screening and treatment in outdated, strained or lacking mental healthcare systems. The barriers to using AI in mental healthcare include accessibility, efficacy, reliability, usability, safety, security, ethics, suitable education and training, and socio-cultural adaptability. Apps, real-time machine learning algorithms, immersive technologies, and digital phenotyping are notable prospects. Generally, there is a need for faster and better human factors in combination with machine interaction and automation, higher levels of effectiveness evaluation, and the application of blended, hybrid or stepped care in an adjunct approach. HCI modeling may assist in the design and development of usable applications; help to effectively recognize, acknowledge, and address the inequities of mental health care and suicide prevention; and assist in the digital therapeutic alliance.
Collapse
|
42
|
Nilsen P, Svedberg P, Nygren J, Frideros M, Johansson J, Schueller S. Accelerating the impact of artificial intelligence in mental healthcare through implementation science. IMPLEMENTATION RESEARCH AND PRACTICE 2022; 3:26334895221112033. [PMID: 37091110 PMCID: PMC9924259 DOI: 10.1177/26334895221112033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Background The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps regarding how to implement and best use AI to add value to mental healthcare services, providers, and consumers. The aim of this paper is to identify challenges and opportunities for AI use in mental healthcare and to describe key insights from implementation science of potential relevance to understand and facilitate AI implementation in mental healthcare. Methods The paper is based on a selective review of articles concerning AI in mental healthcare and implementation science. Results Research in implementation science has established the importance of considering and planning for implementation from the start, the progression of implementation through different stages, and the appreciation of determinants at multiple levels. Determinant frameworks and implementation theories have been developed to understand and explain how different determinants impact on implementation. AI research should explore the relevance of these determinants for AI implementation. Implementation strategies to support AI implementation must address determinants specific to AI implementation in mental health. There might also be a need to develop new theoretical approaches or augment and recontextualize existing ones. Implementation outcomes may have to be adapted to be relevant in an AI implementation context. Conclusion Knowledge derived from implementation science could provide an important starting point for research on implementation of AI in mental healthcare. This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research. 
However, when taking advantage of the existing knowledge basis, it is important to also be explorative and study AI implementation in health and mental healthcare as a new phenomenon in its own right since implementing AI may differ in various ways from implementing evidence-based practices in terms of what implementation determinants, strategies, and outcomes are most relevant. Plain Language Summary: The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps concerning how to implement and best use AI to add value to mental healthcare services, providers, and consumers. This paper is based on a selective review of articles concerning AI in mental healthcare and implementation science, with the aim to identify challenges and opportunities for the use of AI in mental healthcare and describe key insights from implementation science of potential relevance to understand and facilitate AI implementation in mental healthcare. AI offers opportunities for identifying the patients most in need of care or the interventions that might be most appropriate for a given population or individual. AI also offers opportunities for supporting a more reliable diagnosis of psychiatric disorders and ongoing monitoring and tailoring during the course of treatment. However, AI implementation challenges exist at organizational/policy, individual, and technical levels, making it relevant to draw on implementation science knowledge for understanding and facilitating implementation of AI in mental healthcare. Knowledge derived from implementation science could provide an important starting point for research on AI implementation in mental healthcare. 
This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research.
Collapse
Affiliation(s)
| | - Petra Svedberg
- Halmstad University School of Health and Welfare, Halmstad University, Halmstad, Sweden
| | - Jens Nygren
- Halmstad University School of Health and Welfare, Halmstad University, Halmstad, Sweden
| | | | | | - Stephen Schueller
- Psychological Science, University of California Irvine, Irvine, CA, USA
| |
Collapse
|
43
|
Ćosić K, Popović S, Šarlija M, Kesedžić I, Gambiraža M, Dropuljić B, Mijić I, Henigsberg N, Jovanovic T. AI-Based Prediction and Prevention of Psychological and Behavioral Changes in Ex-COVID-19 Patients. Front Psychol 2021; 12:782866. [PMID: 35027902 PMCID: PMC8751545 DOI: 10.3389/fpsyg.2021.782866] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 12/02/2021] [Indexed: 12/30/2022] Open
Abstract
The COVID-19 pandemic has adverse consequences on human psychology and behavior long after initial recovery from the virus. These COVID-19 health sequelae, if undetected and left untreated, may lead to more enduring mental health problems, and put vulnerable individuals at risk of developing more serious psychopathologies. Therefore, an early distinction of such vulnerable individuals from those who are more resilient is important to undertake timely preventive interventions. The main aim of this article is to present a comprehensive multimodal conceptual approach for addressing these potential psychological and behavioral mental health changes using state-of-the-art tools and means of artificial intelligence (AI). Mental health COVID-19 recovery programs at post-COVID clinics based on AI prediction and prevention strategies may significantly improve the global mental health of ex-COVID-19 patients. Most COVID-19 recovery programs currently involve specialists such as pulmonologists, cardiologists, and neurologists, but there is a lack of psychiatrist care. The focus of this article is on new tools which can enhance the current limited psychiatrist resources and capabilities in coping with the upcoming challenges related to widespread mental health disorders. Patients affected by COVID-19 are more vulnerable to psychological and behavioral changes than non-COVID populations and therefore they deserve careful clinical psychological screening in post-COVID clinics. However, despite significant advances in research, the pace of progress in prevention of psychiatric disorders in these patients is still insufficient. Current approaches for the diagnosis of psychiatric disorders largely rely on clinical rating scales, as well as self-rating questionnaires that are inadequate for comprehensive assessment of ex-COVID-19 patients' susceptibility to mental health deterioration. 
These limitations can presumably be overcome by applying state-of-the-art AI-based tools to the diagnosis, prevention, and treatment of psychiatric disorders in the acute phase of the disease to prevent more chronic psychiatric consequences.
Collapse
Affiliation(s)
- Krešimir Ćosić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
| | - Siniša Popović
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
| | - Marko Šarlija
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
| | - Ivan Kesedžić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
| | - Mate Gambiraža
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
| | - Branimir Dropuljić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
| | - Igor Mijić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
| | - Neven Henigsberg
- Croatian Institute for Brain Research, University of Zagreb School of Medicine, Zagreb, Croatia
| | - Tanja Jovanovic
- Department of Psychiatry and Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, MI, United States
| |
Collapse
|
44
|
Roth CB, Papassotiropoulos A, Brühl AB, Lang UE, Huber CG. Psychiatry in the Digital Age: A Blessing or a Curse? INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:8302. [PMID: 34444055 PMCID: PMC8391902 DOI: 10.3390/ijerph18168302] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Revised: 07/31/2021] [Accepted: 08/03/2021] [Indexed: 12/23/2022]
Abstract
Social distancing and the shortage of healthcare professionals during the COVID-19 pandemic, the impact of population aging on the healthcare system, as well as the rapid pace of digital innovation are catalyzing the development and implementation of new technologies and digital services in psychiatry. Is this transformation a blessing or a curse for psychiatry? To answer this question, we conducted a literature review covering a broad range of new technologies and eHealth services, including telepsychiatry; computer-, internet-, and app-based cognitive behavioral therapy; virtual reality; digital applied games; a digital medicine system; omics; neuroimaging; machine learning; precision psychiatry; clinical decision support; electronic health records; physician charting; digital language translators; and online mental health resources for patients. We found that eHealth services provide effective, scalable, and cost-efficient options for the treatment of people with limited or no access to mental health care. This review highlights innovative technologies spearheading the way to more effective and safer treatments. We identified artificially intelligent tools that relieve physicians from routine tasks, allowing them to focus on collaborative doctor-patient relationships. The transformation of traditional clinics into digital ones is outlined, and the challenges associated with the successful deployment of digitalization in psychiatry are highlighted.
Collapse
Affiliation(s)
- Carl B. Roth
- University Psychiatric Clinics Basel, Clinic for Adults, University of Basel, Wilhelm Klein-Strasse 27, CH-4002 Basel, Switzerland; (A.P.); (A.B.B.); (U.E.L.); (C.G.H.)
| | - Andreas Papassotiropoulos
- University Psychiatric Clinics Basel, Clinic for Adults, University of Basel, Wilhelm Klein-Strasse 27, CH-4002 Basel, Switzerland; (A.P.); (A.B.B.); (U.E.L.); (C.G.H.)
- Transfaculty Research Platform Molecular and Cognitive Neurosciences, University of Basel, Birmannsgasse 8, CH-4055 Basel, Switzerland
- Division of Molecular Neuroscience, Department of Psychology, University of Basel, Birmannsgasse 8, CH-4055 Basel, Switzerland
- Biozentrum, Life Sciences Training Facility, University of Basel, Klingelbergstrasse 50/70, CH-4056 Basel, Switzerland
| | - Annette B. Brühl
- University Psychiatric Clinics Basel, Clinic for Adults, University of Basel, Wilhelm Klein-Strasse 27, CH-4002 Basel, Switzerland; (A.P.); (A.B.B.); (U.E.L.); (C.G.H.)
| | - Undine E. Lang
- University Psychiatric Clinics Basel, Clinic for Adults, University of Basel, Wilhelm Klein-Strasse 27, CH-4002 Basel, Switzerland; (A.P.); (A.B.B.); (U.E.L.); (C.G.H.)
| | - Christian G. Huber
- University Psychiatric Clinics Basel, Clinic for Adults, University of Basel, Wilhelm Klein-Strasse 27, CH-4002 Basel, Switzerland; (A.P.); (A.B.B.); (U.E.L.); (C.G.H.)
| |
Collapse
|
45
|
Resnik P, De Choudhury M, Musacchio Schafer K, Coppersmith G. Bibliometric Studies and the Discipline of Social Media Mental Health Research. Comment on "Machine Learning for Mental Health in Social Media: Bibliometric Study". J Med Internet Res 2021; 23:e28990. [PMID: 34137722 PMCID: PMC8277321 DOI: 10.2196/28990] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Accepted: 05/13/2021] [Indexed: 12/14/2022] Open
Affiliation(s)
- Philip Resnik
- Department of Linguistics and Institute for Advanced Computer Studies, University of Maryland, College Park, MD, United States
| | - Munmun De Choudhury
- School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, United States
| | | | | |
Collapse
|
46
|
Renn BN, Schurr M, Zaslavsky O, Pratap A. Artificial Intelligence: An Interprofessional Perspective on Implications for Geriatric Mental Health Research and Care. Front Psychiatry 2021; 12:734909. [PMID: 34867524 PMCID: PMC8634654 DOI: 10.3389/fpsyt.2021.734909] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Accepted: 10/07/2021] [Indexed: 11/26/2022] Open
Abstract
Artificial intelligence (AI) in healthcare aims to learn patterns in large multimodal datasets within and across individuals. These patterns may either improve understanding of current clinical status or predict a future outcome. AI holds the potential to revolutionize geriatric mental health care and research by supporting diagnosis, treatment, and clinical decision-making. However, much of this momentum is driven by data and computer scientists and engineers and runs the risk of being disconnected from pragmatic issues in clinical practice. This interprofessional perspective bridges the experiences of clinical scientists and data scientists. We provide a brief overview of AI, focusing on possible applications and challenges of AI-based approaches for research and clinical care in geriatric mental health. We suggest that future AI applications in geriatric mental health account for the pragmatics of clinical practice and the methodological differences between data science and clinical science, and address issues of ethics, privacy, and trust.
Collapse
Affiliation(s)
- Brenna N Renn
- Department of Psychology, University of Nevada, Las Vegas, NV, United States
| | - Matthew Schurr
- Department of Psychology, University of Nevada, Las Vegas, NV, United States
| | - Oleg Zaslavsky
- Department of Biobehavioral Nursing and Health Informatics, University of Washington, Seattle, WA, United States
| | - Abhishek Pratap
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, United States
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
| |
Collapse
|