1
Shang Z, Chauhan V, Devi K, Patil S. Artificial Intelligence, the Digital Surgeon: Unravelling Its Emerging Footprint in Healthcare - The Narrative Review. J Multidiscip Healthc 2024; 17:4011-4022. PMID: 39165254; PMCID: PMC11333562; DOI: 10.2147/jmdh.s482757.
Abstract
Background Artificial Intelligence (AI) holds transformative potential for the healthcare industry, offering innovative solutions for diagnosis, treatment planning, and improving patient outcomes. As AI continues to be integrated into healthcare systems, it promises advancements across various domains. This review explores the diverse applications of AI in healthcare, along with the challenges and limitations that need to be addressed. The aim is to provide a comprehensive overview of AI's impact on healthcare and to identify areas for further development and focus.

Main Applications The review discusses the broad range of AI applications in healthcare. In medical imaging and diagnostics, AI enhances the accuracy and efficiency of diagnostic processes, aiding in early disease detection. AI-powered clinical decision support systems assist healthcare professionals in patient management and decision-making. Predictive analytics using AI enables the prediction of patient outcomes and identification of potential health risks. AI-driven robotic systems have revolutionized surgical procedures, improving precision and outcomes. Virtual assistants and chatbots enhance patient interaction and support, providing timely information and assistance. In the pharmaceutical industry, AI accelerates drug discovery and development by identifying potential drug candidates and predicting their efficacy. Additionally, AI improves administrative efficiency and operational workflows in healthcare, streamlining processes and reducing costs. AI-powered remote monitoring and telehealth solutions expand access to healthcare, particularly in underserved areas.

Challenges and Limitations Despite the significant promise of AI in healthcare, several challenges persist. Ensuring the reliability and consistency of AI-driven outcomes is crucial. Privacy and security concerns must be navigated carefully, particularly in handling sensitive patient data. Ethical considerations, including bias and fairness in AI algorithms, need to be addressed to prevent unintended consequences. Overcoming these challenges is critical for the ethical and successful integration of AI in healthcare.

Conclusion The integration of AI into healthcare is advancing rapidly, offering substantial benefits in improving patient care and operational efficiency. However, addressing the associated challenges is essential to fully realize the transformative potential of AI in healthcare. Future efforts should focus on enhancing the reliability, transparency, and ethical standards of AI technologies to ensure they contribute positively to global health outcomes.
Affiliation(s)
- Zifang Shang
- Guangdong Engineering Technological Research Centre of Clinical Molecular Diagnosis and Antibody Drugs, Meizhou People’s Hospital (Huangtang Hospital), Meizhou Academy of Medical Sciences, Meizhou, People’s Republic of China
- Varun Chauhan
- Multi-Disciplinary Research Unit, Government Institute of Medical Sciences, Greater Noida, India
- Kirti Devi
- Department of Medicine, Government Institute of Medical Sciences, Greater Noida, India
- Sandip Patil
- Department of Haematology and Oncology, Shenzhen Children’s Hospital, Shenzhen, People’s Republic of China
2
Hammad Jaber Amin M, Abdelmonim Gasm Alseed Fadlalmoula GA, Awadalla Mohamed Elhassan Elmahi M, Hatim Khalid Alrabee N, Hemmeda L, Haydar Awad M, Mustafa Ahmed GE, Abbasher Hussien Mohamed Ahmed K. Knowledge, attitude, and practice of artificial intelligence applications in medicine among physicians in Sudan: a national cross-sectional survey. Ann Med Surg (Lond) 2024; 86:4416-4421. PMID: 39118720; PMCID: PMC11305753; DOI: 10.1097/ms9.0000000000002274.
Abstract
Background and aims Artificial intelligence (AI) has emerged as a rapidly developing tool within the medical landscape, aiding diagnosis and healthcare management globally. However, its integration within healthcare systems varies across regions. In Sudan, there is burgeoning interest in AI's potential applications within medicine. This study aims to evaluate the knowledge, attitudes, and practices regarding AI applications in medicine among physicians in Sudan.

Methods The authors conducted a cross-sectional analytical study using a web-based questionnaire covering demographic details and the knowledge, attitudes, and practice of AI, distributed through various e-mail lists and social media platforms. A sample of 825 physicians in Sudan of different ranks and specialties was selected using convenience (non-probability) sampling.

Results Of the 825 physicians, 666 (80.7%) had previous knowledge of AI. However, only 123 (14.9%) were taught about AI during medical school, and even fewer, 120 (14.5%), had AI-related lessons in their training programs. Regarding attitude, 675 (81.8%) agreed that AI is very important in medicine, and almost as many, 681 (82.6%), supported teaching AI in medical schools. In practice, 535 (64.8%) of doctors thought they should receive special training in using AI tools in healthcare, and 651 (78.9%) were interested in working with AI in the future. Across ranks, medical officers exhibited the highest proportion of knowledge and understanding of AI concepts (32.7%), followed by house officers (16.7%) (p=0.076); medical officers also demonstrated the most favorable attitude (31.6%), followed by house officers (17.5%) (p=0.229), and the highest level of practice (28.0%) among participants (p=0.129).

Conclusion While there is a positive attitude and some level of AI practice, a considerable gap in knowledge remains to be addressed.
Affiliation(s)
- Lina Hemmeda
- Faculty of Medicine, University of Khartoum, Khartoum
3
Li M, Xiong X, Xu B, Dickson C. Chinese Oncologists' Perspectives on Integrating AI into Clinical Practice: Cross-Sectional Survey Study. JMIR Form Res 2024; 8:e53918. PMID: 38838307; PMCID: PMC11187515; DOI: 10.2196/53918.
Abstract
BACKGROUND The rapid development of artificial intelligence (AI) has brought significant interest to its potential applications in oncology. Although AI-powered tools are already being implemented in some Chinese hospitals, their integration into clinical practice raises several concerns for Chinese oncologists.

OBJECTIVE This study aims to explore the concerns of Chinese oncologists regarding the integration of AI into clinical practice and to identify the factors influencing these concerns.

METHODS A total of 228 Chinese oncologists participated in a cross-sectional web-based survey from April to June 2023 in mainland China. The survey gauged their worries about AI with multiple-choice questions and evaluated their views on the statements "The impact of AI on the doctor-patient relationship" and "AI will replace doctors." The data were analyzed using descriptive statistics, and bivariate analyses were used to find correlations between the oncologists' backgrounds and their concerns.

RESULTS The study revealed that the most prominent concerns were the potential for AI to mislead diagnosis and treatment (163/228, 71.5%); an overreliance on AI (162/228, 71%); data and algorithm bias (123/228, 54%); issues with data security and patient privacy (123/228, 54%); and a lag in the adaptation of laws, regulations, and policies in keeping up with AI's development (115/228, 50.4%). Oncologists with a bachelor's degree expressed heightened concerns related to data and algorithm bias (34/49, 69%; P=.03) and the lagging nature of legal, regulatory, and policy issues (32/49, 65%; P=.046). Regarding AI's impact on doctor-patient relationships, 53.1% (121/228) saw a positive impact, whereas 35.5% (81/228) found it difficult to judge, 9.2% (21/228) feared increased disputes, and 2.2% (5/228) believed that there is no impact. Although sex differences were not significant (P=.08), perceptions varied: male oncologists tended to be more positive than female oncologists (74/135, 54.8% vs 47/93, 50%). Oncologists with a bachelor's degree (26/49, 53%; P=.03) and experienced clinicians (≥21 years; 28/56, 50%; P=.054) found it the hardest to judge. Those with IT experience were significantly more positive (25/35, 71%) than those without (96/193, 49.7%; P=.02). Opinions regarding the possibility of AI replacing doctors were diverse, with 23.2% (53/228) strongly disagreeing, 14% (32/228) disagreeing, 29.8% (68/228) being neutral, 16.2% (37/228) agreeing, and 16.7% (38/228) strongly agreeing. There were no significant correlations with demographic and professional factors (all P>.05).

CONCLUSIONS Addressing oncologists' concerns about AI requires collaborative efforts from policy makers, developers, health care professionals, and legal experts. Emphasizing transparency, human-centered design, bias mitigation, and education about AI's potential and limitations is crucial. Through close collaboration and a multidisciplinary strategy, AI can be effectively integrated into oncology, balancing benefits with ethical considerations and enhancing patient care.
Affiliation(s)
- Ming Li
- Department of Health Policy Management, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, United States
- XiaoMin Xiong
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing University School of Medicine, Chongqing, China
- Bo Xu
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing University School of Medicine, Chongqing, China
- Department of Biochemistry and Molecular Biology, Key Laboratory of Breast Cancer Prevention and Therapy, Ministry of Education, National Cancer Research Center, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Conan Dickson
- Department of Health Policy Management, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, United States
4
Lam BD, Dodge LE, Zerbey S, Robertson W, Rosovsky RP, Lake L, Datta S, Elavakanar P, Adamski A, Reyes N, Abe K, Vlachos IS, Zwicker JI, Patell R. The potential use of artificial intelligence for venous thromboembolism prophylaxis and management: clinician and healthcare informatician perspectives. Sci Rep 2024; 14:12010. PMID: 38796561; PMCID: PMC11127994; DOI: 10.1038/s41598-024-62535-9.
Abstract
Venous thromboembolism (VTE) is the leading cause of preventable death in hospitalized patients. Artificial intelligence (AI) and machine learning (ML) can support guidelines recommending an individualized approach to risk assessment and prophylaxis. We conducted electronic surveys asking clinicians and healthcare informaticians about their perspectives on AI/ML for VTE prevention and management. Of the 101 respondents to the informatician survey, most were 40 years or older, male, clinicians and data scientists, and had performed research on AI/ML. Of the 607 US-based respondents to the clinician survey, most were 40 years or younger, female, physicians, and had never used AI to inform clinical practice. Most informaticians agreed that AI/ML can be used to manage VTE (56.0%). Over one-third were concerned that clinicians would not use the technology (38.9%), but the majority of clinicians believed that AI/ML probably or definitely can help with VTE prevention (70.1%). The most common concern in both groups was a perceived lack of transparency (informaticians 54.4%; clinicians 25.4%). These two surveys revealed that key stakeholders are interested in AI/ML for VTE prevention and management, and identified potential barriers to address prior to implementation.
Affiliation(s)
- Barbara D Lam
- Division of Hematology, Department of Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA
- Division of Clinical Informatics, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, USA
- Laura E Dodge
- Department of Obstetrics and Gynecology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Sabrina Zerbey
- Division of Hematology, Department of Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA
- William Robertson
- Weber State University, Ogden, UT, USA
- National Blood Clot Alliance, Philadelphia, PA, USA
- Rachel P Rosovsky
- Division of Hematology, Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Siddhant Datta
- Division of Hospital Medicine, Department of Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Pavania Elavakanar
- Division of Hematology, Department of Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA
- Alys Adamski
- Division of Blood Disorders, National Center on Birth Defects and Developmental Disabilities, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Nimia Reyes
- Division of Blood Disorders, National Center on Birth Defects and Developmental Disabilities, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Karon Abe
- Division of Blood Disorders, National Center on Birth Defects and Developmental Disabilities, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Ioannis S Vlachos
- Department of Pathology, Cancer Research Institute, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Jeffrey I Zwicker
- Division of Hematology, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Rushad Patell
- Division of Hematology, Department of Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA
5
O'Connor C. Public perspectives on AI diagnosis of mental illness. Gen Psychiatr 2024; 37:e101370. PMID: 38800631; PMCID: PMC11116862; DOI: 10.1136/gpsych-2023-101370.
6
Daniyal M, Qureshi M, Marzo RR, Aljuaid M, Shahid D. Exploring clinical specialists' perspectives on the future role of AI: evaluating replacement perceptions, benefits, and drawbacks. BMC Health Serv Res 2024; 24:587. PMID: 38725039; PMCID: PMC11080164; DOI: 10.1186/s12913-024-10928-x.
Abstract
BACKGROUND OF STUDY Over the past few decades, the utilization of Artificial Intelligence (AI) has surged in popularity, and its application in the medical field is witnessing a global increase. Nevertheless, the implementation of AI-based healthcare solutions has been slow in developing nations like Pakistan. This study aims to assess the opinions of clinical specialists on future replacement by AI, its associated benefits, and its drawbacks, in the southern region of Pakistan.

MATERIAL AND METHODS A cross-sectional study was conducted among 140 clinical specialists (Surgery = 24, Pathology = 31, Radiology = 35, Gynecology = 35, Pediatrics = 17) from the neglected southern Punjab region of Pakistan. Associations were assessed using the χ2 test, and the nexus between different factors was examined by multinomial logistic regression.

RESULTS Out of 140 respondents, 34 (24.3%) believed hospitals were ready for AI, while 81 (57.9%) disagreed. Additionally, 42 (30.0%) were concerned about privacy violations, and 70 (50%) feared AI could lead to unemployment. Specialists with less than 6 years of experience were more likely to embrace AI (p = 0.0327, OR = 3.184, 95% CI: 0.262, 3.556), and those who firmly believed that AI will not replace their future tasks exhibited a lower likelihood of accepting AI (p = 0.015, OR = 0.235, 95% CI: 0.073, 0.758). Clinical specialists who perceived AI as a technology that encompasses both drawbacks and benefits demonstrated a higher likelihood of accepting its adoption (p = 0.084, OR = 2.969, 95% CI: 0.865, 5.187).

CONCLUSION Clinical specialists have embraced AI as the future of the medical field while acknowledging concerns about privacy and unemployment.
Affiliation(s)
- Muhammad Daniyal
- Department of Statistics, Faculty of Computing, Islamia University of Bahawalpur, Bahawalpur, Pakistan
- Moiz Qureshi
- Government Degree College, TandoJam, Hyderabad, Sindh, Pakistan
- Roy Rillera Marzo
- Faculty of Humanities and Health Sciences, Curtin University Malaysia, Miri, Sarawak, Malaysia
- Jeffrey Cheah School of Medicine and Health Sciences, Global Public Health, Monash University Malaysia, Subang Jaya, Selangor, Malaysia
- Mohammed Aljuaid
- Department of Health Administration, College of Business Administration, King Saud University, Riyadh, Saudi Arabia
- Duaa Shahid
- Hult International Business School, Cambridge, MA, 02141, USA
7
Gutierrez G, Stephenson C, Eadie J, Asadpour K, Alavi N. Examining the role of AI technology in online mental healthcare: opportunities, challenges, and implications, a mixed-methods review. Front Psychiatry 2024; 15:1356773. PMID: 38774435; PMCID: PMC11106393; DOI: 10.3389/fpsyt.2024.1356773.
Abstract
Introduction Online mental healthcare has gained significant attention due to its effectiveness, accessibility, and scalability in the management of mental health symptoms. Despite these advantages over traditional in-person formats, including higher availability and accessibility, issues with low treatment adherence and high dropout rates persist. Artificial intelligence (AI) technologies could help address these issues through powerful predictive models, language analysis, and intelligent dialogue with users; however, the study of these applications remains underexplored. The following mixed-methods review aimed to address this gap by synthesizing the available evidence on the applications of AI in online mental healthcare.

Method We searched the following databases: MEDLINE, CINAHL, PsycINFO, EMBASE, and Cochrane. This review included peer-reviewed randomized controlled trials, observational studies, non-randomized experimental studies, and case studies that were selected using the PRISMA guidelines. Data regarding pre- and post-intervention outcomes and AI applications were extracted and analyzed. A mixed-methods approach encompassing meta-analysis and network meta-analysis was used to analyze pre- and post-intervention outcomes, including main effects, depression, anxiety, and study dropouts. We applied the Cochrane risk of bias tool and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess the quality of the evidence.

Results Twenty-nine studies were included, revealing a variety of AI applications including triage, psychotherapy delivery, treatment monitoring, therapy engagement support, identification of effective therapy features, and prediction of treatment response, dropout, and adherence. AI-delivered self-guided interventions demonstrated medium to large effects on managing mental health symptoms, with dropout rates comparable to non-AI interventions. The quality of the evidence was low to very low.

Discussion The review supported the use of AI to enhance treatment response, adherence, and outcomes in online mental healthcare. Nevertheless, given the low quality of the available evidence, this study highlights the need for additional robust and high-powered studies in this emerging field.

Systematic review registration: https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=443575, identifier CRD42023443575.
Affiliation(s)
- Gilmar Gutierrez
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Callum Stephenson
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Jazmin Eadie
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Faculty of Education, Queen’s University, Kingston, ON, Canada
- Department of Psychology, Faculty of Arts and Sciences, Queen’s University, Kingston, ON, Canada
- Kimia Asadpour
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Nazanin Alavi
- Department of Psychiatry, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Centre for Neuroscience Studies, Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- OPTT Inc., Toronto, ON, Canada
8
Elyoseph Z, Refoua E, Asraf K, Lvovsky M, Shimoni Y, Hadar-Shoval D. Capacity of Generative AI to Interpret Human Emotions From Visual and Textual Data: Pilot Evaluation Study. JMIR Ment Health 2024; 11:e54369. PMID: 38319707; PMCID: PMC10879976; DOI: 10.2196/54369.
Abstract
BACKGROUND Mentalization, which is integral to human cognitive processes, pertains to the interpretation of one's own and others' mental states, including emotions, beliefs, and intentions. With the advent of artificial intelligence (AI) and the prominence of large language models in mental health applications, questions persist about their aptitude in emotional comprehension. The prior iteration of the large language model from OpenAI, ChatGPT-3.5, demonstrated an advanced capacity to interpret emotions from textual data, surpassing human benchmarks. Given the introduction of ChatGPT-4, with its enhanced visual processing capabilities, and considering Google Bard's existing visual functionalities, a rigorous assessment of their proficiency in visual mentalizing is warranted.

OBJECTIVE The aim of the research was to critically evaluate the capabilities of ChatGPT-4 and Google Bard with regard to their competence in discerning visual mentalizing indicators as contrasted with their textual-based mentalizing abilities.

METHODS The Reading the Mind in the Eyes Test developed by Baron-Cohen and colleagues was used to assess the models' proficiency in interpreting visual emotional indicators. Simultaneously, the Levels of Emotional Awareness Scale was used to evaluate the large language models' aptitude in textual mentalizing. Collating data from both tests provided a holistic view of the mentalizing capabilities of ChatGPT-4 and Bard.

RESULTS ChatGPT-4, displaying a pronounced ability in emotion recognition, secured scores of 26 and 27 in 2 distinct evaluations, significantly deviating from a random response paradigm (P<.001). These scores align with established benchmarks from the broader human demographic. Notably, ChatGPT-4 exhibited consistent responses, with no discernible biases pertaining to the sex of the model or the nature of the emotion. In contrast, Google Bard's performance aligned with random response patterns, securing scores of 10 and 12 and rendering further detailed analysis redundant. In the domain of textual analysis, both ChatGPT and Bard surpassed established benchmarks from the general population, with their performances being remarkably congruent.

CONCLUSIONS ChatGPT-4 proved its efficacy in the domain of visual mentalizing, aligning closely with human performance standards. Although both models displayed commendable acumen in textual emotion interpretation, Bard's capabilities in visual emotion interpretation necessitate further scrutiny and potential refinement. This study stresses the criticality of ethical AI development for emotional recognition, highlighting the need for inclusive data, collaboration with patients and mental health experts, and stringent governmental oversight to ensure transparency and protect patient privacy.
Affiliation(s)
- Zohar Elyoseph
- Department of Educational Psychology, The Center for Psychobiological Research, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Imperial College London, London, United Kingdom
- Elad Refoua
- Department of Psychology, Bar-Ilan University, Ramat Gan, Israel
- Kfir Asraf
- Department of Psychology, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Maya Lvovsky
- Department of Psychology, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Yoav Shimoni
- Boston Children's Hospital, Boston, MA, United States
- Dorit Hadar-Shoval
- Department of Psychology, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
9
Groh M, Badri O, Daneshjou R, Koochek A, Harris C, Soenksen LR, Doraiswamy PM, Picard R. Deep learning-aided decision support for diagnosis of skin disease across skin tones. Nat Med 2024; 30:573-583. PMID: 38317019; PMCID: PMC10878981; DOI: 10.1038/s41591-023-02728-3.
Abstract
Although advances in deep learning systems for image-based medical diagnosis demonstrate their potential to augment clinical decision-making, the effectiveness of physician-machine partnerships remains an open question, in part because physicians and algorithms are both susceptible to systematic errors, especially for diagnosis of underrepresented populations. Here we present results from a large-scale digital experiment involving board-certified dermatologists (n = 389) and primary-care physicians (n = 459) from 39 countries to evaluate the accuracy of diagnoses submitted by physicians in a store-and-forward teledermatology simulation. In this experiment, physicians were presented with 364 images spanning 46 skin diseases and asked to submit up to four differential diagnoses. Specialists and generalists achieved diagnostic accuracies of 38% and 19%, respectively, but both specialists and generalists were four percentage points less accurate for the diagnosis of images of dark skin as compared to light skin. Fair deep learning system decision support improved the diagnostic accuracy of both specialists and generalists by more than 33%, but exacerbated the gap in the diagnostic accuracy of generalists across skin tones. These results demonstrate that well-designed physician-machine partnerships can enhance the diagnostic accuracy of physicians, illustrating that success in improving overall diagnostic accuracy does not necessarily address bias.
Affiliation(s)
- Matthew Groh
- Northwestern University Kellogg School of Management, Evanston, IL, USA
- MIT Media Lab, Cambridge, MA, USA
- Omar Badri
- Northeast Dermatology Associates, Beverly, MA, USA
- Roxana Daneshjou
- Stanford Department of Biomedical Data Science, Stanford, CA, USA
- Stanford Department of Dermatology, Redwood City, CA, USA
- Luis R Soenksen
- Wyss Institute for Bioinspired Engineering at Harvard, Boston, MA, USA
- P Murali Doraiswamy
- MIT Media Lab, Cambridge, MA, USA
- Duke University School of Medicine, Durham, NC, USA
10
Turchioe MR, Hermann A, Benda NC. Recentering responsible and explainable artificial intelligence research on patients: implications in perinatal psychiatry. Front Psychiatry 2024; 14:1321265. PMID: 38304402; PMCID: PMC10832054; DOI: 10.3389/fpsyt.2023.1321265.
Abstract
In the setting of underdiagnosed and undertreated perinatal depression (PD), artificial intelligence (AI) solutions are poised to help predict and treat PD. In the near future, perinatal patients may interact with AI during clinical decision-making, in their patient portals, or through AI-powered chatbots delivering psychotherapy. The increase in potential AI applications has led to discussions regarding responsible AI (RAI) and explainable AI (XAI). Current discussions of RAI, however, are limited in their consideration of the patient as an active participant with AI. Therefore, we propose a patient-centered, rather than a patient-adjacent, approach to RAI and XAI that identifies autonomy, beneficence, justice, trust, privacy, and transparency as core concepts to uphold for health professionals and patients. We present empirical evidence that these principles are strongly valued by patients and suggest possible design solutions that uphold them, while acknowledging the pressing need for further research on their practical application.
Affiliation(s)
- Alison Hermann
- Department of Psychiatry, Weill Cornell Medicine, New York, NY, United States
- Natalie C. Benda
- School of Nursing, Columbia University School of Nursing, New York, NY, United States
11
Singh V, Sarkar S, Gaur V, Grover S, Singh OP. Clinical Practice Guidelines on using artificial intelligence and gadgets for mental health and well-being. Indian J Psychiatry 2024; 66:S414-S419. PMID: 38445270; PMCID: PMC10911327; DOI: 10.4103/indianjpsychiatry.indianjpsychiatry_926_23.
Affiliations:
- Vipul Singh: Department of Psychiatry, Government Medical College, Kannauj, Uttar Pradesh, India
- Sharmila Sarkar: Department of Psychiatry, Calcutta National Medical College, Kolkata, West Bengal, India
- Vikas Gaur: Department of Psychiatry, Jaipur National University Institute for Medical Sciences and Research Centre, Jaipur, Rajasthan, India
- Sandeep Grover: Department of Psychiatry, Post Graduate Institute of Medical Education and Research, Chandigarh, India
- Om Prakash Singh: Department of Psychiatry, Midnapore Medical College, Midnapore, West Bengal, India

12
Antweiler D, Albiez D, Bures D, Hosters B, Jovy-Klein F, Nickel K, Reibel T, Schramm J, Sander J, Antons D, Diehl A. [Use of AI-based applications by hospital staff: task profiles and qualification requirements]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2024; 67:66-75. [PMID: 38032516 PMCID: PMC10776476 DOI: 10.1007/s00103-023-03817-x]
Abstract
BACKGROUND Artificial intelligence (AI) is becoming increasingly important for the future development of hospitals. To unlock the large potential of AI, job profiles of hospital staff members need to be further developed in the direction of AI and digitization skills through targeted qualification measures. This affects both medical and non-medical processes along the entire value chain in hospitals. The aim of this paper is to provide an overview of the skills required to deal with smart technologies in a clinical context and to present measures for training employees. METHODS As part of the "SmartHospital.NRW" project in 2022, we conducted a literature review as well as interviews and workshops with experts. AI technologies and fields of application were identified. RESULTS Key findings include adapted and new task profiles, synergies and dependencies between individual task profiles, and the need for a comprehensive interdisciplinary and interprofessional exchange when using AI-based applications in hospitals. DISCUSSION Our article shows that hospitals need to promote digital health literacy skills for hospital staff members at an early stage and at the same time recruit technology- and AI-savvy staff. Interprofessional exchange formats and accompanying change management are essential for the use of AI in hospitals.
Affiliations:
- Dario Antweiler: Fraunhofer Institut für Intelligente Analyse und Informationssysteme IAIS, Abteilung Knowledge Discovery, Schloss Birlinghoven 1, 53757 Sankt Augustin, Germany
- Daniela Albiez: Fraunhofer Institut für Intelligente Analyse und Informationssysteme IAIS, Abteilung Adaptive Reflective Teams, Sankt Augustin, Germany
- Dominik Bures: Stabsstelle Digitale Transformation, Universitätsmedizin Essen, Essen, Germany
- Bernadette Hosters: Stabsstelle Entwicklung und Forschung Pflege, Universitätsmedizin Essen, Essen, Germany
- Florian Jovy-Klein: Institut für Technologie- und Innovationsmanagement, RWTH Aachen, Aachen, Germany
- Kilian Nickel: Fraunhofer Institut für Intelligente Analyse und Informationssysteme IAIS, Abteilung Adaptive Reflective Teams, Sankt Augustin, Germany
- Thomas Reibel: Institut für Technologie- und Innovationsmanagement, RWTH Aachen, Aachen, Germany
- Johanna Schramm: Stabsstelle Entwicklung und Forschung Pflege, Universitätsmedizin Essen, Essen, Germany
- Jil Sander: Stabsstelle Digitale Transformation, Universitätsmedizin Essen, Essen, Germany
- David Antons: Institut für Technologie- und Innovationsmanagement, RWTH Aachen, Aachen, Germany
- Anke Diehl: Stabsstelle Digitale Transformation, Universitätsmedizin Essen, Essen, Germany

13
Squires M, Tao X, Elangovan S, Gururajan R, Zhou X, Li Y, Acharya UR. Identifying predictive biomarkers for repetitive transcranial magnetic stimulation response in depression patients with explainability. Comput Methods Programs Biomed 2023; 242:107771. [PMID: 37717523 DOI: 10.1016/j.cmpb.2023.107771]
Abstract
Repetitive Transcranial Magnetic Stimulation (rTMS) is an evidence-based treatment for depression. However, the patterns of response to this treatment modality are inconsistent. Whilst many people see a significant reduction in the severity of their depression following rTMS treatment, some patients do not. To support and improve patient outcomes, recent work is exploring the possibility of using machine learning to predict rTMS treatment outcomes. Our proposed model is the first to combine functional magnetic resonance imaging (fMRI) connectivity with deep learning techniques to predict treatment outcomes before treatment starts. Furthermore, using Explainable AI (XAI) techniques, we identify potential biomarkers that may discriminate between rTMS responders and non-responders. Our experiments utilize 200 runs of repeated bootstrap sampling on two rTMS datasets, comparing our proposed feedforward deep neural network against existing methods on average accuracy, balanced accuracy, and F1-score on a held-out test set. The results show that our model outperforms existing methods with an average accuracy of 0.9423, balanced accuracy of 0.9423, and F1-score of 0.9461 in a sample of 61 patients. We found that functional connectivity measures between the Subgenual Anterior Cingulate Cortex and Central Opercular Cortex are a key determinant of rTMS treatment response. This knowledge provides psychiatrists with further information to explore the potential mechanisms of responses to rTMS treatment. Our developed prototype is ready to be deployed across large datasets in multiple centres and different countries.
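For readers less familiar with the evaluation metrics this abstract reports, the following is an illustrative sketch (not the authors' code): accuracy, balanced accuracy, and F1-score computed from scratch for binary responder/non-responder labels, plus a deliberately tiny version of the repeated-bootstrap idea on a toy held-out set.

```python
import random

# Illustrative sketch only, not the study's implementation: the three
# metrics reported in the abstract, computed for binary labels where
# 1 = rTMS responder and 0 = non-responder.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recall; unlike plain accuracy it is not inflated
    # when one class (e.g. responders) heavily outnumbers the other.
    recalls = []
    for cls in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == cls]
        recalls.append(sum(y_pred[i] == cls for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

def f1_score(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy held-out set: 6 responders, 2 non-responders.
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 1]

print(accuracy(y_true, y_pred))           # 0.75
print(balanced_accuracy(y_true, y_pred))  # (5/6 + 1/2) / 2, about 0.667
print(f1_score(y_true, y_pred))           # precision = recall = 5/6

# "Repeated bootstrap sampling" in miniature: resample the test set with
# replacement and look at the spread of a metric across resamples.
random.seed(0)
n = len(y_true)
boot = [
    accuracy([y_true[i] for i in idx], [y_pred[i] for i in idx])
    for idx in ([random.randrange(n) for _ in range(n)] for _ in range(200))
]
print(min(boot), max(boot))
```

Balanced accuracy matters here because responder/non-responder splits are rarely 50/50: on this toy set plain accuracy (0.75) overstates performance relative to balanced accuracy (about 0.67).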
Affiliations:
- Matthew Squires: School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, Australia
- Xiaohui Tao: School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, Australia
- Raj Gururajan: School of Business, University of Southern Queensland, Springfield, Australia
- Xujuan Zhou: School of Business, University of Southern Queensland, Springfield, Australia
- Yuefeng Li: School of Computer Science, Queensland University of Technology, Brisbane, Australia
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia

14
Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023; 338:116357. [PMID: 37949020 DOI: 10.1016/j.socscimed.2023.116357]
Abstract
INTRODUCTION Despite the proliferation of Artificial Intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public, and health professionals. METHODOLOGY A search for original research articles using qualitative, quantitative, and mixed methods published between 1 Jan 2001 and 24 Aug 2021 was conducted on six bibliographic databases. Data were extracted and classified into themes representing views on: (i) knowledge and familiarity of AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the human-AI relationship. RESULTS The final search identified 7,490 records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals generally had a positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout from having to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job losses due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinician involvement in the development of AI was emphasised.
To help successfully implement AI in health care, most participants envisioned that an investment in training and education campaigns was necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, there remained concerns about sharing data with insurance or technology companies. One key domain under this theme was the question of who should be held accountable in the case of adverse events arising from using AI. CONCLUSIONS While overall positivity persists in attitudes and preferences toward AI use in healthcare, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues to look at the translation of legislation and guidelines into practice to ensure fairness, accountability, transparency, and ethics in AI.
Affiliations:
- Vinh Vo: Centre for Health Economics, Monash University, Australia
- Gang Chen: Centre for Health Economics, Monash University, Australia
- Yves Saint James Aquino: Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Stacy M Carter: Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Quynh Nga Do: Department of Economics, Monash University, Australia
- Maame Esi Woode: Centre for Health Economics, Monash University, Australia; Monash Data Futures Research Institute, Australia

15
Wilhelmy S, Giupponi G, Groß D, Eisendle K, Conca A. A shift in psychiatry through AI? Ethical challenges. Ann Gen Psychiatry 2023; 22:43. [PMID: 37919759 PMCID: PMC10623776 DOI: 10.1186/s12991-023-00476-9]
Abstract
The digital transformation has made its way into many areas of society, including medicine. While AI-based systems are widespread in medical disciplines, their use in psychiatry is progressing more slowly. However, they promise to revolutionize psychiatric practice in terms of prevention options, diagnostics, or even therapy. Psychiatry is in the midst of this digital transformation, so the question is no longer "whether" to use technology, but "how" we can use it to achieve goals of progress or improvement. The aim of this article is to argue that this revolution brings not only new opportunities but also new ethical challenges for psychiatry, especially with regard to safety, responsibility, autonomy, or transparency. As an example, the relationship between doctor and patient in psychiatry will be addressed, in which digitization is also leading to ethically relevant changes. Ethical reflection on the use of AI systems offers the opportunity to accompany these changes carefully in order to take advantage of the benefits that this change brings. The focus should therefore always be on balancing what is technically possible with what is ethically necessary.
Affiliations:
- Saskia Wilhelmy: Institute for History, Theory and Ethics in Medicine, University Hospital, RWTH Aachen University, Wendlingweg 2, 5074 Aachen, Germany
- Giancarlo Giupponi: Academic Teaching Department of Psychiatry, Central Hospital, Sanitary Agency of South Tyrol, Via Lorenz Böhler 5, 39100 Bolzano, Italy
- Dominik Groß: Institute for History, Theory and Ethics in Medicine, University Hospital, RWTH Aachen University, Wendlingweg 2, 5074 Aachen, Germany
- Klaus Eisendle: Institute of General Practice and Public Health, Provincial College for Health Professions Claudiana, Lorenz-Böhler-Straße 13, 39100 Bolzano, Italy
- Andreas Conca: Academic Teaching Department of Psychiatry, Central Hospital, Sanitary Agency of South Tyrol, Via Lorenz Böhler 5, 39100 Bolzano, Italy

16
Chen Y, Wu Z, Wang P, Xie L, Yan M, Jiang M, Yang Z, Zheng J, Zhang J, Zhu J. Radiology Residents' Perceptions of Artificial Intelligence: Nationwide Cross-Sectional Survey Study. J Med Internet Res 2023; 25:e48249. [PMID: 37856181 PMCID: PMC10623237 DOI: 10.2196/48249]
Abstract
BACKGROUND Artificial intelligence (AI) is transforming various fields, with health care, especially diagnostic specialties such as radiology, being a key but controversial battleground. However, there is limited research systematically examining the response of "human intelligence" to AI. OBJECTIVE This study aims to comprehend radiologists' perceptions regarding AI, including their views on its potential to replace them, its usefulness, and their willingness to accept it. We examine the influence of various factors, encompassing demographic characteristics, working status, psychosocial aspects, personal experience, and contextual factors. METHODS Between December 1, 2020, and April 30, 2021, a cross-sectional survey was completed by 3666 radiology residents in China. We used multivariable logistic regression models to examine factors and associations, reporting odds ratios (ORs) and 95% CIs. RESULTS In summary, radiology residents generally hold a positive attitude toward AI, with 29.90% (1096/3666) agreeing that AI may reduce the demand for radiologists, 72.80% (2669/3666) believing AI improves disease diagnosis, and 78.18% (2866/3666) feeling that radiologists should embrace AI. Several associated factors, including age, gender, education, region, eye strain, working hours, time spent on medical images, resilience, burnout, AI experience, and perceptions of residency support and stress, significantly influence AI attitudes. For instance, burnout symptoms were associated with greater concerns about AI replacement (OR 1.89; P<.001), less favorable views on AI usefulness (OR 0.77; P=.005), and reduced willingness to use AI (OR 0.71; P<.001). Moreover, after adjusting for all other factors, perceived AI replacement (OR 0.81; P<.001) and AI usefulness (OR 5.97; P<.001) were shown to significantly impact the intention to use AI. CONCLUSIONS This study profiles radiology residents who are accepting of AI. 
Our comprehensive findings provide insights for a multidimensional approach to help physicians adapt to AI. Targeted policies, such as digital health care initiatives and medical education, can be developed accordingly.
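For readers unfamiliar with the odds ratios (ORs) quoted in this abstract, a minimal sketch of how an OR relates to group-level probabilities and to a logistic-regression coefficient (OR = exp(beta)). The group percentages below are illustrative assumptions, not the study's data.

```python
import math

# Minimal sketch with illustrative numbers (not the study's data): an odds
# ratio compares the odds of an outcome between two groups, and equals the
# exponential of the corresponding logistic-regression coefficient.

def odds(p):
    """Convert a probability to odds, p / (1 - p)."""
    return p / (1 - p)

def odds_ratio(p_exposed, p_unexposed):
    """Ratio of the odds in the exposed group to the unexposed group."""
    return odds(p_exposed) / odds(p_unexposed)

# Hypothetical example: suppose 40% of residents with burnout symptoms
# vs 26% without report AI-replacement concerns.
or_burnout = odds_ratio(0.40, 0.26)
print(round(or_burnout, 2))  # about 1.9, similar in size to the OR 1.89 above

# Equivalently, a fitted logistic-regression coefficient beta maps to an OR:
beta = math.log(1.89)
print(round(math.exp(beta), 2))  # 1.89
```

Note that an OR above 1 (e.g. 1.89 for burnout and replacement concerns) means higher odds of the outcome, while an OR below 1 (e.g. 0.71 for willingness to use AI) means lower odds, relative to the reference group.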
Affiliations:
- Yanhua Chen: Vanke School of Public Health, Tsinghua University, Beijing, China; School of Medicine, Tsinghua University, Beijing, China
- Ziye Wu: Vanke School of Public Health, Tsinghua University, Beijing, China
- Peicheng Wang: Vanke School of Public Health, Tsinghua University, Beijing, China; School of Medicine, Tsinghua University, Beijing, China
- Linbo Xie: Vanke School of Public Health, Tsinghua University, Beijing, China; School of Medicine, Tsinghua University, Beijing, China
- Mengsha Yan: Vanke School of Public Health, Tsinghua University, Beijing, China
- Maoqing Jiang: Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Zhenghan Yang: Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jianjun Zheng: Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jingfeng Zhang: Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jiming Zhu: Vanke School of Public Health, Tsinghua University, Beijing, China; Institute for Healthy China, Tsinghua University, Beijing, China

17
Sahin E. Are medical oncologists ready for the artificial intelligence revolution? Evaluation of the opinions, knowledge, and experiences of medical oncologists about artificial intelligence technologies. Med Oncol 2023; 40:327. [PMID: 37812310 DOI: 10.1007/s12032-023-02200-9]
Abstract
The use of artificial intelligence technologies (AIT) in medicine is increasing worldwide. This study aimed to evaluate the experiences, opinions, and future expectations of medical oncologists regarding artificial intelligence (AI). After reliability and validity analyses were carried out in a pilot study, the main online questionnaire was sent to the members of the "Turkish Society of Medical Oncology" mail group by an invitation e-mail, and the anonymized responses were analyzed. The median age of the 156 participants was 36 (34-43) years and half (51%) were male. Most (45%) were fellows; 46% were working in university hospitals, and 56% were seeing 20-40 patients a day. Medical oncologists' view of AIT was mostly positive (78%), although some (especially women) had doubts about the reliability of AI (44%) and the establishment of its ethical/legal basis (49%). Sixty-five percent of the participants had no or only superficial knowledge of AI, and more than half (55%) had never used AI-based applications in their academic or clinical work. Nevertheless, 80% of the participants believed that, unlike today, they would use AIT frequently in their future practice and that it would be beneficial. The most anticipated benefit (81%) was real-time information processing and real-time access to big data. Sixty-two percent believed that information about AI should be included in the education curriculum. The vast majority of respondents (79%) thought that AI would not completely replace medical oncologists in the future. Some differences were found in the perceptions and experiences of oncologists according to age, gender, title, and the number of patients examined per day. Overall, medical oncologists' opinion of AI was positive, but their level of knowledge and use was low; they expected to use it frequently in the future and felt they needed training.
Affiliations:
- Elif Sahin: Department of Medical Oncology, Kocaeli City Hospital, Tavsantepe mah., 41000 Izmit, Kocaeli, Turkey

18
Reading Turchioe M, Harkins S, Desai P, Kumar S, Kim J, Hermann A, Joly R, Zhang Y, Pathak J, Benda NC. Women's perspectives on the use of artificial intelligence (AI)-based technologies in mental healthcare. JAMIA Open 2023; 6:ooad048. [PMID: 37425486 PMCID: PMC10329494 DOI: 10.1093/jamiaopen/ooad048]
Abstract
This study aimed to evaluate women's attitudes towards artificial intelligence (AI)-based technologies used in mental health care. We conducted a cross-sectional, online survey of U.S. adults reporting female sex at birth focused on bioethical considerations for AI-based technologies in mental healthcare, stratifying by previous pregnancy. Survey respondents (n = 258) were open to AI-based technologies in mental healthcare but concerned about medical harm and inappropriate data sharing. They held clinicians, developers, healthcare systems, and the government responsible for harm. Most reported it was "very important" for them to understand AI output. More previously pregnant respondents reported being told AI played a small role in mental healthcare was "very important" versus those not previously pregnant (P = .03). We conclude that protections against harm, transparency around data use, preservation of the patient-clinician relationship, and patient comprehension of AI predictions may facilitate trust in AI-based technologies for mental healthcare among women.
Affiliations:
- Sarah Harkins: Columbia University School of Nursing, New York, New York, USA
- Pooja Desai: Department of Biomedical Informatics, Columbia University, New York, New York, USA
- Jessica Kim: Department of Population Health Sciences, Weill Cornell Medicine, New York, New York, USA
- Alison Hermann: Department of Psychiatry, Weill Cornell Medicine, New York, New York, USA
- Rochelle Joly: Department of Obstetrics and Gynecology, Weill Cornell Medicine, New York, New York, USA
- Yiye Zhang: Department of Population Health Sciences, Weill Cornell Medicine, New York, New York, USA
- Jyotishman Pathak: Department of Population Health Sciences, Weill Cornell Medicine, New York, New York, USA; Department of Psychiatry, Weill Cornell Medicine, New York, New York, USA
- Natalie C Benda: Columbia University School of Nursing, New York, New York, USA

19
Nashwan AJ, Gharib S, Alhadidi M, El-Ashry AM, Alamgir A, Al-Hassan M, Khedr MA, Dawood S, Abufarsakh B. Harnessing Artificial Intelligence: Strategies for Mental Health Nurses in Optimizing Psychiatric Patient Care. Issues Ment Health Nurs 2023; 44:1020-1034. [PMID: 37850937 DOI: 10.1080/01612840.2023.2263579]
Abstract
This narrative review explores the transformative impact of Artificial Intelligence (AI) on mental health nursing, particularly in enhancing psychiatric patient care. AI technologies present new strategies for early detection, risk assessment, and improving treatment adherence in mental health. They also facilitate remote patient monitoring, bridge geographical gaps, and support clinical decision-making. The evolution of virtual mental health assistants and AI-enhanced therapeutic interventions are also discussed. These technological advancements reshape the nurse-patient interactions while ensuring personalized, efficient, and high-quality care. The review also addresses AI's ethical and responsible use in mental health nursing, emphasizing patient privacy, data security, and the balance between human interaction and AI tools. As AI applications in mental health care continue to evolve, this review encourages continued innovation while advocating for responsible implementation, thereby optimally leveraging the potential of AI in mental health nursing.
Affiliations:
- Abdulqadir J Nashwan: Nursing Department, Hamad Medical Corporation, Doha, Qatar; Department of Public Health, College of Health Sciences, QU Health, Qatar University, Doha, Qatar
- Suzan Gharib: Nursing Department, Al-Khaldi Hospital, Amman, Jordan
- Majdi Alhadidi: Psychiatric & Mental Health Nursing, Faculty of Nursing, Al-Zaytoonah University of Jordan, Amman, Jordan
- Shaimaa Dawood: Faculty of Nursing, Alexandria University, Alexandria, Egypt

20
Sahoo JP, Narayan BN, Santi NS. The future of psychiatry with artificial intelligence: can the man-machine duo redefine the tenets? Consortium Psychiatricum 2023; 4:72-76. [PMID: 38249529 PMCID: PMC10795941 DOI: 10.17816/cp13626]
Abstract
As one of the largest contributors to morbidity and mortality, psychiatric disorders are anticipated to triple in prevalence over the coming decade or so. Major obstacles to psychiatric care include stigma, funding constraints, and a dearth of resources and psychiatrists. Our discussion centres on how machine learning and artificial intelligence could influence the way patients experience care. To better grasp the issues of trust, privacy, and autonomy, their societal and ethical ramifications need to be probed. There is always the possibility that the artificial mind could malfunction or exhibit behavioral abnormalities. An in-depth philosophical understanding of these possibilities in both human and artificial intelligence could offer correlational insights into the robotic management of mental disorders in the future. This article examines the role of artificial intelligence, the challenges associated with it, and its prospects in the management of mental illnesses such as depression, anxiety, and schizophrenia.
Affiliations:
- N Simple Santi: Veer Surendra Sai Institute Of Medical Science And Research

21
Elyoseph Z, Levkovich I. Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment. Front Psychiatry 2023; 14:1213141. [PMID: 37593450 PMCID: PMC10427505 DOI: 10.3389/fpsyt.2023.1213141]
Abstract
ChatGPT, an artificial intelligence language model developed by OpenAI, holds the potential for contributing to the field of mental health. Nevertheless, although ChatGPT theoretically shows promise, its clinical abilities in suicide prevention, a significant mental health concern, have yet to be demonstrated. To address this knowledge gap, this study aims to compare ChatGPT's assessments of mental health indicators to those of mental health professionals in a hypothetical case study that focuses on suicide risk assessment. Specifically, ChatGPT was asked to evaluate a text vignette describing a hypothetical patient with varying levels of perceived burdensomeness and thwarted belongingness. The ChatGPT assessments were compared to the norms of mental health professionals. The results indicated that ChatGPT rated the risk of suicide attempts lower than did the mental health professionals in all conditions. Furthermore, ChatGPT rated mental resilience lower than the norms in most conditions. These results imply that gatekeepers, patients or even mental health professionals who rely on ChatGPT for evaluating suicidal risk or as a complementary tool to improve decision-making may receive an inaccurate assessment that underestimates the actual suicide risk.
Affiliations:
- Zohar Elyoseph: Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel; Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
- Inbar Levkovich: Faculty of Graduate Studies, Oranim Academic College, Kiryat Tiv'on, Israel

22
Eastwood KW, May R, Andreou P, Abidi S, Abidi SSR, Loubani OM. Needs and expectations for artificial intelligence in emergency medicine according to Canadian physicians. BMC Health Serv Res 2023; 23:798. [PMID: 37491228 PMCID: PMC10369807 DOI: 10.1186/s12913-023-09740-w]
Abstract
BACKGROUND Artificial Intelligence (AI) is recognized by emergency physicians (EPs) as an important technology that will affect clinical practice. Several AI tools have already been developed to aid care delivery in emergency medicine (EM). However, many EM tools appear to have been developed without a cross-disciplinary needs assessment, making it difficult to understand their broader importance to general practice. Clinician surveys about AI tools have been conducted within other medical specialties to help guide future design. This study aims to understand the needs of Canadian EPs for the appropriate use of AI-based tools. METHODS A national cross-sectional, two-stage, mixed-methods electronic survey of Canadian EPs was conducted from January to May 2022. The survey includes demographic and physician practice-pattern data, clinicians' current use and perceptions of AI, and individual rankings of which EM work activities would most benefit from AI. RESULTS The primary outcome is a ranked list of high-priority AI tools for EM that physicians want translated into general use within the next 10 years. When ranking specific AI examples, 'automated charting/report generation', 'clinical prediction rules' and 'monitoring vitals with early-warning detection' were the top items. When ranking by physician work activities, 'AI tools for documentation', 'AI tools for computer use' and 'AI tools for triaging patients' were the top items. For secondary outcomes, EPs indicated AI was 'likely' (43.1%) or 'extremely likely' (43.7%) to be able to complete the task of 'documentation' and indicated either 'a great deal' (32.8%) or 'quite a bit' (39.7%) of potential for AI in EM. Further, EPs were either 'strongly' (48.5%) or 'somewhat' (39.8%) interested in AI for EM. CONCLUSIONS Physician input on the design of AI is essential to ensure the uptake of this technology.
Translation of AI tools to facilitate documentation is considered a high priority, and respondents had high confidence that AI could facilitate this task. This study will guide future directions regarding the use of AI for EM and help direct efforts to address prevailing technology-translation barriers, such as access to high-quality application-specific data and the development of reporting guidelines for specific AI applications. With a prioritized list of high-need AI applications, decision-makers can develop focused strategies to address these larger obstacles.
Affiliation(s)
- Kyle W Eastwood
- Department of Emergency Medicine, Dalhousie University, 1796 Summer Street, Halifax Infirmary, 4th Floor Emergency Department Administration Office, Halifax, NS, B3H 2Y9, Canada
- Ronald May
- Department of Emergency Medicine, Dalhousie University, 1796 Summer Street, Halifax Infirmary, 4th Floor Emergency Department Administration Office, Halifax, NS, B3H 2Y9, Canada
- Pantelis Andreou
- Department of Community Health and Epidemiology, Dalhousie University, Halifax, Canada
- Samina Abidi
- Department of Community Health and Epidemiology, Dalhousie University, Halifax, Canada
- Syed Sibte Raza Abidi
- NICHE Research Group, Faculty of Computer Science, Dalhousie University, Halifax, Canada
- Osama M Loubani
- Department of Emergency Medicine, Dalhousie University, 1796 Summer Street, Halifax Infirmary, 4th Floor Emergency Department Administration Office, Halifax, NS, B3H 2Y9, Canada

23
Knights J, Bangieva V, Passoni M, Donegan ML, Shen J, Klein A, Baker J, DuBois H. A framework for precision "dosing" of mental healthcare services: algorithm development and clinical pilot. Int J Ment Health Syst 2023; 17:21. [PMID: 37408006 DOI: 10.1186/s13033-023-00581-y]
Abstract
BACKGROUND One in five adults in the US experience mental illness and over half of these adults do not receive treatment. In addition to the access gap, few innovations have been reported for ensuring the right level of mental healthcare service is available at the right time for individual patients. METHODS Historical observational clinical data were leveraged from a virtual healthcare system. We conceptualize mental healthcare services themselves as therapeutic interventions and develop a prototype computational framework to estimate their potential longitudinal impacts on depressive symptom severity, which is then used to assess new treatment schedules and delivered to clinicians via a dashboard. We operationally define this process as "session dosing". Data from 497 patients who started treatment with severe symptoms of depression between November 2020 and October 2021 were used for modeling. Subsequently, 22 mental health providers participated in a 5-week clinical quality improvement (QI) pilot, where they utilized the prototype dashboard in treatment planning with 126 patients. RESULTS The developed framework was able to resolve patient symptom fluctuations from their treatment schedules: 77% of the modeling dataset fit criteria for using the individual fits in subsequent clinical planning, and five anecdotal profile types were identified that presented different clinical opportunities. Based on initial quality thresholds for model fits, 88% of those individuals were identified as adequate for session optimization planning using the developed dashboard, while 12% supported more thorough treatment planning (e.g. different treatment modalities). In the clinical pilot, 90% of clinicians reported using the dashboard a few times or more per member. Although most clinicians (67.5%) either rarely or never used the dashboard to change session types, numerous other discussions were enabled, and opportunities for automating session recommendations were identified.
CONCLUSIONS It is possible to model and identify the extent to which mental healthcare services can resolve depressive symptom severity fluctuations. Implementation of one such prototype framework in a real-world clinic represents an advancement in mental healthcare treatment planning; however, investigations to assess which clinical endpoints are impacted by this technology, and the best way to incorporate such frameworks into clinical workflows, are needed and are actively being pursued.
Affiliation(s)
- Jonathan Knights
- Mindstrong, Inc., 101 Jefferson Drive, Suite 228, Menlo Park, CA, 94025, USA
- Victoria Bangieva
- Mindstrong, Inc., 101 Jefferson Drive, Suite 228, Menlo Park, CA, 94025, USA
- Michela Passoni
- Mindstrong, Inc., 101 Jefferson Drive, Suite 228, Menlo Park, CA, 94025, USA
- Macayla L Donegan
- Mindstrong, Inc., 101 Jefferson Drive, Suite 228, Menlo Park, CA, 94025, USA
- Jacob Shen
- Mindstrong, Inc., 101 Jefferson Drive, Suite 228, Menlo Park, CA, 94025, USA
- Audrey Klein
- Mindstrong, Inc., 101 Jefferson Drive, Suite 228, Menlo Park, CA, 94025, USA
- Justin Baker
- Mindstrong, Inc., 101 Jefferson Drive, Suite 228, Menlo Park, CA, 94025, USA
- Holly DuBois
- Mindstrong, Inc., 101 Jefferson Drive, Suite 228, Menlo Park, CA, 94025, USA

24
Elyoseph Z, Hadar-Shoval D, Asraf K, Lvovsky M. ChatGPT outperforms humans in emotional awareness evaluations. Front Psychol 2023; 14:1199058. [PMID: 37303897 PMCID: PMC10254409 DOI: 10.3389/fpsyg.2023.1199058]
Abstract
The artificial intelligence chatbot, ChatGPT, has gained widespread attention for its ability to perform natural language processing tasks and has the fastest-growing user base in history. Although ChatGPT has successfully generated theoretical information in multiple fields, its ability to identify and describe emotions is still unknown. Emotional awareness (EA), the ability to conceptualize one's own and others' emotions, is considered a transdiagnostic mechanism for psychopathology. This study utilized the Levels of Emotional Awareness Scale (LEAS) as an objective, performance-based test to analyze ChatGPT's responses to twenty scenarios and compared its EA performance with general population norms, as reported in a previous study. A second examination was performed one month later to measure EA improvement over time. Finally, two independent licensed psychologists evaluated the fit-to-context of ChatGPT's EA responses. In the first examination, ChatGPT demonstrated significantly higher performance than the general population on all the LEAS scales (Z score = 2.84). In the second examination, ChatGPT's performance significantly improved, almost reaching the maximum possible LEAS score (Z score = 4.26). Its accuracy levels were also extremely high (9.7/10). The study demonstrated that ChatGPT can generate appropriate EA responses, and that its performance may improve significantly over time. The study has theoretical and clinical implications, as ChatGPT can be used as part of cognitive training for clinical populations with EA impairments. In addition, ChatGPT's EA-like abilities may facilitate psychiatric diagnosis and assessment and be used to enhance emotional language. Further research is warranted to better understand the potential benefits and risks of ChatGPT and refine it to promote mental health.
Affiliation(s)
- Zohar Elyoseph
- Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, England
- Dorit Hadar-Shoval
- Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Kfir Asraf
- Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Maya Lvovsky
- Psychology Department, Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel

25
Squires M, Tao X, Elangovan S, Gururajan R, Zhou X, Acharya UR, Li Y. Deep learning and machine learning in psychiatry: a survey of current progress in depression detection, diagnosis and treatment. Brain Inform 2023; 10:10. [PMID: 37093301 PMCID: PMC10123592 DOI: 10.1186/s40708-023-00188-6]
Abstract
Informatics paradigms for brain and mental health research have seen significant advances in recent years. These developments can largely be attributed to the emergence of new technologies such as machine learning, deep learning, and artificial intelligence. Data-driven methods have the potential to support mental health care by providing more precise and personalised approaches to detection, diagnosis, and treatment of depression. In particular, precision psychiatry is an emerging field that utilises advanced computational techniques to achieve a more individualised approach to mental health care. This survey provides an overview of the ways in which artificial intelligence is currently being used to support precision psychiatry. Advanced algorithms are being used to support all phases of the treatment cycle. These systems have the potential to identify individuals suffering from mental health conditions, allowing them to receive the care they need, and to tailor treatments to the individual patients who are most likely to benefit. Additionally, unsupervised learning techniques are breaking down existing discrete diagnostic categories and highlighting the vast disease heterogeneity observed within depression diagnoses. Artificial intelligence also provides the opportunity to shift towards evidence-based treatment prescription, moving away from existing methods based on group averages. However, our analysis suggests there are several limitations currently inhibiting the progress of data-driven paradigms in care. Notably, none of the surveyed articles demonstrate empirically improved patient outcomes over existing methods. Furthermore, greater consideration needs to be given to uncertainty quantification, model validation, constructing interdisciplinary teams of researchers, improved access to diverse data and standardised definitions within the field.
Empirical validation of computer algorithms via randomised controlled trials that demonstrate measurable improvement in patient outcomes is the next step in progressing models to clinical implementation.
Affiliation(s)
- Matthew Squires
- School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD, Australia
- Xiaohui Tao
- School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD, Australia
- Raj Gururajan
- School of Business, University of Southern Queensland, Springfield, QLD, Australia
- Xujuan Zhou
- School of Business, University of Southern Queensland, Springfield, QLD, Australia
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD, Australia
- Yuefeng Li
- School of Computer Science, Queensland University of Technology, Brisbane, QLD, Australia

26
Al-Medfa MK, Al-Ansari AM, Darwish AH, Qreeballa TA, Jahrami H. Physicians' attitudes and knowledge toward artificial intelligence in medicine: Benefits and drawbacks. Heliyon 2023; 9:e14744. [PMID: 37035387 PMCID: PMC10073828 DOI: 10.1016/j.heliyon.2023.e14744]
Abstract
The use of artificial intelligence (AI) in the medical field is increasing and is expected to shape future clinical practice and job security. Therefore, this study aimed to assess the opinions and attitudes of practicing physicians in Bahrain regarding the benefits and drawbacks of AI for their future daily practice. A cross-sectional survey of practicing physicians with a minimum of five years' experience across the main secondary and tertiary care hospitals in Bahrain was conducted. An online questionnaire was used to collect data on demographics, knowledge of AI, attitudes towards the use of AI in 10 tasks of daily clinical practice, and opinions on the benefits and drawbacks of AI. A total of 114 physicians participated in the survey. Among them, 43 (37.7%) were registered psychiatrists, 15 (13.2%) were pathologists, 17 (14.9%) were radiologists, and 39 (34.2%) were surgical specialists. The participants' attitudes were overall positive towards AI. Pathologists were particularly in favor of using AI to "Formulate personalized medication and/or treatment plans for patients" and to "Interview patients in a range of settings to obtain medical history." Most participants agreed that AI would reduce the time needed to establish a diagnosis and negatively affect employment rates. There were no correlations between the responses and the participants' age, gender, years of experience, or AI knowledge. This study demonstrates that the attitudes towards the use of AI in medicine among practicing physicians in Bahrain are similar to those of physicians in developed countries in that they are positive and welcoming of AI implementation in practice. However, the potential effects of AI on job security are a major concern.
Affiliation(s)
- Mohammed Khalid Al-Medfa
- Department of Internal Medicine, College of Medicine and Medical Sciences, Arabian Gulf University, Bahrain
- Ahmed M.S. Al-Ansari
- Department of Psychiatry, College of Medicine and Medical Sciences, Arabian Gulf University, Bahrain
- Corresponding author.
- Haitham Jahrami
- Department of Psychiatry, College of Medicine and Medical Sciences, Arabian Gulf University, Bahrain

27
Blease C, Kharko A, Bernstein M, Bradley C, Houston M, Walsh I, Mandl KD. Computerization of the Work of General Practitioners: Mixed Methods Survey of Final-Year Medical Students in Ireland. JMIR Med Educ 2023; 9:e42639. [PMID: 36939809 PMCID: PMC10131917 DOI: 10.2196/42639]
Abstract
BACKGROUND The potential for digital health technologies, including machine learning (ML)-enabled tools, to disrupt the medical profession is the subject of ongoing debate within biomedical informatics. OBJECTIVE We aimed to describe the opinions of final-year medical students in Ireland regarding the potential of future technology to replace or work alongside general practitioners (GPs) in performing key tasks. METHODS Between March 2019 and April 2020, using a convenience sample, we conducted a mixed methods paper-based survey of final-year medical students. The survey was administered at 4 out of 7 medical schools in Ireland across each of the 4 provinces in the country. Quantitative data were analyzed using descriptive statistics and nonparametric tests. We used thematic content analysis to investigate free-text responses. RESULTS In total, 43.1% (252/585) of the final-year students at 3 medical schools responded, and data collection at 1 medical school was terminated due to disruptions associated with the COVID-19 pandemic. With regard to forecasting the potential impact of artificial intelligence (AI)/ML on primary care 25 years from now, around half (127/246, 51.6%) of all surveyed students believed the work of GPs will change minimally or not at all. Notably, students who did not intend to enter primary care predicted that AI/ML will have a great impact on the work of GPs. CONCLUSIONS We caution that without a firm curricular foundation on advances in AI/ML, students may rely on extreme perspectives involving self-preserving optimism biases that demote the impact of advances in technology on primary care on the one hand and technohype on the other. Ultimately, these biases may lead to negative consequences in health care. Improvements in medical education could help prepare tomorrow's doctors to optimize and lead the ethical and evidence-based implementation of AI/ML-enabled tools in medicine for enhancing the care of tomorrow's patients.
Affiliation(s)
- Charlotte Blease
- General Medicine and Primary Care, Beth Israel Deaconess Medical Center, Boston, MA, United States
- Anna Kharko
- Healthcare Sciences and e-Health, Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
- School of Psychology, University of Plymouth, Plymouth, United Kingdom
- Michael Bernstein
- Department of Behavioral and Social Sciences, School of Public Health, Brown University, Providence, RI, United States
- Department of Diagnostic Imaging, Warren Alpert Medical School, Brown University, Providence, RI, United States
- Colin Bradley
- School of Medicine, University College Cork, Cork, Ireland
- Muiris Houston
- School of Medicine, National University of Ireland Galway, Galway, Ireland
- School of Medicine, Trinity College Dublin, Dublin, Ireland
- Ian Walsh
- Dentistry and Biomedical Sciences, School of Medicine, Queen's University, Belfast, Ireland
- Kenneth D Mandl
- Computational Health Informatics Program, Boston Children's Hospital, Boston, MA, United States

28
Sharma A, Lin IW, Miner AS, Atkins DC, Althoff T. Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nat Mach Intell 2023. [DOI: 10.1038/s42256-022-00593-2]
29
Morrow E, Zidaru T, Ross F, Mason C, Patel KD, Ream M, Stockley R. Artificial intelligence technologies and compassion in healthcare: A systematic scoping review. Front Psychol 2023; 13:971044. [PMID: 36733854 PMCID: PMC9887144 DOI: 10.3389/fpsyg.2022.971044]
Abstract
Background Advances in artificial intelligence (AI) technologies, together with the availability of big data in society, create uncertainties about how these developments will affect healthcare systems worldwide. Compassion is essential for high-quality healthcare and research shows how prosocial caring behaviors benefit human health and societies. However, the possible association between AI technologies and compassion is underconceptualized and underexplored. Objectives The aim of this scoping review is to provide comprehensive depth and a balanced perspective on the emerging topic of AI technologies and compassion, to inform future research and practice. The review questions were: How is compassion discussed in relation to AI technologies in healthcare? How are AI technologies being used to enhance compassion in healthcare? What are the gaps in current knowledge and unexplored potential? What are the key areas where AI technologies could support compassion in healthcare? Materials and methods A systematic scoping review was conducted following the five steps of the Joanna Briggs Institute methodology. Presentation of the scoping review conforms with PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews). Eligibility criteria were defined according to 3 concept constructs (AI technologies, compassion, healthcare) developed from the literature and informed by medical subject headings (MeSH) and key words for the electronic searches. Sources of evidence were Web of Science and PubMed databases, articles published in English, 2011-2022. Articles were screened by title/abstract using inclusion/exclusion criteria. Data extracted (author, date of publication, type of article, aim/context of healthcare, key relevant findings, country) were charted using data tables. Thematic analysis used an inductive-deductive approach to generate code categories from the review questions and the data.
A multidisciplinary team assessed themes for resonance and relevance to research and practice. Results Searches identified 3,124 articles. A total of 197 were included after screening. The number of articles has increased over 10 years (2011, n = 1 to 2021, n = 47 and from Jan-Aug 2022 n = 35 articles). Overarching themes related to the review questions were: (1) Developments and debates (7 themes) Concerns about AI ethics, healthcare jobs, and loss of empathy; Human-centered design of AI technologies for healthcare; Optimistic speculation AI technologies will address care gaps; Interrogation of what it means to be human and to care; Recognition of future potential for patient monitoring, virtual proximity, and access to healthcare; Calls for curricula development and healthcare professional education; Implementation of AI applications to enhance health and wellbeing of the healthcare workforce. (2) How AI technologies enhance compassion (10 themes) Empathetic awareness; Empathetic response and relational behavior; Communication skills; Health coaching; Therapeutic interventions; Moral development learning; Clinical knowledge and clinical assessment; Healthcare quality assessment; Therapeutic bond and therapeutic alliance; Providing health information and advice. (3) Gaps in knowledge (4 themes) Educational effectiveness of AI-assisted learning; Patient diversity and AI technologies; Implementation of AI technologies in education and practice settings; Safety and clinical effectiveness of AI technologies. (4) Key areas for development (3 themes) Enriching education, learning and clinical practice; Extending healing spaces; Enhancing healing relationships. Conclusion There is an association between AI technologies and compassion in healthcare and interest in this association has grown internationally over the last decade. 
In a range of healthcare contexts, AI technologies are being used to enhance empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; and to provide health information and advice. The findings inform a reconceptualization of compassion as a human-AI system of intelligent caring comprising six elements: (1) Awareness of suffering (e.g., pain, distress, risk, disadvantage); (2) Understanding the suffering (significance, context, rights, responsibilities etc.); (3) Connecting with the suffering (e.g., verbal, physical, signs and symbols); (4) Making a judgment about the suffering (the need to act); (5) Responding with an intention to alleviate the suffering; (6) Attention to the effect and outcomes of the response. These elements can operate at an individual (human or machine) and collective systems level (healthcare organizations or systems) as a cyclical system to alleviate different types of suffering. New and novel approaches to human-AI intelligent caring could enrich education, learning, and clinical practice; extend healing spaces; and enhance healing relationships. Implications In a complex adaptive system such as healthcare, human-AI intelligent caring will need to be implemented, not as an ideology, but through strategic choices, incentives, regulation, professional education, and training, as well as through joined up thinking about human-AI intelligent caring. Research funders can encourage research and development into the topic of AI technologies and compassion as a system of human-AI intelligent caring. Educators, technologists, and health professionals can inform themselves about the system of human-AI intelligent caring.
Affiliation(s)
- Teodor Zidaru
- Department of Anthropology, London School of Economics and Political Sciences, London, United Kingdom
- Fiona Ross
- Faculty of Health, Science, Social Care and Education, Kingston University London, London, United Kingdom
- Cindy Mason
- Artificial Intelligence Researcher (Independent), Palo Alto, CA, United States
- Melissa Ream
- Kent Surrey Sussex Academic Health Science Network (AHSN) and the National AHSN Network Artificial Intelligence (AI) Initiative, Surrey, United Kingdom
- Rich Stockley
- Head of Research and Engagement, Surrey Heartlands Health and Care Partnership, Surrey, United Kingdom

30
York TJ, Raj S, Ashdown T, Jones G. Clinician and computer: a study on doctors' perceptions of artificial intelligence in skeletal radiography. BMC Med Educ 2023; 23:16. [PMID: 36627640 PMCID: PMC9830124 DOI: 10.1186/s12909-022-03976-6]
Abstract
BACKGROUND Traumatic musculoskeletal injuries are a common presentation to emergency care, the first-line investigation often being plain radiography. The interpretation of this imaging frequently falls to less experienced clinicians despite well-established challenges in reporting. This study presents novel data of clinicians' confidence in interpreting trauma radiographs, their perception of AI in healthcare, and their support for the development of systems applied to skeletal radiography. METHODS A novel questionnaire was distributed through a network of collaborators to clinicians across the Southeast of England. Over a three-month period, responses were compiled into a database before undergoing statistical review. RESULTS The responses of 297 participants were included. The mean self-assessed knowledge of AI in healthcare was 3.68 out of ten, with significantly higher knowledge reported by the most senior doctors (Specialty Trainee/Specialty Registrar or above = 4.88). 13.8% of participants reported an awareness of AI in their clinical practice. Overall, participants indicated substantial favourability towards AI in healthcare (7.87) and in AI applied to skeletal radiography (7.75). There was a preference for a hypothetical system indicating positive findings rather than ruling as negative (7.26 vs 6.20). CONCLUSIONS This study identifies clear support, amongst a cross section of student and qualified doctors, for both the general use of AI technology in healthcare and in its application to skeletal radiography for trauma. The development of systems to address this demand appear well founded and popular. The engagement of a small but reticent minority should be sought, along with improving the wider education of doctors on AI.
Affiliation(s)
- Thomas James York
- Alexander Fleming Building, Imperial College London, South Kensington Campus, London, UK
- Gareth Jones
- Imperial College Healthcare NHS Trust, London, UK

31
Kushniruk A, de Hond AAH, Thoral PJ, Steyerberg EW, Kant IMJ, Cinà G, Arbous MS. Intensive Care Unit Physicians' Perspectives on Artificial Intelligence-Based Clinical Decision Support Tools: Preimplementation Survey Study. JMIR Hum Factors 2023; 10:e39114. [PMID: 36602843 PMCID: PMC9853335 DOI: 10.2196/39114]
Abstract
BACKGROUND Artificial intelligence-based clinical decision support (AI-CDS) tools have great potential to benefit intensive care unit (ICU) patients and physicians. There is a gap between the development and implementation of these tools. OBJECTIVE We aimed to investigate physicians' perspectives and their current decision-making behavior before implementing a discharge AI-CDS tool for predicting readmission and mortality risk after ICU discharge. METHODS We conducted a survey of physicians involved in decision-making on discharge of patients at two Dutch academic ICUs between July and November 2021. Questions were divided into four domains: (1) physicians' current decision-making behavior with respect to discharging ICU patients, (2) perspectives on the use of AI-CDS tools in general, (3) willingness to incorporate a discharge AI-CDS tool into daily clinical practice, and (4) preferences for using a discharge AI-CDS tool in daily workflows. RESULTS Most of the 64 respondents (of 93 contacted, 69%) were familiar with AI (62/64, 97%) and had positive expectations of AI, with 55 of 64 (86%) believing that AI could support them in their work as a physician. The respondents disagreed on whether the decision to discharge a patient was complex (23/64, 36% agreed and 22/64, 34% disagreed); nonetheless, most (59/64, 92%) agreed that a discharge AI-CDS tool could be of value. Significant differences were observed between physicians from the 2 academic sites, which may be related to different levels of involvement in the development of the discharge AI-CDS tool. CONCLUSIONS ICU physicians showed a favorable attitude toward the integration of AI-CDS tools into the ICU setting in general, and in particular toward a tool to predict a patient's risk of readmission and mortality within 7 days after discharge. The findings of this questionnaire will be used to improve the implementation process and training of end users.
Affiliation(s)
- Anne A H de Hond
- Clinical AI Implementation and Research Lab, Leiden University Medical Center, Leiden, Netherlands
- Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, Netherlands
- Patrick J Thoral
- Department of Intensive Care Medicine, Laboratory for Critical Care Computational Intelligence, Amsterdam Medical Data Science, Amsterdam University Medical Centers, Amsterdam, Netherlands
- Ewout W Steyerberg
- Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, Netherlands
- Ilse M J Kant
- Clinical AI Implementation and Research Lab, Leiden University Medical Center, Leiden, Netherlands
- Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, Netherlands
- Giovanni Cinà
- Pacmed, Amsterdam, Netherlands
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Department of Medical Informatics, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, Netherlands
- M Sesmu Arbous
- Department of Intensive Care Medicine, Leiden University Medical Center, Leiden, Netherlands

32
Mosch L, Fürstenau D, Brandt J, Wagnitz J, Klopfenstein SAI, Poncette AS, Balzer F. The medical profession transformed by artificial intelligence: Qualitative study. Digit Health 2022; 8:20552076221143903. [PMID: 36532112 PMCID: PMC9756357 DOI: 10.1177/20552076221143903]
Abstract
BACKGROUND Healthcare delivery will change through the increasing use of artificial intelligence (AI). Physicians are likely to be among the professions most affected, though to what extent is not yet clear. OBJECTIVE We analyzed physicians' and AI experts' stances towards AI-induced changes. This concerned (1) physicians' tasks, (2) job replacement risk, and (3) implications for the ways of working, including human-AI interaction, changes in job profiles, and hierarchical and cross-professional collaboration patterns. METHODS We adopted an exploratory, qualitative research approach, using semi-structured interviews with 24 experts in the fields of AI and medicine, medical informatics, digital medicine, and medical education and training. Thematic analysis of the interview transcripts was performed. RESULTS Specialized tasks currently performed by physicians in all areas of medicine would likely be taken over by AI, including bureaucratic tasks, clinical decision support, and research. However, the concern that physicians will be replaced by an AI system is unfounded, according to experts; AI systems today would be designed only for a specific use case and could not replace the human factor in the patient-physician relationship. Nevertheless, the job profile and professional role of physicians would be transformed as a result of new forms of human-AI collaboration and shifts to higher-value activities. AI could spur novel, more interprofessional teams in medical practice and research and, eventually, democratization and de-hierarchization. CONCLUSIONS The study highlights changes in job profiles of physicians and outlines demands for new categories of medical professionals considering AI-induced changes of work. Physicians should redefine their self-image and assume more responsibility in the age of AI-supported medicine.
There is a need for the development of scenarios and concepts for future job profiles in the health professions as well as their education and training.
Affiliation(s)
- Lina Mosch
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Medical Informatics, Berlin, Germany; Department of Anesthesiology and Intensive Care Medicine, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany. Correspondence: Lina Mosch, Charité – Universitätsmedizin Berlin, Institute of Medical Informatics, Charitéplatz 1, 10117 Berlin, Germany
- Daniel Fürstenau
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Medical Informatics, Berlin, Germany; Department of Business IT, IT University of Copenhagen, København, Denmark
- Jenny Brandt
- Universitätsmedizin Mainz, corporate member of Johannes Gutenberg University, Mainz, Germany
- Jasper Wagnitz
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Medical Informatics, Berlin, Germany
- Sophie AI Klopfenstein
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Medical Informatics, Berlin, Germany; Core Facility Digital Medicine and Interoperability, Berlin Institute of Health at Charité – Universitätsmedizin Berlin, Berlin, Germany
- Akira-Sebastian Poncette
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Medical Informatics, Berlin, Germany; Department of Anesthesiology and Intensive Care Medicine, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Felix Balzer
- Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Medical Informatics, Berlin, Germany
33
Chen ZS, Kulkarni P, Galatzer-Levy IR, Bigio B, Nasca C, Zhang Y. Modern views of machine learning for precision psychiatry. Patterns (N Y) 2022; 3:100602. [PMID: 36419447 PMCID: PMC9676543 DOI: 10.1016/j.patter.2022.100602] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
In light of the National Institute of Mental Health (NIMH)'s Research Domain Criteria (RDoC) and the advent of functional neuroimaging, novel technologies and methods provide new opportunities to develop precise and personalized prognosis and diagnosis of mental disorders. Machine learning (ML) and artificial intelligence (AI) technologies are playing an increasingly critical role in the new era of precision psychiatry. Combining ML/AI with neuromodulation technologies can potentially provide explainable solutions in clinical practice and effective therapeutic treatment. Advanced wearable and mobile technologies also call for a new role for ML/AI in digital phenotyping for mobile mental health. Here, we provide a comprehensive review of ML methodologies and applications that combine neuroimaging, neuromodulation, and advanced mobile technologies in psychiatric practice. We further review the role of ML in molecular phenotyping and cross-species biomarker identification in precision psychiatry. We also discuss explainable AI (XAI) and neuromodulation in a closed-loop, human-in-the-loop manner and highlight the potential of ML for multi-media information extraction and multi-modal data fusion. Finally, we discuss conceptual and practical challenges in precision psychiatry and highlight ML opportunities for future research.
Affiliation(s)
- Zhe Sage Chen
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Neuroscience and Physiology, New York University Grossman School of Medicine, New York, NY 10016, USA
- The Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, NY 11201, USA
- Isaac R. Galatzer-Levy
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
- Meta Reality Lab, New York, NY, USA
- Benedetta Bigio
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
- Carla Nasca
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
- The Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA
- Yu Zhang
- Department of Bioengineering, Lehigh University, Bethlehem, PA 18015, USA
- Department of Electrical and Computer Engineering, Lehigh University, Bethlehem, PA 18015, USA
34
Monteith S, Glenn T, Geddes J, Whybrow PC, Achtyes E, Bauer M. Expectations for Artificial Intelligence (AI) in Psychiatry. Curr Psychiatry Rep 2022; 24:709-721. [PMID: 36214931 PMCID: PMC9549456 DOI: 10.1007/s11920-022-01378-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/15/2022] [Indexed: 01/29/2023]
Abstract
PURPOSE OF REVIEW Artificial intelligence (AI) is often presented as a transformative technology for clinical medicine even though the current technology maturity of AI is low. The purpose of this narrative review is to describe the complex reasons for the low technology maturity and set realistic expectations for the safe, routine use of AI in clinical medicine. RECENT FINDINGS For AI to be productive in clinical medicine, many diverse factors that contribute to the low maturity level need to be addressed. These include technical problems such as data quality, dataset shift, black-box opacity, validation and regulatory challenges, and human factors such as a lack of education in AI, workflow changes, automation bias, and deskilling. There will also be new and unanticipated safety risks with the introduction of AI. The solutions to these issues are complex and will take time to discover, develop, validate, and implement. However, addressing the many problems in a methodical manner will expedite the safe and beneficial use of AI to augment medical decision making in psychiatry.
Affiliation(s)
- Scott Monteith
- Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, MI, 49684, USA
- Tasha Glenn
- ChronoRecord Association, Fullerton, CA, USA
- John Geddes
- Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
- Peter C Whybrow
- Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, CA, USA
- Eric Achtyes
- Michigan State University College of Human Medicine, Grand Rapids, MI, 49684, USA
- Network180, Grand Rapids, MI, USA
- Michael Bauer
- Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Medical Faculty, Technische Universität Dresden, Dresden, Germany
35
Ho S, Doig GS, Ly A. Attitudes of optometrists towards artificial intelligence for the diagnosis of retinal disease: A cross-sectional mail-out survey. Ophthalmic Physiol Opt 2022; 42:1170-1179. [PMID: 35924658 DOI: 10.1111/opo.13034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Revised: 07/01/2022] [Accepted: 07/01/2022] [Indexed: 01/07/2023]
Abstract
PURPOSE Artificial intelligence (AI)-based systems have demonstrated great potential in improving the diagnostic accuracy of retinal disease but are yet to achieve widespread acceptance in routine clinical practice. Clinician attitudes are known to influence implementation. Therefore, this study aimed to identify optometrists' attitudes towards the use of AI to assist in diagnosing retinal disease. METHODS A paper-based survey was designed to assess general attitudes towards AI in diagnosing retinal disease and motivators/barriers for future use. Two clinical scenarios for using AI were evaluated: (1) at the point of care to obtain a diagnostic recommendation, versus (2) after the consultation to provide a second opinion. Relationships between participant characteristics and attitudes towards AI were explored. The survey was mailed to 252 randomly selected practising optometrists across Australia, with repeat mail-outs to non-respondents. RESULTS The response rate was 53% (133/252). Respondents' mean (SD) age was 42.7 (13.3) years; 44.4% (59/133) identified as female, whilst 1.5% (2/133) identified as gender diverse. The mean number of years practising in primary eye care was 18.8 (13.2), with 64.7% (86/133) working in an independently owned practice. On average, responding optometrists reported positive attitudes (mean score 4.0 out of 5, SD 0.8) towards using AI as a tool to aid the diagnosis of retinal disease, and would be more likely to use AI if it is proven to increase patient access to healthcare (mean score 4.4 out of 5, SD 0.6). Furthermore, optometrists expressed a statistically significant preference for using AI after the consultation to provide a second opinion rather than during the consultation at the point of care (+0.12, p = 0.01). CONCLUSIONS Optometrists have positive attitudes towards the future use of AI as an aid to diagnose retinal disease. Understanding clinician attitudes and preferences for using AI may help maximise its clinical potential and ensure its successful translation into practice.
Affiliation(s)
- Sharon Ho
- Centre for Eye Health, The University of New South Wales, Sydney, New South Wales, Australia; School of Optometry and Vision Science, The University of New South Wales, Sydney, New South Wales, Australia
- Gordon S Doig
- Centre for Eye Health, The University of New South Wales, Sydney, New South Wales, Australia; School of Optometry and Vision Science, The University of New South Wales, Sydney, New South Wales, Australia
- Angelica Ly
- Centre for Eye Health, The University of New South Wales, Sydney, New South Wales, Australia; School of Optometry and Vision Science, The University of New South Wales, Sydney, New South Wales, Australia; Brien Holden Vision Institute, The University of New South Wales, Sydney, New South Wales, Australia
36
Huang J, Zhao Y, Qu W, Tian Z, Tan Y, Wang Z, Tan S. Automatic recognition of schizophrenia from facial videos using 3D convolutional neural network. Asian J Psychiatr 2022; 77:103263. [PMID: 36152565 DOI: 10.1016/j.ajp.2022.103263] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 08/22/2022] [Accepted: 09/14/2022] [Indexed: 11/17/2022]
Abstract
Schizophrenia affects patients, their families, and society because of chronic impairments in cognition, behavior, and emotion. However, its clinical diagnosis depends mainly on clinicians' knowledge of patients' symptoms, and auxiliary diagnostic methods such as MRI and EEG are cumbersome and time-consuming. Recently, convolutional neural networks (CNNs) have been applied to auxiliary diagnosis in psychiatry. In this study, a method based on deep learning and facial videos is proposed for the rapid detection of schizophrenia. In total, 125 videos from 125 schizophrenic patients and 75 videos from 75 healthy controls were obtained using emotional stimulation tasks. Video preprocessing included extraction of the experimental clips, face detection, facial region cropping, resizing to 500 × 500 pixels, and uniform sampling of 100 frames. The preprocessed facial videos were used to train a ResNet18_3D model, evaluated with ten-fold cross-validation and a held-out test set using accuracy, precision, sensitivity, specificity, balanced accuracy, and AUC. The ResNet18_3D trained on the Film_order stimuli achieved the best performance, with accuracy, sensitivity, specificity, balanced accuracy, and AUC of 89.00%, 96.80%, 76.00%, 86.40%, and 0.9397, respectively. The model distinguishes healthy controls from schizophrenic patients through changes in the facial region. These results show that facial video under emotional stimulation can be used to classify schizophrenic patients and help clinicians with diagnosis in the clinical environment. Among the different types of stimuli, video stimuli with a fixed emotional order showed the best classification performance.
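Two steps of the pipeline this abstract describes — uniform sampling of 100 frames per clip, and evaluation with sensitivity, specificity, and balanced accuracy — can be sketched in a few lines. The function names and toy label data below are illustrative, not taken from the study:

```python
# Sketch of uniform frame sampling and binary evaluation metrics,
# as described in the abstract above. Names and data are illustrative.

def uniform_sample_indices(n_frames: int, n_samples: int = 100) -> list[int]:
    """Pick n_samples frame indices spread evenly across a clip."""
    if n_frames <= n_samples:
        return list(range(n_frames))
    step = n_frames / n_samples
    return [int(i * step) for i in range(n_samples)]

def binary_metrics(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    """Sensitivity, specificity, and balanced accuracy (positive class = 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": sens, "specificity": spec,
            "balanced_accuracy": (sens + spec) / 2}

# A clip of 1000 frames is reduced to 100 evenly spaced frames.
idx = uniform_sample_indices(1000, 100)
print(len(idx), idx[:3])  # 100 frames, starting at indices 0, 10, 20

# Toy labels: balanced accuracy is the mean of sensitivity and specificity.
print(binary_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 1]))
```

Balanced accuracy is a natural summary here because the classes are imbalanced (125 patients vs. 75 controls), so raw accuracy alone would favor the majority class.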
Affiliation(s)
- Jie Huang
- Beijing HuiLongGuan Hospital, Peking University HuiLongGuan Clinical Medical School, Beijing, 100096, China
- Yanli Zhao
- Beijing HuiLongGuan Hospital, Peking University HuiLongGuan Clinical Medical School, Beijing, 100096, China
- Wei Qu
- Beijing HuiLongGuan Hospital, Peking University HuiLongGuan Clinical Medical School, Beijing, 100096, China
- Zhanxiao Tian
- Beijing HuiLongGuan Hospital, Peking University HuiLongGuan Clinical Medical School, Beijing, 100096, China
- Yunlong Tan
- Beijing HuiLongGuan Hospital, Peking University HuiLongGuan Clinical Medical School, Beijing, 100096, China
- Zhiren Wang
- Beijing HuiLongGuan Hospital, Peking University HuiLongGuan Clinical Medical School, Beijing, 100096, China
- Shuping Tan
- Beijing HuiLongGuan Hospital, Peking University HuiLongGuan Clinical Medical School, Beijing, 100096, China
37
Pap IA, Oniga S. A Review of Converging Technologies in eHealth Pertaining to Artificial Intelligence. Int J Environ Res Public Health 2022; 19:11413. [PMID: 36141685 PMCID: PMC9517043 DOI: 10.3390/ijerph191811413] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/23/2022] [Revised: 08/31/2022] [Accepted: 09/06/2022] [Indexed: 06/16/2023]
Abstract
Over the last couple of years, in the context of the COVID-19 pandemic, many healthcare issues have been exacerbated, highlighting the paramount need to provide reliable and affordable health services to remote locations. Doing so requires the latest technologies, such as video conferencing, data management, secure transfer of patient information, and efficient data analysis tools such as machine learning algorithms. In the constant struggle to offer healthcare to everyone, many modern technologies find applicability in eHealth, mHealth, telehealth, or telemedicine. In this paper, we render an overview of the technologies used in various healthcare applications, ranging from remote patient monitoring in cardio-oncology to machine-learning analysis of EEG signals for seizure prediction, focusing on the role of artificial intelligence in eHealth.
Affiliation(s)
- Iuliu Alexandru Pap
- Department of Electric, Electronic and Computer Engineering, Technical University of Cluj-Napoca, North University Center of Baia Mare, 430083 Baia Mare, Romania
- Stefan Oniga
- Department of Electric, Electronic and Computer Engineering, Technical University of Cluj-Napoca, North University Center of Baia Mare, 430083 Baia Mare, Romania
- Department of IT Systems and Networks, Faculty of Informatics, University of Debrecen, 4032 Debrecen, Hungary
38
Li B, de Mestral C, Mamdani M, Al-Omran M. Perceptions of Canadian vascular surgeons toward artificial intelligence and machine learning. J Vasc Surg Cases Innov Tech 2022; 8:466-472. [PMID: 36016703 PMCID: PMC9396444 DOI: 10.1016/j.jvscit.2022.06.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 06/06/2022] [Indexed: 11/16/2022] Open
Abstract
Background Artificial intelligence (AI) and machine learning (ML) are rapidly advancing fields with increasing utility in health care. We conducted a survey to determine the perceptions of Canadian vascular surgeons toward AI/ML. Methods An online questionnaire was distributed to 162 members of the Canadian Society for Vascular Surgery. Self-reported knowledge, attitudes, and perceptions with respect to potential applications, limitations, and facilitators of AI/ML were assessed. Results Overall, 50 of the 162 Canadian vascular surgeons (31%) responded to the survey. Most respondents were aged 30 to 59 years (72%), male (80%), and White (67%) and practiced in academic settings (72%). One half of the participants reported that their knowledge of AI/ML was poor or very poor. Most were excited or very excited about AI/ML (66%) and were interested or very interested in learning more about the field (83.7%). The respondents believed that AI/ML would be useful or very useful for diagnosis (62%), prognosis (72%), patient selection (56%), image analysis (64%), intraoperative guidance (52%), research (88%), and education (80%). The limitations that the participants were most concerned about were errors leading to patient harm (42%), bias based on patient demographics (42%), and lack of clinician knowledge and skills in AI/ML (40%). Most were not concerned or were mildly concerned about job replacement (86%). The factors that were most important to encouraging clinicians to use AI/ML models were improvements in efficiency (88%), accurate predictions (84%), and ease of use (84%). The comments from respondents focused on the pressing need for the implementation of AI/ML in vascular surgery owing to the potential to improve care delivery. Conclusions Canadian vascular surgeons have positive views on AI/ML and believe this technology can be applied to multiple aspects of the specialty to improve patient care, research, and education. Current self-reported knowledge is poor, although interest was expressed in learning more about the field. The facilitators and barriers to the effective use of AI/ML identified in the present study can guide future development of these tools in vascular surgery.
39
Chen M, Zhang B, Cai Z, Seery S, Gonzalez MJ, Ali NM, Ren R, Qiao Y, Xue P, Jiang Y. Acceptance of clinical artificial intelligence among physicians and medical students: A systematic review with cross-sectional survey. Front Med (Lausanne) 2022; 9:990604. [PMID: 36117979 PMCID: PMC9472134 DOI: 10.3389/fmed.2022.990604] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2022] [Accepted: 08/01/2022] [Indexed: 11/13/2022] Open
Abstract
Background Artificial intelligence (AI) needs to be accepted and understood by physicians and medical students, but few studies have systematically assessed their attitudes. We investigated clinical AI acceptance among physicians and medical students around the world to provide implementation guidance. Materials and methods We conducted a two-stage study, involving a foundational systematic review of physician and medical student acceptance of clinical AI. This enabled us to design a suitable web-based questionnaire, which was then distributed among practitioners and trainees around the world. Results Sixty studies were included in the systematic review, and 758 respondents from 39 countries completed the online questionnaire. Five (62.50%) of eight studies reported 65% or higher awareness of the application of clinical AI, although only 10–30% of respondents had actually used AI, and 26 (74.28%) of 35 studies suggested a lack of AI knowledge. Our questionnaire uncovered a 38% awareness rate and a 20% utility rate of clinical AI, although 53% lacked basic knowledge of clinical AI. Forty-five studies mentioned attitudes toward clinical AI, and over 60% of participants in 38 (84.44%) studies were positive about AI, although they were also concerned about the potential for unpredictable, incorrect results. Seventy-seven percent were optimistic about the prospect of clinical AI. The support rate for the statement that AI could replace physicians ranged from 6 to 78% across the 40 studies that mentioned this topic. Five studies recommended that efforts should be made to increase collaboration. Our questionnaire showed that 68% disagreed that AI would become a surrogate physician but believed it should assist in clinical decision-making. Participants with different identities and experience, and from different countries, hold similar but subtly different attitudes. Conclusion Most physicians and medical students appear aware of the increasing application of clinical AI but lack practical experience and related knowledge. Overall, participants have positive but reserved attitudes toward AI. Despite mixed opinions around clinical AI becoming a surrogate physician, there was a consensus that collaboration between the two should be strengthened. Further education should be conducted to alleviate anxieties associated with change and adopting new technologies.
Affiliation(s)
- Mingyang Chen
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Bo Zhang
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ziting Cai
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Samuel Seery
- Faculty of Health and Medicine, Division of Health Research, Lancaster University, Lancaster, United Kingdom
- Nasra M. Ali
- The First Affiliated Hospital, Dalian Medical University, Dalian, China
- Ran Ren
- Global Health Research Center, Dalian Medical University, Dalian, China
- Youlin Qiao
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- *Correspondence: Youlin Qiao,
- Peng Xue
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Peng Xue,
- Yu Jiang
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu Jiang,
40
Real-World Implementation of Precision Psychiatry: A Systematic Review of Barriers and Facilitators. Brain Sci 2022; 12:brainsci12070934. [PMID: 35884740 PMCID: PMC9313345 DOI: 10.3390/brainsci12070934] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 07/08/2022] [Accepted: 07/12/2022] [Indexed: 01/27/2023] Open
Abstract
Background: Despite significant research progress surrounding precision medicine in psychiatry, there has been little tangible impact upon real-world clinical care. Objective: To identify barriers and facilitators affecting the real-world implementation of precision psychiatry. Method: A PRISMA-compliant systematic literature search of primary research studies was conducted in the Web of Science, Cochrane Central Register of Controlled Trials, PsycINFO and OpenGrey databases. A qualitative data synthesis was structured according to the 'Consolidated Framework for Implementation Research' (CFIR) key constructs. Results: Of 93,886 records screened, 28 studies were suitable for inclusion. The included studies reported 38 barriers and facilitators attributed to the CFIR constructs. Commonly reported barriers included: potential psychological harm to the service user (n = 11), cost and time investments (n = 9), potential economic and occupational harm to the service user (n = 8), poor accuracy and utility of the model (n = 8), and poor perceived competence in precision medicine amongst staff (n = 7). The most highly reported facilitator was the availability of adequate competence and skills training for staff (n = 7). Conclusions: Psychiatry faces widespread challenges in the implementation of precision medicine methods. Innovative solutions are required at the level of the individual and the wider system to bridge the translational gap and impact real-world care.
41
Hanis TM, Islam MA, Musa KI. Diagnostic Accuracy of Machine Learning Models on Mammography in Breast Cancer Classification: A Meta-Analysis. Diagnostics (Basel) 2022; 12:1643. [PMID: 35885548 PMCID: PMC9320089 DOI: 10.3390/diagnostics12071643] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 06/29/2022] [Accepted: 06/29/2022] [Indexed: 11/16/2022] Open
Abstract
In this meta-analysis, we aimed to estimate the diagnostic accuracy of machine learning models on digital mammograms and tomosynthesis in breast cancer classification and to assess the factors affecting its diagnostic accuracy. We searched for related studies in Web of Science, Scopus, PubMed, Google Scholar and Embase. The studies were screened in two stages to exclude the unrelated studies and duplicates. Finally, 36 studies containing 68 machine learning models were included in this meta-analysis. The area under the curve (AUC), hierarchical summary receiver operating characteristics (HSROC) curve, pooled sensitivity and pooled specificity were estimated using a bivariate Reitsma model. Overall AUC, pooled sensitivity and pooled specificity were 0.90 (95% CI: 0.85-0.90), 0.83 (95% CI: 0.78-0.87) and 0.84 (95% CI: 0.81-0.87), respectively. Additionally, the three significant covariates identified in this study were country (p = 0.003), source (p = 0.002) and classifier (p = 0.016). The type of data covariate was not statistically significant (p = 0.121). Additionally, Deeks' linear regression test indicated that there exists a publication bias in the included studies (p = 0.002). Thus, the results should be interpreted with caution.
Affiliation(s)
- Tengku Muhammad Hanis
- Department of Community Medicine, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Md Asiful Islam
- Department of Haematology, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Institute of Metabolism and Systems Research, University of Birmingham, Birmingham B15 2TT, UK
- Kamarul Imran Musa
- Department of Community Medicine, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
42
An interdisciplinary review of AI and HRM: Challenges and future directions. Hum Resour Manag Rev 2022. [DOI: 10.1016/j.hrmr.2022.100924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
43
Çitil ET, Çitil Canbay F. Artificial intelligence and the future of midwifery: What do midwives think about artificial intelligence? A qualitative study. Health Care Women Int 2022; 43:1510-1527. [PMID: 35452353 DOI: 10.1080/07399332.2022.2055760] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
Evidence on how AI will transform midwifery is insufficient. Our aim was to investigate midwives' opinions on the future of AI and midwifery. Semi-structured interviews were conducted with 18 midwives in Turkey. Three themes were identified: expectations included the advantages and conditional acceptance of robotic technology; prejudices reflected perceived shortcomings, a lack of human competencies, and trust issues; and concerns centred on midwifery care and the future of the profession. Midwives were overwhelmingly skeptical about AI replacing human capabilities and found the technology's potential limited.
Affiliation(s)
- Elif Tuğçe Çitil
- Department of Midwifery, Health Science Faculty, Kütahya Health Science University, Kütahya, Turkey
- Funda Çitil Canbay
- Department of Midwifery, Health Science Faculty, Atatürk University, Erzurum, Turkey
44
Gupta M, Ramar D, Vijayan R, Gupta N. Artificial Intelligence Tools for Suicide Prevention in Adolescents and Young Adults. Adolesc Psychiatry 2022. [DOI: 10.2174/2210676612666220408095913] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Background:
Artificial intelligence is making a significant transformation in human lives, and its application in medicine and healthcare has been observed to improve overall outcomes. There has been a quest for similar progress in mental health, given the lack of observable change in suicide prevention. In the last five years, an emerging body of empirical research has applied artificial intelligence (AI) and machine learning (ML) to mental health.
Objective:
To review the clinical applicability of the AI/ML-based tools in suicide prevention.
Methods:
Predicting suicidality is the compelling question at the focus of this research. We performed a broad literature search and identified 36 articles relevant to the objectives of this review. We summarize the available evidence and provide a brief overview of advances in this field.
Conclusion:
In the last five years, more evidence has supported the implementation of these algorithms in clinical practice. Their current clinical utility is limited to electronic health records, but they could be highly effective in conjunction with existing tools for suicide prevention. Other potential sources of relevant data include smart devices and social network sites. Serious questions about data privacy and ethics need more attention as these new modalities are developed in suicide research.
Affiliation(s)
- Dhanvendran Ramar
- Bellin Health Psychiatric Clinical Services & Medical College of Wisconsin, Green Bay, Wisconsin 54301
- Rekha Vijayan
- Bellin Health Psychiatric Clinical Services & Medical College of Wisconsin, Green Bay, Wisconsin 54301
- Nihit Gupta
- University of West Virginia, Reynolds Memorial Hospital, Glendale, WV 26038
45
Hitczenko K, Cowan HR, Goldrick M, Mittal VA. Racial and Ethnic Biases in Computational Approaches to Psychopathology. Schizophr Bull 2022; 48:285-288. [PMID: 34729605 PMCID: PMC8886581 DOI: 10.1093/schbul/sbab131] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Affiliation(s)
- Kasia Hitczenko
- Department of Linguistics, Northwestern University, Evanston, IL, USA
- Henry R Cowan
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, IL, USA
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Institute for Innovations in Developmental Sciences, Northwestern University, Evanston/Chicago, IL, USA
- Vijay A Mittal
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Institute for Innovations in Developmental Sciences, Northwestern University, Evanston/Chicago, IL, USA
- Department of Psychiatry, Northwestern University, Chicago, IL, USA
- Institute for Policy Research, Northwestern University, Evanston, IL, USA
- Medical Social Sciences, Northwestern University, Chicago, IL, USA
46
Blease C, Kharko A, Bernstein M, Bradley C, Houston M, Walsh I, Hägglund M, DesRoches C, Mandl KD. Machine learning in medical education: a survey of the experiences and opinions of medical students in Ireland. BMJ Health Care Inform 2022; 29:e100480. [PMID: 35105606] [PMCID: PMC8808371] [DOI: 10.1136/bmjhci-2021-100480]
Affiliation(s)
- Charlotte Blease
- Division of General Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Anna Kharko
- Faculty of Health and Human Sciences, University of Plymouth, Plymouth, UK
- Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
- Michael Bernstein
- School of Public Health, Brown University, Providence, Rhode Island, USA
- Colin Bradley
- School of Medicine, University College Cork, Cork, Ireland
- Muiris Houston
- School of Medicine, National University of Ireland Galway, Galway, Ireland
- School of Medicine, Trinity College Dublin, Dublin, Ireland
- Ian Walsh
- School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, Northern Ireland, UK
- Maria Hägglund
- Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
- Catherine DesRoches
- Division of General Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Kenneth D Mandl
- Harvard Medical School, Boston, Massachusetts, USA
- Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts, USA
47
Martin VP, Rouas JL, Philip P, Fourneret P, Micoulaud-Franchi JA, Gauld C. How Does Comparison With Artificial Intelligence Shed Light on the Way Clinicians Reason? A Cross-Talk Perspective. Front Psychiatry 2022; 13:926286. [PMID: 35757203] [PMCID: PMC9218339] [DOI: 10.3389/fpsyt.2022.926286]
Abstract
In order to create a dynamic for the psychiatry of the future, bringing together digital technology and clinical practice, we propose in this paper a cross-teaching translational roadmap comparing clinical reasoning with computational reasoning. Based on the relevant literature on clinical ways of thinking, we differentiate the process of clinical judgment into four main stages: collection of variables, theoretical background, construction of the model, and use of the model. For each step, we detail the parallels between (i) clinical reasoning, (ii) the methodology an ML engineer follows to build an ML model, and (iii) the ML model itself. Such an analysis supports the understanding of the empirical practice of each discipline (psychiatry and ML engineering). Thus, ML not only brings methods to the clinician but also informs educational issues for clinical practice. Psychiatry can draw on developments in ML reasoning to shed light on its own practice. In return, this analysis highlights the importance of the subjectivity of ML engineers and their methodologies.
Affiliation(s)
- Vincent P Martin
- Université de Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR5800, Talence, France
- Université de Bordeaux, CNRS, SANPSY, UMR6033, CHU de Bordeaux, Bordeaux, France
- Jean-Luc Rouas
- Université de Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR5800, Talence, France
- Pierre Philip
- Université de Bordeaux, CNRS, SANPSY, UMR6033, CHU de Bordeaux, Bordeaux, France
- University Sleep Clinic, Services of Functional Exploration of the Nervous System, University Hospital of Bordeaux, Bordeaux, France
- Pierre Fourneret
- Department of Child Psychiatry, Hospices Civils de Lyon, Lyon, France
- Jean-Arthur Micoulaud-Franchi
- Université de Bordeaux, CNRS, SANPSY, UMR6033, CHU de Bordeaux, Bordeaux, France
- University Sleep Clinic, Services of Functional Exploration of the Nervous System, University Hospital of Bordeaux, Bordeaux, France
- Christophe Gauld
- Department of Child Psychiatry, Hospices Civils de Lyon, Lyon, France
- IHPST, CNRS UMR 8590, Sorbonne University, Paris, France
48
Perrier E, Rifai M, Terzic A, Dubois C, Cohen JF. Knowledge, attitudes, and practices towards artificial intelligence among young pediatricians: A nationwide survey in France. Front Pediatr 2022; 10:1065957. [PMID: 36619510] [PMCID: PMC9816325] [DOI: 10.3389/fped.2022.1065957]
Abstract
OBJECTIVE To assess the knowledge, attitudes, and practices (KAP) towards artificial intelligence (AI) among young pediatricians in France. METHODS We invited young French pediatricians to participate in an online survey. Invitees were identified through various email listings and social media. We conducted a descriptive analysis and explored whether survey responses varied according to respondents' previous training in AI and level of clinical experience (i.e., residents vs. experienced doctors). RESULTS In total, 165 French pediatricians participated in the study (median age 27 years, women 78%, residents 64%). While 90% of participants declared they understood the term "artificial intelligence", only 40% understood the term "deep learning". Most participants expected AI would lead to improvements in healthcare (e.g., better access to healthcare, 80%; diagnostic assistance, 71%), and 86% declared they would favor implementing AI tools in pediatrics. Fifty-nine percent of respondents declared seeing AI as a threat to medical data security and 35% as a threat to the ethical and human dimensions of medicine. Thirty-nine percent of respondents feared losing clinical skills because of AI, and 6% feared losing their job because of AI. Only 5% of respondents had received specific training in AI, while 87% considered implementing such programs would be necessary. Respondents who received training in AI had significantly better knowledge and a higher probability of having encountered AI tools in their medical practice (p < 0.05 for both). There was no statistically significant difference between residents' and experienced doctors' responses. CONCLUSION In this survey, most young French pediatricians had favorable views toward AI, but a large proportion expressed concerns regarding the ethical, societal, and professional issues linked with the implementation of AI.
Affiliation(s)
- Emma Perrier
- Child Neurological Rehabilitation Unit and Learning Disorders Reference Centre, Assistance Publique-Hôpitaux de Paris, Hôpital Bicêtre, Université Paris-Saclay, Le Kremlin-Bicêtre, France
- Mahmoud Rifai
- Pediatric Intensive Care Unit, Assistance Publique-Hôpitaux de Paris, Hôpital Raymond-Poincaré, Université Paris-Saclay, Paris, France
- Arnaud Terzic
- Pediatric Intensive Care and Neonatal Medicine, Assistance Publique-Hôpitaux de Paris, Hôpital Bicêtre, Université Paris-Saclay, Le Kremlin-Bicêtre, France
- Constance Dubois
- Centre of Research in Epidemiology and Statistics, Inserm UMR 1153, Université Paris Cité, Paris, France
- Jérémie F Cohen
- Centre of Research in Epidemiology and Statistics, Inserm UMR 1153, Université Paris Cité, Paris, France
- Department of General Pediatrics and Pediatric Infectious Disease, Assistance Publique-Hôpitaux de Paris, Hôpital Necker-Enfants Malades, Université Paris Cité, Paris, France
49
Nilsen P, Svedberg P, Nygren J, Frideros M, Johansson J, Schueller S. Accelerating the impact of artificial intelligence in mental healthcare through implementation science. Implement Res Pract 2022; 3:26334895221112033. [PMID: 37091110] [PMCID: PMC9924259] [DOI: 10.1177/26334895221112033]
Abstract
Background The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps regarding how to implement and best use AI to add value to mental healthcare services, providers, and consumers. The aim of this paper is to identify challenges and opportunities for AI use in mental healthcare and to describe key insights from implementation science of potential relevance to understanding and facilitating AI implementation in mental healthcare. Methods The paper is based on a selective review of articles concerning AI in mental healthcare and implementation science. Results Research in implementation science has established the importance of considering and planning for implementation from the start, the progression of implementation through different stages, and the appreciation of determinants at multiple levels. Determinant frameworks and implementation theories have been developed to understand and explain how different determinants affect implementation. AI research should explore the relevance of these determinants for AI implementation. Implementation strategies to support AI implementation must address determinants specific to AI implementation in mental health. There might also be a need to develop new theoretical approaches, or to augment and recontextualize existing ones. Implementation outcomes may have to be adapted to be relevant in an AI implementation context. Conclusion Knowledge derived from implementation science could provide an important starting point for research on the implementation of AI in mental healthcare. This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research. However, when taking advantage of this existing knowledge base, it is also important to be explorative and to study AI implementation in health and mental healthcare as a new phenomenon in its own right, since implementing AI may differ in various ways from implementing evidence-based practices in terms of which implementation determinants, strategies, and outcomes are most relevant. Plain Language Summary The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps concerning how to implement and best use AI to add value to mental healthcare services, providers, and consumers. This paper is based on a selective review of articles concerning AI in mental healthcare and implementation science, with the aim of identifying challenges and opportunities for the use of AI in mental healthcare and describing key insights from implementation science of potential relevance to understanding and facilitating AI implementation in mental healthcare. AI offers opportunities for identifying the patients most in need of care or the interventions that might be most appropriate for a given population or individual. AI also offers opportunities for supporting a more reliable diagnosis of psychiatric disorders and for ongoing monitoring and tailoring during the course of treatment. However, AI implementation challenges exist at organizational/policy, individual, and technical levels, making it relevant to draw on implementation science knowledge to understand and facilitate the implementation of AI in mental healthcare. Knowledge derived from implementation science could provide an important starting point for research on AI implementation in mental healthcare. This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research.
Affiliation(s)
- Petra Svedberg
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens Nygren
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Stephen Schueller
- Psychological Science, University of California Irvine, Irvine, CA, USA
50
Boucher EM, Harake NR, Ward HE, Stoeckl SE, Vargas J, Minkel J, Parks AC, Zilca R. Artificially intelligent chatbots in digital mental health interventions: a review. Expert Rev Med Devices 2021; 18:37-49. [PMID: 34872429] [DOI: 10.1080/17434440.2021.2013200]
Abstract
INTRODUCTION Increasing demand for mental health services and the expanding capabilities of artificial intelligence (AI) in recent years have driven the development of digital mental health interventions (DMHIs). To date, AI-based chatbots have been integrated into DMHIs to support diagnostics and screening, symptom management and behavior change, and content delivery. AREAS COVERED We summarize the current landscape of DMHIs, with a focus on AI-based chatbots. Happify Health's AI chatbot, Anna, serves as a case study for discussion of potential challenges and how these might be addressed, and demonstrates the promise of chatbots as effective, usable, and adoptable within DMHIs. Finally, we discuss ways in which future research can advance the field, addressing topics including perceptions of AI, the impact of individual differences, and implications for privacy and ethics. EXPERT OPINION Our discussion concludes with a speculative viewpoint on the future of AI in DMHIs, including the use of chatbots, the evolution of AI, dynamic mental health systems, hyper-personalization, and human-like intervention delivery.