1. Chang LC, Wang YN, Lin HL, Liao LL. Registered Nurses' Attitudes Towards ChatGPT and Self-Directed Learning: A Cross-Sectional Study. J Adv Nurs 2024. PMID: 39382347. DOI: 10.1111/jan.16519.
Abstract
BACKGROUND: Self-directed, lifelong learning is essential for nurses' competence in complex healthcare environments, which are characterised by rapid advancements in medicine and technology and by nursing shortages. Previous studies have demonstrated that ChatGPT technology fosters self-directed learning by motivating users to engage with it.
OBJECTIVES: To explore the relationships amongst socio-demographic data, attitudes towards ChatGPT use, and self-directed learning amongst registered nurses in Taiwan.
METHODS: A cross-sectional study design with an online survey was adopted. Registered nurses from various healthcare settings were recruited through Facebook and LINE, a widely used messaging application in East Asia, reaching over 1000 nurses across five distinct online groups. The survey collected socio-demographic characteristics, attitudes towards ChatGPT use, and a self-directed learning scale. Data were analysed using descriptive statistics, t-tests, Pearson's correlation, one-way analysis of variance, and multiple linear regression.
RESULTS: Amongst the 330 participants, 50.6% worked in hospitals, 51.8% had more than 15 years of work experience, and 78.2% did not hold supervisory positions; 46.7% had used ChatGPT. For all nurses, work experience and awareness of ChatGPT statistically significantly predicted self-directed learning, explaining 32.0% of the variance. For those familiar with ChatGPT, work experience in nursing and the technological/social influence of ChatGPT statistically significantly predicted self-directed learning, explaining 35.3% of the variance.
CONCLUSIONS: Work experience in nursing provides critical opportunities for professional development and training. ChatGPT-supported self-directed learning should therefore be customised to degrees of experience to optimise continuing education.
IMPLICATIONS FOR NURSING MANAGEMENT AND HEALTH POLICY: This study explores nurses' diverse use of and attitudes towards ChatGPT for self-directed learning. It suggests that administrators customise support and training when incorporating ChatGPT into professional development, accounting for nurses' varied experiences to enhance learning outcomes.
PATIENT OR PUBLIC CONTRIBUTION: No patient or public contribution.
REPORTING METHOD: This study adhered to the relevant cross-sectional STROBE guidelines.
Affiliation(s)
- Li-Chun Chang
  - School of Nursing, Chang Gung University of Science and Technology, Taoyuan, Taiwan, R.O.C
  - Department of Nursing, Linkou Branch, Chang Gung Memorial Hospital, Taoyuan, Taiwan, R.O.C
  - School of Nursing, College of Medicine, Chang Gung University, Taoyuan, Taiwan, R.O.C
- Ya-Ni Wang
  - School of Nursing, Chang Gung University of Science and Technology, Taoyuan, Taiwan, R.O.C
- Hui-Ling Lin
  - School of Nursing, Chang Gung University of Science and Technology, Taoyuan, Taiwan, R.O.C
  - Department of Nursing, Linkou Branch, Chang Gung Memorial Hospital, Taoyuan, Taiwan, R.O.C
  - School of Nursing, College of Medicine, Chang Gung University, Taoyuan, Taiwan, R.O.C
  - Taipei Medical University, Taipei, Taiwan, R.O.C
- Li-Ling Liao
  - Department of Public Health, College of Health Science, Kaohsiung Medical University, Kaohsiung City, Taiwan, R.O.C
  - Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung City, Taiwan, R.O.C
2. Kumari K, Pahuja SK, Kumar S. A Comprehensive Examination of ChatGPT's Contribution to the Healthcare Sector and Hepatology. Dig Dis Sci 2024. PMID: 39354272. DOI: 10.1007/s10620-024-08659-4.
Abstract
Artificial intelligence and natural language processing technologies have demonstrated significant promise across several domains within the medical and healthcare sectors. One of the primary challenges in implementing ChatGPT in healthcare is the requirement for precise and up-to-date data; where sensitive medical information is involved, concerns regarding privacy and security must also be carefully addressed. This paper outlines ChatGPT and its relevance to the healthcare industry, discusses the key aspects of ChatGPT's workflow, and highlights the features of ChatGPT most applicable to the healthcare domain. The review then applies the ChatGPT model to the investigation of disorders associated with the hepatic system, demonstrating its possible use in supporting researchers and clinicians in analyzing and interpreting liver-related data, thereby improving disease diagnosis, prognosis, and patient care.
Affiliation(s)
- Kabita Kumari
  - Department of Instrumentation and Control Engineering, Dr B. R. Ambedkar National Institute of Technology, Jalandhar, Punjab, 144011, India
- Sharvan Kumar Pahuja
  - Department of Instrumentation and Control Engineering, Dr B. R. Ambedkar National Institute of Technology, Jalandhar, Punjab, 144011, India
- Sanjeev Kumar
  - Biomedical Instrumentation Unit, CSIR-Central Scientific Instruments Organisation (CSIR-CSIO), Chandigarh, India
3. Cheong KX, Zhang C, Tan TE, Fenner BJ, Wong WM, Teo KY, Wang YX, Sivaprasad S, Keane PA, Lee CS, Lee AY, Cheung CMG, Wong TY, Cheong YG, Song SJ, Tham YC. Comparing generative and retrieval-based chatbots in answering patient questions regarding age-related macular degeneration and diabetic retinopathy. Br J Ophthalmol 2024; 108:1443-1449. PMID: 38749531. DOI: 10.1136/bjo-2023-324533.
Abstract
BACKGROUND/AIMS: To compare the performance of generative versus retrieval-based chatbots in answering patient inquiries regarding age-related macular degeneration (AMD) and diabetic retinopathy (DR).
METHODS: We evaluated four chatbots in a cross-sectional study: three generative models (ChatGPT-4, ChatGPT-3.5 and Google Bard) and a retrieval-based model (OcularBERT). Their accuracy in responding to 45 questions (15 AMD, 15 DR and 15 others) was evaluated and compared. Three masked retinal specialists graded the responses on a three-point Likert scale: 2 (good, error-free), 1 (borderline) or 0 (poor, with significant inaccuracies). The scores were aggregated, ranging from 0 to 6. Based on majority consensus among the graders, the responses were also classified as 'Good', 'Borderline' or 'Poor' quality.
RESULTS: Overall, ChatGPT-4 and ChatGPT-3.5 outperformed the other chatbots, both achieving median scores (IQR) of 6 (1), compared with 4.5 (2) for Google Bard and 2 (1) for OcularBERT (all p≤8.4×10⁻³). Based on the consensus approach, 83.3% of ChatGPT-4's responses and 86.7% of ChatGPT-3.5's were rated as 'Good', surpassing Google Bard (50%) and OcularBERT (10%) (all p≤1.4×10⁻²). ChatGPT-4 and ChatGPT-3.5 had no 'Poor' rated responses, whereas Google Bard produced 6.7% and OcularBERT 20%. Across question types, ChatGPT-4 outperformed Google Bard only for AMD, and ChatGPT-3.5 outperformed Google Bard for DR and others.
CONCLUSION: ChatGPT-4 and ChatGPT-3.5 demonstrated superior performance, followed by Google Bard and OcularBERT. Generative chatbots are potentially capable of answering domain-specific questions outside their original training, but further validation studies are required prior to real-world implementation.
Affiliation(s)
- Kai Xiong Cheong
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Chenxi Zhang
  - Chinese Academy of Medical Sciences & Peking Union Medical College Hospital, Beijing, China
- Tien-En Tan
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Beau J Fenner
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology & Visual Sciences Academic Clinical Program (Eye-ACP), Duke-NUS Medical School, Singapore
- Wendy Meihua Wong
  - Department of Ophthalmology, National University Hospital, Singapore
  - Centre for Innovation and Precision Eye Health; and Department of Ophthalmology, National University of Singapore Yong Loo Lin School of Medicine, Singapore
- Kelvin Yc Teo
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology & Visual Sciences Academic Clinical Program (Eye-ACP), Duke-NUS Medical School, Singapore
- Ya Xing Wang
  - Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital University of Medical Science, Beijing, China
- Pearse A Keane
  - Medical Retina, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Cecilia Sungmin Lee
  - Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Aaron Y Lee
  - Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Chui Ming Gemmy Cheung
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology & Visual Sciences Academic Clinical Program (Eye-ACP), Duke-NUS Medical School, Singapore
- Tien Yin Wong
  - Tsinghua Medicine, Tsinghua University, Beijing, China
  - School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, People's Republic of China
- Su Jeong Song
  - Kangbuk Samsung Hospital, Jongno-gu, Seoul, South Korea
- Yih Chung Tham
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology & Visual Sciences Academic Clinical Program (Eye-ACP), Duke-NUS Medical School, Singapore
  - Centre for Innovation and Precision Eye Health; and Department of Ophthalmology, National University of Singapore Yong Loo Lin School of Medicine, Singapore
4. Wang Y, Liu C, Zhou K, Zhu T, Han X. Towards regulatory generative AI in ophthalmology healthcare: a security and privacy perspective. Br J Ophthalmol 2024; 108:1349-1353. PMID: 38834290. DOI: 10.1136/bjo-2024-325167.
Abstract
As the healthcare community increasingly harnesses the power of generative artificial intelligence (AI), critical issues of security, privacy and regulation take centre stage. In this paper, we explore the security and privacy risks of generative AI from model-level and data-level perspectives. Moreover, we elucidate the potential consequences and case studies within the domain of ophthalmology. Model-level risks include knowledge leakage from the model and model safety under AI-specific attacks, while data-level risks involve unauthorised data collection and data accuracy concerns. Within the healthcare context, these risks can bear severe consequences, encompassing potential breaches of sensitive information, violating privacy rights and threats to patient safety. This paper not only highlights these challenges but also elucidates governance-driven solutions that adhere to AI and healthcare regulations. We advocate for preparedness against potential threats, call for transparency enhancements and underscore the necessity of clinical validation before real-world implementation. The objective of security and privacy improvement in generative AI warrants emphasising the role of ophthalmologists and other healthcare providers, and the timely introduction of comprehensive regulations.
Affiliation(s)
- Yueye Wang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
- Chi Liu
  - Faculty of Data Science, City University of Macau, Macao SAR, China
- Keyao Zhou
  - Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangzhou, Guangdong, China
  - Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Tianqing Zhu
  - Faculty of Data Science, City University of Macau, Macao SAR, China
- Xiaotong Han
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
5. Chen SY, Kuo HY, Chang SH. Perceptions of ChatGPT in healthcare: usefulness, trust, and risk. Front Public Health 2024; 12:1457131. PMID: 39346584. PMCID: PMC11436320. DOI: 10.3389/fpubh.2024.1457131.
Abstract
Introduction: This study explores perceptions of ChatGPT in healthcare settings in Taiwan, focusing on its usefulness, trust, and associated risks. As AI technologies like ChatGPT increasingly influence various sectors, their potential in public health education, promotion, medical education, and clinical practice is significant but not without challenges. The study aims to assess how individuals with and without healthcare-related education perceive and adopt ChatGPT, contributing to a deeper understanding of AI's role in enhancing public health outcomes.
Methods: An online survey was conducted among 659 university and graduate students, all of whom had prior experience using ChatGPT. The survey measured perceptions of ChatGPT's ease of use, novelty, usefulness, trust, and risk, particularly within clinical practice, medical education, and research settings. Multiple linear regression models were used to analyze how these factors influence perceptions of healthcare applications, comparing responses between healthcare and non-healthcare majors.
Results: Both healthcare and non-healthcare majors find ChatGPT more useful in medical education and research than in clinical practice. Regression analysis revealed that for healthcare majors, general trust is crucial for ChatGPT's adoption in clinical practice and influences its use in medical education and research; for non-healthcare majors, novelty, perceived general usefulness, and trust are key predictors. Interestingly, while healthcare majors were cautious about ease of use, fearing it might increase risk, non-healthcare majors associated increased complexity with greater trust.
Conclusion: This study highlights the varying expectations between healthcare and non-healthcare majors regarding ChatGPT's role in healthcare. The findings suggest that AI applications should be tailored to specific user needs, particularly in clinical practice, where trust and reliability are paramount. AI tools like ChatGPT also hold significant potential for public health education and promotion, as these technologies can enhance health literacy and encourage behavior change. These insights can inform future healthcare practices and policies by guiding the thoughtful and effective integration of AI tools, ensuring they complement clinical judgment, enhance educational outcomes, support research integrity, and ultimately improve public health outcomes.
Affiliation(s)
- Su-Yen Chen
  - Institute of Learning Sciences and Technologies, National Tsing Hua University, Hsinchu, Taiwan
- H Y Kuo
  - Institute of Learning Sciences and Technologies, National Tsing Hua University, Hsinchu, Taiwan
- Shu-Hao Chang
  - Department of Sport Management, College of Health and Human Performance, University of Florida, Gainesville, FL, United States
6. Kisvarday S, Yan A, Yarahuan J, Kats DJ, Ray M, Kim E, Hong P, Spector J, Bickel J, Parsons C, Rabbani N, Hron JD. ChatGPT Use Among Pediatric Health Care Providers: Cross-Sectional Survey Study. JMIR Form Res 2024; 8:e56797. PMID: 39265163. PMCID: PMC11427860. DOI: 10.2196/56797.
Abstract
BACKGROUND: The public launch of OpenAI's ChatGPT platform generated immediate interest in the use of large language models (LLMs). Health care institutions are now grappling with establishing policies and guidelines for the use of these technologies, yet little is known about how health care providers view LLMs in medical settings. Moreover, there are no studies assessing how pediatric providers are adopting these readily accessible tools.
OBJECTIVE: The aim of this study was to determine how pediatric providers are currently using LLMs in their work, as well as their interest in using a Health Insurance Portability and Accountability Act (HIPAA)-compliant version of ChatGPT in the future.
METHODS: A survey instrument consisting of structured and unstructured questions was iteratively developed by a team of informaticians from various pediatric specialties. The survey was sent via Research Electronic Data Capture (REDCap) to all Boston Children's Hospital pediatric providers. Participation was voluntary and uncompensated, and all survey responses were anonymous.
RESULTS: Surveys were completed by 390 pediatric providers. Approximately 50% (197/390) of respondents had used an LLM; of these, almost 75% (142/197) were already using an LLM for nonclinical work and 27% (52/195) for clinical work. Providers detailed the various ways they are currently using an LLM in their clinical and nonclinical work. Only 29% (105/362) of respondents indicated that ChatGPT should be used for patient care in its present state; however, 73.8% (273/368) reported they would use a HIPAA-compliant version of ChatGPT if one were available. Providers' proposed future uses of LLMs in health care are described.
CONCLUSIONS: Despite significant concerns and barriers to LLM use in health care, pediatric providers are already using LLMs at work. This study gives policy makers needed information about how providers are using LLMs clinically.
Affiliation(s)
- Susannah Kisvarday
  - Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
  - Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States
- Adam Yan
  - Division of Hematology/Oncology, The Hospital for Sick Kids, Toronto, ON, Canada
  - Department of Pediatrics, The University of Toronto, Toronto, ON, Canada
- Julia Yarahuan
  - Children's Healthcare of Atlanta, Atlanta, GA, United States
  - School of Medicine, Emory University, Atlanta, GA, United States
- Daniel J Kats
  - Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
  - Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States
- Mondira Ray
  - Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
  - Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States
- Eugene Kim
  - Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
  - Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States
- Peter Hong
  - Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
  - Department of Pediatrics, Harvard Medical School, Boston, MA, United States
- Jacob Spector
  - Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
  - Department of Pediatrics, Harvard Medical School, Boston, MA, United States
- Chase Parsons
  - Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
  - Department of Pediatrics, Harvard Medical School, Boston, MA, United States
- Naveed Rabbani
  - Department of Pediatrics, Harvard Medical School, Boston, MA, United States
  - Pediatric Physicians' Organization at Children's Hospital, Wellesley, MA, United States
  - Computational Health Informatics Program, Boston Children's Hospital, Boston, MA, United States
- Jonathan D Hron
  - Division of General Pediatrics, Boston Children's Hospital, Boston, MA, United States
  - Department of Pediatrics, Harvard Medical School, Boston, MA, United States
7. Hassan M, Kushniruk A, Borycki E. Barriers to and Facilitators of Artificial Intelligence Adoption in Health Care: Scoping Review. JMIR Hum Factors 2024; 11:e48633. PMID: 39207831. PMCID: PMC11393514. DOI: 10.2196/48633.
Abstract
BACKGROUND: Artificial intelligence (AI) use cases in health care are on the rise, with the potential to improve operational efficiency and care outcomes. However, the translation of AI into practical, everyday use has been limited, as its effectiveness relies on successful implementation and adoption by clinicians, patients, and other health care stakeholders.
OBJECTIVE: As adoption is a key factor in the successful proliferation of an innovation, this scoping review aimed to present an overview of the barriers to and facilitators of AI adoption in health care.
METHODS: A scoping review was conducted using the guidance provided by the Joanna Briggs Institute and the framework proposed by Arksey and O'Malley. The MEDLINE, IEEE Xplore, and ScienceDirect databases were searched for publications in English, published between January 2011 and December 2023, that reported on barriers to or facilitators of AI adoption in health care. The review placed no limitations on the health care setting (hospital or community) or the population (patients, clinicians, physicians, or health care administrators). A thematic analysis was conducted on the selected articles to map factors associated with the barriers to and facilitators of AI adoption in health care.
RESULTS: A total of 2514 articles were identified in the initial search. After title and abstract reviews, 50 (1.99%) articles were included in the final analysis. Most were empirical studies, literature reviews, reports, and thought articles. Approximately 18 categories of barriers and facilitators were identified and organized sequentially to provide considerations for AI development, implementation, and the overall structure needed to facilitate adoption.
CONCLUSIONS: The literature review revealed that trust is a significant catalyst of adoption and is affected by several of the barriers identified in this review. A governance structure can be a key facilitator, among others, in ensuring that all the elements identified as barriers are addressed appropriately. The findings demonstrate that the implementation of AI in health care still depends, in many ways, on the establishment of regulatory and legal frameworks. Further research into a combination of governance and implementation frameworks, models, or theories to enhance trust and specifically enable adoption is needed to guide those translating AI research into practice. Future research could also be expanded to include patients' perspectives on complex, high-risk AI use cases and how AI applications affect clinical practice and patient care, including sociotechnical considerations, as more algorithms are implemented in actual clinical environments.
Affiliation(s)
- Masooma Hassan
  - Department of Health Information Science, University of Victoria, Victoria, BC, Canada
- Andre Kushniruk
  - Department of Health Information Science, University of Victoria, Victoria, BC, Canada
- Elizabeth Borycki
  - Department of Health Information Science, University of Victoria, Victoria, BC, Canada
8. Pan G, Ni J. A cross sectional investigation of ChatGPT-like large language models application among medical students in China. BMC Med Educ 2024; 24:908. PMID: 39180023. PMCID: PMC11342543. DOI: 10.1186/s12909-024-05871-8.
Abstract
OBJECTIVE: To investigate medical students' level of understanding of and trust in ChatGPT-like large language models, as well as their utilization of and attitudes towards these models.
METHODS: Data collection was concentrated from December 2023 to mid-January 2024, using a self-designed questionnaire to assess the use of large language models among undergraduate medical students at Anhui Medical University. The normality of the data was assessed with Shapiro-Wilk tests. Chi-square tests were used for comparisons of categorical variables, Mann-Whitney U tests for comparisons of ordinal variables and non-normal continuous variables between two groups, Kruskal-Wallis H tests for comparisons of ordinal variables between multiple groups, and Bonferroni tests for post hoc comparisons.
RESULTS: A total of 1774 questionnaires were distributed and 1718 valid questionnaires were collected, an effective rate of 96.84%. Among these students, 34.5% had heard of and used large language models. There were statistically significant differences in understanding of large language models between genders (p < 0.001), grade levels (junior versus senior students) (p = 0.03), and majors (p < 0.001): male students, junior students, and public health management students had a higher level of understanding of these models. Gender and major had statistically significant effects on the degree of trust in large language models (p = 0.004; p = 0.02), with male and nursing students exhibiting higher trust. As for usage, male and junior students showed a significantly higher proportion of using these models for assisted learning (p < 0.001). Over two-thirds of the students (66.7%) held neutral sentiments regarding large language models, with only 51 (3.0%) expressing pessimism. There were significant gender-based disparities in attitudes towards large language models, with males exhibiting a more optimistic attitude (p < 0.001). Notably, among students with different levels of knowledge of and trust in large language models, statistically significant differences were observed in their perceptions of the shortcomings and benefits of these models.
CONCLUSION: Gender, grade level, and major influenced students' understanding and utilization of large language models. This also suggests the feasibility of integrating large language models with traditional medical education to further enhance teaching effectiveness in the future.
Affiliation(s)
- Guixia Pan
  - Department of Epidemiology and Biostatistics, School of Public Health, Anhui Medical University, Meishan Road 81, Hefei, 230032, Anhui, China
- Jing Ni
  - Department of Epidemiology and Biostatistics, School of Public Health, Anhui Medical University, Meishan Road 81, Hefei, 230032, Anhui, China
9. Teasdale A, Mills L, Costello R. Artificial Intelligence-Powered Surgical Consent: Patient Insights. Cureus 2024; 16:e68134. PMID: 39347259. PMCID: PMC11438496. DOI: 10.7759/cureus.68134.
Abstract
Introduction: The integration of artificial intelligence (AI) in healthcare has revolutionized patient interactions and service delivery. AI's role extends from supporting clinical diagnostics and enhancing operational efficiency to potentially improving informed consent processes in surgical settings. This study investigates the application of AI, particularly large language models such as OpenAI's ChatGPT, in facilitating surgical consent, focusing on patient understanding, satisfaction, and trust.
Methods: We employed a mixed-methods approach involving 86 participants, including laypeople and medical staff, who engaged in a simulated AI-driven consent process for a tonsillectomy. Participants interacted with ChatGPT-4, which provided detailed explanations of the procedure, its risks, and its benefits. Post-interaction, participants completed a survey assessing their experience through quantitative and qualitative measures.
Results: Participants responded to AI in the surgical consent process with cautious optimism. Notably, 71% felt adequately informed, 86% found the information clear, and 71% felt they could make informed decisions. Overall, 71% were satisfied, 57% felt respected and confident, and 57% would recommend the process, indicating areas needing refinement. However, concerns about data privacy and the lack of personal interaction were significant, with only 42% reassured about the security of their data. The standardization of information provided by AI was appreciated for potentially reducing human error, but the absence of empathetic human interaction was noted as a drawback.
Discussion: While AI shows promise in enhancing the consistency and comprehensiveness of information delivered during the consent process, significant challenges remain, including addressing data privacy concerns and bridging the gap in personal interaction. The potential for AI to misinform due to system "hallucinations" or inherent biases also needs consideration. Future research should focus on refining AI interactions to support more nuanced and empathetic engagement, ensuring that AI supplements rather than replaces human elements in healthcare.
Conclusion: The integration of AI into surgical consent processes could standardize and potentially improve the delivery of information but must be balanced with efforts to maintain the critical human elements of care. Collaborative efforts between developers, clinicians, and ethicists are essential to optimize AI use, ensuring it complements the traditional consent process while enhancing patient satisfaction and trust.
Affiliation(s)
- Laura Mills
  - General Practice, Dyfed Road Surgery, Swansea, GBR
10. Lin HL, Liao LL, Wang YN, Chang LC. Attitude and utilization of ChatGPT among registered nurses: A cross-sectional study. Int Nurs Rev 2024. PMID: 38979771. DOI: 10.1111/inr.13012.
Abstract
AIM This study explores the influencing factors of attitudes and behaviors toward use of ChatGPT based on the Technology Acceptance Model among registered nurses in Taiwan. BACKGROUND The complexity of medical services and nursing shortages increase workloads. ChatGPT swiftly answers medical questions, provides clinical guidelines, and assists with patient information management, thereby improving nursing efficiency. INTRODUCTION To facilitate the development of effective ChatGPT training programs, it is essential to examine registered nurses' attitudes toward and utilization of ChatGPT across diverse workplace settings. METHODS An anonymous online survey was used to collect data from over 1000 registered nurses recruited through social media platforms between November 2023 and January 2024. Descriptive statistics and multiple linear regression analyses were conducted for data analysis. RESULTS Among respondents, some were unfamiliar with ChatGPT, while others had used it before, with higher usage among males, higher-educated individuals, experienced nurses, and supervisors. Gender and work settings influenced perceived risks, and those familiar with ChatGPT recognized its social impact. Perceived risk and usefulness significantly influenced its adoption. DISCUSSION Nurses' attitudes toward ChatGPT vary based on gender, education, experience, and role. Positive perceptions emphasize its usefulness, while risk concerns affect adoption. The insignificant role of perceived ease of use highlights ChatGPT's user-friendly nature. CONCLUSION Over half of the surveyed nurses had used or were familiar with ChatGPT and showed positive attitudes toward its use. Establishing rigorous guidelines to enhance their interaction with ChatGPT is crucial for future training.
IMPLICATIONS FOR NURSING AND HEALTH POLICY Nurse managers should understand registered nurses' attitudes toward ChatGPT and integrate it into in-service education with tailored support and training, including appropriate prompt formulation and advanced decision-making, to prevent misuse.
Affiliation(s)
- Hui-Ling Lin
- Department of Nursing, Linkou Branch, Chang Gung Memorial Hospital, Taoyuan, Taiwan, ROC
- School of Nursing, College of Medicine, Chang Gung University, Taoyuan, Taiwan, ROC
- School of Nursing, Chang Gung University of Science and Technology, Gui-Shan Town, Taoyuan, Taiwan, ROC
- Taipei Medical University, Taipei, Taiwan
- Li-Ling Liao
- Department of Public Health, College of Health Science, Kaohsiung Medical University, Kaohsiung City, Taiwan
- Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung City, Taiwan
- Ya-Ni Wang
- School of Nursing, College of Medicine, Chang Gung University, Taoyuan, Taiwan, ROC
- Li-Chun Chang
- Department of Nursing, Linkou Branch, Chang Gung Memorial Hospital, Taoyuan, Taiwan, ROC
- School of Nursing, College of Medicine, Chang Gung University, Taoyuan, Taiwan, ROC
- School of Nursing, Chang Gung University of Science and Technology, Gui-Shan Town, Taoyuan, Taiwan, ROC
11
Syed W, Bashatah A, Alharbi K, Bakarman SS, Asiri S, Alqahtani N. Awareness and Perceptions of ChatGPT Among Academics and Research Professionals in Riyadh, Saudi Arabia: Implications for Responsible AI Use. Med Sci Monit 2024; 30:e944993. [PMID: 38976518 PMCID: PMC11302236 DOI: 10.12659/msm.944993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2024] [Accepted: 05/21/2024] [Indexed: 07/10/2024] Open
Abstract
BACKGROUND Chat Generative Pre-Trained Transformer (ChatGPT) was created by OpenAI and is a powerful tool used in research. This study aimed to assess the awareness and perceptions of ChatGPT among researchers and academicians at King Saud University, Riyadh, Saudi Arabia. MATERIAL AND METHODS A self-administered cross-sectional study was conducted among academicians and researchers from November 2023 to March 2024 using electronic questionnaires prepared in Google Forms. The data were collected using the Tawasul platform, which sent the electronic questionnaires to the targeted population. To determine the association between variables, the chi-square or Fisher exact test was applied at a significance level of <0.05. To find predictors of use of ChatGPT, multiple linear regression analysis was applied. RESULTS A response rate of 66.5% was obtained. Among those, 60.2% (n=121) had expertise in computer skills and 63.7% were familiar with ChatGPT. The respondents' gender, age, and specialization had a significant association with familiarity with ChatGPT (p<0.001). The results of the multiple linear regression analysis revealed a significant association between the use of ChatGPT and age (B=0.048; SE=0.022; t=2.207; p=.028; CI=0.005-0.092), gender (B=0.330; SE=0.067; t=4.906; p=.001; CI=0.197-0.462), and nationality (B=0.194; SE=0.065; t=2.982; p=.003; CI=0.066-0.322). CONCLUSIONS The growing use of ChatGPT in scholarly research offers a chance to promote the ethical and responsible use of artificial intelligence. Future studies should concentrate on assessing ChatGPT's clinical results and comparing its effectiveness with that of other AI tools.
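The regression statistics quoted in this abstract can be cross-checked arithmetically: with an unstandardized coefficient B and standard error SE, the implied t statistic is B/SE and the large-sample 95% confidence interval is approximately B ± 1.96·SE. A minimal sketch using only the values reported above (the normal approximation is an assumption for illustration, not the authors' stated computation):

```python
# Consistency check of the reported regression coefficients.
# Assumption: large-sample normal approximation, 95% CI ~ B +/- 1.96*SE.

predictors = {
    # name: (B, SE) as reported in the abstract
    "age":         (0.048, 0.022),
    "gender":      (0.330, 0.067),
    "nationality": (0.194, 0.065),
}

for name, (b, se) in predictors.items():
    t = b / se                              # t statistic implied by B and SE
    lo, hi = b - 1.96 * se, b + 1.96 * se   # approximate 95% CI
    print(f"{name}: t = {t:.2f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

The recovered values (e.g., t ≈ 4.93 and CI ≈ 0.199 to 0.461 for gender) match the reported t=4.906 and CI of roughly .197 to .462 to within rounding, so the abstract's figures are internally consistent.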
Affiliation(s)
- Wajid Syed
- Department of Clinical Pharmacy, College of Pharmacy, King Saud University, Riyadh, Saudi Arabia
- Adel Bashatah
- Department of Nursing Administration and Education, College of Nursing, King Saud University, Riyadh, Saudi Arabia
- Kholoud Alharbi
- Department of Nursing Administration and Education, College of Nursing, King Saud University, Riyadh, Saudi Arabia
- Safiya Salem Bakarman
- Department of Community and Mental Health Nursing, College of Nursing, King Saud University, Riyadh, Saudi Arabia
- Saeed Asiri
- Department of Nursing Administration and Education, College of Nursing, King Saud University, Riyadh, Saudi Arabia
- Naji Alqahtani
- Department of Nursing Administration and Education, College of Nursing, King Saud University, Riyadh, Saudi Arabia
12
Vyas R, Pawa A, Shaikh C, Singh A, Shah H, Jain S, Brar V. ChatGPT for Patients: A Comprehensive Study on Atrial Fibrillation Awareness. J Innov Card Rhythm Manag 2024; 15:5946-5949. [PMID: 39011463 PMCID: PMC11238883 DOI: 10.19102/icrm.2024.15072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2024] [Accepted: 03/14/2024] [Indexed: 07/17/2024] Open
Abstract
Due to the intricate nature of atrial fibrillation (AF), the diagnostic process often gives rise to a spectrum of concerns and inquiries. A 20-question survey on AF, covering general concerns, diagnosis, treatment, and post-diagnosis inquiries, was conducted via Google Forms (Google LLC, Mountain View, CA, USA). The questions were input into the Chat Generative Pre-trained Transformer (ChatGPT) system (OpenAI LP, San Francisco, CA, USA) in November 2023, and the responses were meticulously collated within the same Google Forms. The survey, involving 30 experienced physicians, including 22 cardiologists and 8 hospitalists, practicing for an average of 18 years, assessed artificial intelligence (AI)-generated responses to 20 medical queries. Out of 600 evaluations, "excellent" responses were most common (29.50%), followed by "very good" (26%), "good" (19.50%), and "fair" (17.3%). The least common response was "poor" (7.67%). Questions were categorized into "general concerns," "diagnosis-related," "treatment-related," and "post-diagnosis general questions." Across all categories, >50% of experts rated responses as "excellent" or "very good," indicating the potential for improvement in the AI's clinical response methodology. This study highlights the efficacy of ChatGPT as an AF informational resource, with expert-rated responses comparable to those of clinicians. While proficient, concerns include infrequent updates and ethical considerations. Nevertheless, it underscores the growing role of AI in health care information access.
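The rating percentages reported for the 600 evaluations can be converted back into approximate per-category counts. A minimal sketch, assuming the five rating categories partition all 600 expert evaluations (an inference from the abstract, not a table the authors provide):

```python
# Recover approximate rating counts from the reported percentages,
# assuming the five categories cover all 600 evaluations.
total = 600
percentages = {
    "excellent": 29.50,
    "very good": 26.00,
    "good":      19.50,
    "fair":      17.30,
    "poor":       7.67,
}

counts = {k: round(total * p / 100) for k, p in percentages.items()}
print(counts)                # approximate count per rating category
print(sum(counts.values()))  # rounds back to the 600 evaluations
```

The rounded counts (177, 156, 117, 104, and 46) sum to exactly 600, confirming that the reported percentages describe a single partition of the evaluations.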
Affiliation(s)
- Rahul Vyas
- Department of Internal Medicine, LSU, Shreveport, LA, USA
- Arpita Pawa
- Department of Internal Medicine, Willis-Knighton Health System, Shreveport, LA, USA
- Chanza Shaikh
- Department of Internal Medicine, LSU, Shreveport, LA, USA
- Anaiya Singh
- Department of Internal Medicine, LSU, Shreveport, LA, USA
- Hetvi Shah
- R.C.S.M Government Medical College, Kolhapur, Maharashtra, India
13
Daungsupawong H, Wiwanitkit V. Optimizing ChatGPT's performance in hypertension care: Correspondence. J Clin Hypertens (Greenwich) 2024; 26:872-873. [PMID: 38874356 PMCID: PMC11232440 DOI: 10.1111/jch.14850] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2024] [Revised: 05/10/2024] [Accepted: 05/14/2024] [Indexed: 06/15/2024]
Affiliation(s)
- Viroj Wiwanitkit
- Medical College, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, India
14
Dallari V, Liberale C, De Cecco F, Nocini R, Arietti V, Monzani D, Sacchetto L. The role of artificial intelligence in training ENT residents: a survey on ChatGPT, a new method of investigation. ACTA OTORHINOLARYNGOLOGICA ITALICA : ORGANO UFFICIALE DELLA SOCIETA ITALIANA DI OTORINOLARINGOLOGIA E CHIRURGIA CERVICO-FACCIALE 2024; 44:161-168. [PMID: 38712520 PMCID: PMC11166211 DOI: 10.14639/0392-100x-n2806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2023] [Accepted: 01/02/2024] [Indexed: 05/08/2024]
Abstract
Objective The primary focus of this study was to analyze the adoption of ChatGPT among Ear, Nose, and Throat (ENT) trainees, encompassing its role in scientific research and personal study. We also examined in which year of training ENT trainees become involved in clinical research and how many scientific investigations they have been engaged in. Methods An online survey was distributed to ENT residents employed in Italian University Hospitals. Results Out of 609 Italian ENT trainees, 181 (29.7%) responded to the survey. Among these, 67.4% were familiar with ChatGPT, and 18.9% of them used artificial intelligence as a tool for research and study. In all, 32.6% were not familiar with ChatGPT and its functions. Within our sample, there was an increasing trend of participation by ENT trainees in scientific publications throughout their training. Conclusions ChatGPT remains relatively unfamiliar and underutilised in Italy, even though it could be a valuable and efficient tool for ENT trainees, providing quick access to study and research resources through both personal computers and smartphones.
Affiliation(s)
- Virginia Dallari
- Unit of Otorhinolaryngology, Head & Neck Department, University of Verona, Verona, Italy
- Member of the Young Confederation of European ORL-HNS
- Carlotta Liberale
- Unit of Otorhinolaryngology, Head & Neck Department, University of Verona, Verona, Italy
- Francesca De Cecco
- Unit of Otorhinolaryngology, Head & Neck Department, University of Verona, Verona, Italy
- Riccardo Nocini
- Unit of Otorhinolaryngology, Head & Neck Department, University of Verona, Verona, Italy
- Member of the Young Confederation of European ORL-HNS
- Valerio Arietti
- Unit of Otorhinolaryngology, Head & Neck Department, University of Verona, Verona, Italy
- Daniele Monzani
- Unit of Otorhinolaryngology, Head & Neck Department, University of Verona, Verona, Italy
- Luca Sacchetto
- Unit of Otorhinolaryngology, Head & Neck Department, University of Verona, Verona, Italy
15
Naamati-Schneider L. Enhancing AI competence in health management: students' experiences with ChatGPT as a learning Tool. BMC MEDICAL EDUCATION 2024; 24:598. [PMID: 38816721 PMCID: PMC11140890 DOI: 10.1186/s12909-024-05595-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/23/2024] [Accepted: 05/23/2024] [Indexed: 06/01/2024]
Abstract
BACKGROUND The healthcare industry has had to adapt to significant shifts caused by technological advancements, demographic changes, economic pressures, and political dynamics. These factors are reshaping the complex ecosystem in which healthcare organizations operate and have forced them to modify their operations in response to the rapidly evolving landscape. The increase in automation and the growing importance of digital and virtual environments are the key drivers necessitating this change. In the healthcare sector in particular, processes of change, including the incorporation of artificial intelligence language models like ChatGPT into daily life, necessitate a reevaluation of digital literacy skills. METHODS This study proposes a novel pedagogical framework that integrates problem-based learning with the use of ChatGPT for undergraduate healthcare management students, while qualitatively exploring the students' experiences with this technology through a thematic analysis of the reflective journals of 65 students. RESULTS Through the data analysis, the researcher identified five main categories: (1) Use of Literacy Skills; (2) User Experiences with ChatGPT; (3) ChatGPT Information Credibility; (4) Challenges and Barriers when Working with ChatGPT; (5) Mastering ChatGPT-Prompting Competencies. The findings show that incorporating digital tools, and particularly ChatGPT, in medical education has a positive impact on students' digital literacy and AI literacy skills. CONCLUSIONS The results underscore the evolving nature of these skills in an AI-integrated educational environment and offer valuable insights into students' perceptions and experiences. The study contributes to the broader discourse about the need for updated AI literacy skills in medical education from the early stages of education.
16
Alhasan K, Alsalmi AA, Almaiman W, Al Herbish AJ, Farhat A, Sandokji I, Aloufi M, Faqeehi HY, Abdulmajeed N, Alanazi A, AlHassan A, Alshathri A, Almalki AM, Bafageeh AA, Aldajani AM, AlMuzain A, Almuteri FS, Nasser HH, Al Alsheikh K, Almokali KM, Maghfuri M, Abukhatwah MW, Ahmed MAM, Fatani N, Al-Harbi N, AlDhaferi RF, Amohaimeed S, AlSannaa ZH, Shalaby MA, Raina R, Broering DC, Kari JA, Temsah MH. Insight into prevalence, etiology, and modalities of pediatric chronic dialysis: a comprehensive nationwide analysis. Pediatr Nephrol 2024; 39:1559-1566. [PMID: 38091245 DOI: 10.1007/s00467-023-06245-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/26/2023] [Revised: 11/15/2023] [Accepted: 11/18/2023] [Indexed: 03/16/2024]
Abstract
BACKGROUND This study aimed to determine the prevalence and etiology of kidney failure (KF) among children below 15 years of age receiving chronic dialysis in Saudi Arabia and describe their dialysis modalities. METHODS This cross-sectional descriptive study was conducted on 8 August 2022, encompassing all 23 pediatric dialysis centers in Saudi Arabia. Data gathered comprised patient demographics, causes of KF, and the dialysis methods employed. Collected data underwent analysis to determine the prevalence of children undergoing chronic dialysis, discern underlying causes of KF, and evaluate the distribution of patients across different dialysis modalities. RESULTS The prevalence of children on chronic dialysis is 77.6 per million children living in Saudi Arabia, equating to 419 children. The predominant underlying cause of KF was congenital anomalies of the kidneys and urinary tract (CAKUT), representing a substantial 41% of cases. Other or unknown etiologies accounted for a noteworthy 25% of cases, with focal segmental glomerulosclerosis (FSGS) comprising 13%, glomerulonephritis 11%, and congenital nephrotic syndrome 10% of the etiological distribution. Regarding dialysis modalities, 67% of patients were on peritoneal dialysis (PD), while the remaining 33% were on hemodialysis (HD). CONCLUSIONS This first nationwide study of pediatric chronic dialysis in Saudi Arabia sheds light on the prevalence of children undergoing chronic dialysis and the underlying causes of their KF, thereby contributing to our understanding of clinical management considerations. This research serves as a stepping stone for the development of national registries.
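The headline prevalence figures imply the size of the reference population. A back-of-envelope sketch using only numbers from the abstract (the implied population of roughly 5.4 million children under 15 is an inference for illustration, not a figure the authors report):

```python
# Back-of-envelope check of the reported prevalence: 419 children on chronic
# dialysis at a rate of 77.6 per million children implies the size of the
# under-15 population against which the rate was computed.
cases = 419
rate_per_million = 77.6

implied_population = cases / rate_per_million * 1_000_000
print(f"Implied pediatric population: {implied_population:,.0f}")

# Modality split reported in the abstract (67% PD, 33% HD)
pd_share, hd_share = 0.67, 0.33
print(f"PD: ~{round(cases * pd_share)} children, HD: ~{round(cases * hd_share)} children")
```

The implied population of about 5.4 million children is consistent with the study covering the whole national under-15 cohort, and the modality shares split the 419 children into roughly 281 on PD and 138 on HD.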
Affiliation(s)
- Khalid Alhasan
- Pediatric Department, College of Medicine, King Saud University, Riyadh, Saudi Arabia.
- Organ Transplant Center of Excellence, King Faisal Specialist Hospital & Research Center, Riyadh, Saudi Arabia.
- Division of Nephrology, Department of Pediatrics, King Saud University Medical City, King Saud University, Riyadh, Saudi Arabia.
- Amro Attaf Alsalmi
- Division of Nephrology, Department of Pediatrics, King Saud University Medical City, King Saud University, Riyadh, Saudi Arabia
- Weiam Almaiman
- Organ Transplant Center of Excellence, King Faisal Specialist Hospital & Research Center, Riyadh, Saudi Arabia
- College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Adi J Al Herbish
- Pediatric Nephrology Division, Pediatric Department, King Abdullah Specialized Children Hospital, Ministry of National Guard, Riyadh, Saudi Arabia
- Afrah Farhat
- Division of Nephrology, Department of Pediatrics, King Saud University Medical City, King Saud University, Riyadh, Saudi Arabia
- Ibrahim Sandokji
- Section of Nephrology, Department of Pediatrics, College of Medicine, Taibah University, Medina, Saudi Arabia
- Majed Aloufi
- Pediatric Nephrology Department, Prince Sultan Military Medical City, Riyadh, Saudi Arabia
- Hassan Yahya Faqeehi
- Division of Pediatric Nephrology, King Fahad Medical City, Children Specialized Hospital, Riyadh, Saudi Arabia
- Naif Abdulmajeed
- Pediatric Department, College of Medicine, King Saud University, Riyadh, Saudi Arabia
- Division of Nephrology, Department of Pediatrics, King Saud University Medical City, King Saud University, Riyadh, Saudi Arabia
- Pediatric Nephrology Department, Prince Sultan Military Medical City, Riyadh, Saudi Arabia
- Abdulkarim Alanazi
- Division of Pediatric Nephrology, King Fahad Medical City, Children Specialized Hospital, Riyadh, Saudi Arabia
- Abdulaziz AlHassan
- Pediatric Department, Maternity and Children Hospital, Ministry of Health, Al Ahsa, Saudi Arabia
- Abdulaziz Alshathri
- Pediatric Nephrology Department, King Saud Medical City, Riyadh, Saudi Arabia
- Abeer Mohammad Almalki
- Pediatric Nephrology Department, Children's Hospital, Ministry of Health, Taif, Saudi Arabia
- Afaf Alawi Bafageeh
- Center of Multi-Organ Transplant, King Fahad Specialist Hospital, Dammam, Saudi Arabia
- Ali M Aldajani
- Pediatric Nephrology Department, Maternity Children Hospital, Dammam, Saudi Arabia
- Ashraf AlMuzain
- Pediatric Department, King Fahd Hospital of the University, Khobar, Saudi Arabia
- Faten Sudan Almuteri
- Pediatric Nephrology Division, Pediatric Department, King Salman Bin Abdulaziz Medical City, Ministry of Health, Madina, Saudi Arabia
- Haydar Hassan Nasser
- Division of Nephrology, Pediatric Department, King Fahd Armed Forces Hospital, Jeddah, Saudi Arabia
- Khalid Al Alsheikh
- Pediatric Department, Maternity and Children Hospital, Abha, Saudi Arabia
- Khamisa Mohamed Almokali
- Pediatric Nephrology Division, Pediatric Department, King Abdullah Specialized Children Hospital, Ministry of National Guard, Riyadh, Saudi Arabia
- Magbul Maghfuri
- Pediatric Nephrology Department, King Fahad Central Hospital, Jazan, Saudi Arabia
- Mohamed Waleed Abukhatwah
- Pediatric Nephrology Section, Pediatric Department, Alhada Armed Forces Hospital, Taif, Saudi Arabia
- Naeima Fatani
- Pediatric Department, Maternity and Childcare Hospital, Ministry of Health, Makkah, Saudi Arabia
- Naffaa Al-Harbi
- Department of Pediatrics, King Faisal Specialist Hospital & Research Center, Jeddah, Saudi Arabia
- Rezqah Fajor AlDhaferi
- Organ Transplant Center of Excellence, King Faisal Specialist Hospital & Research Center, Riyadh, Saudi Arabia
- Sulaiman Amohaimeed
- Pediatric Department, King Fahad Military Medical Complex, Dhahran, Saudi Arabia
- Mohamed A Shalaby
- Department of Pediatrics, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Pediatric Nephrology Centre of Excellence, King Abdulaziz University Hospital, Jeddah, Saudi Arabia
- Rupesh Raina
- Department of Nephrology, Akron Children's Hospital, Akron, OH, USA
- Dieter Clemens Broering
- Organ Transplant Center of Excellence, King Faisal Specialist Hospital & Research Center, Riyadh, Saudi Arabia
- Jameela A Kari
- Department of Pediatrics, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Pediatric Nephrology Centre of Excellence, King Abdulaziz University Hospital, Jeddah, Saudi Arabia
- Mohamad-Hani Temsah
- Pediatric Department, College of Medicine, King Saud University, Riyadh, Saudi Arabia.
- Evidence-Based Healthcare and Knowledge Translation Research Chair, King Saud University, Riyadh, Saudi Arabia.
17
Temsah MH, Jamal A, Alhasan K, Aljamaan F, Altamimi I, Malki KH, Temsah A, Ohannessian R, Al-Eyadhy A. Transforming Virtual Healthcare: The Potentials of ChatGPT-4omni in Telemedicine. Cureus 2024; 16:e61377. [PMID: 38817799 PMCID: PMC11139454 DOI: 10.7759/cureus.61377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/30/2024] [Indexed: 06/01/2024] Open
Abstract
The introduction of OpenAI's ChatGPT-4omni (GPT-4o) represents a potential advancement in virtual healthcare and telemedicine. GPT-4o excels in processing audio, visual, and textual data in real time, offering possible enhancements in understanding natural language in both English and non-English contexts. Furthermore, the new "Temporary Chat" feature may improve privacy and data confidentiality during interactions, potentially increasing integration with healthcare systems. These innovations promise to enhance communication clarity, facilitate the integration of medical images, and increase data privacy in online consultations. This editorial explores some future implications of these advancements for telemedicine, highlighting the necessity for further research on reliability and the integration of advanced language models with human expertise.
Affiliation(s)
- Mohamad-Hani Temsah
- Pediatric Intensive Care Unit, Pediatric Department, King Saud University Medical City, College of Medicine, King Saud University, Riyadh, SAU
- Amr Jamal
- Family and Community Medicine, King Saud University, Riyadh, SAU
- Fadi Aljamaan
- Critical Care Department, College of Medicine, King Saud University, Riyadh, SAU
- Khalid H Malki
- Department of Otolaryngology, College of Medicine, King Saud University, Riyadh, SAU
- Abdulrahman Temsah
- Software Engineering Department, College of Engineering, Alfaisal University, Riyadh, SAU
- Ayman Al-Eyadhy
- Department of Pediatrics, Pediatric Intensive Care Unit, College of Medicine, King Saud University, Riyadh, SAU
- Pediatric Intensive Care Unit, King Saud University Medical City, Riyadh, SAU
18
Ferdush J, Begum M, Hossain ST. ChatGPT and Clinical Decision Support: Scope, Application, and Limitations. Ann Biomed Eng 2024; 52:1119-1124. [PMID: 37516680 DOI: 10.1007/s10439-023-03329-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2023] [Accepted: 07/18/2023] [Indexed: 07/31/2023]
Abstract
This study examines ChatGPT's role in clinical decision support by analyzing its scope, application, and limitations. By analyzing patient data and providing evidence-based recommendations, ChatGPT, an AI language model, can help healthcare professionals make well-informed decisions. The study covers ChatGPT's use in clinical decision support, including diagnosis and treatment planning, while acknowledging limitations such as biases, lack of contextual understanding, and the need for human oversight, and it proposes a framework for a future clinical decision support system. Understanding these factors will allow healthcare professionals to utilize ChatGPT effectively and make accurate clinical decisions. Further research is needed to understand the implications of using ChatGPT in healthcare settings and to develop safeguards for responsible use.
Affiliation(s)
- Jannatul Ferdush
- Department of Computer Science and Engineering, Jashore University of Science and Technology, Jashore, 7408, Bangladesh.
- Mahbuba Begum
- Department of Computer Science and Engineering, Mawlana Bhasani Science and Technology, Tangail, 1902, Bangladesh
- Sakib Tanvir Hossain
- Department of Mechanical Engineering, Khulna University of Engineering and Technology, Khulna, 9203, Bangladesh
19
Alsanosi SM, Padmanabhan S. Potential Applications of Artificial Intelligence (AI) in Managing Polypharmacy in Saudi Arabia: A Narrative Review. Healthcare (Basel) 2024; 12:788. [PMID: 38610210 PMCID: PMC11011812 DOI: 10.3390/healthcare12070788] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2024] [Revised: 04/03/2024] [Accepted: 04/04/2024] [Indexed: 04/14/2024] Open
Abstract
Prescribing medications is a fundamental practice in the management of illnesses that necessitates in-depth knowledge of clinical pharmacology. Polypharmacy, or the concurrent use of multiple medications by individuals with complex health conditions, poses significant challenges, including an increased risk of drug interactions and adverse reactions. The Saudi Vision 2030 prioritises enhancing healthcare quality and safety, including addressing polypharmacy. Artificial intelligence (AI) offers promising tools to optimise medication plans, predict adverse drug reactions and ensure drug safety. This review explores AI's potential to revolutionise polypharmacy management in Saudi Arabia, highlighting practical applications, challenges and the path forward for the integration of AI solutions into healthcare practices.
Affiliation(s)
- Safaa M. Alsanosi
- Department of Pharmacology and Toxicology, Faculty of Medicine, Umm Al Qura University, Makkah 24382, Saudi Arabia
- BHF Glasgow Cardiovascular Research Centre, School of Cardiovascular and Metabolic Health, University of Glasgow, Glasgow G12 8QQ, UK
- Sandosh Padmanabhan
- BHF Glasgow Cardiovascular Research Centre, School of Cardiovascular and Metabolic Health, University of Glasgow, Glasgow G12 8QQ, UK
20
Gupta V, Yang H. Study protocol for factors influencing the adoption of ChatGPT technology by startups: Perceptions and attitudes of entrepreneurs. PLoS One 2024; 19:e0298427. [PMID: 38358993 PMCID: PMC10868733 DOI: 10.1371/journal.pone.0298427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Accepted: 01/21/2024] [Indexed: 02/17/2024] Open
Abstract
BACKGROUND Generative Artificial Intelligence (AI) technology, for instance Chat Generative Pre-trained Transformer (ChatGPT), is continuously evolving, and its userbase is growing. Businesses are now experimenting with these technologies to leverage their potential and minimise their risks in business operations. Continued adoption of emerging Generative AI technologies will give startups growing experience with such adoptions, helping them leverage a continuously evolving landscape of technological innovation. However, the dearth of prior research on ChatGPT adoption in the startup context, especially from the entrepreneur's perspective, highlights the urgent need for a thorough investigation of the variables influencing this technological adoption. The primary objective of this study is to ascertain the factors that affect the uptake of ChatGPT technology by startups, anticipate their influence on the success of companies, and offer pragmatic suggestions for various stakeholders, including entrepreneurs and policymakers. METHOD AND ANALYSIS This study explores the variables affecting startups' adoption of ChatGPT technology, with an emphasis on understanding entrepreneurs' attitudes and perspectives. To identify and then empirically validate the Generative AI technology adoption framework, the study uses a two-stage methodology comprising experience-based research and survey research, with a descriptive and correlational design. Stage one is descriptive and involves adding practical insights and real-world context to the model by drawing on the researchers' professional consulting experience with SMEs.
The outcome of this stage is the adoption model (also called the research framework), building upon the Technology Acceptance Model (TAM), which specifies the technology adoption factors (latent variables), their relationships with one another, and their relationships with the adoption outcome. The latent variables and relationships graphically specified by the adoption model will then be translated into a structured questionnaire. Stage two involves survey-based research. In this stage, the structured questionnaire is piloted with a small group of entrepreneurs (who have provided informed consent) and then distributed among startup founders to further validate the relationships between these factors and the level of influence individual factors have on overall technology adoption. Partial Least Squares Structural Equation Modeling (PLS-SEM) will be used to analyze the gathered data. This multifaceted approach allows for a comprehensive analysis of the adoption process, with an emphasis on understanding, describing, and correlating the key elements at play. DISCUSSION This is the first study to investigate the factors affecting startups' adoption of Generative AI, for instance ChatGPT, from the entrepreneur's perspective. The study's findings will give entrepreneurs, policymakers, technology providers, researchers, and institutions offering support for entrepreneurs (such as academia, incubators and accelerators, university libraries, public libraries, chambers of commerce, and foreign embassies) important new information to help them better understand the factors that encourage and hinder ChatGPT adoption. This will allow them to make well-informed strategic decisions about how to apply and use this technology in startup settings, thereby improving their services for businesses.
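The protocol's analysis plan (TAM-style constructs feeding an adoption outcome, estimated with PLS-SEM) can be illustrated in miniature. The sketch below is not PLS-SEM proper: it assumes already-scored constructs rather than latent variables with measurement models, and it recovers hypothetical structural paths with ordinary least squares on synthetic data. All construct names and path coefficients are illustrative, not the study's actual model:

```python
import numpy as np

# Toy structural model in the spirit of TAM: two predictor constructs
# (perceived usefulness, perceived ease of use) driving adoption intention.
# We simulate scored constructs with known "true" paths, then recover the
# paths by least squares. Everything here is illustrative.
rng = np.random.default_rng(0)
n = 500

pu = rng.normal(size=n)    # perceived usefulness (already-scored construct)
peou = rng.normal(size=n)  # perceived ease of use

# Assumed "true" structural paths for the simulation
intention = 0.6 * pu + 0.3 * peou + rng.normal(scale=0.1, size=n)

X = np.column_stack([pu, peou])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(f"estimated paths: PU -> intention = {coef[0]:.2f}, "
      f"PEOU -> intention = {coef[1]:.2f}")
```

With a sample of 500 and modest noise, the estimated paths land close to the simulated 0.6 and 0.3, which is the basic logic PLS-SEM extends to latent constructs measured by multiple questionnaire items.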
Affiliation(s)
- Varun Gupta
- School of Computing and Mathematical Sciences, Leicester University, Leicester, England
- Multidisciplinary Research Centre for Innovations in SMEs (MrciS), Gisma University of Applied Sciences, Potsdam, Germany
- Department of Economics and Business Administration, University of Alcala, Alcalá de Henares (Madrid), Madrid, Spain
| | - Hongji Yang
- School of Computing and Mathematical Sciences, Leicester University, Leicester, England
| |
Collapse
|
21
|
Kapsali MZ, Livanis E, Tsalikidis C, Oikonomou P, Voultsos P, Tsaroucha A. Ethical Concerns About ChatGPT in Healthcare: A Useful Tool or the Tombstone of Original and Reflective Thinking? Cureus 2024; 16:e54759. [PMID: 38523987 PMCID: PMC10961144 DOI: 10.7759/cureus.54759] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/23/2024] [Indexed: 03/26/2024] Open
Abstract
Artificial intelligence (AI), the rising technology of computer science aiming to create digital systems with human behavior and intelligence, seems to have invaded almost every field of modern life. Launched in November 2022, ChatGPT (Chat Generative Pre-trained Transformer) is a textual AI application capable of creating human-like responses characterized by original language and high coherence. Although AI-based language models have demonstrated impressive capabilities in healthcare, ChatGPT has received controversial appraisals from the scientific and academic communities. This chatbot already appears to have a massive impact as an educational tool for healthcare professionals and transformative potential for clinical practice and could lead to dramatic changes in scientific research. Nevertheless, rational concerns have been raised regarding whether the pre-trained, AI-generated text would be a menace not only to original thinking and new scientific ideas but also to academic and research integrity, as it becomes increasingly difficult to detect its AI origin due to the coherence and fluency of the produced text. This short review aims to summarize the potential applications and the consequential implications of ChatGPT in the three critical pillars of medicine: education, research, and clinical practice. In addition, this paper discusses whether the current use of this chatbot complies with the ethical principles for the safe use of AI in healthcare, as determined by the World Health Organization. Finally, this review highlights the need for an updated ethical framework and the increased vigilance of healthcare stakeholders to harness the potential benefits and limit the imminent dangers of this new innovative technology.
Affiliation(s)
- Marina Z Kapsali
- Postgraduate Program on Bioethics, Laboratory of Bioethics, Democritus University of Thrace, Alexandroupolis, GRC
- Efstratios Livanis
- Department of Accounting and Finance, University of Macedonia, Thessaloniki, GRC
- Christos Tsalikidis
- Department of General Surgery, Democritus University of Thrace, Alexandroupolis, GRC
- Panagoula Oikonomou
- Laboratory of Experimental Surgery, Department of General Surgery, Democritus University of Thrace, Alexandroupolis, GRC
- Polychronis Voultsos
- Laboratory of Forensic Medicine & Toxicology (Medical Law and Ethics), School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, GRC
- Aleka Tsaroucha
- Department of General Surgery, Democritus University of Thrace, Alexandroupolis, GRC

22
Elyoseph Z, Levkovich I, Shinan-Altman S. Assessing prognosis in depression: comparing perspectives of AI models, mental health professionals and the general public. Fam Med Community Health 2024; 12:e002583. [PMID: 38199604 PMCID: PMC10806564 DOI: 10.1136/fmch-2023-002583] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2024] Open
Abstract
BACKGROUND Artificial intelligence (AI) has rapidly permeated various sectors, including healthcare, highlighting its potential to facilitate mental health assessments. This study explores the underexplored domain of AI's role in evaluating prognosis and long-term outcomes in depressive disorders, offering insights into how AI large language models (LLMs) compare with human perspectives. METHODS Using case vignettes, we conducted a comparative analysis involving different LLMs (ChatGPT-3.5, ChatGPT-4, Claude and Bard), mental health professionals (general practitioners, psychiatrists, clinical psychologists and mental health nurses), and the general public, as reported previously. We evaluated the LLMs' ability to generate prognoses, anticipated outcomes with and without professional intervention, and envisioned long-term positive and negative consequences for individuals with depression. RESULTS In most of the examined cases, the four LLMs consistently identified depression as the primary diagnosis and recommended a combined treatment of psychotherapy and antidepressant medication. ChatGPT-3.5 exhibited a significantly pessimistic prognosis distinct from the other LLMs, professionals and the public. ChatGPT-4, Claude and Bard aligned closely with the perspectives of mental health professionals and the general public, all of whom anticipated no improvement or worsening without professional help. Regarding long-term outcomes, ChatGPT-3.5, Claude and Bard consistently projected significantly fewer negative long-term consequences of treatment than ChatGPT-4. CONCLUSIONS This study underscores the potential of AI to complement the expertise of mental health professionals and promote a collaborative paradigm in mental healthcare. The observation that three of the four LLMs closely mirrored the anticipations of mental health experts in scenarios involving treatment underscores the technology's prospective value in offering professional clinical forecasts.
The pessimistic outlook presented by ChatGPT-3.5 is concerning, as it could potentially diminish patients' drive to initiate or continue depression therapy. In summary, although LLMs show potential in enhancing healthcare services, their utilisation requires thorough verification and seamless integration with human judgement and skills.
Affiliation(s)
- Zohar Elyoseph
- Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Yezreel Valley, Israel
- Department of Brain Sciences, Imperial College London, London, UK
- Inbar Levkovich
- Faculty of Graduate Studies, Oranim Academic College, Tivon, Israel
- Shiri Shinan-Altman
- The Louis and Gabi Weisfeld School of Social Work, Bar-Ilan University, Ramat Gan, Tel Aviv, Israel

23
Younis HA, Eisa TAE, Nasser M, Sahib TM, Noor AA, Alyasiri OM, Salisu S, Hayder IM, Younis HA. A Systematic Review and Meta-Analysis of Artificial Intelligence Tools in Medicine and Healthcare: Applications, Considerations, Limitations, Motivation and Challenges. Diagnostics (Basel) 2024; 14:109. [PMID: 38201418 PMCID: PMC10802884 DOI: 10.3390/diagnostics14010109] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2023] [Revised: 12/02/2023] [Accepted: 12/04/2023] [Indexed: 01/12/2024] Open
Abstract
Artificial intelligence (AI) has emerged as a transformative force in various sectors, including medicine and healthcare. Large language models like ChatGPT showcase AI's potential by generating human-like text through prompts. ChatGPT's adaptability holds promise for reshaping medical practices, improving patient care, and enhancing interactions among healthcare professionals, patients, and data. In pandemic management, ChatGPT rapidly disseminates vital information. It serves as a virtual assistant in surgical consultations, aids dental practices, simplifies medical education, and aids in disease diagnosis. A total of 82 papers were categorised into eight major areas, which are G1: treatment and medicine, G2: buildings and equipment, G3: parts of the human body and areas of the disease, G4: patients, G5: citizens, G6: cellular imaging, radiology, pulse and medical images, G7: doctors and nurses, and G8: tools, devices and administration. Balancing AI's role with human judgment remains a challenge. A systematic literature review using the PRISMA approach explored AI's transformative potential in healthcare, highlighting ChatGPT's versatile applications, limitations, motivation, and challenges. In conclusion, ChatGPT's diverse medical applications demonstrate its potential for innovation, serving as a valuable resource for students, academics, and researchers in healthcare. Additionally, this study serves as a guide, assisting students, academics, and researchers in the field of medicine and healthcare alike.
Affiliation(s)
- Hussain A. Younis
- College of Education for Women, University of Basrah, Basrah 61004, Iraq
- Maged Nasser
- Computer & Information Sciences Department, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia
- Thaeer Mueen Sahib
- Kufa Technical Institute, Al-Furat Al-Awsat Technical University, Kufa 54001, Iraq
- Ameen A. Noor
- Computer Science Department, College of Education, University of Almustansirya, Baghdad 10045, Iraq
- Sani Salisu
- Department of Information Technology, Federal University Dutse, Dutse 720101, Nigeria
- Israa M. Hayder
- Qurna Technique Institute, Southern Technical University, Basrah 61016, Iraq
- Hameed AbdulKareem Younis
- Department of Cybersecurity, College of Computer Science and Information Technology, University of Basrah, Basrah 61016, Iraq

24
Tuncer GZ, Tuncer M. Investigation of nurses' general attitudes toward artificial intelligence and their perceptions of ChatGPT usage and influencing factors. Digit Health 2024; 10:20552076241277025. [PMID: 39193312 PMCID: PMC11348479 DOI: 10.1177/20552076241277025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2024] [Accepted: 08/06/2024] [Indexed: 08/29/2024] Open
Abstract
Purpose This study aimed to investigate professional nurses' general attitudes toward artificial intelligence, their knowledge and perceptions of ChatGPT usage, and the influencing factors. Methods The study population consisted of nurses who follow a social media platform account in Turkey. The sample comprised 288 nurses who participated in the study between December 2023 and March 2024. Data were collected through an account on a social media platform via Google Forms using the Information Identification Questionnaire for ChatGPT and Artificial Intelligence Programs and the General Attitudes to Artificial Intelligence Scale (GAAIS). Results The mean scores obtained from the overall GAAIS and its Positive Attitudes subscale were 67.54 ± 13.14 and 41.89 ± 11.24, respectively. Of the participants, 48.3% knew about ChatGPT and artificial intelligence programs, and 27.8% used them; these users scored higher on the Positive Attitudes subscale than those who did not use such programs. Of the participants, 84.4% thought that nurses should be made aware of ChatGPT and artificial intelligence programs, 67% thought that the use of these programs would contribute to nurses' professional development, 42.4% thought that their use would not reduce nurses' workload, and 58.3% thought that their use would positively affect patient care. Conclusion These findings suggest that nurses in Turkey have positive attitudes toward integrating ChatGPT and AI programs into nursing practice to improve patient outcomes.
Implications for nursing practice This study of nurses' attitudes toward the implementation of ChatGPT and artificial intelligence programs is expected to inform healthcare institutions, policymakers and artificial intelligence developers on the integration of ChatGPT and artificial intelligence into nursing practice. It is necessary to create environments in which AI technologies reduce nurses' workload in the clinical area and positively affect the quality of patient care.
Affiliation(s)
- Gülsüm Zekiye Tuncer
- Department of Psychiatric Nursing, Faculty of Nursing, Dokuz Eylül University, Izmir, Türkiye
- Metin Tuncer
- Department of Nursing, Faculty of Health Sciences, Gümüşhane University, Gümüşhane, Türkiye

25
Bazzari FH, Bazzari AH. Utilizing ChatGPT in Telepharmacy. Cureus 2024; 16:e52365. [PMID: 38230387 PMCID: PMC10790595 DOI: 10.7759/cureus.52365] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/15/2024] [Indexed: 01/18/2024] Open
Abstract
BACKGROUND ChatGPT is an artificial intelligence-powered chatbot that has demonstrated capabilities in numerous fields, including medical and healthcare sciences. This study evaluates the potential for ChatGPT application in telepharmacy, the delivering of pharmaceutical care via means of telecommunications, through assessing its interactions, adherence to instructions, and ability to role-play as a pharmacist while handling a series of life-like scenario questions. METHODS Two versions (ChatGPT 3.5 and 4.0, OpenAI) were assessed using two independent trials each. ChatGPT was instructed to act as a pharmacist and answer patient inquiries, followed by a set of 20 assessment questions. Then, ChatGPT was instructed to stop its act, provide feedback and list its sources for drug information. The responses to the assessment questions were evaluated in terms of accuracy, precision and clarity using a 4-point Likert-like scale. RESULTS ChatGPT demonstrated the ability to follow detailed instructions, role-play as a pharmacist, and appropriately handle all questions. ChatGPT was able to understand case details, recognize generic and brand drug names, identify drug side effects, interactions, prescription requirements and precautions, and provide proper point-by-point instructions regarding administration, dosing, storage and disposal. The overall means of pooled scores were 3.425 (0.712) and 3.7 (0.61) for ChatGPT 3.5 and 4.0, respectively. The rank distribution of scores was not significantly different (P>0.05). None of the answers could be considered directly harmful or labeled as entirely or mostly incorrect, and most point deductions were due to other factors such as indecisiveness, adding immaterial information, missing certain considerations, or partial unclarity. The answers were similar in length across trials and appropriately concise. 
ChatGPT 4.0 showed superior performance, higher consistency, better character adherence and the ability to report various reliable information sources. However, it only allowed an input of 40 questions every three hours and provided inaccurate feedback regarding the number of assessed patients, compared to 3.5 which allowed unlimited input but was unable to provide feedback. CONCLUSIONS Integrating ChatGPT in telepharmacy holds promising potential; however, a number of drawbacks are to be overcome in order to function effectively.
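The abstract pools 4-point Likert ratings into means and reports that the rank distributions did not differ significantly, without naming the rank test used; a Mann-Whitney U test is one common choice for such ordinal comparisons. A sketch on invented ratings (not the study's data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical pooled 4-point Likert ratings (accuracy/precision/clarity)
# for the two model versions; all values are invented for illustration.
gpt35 = np.array([4, 3, 4, 3, 4, 2, 4, 4, 3, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3])
gpt40 = np.array([4, 4, 4, 3, 4, 4, 4, 3, 4, 4, 4, 3, 4, 4, 4, 4, 3, 4, 4, 4])

print(f"ChatGPT 3.5 mean (SD): {gpt35.mean():.2f} ({gpt35.std(ddof=1):.2f})")
print(f"ChatGPT 4.0 mean (SD): {gpt40.mean():.2f} ({gpt40.std(ddof=1):.2f})")

# Compare the two rank distributions (two-sided test).
stat, p = mannwhitneyu(gpt35, gpt40, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```

A rank test is preferable to a t-test here because Likert ratings are ordinal and bounded, so their distances between categories are not necessarily equal.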
Affiliation(s)
- Amjad H Bazzari
- Basic Scientific Sciences, Applied Science Private University, Amman, JOR

26
Hillmann HAK, Angelini E, Karfoul N, Feickert S, Mueller-Leisse J, Duncker D. Accuracy and comprehensibility of chat-based artificial intelligence for patient information on atrial fibrillation and cardiac implantable electronic devices. Europace 2023; 26:euad369. [PMID: 38127304 PMCID: PMC10824484 DOI: 10.1093/europace/euad369] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Accepted: 12/19/2023] [Indexed: 12/23/2023] Open
Abstract
AIMS Natural language processing chatbots (NLPC) can be used to gather information for medical content. However, these tools carry a potential risk of misinformation. This study aims to evaluate different aspects of responses given by different NLPCs to questions about atrial fibrillation (AF) and cardiac implantable electronic devices (CIED). METHODS AND RESULTS Questions were entered into three different NLPC interfaces. Responses were evaluated with regard to appropriateness, comprehensibility, appearance of confabulation, absence of relevant content, and recommendations given for clinically relevant decisions. Moreover, readability was assessed by calculating word count and Flesch Reading Ease score. For Google Bard (GB), Bing Chat (BC), and ChatGPT Plus (CGP), respectively, 52, 60, and 84% of responses on AF and 16, 72, and 88% on CIEDs were evaluated as appropriate. Assessment of comprehensibility showed that 96, 88, and 92% of responses on AF and 92, 88, and 100% on CIEDs were comprehensible for GB, BC, and CGP, respectively. Readability varied between the different NLPCs. Relevant aspects were missing in 52% (GB), 60% (BC), and 24% (CGP) of responses for AF, and in 92% (GB), 88% (BC), and 52% (CGP) for CIEDs. CONCLUSION Responses generated by an NLPC are mostly easy to understand, with readability varying between the different NLPCs. The appropriateness of responses is limited and varies between the different NLPCs, and important aspects are often not mentioned. Thus, chatbots should be used with caution to gather medical information about cardiac arrhythmias and devices.
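The Flesch Reading Ease score used above is a fixed formula over sentence length and syllable density: 206.835 − 1.015 × (words/sentence) − 84.6 × (syllables/word). A minimal sketch with a naive vowel-group syllable counter (real readability tools use dictionaries or better heuristics):

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (crude but serviceable)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

score = flesch_reading_ease(
    "Atrial fibrillation is an irregular heart rhythm. It can cause stroke."
)
print(round(score, 1))
```

Higher scores indicate easier text; scores around 60-70 correspond roughly to plain English, while dense clinical prose typically lands far lower.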
Affiliation(s)
- Henrike A K Hillmann
- Hannover Heart Rhythm Center, Department of Cardiology and Angiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- Eleonora Angelini
- Hannover Heart Rhythm Center, Department of Cardiology and Angiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- Nizar Karfoul
- Hannover Heart Rhythm Center, Department of Cardiology and Angiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- Sebastian Feickert
- Department of Cardiology and Internal Intensive Care Unit, Vivantes Clinic Am Urban, Dieffenbachstraße 1, 10967 Berlin, Germany
- Department of Cardiology, University Medical Center Rostock, Ernst-Heydemann-Straße 6, 18057 Rostock, Germany
- Johanna Mueller-Leisse
- Hannover Heart Rhythm Center, Department of Cardiology and Angiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- David Duncker
- Hannover Heart Rhythm Center, Department of Cardiology and Angiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover, Germany

27
Alkhaaldi SMI, Kassab CH, Dimassi Z, Oyoun Alsoud L, Al Fahim M, Al Hageh C, Ibrahim H. Medical Student Experiences and Perceptions of ChatGPT and Artificial Intelligence: Cross-Sectional Study. JMIR MEDICAL EDUCATION 2023; 9:e51302. [PMID: 38133911 PMCID: PMC10770787 DOI: 10.2196/51302] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Revised: 11/10/2023] [Accepted: 12/11/2023] [Indexed: 12/23/2023]
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to revolutionize the way medicine is learned, taught, and practiced, and medical education must prepare learners for these inevitable changes. Academic medicine has, however, been slow to embrace recent AI advances. Since its launch in November 2022, ChatGPT has emerged as a fast and user-friendly large language model that can assist health care professionals, medical educators, students, trainees, and patients. While many studies focus on the technology's capabilities, potential, and risks, there is a gap in studying the perspective of end users. OBJECTIVE The aim of this study was to gauge the experiences and perspectives of graduating medical students on ChatGPT and AI in their training and future careers. METHODS A cross-sectional web-based survey of recently graduated medical students was conducted in an international academic medical center between May 5, 2023, and June 13, 2023. Descriptive statistics were used to tabulate variable frequencies. RESULTS Of 325 applicants to the residency programs, 265 completed the survey (an 81.5% response rate). The vast majority of respondents denied using ChatGPT in medical school, with 20.4% (n=54) using it to help complete written assessments and only 9.4% using the technology in their clinical work (n=25). More students planned to use it during residency, primarily for exploring new medical topics and research (n=168, 63.4%) and exam preparation (n=151, 57%). Male students were significantly more likely to believe that AI will improve diagnostic accuracy (n=47, 51.7% vs n=69, 39.7%; P=.001), reduce medical error (n=53, 58.2% vs n=71, 40.8%; P=.002), and improve patient care (n=60, 65.9% vs n=95, 54.6%; P=.007). Previous experience with AI was significantly associated with positive AI perception in terms of improving patient care, decreasing medical errors and misdiagnoses, and increasing the accuracy of diagnoses (P=.001, P<.001, P=.008, respectively). 
CONCLUSIONS The surveyed medical students had minimal formal and informal experience with AI tools and limited perceptions of the potential uses of AI in health care but had overall positive views of ChatGPT and AI and were optimistic about the future of AI in medical education and health care. Structured curricula and formal policies and guidelines are needed to adequately prepare medical learners for the forthcoming integration of AI in medicine.
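The male-versus-female comparisons above are tests of proportions; one standard way to run such a comparison is a chi-squared test on a 2×2 contingency table. The counts below are loosely reconstructed from the reported percentages for the "AI will reduce medical error" item and are illustrative, not the study's raw data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = male/female, columns = agree/disagree
# that AI will reduce medical error (counts only loosely modelled on the
# abstract's 58.2% vs 40.8% figures, not the study's raw data).
table = np.array([[53, 38],    # male: agree, disagree
                  [71, 103]])  # female: agree, disagree

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

With a 2×2 table, `chi2_contingency` applies Yates' continuity correction by default, so its p value will differ slightly from an uncorrected test or a z-test of proportions.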
Affiliation(s)
- Saif M I Alkhaaldi
- Khalifa University College of Medicine and Health Sciences, Abu Dhabi, United Arab Emirates
- Carl H Kassab
- Khalifa University College of Medicine and Health Sciences, Abu Dhabi, United Arab Emirates
- Zakia Dimassi
- Department of Medical Science, Khalifa University College of Medicine and Health Sciences, Abu Dhabi, United Arab Emirates
- Leen Oyoun Alsoud
- Department of Medical Science, Khalifa University College of Medicine and Health Sciences, Abu Dhabi, United Arab Emirates
- Maha Al Fahim
- Education Institute, Sheikh Khalifa Medical City, Abu Dhabi, United Arab Emirates
- Cynthia Al Hageh
- Department of Medical Science, Khalifa University College of Medicine and Health Sciences, Abu Dhabi, United Arab Emirates
- Halah Ibrahim
- Department of Medical Science, Khalifa University College of Medicine and Health Sciences, Abu Dhabi, United Arab Emirates

28
Zawiah M, Al-Ashwal FY, Gharaibeh L, Abu Farha R, Alzoubi KH, Abu Hammour K, Qasim QA, Abrah F. ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students. J Multidiscip Healthc 2023; 16:4099-4110. [PMID: 38116306 PMCID: PMC10729768 DOI: 10.2147/jmdh.s439223] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2023] [Accepted: 12/04/2023] [Indexed: 12/21/2023] Open
Abstract
Background The emergence of Chat-Generative Pre-trained Transformer (ChatGPT) by OpenAI has revolutionized AI technology, demonstrating significant potential in healthcare and pharmaceutical education, yet its real-world applicability in clinical training warrants further investigation. Methods A cross-sectional study was conducted between April and May 2023 to assess PharmD students' perceptions, concerns, and experiences regarding the integration of ChatGPT into clinical pharmacy education. The study utilized a convenient sampling method through online platforms and involved a questionnaire with sections on demographics, perceived benefits, concerns, and experience with ChatGPT. Statistical analysis was performed using SPSS, including descriptive and inferential analyses. Results The findings of the study involving 211 PharmD students revealed that the majority of participants were male (77.3%), and had prior experience with artificial intelligence (68.2%). Over two-thirds were aware of ChatGPT. Most students (n=139, 65.9%) perceived potential benefits in using ChatGPT for various clinical tasks, with concerns including over-reliance, accuracy, and ethical considerations. Adoption of ChatGPT in clinical training varied, with some students not using it at all, while others utilized it for tasks like evaluating drug-drug interactions and developing care plans. Previous users tended to have higher perceived benefits and lower concerns, but the differences were not statistically significant. Conclusion Utilizing ChatGPT in clinical training offers opportunities, but students' lack of trust in it for clinical decisions highlights the need for collaborative human-ChatGPT decision-making. It should complement healthcare professionals' expertise and be used strategically to compensate for human limitations. Further research is essential to optimize ChatGPT's effective integration.
Affiliation(s)
- Mohammed Zawiah
- Department of Clinical Pharmacy, College of Pharmacy, Northern Border University, Rafha, 91911, Saudi Arabia
- Department of Pharmacy Practice, College of Clinical Pharmacy, Hodeidah University, Al Hodeidah, Yemen
- Fahmi Y Al-Ashwal
- Department of Clinical Pharmacy, College of Pharmacy, Al-Ayen University, Thi-Qar, Iraq
- Lobna Gharaibeh
- Pharmacological and Diagnostic Research Center, Faculty of Pharmacy, Al-Ahliyya Amman University, Amman, Jordan
- Rana Abu Farha
- Clinical Pharmacy and Therapeutics Department, Faculty of Pharmacy, Applied Science Private University, Amman, Jordan
- Karem H Alzoubi
- Department of Pharmacy Practice and Pharmacotherapeutics, University of Sharjah, Sharjah, 27272, United Arab Emirates
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Khawla Abu Hammour
- Department of Clinical Pharmacy and Biopharmaceutics, Faculty of Pharmacy, University of Jordan, Amman, Jordan
- Qutaiba A Qasim
- Department of Clinical Pharmacy, College of Pharmacy, Al-Ayen University, Thi-Qar, Iraq
- Fahd Abrah
- Discipline of Social and Administrative Pharmacy, School of Pharmaceutical Sciences, Universiti Sains Malaysia, Penang, Malaysia

29
Abu Hammour K, Alhamad H, Al-Ashwal FY, Halboup A, Abu Farha R, Abu Hammour A. ChatGPT in pharmacy practice: a cross-sectional exploration of Jordanian pharmacists' perception, practice, and concerns. J Pharm Policy Pract 2023; 16:115. [PMID: 37789443 PMCID: PMC10548710 DOI: 10.1186/s40545-023-00624-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2023] [Accepted: 09/22/2023] [Indexed: 10/05/2023] Open
Abstract
OBJECTIVES The purpose of this study was to find out how much pharmacists know about ChatGPT and how they have used it in their practice. Using a survey, we investigated the advantages and disadvantages of utilizing ChatGPT in a pharmacy context, the amount of training necessary to use it proficiently, and the influence on patient care. METHODS This cross-sectional study was carried out between May and June 2023 to assess the potential and problems that pharmacists observed while integrating chatbots powered by AI (ChatGPT) in pharmacy practice. The correlation between perceived benefits and concerns was evaluated using Spearman's rho correlation due to the data's non-normal distribution. Any pharmacist licensed by the Jordanian Pharmacists Association was eligible for inclusion. A convenience sampling technique was used to recruit participants, and the study questionnaire was distributed online (Facebook and WhatsApp). Anyone who expressed interest in taking part was given a link to the study's instructions so they could read them before giving their electronic consent and accessing the survey. RESULTS The potential advantages of ChatGPT in pharmacy practice were widely acknowledged by the participants. The majority of participants (69.9%) concurred that ChatGPT can provide educational material about pharmacy products or therapeutic areas, and 66.9% of respondents believed that ChatGPT is a machine learning algorithm. Concerns about the accuracy of AI-generated responses were also prevalent. More than half of the participants (55.7%) raised the possibility that AI systems such as ChatGPT could pick up on and replicate prejudices and discriminatory patterns from the data they were trained on. Analysis shows a statistically significant, albeit minor, positive link between the perceived advantages of ChatGPT and its drawbacks (r = 0.255, p < 0.001). However, concerns were strongly correlated with knowledge of ChatGPT:
individuals who had heard of ChatGPT were more likely to have strong concerns (79.8%) than those who were either unsure or had not heard of it (64.2%) (p = 0.002). Finally, the results show a statistically significant association between the frequency of ChatGPT use and positive perceptions of the tool (p < 0.001). CONCLUSIONS Although ChatGPT has shown promise in health and pharmaceutical practice, its application should be rigorously regulated by evidence-based policy. According to the study's findings, pharmacists support the use of ChatGPT in pharmacy practice but have concerns about its use due to ethical reasons, legal problems, privacy concerns, worries about the accuracy of the data generated, data learning, and bias risk.
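Spearman's rho, used above because the scores were not normally distributed, ranks both variables before correlating them, so it is robust to skew and ordinal (Likert-type) scales. A sketch on simulated composite scores with a weak built-in positive association (all data invented; only the method mirrors the study):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 400  # hypothetical number of pharmacist respondents

# Simulated Likert-type composite scores with a weak positive relation,
# echoing the kind of weak correlation the abstract reports (r ~ 0.26).
benefits = rng.integers(1, 6, n).astype(float)
concerns = np.clip(np.round(0.3 * benefits + rng.normal(3.0, 1.2, n)), 1, 5)

# Rank-based correlation: monotone association without normality assumptions.
rho, p = spearmanr(benefits, concerns)
print(f"Spearman's rho = {rho:.3f}, p = {p:.4g}")
```

Unlike Pearson's r, rho only assumes a monotone relationship, which is why it is the usual choice when a normality check fails on survey scores.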
Affiliation(s)
- Khawla Abu Hammour
- Department of Clinical Pharmacy and Biopharmaceutics, Faculty of Pharmacy, University of Jordan, Amman, Jordan
- Hamza Alhamad
- Department of Clinical Pharmacy, Faculty of Pharmacy, Zarqa University, Zarqa, Jordan
- Fahmi Y Al-Ashwal
- Department of Clinical Pharmacy, College of Pharmacy, Al-Ayen University, Thi-Qar, Iraq
- Department of Clinical Pharmacy and Pharmacy Practice, Faculty of Pharmacy, University of Science and Technology, Sana'a, Yemen
- Abdulsalam Halboup
- Department of Clinical Pharmacy and Pharmacy Practice, Faculty of Pharmacy, University of Science and Technology, Sana'a, Yemen
- Discipline of Clinical Pharmacy, School of Pharmaceutical Sciences, University Sains Malaysia, Gelugor, Pulau Pinang, Malaysia
- Rana Abu Farha
- Clinical Pharmacy and Therapeutics Department, Faculty of Pharmacy, Applied Science Private University, P.O. Box 11937, Amman, Jordan
- Adnan Abu Hammour
- Medrise Medical Center, Dubai Healthcare City, Dubai, United Arab Emirates

30
Temsah R, Altamimi I, Alhasan K, Temsah MH, Jamal A. Healthcare's New Horizon With ChatGPT's Voice and Vision Capabilities: A Leap Beyond Text. Cureus 2023; 15:e47469. [PMID: 37873042 PMCID: PMC10590619 DOI: 10.7759/cureus.47469] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/22/2023] [Indexed: 10/25/2023] Open
Abstract
The integration of artificial intelligence (AI) in healthcare is responsible for a paradigm shift in medicine. OpenAI's recent augmentation of their Generative Pre-trained Transformer (ChatGPT) large language model (LLM) with voice and image recognition capabilities (OpenAI, Delaware) presents another potential transformative tool for healthcare. Envision a healthcare setting where professionals engage in dynamic interactions with ChatGPT to navigate the complexities of atypical medical scenarios. In this innovative landscape, practitioners could solicit ChatGPT's expertise for concise summarizations and insightful extrapolations from a myriad of web-based resources pertaining to similar medical conditions. Furthermore, imagine patients using ChatGPT to identify abnormalities in medical images or skin lesions. While the prospects are diverse, challenges such as suboptimal audio quality and ensuring data security necessitate cautious integration in medical practice. Drawing insights from previous ChatGPT iterations could provide a prudent roadmap for navigating possible challenges. This editorial explores some possible horizons and potential hurdles of ChatGPT's enhanced functionalities in healthcare, emphasizing the importance of continued refinements and vigilance to maximize the benefits while minimizing risks. Through collaborative efforts between AI developers and healthcare professionals, another fusion of AI and healthcare can evolve into enriched patient care and enhanced medical experience.
Affiliation(s)
- Reem Temsah
- College of Pharmacy, Alfaisal University, Riyadh, SAU
- Khalid Alhasan
- Pediatric Nephrology, King Saud University, Riyadh, SAU
- Solid Organ Transplant Center of Excellence, King Faisal Specialist Hospital and Research Centre, Riyadh, SAU
- Mohamad-Hani Temsah
- Evidence-Based Health Care & Knowledge Translation Research, King Saud University, Riyadh, SAU
- College of Medicine, King Saud University, Riyadh, SAU
- Amr Jamal
- Evidence-Based Health Care & Knowledge Translation Research, King Saud University, Riyadh, SAU
- College of Medicine, King Saud University, Riyadh, SAU
31
Miao H, Li C, Wang J. A Future of Smarter Digital Health Empowered by Generative Pretrained Transformer. J Med Internet Res 2023; 25:e49963. [PMID: 37751243 PMCID: PMC10565615 DOI: 10.2196/49963] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Revised: 07/30/2023] [Accepted: 08/28/2023] [Indexed: 09/27/2023] Open
Abstract
Generative pretrained transformer (GPT) tools have been thriving, as ignited by the remarkable success of OpenAI's recent chatbot product. GPT technology offers countless opportunities to significantly improve or renovate current health care research and practice paradigms, especially digital health interventions and digital health-enabled clinical care, and a future of smarter digital health can thus be expected. In particular, GPT technology can be incorporated through various digital health platforms in homes and hospitals embedded with numerous sensors, wearables, and remote monitoring devices. In this viewpoint paper, we highlight recent research progress that depicts the future picture of a smarter digital health ecosystem through GPT-facilitated centralized communications, automated analytics, personalized health care, and instant decision-making.
Affiliation(s)
- Hongyu Miao
- College of Nursing, Florida State University, Tallahassee, FL, United States
- Chengdong Li
- College of Nursing, Florida State University, Tallahassee, FL, United States
- Jing Wang
- College of Nursing, Florida State University, Tallahassee, FL, United States
32
Levkovich I, Elyoseph Z. Suicide Risk Assessments Through the Eyes of ChatGPT-3.5 Versus ChatGPT-4: Vignette Study. JMIR Ment Health 2023; 10:e51232. [PMID: 37728984 PMCID: PMC10551796 DOI: 10.2196/51232] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Revised: 08/22/2023] [Accepted: 08/24/2023] [Indexed: 09/22/2023] Open
Abstract
BACKGROUND ChatGPT, a linguistic artificial intelligence (AI) model engineered by OpenAI, offers prospective contributions to mental health professionals. Although having significant theoretical implications, ChatGPT's practical capabilities, particularly regarding suicide prevention, have not yet been substantiated. OBJECTIVE The study's aim was to evaluate ChatGPT's ability to assess suicide risk, taking into consideration 2 discernable factors-perceived burdensomeness and thwarted belongingness-over a 2-month period. In addition, we evaluated whether ChatGPT-4 more accurately evaluated suicide risk than did ChatGPT-3.5. METHODS ChatGPT was tasked with assessing a vignette that depicted a hypothetical patient exhibiting differing degrees of perceived burdensomeness and thwarted belongingness. The assessments generated by ChatGPT were subsequently contrasted with standard evaluations rendered by mental health professionals. Using both ChatGPT-3.5 and ChatGPT-4 (May 24, 2023), we executed 3 evaluative procedures in June and July 2023. Our intent was to scrutinize ChatGPT-4's proficiency in assessing various facets of suicide risk in relation to the evaluative abilities of both mental health professionals and an earlier version of ChatGPT-3.5 (March 14 version). RESULTS During the period of June and July 2023, we found that the likelihood of suicide attempts as evaluated by ChatGPT-4 was similar to the norms of mental health professionals (n=379) under all conditions (average Z score of 0.01). Nonetheless, a pronounced discrepancy was observed regarding the assessments performed by ChatGPT-3.5 (May version), which markedly underestimated the potential for suicide attempts, in comparison to the assessments carried out by the mental health professionals (average Z score of -0.83). 
The empirical evidence suggests that ChatGPT-4's evaluation of the incidence of suicidal ideation and psychache was higher than that of the mental health professionals (average Z score of 0.47 and 1.00, respectively). Conversely, the level of resilience as assessed by both ChatGPT-4 and ChatGPT-3.5 (both versions) was observed to be lower in comparison to the assessments offered by mental health professionals (average Z score of -0.89 and -0.90, respectively). CONCLUSIONS The findings suggest that ChatGPT-4 estimates the likelihood of suicide attempts in a manner akin to evaluations provided by professionals. In terms of recognizing suicidal ideation, ChatGPT-4 appears to be more precise. However, regarding psychache, there was an observed overestimation by ChatGPT-4, indicating a need for further research. These results have implications regarding ChatGPT-4's potential to support gatekeepers, patients, and even mental health professionals' decision-making. Despite the clinical potential, intensive follow-up studies are necessary to establish the use of ChatGPT-4's capabilities in clinical practice. The finding that ChatGPT-3.5 frequently underestimates suicide risk, especially in severe cases, is particularly troubling. It indicates that ChatGPT may downplay one's actual suicide risk level.
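The Z-score comparison described in this abstract can be sketched as follows. This is a minimal illustration only: the professional norm statistics and model ratings below are hypothetical placeholder values, not data from the study.

```python
def z_score(model_rating, norm_mean, norm_sd):
    """Standardize a model's rating against the professionals' norm distribution."""
    return (model_rating - norm_mean) / norm_sd

# Hypothetical example: professionals rate the likelihood of a suicide attempt
# for one vignette condition with mean 4.2 and SD 1.1 (illustrative values).
norm_mean, norm_sd = 4.2, 1.1

gpt4_rating = 4.21   # rating close to the professional norm -> Z near 0
gpt35_rating = 3.3   # underestimate -> clearly negative Z

print(round(z_score(gpt4_rating, norm_mean, norm_sd), 2))   # near 0
print(round(z_score(gpt35_rating, norm_mean, norm_sd), 2))  # negative
```

A Z score near 0 means the model's assessment matches the professional norms (as reported for ChatGPT-4), while a markedly negative average Z (as for ChatGPT-3.5) indicates systematic underestimation.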
Affiliation(s)
- Inbar Levkovich
- Oranim Academic College, Faculty of Graduate Studies, Kiryat Tivon, Israel
- Zohar Elyoseph
- Department of Psychology and Educational Counseling, The Center for Psychobiological Research, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
33
ChatGPT Acceptance and Use Among Undergraduate Students. ARTIFICIAL INTELLIGENCE APPLICATIONS USING CHATGPT IN EDUCATION 2023:31-47. [DOI: 10.4018/978-1-6684-9300-7.ch003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/02/2024]
Abstract
In this chapter, the authors use an adapted technology acceptance model (ATAM) to examine the extent to which undergraduate students perceived ChatGPT to be a resource that is useful and easy to use. A pilot study was performed with undergraduate students from different disciplines, exploring their perceptions of ChatGPT as part of their research process. A statistical analysis was performed using Smart-PLS 4.0. The study confirmed the utility of the adapted technology acceptance model for predicting undergraduate students' use of ChatGPT.
34
Chinnadurai S, Mahadevan S, Navaneethakrishnan B, Mamadapur M. Decoding Applications of Artificial Intelligence in Rheumatology. Cureus 2023; 15:e46164. [PMID: 37905264 PMCID: PMC10613315 DOI: 10.7759/cureus.46164] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/27/2023] [Indexed: 11/02/2023] Open
Abstract
Artificial intelligence (AI) is not a newcomer in medicine. It has been employed for image analysis, disease diagnosis, drug discovery, and improving overall patient care. ChatGPT (Chat Generative Pre-trained Transformer, Inc., Delaware) has renewed interest and enthusiasm in artificial intelligence. Algorithms, machine learning, deep learning, and data analysis are some of the complex terminologies often encountered when health professionals try to learn AI. In this article, we try to review the practical applications of artificial intelligence in vernacular language in the fields of medicine and rheumatology in particular. From the standpoint of the everyday physician, we have endeavored to encapsulate the influence of AI on the cutting edge of medical practice and the potential revolutionary shift in the realm of rheumatology.
Affiliation(s)
- Saranya Chinnadurai
- Rheumatology, Sri Ramachandra Institute of Higher Education and Research, Chennai, IND
35
Suppadungsuk S, Thongprayoon C, Krisanapan P, Tangpanithandee S, Garcia Valencia O, Miao J, Mekraksakit P, Kashani K, Cheungpasitporn W. Examining the Validity of ChatGPT in Identifying Relevant Nephrology Literature: Findings and Implications. J Clin Med 2023; 12:5550. [PMID: 37685617 PMCID: PMC10488525 DOI: 10.3390/jcm12175550] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Revised: 08/21/2023] [Accepted: 08/24/2023] [Indexed: 09/10/2023] Open
Abstract
Literature reviews are valuable for summarizing and evaluating the available evidence in various medical fields, including nephrology. However, identifying and exploring the potential sources requires focus and time devoted to literature searching for clinicians and researchers. ChatGPT is a novel artificial intelligence (AI) large language model (LLM) renowned for its exceptional ability to generate human-like responses across various tasks. However, whether ChatGPT can effectively assist medical professionals in identifying relevant literature is unclear. Therefore, this study aimed to assess the effectiveness of ChatGPT in identifying references to literature reviews in nephrology. We keyed the prompt "Please provide the references in Vancouver style and their links in recent literature on… name of the topic" into ChatGPT-3.5 (03/23 Version). We selected all the results provided by ChatGPT and assessed them for existence, relevance, and author/link correctness. We recorded each resource's citations, authors, title, journal name, publication year, digital object identifier (DOI), and link. The relevance and correctness of each resource were verified by searching on Google Scholar. Of the total 610 references in the nephrology literature, only 378 (62%) of the references provided by ChatGPT existed, while 31% were fabricated, and 7% of citations were incomplete references. Notably, only 122 (20%) of references were authentic. Additionally, 256 (68%) of the links in the references were found to be incorrect, and the DOI was inaccurate in 206 (54%) of the references. Moreover, among those with a link provided, the link was correct in only 20% of cases, and 3% of the references were irrelevant. Notably, an analysis of specific topics in electrolyte, hemodialysis, and kidney stones found that >60% of the references were inaccurate or misleading, with less reliable authorship and links provided by ChatGPT. 
Based on our findings, the use of ChatGPT as a sole resource for identifying references to literature reviews in nephrology is not recommended. Future studies could explore ways to improve AI language models' performance in identifying relevant nephrology literature.
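The percentage bookkeeping reported in this abstract can be reproduced directly from the stated counts. The helper below only recomputes the percentages; it is a sketch, not the study's actual verification procedure against Google Scholar, and the denominators for the link and DOI figures (existing references rather than all 610) are inferred from the reported percentages.

```python
# Counts taken from the abstract.
total_refs = 610
existing = 378    # references that actually exist
authentic = 122   # references that were fully authentic
bad_links = 256   # incorrect links (among the existing references)
bad_dois = 206    # inaccurate DOIs (among the existing references)

def pct(part, whole):
    """Percentage rounded to the nearest whole number."""
    return round(100 * part / whole)

print(pct(existing, total_refs))   # -> 62
print(pct(authentic, total_refs))  # -> 20
print(pct(bad_links, existing))    # -> 68
print(pct(bad_dois, existing))     # -> 54
```

The recomputed values match the percentages quoted in the abstract, which supports the inference that the link and DOI error rates were taken over the 378 existing references.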
Affiliation(s)
- Supawadee Suppadungsuk
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; (S.S.); (C.T.); (P.K.); (S.T.); (O.G.V.); (J.M.); (P.M.); (K.K.)
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; (S.S.); (C.T.); (P.K.); (S.T.); (O.G.V.); (J.M.); (P.M.); (K.K.)
- Pajaree Krisanapan
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; (S.S.); (C.T.); (P.K.); (S.T.); (O.G.V.); (J.M.); (P.M.); (K.K.)
- Division of Nephrology, Thammasat University Hospital, Pathum Thani 12120, Thailand
- Supawit Tangpanithandee
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; (S.S.); (C.T.); (P.K.); (S.T.); (O.G.V.); (J.M.); (P.M.); (K.K.)
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Oscar Garcia Valencia
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; (S.S.); (C.T.); (P.K.); (S.T.); (O.G.V.); (J.M.); (P.M.); (K.K.)
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; (S.S.); (C.T.); (P.K.); (S.T.); (O.G.V.); (J.M.); (P.M.); (K.K.)
- Poemlarp Mekraksakit
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; (S.S.); (C.T.); (P.K.); (S.T.); (O.G.V.); (J.M.); (P.M.); (K.K.)
- Kianoush Kashani
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; (S.S.); (C.T.); (P.K.); (S.T.); (O.G.V.); (J.M.); (P.M.); (K.K.)
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; (S.S.); (C.T.); (P.K.); (S.T.); (O.G.V.); (J.M.); (P.M.); (K.K.)
36
Kumar M, Mani UA, Tripathi P, Saalim M, Roy S. Artificial Hallucinations by Google Bard: Think Before You Leap. Cureus 2023; 15:e43313. [PMID: 37700993 PMCID: PMC10492900 DOI: 10.7759/cureus.43313] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/10/2023] [Indexed: 09/14/2023] Open
Abstract
One of the critical challenges posed by artificial intelligence (AI) tools like Google Bard (Google LLC, Mountain View, California, United States) is the potential for "artificial hallucinations." These refer to instances where an AI chatbot generates fictional, erroneous, or unsubstantiated information in response to queries. In research, such inaccuracies can lead to the propagation of misinformation and undermine the credibility of scientific literature. The experience presented here highlights the importance of cross-checking the information provided by AI tools with reliable sources and maintaining a cautious approach when utilizing these tools in research writing.
Affiliation(s)
- Mukesh Kumar
- Emergency Medicine, King George's Medical University, Lucknow, IND
- Utsav Anand Mani
- Emergency Medicine, King George's Medical University, Lucknow, IND
- Mohd Saalim
- Emergency Medicine, King George's Medical University, Lucknow, IND
- Sneha Roy
- Medicine, King George's Medical University, Lucknow, IND
37
Meo SA, Al-Masri AA, Alotaibi M, Meo MZS, Meo MOS. ChatGPT Knowledge Evaluation in Basic and Clinical Medical Sciences: Multiple Choice Question Examination-Based Performance. Healthcare (Basel) 2023; 11:2046. [PMID: 37510487 PMCID: PMC10379728 DOI: 10.3390/healthcare11142046] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Revised: 07/12/2023] [Accepted: 07/14/2023] [Indexed: 07/30/2023] Open
Abstract
The Chatbot Generative Pre-Trained Transformer (ChatGPT) has garnered great attention from the public, academicians and science communities. It responds with appropriate and articulate answers and explanations across various disciplines. For the use of ChatGPT in education, research and healthcare, different perspectives exist with some level of ambiguity around its acceptability and ideal uses. However, the literature is acutely lacking in establishing a link to assess the intellectual levels of ChatGPT in the medical sciences. Therefore, the present study aimed to investigate the knowledge level of ChatGPT in medical education both in basic and clinical medical sciences, multiple-choice question (MCQs) examination-based performance and its impact on the medical examination system. In this study, initially, a subject-wise question bank was established with a pool of multiple-choice questions (MCQs) from various medical textbooks and university examination pools. The research team members carefully reviewed the MCQ contents and ensured that the MCQs were relevant to the subject's contents. Each question was scenario-based with four sub-stems and had a single correct answer. In this study, 100 MCQs in various disciplines, including basic medical sciences (50 MCQs) and clinical medical sciences (50 MCQs), were randomly selected from the MCQ bank. The MCQs were manually entered one by one, and a fresh ChatGPT session was started for each entry to avoid memory retention bias. The task was given to ChatGPT to assess the response and knowledge level of ChatGPT. The first response obtained was taken as the final response. Based on a pre-determined answer key, scoring was made on a scale of 0 to 1, with zero representing incorrect and one representing the correct answer. 
The results revealed that out of 100 MCQs in various disciplines of basic and clinical medical sciences, ChatGPT attempted all the MCQs and obtained 37/50 (74%) marks in basic medical sciences and 35/50 (70%) marks in clinical medical sciences, with an overall score of 72/100 (72%) in both basic and clinical medical sciences. It is concluded that ChatGPT obtained a satisfactory score in both basic and clinical medical sciences subjects and demonstrated a degree of understanding and explanation. This study's findings suggest that ChatGPT may be able to assist medical students and faculty in medical education settings since it has potential as an innovation in the framework of medical sciences and education.
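The binary scoring scheme described in this abstract (1 for a correct answer, 0 otherwise) is straightforward to reproduce. In the sketch below, the answer key and responses are hypothetical five-question placeholders, not items from the study's MCQ bank.

```python
def score_mcqs(responses, answer_key):
    """Binary MCQ scoring: 1 point per correct answer, 0 otherwise."""
    return sum(1 for q, ans in answer_key.items() if responses.get(q) == ans)

# Hypothetical 5-item example (the study used 50 basic + 50 clinical MCQs).
answer_key = {"Q1": "A", "Q2": "C", "Q3": "B", "Q4": "D", "Q5": "A"}
responses  = {"Q1": "A", "Q2": "C", "Q3": "D", "Q4": "D", "Q5": "B"}

correct = score_mcqs(responses, answer_key)
print(f"{correct}/{len(answer_key)} = {100 * correct / len(answer_key):.0f}%")
```

Applied to the study's reported tallies, the same arithmetic yields 37/50 = 74% (basic sciences), 35/50 = 70% (clinical sciences), and 72/100 = 72% overall.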
Affiliation(s)
- Sultan Ayoub Meo
- Department of Physiology, College of Medicine, King Saud University, Riyadh 11461, Saudi Arabia;
- Abeer A. Al-Masri
- Department of Physiology, College of Medicine, King Saud University, Riyadh 11461, Saudi Arabia
- Metib Alotaibi
- University Diabetes Unit, Department of Medicine, College of Medicine, King Saud University, Riyadh 11461, Saudi Arabia
38
Altamimi I, Altamimi A, Alhumimidi AS, Altamimi A, Temsah MH. Artificial Intelligence (AI) Chatbots in Medicine: A Supplement, Not a Substitute. Cureus 2023; 15:e40922. [PMID: 37496532 PMCID: PMC10367431 DOI: 10.7759/cureus.40922] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/25/2023] [Indexed: 07/28/2023] Open
Abstract
This editorial discusses the role of artificial intelligence (AI) chatbots in the healthcare sector, emphasizing their potential as supplements rather than substitutes for medical professionals. While AI chatbots have demonstrated significant potential in managing routine tasks, processing vast amounts of data, and aiding in patient education, they still lack the empathy, intuition, and experience intrinsic to human healthcare providers. Furthermore, the deployment of AI in medicine brings forth ethical and legal considerations that require robust regulatory measures. As we move towards the future, the editorial underscores the importance of a collaborative model, wherein AI chatbots and medical professionals work together to optimize patient outcomes. Despite the potential for AI advancements, the likelihood of chatbots completely replacing medical professionals remains low, as the complexity of healthcare necessitates human involvement. The ultimate aim should be to use technology like AI chatbots to enhance patient care and outcomes, not to replace the irreplaceable human elements of healthcare.
Affiliation(s)
- Abdullah Altamimi
- Pediatric Emergency, Toxicology, King Fahad Medical City, Riyadh, SAU
- Abdulaziz Altamimi
- College of Medicine, King Saud Bin Abdulaziz University for Health Sciences, Riyadh, SAU