1
Khosravi M, Mojtabaeian SM, Demiray EKD, Sayar B. A Systematic Review of the Outcomes of Utilization of Artificial Intelligence Within the Healthcare Systems of the Middle East: A Thematic Analysis of Findings. Health Sci Rep 2024; 7:e70300. PMID: 39720235; PMCID: PMC11667773; DOI: 10.1002/hsr2.70300.
Abstract
Background and Aims Artificial intelligence (AI) is expanding rapidly within healthcare systems worldwide. In this context, the Middle East has demonstrated distinctive characteristics in the application of AI within the healthcare sector, shaped in particular by regional policies. This study examined the outcomes of AI utilization within healthcare systems in the Middle East. Methods A systematic review was conducted in 2024 across several databases, including PubMed, Scopus, ProQuest, and the Cochrane Database of Systematic Reviews. The quality of the included studies was assessed using the Authority, Accuracy, Coverage, Objectivity, Date, Significance (AACODS) checklist. A thematic analysis was then carried out on the acquired data, following the Boyatzis approach. Results A total of 100 papers were included. The quality and risk of bias of the included studies were within an acceptable range. Multiple themes were derived from the thematic analysis, including: "Prediction of diseases, their diagnosis, and outcomes," "Prediction of organizational issues and attributes," "Prediction of mental health issues and attributes," "Prediction of polypharmacy and emotional analysis of texts," "Prediction of climate change issues and attributes," and "Prediction and identification of success and satisfaction among healthcare individuals." Conclusion The findings emphasized AI's significant potential in addressing prevalent healthcare challenges in the Middle East, such as cancer, diabetes, and climate change, and its potential to overhaul healthcare systems. The findings also highlighted the need for policymakers and administrators to develop concrete plans to integrate AI effectively into healthcare systems.
Affiliation(s)
- Mohsen Khosravi
- Imam Hossein Hospital, Shahroud University of Medical Sciences, Shahroud, Iran
- Seyyed Morteza Mojtabaeian
- Department of Healthcare Services Management, School of Management and Medical Informatics, Shiraz University of Medical Sciences, Shiraz, Iran
- Burak Sayar
- Bitlis Eren University, Vocational School of Social Sciences, Bitlis, Türkiye
2
Giske CG, Bressan M, Fiechter F, Hinic V, Mancini S, Nolte O, Egli A. GPT-4-based AI agents-the new expert system for detection of antimicrobial resistance mechanisms? J Clin Microbiol 2024; 62:e0068924. PMID: 39417635; PMCID: PMC11559085; DOI: 10.1128/jcm.00689-24.
Abstract
The European Committee on Antimicrobial Susceptibility Testing (EUCAST) recommends a two-step approach for detecting beta-lactamases in Gram-negative bacteria: screening for potential extended-spectrum beta-lactamase (ESBL), plasmid-mediated AmpC beta-lactamase, or carbapenemase production, followed by confirmatory testing. We aimed to validate generative pre-trained transformer (GPT)-4 and a GPT-agent for pre-classification of disk diffusion results to indicate potential beta-lactamases. We assigned 225 Gram-negative isolates, based on phenotypic resistance against beta-lactam antibiotics and additional tests, to one or more resistance mechanisms: "none," "ESBL," "AmpC," or "carbapenemase." Next, we customized a GPT-agent with EUCAST guidelines and the breakpoint table (v13.1). We compared routine diagnostics (reference) to (i) the EUCAST-GPT-expert, (ii) microbiologists, and (iii) non-customized GPT-4, and determined sensitivities and specificities for flagging suspected resistances. Three microbiologists showed concordance in 814/862 (94.4%) phenotypic categories and used a median of eight words (interquartile range [IQR] 4-11) for reasoning. Median sensitivity/specificity for ESBL, AmpC, and carbapenemase were 98%/99.1%, 96.8%/97.1%, and 95.5%/98.5%, respectively. Three prompts of the EUCAST-GPT-expert showed concordance in 706/862 (81.9%) categories but used a median of 158 words (IQR 140-174) for reasoning. Sensitivity/specificity for ESBL, AmpC, and carbapenemase prediction were 95.4%/69.23%, 96.9%/86.3%, and 100%/98.8%, respectively. Non-customized GPT-4 could interpret 169/862 (19.6%) categories, of which 137/169 (81.1%) agreed with routine diagnostics; it used a median of 85 words (IQR 72-105) for reasoning. Microbiologists showed higher concordance and shorter argumentation than the GPT-agents, and humans showed higher specificities than GPT-agents.
The GPT-agent's unspecific flagging of ESBL and AmpC potentially results in additional testing, diagnostic delays, and higher costs. GPT-4 is not approved by regulatory bodies, and validation of large language models is needed. IMPORTANCE The study titled "GPT-4-based AI agents-the new expert system for detection of antimicrobial resistance mechanisms?" is critically important as it explores the integration of advanced artificial intelligence (AI) technologies, like generative pre-trained transformer (GPT)-4, into laboratory medicine, specifically the diagnostics of antimicrobial resistance (AMR). With the growing challenge of AMR, there is a pressing need for innovative solutions that can enhance diagnostic accuracy and efficiency. This research assesses the capability of AI to support the existing two-step confirmatory process recommended by the European Committee on Antimicrobial Susceptibility Testing for detecting beta-lactamases in Gram-negative bacteria. By potentially speeding up and improving the precision of initial screenings, AI could reduce the time to appropriate treatment interventions. Furthermore, this study is vital for validating the reliability and safety of AI tools in clinical settings, ensuring they meet stringent regulatory standards before they can be broadly implemented. This could herald a significant shift in how laboratory diagnostics are performed, ultimately leading to better patient outcomes.
Affiliation(s)
- Christian G. Giske
- Division of Clinical Microbiology, Department of Laboratory Medicine, Karolinska Institutet, Stockholm, Sweden
- Department of Clinical Microbiology, Karolinska University Hospital, Solna, Sweden
- Michelle Bressan
- Institute of Medical Microbiology, University of Zurich, Zurich, Switzerland
- Farah Fiechter
- Institute of Medical Microbiology, University of Zurich, Zurich, Switzerland
- Vladimira Hinic
- Institute of Medical Microbiology, University of Zurich, Zurich, Switzerland
- Stefano Mancini
- Institute of Medical Microbiology, University of Zurich, Zurich, Switzerland
- Oliver Nolte
- Institute of Medical Microbiology, University of Zurich, Zurich, Switzerland
- Adrian Egli
- Institute of Medical Microbiology, University of Zurich, Zurich, Switzerland
3
Tam TYC, Sivarajkumar S, Kapoor S, Stolyar AV, Polanska K, McCarthy KR, Osterhoudt H, Wu X, Visweswaran S, Fu S, Mathur P, Cacciamani GE, Sun C, Peng Y, Wang Y. A framework for human evaluation of large language models in healthcare derived from literature review. NPJ Digit Med 2024; 7:258. PMID: 39333376; PMCID: PMC11437138; DOI: 10.1038/s41746-024-01258-7.
Abstract
With generative artificial intelligence (GenAI), particularly large language models (LLMs), continuing to make inroads in healthcare, assessing LLMs with human evaluations is essential to assuring safety and effectiveness. This study reviews existing literature on human evaluation methodologies for LLMs in healthcare across various medical specialties and addresses factors such as evaluation dimensions, sample types and sizes, selection, and recruitment of evaluators, frameworks and metrics, evaluation process, and statistical analysis type. Our literature review of 142 studies shows gaps in reliability, generalizability, and applicability of current human evaluation practices. To overcome such significant obstacles to healthcare LLM developments and deployments, we propose QUEST, a comprehensive and practical framework for human evaluation of LLMs covering three phases of workflow: Planning, Implementation and Adjudication, and Scoring and Review. QUEST is designed with five proposed evaluation principles: Quality of Information, Understanding and Reasoning, Expression Style and Persona, Safety and Harm, and Trust and Confidence.
Affiliation(s)
- Thomas Yu Chow Tam
- Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA
- Sumit Kapoor
- Department of Critical Care Medicine, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Alisa V Stolyar
- Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA
- Katelyn Polanska
- Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA
- Karleigh R McCarthy
- Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA
- Hunter Osterhoudt
- Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA
- Xizhi Wu
- Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA
- Shyam Visweswaran
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA, USA
- Clinical and Translational Science Institute, University of Pittsburgh, Pittsburgh, PA, USA
- Sunyang Fu
- Department of Clinical and Health Informatics, Center for Translational AI Excellence and Applications in Medicine, University of Texas Health Science Center at Houston, Houston, TX, USA
- Piyush Mathur
- Department of Anesthesiology, Cleveland Clinic, Cleveland, OH, USA
- BrainX AI ReSearch, BrainX LLC, Cleveland, OH, USA
- Giovanni E Cacciamani
- Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Cong Sun
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Yifan Peng
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Yanshan Wang
- Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA
- Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA, USA
- Clinical and Translational Science Institute, University of Pittsburgh, Pittsburgh, PA, USA
- Hillman Cancer Center, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
4
Sallam M, Al-Salahat K, Eid H, Egger J, Puladi B. Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5 and Humans in Clinical Chemistry Multiple-Choice Questions. Adv Med Educ Pract 2024; 15:857-871. PMID: 39319062; PMCID: PMC11421444; DOI: 10.2147/amep.s479801.
Abstract
Introduction Artificial intelligence (AI) chatbots excel in language understanding and generation, and these models could transform healthcare education and practice. However, it is important to assess the performance of such AI models across various topics to highlight their strengths and possible limitations. This study aimed to evaluate the performance of ChatGPT (GPT-3.5 and GPT-4), Bing, and Bard compared to human students at a postgraduate master's level in Medical Laboratory Sciences. Methods The study design was based on the METRICS checklist for the design and reporting of AI-based studies in healthcare. The study utilized a dataset of 60 Clinical Chemistry multiple-choice questions (MCQs) initially conceived for assessing 20 MSc students. The revised Bloom's taxonomy was used as the framework for classifying the MCQs into four cognitive categories: Remember, Understand, Analyze, and Apply. A modified version of the CLEAR tool was used to assess the quality of AI-generated content, with Cohen's κ for inter-rater agreement. Results Compared to the students' mean score of 0.68 ± 0.23, GPT-4 scored 0.90 ± 0.30, followed by Bing (0.77 ± 0.43), GPT-3.5 (0.73 ± 0.45), and Bard (0.67 ± 0.48). Significantly better performance was noted in the lower cognitive domains (Remember and Understand) for GPT-3.5 (P=0.041), GPT-4 (P=0.003), and Bard (P=0.017) compared to the higher cognitive domains (Apply and Analyze). The CLEAR scores indicated that ChatGPT-4's performance was "Excellent" compared to the "Above average" performance of ChatGPT-3.5, Bing, and Bard. Discussion The findings indicated that ChatGPT-4 excelled in the Clinical Chemistry exam, while ChatGPT-3.5, Bing, and Bard were above average. Given that the MCQs were directed at postgraduate students with a high degree of specialization, the performance of these AI chatbots was remarkable.
Due to the risk of academic dishonesty and possible dependence on these AI models, the appropriateness of MCQs as an assessment tool in higher education should be re-evaluated.
Affiliation(s)
- Malik Sallam
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, Jordan
- Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman, Jordan
- Scientific Approaches to Fight Epidemics of Infectious Diseases (SAFE-ID) Research Group, The University of Jordan, Amman, Jordan
- Khaled Al-Salahat
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, Jordan
- Scientific Approaches to Fight Epidemics of Infectious Diseases (SAFE-ID) Research Group, The University of Jordan, Amman, Jordan
- Huda Eid
- Scientific Approaches to Fight Epidemics of Infectious Diseases (SAFE-ID) Research Group, The University of Jordan, Amman, Jordan
- Jan Egger
- Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Essen, Germany
- Behrus Puladi
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
5
Hassona Y, Alqaisi D, Al-Haddad A, Georgakopoulou EA, Malamos D, Alrashdan MS, Sawair F. How good is ChatGPT at answering patients' questions related to early detection of oral (mouth) cancer? Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:269-278. PMID: 38714483; DOI: 10.1016/j.oooo.2024.04.010.
Abstract
OBJECTIVES To examine the quality, reliability, readability, and usefulness of ChatGPT in promoting early detection of oral cancer. STUDY DESIGN A total of 108 patient-oriented questions about early detection of oral cancer were compiled from expert panels, professional societies, and web-based tools. Questions were categorized into 4 topic domains, and ChatGPT 3.5 was asked each question independently. ChatGPT answers were evaluated for quality, readability, actionability, and usefulness; two experienced reviewers independently assessed each response. RESULTS Questions related to clinical appearance constituted 36.1% (n = 39) of the total questions. ChatGPT provided "very useful" responses to the majority of questions (75%; n = 81). The mean Global Quality Score was 4.24 ± 1.3 of 5. The mean reliability score was 23.17 ± 9.87 of 25. The mean understandability score was 76.6% ± 25.9%, while the mean actionability score was 47.3% ± 18.9%. The mean FKS reading ease score was 38.4 ± 29.9, while the mean SMOG index readability score was 11.65 ± 8.4. No misleading information was identified among ChatGPT responses. CONCLUSION ChatGPT is an attractive and potentially useful resource for informing patients about early detection of oral cancer. Nevertheless, concerns remain about the readability and actionability of the offered information.
Affiliation(s)
- Yazan Hassona
- Faculty of Dentistry, Centre for Oral Diseases Studies (CODS), Al-Ahliyya Amman University, Jordan; School of Dentistry, The University of Jordan, Jordan
- Dua'a Alqaisi
- School of Dentistry, The University of Jordan, Jordan
- Eleni A Georgakopoulou
- Molecular Carcinogenesis Group, Department of Histology and Embryology, Medical School, National and Kapodistrian University of Athens, Greece
- Dimitris Malamos
- Oral Medicine Clinic of the National Organization for the Provision of Health, Athens, Greece
- Mohammad S Alrashdan
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
- Faleh Sawair
- School of Dentistry, The University of Jordan, Jordan
6
Yilmaz Muluk S, Olcucu N. Comparative Analysis of Artificial Intelligence Platforms: ChatGPT-3.5 and GoogleBard in Identifying Red Flags of Low Back Pain. Cureus 2024; 16:e63580. PMID: 39087174; PMCID: PMC11290316; DOI: 10.7759/cureus.63580.
Abstract
BACKGROUND Low back pain (LBP) is a prevalent healthcare concern that is frequently responsive to conservative treatment. However, it can also stem from severe conditions, marked by 'red flags' (RF) such as malignancy, cauda equina syndrome, fractures, infections, spondyloarthropathies, and aneurysm rupture, which physicians should be vigilant about. Given the increasing reliance on online health information, this study assessed ChatGPT-3.5's (OpenAI, San Francisco, CA, USA) and GoogleBard's (Google, Mountain View, CA, USA) accuracy in responding to RF-related LBP questions and their capacity to discriminate the severity of the condition. METHODS We created 70 questions on RF-related symptoms and diseases following the LBP guidelines. Among them, 58 had a single symptom (SS) and 12 had multiple symptoms (MS) of LBP. Questions were posed to ChatGPT and GoogleBard, and responses were assessed by two authors for accuracy, completeness, and relevance (ACR) using a 5-point rubric. RESULTS Cohen's kappa values (0.60-0.81) indicated significant agreement between the authors. The average scores for responses ranged from 3.47 to 3.85 for ChatGPT-3.5 and from 3.36 to 3.76 for GoogleBard on the 58 SS questions, and from 4.04 to 4.29 for ChatGPT-3.5 and from 3.50 to 3.71 for GoogleBard on the 12 MS questions. The ratings for these responses ranged from 'good' to 'excellent'. Most SS responses effectively conveyed the severity of the situation (93.1% for ChatGPT-3.5, 94.8% for GoogleBard), and all MS responses did so. No statistically significant differences were found between ChatGPT-3.5 and GoogleBard scores (p>0.05). CONCLUSIONS In an era characterized by widespread online health information seeking, artificial intelligence (AI) systems play a vital role in delivering precise medical information. These technologies may hold promise in the field of health information if they continue to improve.
Affiliation(s)
- Nazli Olcucu
- Physical Medicine and Rehabilitation, Antalya Ataturk State Hospital, Antalya, TUR
7
Bharatha A, Ojeh N, Fazle Rabbi AM, Campbell MH, Krishnamurthy K, Layne-Yarde RNA, Kumar A, Springer DCR, Connell KL, Majumder MAA. Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom's Taxonomy. Adv Med Educ Pract 2024; 15:393-400. PMID: 38751805; PMCID: PMC11094742; DOI: 10.2147/amep.s457408.
Abstract
Introduction This research investigated the capabilities of ChatGPT-4 compared to medical students in answering MCQs using the revised Bloom's Taxonomy as a benchmark. Methods A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing. Results The study included 304 MCQs. Students demonstrated good knowledge, with 78% correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) compared to students (66.7%). Course type significantly affected ChatGPT-4's performance, but revised Bloom's Taxonomy levels did not. A detailed association check between program levels and Bloom's taxonomy levels for correct answers by ChatGPT-4 showed a highly significant correlation (p<0.001), reflecting a concentration of "remember-level" questions in preclinical and "evaluate-level" questions in clinical courses. Discussion The study highlights ChatGPT-4's proficiency in standardized tests but indicates limitations in clinical reasoning and practical skills. This performance discrepancy suggests that the effectiveness of artificial intelligence (AI) varies based on course content. Conclusion While ChatGPT-4 shows promise as an educational tool, its role should be supplementary, with strategic integration into medical education to leverage its strengths and address limitations. Further research is needed to explore AI's impact on medical education and student performance across educational levels and courses.
Affiliation(s)
- Ambadasu Bharatha
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
- Nkemcho Ojeh
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
- Michael H Campbell
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
- Alok Kumar
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
- Dale C R Springer
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
- Kenneth L Connell
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
8
Abdaljaleel M, Barakat M, Alsanafi M, Salim NA, Abazid H, Malaeb D, Mohammed AH, Hassan BAR, Wayyes AM, Farhan SS, Khatib SE, Rahal M, Sahban A, Abdelaziz DH, Mansour NO, AlZayer R, Khalil R, Fekih-Romdhane F, Hallit R, Hallit S, Sallam M. A multinational study on the factors influencing university students' attitudes and usage of ChatGPT. Sci Rep 2024; 14:1983. PMID: 38263214; PMCID: PMC10806219; DOI: 10.1038/s41598-024-52549-8.
Abstract
Artificial intelligence models, like ChatGPT, have the potential to revolutionize higher education when implemented properly. This study aimed to investigate the factors influencing university students' attitudes and usage of ChatGPT in Arab countries. The survey instrument "TAME-ChatGPT" was administered to 2240 participants from Iraq, Kuwait, Egypt, Lebanon, and Jordan. Of those, 46.8% had heard of ChatGPT, and 52.6% had used it before the study. The results indicated that a positive attitude and usage of ChatGPT were determined by factors like ease of use, positive attitude towards technology, social influence, perceived usefulness, behavioral/cognitive influences, low perceived risks, and low anxiety. Confirmatory factor analysis indicated the adequacy of the "TAME-ChatGPT" constructs. Multivariate analysis demonstrated that the attitude towards ChatGPT usage was significantly influenced by country of residence, age, university type, and recent academic performance. This study validated "TAME-ChatGPT" as a useful tool for assessing ChatGPT adoption among university students. The successful integration of ChatGPT in higher education relies on the perceived ease of use, perceived usefulness, positive attitude towards technology, social influence, behavioral/cognitive elements, low anxiety, and minimal perceived risks. Policies for ChatGPT adoption in higher education should be tailored to individual contexts, considering the variations in student attitudes observed in this study.
Affiliation(s)
- Maram Abdaljaleel
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, 11942, Jordan
- Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman, 11942, Jordan
- Muna Barakat
- Department of Clinical Pharmacy and Therapeutics, Faculty of Pharmacy, Applied Science Private University, Amman, 11931, Jordan
- Mariam Alsanafi
- Department of Pharmacy Practice, Faculty of Pharmacy, Kuwait University, Kuwait City, Kuwait
- Department of Pharmaceutical Sciences, Public Authority for Applied Education and Training, College of Health Sciences, Safat, Kuwait
- Nesreen A Salim
- Prosthodontic Department, School of Dentistry, The University of Jordan, Amman, 11942, Jordan
- Prosthodontic Department, Jordan University Hospital, Amman, 11942, Jordan
- Husam Abazid
- Department of Clinical Pharmacy and Therapeutics, Faculty of Pharmacy, Applied Science Private University, Amman, 11931, Jordan
- Diana Malaeb
- College of Pharmacy, Gulf Medical University, P.O. Box 4184, Ajman, United Arab Emirates
- Ali Haider Mohammed
- School of Pharmacy, Monash University Malaysia, Jalan Lagoon Selatan, 47500, Bandar Sunway, Selangor Darul Ehsan, Malaysia
- Sinan Subhi Farhan
- Department of Anesthesia, Al Rafidain University College, Baghdad, 10001, Iraq
- Sami El Khatib
- Department of Biomedical Sciences, School of Arts and Sciences, Lebanese International University, Bekaa, Lebanon
- Center for Applied Mathematics and Bioinformatics (CAMB), Gulf University for Science and Technology (GUST), 32093, Hawally, Kuwait
- Mohamad Rahal
- School of Pharmacy, Lebanese International University, Beirut, 961, Lebanon
- Ali Sahban
- School of Dentistry, The University of Jordan, Amman, 11942, Jordan
- Doaa H Abdelaziz
- Pharmacy Practice and Clinical Pharmacy Department, Faculty of Pharmacy, Future University in Egypt, Cairo, 11835, Egypt
- Department of Clinical Pharmacy, Faculty of Pharmacy, Al-Baha University, Al-Baha, Saudi Arabia
- Noha O Mansour
- Clinical Pharmacy and Pharmacy Practice Department, Faculty of Pharmacy, Mansoura University, Mansoura, 35516, Egypt
- Clinical Pharmacy and Pharmacy Practice Department, Faculty of Pharmacy, Mansoura National University, Dakahlia Governorate, 7723730, Egypt
- Reem AlZayer
- Clinical Pharmacy Practice, Department of Pharmacy, Mohammed Al-Mana College for Medical Sciences, 34222, Dammam, Saudi Arabia
- Roaa Khalil
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, 11942, Jordan
- Feten Fekih-Romdhane
- The Tunisian Center of Early Intervention in Psychosis, Department of Psychiatry "Ibn Omrane", Razi Hospital, 2010, Manouba, Tunisia
- Faculty of Medicine of Tunis, Tunis El Manar University, Tunis, Tunisia
- Rabih Hallit
- School of Medicine and Medical Sciences, Holy Spirit University of Kaslik, Jounieh, Lebanon
- Department of Infectious Disease, Bellevue Medical Center, Mansourieh, Lebanon
- Department of Infectious Disease, Notre Dame des Secours, University Hospital Center, Byblos, Lebanon
- Souheil Hallit
- School of Medicine and Medical Sciences, Holy Spirit University of Kaslik, Jounieh, Lebanon
- Research Department, Psychiatric Hospital of the Cross, Jal Eddib, Lebanon
- Malik Sallam
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, 11942, Jordan
- Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman, 11942, Jordan