1
Sallam M, Al-Salahat K, Eid H, Egger J, Puladi B. Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5 and Humans in Clinical Chemistry Multiple-Choice Questions. Adv Med Educ Pract 2024; 15:857-871. PMID: 39319062; PMCID: PMC11421444; DOI: 10.2147/amep.s479801.
Abstract
Introduction Artificial intelligence (AI) chatbots excel in language understanding and generation, and these models could transform healthcare education and practice. However, it is important to assess the performance of such AI models across various topics to highlight their strengths and possible limitations. This study aimed to evaluate the performance of ChatGPT (GPT-3.5 and GPT-4), Bing, and Bard compared with human students at a postgraduate master's level in Medical Laboratory Sciences. Methods The study design was based on the METRICS checklist for the design and reporting of AI-based studies in healthcare. The study utilized a dataset of 60 Clinical Chemistry multiple-choice questions (MCQs) originally devised to assess 20 MSc students. The revised Bloom's taxonomy was used as the framework for classifying the MCQs into four cognitive categories: Remember, Understand, Analyze, and Apply. A modified version of the CLEAR tool was used to assess the quality of the AI-generated content, with Cohen's κ used for inter-rater agreement. Results Compared with the students' mean score of 0.68 ± 0.23, GPT-4 scored 0.90 ± 0.30, followed by Bing (0.77 ± 0.43), GPT-3.5 (0.73 ± 0.45), and Bard (0.67 ± 0.48). Significantly better performance was noted in the lower cognitive domains (Remember and Understand) than in the higher cognitive domains (Apply and Analyze) for GPT-3.5 (P = 0.041), GPT-4 (P = 0.003), and Bard (P = 0.017). The CLEAR scores indicated that ChatGPT-4's performance was "Excellent", compared with the "Above average" performance of ChatGPT-3.5, Bing, and Bard. Discussion The findings indicated that ChatGPT-4 excelled in the Clinical Chemistry exam, while ChatGPT-3.5, Bing, and Bard performed above average. Given that the MCQs were directed at postgraduate students with a high degree of specialization, the performance of these AI chatbots was remarkable. Considering the risk of academic dishonesty and possible dependence on these AI models, the appropriateness of MCQs as an assessment tool in higher education should be re-evaluated.
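The two statistics this abstract leans on — Cohen's κ for inter-rater agreement on the CLEAR ratings and mean ± SD correctness scores — can be reproduced in a few lines of Python. The sketch below uses invented ratings and answers, not the study's data, and only illustrates the method.

```python
# Illustrative only: invented ratings/answers, not the study's data.
from sklearn.metrics import cohen_kappa_score
import numpy as np

# Hypothetical 5-point CLEAR ratings of the same AI-generated answers by two raters
rater_a = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]
rater_b = [4, 5, 3, 3, 2, 5, 4, 4, 5, 4]
kappa = cohen_kappa_score(rater_a, rater_b)  # 1 = perfect agreement, 0 = chance level
print(f"Cohen's kappa: {kappa:.2f}")

# Hypothetical per-question correctness (1 = correct, 0 = incorrect) for one chatbot,
# summarised the same way as the reported mean +/- SD scores (e.g., 0.90 +/- 0.30)
answers = np.array([1, 1, 1, 0, 1, 1, 1, 1, 0, 1])
print(f"mean +/- SD: {answers.mean():.2f} +/- {answers.std(ddof=1):.2f}")
```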
Affiliation(s)
- Malik Sallam
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, Jordan
- Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman, Jordan
- Scientific Approaches to Fight Epidemics of Infectious Diseases (SAFE-ID) Research Group, The University of Jordan, Amman, Jordan
- Khaled Al-Salahat
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, Jordan
- Scientific Approaches to Fight Epidemics of Infectious Diseases (SAFE-ID) Research Group, The University of Jordan, Amman, Jordan
- Huda Eid
- Scientific Approaches to Fight Epidemics of Infectious Diseases (SAFE-ID) Research Group, The University of Jordan, Amman, Jordan
- Jan Egger
- Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Essen, Germany
- Behrus Puladi
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
2
Sallam M, Al-Mahzoum K, Almutawaa RA, Alhashash JA, Dashti RA, AlSafy DR, Almutairi RA, Barakat M. The performance of OpenAI ChatGPT-4 and Google Gemini in virology multiple-choice questions: a comparative analysis of English and Arabic responses. BMC Res Notes 2024; 17:247. PMID: 39228001; PMCID: PMC11373487; DOI: 10.1186/s13104-024-06920-7.
Abstract
OBJECTIVE The integration of artificial intelligence (AI) into healthcare education is inevitable, and understanding the proficiency of generative AI in different languages in answering complex questions is crucial for educational purposes. The study objective was to compare the performance of ChatGPT-4 and Gemini in answering Virology multiple-choice questions (MCQs) in English and Arabic, while assessing the quality of the generated content. Both AI models' responses to 40 Virology MCQs were assessed for correctness and quality based on the CLEAR tool designed for the evaluation of AI-generated content. The MCQs were classified into lower and higher cognitive categories based on the revised Bloom's taxonomy. The study design considered the METRICS checklist for the design and reporting of generative AI-based studies in healthcare. RESULTS ChatGPT-4 and Gemini performed better in English than in Arabic, with ChatGPT-4 consistently surpassing Gemini in correctness and CLEAR scores. ChatGPT-4 led Gemini with 80% vs. 62.5% correctness in English, compared with 65% vs. 55% in Arabic. Both AI models performed better in the lower cognitive domains. ChatGPT-4 and Gemini exhibited potential in educational applications; nevertheless, their performance varied across languages, highlighting the importance of continued development to ensure effective AI integration in healthcare education globally.
Affiliation(s)
- Malik Sallam
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, 11942, Jordan.
- Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Queen Rania Al-Abdullah Street-Aljubeiha, P.O. Box: 13046, Amman, 11942, Jordan.
- Muna Barakat
- Department of Clinical Pharmacy and Therapeutics, Faculty of Pharmacy, Applied Science Private University, Amman, 11931, Jordan
3
Siebielec J, Ordak M, Oskroba A, Dworakowska A, Bujalska-Zadrozny M. Assessment Study of ChatGPT-3.5's Performance on the Final Polish Medical Examination: Accuracy in Answering 980 Questions. Healthcare (Basel) 2024; 12:1637. PMID: 39201195; PMCID: PMC11353589; DOI: 10.3390/healthcare12161637.
Abstract
BACKGROUND/OBJECTIVES The use of artificial intelligence (AI) in education is growing dynamically, and models such as ChatGPT show potential for enhancing medical education. In Poland, to obtain a medical diploma, candidates must pass the Medical Final Examination, which consists of 200 questions with one correct answer per question, is administered in Polish, and assesses students' comprehensive medical knowledge and readiness for clinical practice. The aim of this study was to determine how ChatGPT-3.5 handles the questions included in this exam. METHODS The study considered 980 questions from five examination sessions of the Medical Final Examination conducted by the Medical Examination Center in the years 2022-2024. The analysis included the field of medicine, the difficulty index of the questions, and their type, namely theoretical versus case-study questions. RESULTS The average correct answer rate achieved by ChatGPT-3.5 across the five examination sessions hovered around 60% and was lower (p < 0.001) than the average score achieved by the examinees. The lowest percentage of correct answers was in hematology (42.1%), while the highest was in endocrinology (78.6%). The difficulty index of the questions showed a statistically significant correlation with the correctness of the answers (p = 0.04). Questions for which ChatGPT-3.5 provided incorrect answers had a lower (p < 0.001) percentage of correct responses among examinees. The type of question did not significantly affect the correctness of the answers (p = 0.46). CONCLUSIONS This study indicates that ChatGPT-3.5 can be an effective tool for assisting in passing the final medical exam, but the results should be interpreted cautiously. It is recommended to further verify the correctness of the answers using various AI tools.
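A correlation between a continuous difficulty index and a binary right/wrong outcome, as reported above, is commonly computed as a point-biserial correlation. The sketch below illustrates that calculation with invented values; the abstract does not state which coefficient the authors actually used.

```python
# Illustrative only: invented values; the abstract does not name the exact coefficient.
from scipy.stats import pointbiserialr

# 1 = ChatGPT-3.5 answered correctly, 0 = incorrectly (hypothetical)
chatgpt_correct = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
# Difficulty index: fraction of examinees answering each question correctly (hypothetical)
difficulty_index = [0.82, 0.45, 0.73, 0.66, 0.38, 0.91,
                    0.41, 0.77, 0.69, 0.35, 0.88, 0.71]

r, p = pointbiserialr(chatgpt_correct, difficulty_index)
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")
```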
Affiliation(s)
- Michal Ordak
- Department of Pharmacotherapy and Pharmaceutical Care, Faculty of Pharmacy, Medical University of Warsaw, 02-091 Warsaw, Poland; (J.S.); (A.O.); (A.D.); (M.B.-Z.)
4
Bharatha A, Ojeh N, Fazle Rabbi AM, Campbell MH, Krishnamurthy K, Layne-Yarde RNA, Kumar A, Springer DCR, Connell KL, Majumder MAA. Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom's Taxonomy. Adv Med Educ Pract 2024; 15:393-400. PMID: 38751805; PMCID: PMC11094742; DOI: 10.2147/amep.s457408.
Abstract
Introduction This study investigated the capabilities of ChatGPT-4 compared with medical students in answering MCQs, using the revised Bloom's taxonomy as a benchmark. Methods A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing. Results The study included 304 MCQs. Students demonstrated good knowledge, with 78% correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) than the students (66.7%). Course type significantly affected ChatGPT-4's performance, but revised Bloom's taxonomy levels did not. A detailed check of the association between program level and Bloom's taxonomy level for the questions ChatGPT-4 answered correctly showed a highly significant association (p < 0.001), reflecting a concentration of "Remember"-level questions in preclinical courses and "Evaluate"-level questions in clinical courses. Discussion The study highlights ChatGPT-4's proficiency in standardized tests but indicates limitations in clinical reasoning and practical skills. This performance discrepancy suggests that the effectiveness of artificial intelligence (AI) varies with course content. Conclusion While ChatGPT-4 shows promise as an educational tool, its role should be supplementary, with strategic integration into medical education to leverage its strengths and address its limitations. Further research is needed to explore AI's impact on medical education and student performance across educational levels and courses.
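An association check of this kind is typically run as a chi-square test of independence on a course-level × Bloom's-level contingency table of correctly answered questions. The sketch below is illustrative only, with invented counts; the abstract does not name the specific test the authors applied.

```python
# Illustrative only: invented contingency counts; the abstract does not name the test used.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: program level; columns: revised Bloom's level of questions ChatGPT-4 answered correctly
#                   Remember  Understand  Apply  Analyze  Evaluate
counts = np.array([[     40,        25,     10,       5,        2],   # preclinical
                   [      8,        15,     20,      25,       30]])  # clinical

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```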
Affiliation(s)
- Ambadasu Bharatha
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
- Nkemcho Ojeh
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
- Michael H Campbell
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
- Alok Kumar
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
- Dale C R Springer
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
- Kenneth L Connell
- Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados
5
Abdaljaleel M, Barakat M, Alsanafi M, Salim NA, Abazid H, Malaeb D, Mohammed AH, Hassan BAR, Wayyes AM, Farhan SS, Khatib SE, Rahal M, Sahban A, Abdelaziz DH, Mansour NO, AlZayer R, Khalil R, Fekih-Romdhane F, Hallit R, Hallit S, Sallam M. A multinational study on the factors influencing university students' attitudes and usage of ChatGPT. Sci Rep 2024; 14:1983. PMID: 38263214; PMCID: PMC10806219; DOI: 10.1038/s41598-024-52549-8.
Abstract
Artificial intelligence models like ChatGPT have the potential to revolutionize higher education when implemented properly. This study aimed to investigate the factors influencing university students' attitudes towards, and usage of, ChatGPT in Arab countries. The survey instrument "TAME-ChatGPT" was administered to 2240 participants from Iraq, Kuwait, Egypt, Lebanon, and Jordan. Of those, 46.8% had heard of ChatGPT, and 52.6% had used it before the study. The results indicated that a positive attitude towards, and usage of, ChatGPT were determined by factors such as ease of use, a positive attitude towards technology, social influence, perceived usefulness, behavioral/cognitive influences, low perceived risks, and low anxiety. Confirmatory factor analysis indicated the adequacy of the "TAME-ChatGPT" constructs. Multivariate analysis demonstrated that attitude towards ChatGPT usage was significantly influenced by country of residence, age, university type, and recent academic performance. This study validated "TAME-ChatGPT" as a useful tool for assessing ChatGPT adoption among university students. The successful integration of ChatGPT into higher education relies on perceived ease of use, perceived usefulness, a positive attitude towards technology, social influence, behavioral/cognitive elements, low anxiety, and minimal perceived risks. Policies for ChatGPT adoption in higher education should be tailored to individual contexts, considering the variations in student attitudes observed in this study.
Affiliation(s)
- Maram Abdaljaleel
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, 11942, Jordan
- Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman, 11942, Jordan
- Muna Barakat
- Department of Clinical Pharmacy and Therapeutics, Faculty of Pharmacy, Applied Science Private University, Amman, 11931, Jordan
- Mariam Alsanafi
- Department of Pharmacy Practice, Faculty of Pharmacy, Kuwait University, Kuwait City, Kuwait
- Department of Pharmaceutical Sciences, Public Authority for Applied Education and Training, College of Health Sciences, Safat, Kuwait
- Nesreen A Salim
- Prosthodontic Department, School of Dentistry, The University of Jordan, Amman, 11942, Jordan
- Prosthodontic Department, Jordan University Hospital, Amman, 11942, Jordan
- Husam Abazid
- Department of Clinical Pharmacy and Therapeutics, Faculty of Pharmacy, Applied Science Private University, Amman, 11931, Jordan
- Diana Malaeb
- College of Pharmacy, Gulf Medical University, P.O. Box 4184, Ajman, United Arab Emirates
- Ali Haider Mohammed
- School of Pharmacy, Monash University Malaysia, Jalan Lagoon Selatan, 47500, Bandar Sunway, Selangor Darul Ehsan, Malaysia
- Sinan Subhi Farhan
- Department of Anesthesia, Al Rafidain University College, Baghdad, 10001, Iraq
- Sami El Khatib
- Department of Biomedical Sciences, School of Arts and Sciences, Lebanese International University, Bekaa, Lebanon
- Center for Applied Mathematics and Bioinformatics (CAMB), Gulf University for Science and Technology (GUST), 32093, Hawally, Kuwait
- Mohamad Rahal
- School of Pharmacy, Lebanese International University, Beirut, 961, Lebanon
- Ali Sahban
- School of Dentistry, The University of Jordan, Amman, 11942, Jordan
- Doaa H Abdelaziz
- Pharmacy Practice and Clinical Pharmacy Department, Faculty of Pharmacy, Future University in Egypt, Cairo, 11835, Egypt
- Department of Clinical Pharmacy, Faculty of Pharmacy, Al-Baha University, Al-Baha, Saudi Arabia
- Noha O Mansour
- Clinical Pharmacy and Pharmacy Practice Department, Faculty of Pharmacy, Mansoura University, Mansoura, 35516, Egypt
- Clinical Pharmacy and Pharmacy Practice Department, Faculty of Pharmacy, Mansoura National University, Dakahlia Governorate, 7723730, Egypt
- Reem AlZayer
- Clinical Pharmacy Practice, Department of Pharmacy, Mohammed Al-Mana College for Medical Sciences, 34222, Dammam, Saudi Arabia
- Roaa Khalil
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, 11942, Jordan
- Feten Fekih-Romdhane
- The Tunisian Center of Early Intervention in Psychosis, Department of Psychiatry "Ibn Omrane", Razi Hospital, 2010, Manouba, Tunisia
- Faculty of Medicine of Tunis, Tunis El Manar University, Tunis, Tunisia
- Rabih Hallit
- School of Medicine and Medical Sciences, Holy Spirit University of Kaslik, Jounieh, Lebanon
- Department of Infectious Disease, Bellevue Medical Center, Mansourieh, Lebanon
- Department of Infectious Disease, Notre Dame des Secours, University Hospital Center, Byblos, Lebanon
- Souheil Hallit
- School of Medicine and Medical Sciences, Holy Spirit University of Kaslik, Jounieh, Lebanon
- Research Department, Psychiatric Hospital of the Cross, Jal Eddib, Lebanon
- Malik Sallam
- Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, 11942, Jordan.
- Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman, 11942, Jordan.
6
Lin SY, Hsu YY, Ju SW, Yeh PC, Hsu WH, Kao CH. Assessing AI efficacy in medical knowledge tests: A study using Taiwan's internal medicine exam questions from 2020 to 2023. Digit Health 2024; 10:20552076241291404. PMID: 39430693; PMCID: PMC11490985; DOI: 10.1177/20552076241291404.
Abstract
Background The aim of this study was to evaluate the ability of generative artificial intelligence (AI) models to handle specialized medical knowledge and problem-solving in a formal examination context. Methods The study used internal medicine exam questions provided by the Taiwan Internal Medicine Society from 2020 to 2023 to test three AI models: GPT-4o, Claude 3.5 Sonnet, and Gemini Advanced. Queries rejected by Gemini Advanced were translated into French for resubmission. Performance was assessed using IBM SPSS Statistics 26, with accuracy percentages calculated and statistical analyses such as Pearson correlation and analysis of variance (ANOVA) performed to gauge AI efficacy. Results GPT-4o's top annual score was 86.25 in 2022, with an average of 81.97. Claude 3.5 Sonnet reached a peak score of 88.13 in 2021 and 2022, averaging 84.85, while Gemini Advanced lagged behind with an average score of 69.84. In specific specialties, Claude 3.5 Sonnet scored highest in Psychiatry (100%) and Nephrology (97.26%), with GPT-4o performing similarly well in Hematology & Oncology (97.10%) and Nephrology (94.52%). Gemini Advanced's best scores were in Psychiatry (86.96%) and Hematology & Oncology (82.76%), and it struggled with Neurology, scoring below 60%. Additionally, all models performed better on text-based questions than on image-based ones, although the differences were not statistically significant. Claude 3 Opus scored highest on COVID-19-related questions at 89.29%, followed by GPT-4o at 75.00% and Gemini Advanced at 67.86%. Conclusions The AI models showed varied proficiency across medical specialties and question types. GPT-4o demonstrated a higher rate of correct answers on image-based questions. Claude 3.5 Sonnet generally and consistently outperformed the other models, highlighting significant potential for AI to assist medical education.
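The reported analyses were run in IBM SPSS Statistics 26; a rough Python equivalent of the ANOVA and correlation steps is sketched below, with invented yearly accuracies standing in for the 2020-2023 exam scores.

```python
# Illustrative only: invented yearly accuracies; the study itself used IBM SPSS Statistics 26.
from scipy.stats import f_oneway, pearsonr

gpt4o  = [80.1, 83.4, 86.3, 78.1]   # hypothetical accuracy (%) for 2020-2023
claude = [84.0, 88.1, 88.1, 79.3]
gemini = [68.5, 72.0, 70.4, 68.5]

f_stat, p_anova = f_oneway(gpt4o, claude, gemini)   # do the three models differ overall?
r, p_corr = pearsonr(gpt4o, claude)                 # do two models rise and fall together?
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}; Pearson r = {r:.2f}, p = {p_corr:.3f}")
```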
Affiliation(s)
- Shih-Yi Lin
- Graduate Institute of Biomedical Sciences, College of Medicine, China Medical University, Taichung
- Division of Nephrology and Kidney Institute, China Medical University Hospital, Taichung
- Ying-Yu Hsu
- 11th Grade Student, National Changhua Senior High School, Changhua
- Shu-Woei Ju
- Division of Nephrology and Kidney Institute, China Medical University Hospital, Taichung
- Pei-Chun Yeh
- Artificial Intelligence Center, China Medical University Hospital, Taichung
- Wu-Huei Hsu
- Graduate Institute of Biomedical Sciences, College of Medicine, China Medical University, Taichung
- Department of Chest Medicine, China Medical University Hospital, Taichung
- Chia-Hung Kao
- Graduate Institute of Biomedical Sciences, College of Medicine, China Medical University, Taichung
- Artificial Intelligence Center, China Medical University Hospital, Taichung
- Department of Nuclear Medicine and PET Center, China Medical University Hospital, Taichung
- Department of Bioinformatics and Medical Engineering, Asia University, Taichung
7
Sallam M, Al-Salahat K, Al-Ajlouni E. ChatGPT Performance in Diagnostic Clinical Microbiology Laboratory-Oriented Case Scenarios. Cureus 2023; 15:e50629. PMID: 38107211; PMCID: PMC10725273; DOI: 10.7759/cureus.50629.
Abstract
BACKGROUND Artificial intelligence (AI)-based tools can reshape healthcare practice. This includes ChatGPT, which is considered among the most popular AI-based conversational models. Nevertheless, the performance of different versions of ChatGPT needs further evaluation in different settings to assess their reliability and credibility in various healthcare-related tasks. Therefore, the current study aimed to assess the performance of the freely available ChatGPT-3.5 and the paid version, ChatGPT-4, in 10 different diagnostic clinical microbiology case scenarios. METHODS The study followed the METRICS (Model, Evaluation, Timing/Transparency, Range/Randomization, Individual factors, Count, Specificity of the prompts/language) checklist for standardizing the design and reporting of AI-based studies in healthcare. The models, tested on December 3, 2023, were ChatGPT-3.5 and ChatGPT-4, and the evaluation of the ChatGPT-generated content was based on the CLEAR tool (Completeness, Lack of false information, Evidence support, Appropriateness, and Relevance), assessed on a 5-point Likert scale with CLEAR scores ranging from 1 to 5. ChatGPT output was evaluated independently by two raters, and inter-rater agreement was based on Cohen's κ statistic. Ten diagnostic clinical microbiology laboratory case scenarios were created in English by three microbiologists at diverse levels of expertise, following an internal discussion of common cases observed in Jordan. The topics spanned bacteriology, mycology, parasitology, and virology cases. Specific prompts were tailored based on the CLEAR tool, and a new session was started for each case scenario. RESULTS The Cohen's κ values for the five CLEAR items were 0.351-0.737 for ChatGPT-3.5 and 0.294-0.701 for ChatGPT-4, indicating fair to good agreement and suitability for analysis. Based on the average CLEAR scores, ChatGPT-4 outperformed ChatGPT-3.5 (mean: 2.64 ± 1.06 vs. 3.21 ± 1.05, P = 0.012, t-test). The performance of each model varied across the CLEAR items, with the lowest performance on the "Relevance" item (2.15 ± 0.71 for ChatGPT-3.5 and 2.65 ± 1.16 for ChatGPT-4). A statistically significant difference in performance across the CLEAR items was seen only for ChatGPT-4, with the best performance in "Completeness", "Lack of false information", and "Evidence support" (P = 0.043). The lowest level of performance for both models was observed on antimicrobial susceptibility testing (AST) queries, while the highest was seen in bacterial and mycologic identification. CONCLUSIONS Assessment of ChatGPT performance across different diagnostic clinical microbiology case scenarios showed that ChatGPT-4 outperformed ChatGPT-3.5. ChatGPT performance demonstrated noticeable variability depending on the specific topic evaluated. A primary shortcoming of both models was the tendency to generate irrelevant content lacking the needed focus. Although the overall ChatGPT performance in these diagnostic microbiology case scenarios might be described as "above average" at best, there remains significant potential for improvement, considering the identified limitations and unsatisfactory results in a few cases.
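A minimal sketch of the t-test comparison of average CLEAR scores reported above; the per-case scores are invented, and the abstract does not state whether a paired or independent-samples test was applied.

```python
# Illustrative only: invented per-case CLEAR means; paired vs. independent test unspecified.
from scipy.stats import ttest_ind

clear_gpt35 = [2.2, 3.0, 2.6, 1.8, 3.4, 2.0, 2.8, 3.6, 2.4, 2.6]  # one mean score per case
clear_gpt4  = [3.4, 3.8, 2.8, 2.6, 4.2, 3.0, 3.2, 4.0, 2.4, 2.7]

t_stat, p_value = ttest_ind(clear_gpt35, clear_gpt4)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```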
Affiliation(s)
- Malik Sallam
- Department of Pathology, Microbiology and Forensic Medicine, The University of Jordan, School of Medicine, Amman, JOR
- Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman, JOR
- Khaled Al-Salahat
- Department of Pathology, Microbiology and Forensic Medicine, The University of Jordan, School of Medicine, Amman, JOR
- Eyad Al-Ajlouni
- Department of Pathology, Microbiology and Forensic Medicine, The University of Jordan, School of Medicine, Amman, JOR