1. Fatima SS, Sheikh NA, Osama A. Authentic assessment in medical education: exploring AI integration and student-as-partners collaboration. Postgrad Med J 2024:qgae088. PMID: 39041454. DOI: 10.1093/postmj/qgae088.
Abstract
BACKGROUND Traditional assessments often lack flexibility, personalized feedback, real-world applicability, and the ability to measure skills beyond rote memorization. They may not adequately accommodate diverse learning styles and preferences, nor do they always foster critical thinking or creativity. The inclusion of Artificial Intelligence (AI), especially Generative Pre-trained Transformers, in medical education marks a significant shift, offering both exciting opportunities and notable challenges for authentic assessment practices. Various fields, including anatomy, physiology, pharmacy, dentistry, and pathology, are anticipated to make increasing use of the metaverse for authentic assessments. This innovative approach will likely enable students to engage in immersive, project-based learning experiences, facilitating interdisciplinary collaboration and providing a platform for real-world application of knowledge and skills. METHODS This commentary paper explores how AI, authentic assessment, and Student-as-Partners (SaP) methodologies can work together to reshape assessment practices in medical education. RESULTS The paper provides practical insights into effectively utilizing AI tools to create authentic assessments, offering educators actionable guidance to enhance their teaching practices. It also addresses the challenges and ethical considerations inherent in implementing AI-driven assessments, emphasizing the need for responsible and inclusive practices within medical education. Advocating for a collaborative approach between AI and SaP methodologies, the commentary proposes a robust plan to ensure ethical use while upholding academic integrity. CONCLUSION Through navigating emerging assessment paradigms and promoting genuine evaluation of medical knowledge and proficiency, this collaborative effort aims to elevate the quality of medical education and better prepare learners for the complexities of clinical practice.
Affiliation(s)
- Syeda Sadia Fatima: Department of Biological and Biomedical Sciences, Aga Khan University, Karachi 74800, Pakistan
- Nabeel Ashfaque Sheikh: Medical Oncology, Shaukat Khanum Memorial Cancer Hospital and Research Center, Lahore 54000, Pakistan
- Athar Osama: INNOVentures Global (Pvt) Ltd., Karachi 75350, Pakistan
2. Ba H, Zhang L, Yi Z. Enhancing clinical skills in pediatric trainees: a comparative study of ChatGPT-assisted and traditional teaching methods. BMC Medical Education 2024;24:558. PMID: 38778332. PMCID: PMC11112818. DOI: 10.1186/s12909-024-05565-1.
Abstract
BACKGROUND As artificial intelligence (AI) increasingly integrates into medical education, its specific impact on the development of clinical skills among pediatric trainees needs detailed investigation. Pediatric training presents unique challenges which AI tools like ChatGPT may be well-suited to address. OBJECTIVE This study evaluates the effectiveness of ChatGPT-assisted instruction versus traditional teaching methods on pediatric trainees' clinical skills performance. METHODS A cohort of pediatric trainees (n = 77) was randomly assigned to two groups; one underwent ChatGPT-assisted training, while the other received conventional instruction over a period of two weeks. Performance was assessed using theoretical knowledge exams and Mini-Clinical Evaluation Exercises (Mini-CEX), with particular attention to professional conduct, clinical judgment, patient communication, and overall clinical skills. Trainees' acceptance and satisfaction with the AI-assisted method were evaluated through a structured survey. RESULTS Both groups performed similarly in theoretical exams, indicating no significant difference (p > 0.05). However, the ChatGPT-assisted group showed a statistically significant improvement in Mini-CEX scores (p < 0.05), particularly in patient communication and clinical judgment. The AI-teaching approach received positive feedback from the majority of trainees, highlighting the perceived benefits in interactive learning and skill acquisition. CONCLUSION ChatGPT-assisted instruction did not affect theoretical knowledge acquisition but did enhance practical clinical skills among pediatric trainees. The positive reception of the AI-based method suggests that it has the potential to complement and augment traditional training approaches in pediatric education. These promising results warrant further exploration into the broader applications of AI in medical education scenarios.
Affiliation(s)
- Hongjun Ba: Department of Pediatric Cardiology, Heart Center, First Affiliated Hospital of Sun Yat-sen University, 58# Zhongshan Road 2, Guangzhou 510080, China; Key Laboratory on Assisted Circulation, Ministry of Health, 58# Zhongshan Road 2, Guangzhou 510080, China
- Lili Zhang: Department of Pediatric Cardiology, Heart Center, First Affiliated Hospital of Sun Yat-sen University, 58# Zhongshan Road 2, Guangzhou 510080, China
- Zizheng Yi: Department of Pediatric Cardiology, Heart Center, First Affiliated Hospital of Sun Yat-sen University, 58# Zhongshan Road 2, Guangzhou 510080, China
3. Bamber H. Evaluation of the Workplace-Based Assessment Anaesthesia-Clinical Evaluation Exercise (A-CEX) and Its Role in the Royal College of Anaesthetists 2021 Curriculum. Cureus 2023;15:e37402. PMID: 37181999. PMCID: PMC10171902. DOI: 10.7759/cureus.37402.
Abstract
The workplace-based assessment (WPBA) Anaesthesia-Clinical Evaluation Exercise (A-CEX) is used in anaesthetic training in the Royal College of Anaesthetists 2021 curriculum. WPBAs are part of a multimodal approach to assessing competencies, but can be limited by their granularity. They are an essential component of assessment and are used in both a formative and summative capacity. The A-CEX is a form of WPBA which evaluates the knowledge, behaviours and skills of anaesthetists in training across a variety of 'real world' situations. An entrustment scale is assigned to the evaluation, which has implications for future practice and ongoing supervision requirements. Despite being a key component of the curriculum, the A-CEX has drawbacks. Its qualitative nature results in variation in the feedback provided amongst assessors, which may have ongoing implications for clinical practice. Furthermore, the completion of an A-CEX can be viewed as a 'tick box' exercise and does not guarantee that learning has taken place. Currently no direct evidence exists as to the benefit of the A-CEX in anaesthetic training, but extrapolated data from other studies may show validity. However, the assessment remains a key part of the 2021 curriculum. Future areas for consideration include education for those assessing trainees via the A-CEX, altering the matrix of assessment to a less granular approach, and a longitudinal study of the utility of the A-CEX in anaesthetic training.
4. Leep Hunderfund AN, Santilli AR, Rubin DI, Laughlin RS, Sorenson EJ, Park YS. Assessing electrodiagnostic skills among residents and fellows: Relationships between workplace-based assessments using the Electromyography Direct Observation Tool and other measures of trainee performance. Muscle Nerve 2022;66:671-678. PMID: 35470901. DOI: 10.1002/mus.27566.
Abstract
INTRODUCTION/AIMS Graduate medical education programs must ensure residents and fellows acquire skills needed for independent practice. Workplace-based observational assessments are informative but can be time- and resource-intensive. In this study we sought to gather "relations-to-other-variables" validity evidence for scores generated by the Electromyography Direct Observation Tool (EMG-DOT) to inform its use as a measure of electrodiagnostic skill acquisition. METHODS Scores on multiple assessments were compiled by trainees during Clinical Neurophysiology and Electromyography rotations at a large US academic medical center. Relationships between workplace-based EMG-DOT scores (n = 298) and scores on a prerequisite simulated patient exercise, patient experience surveys (n = 199), end-of-rotation evaluations (n = 301), and an American Association of Neuromuscular & Electrodiagnostic Medicine (AANEM) self-assessment examination were assessed using Pearson correlations. RESULTS Among 23 trainees, EMG-DOT scores assigned by physician raters correlated positively with end-of-rotation evaluations (r = 0.63, P = .001), but EMG-DOT scores assigned by technician raters did not (r = 0.10, P = .663). When physician and technician ratings were combined, higher EMG-DOT scores correlated with better patient experience survey scores (r = 0.42, P = .047), but not with simulated patient or AANEM self-assessment examination scores. DISCUSSION End-of-rotation evaluations can provide valid assessments of trainee performance when completed by individuals with ample opportunities to directly observe trainees. Inclusion of observational assessments by technicians and patients provides a more comprehensive view of trainee performance. Workplace- and classroom-based assessments provide complementary information about trainee performance, reflecting underlying differences in types of skills measured.
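The "relations-to-other-variables" evidence described above comes down to Pearson correlations between paired per-trainee scores. As a rough sketch of that computation (toy numbers and invented variable names, not the study's data), one might write:

```python
# Illustrative only: toy values, not data from the study.
from scipy.stats import pearsonr

# Hypothetical per-trainee mean EMG-DOT scores and end-of-rotation evaluation scores
emg_dot_means = [6.1, 7.4, 5.8, 8.0, 7.1, 6.6, 7.9, 6.3]
rotation_evals = [3.9, 4.5, 3.7, 4.8, 4.3, 4.0, 4.6, 3.8]

r, p = pearsonr(emg_dot_means, rotation_evals)  # Pearson r and two-sided p-value
print(f"r = {r:.2f}, P = {p:.3f}")
```

The same call applied to each score pair (patient experience, simulated patient, AANEM self-assessment) would yield the kind of correlation table summarized in the abstract.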
Affiliation(s)
- Ashley R Santilli: Department of Neurology at Mayo Clinic College of Medicine, Mayo Clinic, Rochester, Minnesota
- Devon I Rubin: Department of Neurology at Mayo Clinic College of Medicine, Jacksonville, Florida
- Ruple S Laughlin: Department of Neurology at Mayo Clinic College of Medicine, Mayo Clinic, Rochester, Minnesota
- Eric J Sorenson: Department of Neurology at Mayo Clinic College of Medicine, Mayo Clinic, Rochester, Minnesota
- Yoon S Park: Department of Medical Education, University of Illinois College of Medicine, Chicago, Illinois; Health Professions Education Research at Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
5. Régent A, Arlet JB, Cheminet G, Pouchot J, Mouthon L, Le Jeunne C. [Contribution and limits of "OSCE", "long-case" and "global end-of-placement marking" as end-of-rotation assessment methods. Experience from two internal medicine wards]. Rev Med Interne 2022;43:581-588. PMID: 36089428. DOI: 10.1016/j.revmed.2022.07.014.
Abstract
INTRODUCTION Placements provide an opportunity both to learn clinical skills and to assess their application, but these represent two different goals. The validity of an end-of-placement assessment is questionable, as medical competency is contextual. We decided to evaluate the contribution and limits of different assessment modalities as an end-of-placement assessment. MATERIAL AND METHODS Internal medicine clerks were assessed using the Mini-CEX grid in an objective structured clinical examination (OSCE), a long-case clinical examination (LCE) and a global end-of-placement marking (GEPM). Following these evaluations, students and teachers completed an open questionnaire. RESULTS In 2021, 41 students and 16 teachers participated in the study. Physical examination was evaluated in 0%, 97% and 76% of cases during the OSCE, LCE and GEPM, respectively; teaching skills were assessed for 100%, 42% and 49% of students in the OSCE, LCE and GEPM, respectively. Compared with the OSCE, students and teachers perceived the LCE as superior in its formative value (P=0.07 and P=0.03, respectively) and its summative value (P=0.0007 and P=0.02, respectively). Qualitative analysis highlighted the breadth of clinical skills that could be assessed during OSCE stations. Integration into a team was an additional skill that could specifically be assessed during the GEPM, which could also take into account the progress made during the placement. CONCLUSION Despite its subjectivity, the LCE seemed to be the preferred modality for an end-of-rotation assessment.
Affiliation(s)
- A Régent: Université de Paris, 15, rue de l'école de médecine, 75006 Paris, France; Service de médecine interne, centre de référence maladies auto-immunes et systémiques rares d'Ile de France, hôpital Cochin, AP-HP-CUP, 75014 Paris, France
- J-B Arlet: Université de Paris, 15, rue de l'école de médecine, 75006 Paris, France; Service de médecine interne, centre de référence des syndromes drépanocytaires majeurs, hôpital européen Georges-Pompidou, AP-HP-CUP, 75015 Paris, France
- G Cheminet: Université de Paris, 15, rue de l'école de médecine, 75006 Paris, France; Service de médecine interne, centre de référence des syndromes drépanocytaires majeurs, hôpital européen Georges-Pompidou, AP-HP-CUP, 75015 Paris, France
- J Pouchot: Université de Paris, 15, rue de l'école de médecine, 75006 Paris, France; Service de médecine interne, centre de référence des syndromes drépanocytaires majeurs, hôpital européen Georges-Pompidou, AP-HP-CUP, 75015 Paris, France
- L Mouthon: Université de Paris, 15, rue de l'école de médecine, 75006 Paris, France; Service de médecine interne, centre de référence maladies auto-immunes et systémiques rares d'Ile de France, hôpital Cochin, AP-HP-CUP, 75014 Paris, France
- C Le Jeunne: Université de Paris, 15, rue de l'école de médecine, 75006 Paris, France; Service de médecine interne, centre de référence maladies auto-immunes et systémiques rares d'Ile de France, hôpital Cochin, AP-HP-CUP, 75014 Paris, France
6. Fu CP, Chen YL, Kuo NC, Su CT, Huang CK, Li MW, Chi HY, Yang CL, Chang WY. Developing the Occupational Therapy-Specific Mini-Clinical Evaluation Exercise (Mini-CEX) for Evaluating Interns' Clinical Skills and Attitudes in Pediatric Occupational Therapy. Am J Occup Ther 2022;76:23876. PMID: 35904505. DOI: 10.5014/ajot.2022.049319.
Abstract
IMPORTANCE The Mini-Clinical Evaluation Exercise (Mini-CEX) is highly recommended for assessing interns' performance. OBJECTIVE To develop a pediatric occupational therapy-specific Mini-CEX and examine its psychometrics. DESIGN Stage 1 had a retrospective design; Stage 2 had a prospective design. SETTING Pediatric occupational therapy unit in a hospital in Taiwan. PARTICIPANTS Thirty-four occupational therapy interns were evaluated with the Mini-CEX (physician version), and 57 were evaluated with the occupational therapy-specific Mini-CEX. OUTCOMES AND MEASURES The occupational therapy-specific Mini-CEX was developed with seven items on a 9-point scale categorized into three levels (unsatisfactory, satisfactory, highly satisfactory). RESULTS In Stage 1, the frequency of Mini-CEX (physician version) items receiving a rating of not applicable ranged from 1.9% to 88.1%. In Stage 2, the frequency of occupational therapy-specific Mini-CEX items receiving a rating of not applicable ranged from 3.5% to 31.6%. With the theme of evaluation taken into consideration, the frequency of not-applicable ratings was 0% to 8.8%. For the occupational therapy-specific Mini-CEX, content validity (item-level content validity index = 1, scale-level content validity index = 1) and internal consistency (Cronbach's α = .93) were excellent. The interns' scores on the second evaluation were significantly higher than those on their first evaluation, indicating good discriminant validity. CONCLUSIONS AND RELEVANCE The occupational therapy-specific Mini-CEX appears to be reliable and valid, and it is appropriate for evaluating interns' skills and attitudes in pediatric occupational therapy practice. What This Article Adds: The results support the development of the occupational therapy-specific Mini-CEX and its application in pediatric internship training.
Affiliation(s)
- Chung-Pei Fu, PhD, is Associate Professor, Department of Occupational Therapy, College of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan
- Yu-Lan Chen, MS, is Occupational Therapist, Department of Physical Medicine and Rehabilitation, Taipei Veterans General Hospital, Taipei, Taiwan
- Nung-Chen Kuo, BS, is Director, Department of Occupational Therapy, Taoyuan General Hospital, Ministry of Health and Welfare, Taoyuan, Taiwan
- Chia-Ting Su, PhD, is Professor, Department of Occupational Therapy, College of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan
- Ching-Kai Huang, MS, is Occupational Therapist, Department of Rehabilitation, Chang Gung Memorial Hospital, Linkou, Taiwan
- Ming-Wei Li, BS, is Chief of Occupational Therapists, Department of Rehabilitation, Sijhih Cathay General Hospital, New Taipei City, Taiwan
- Hsin-Yu Chi, BS, is Occupational Therapist, Department of Rehabilitation, Sijhih Cathay General Hospital, New Taipei City, Taiwan
- Chien-Lun Yang, BS, is Occupational Therapist, Child Developmental Center, Far Eastern Memorial Hospital, New Taipei City, Taiwan
- Wan-Ying Chang, MS, is Chief of Therapists, Department of Physical Medicine and Rehabilitation, Taipei Hospital, Ministry of Health and Welfare, New Taipei City, Taiwan
7. Ganji J, Shirvani MA, Motahari-Tabari N, Tayebi T. Design, implementation and evaluation of a virtual clinical training protocol for midwifery internship in a gynecology course during COVID-19 pandemic: A semi-experimental study. Nurse Education Today 2022;111:105293. PMID: 35134637. PMCID: PMC8809642. DOI: 10.1016/j.nedt.2022.105293.
Abstract
BACKGROUND Over the past year, the COVID-19 pandemic has challenged clinical education for health care students, owing to the possibility of exposure to the virus and increased spread of the disease. Clinical training of midwifery students in gynecologic problems, based on the Iran midwifery education curriculum, was also disrupted during this pandemic. OBJECTIVES This study was aimed at designing, implementing and evaluating a virtual clinical training protocol for midwifery internship in a Gynecology course. DESIGN A semi-experimental study. SETTINGS Faculty of Nursing and Midwifery, Mazandaran University of Medical Sciences, Sari, Iran. PARTICIPANTS Forty-seven midwifery interns in the Gynecology course were recruited during two semesters in 2020. METHODS Five steps based on the Analysis, Design, Development, Implementation, and Evaluation (ADDIE) model were taken, which included 1) educational and skills needs assessment, 2) design, 3) development via focused group interviews and brainstorming with the presence of the midwifery department members in three sessions, 4) implementation including pretest and posttest, webinar, uploading the information of virtual patients, questions and correct answers, and 5) evaluation including knowledge assessment by a designed questionnaire and skills evaluation by the modified Mini-CEX checklist. Data were analyzed using mean, standard deviation and the paired t-test. RESULTS After training, a significant increase (p < 0.001) was observed in scores of knowledge and interview skills, clinical judgment, consultation, efficiency, professionalism, clinical competence and the total score of clinical skills. CONCLUSIONS Training for gynecological diseases through a virtual clinic promoted the knowledge and clinical skills of midwifery interns. To enhance education, a virtual clinic may be used in crisis situations and in combination with teaching under normal circumstances, by strengthening the infrastructure and removing barriers.
Affiliation(s)
- Jila Ganji: Department of Midwifery, Sexual and Reproductive Health Research Center, Faculty of Nursing and Midwifery, Mazandaran University of Medical Sciences, Sari, Iran
- Marjan Ahmad Shirvani: Department of Midwifery, Sexual and Reproductive Health Research Center, Faculty of Nursing and Midwifery, Mazandaran University of Medical Sciences, Sari, Iran
- Narges Motahari-Tabari: Department of Midwifery, Faculty of Nursing and Midwifery, Mazandaran University of Medical Sciences, Sari, Iran
- Tahereh Tayebi: Department of Midwifery, Faculty of Nursing and Midwifery, Mazandaran University of Medical Sciences, Sari, Iran
8. Gittinger FP, Lemos M, Neumann JL, Förster J, Dohmen D, Berke B, Olmeo A, Lucas G, Jonas SM. Interrater reliability in the assessment of physiotherapy students. BMC Medical Education 2022;22:186. PMID: 35296313. PMCID: PMC8928589. DOI: 10.1186/s12909-022-03231-y.
Abstract
BACKGROUND Reliable and objective assessment of psychomotor skills in physiotherapy students' education is essential for direct feedback and skill improvement. The aim of this study is to determine the interrater reliability of the assessment process for physiotherapy students and to analyse the assessment behaviour of the examiners. METHODS Physiotherapy teachers from two different schools assessed students from two different schools performing proprioceptive neuromuscular facilitation (PNF) patterns. An evaluation sheet with a 6-point rating scale and 20 evaluation criteria, including an overall rating, was used for assessment. Interrater reliability was determined by calculating an intraclass correlation coefficient (ICC) and Krippendorff's alpha. The assessment behaviour of the examiners was further analysed by calculating location parameters and plotting the item response distribution over items as a Likert plot. RESULTS The ICC estimates were mostly below 0.4, indicating poor interrater reliability. This was confirmed by Krippendorff's alpha. The examiners showed a certain central tendency and intergroup bias. DISCUSSION AND CONCLUSION The interrater reliability in this assessment format was rather low. No difference between the two physiotherapy schools concerning interrater reliability could be identified. Despite certain limitations of this study, there is a definite need to improve the assessment process in physiotherapy education in order to provide students with reliable and objective feedback and to ensure a certain level of professional competence. TRIAL REGISTRATION The study was approved by the ethics committee of the Medical Faculty, RWTH Aachen University (EK 340/16).
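The agreement statistic at the centre of this study is an intraclass correlation computed over a students-by-examiners rating matrix. The sketch below implements one common variant, ICC(2,1) (two-way random effects, single rater, absolute agreement), directly from the ANOVA mean squares; the ratings matrix and function name are invented for illustration, not the paper's data or code, and the companion statistic Krippendorff's alpha is not reproduced here.

```python
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1) after Shrout & Fleiss for an n_subjects x n_raters matrix."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater
    ss_err = ss_total - ss_rows - ss_cols            # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy ratings (rows = students, columns = examiners); not the study's data.
ratings = np.array([[4, 3, 5, 4],
                    [2, 2, 3, 2],
                    [5, 4, 5, 5],
                    [3, 2, 4, 3],
                    [4, 4, 4, 3]], dtype=float)
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")  # values below ~0.4 are usually read as poor agreement
```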
Affiliation(s)
- Flora P Gittinger: Department of Medical Informatics, Faculty of Medicine, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Martin Lemos: Audiovisual Media Center, Faculty of Medicine, RWTH Aachen University, Aachen, Germany
- Jan L Neumann: Schule für Physiotherapie, Uniklinik RWTH Aachen, Aachen, Germany
- Jürgen Förster: Schule für Physiotherapie, Uniklinik RWTH Aachen, Aachen, Germany
- Daniel Dohmen: Schule für Physiotherapie, Uniklinik RWTH Aachen, Aachen, Germany
- Birgit Berke: Berufsfachschule für Physiotherapie, Grone-Bildungszentrum für Gesundheits- und Sozialberufe GmbH, Hamburg, Germany
- Anke Olmeo: Berufsfachschule für Physiotherapie, Grone-Bildungszentrum für Gesundheits- und Sozialberufe GmbH, Hamburg, Germany
- Gisela Lucas: Berufsfachschule für Physiotherapie, Grone-Bildungszentrum für Gesundheits- und Sozialberufe GmbH, Hamburg, Germany
- Stephan M Jonas: Department of Medical Informatics, Faculty of Medicine, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; Department of Informatics, Technical University of Munich, Munich, Germany; Department of Digital Health, University Hospital Bonn, Bonn, Germany
9. Jaiswal A, Kaushik A, Singh AK, Rizvi G. Challenges to new undergraduate medical curriculum due to COVID-19 pandemic and possible solution in India. Medical Journal of Dr. D.Y. Patil Vidyapeeth 2022. DOI: 10.4103/mjdrdypu.mjdrdypu_263_20.
10. He Y, Wen S, Zhou M, Li X, Gong M, Zhou L. A Pilot Study of Modified Mini-Clinical Evaluation Exercises (Mini-CEX) in Rotation Students in the Department of Endocrinology. Diabetes Metab Syndr Obes 2022;15:2031-2038. PMID: 35846182. PMCID: PMC9278438. DOI: 10.2147/dmso.s372253.
Abstract
BACKGROUND The mini-clinical evaluation exercise (mini-CEX) is an excellent tool for assessing the clinical abilities of medical students in intense clinical practice. In this study, the mini-CEX was adapted into a professional questionnaire for Diabetes Mellitus (DM) and examined in medical students completing their clerkship rotation in the department of endocrinology; the aim was to assess the efficacy of administering this modified mini-CEX form to rotating trainees in an endocrine clerkship. METHODS From January 2021 to January 2022, all rotating medical students at Shanghai Pudong Hospital completed two mini-CEX exams, before and following their rotation, under the supervision and guidance of six tutors. The mini-CEX form was modified in this study primarily for inpatient management, based on our clinical experience and the updated DM guidelines of the American Diabetes Association (ADA), the European Association for the Study of Diabetes (EASD), and the Chinese Diabetes Society (CDS). Each component of the mini-CEX assessment, including medical interviews, physical examination, clinical judgment, clinical management, and overall clinical competence, was evaluated using a nine-item questionnaire. RESULTS Performance on the second round of assessments improved significantly, as indicated by higher scores on each component. Pearson association analysis revealed that the feedback time of the first examination was markedly associated with improved overall scores (r= 0.391, p<0.001). However, no correlations were found with patient age, gender, disease severity disparity, or the interval between examinations (p>0.05). Additional regression analysis revealed that the feedback time during the initial examination was the most significant contributor to the increase in overall scores (β=0.391, p<0.001). CONCLUSION This newly designed mini-CEX form, based on current ADA and EASD guidelines, may assist trainees in more effectively diagnosing and managing DM in inpatients, particularly those with macrovascular, microvascular, or peripheral nerve complications.
Affiliation(s)
- Yanju He: Department of Endocrinology, Shanghai Pudong Hospital, Fudan University, Shanghai 201399, People’s Republic of China
- Song Wen: Department of Endocrinology, Shanghai Pudong Hospital, Fudan University, Shanghai 201399, People’s Republic of China
- Mingyue Zhou: Helen Diller Family Comprehensive Cancer Center, University of California, San Francisco, CA, USA
- Xiucai Li: Department of Endocrinology, Shanghai Pudong Hospital, Fudan University, Shanghai 201399, People’s Republic of China
- Min Gong: Department of Endocrinology, Shanghai Pudong Hospital, Fudan University, Shanghai 201399, People’s Republic of China
- Ligang Zhou: Department of Endocrinology, Shanghai Pudong Hospital, Fudan University, Shanghai 201399, People’s Republic of China; Shanghai Key Laboratory of Vascular Lesions Regulation and Remodeling, Shanghai Pudong Hospital, Fudan University, Shanghai 201399, People’s Republic of China
- Correspondence: Ligang Zhou, Department of Endocrinology, Shanghai Pudong Hospital, Fudan University, Shanghai 201399, People’s Republic of China, Tel +8613611927616
11. Liang Y, Noble LM. Chinese doctors' views on workplace-based assessment: trainee and supervisor perspectives of the mini-CEX. Medical Education Online 2021;26:1869393. PMID: 33380291. PMCID: PMC7782920. DOI: 10.1080/10872981.2020.1869393.
Abstract
Purpose: This study investigated whether the mini-clinical evaluation exercise (mini-CEX) has been successfully integrated into the Chinese context, following its introduction as part of the national general training programme. Materials and methods: Online questionnaires (N = 91) and interviews (N = 22) were conducted with Year 1 trainee doctors and clinical supervisors at a cancer hospital in China to explore users' experiences, attitudes and opinions of the mini-CEX. Results: Trainees were more likely than supervisors to report understanding the purpose of the mini-CEX and to agree that it encouraged reflection and helped improve overall performance. Both trainees and supervisors felt that it provided a framework for learning, that it was useful in identifying underperformance, and that it informed learning progression. Groups were equally positive about the commitment of their counterpart in the process and valued the focus on detailed feedback. It was perceived as cultivating the learner-teacher relationship. Overall, both groups felt they 'bought in' to using the mini-CEX. However, concerns were raised about the subjectivity of ratings and the lack of benchmarking against expected standards of care. Conclusions: Chinese trainees and supervisors generally perceived the mini-CEX as an acceptable and valuable medical training tool, although both groups suggested enhancements to improve its efficacy.
Affiliation(s)
- Yuying Liang: Department of Medical Education, Affiliated Cancer Hospital and Institute of Guangzhou Medical University, Guangzhou, China; UCL Medical School, University College London, London, UK
12. Soukoulis V, Martindale J, Bray MJ, Bradley E, Gusic ME. The use of EPA assessments in decision-making: Do supervision ratings correlate with other measures of clinical performance? Medical Teacher 2021;43:1323-1329. PMID: 34242113. DOI: 10.1080/0142159x.2021.1947480.
Abstract
BACKGROUND Entrustable professional activities (EPAs) have been introduced as a framework for teaching and assessment in competency-based educational programs. With growing use, has come a call to examine the validity of EPA assessments. We sought to explore the correlation of EPA assessments with other clinical performance measures to support use of supervision ratings in decisions about medical students' curricular progression. METHODS Spearman rank coefficients were used to determine correlation of supervision ratings from EPA assessments with scores on clerkship evaluations and performance on an end-of-clerkship-year Objective Structured Clinical Examination (CPX). RESULTS Both overall clinical evaluation items score (rho 0.40; n = 166) and CPX patient encounter domain score (rho 0.31; n = 149) showed significant correlation with students' overall mean EPA supervision rating during the clerkship year. There was significant correlation between mean supervision rating for EPA assessments of history, exam, note, and oral presentation skills with scores for these skills on clerkship evaluations; less so on the CPX. CONCLUSIONS Correlation of EPA supervision ratings with commonly used clinical performance measures offers support for their use in undergraduate medical education. Data supporting the validity of EPA assessments promotes stakeholders' acceptance of their use in summative decisions about students' readiness for increased patient care responsibility.
Affiliation(s)
- Victor Soukoulis: Division of Cardiovascular Medicine, University of Virginia School of Medicine, Charlottesville, VA, USA
- James Martindale: Center for Medical Education Research and Scholarly Innovation, University of Virginia School of Medicine, Charlottesville, VA, USA
- Megan J Bray: Center for Medical Education Research and Scholarly Innovation and Department of Obstetrics and Gynecology, University of Virginia School of Medicine, Charlottesville, VA, USA
- Elizabeth Bradley: Center for Medical Education Research and Scholarly Innovation, University of Virginia School of Medicine, Charlottesville, VA, USA
- Maryellen E Gusic: Center for Medical Education Research and Scholarly Innovation and Department of Pediatrics, University of Virginia School of Medicine, Charlottesville, VA, USA
13. Parvathy MS, Parab A, R Nair BK, Matheson C, Ingham K, Gunning L. Longitudinal Outcome of Programmatic Assessment of International Medical Graduates. Advances in Medical Education and Practice 2021;12:1095-1100. PMID: 34588836. PMCID: PMC8476105. DOI: 10.2147/amep.s324412.
Abstract
INTRODUCTION Australia depends on international medical graduates (IMGs) to meet workforce shortages. The current standard assessment for IMGs is a clinical examination in an observed structured clinical encounter (OSCE) format lasting 200 minutes. There are concerns about the adequacy of this assessment, as it does not test the qualities required to practise in a new country. We introduced a programmatic, performance-based assessment for IMGs to prepare them to meet these challenges. The workplace-based assessment (WBA) program involves six months of longitudinal programmatic assessment comprising 12 mini-clinical evaluation exercises (Mini-CEX), five case-based discussions (CBD), two in-training assessments (ITAs) and two sets of multisource feedback (MSF) assessments. We have assessed 254 IMGs since 2010. We conducted a survey to evaluate satisfaction with the program and the outcomes of these doctors. METHODS We surveyed 254 candidates from 2010 to 2020. The survey used the "SelectSurvey" tool with 12 questions and free-text comments. All candidates were sent the survey link to their last registered mobile phone using the "Telstra Instant Messaging Service". We analysed the data using Microsoft Excel. RESULTS We received 153 (60%) responses. Amongst them, 141 (92%) candidates did not require further supervised practice for general registration and 129 (84%) hold general/specialist registration. The candidates found the program useful and felt well supported. They appreciated real patient encounters. The feedback with positive critiquing was helpful in improving their clinical practice. The negative themes were program costs and frustration with the length of the program. CONCLUSION Upon completion of the WBA program and obtaining the AMC certificate, most of the doctors were able to gain general registration. Seventy-eight (50%) candidates chose to continue their careers within the local area, with 124 (80%) remaining within the state. Our survey shows that a comprehensive assessment program with immediate constructive feedback produces competent doctors to fill medical workforce shortages.
Affiliation(s)
- Mulavana S Parvathy: Centre for Medical Professional Development, Hunter New England Local Health District, Newcastle, NSW, Australia; School of Medicine and Public Health, University of Newcastle, Newcastle, NSW, Australia
- Aditee Parab: John Hunter Hospital, Hunter New England Local Health District, Newcastle, NSW, Australia
- Balakrishnan Kichu R Nair: Centre for Medical Professional Development, Hunter New England Local Health District, Newcastle, NSW, Australia; School of Medicine and Public Health, University of Newcastle, Newcastle, NSW, Australia; John Hunter Hospital, Hunter New England Local Health District, Newcastle, NSW, Australia
- Carl Matheson: The Australian Medical Council, Canberra, ACT, Australia
- Kathy Ingham: Centre for Medical Professional Development, Hunter New England Local Health District, Newcastle, NSW, Australia
- Lynette Gunning: Centre for Medical Professional Development, Hunter New England Local Health District, Newcastle, NSW, Australia
14. Felthun JZ, Taylor S, Shulruf B, Allen DW. Empirical analysis comparing the tele-objective structured clinical examination (teleOSCE) and the in-person assessment in Australia. Journal of Educational Evaluation for Health Professions 2021;18:23. PMID: 34551510. PMCID: PMC8616724. DOI: 10.3352/jeehp.2021.18.23.
Abstract
PURPOSE This study aimed to compare the use of the tele-objective structured clinical examination (teleOSCE) with in-person assessment in a high-stakes clinical examination, and to determine the impact of the teleOSCE on the assessment undertaken. Discussion follows regarding which skills and domains can effectively be assessed in a teleOSCE. METHODS This study is a retrospective observational analysis. It compares the results achieved by final-year medical students in their clinical examination, assessed using the teleOSCE in 2020 (n=285), with those who were examined using the traditional in-person format in 2019 (n=280). The study was undertaken at the University of New South Wales, Australia. RESULTS In the domain of physical examination, students in 2020 scored 0.277 points higher than those in 2019 (mean difference -0.277, P<0.001, effect size 0.332). Across all other domains, there was no significant difference in mean scores between 2019 and 2020. CONCLUSION The teleOSCE does not negatively impact assessment in the clinical examination in any domain except physical examination. If the teleOSCE is the future of clinical skills examination, assessment of physical examination will require concomitant workplace-based assessment.
Affiliation(s)
- Silas Taylor: Office of Medical Education, University of New South Wales, Sydney, Australia
- Boaz Shulruf: Office of Medical Education, University of New South Wales, Sydney, Australia; Centre for Medical and Health Sciences Education, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Digby Wigram Allen: School of Medicine, The University of New South Wales, Kensington, Australia
15. Gottlieb M, Jordan J, Siegelman JN, Cooney R, Stehman C, Chan TM. Direct Observation Tools in Emergency Medicine: A Systematic Review of the Literature. AEM Education and Training 2021;5:e10519. PMID: 34041428. PMCID: PMC8138102. DOI: 10.1002/aet2.10519.
Abstract
OBJECTIVES Direct observation is important for assessing the competency of medical learners. Multiple tools have been described in other fields, although the degree of emergency medicine-specific literature is unclear. This review sought to summarize the current literature on direct observation tools in the emergency department (ED) setting. METHODS We searched PubMed, Scopus, CINAHL, the Cochrane Central Register of Clinical Trials, the Cochrane Database of Systematic Reviews, ERIC, PsycINFO, and Google Scholar from 2012 to 2020 for publications on direct observation tools in the ED setting. Data were dual extracted into a predefined worksheet, and quality analysis was performed using the Medical Education Research Study Quality Instrument. RESULTS We identified 38 publications, comprising 2,977 learners. Fifteen different tools were described. The most commonly assessed tools included the Milestones (nine studies), Observed Structured Clinical Exercises (seven studies), the McMaster Modular Assessment Program (six studies), Queen's Simulation Assessment Test (five studies), and the mini-Clinical Evaluation Exercise (four studies). Most of the studies were performed in a single institution, and there were limited validity or reliability assessments reported. CONCLUSIONS The number of publications on direct observation tools for the ED setting has markedly increased. However, there remains a need for stronger internal and external validity data.
Affiliation(s)
- Michael Gottlieb: Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Jaime Jordan: Department of Emergency Medicine, Ronald Reagan UCLA Medical Center, Los Angeles, CA, USA
- Robert Cooney: Department of Emergency Medicine, Geisinger Medical Center, Danville, PA, USA
- Teresa M. Chan: Department of Medicine, Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
16. Martinsen SSS, Espeland T, Berg EAR, Samstad E, Lillebo B, Slørdahl TS. Examining the educational impact of the mini-CEX: a randomised controlled study. BMC Medical Education 2021;21:228. PMID: 33882913. PMCID: PMC8061047. DOI: 10.1186/s12909-021-02670-3.
Abstract
BACKGROUND The purpose of this study is to evaluate the mini-Clinical Evaluation Exercise (mini-CEX) as a formative assessment tool among undergraduate medical students, in terms of student perceptions, effects on direct observation and feedback, and educational impact. METHODS Cluster randomised study of 38 fifth-year medical students during a 16-week clinical placement. Hospitals were randomised to provide a minimum of 8 mini-CEXs per student (intervention arm) or continue with ad-hoc feedback (control arm). After finishing their clinical placement, students completed an Objective Structured Clinical Examination (OSCE), a written test and a survey. RESULTS All participants in the intervention group completed the pre-planned number of assessments, and 60% found them to be useful during their clinical placement. Overall, there were no statistically significant differences between groups in reported quantity or quality of direct observation and feedback. Observed mean scores were marginally higher on the OSCE and written test in the intervention group, but not statistically significant. CONCLUSIONS There is considerable potential in assessing medical students during clinical placements and routine practice, but the educational impact of formative assessments remains mostly unknown. This study contributes with a robust study design, and may serve as a basis for future research.
Affiliation(s)
- Torvald Espeland: Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Clinic of Cardiology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Erik Andreas Rye Berg: Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Clinic of Thoracic and Occupational Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Eivind Samstad: Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Clinic of Medicine and Rehabilitation, Ålesund Hospital, Møre og Romsdal Hospital Trust, Ålesund, Norway
- Børge Lillebo: Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Clinic of Medicine and Rehabilitation, Levanger Hospital, Nord-Trøndelag Hospital Trust, Levanger, Norway
- Tobias S Slørdahl: Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Department of Haematology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
17. Motivation and Evaluation in Education from the Sustainability Perspective: A Review of the Scientific Literature. Sustainability 2021. DOI: 10.3390/su13074047.
Abstract
(1) Background: This paper outlines the results of a literature review of meta-analyses published on motivation and evaluation in the last five years. (2) Methods: A systematic review of three educational databases (WoS, SCOPUS and ERIC) was conducted following the PRISMA and PICO approaches. A total of 54 peer-reviewed meta-analysis papers were selected, analysed and compared. (3) Results: A significant number and variety of meta-analyses have been conducted: motivation meta-analyses focus primarily on contextual variables, self-regulation and students’ academic performance, and evaluation meta-analyses examine the effectiveness of the teaching intervention, the use of teaching methodologies and technological resources for learning. (4) Conclusions: There are two important absences: on the one hand, it is necessary to develop meta-analyses that combine motivation and evaluation, also measuring their interaction, from the perspective of sustainability, and not only of educational improvement, and on the other hand, it is necessary to perform meta-analyses on the effectiveness of the formative and shared evaluation of the sustainability of learning processes.
18. Brand PLP, Jaarsma ADC, van der Vleuten CPM. Driving lesson or driving test? A metaphor to help faculty separate feedback from assessment. Perspectives on Medical Education 2021;10:50-56. PMID: 32902828. PMCID: PMC7809072. DOI: 10.1007/s40037-020-00617-w.
Abstract
Although there is consensus in the medical education world that feedback is an important and effective tool to support experiential workplace-based learning, learners tend to avoid the feedback associated with direct observation because they perceive it as a high-stakes evaluation with significant consequences for their future. The perceived dominance of the summative assessment paradigm throughout medical education reduces learners' willingness to seek feedback, and encourages supervisors to mix up feedback with provision of 'objective' grades or pass/fail marks. This eye-opener article argues that the provision and reception of effective feedback by clinical supervisors and their learners is dependent on both parties' awareness of the important distinction between feedback used in coaching towards growth and development (assessment for learning) and reaching a high-stakes judgement on the learner's competence and fitness for practice (assessment of learning). Using driving lessons and the driving test as a metaphor for feedback and assessment helps supervisors and learners to understand this crucial difference and to act upon it. It is the supervisor's responsibility to ensure that supervisor and learner achieve a clear mutual understanding of the purpose of each interaction (i.e. feedback or assessment). To allow supervisors to use the driving lesson-driving test metaphor for this purpose in their interactions with learners, it should be included in faculty development initiatives, along with a discussion of the key importance of separating feedback from assessment, to promote a feedback culture of growth and support programmatic assessment of competence.
Affiliation(s)
- Paul L P Brand: Department of Medical Education and Faculty Development, Isala Hospital, Isala Academy, Zwolle, The Netherlands; Lifelong Learning, Education and Assessment Research Network (LEARN), University Medical Centre Groningen, Groningen, The Netherlands
- A Debbie C Jaarsma: Lifelong Learning, Education and Assessment Research Network (LEARN), University Medical Centre Groningen, Groningen, The Netherlands; Centre for Educational Development and Research (CEDAR), University Medical Centre Groningen, Groningen, The Netherlands
- Cees P M van der Vleuten: Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
19. Véliz C, Fuentes-Cimma J, Fuentes-López E, Riquelme A. Adaptation, psychometric properties, and implementation of the Mini-CEX in dental clerkship. J Dent Educ 2020;85:300-310. PMID: 33094514. DOI: 10.1002/jdd.12462.
Abstract
BACKGROUND Workplace-based assessment is a key component of dental-student clerkships, allowing students to demonstrate clinical proficiency. PURPOSE This study adapts the Mini-Clinical Evaluation Exercise (Mini-CEX) to a dentistry-program clerkship, analyzing the results and examining the psychometric properties of Mini-CEX. METHODS First, Delphi panel methodology was used to ensure content validity. Mini-CEX was then piloted in the dental-clerkship program, with each student assessed by at least 2 supervisors and a peer student. Subsequently, psychometric properties, acceptability, and observation time were analyzed. RESULTS The study was conducted between July and November 2019. Overall, 140 Mini-CEX evaluation exercises were carried out on 30 students by 84 supervisors and 56 peers. The adapted instrument was found to be unidimensional, obtaining an acceptable internal consistency (α = 0.74). As the assessor type changed, there were differences in observation time; the medians (Q1-Q3) were 10 minutes (5-15) for supervisors and 30 minutes (20-45) for peer students (P < 0.001). This difference was also observed in assessor perceptions (P < 0.001), with supervisors scoring a median of 6 (6-6.75) and peer students scoring a median of 7 (6-7). No differences were found between supervisor and peer scores. CONCLUSION The adapted version of Mini-CEX can objectively assess the clinical performance of dental students, achieving validity and reliability values similar to those obtained in the original instrument.
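The α = 0.74 reported for the adapted instrument is Cronbach's alpha over the seven Mini-CEX items. A minimal sketch of that computation, assuming an observations-by-items score matrix (the matrix and helper function below are invented for illustration, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: n_observations x n_items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 9-point ratings on 7 items for 5 observed encounters.
scores = np.array([[7, 6, 7, 8, 6, 7, 7],
                   [5, 5, 6, 6, 5, 6, 5],
                   [8, 7, 8, 8, 7, 8, 8],
                   [6, 6, 7, 7, 6, 6, 7],
                   [7, 7, 8, 7, 7, 7, 8]], dtype=float)
print(f"alpha = {cronbach_alpha(scores):.2f}")  # ~0.7 or above is usually taken as acceptable
```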
Affiliation(s)
- Claudia Véliz: School of Dentistry, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Javiera Fuentes-Cimma: Physiotherapy Program, Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Eduardo Fuentes-López: Speech-Language Pathology Program, Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Arnoldo Riquelme: Department of Gastroenterology, Centre for Medical Education and Health Sciences, Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
20. Haring CM, Klaarwater CCR, Bouwmans GA, Cools BM, van Gurp PJM, van der Meer JWM, Postma CT. Validity, reliability and feasibility of a new observation rating tool and a post encounter rating tool for the assessment of clinical reasoning skills of medical students during their internal medicine clerkship: a pilot study. BMC Medical Education 2020;20:198. PMID: 32560648. PMCID: PMC7304120. DOI: 10.1186/s12909-020-02110-8.
Abstract
BACKGROUND Systematic assessment of the clinical reasoning skills of medical students in clinical practice is very difficult. This is partly caused by the lack of understanding of the fundamental mechanisms underlying the process of clinical reasoning. METHODS We previously developed an observation tool to assess the clinical reasoning skills of medical students during clinical practice. This observation tool consists of an 11-item observation rating form (ORT). In the present study we verified the validity, reliability and feasibility of this tool, and of an already existing post-encounter rating tool (PERT), in clinical practice among medical students during the internal medicine clerkship. RESULTS Six raters each assessed the same 15 student-patient encounters. The internal consistency (Cronbach's alpha) was 0.87 (0.71-0.84) for the 11-item ORT and 0.81 (0.71-0.87) for the 5-item PERT. The intraclass correlation coefficient for single measurements was poor for both the ORT (0.32, p < 0.001) and the PERT (0.36, p < 0.001). The generalizability study (G-study) and decision study (D-study) showed that 6 raters are required to achieve a G-coefficient of > 0.7 for the ORT, and 7 raters for the PERT. The largest source of variance was the interaction between raters and students. There was a consistent correlation between the ORT and the PERT of 0.53 (p = 0.04). CONCLUSIONS The ORT and PERT are both feasible, valid and reliable instruments for assessing students' clinical reasoning skills in clinical practice.
21. Hejri SM, Jalili M. Ups and downs of conducting a BEME review: Lessons learned. Medical Teacher 2020;42:240-241. PMID: 31442097. DOI: 10.1080/0142159x.2019.1657565.
Affiliation(s)
- Sara Mortaz Hejri: Department of Medical Education, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Jalili: Department of Medical Education, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran; Department of Emergency Medicine, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran