1
Dalum J, Christidis N, Häbel H, Karlgren K, Leanderson C, Englund GS. Clinical skills examination as part of the Swedish proficiency test of dentists educated outside of the EU/EEA. Eur J Dent Educ 2024. PMID: 38994910. DOI: 10.1111/eje.13022.
Abstract
INTRODUCTION The increasing migration of dentists educated outside the EU/EEA calls for sharing information on, and evaluating, recognition processes within EU countries. In 2017, the Swedish National Board of Health and Welfare implemented the Proficiency test, a recognition process for dentists who completed their education outside the EU/EEA. The Proficiency test consists of a theoretical examination and an integrated clinical skills examination, followed by 6 months of clinical practice. The clinical skills examination has two parts: an OSCE and an operative test on a dental manikin. This paper presents data from proficiency tests between 2018 and 2022 and explores factors related to a failing grade: demographics, theoretical exam scores and language comprehension. MATERIALS AND METHODS In a cohort study, demographics and factors associated with a failing grade were explored using test results from theoretical and clinical skills examinations (n = 181) from 2018 to 2022. Pearson correlation coefficients and linear regression were used to study correlations and associations between exam results. Univariable linear and logistic regression models were used to assess associations between background variables and clinical skills exam outcomes. RESULTS Higher age was a significant risk factor for failing the clinical skills examination and the OSCE. Higher scores in the theoretical exam reduced the odds of failing the OSCE but were not associated with results in the operative test or the overall result of the clinical skills examination. For the OSCE, there was a statistically significant difference across all professional qualifications explored between participants who passed and participants who failed. CONCLUSIONS Four years of data reveal that age and previous theoretical exam results influence the odds of failing the clinical examination. The results also highlight the need for multiple assessment formats to assess the clinical and communication skills of foreign-trained dentists.
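The univariable logistic regressions reported above model the odds of failing as a function of a background variable such as age. As a rough sketch only (invented data, not the authors' analysis, and hand-rolled gradient descent rather than a statistics package), such a fit looks like this:

```python
import math

# Illustrative only: univariable logistic regression of exam failure on a
# single standardized predictor (e.g., age), fitted by batch gradient
# descent. The data points below are invented for demonstration.

def fit_logistic(xs, ys, lr=0.1, epochs=5000):
    """Fit P(fail = 1) = sigmoid(b0 + b1 * x) by gradient descent."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y) / n
            g1 += (p - y) * x / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Hypothetical standardized ages and outcomes (1 = failed the OSCE):
ages = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
fails = [0, 0, 0, 0, 1, 0, 1, 1]

b0, b1 = fit_logistic(ages, fails)
odds_ratio = math.exp(b1)  # odds multiplier per 1-SD increase in age
print(f"b1 = {b1:.2f}, odds ratio per SD of age = {odds_ratio:.2f}")
```

A positive b1 (odds ratio above 1) corresponds to the abstract's finding that higher age increases the odds of failing.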
Affiliation(s)
- Jesper Dalum: Division of Oral Diagnostics and Rehabilitation, Department of Dental Medicine, Karolinska Institutet, Huddinge, Sweden
- Nikolaos Christidis: Division of Oral Diagnostics and Rehabilitation, Department of Dental Medicine, Karolinska Institutet, Huddinge, Sweden
- Henrike Häbel: Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Huddinge, Sweden
- Klas Karlgren: Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Huddinge, Sweden; Department of Research, Education, Development and Innovation, Education Center, Södersjukhuset, Stockholm, Sweden; Faculty of Health and Social Sciences, Western Norway University of Applied Sciences, Bergen, Norway
- Charlotte Leanderson: Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Huddinge, Sweden
- Gunilla Sandborgh Englund: Division of Oral Diagnostics and Rehabilitation, Department of Dental Medicine, Karolinska Institutet, Huddinge, Sweden
2
Hays RB, Wilkinson T, Green-Thompson L, McCrorie P, Bollela V, Nadarajah VD, Anderson MB, Norcini J, Samarasekera DD, Boursicot K, Malau-Aduli BS, Mandache ME, Nadkar AA. Managing assessment during curriculum change: Ottawa Consensus Statement. Med Teach 2024; 46:874-884. PMID: 38766754. DOI: 10.1080/0142159x.2024.2350522.
Abstract
Curriculum change is relatively frequent in health professional education. Formal, planned curriculum review must be conducted periodically to incorporate new knowledge and skills, changing teaching and learning methods, or changing roles and expectations of graduates. Unplanned curriculum evolution arguably happens continually, usually taking the form of "minor" changes that in combination over time may produce a substantially different programme. However, reviewing assessment practices is less likely to be a major consideration during curriculum change, overlooking the potential for unintended consequences for learning, including potentially undermining or negating the impact of even well-designed and important curriculum changes. Changes to any component of the curriculum "ecosystem" (graduate outcomes, content, delivery or assessment of learning) should trigger an automatic review of the whole ecosystem to maintain constructive alignment. Consideration of the potential impact on assessment is essential to support curriculum change. Powerful contextual drivers of a curriculum include national examinations and programme accreditation, so each assessment programme sits within its own external context. Internal drivers are also important, such as the adoption of new learning technologies and the learning preferences of students and faculty. Achieving optimal and sustainable outcomes from a curriculum review requires strong governance and support, stakeholder engagement, curriculum and assessment expertise, and internal quality assurance processes. This consensus paper provides guidance on managing assessment during curriculum change, building on evidence and the contributions of previous consensus papers.
Affiliation(s)
- Richard B Hays: College of Medicine and Dentistry, James Cook University, Townsville, Australia
- Tim Wilkinson: Christchurch School of Medicine & Health Sciences, University of Otago, Christchurch, New Zealand
- Peter McCrorie: Centre for Medical and Healthcare Education, St George's, University of London, London, United Kingdom of Great Britain and Northern Ireland
- Valdes Bollela: Medical Education, Universidade Cidade de São Paulo, Sao Paulo, Brazil
- Bunmi S Malau-Aduli: College of Medicine and Dentistry, James Cook University, Townsville, Australia; School of Medicine and Public Health, The University of Newcastle College of Health Medicine and Wellbeing, New South Wales, Australia
- Azhar Adam Nadkar: Department of Medicine and Health Sciences, Stellenbosch University, Cape Town, South Africa
3
Lim A, Krishnan S, Singh H, Furletti S, Sarkar M, Stewart D, Malone D. Linking assessment to real life practice - comparing work based assessments and objective structured clinical examinations using mystery shopping. Adv Health Sci Educ Theory Pract 2024; 29:859-878. PMID: 37728720. PMCID: PMC11208193. DOI: 10.1007/s10459-023-10284-1.
Abstract
Objective Structured Clinical Examinations (OSCEs) and Work Based Assessments (WBAs) are the mainstays of assessing clinical competency in health professions' education. Underpinned by the extrapolation inference in Kane's validity framework, the purpose of this study was to determine whether OSCE performance translates to real life by comparing students' OSCE performance with their performance in real life (as a WBA) on the same clinical scenario, and to understand the factors that affect students' performance. A sequential explanatory mixed-methods approach was used, comparing students' grades in the OSCE and the WBA. Students were third-year pharmacy undergraduates on placement at a community pharmacy in 2022. The WBA was conducted by a simulated patient, unbeknownst to students and indistinguishable from a genuine patient, who visited the pharmacy asking for health advice. The simulated patient is referred to as a 'mystery shopper' and the process as 'mystery shopping' in this manuscript. Community pharmacy is an ideal setting for real-time observation and mystery shopping, as staff can be accessed without an appointment. The students' provision of care and clinical knowledge were assessed by the mystery shopper using the same clinical checklist used in the OSCE. Students who completed the WBA were then invited to semi-structured interviews to discuss their experiences in both settings. Overall, 92 mystery shopper (WBA) visits were conducted and 36 follow-up interviews completed. The median WBA score (41.7% [IQR 28.3]) was significantly lower than the median OSCE score (80.9% [IQR 19.0]) across all participants (p < 0.001). Interviews revealed that students knew they had not performed as well in the WBA as in the OSCE, but reflected that OSCEs were still needed to prepare them to manage real-life patients. Many students related their performance to how they perceived their role in OSCEs versus WBAs, noting that OSCEs allowed them more autonomy to manage the patient than an unfamiliar workplace did. As activity theory suggests, student performance can be driven by motivation, which differed between the two contexts.
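The score comparison above is reported as medians with interquartile ranges (IQR). A minimal sketch of those summary statistics, computed on invented scores rather than the study's data:

```python
# Medians and interquartile ranges for OSCE vs. WBA scores.
# All score values below are invented for illustration.

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def iqr(values):
    """IQR via the median-of-halves (Tukey) convention."""
    s = sorted(values)
    n = len(s)
    return median(s[(n + 1) // 2:]) - median(s[:n // 2])

osce = [72.0, 78.5, 79.5, 80.9, 83.0, 84.0, 88.5, 91.0]
wba = [28.3, 30.0, 38.0, 41.7, 42.5, 45.0, 55.0, 60.0]

print(f"OSCE median {median(osce):.1f} [IQR {iqr(osce):.1f}]")
print(f"WBA median {median(wba):.1f} [IQR {iqr(wba):.1f}]")
```

Because such scores are bounded percentages and typically skewed, the median/IQR pair (with a non-parametric test for the p-value) is the natural summary, as the abstract uses.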
Affiliation(s)
- Angelina Lim: Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, 3052, Parkville, VIC, Australia
- Sunanthiny Krishnan: Department of Cardiovascular Sciences, University of Leicester, Glenfield Hospital, LE3 9QP, Leicester, UK
- Harjit Singh: Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, 3052, Parkville, VIC, Australia
- Simon Furletti: Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, 3052, Parkville, VIC, Australia
- Mahbub Sarkar: Monash Centre for Scholarship in Health Education, Faculty of Medicine and Nursing, Monash University, 3806, Clayton, VIC, Australia
- Daniel Malone: Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, 3052, Parkville, VIC, Australia
4
Pereira Júnior GA, Hamamoto-Filho PT, Rasslan R, Benevenuto DS, Silva EN, Oliveira AF, Portari Filho PE. Results of the Last 5 Years (2018-2022) of the Specialist Title Exam of The Brazilian College of Surgeons. Rev Col Bras Cir 2024; 51:e20243749. PMID: 38747884. PMCID: PMC11185055. DOI: 10.1590/0100-6991e-20243749-en.
Abstract
The article discusses the evolution of the Brazilian College of Surgeons (CBC) specialist title exam, highlighting the importance of evaluating not only theoretical knowledge but also candidates' practical skills and ethical behavior. The test was instituted in 1971 with only a written phase; an oral practical test was added with the 13th edition in 1988. In 2022, the assessment process was improved by including simulated stations in the practical test, with the aim of assessing practical and communication skills as well as clinical reasoning, in order to guarantee excellence in the assessment of surgeons' training. The aim of this study is to describe candidates' performance in the last five years of the Specialist Title Exam and to compare results across the candidates' different surgical training groups. The results obtained by candidates in the various categories enrolled in the 2018 to 2022 editions were analyzed. There was a clear and statistically significant difference between doctors who had completed three years of residency recognized by the Ministry of Education and the other categories of candidates for the Specialist Title.
Affiliation(s)
- Pedro Tadao Hamamoto-Filho: Universidade do Estado de São Paulo (UNESP), Faculdade de Medicina de Botucatu, Botucatu, SP, Brazil
- Roberto Rasslan: Hospital das Clínicas da FMUSP, Divisão de Clínica Cirúrgica III, São Paulo, SP, Brazil
- Dyego Sá Benevenuto: Hospital Copa Star, Cirurgia do Aparelho Digestivo, Rio de Janeiro, RJ, Brazil
- Eduardo Nacur Silva: Santa Casa de Belo Horizonte, III Clínica Cirúrgica, Belo Horizonte, MG, Brazil
- Pedro Eder Portari Filho: Universidade Federal do Estado do Rio de Janeiro (UNIRIO), Escola de Medicina e Cirurgia, Rio de Janeiro, RJ, Brazil; President of the Colégio Brasileiro de Cirurgiões, Rio de Janeiro, RJ, Brazil
5
Mhanni A, Elsawaay S, Qutieshat A. Development and validation of an assessment sheet for all-ceramic crown preparations: A methodological study. J Dent Res Dent Clin Dent Prospects 2023; 17:162-169. PMID: 38023804. PMCID: PMC10676539. DOI: 10.34172/joddd.2023.37103.
Abstract
Background Dental students learn and practice clinical procedures in clinical skills laboratories. These practices are graded by qualified staff to evaluate the effectiveness of learning, and valid evaluation requires accuracy and reliability. Although a well-developed checklist for pre-clinical skill evaluation exists in theory, it is challenging to implement in practice. This study was undertaken to develop an assessment sheet for all-ceramic crown preparations and to evaluate its reliability. Methods The study consisted of two phases: a development stage and a judgment-quantification stage. Two examiners evaluated all-ceramic crown preparations made by second-year dental students using the developed assessment sheet to test criterion validity. The final grade was determined from the number of errors identified with the assessment sheet, and the relationship between negative points and the final grades awarded was assessed using Spearman's correlation. Intra- and inter-examiner agreement was calculated for two rounds of evaluation, conducted one month apart, using Cohen's unweighted kappa. The Item-Content Validity Index (I-CVI) was used to evaluate the content validity of each item, and the Scale-Content Validity Index (S-CVI) to assess the content validity of the overall scale. Results The assessment sheet developed for all-ceramic crown preparations was reliable, with strong content validity and a significant negative correlation between the grades assigned and the number of errors observed. The sheet defined up to three levels of performance for each item, providing a consistent and objective approach to evaluation. A linear regression plot was used to determine the maximum number of acceptable errors and establish the minimum passing grade. Inter- and intra-examiner agreement across the two assessment rounds was fair to moderate. Conclusion The developed assessment sheet for all-ceramic crown preparations is reliable and provides a consistent and objective approach to evaluation that can benefit both students and instructors. Further research is recommended to evaluate its impact on students' learning outcomes.
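The statistics named above (Cohen's unweighted kappa, I-CVI and S-CVI) are simple proportion-based formulas. A sketch on invented ratings, not the study's data:

```python
from collections import Counter

# Sketch of two statistics from the abstract, on invented ratings:
# Cohen's unweighted kappa for examiner agreement, and I-CVI / S-CVI
# (Ave method) for content validity.

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: chance-corrected agreement of two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

def i_cvi(item_ratings, relevant=(3, 4)):
    """I-CVI: proportion of experts rating an item 3 or 4 on a 1-4 relevance scale."""
    return sum(r in relevant for r in item_ratings) / len(item_ratings)

# Two examiners grading ten crown preparations (invented grades):
a = ["pass", "pass", "fail", "pass", "borderline", "fail", "pass", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass", "borderline", "fail", "pass", "borderline", "fail", "pass"]
kappa = cohens_kappa(a, b)

# Five experts rating three checklist items on relevance (invented ratings):
items = [[4, 4, 3, 4, 3], [4, 3, 2, 4, 4], [2, 2, 3, 1, 2]]
i_cvis = [i_cvi(r) for r in items]
s_cvi_ave = sum(i_cvis) / len(i_cvis)  # S-CVI/Ave: mean of the item CVIs
print(f"kappa = {kappa:.2f}, I-CVIs = {i_cvis}, S-CVI/Ave = {s_cvi_ave:.2f}")
```

By the common Landis-Koch labels, kappa in the 0.21-0.40 band is "fair" and 0.41-0.60 "moderate", which is the range the abstract reports for examiner agreement.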
Affiliation(s)
- Ahmed Mhanni: Department of Prosthodontics, Tripoli Dental Faculty, University of Tripoli, Tripoli, Libya
- Seham Elsawaay: Department of Prosthodontics, Tripoli Dental Faculty, University of Tripoli, Tripoli, Libya
- Abubaker Qutieshat: Department of Adult Restorative Dentistry, Oman Dental College, Muscat, Oman; Honorary Researcher, Restorative Dentistry, Dundee Dental Hospital & School, Dundee, UK
6
Costello LL, Cho DD, Daniel RC, Dida J, Pritchard J, Pardhan K. Emergency medicine resident perceptions of simulation-based training and assessment in competence by design. Can J Emerg Med 2023; 25:828-835. PMID: 37665550. DOI: 10.1007/s43678-023-00577-0.
Abstract
OBJECTIVES With the launch of competence by design (CBD) in emergency medicine (EM) in Canada, there are growing recommendations on the use of simulation for the training and assessment of residents. Many of these recommendations have been made by educational leaders and often exclude the resident stakeholder. This study sought to explore residents' experiences and perceptions of simulation in CBD. METHODS Qualitative data were collected from November 2020 to May 2021 at McMaster University and the University of Toronto after receiving ethics approval from both sites. Eligible participants were EM residents, who were interviewed by a trained interviewer using a semi-structured interview guide. All interviews were recorded, transcribed, coded, and collapsed into themes. Data analysis was guided by constructivist grounded theory. RESULTS A total of seventeen residents participated. Thematic analysis revealed three major themes: 1) the impact of CBD on resident views of simulation; 2) simulation's role in obtaining entrustable professional activities (EPAs) and filling educational gaps; and 3) conflicting feelings about the use of high-stakes simulation-based assessment in CBD. CONCLUSIONS EM residents strongly support using simulation in CBD and acknowledge its ability to bridge educational gaps and fulfill specific EPAs. However, this study suggests some unintended consequences of CBD and conflicting views around simulation-based assessment that challenge resident perceptions of simulation as a safe learning space. As CBD evolves, educational leaders should consider these impacts when making future curricular changes or recommendations.
Affiliation(s)
- Lorne L Costello: Division of Emergency Medicine, Department of Medicine, University of Toronto, Toronto, ON, Canada; Department of Emergency Services, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Dennis D Cho: Division of Emergency Medicine, Department of Medicine, University of Toronto, Toronto, ON, Canada; Department of Emergency Medicine, University Health Network, Toronto, ON, Canada
- Ryan C Daniel: Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada
- Joana Dida: Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, ON, Canada
- Jodie Pritchard: Department of Emergency Medicine, Queen's University, Kingston, ON, Canada
- Kaif Pardhan: Division of Emergency Medicine, Department of Medicine, University of Toronto, Toronto, ON, Canada; Department of Emergency Services, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Division of Pediatric Emergency Medicine, Department of Pediatrics, McMaster University, Hamilton, ON, Canada
7
Boursicot K, Kemp S, Norcini J, Nadarajah VD, Humphrey-Murto S, Archer E, Williams J, Pyörälä E, Möller R. Synthesis and perspectives from the Ottawa 2022 conference on the assessment of competence. Med Teach 2023; 45:978-983. PMID: 36786837. DOI: 10.1080/0142159x.2023.2174420.
Abstract
INTRODUCTION The Ottawa Conference on the Assessment of Competence in Medicine and the Healthcare Professions was first convened in Ottawa in 1985. Since then, what has become known as the Ottawa conference has been held in various locations around the world every two years. It has become an important conference for the assessment community (researchers, educators, administrators and leaders) to share contemporary knowledge and develop international standards for assessment in medical and health professions education. METHODS The Ottawa 2022 conference was held in Lyon, France, in conjunction with the AMEE 2022 conference. A diverse group of international assessment experts was invited to present a symposium at the AMEE conference summarising key concepts from the Ottawa conference. This paper was developed from that symposium. RESULTS AND DISCUSSION This paper summarises key themes and issues that emerged from the Ottawa 2022 conference. It highlights the importance of the consensus statements and discusses challenges for assessment, such as issues of equity, diversity and inclusion; shifts in emphasis towards systems of assessment; implications of 'big data' and analytics; and the challenge of ensuring that published research and practice are based on contemporary theories and concepts.
Affiliation(s)
- Sandra Kemp: Graduate School of Medicine, University of Wollongong, Wollongong, Australia
- John Norcini: Department of Psychiatry, Upstate Medical University, Syracuse, NY, USA
- Elize Archer: Faculty of Medicine and Health Sciences, Stellenbosch University, Cape Town, South Africa
- Jen Williams: Faculty Dean of Medicine and Health, University of New England, Armidale, Australia
- Eeva Pyörälä: Center for University Teaching and Learning, University of Helsinki, Helsinki, Finland
- Riitta Möller: Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
8
Alalade AO, Sekar S. Simulation-Based Education for Enhancing Obstetric Emergency Response: A Needs Impact Evaluation. Cureus 2023; 15:e43908. PMID: 37746503. PMCID: PMC10512433. DOI: 10.7759/cureus.43908.
Abstract
Simulation is an ideal method for procedural training in obstetrics. To maximise training opportunities through simulation, the evaluation of these educational activities should be based on a standardised, evidence-based approach. As such, the tools used in the evaluative process should be validated for content and context, as this ensures consistency of approach and makes the findings and recommendations acceptable, applicable and credible. Moreover, the information can be used for planning further learning, assessing the competency of the trainers, and educational governance purposes. In our view, simulation should be used in conjunction with other forms of procedural assessment, such as mini-clinical examinations and case-based discussions, to translate skills to real-life events. Learners can then further consolidate their learning, improve professional skills and feel involved throughout the programme.
Affiliation(s)
- Sindhu Sekar: Obstetrics and Gynaecology, Wrexham Maelor Hospital, Wrexham, GBR
9
Boulais I, Ouellet K, Lachiver EV, Marceau M, Bergeron L, Bernier F, St-Onge C. Considering the Structured Oral Examinations Beyond Its Psychometrics Properties. Med Sci Educ 2023; 33:345-351. PMID: 37261009. PMCID: PMC10226970. DOI: 10.1007/s40670-023-01729-8.
Abstract
Decisions to set aside Structured Oral Examinations (SOEs) are almost invariably based on their poor psychometric properties. However, considering the perspectives of stakeholders might help us understand their potential contribution. To explore this, we conducted focus groups and individual interviews with stakeholders: students, assessors, and administrators. Students and assessors perceived the SOE as a window on students' clinical reasoning and as an authentic assessment, but also as a subjective and stressful method. Administrators emphasized organizational consequences such as logistical challenges. Such consequences must be considered when making decisions about SOEs, and our results support important positive ones.
Affiliation(s)
- Isabelle Boulais: Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, 3001 12th Avenue North, Sherbrooke, Québec J1H 5N4, Canada
- Kathleen Ouellet: Health Sciences Education Center, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Elise Vachon Lachiver: Faculty of Medicine and Health Sciences, Research in Health Sciences Program, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Mélanie Marceau: School of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Linda Bergeron: Health Sciences Education Center, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Frédéric Bernier: Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, 3001 12th Avenue North, Sherbrooke, Québec J1H 5N4, Canada; Centre de Recherche Clinique du CHUS, Sherbrooke, Québec, Canada
- Christina St-Onge: Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, 3001 12th Avenue North, Sherbrooke, Québec J1H 5N4, Canada; Paul Grand'Maison de la Société des Médecins de l'Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Québec, Canada
10
Thomas A, Rochette A, George C, Iqbal MZ, Ataman R, St-Onge C, Boruff J, Renaud JS. The Definitions and Conceptualizations of the Practice Context in the Health Professions: A Scoping Review. J Contin Educ Health Prof 2023; 43:S18-S29. PMID: 36877816. DOI: 10.1097/ceh.0000000000000490.
Abstract
INTRODUCTION Health care professionals work in different contexts, which can influence professional competencies. Despite existing literature on the impact of context on practice, the nature and influence of contextual characteristics, and how context is defined and measured, remain poorly understood. The aim of this study was to map the breadth and depth of the literature on how context is defined and measured and on the contextual characteristics that may influence professional competencies. METHODS We conducted a scoping review using Arksey and O'Malley's framework. We searched MEDLINE (Ovid) and CINAHL (EBSCO). We included studies that reported on context, reported on relationships between contextual characteristics and professional competencies, or measured context. We extracted data on context definitions, context measures and their psychometric properties, and contextual characteristics influencing professional competencies. We performed numerical and qualitative analyses. RESULTS After duplicate removal, 9106 citations were screened and 283 retained. We compiled a list of 67 context definitions and 112 available measures, with or without psychometric properties. We identified 60 contextual factors and organized them into five themes: Leadership and Agency, Values, Policies, Supports, and Demands. DISCUSSION Context is a complex construct covering a wide array of dimensions. Measures are available, but none covers all five dimensions in a single instrument or focuses on items targeting the likelihood of context influencing several competencies. Given that the practice context plays a critical role in health care professionals' competencies, stakeholders from all sectors (education, practice, and policy) should work together to address the contextual characteristics that can adversely influence practice.
Affiliation(s)
- Dr. Thomas: Associate Professor, School of Physical and Occupational Therapy, and Research Scientist, Institute of Health Sciences Education, McGill University; Centre for Interdisciplinary Research in Rehabilitation, Montreal, Quebec, Canada
- Dr. Rochette: Professor, Occupational Therapy Program, School of Rehabilitation, Université de Montréal; Centre for Interdisciplinary Research in Rehabilitation, Institut universitaire sur la réadaptation en déficience physique de Montréal (IURDPM), Montreal, Quebec, Canada
- Ms. George: School of Physical and Occupational Therapy, McGill University; Centre for Interdisciplinary Research in Rehabilitation, Montreal, Quebec, Canada
- Dr. Iqbal: Post-doctoral fellow, School of Physical and Occupational Therapy, Faculty of Medicine and Health Sciences, McGill University; Centre for Interdisciplinary Research in Rehabilitation, Montreal, Quebec, Canada
- Ms. Ataman: School of Physical and Occupational Therapy, Faculty of Medicine and Health Sciences, McGill University; Centre for Interdisciplinary Research in Rehabilitation, Montreal, Quebec, Canada
- Dr. St-Onge: Professor, Department of Medicine and Center for Health Professions Pedagogy, Université de Sherbrooke; Paul Grand'Maison de la Société des Médecins de l'Université de Sherbrooke Research Chair in Medical Education, Sherbrooke, Quebec, Canada
- Ms. Boruff: Associate Librarian, Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Montreal, Quebec, Canada
- Dr. Renaud: Professor, Department of Family and Emergency Medicine, VITAM Research Center, Université Laval, Quebec, Quebec, Canada
11
Rosenberg I, Thomas L, Ceccolini G, Feinn R. Early identification of struggling pre-clerkship learners using formative clinical skills OSCEs: an assessment for learning program. Med Educ Online 2022; 27:2028333. PMID: 35048773. PMCID: PMC8786239. DOI: 10.1080/10872981.2022.2028333.
Abstract
Multiple experts in clinical skills remediation recommend early identification to support struggling learners, but there is minimal documentation on the implementation of such programs. We share one school's outcomes-based research using the formative assessment-for-learning model to identify pre-clerkship students struggling with clinical skills early, via formative OSCEs (F-OSCEs). Student scores were monitored over longitudinal F-OSCE experiences as part of a curricular innovation. Points towards early identification accumulated whenever a student's score fell below the 80% threshold for a section of an OSCE. Students who accumulated enough points were advised of the need for intervention, and coaching was recommended. Students were surveyed about their experiences with the program. The objective was to explore whether this early identification program and coaching intervention had a positive impact on subsequent OSCE performance. Of 184 students in two cohorts who completed F-OSCEs, 38 (20.7%) were flagged for early identification. Of these, 17 (44.7%) sought additional help by voluntarily participating in the coaching program. Students who participated in extra clinical skills coaching demonstrated statistically significant improvements on subsequent F-OSCEs, as did early-identified students who did not participate in extra coaching. The greatest impact of the coaching intervention was in the physical examination domain. The program was effective in identifying students struggling with clinical skills on formative OSCEs. Early-identified students improved on subsequent OSCEs, with those who sought coaching faring slightly better. Developing robust early identification programs as formative assessments of clinical skills, and follow-up coaching programs to guide skills development, are important implications of this work. Monitoring short- and long-term results for students identified through this approach, to see whether improvement is sustained, is planned.
Affiliation(s)
- Ilene Rosenberg
- Clinical Skills Remediation, Department of Medical Sciences, Frank H. Netter MD School of Medicine at Quinnipiac University, Hamden, CT, USA
- Listy Thomas
- Department of Medical Sciences, Clinical Arts and Sciences Course, Frank H. Netter MD School of Medicine at Quinnipiac University, Hamden, CT, USA
- Gabbriel Ceccolini
- Standardized Patient & Assessment Center, Frank H. Netter MD School of Medicine at Quinnipiac University, Hamden, CT, USA
- Richard Feinn
- Department of Medical Sciences, Frank H. Netter MD School of Medicine at Quinnipiac University, Hamden, CT, USA
12
Jiang Z, Ouyang J, Li L, Han Y, Xu L, Liu R, Sun J. Cost-Effectiveness Analysis in Performance Assessments: A Case Study of the Objective Structured Clinical Examination. MEDICAL EDUCATION ONLINE 2022; 27:2136559. [PMID: 36250891 PMCID: PMC9586649 DOI: 10.1080/10872981.2022.2136559] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 10/11/2022] [Accepted: 10/12/2022] [Indexed: 06/16/2023]
Abstract
Medical education assessments are becoming more complex, making traditional methods, which consist primarily of direct observations, oral examinations, and multiple-choice tests, increasingly inadequate. Advances in research methods have led to new modalities, namely performance assessments, which are, however, often costly to develop and implement. Proposing the Program Effectiveness and Cost Generalization flow within an assessment context (PRECOG-A), this brief report explores the real financial cost drivers associated with an assessment case in medical education, presents the steps for bridging effectiveness with psychometric properties via cost-effectiveness analysis, and evaluates the two-sided outcomes for further evaluation decision-making. Providing a reference framework for investigators and researchers, the illustration of PRECOG-A in this study outlines practical guidelines for conducting cost-effectiveness analysis in a performance assessment.
Affiliation(s)
- Zhehan Jiang
- Institute of Medical Education, Health Science Center, Peking University, Beijing, China
- National Center for Health Professions Education Development, Peking University, Beijing, China
- Jinying Ouyang
- Institute of Medical Education, Health Science Center, Peking University, Beijing, China
- National Center for Health Professions Education Development, Peking University, Beijing, China
- Li Li
- Department of General Practice, Guangzhou First People's Hospital, Guangzhou, Guangdong, China
- Yuting Han
- Institute of Medical Education, Health Science Center, Peking University, Beijing, China
- National Center for Health Professions Education Development, Peking University, Beijing, China
- Lingling Xu
- Institute of Medical Education, Health Science Center, Peking University, Beijing, China
- National Center for Health Professions Education Development, Peking University, Beijing, China
- Ren Liu
- Psychological Science, University of California Merced, Merced, CA, USA
- Junhua Sun
- Institute of Education, Nanjing University, Nanjing, Jiangsu, China
13
McGown PJ, Brown CA, Sebastian A, Le R, Amin A, Greenland A, Sam AH. Is the assumption of equal distances between global assessment categories used in borderline regression valid? BMC MEDICAL EDUCATION 2022; 22:708. [PMID: 36199083 PMCID: PMC9536020 DOI: 10.1186/s12909-022-03753-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Accepted: 09/12/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND Standard setting for clinical examinations typically uses the borderline regression method to set the pass mark. An assumption made in using this method is that there are equal intervals between global ratings (GR) (e.g. Fail, Borderline Pass, Clear Pass, Good and Excellent). However, this assumption has never been tested in the medical literature to the best of our knowledge. We examine if the assumption of equal intervals between GR is met, and the potential implications for student outcomes. METHODS Clinical finals examiners were recruited across two institutions to place the typical 'Borderline Pass', 'Clear Pass' and 'Good' candidate on a continuous slider scale between a typical 'Fail' candidate at point 0 and a typical 'Excellent' candidate at point 1. Results were analysed using one-sample t-testing of each interval to an equal interval size of 0.25. Secondary data analysis was performed on summative assessment scores for 94 clinical stations and 1191 medical student examination outcomes in the final 2 years of study at a single centre. RESULTS On a scale from 0.00 (Fail) to 1.00 (Excellent), mean examiner GRs for 'Borderline Pass', 'Clear Pass' and 'Good' were 0.33, 0.55 and 0.77 respectively. All of the four intervals between GRs (Fail-Borderline Pass, Borderline Pass-Clear Pass, Clear Pass-Good, Good-Excellent) were statistically significantly different to the expected value of 0.25 (all p-values < 0.0125). An ordinal linear regression using mean examiner GRs was performed for each of the 94 stations, to determine pass marks out of 24. This increased pass marks for all 94 stations compared with the original GR locations (mean increase 0.21), and caused one additional fail by overall exam pass mark (out of 1191 students) and 92 additional station fails (out of 11,346 stations). 
CONCLUSIONS Although the current assumption of equal intervals between GRs across the performance spectrum is not met, and an adjusted regression equation causes an increase in station pass marks, the effect on overall exam pass/fail outcomes is modest.
Affiliation(s)
- Patrick J McGown
- Imperial College School of Medicine, Imperial College London, London, UK
- Celia A Brown
- Warwick Medical School, University of Warwick, Warwick, UK
- Ann Sebastian
- Imperial College School of Medicine, Imperial College London, London, UK
- Ricardo Le
- Warwick Medical School, University of Warwick, Warwick, UK
- Anjali Amin
- Imperial College School of Medicine, Imperial College London, London, UK
- Andrew Greenland
- Imperial College School of Medicine, Imperial College London, London, UK
- Amir H Sam
- Imperial College School of Medicine, Imperial College London, London, UK
14
Hennel EK, Trachsel A, Subotic U, Lörwald AC, Harendza S, Huwendiek S. How does multisource feedback influence residency training? A qualitative case study. MEDICAL EDUCATION 2022; 56:660-669. [PMID: 35263461 PMCID: PMC9314722 DOI: 10.1111/medu.14798] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/26/2021] [Revised: 01/20/2022] [Accepted: 02/22/2022] [Indexed: 06/14/2023]
Abstract
INTRODUCTION Multisource feedback (MSF), also called 360-degree assessment, is one form of assessment used in postgraduate training. However, there is an ongoing discussion on its value, because the factors that influence the impact of MSF and the main impact of MSF are not fully understood. In this study, we investigated both the influencing factors and the impact of MSF on residency training. METHODS We conducted a qualitative case study within the boundaries of the residency training for paediatricians and paediatric surgeons at a University Hospital. We collected data from seven focus group interviews with stakeholders of MSF (residents, raters and supervisors). By performing a reflexive thematic analysis, we extracted the influencing factors and the impact of MSF. RESULTS We found seven influencing factors: MSF is facilitated by the announcement of a clear goal of MSF, the training of raters on the MSF instrument, a longitudinal approach of observation, timing not too early and not too late during the rotation, narrative comments as part of the ratings, the residents' self-assessment and a supervisor from the same department. We found three themes on the impact of MSF: MSF supports the professional development of residents, enhances interprofessional teamwork and increases the raters' commitment to the training of residents. CONCLUSION This study illuminates the influencing factors and impact of MSF on residency training. We offer novel recommendations on the continuity of observation, the timing during rotations and the role of the supervisor. Moreover, by discussing our results through the lens of identity formation theory, this work advances our conceptual understanding of MSF. We propose identity formation theory as a framework for future research on MSF to leverage the potential of MSF in residency training.
Affiliation(s)
- Eva K. Hennel
- Department for Assessment and Evaluation (AAE), Institute for Medical Education, University of Bern, Bern, Switzerland
- Andrea Trachsel
- Department for Assessment and Evaluation (AAE), Institute for Medical Education, University of Bern, Bern, Switzerland
- Andrea C. Lörwald
- Department for Assessment and Evaluation (AAE), Institute for Medical Education, University of Bern, Bern, Switzerland
- Sigrid Harendza
- Department of Internal Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Sören Huwendiek
- Department for Assessment and Evaluation (AAE), Institute for Medical Education, University of Bern, Bern, Switzerland
15
Macauley K, Laprino S, Brudvig T. Perceptions of Physical Therapy Students on their Psychomotor Examinations: a Qualitative Study. MEDICAL SCIENCE EDUCATOR 2022; 32:349-360. [PMID: 35528290 PMCID: PMC9054959 DOI: 10.1007/s40670-022-01514-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 01/21/2022] [Indexed: 06/14/2023]
Abstract
INTRODUCTION Practical examinations are necessary to demonstrate learning in the psychomotor, cognitive, and affective domains. Student perceptions of the organization and execution of practical examinations are an important consideration in the development of practical examinations. REVIEW OF THE LITERATURE Multiple other health professions have investigated students' perceptions of objective structured clinical examinations (OSCE). There is little in the physical therapy literature with respect to student perception regarding proctor presence during practical examinations or OSCEs. SUBJECTS The participants were members of the classes of 2019-2021 in a Doctor of Physical Therapy (DPT) program at a New England University. METHODS A qualitative thematic approach was applied to de-identified transcripts of student focus group interviews. Independently coded themes were identified, discussed, and refined iteratively. RESULTS AND DISCUSSION Four themes emerged with multiple subthemes: impact of proctor being present; realistic, patient-focused experience; preparation for the practical; and stress. Students valued preparation that included clear expectations, utilization of formative assessments, and peer feedback prior to the practical. They also noted that a distraction-free testing space, having no proctor present in the room, recording the practical, and the format of OSCEs decreased stress and improved performance. CONCLUSIONS These findings add to the body of knowledge in physical therapy and provide guidance to faculty as they plan and organize practical examinations.
16
Papavasiliou T, Nicholas R, Cooper L, Chan JCY, Ibanez J, Bain CJ, Uppal L. Utilisation of a 3D printed ex vivo flexor tendon model to improve surgical training. J Plast Reconstr Aesthet Surg 2021; 75:1255-1260. [PMID: 34896043 DOI: 10.1016/j.bjps.2021.11.027] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 11/03/2021] [Indexed: 11/16/2022]
Abstract
BACKGROUND Surgery for hand trauma accounts for a significant proportion of plastic surgery trainee activity. The aim of this article is to create a standardised simulation training module for flexor tendon repair techniques for residents prior to their first encounter in the clinical setting. METHODS A step-ladder flexor tendon repair training approach with four levels of difficulty was conducted using a three-dimensional (3D) printed anatomical simulation model and a silicone tendon rod with a cohort of 28 plastic surgery Senior House Officers (SHOs) at various stages of their training (n=28). Assessment of knowledge (online questionnaire) and practical skills using validated scoring systems (global rating scale and task-specific score) was performed at the beginning and end of the module by hand experts of our unit. RESULTS The overall average knowledge-based scores of the cohort pre- and post-assessment were 1.48/5 (29.6%) and 3.56/5 (71.5%), respectively. The overall average skills-based scores of the cohort pre- and post-assessment were 3.05/5 (61%) and 4.12/5 (82.5%), respectively. A significant (p<0.01) improvement in knowledge and skills was noted in all trainees. All trainees confirmed that the training module improved their confidence with flexor tendon repair. CONCLUSION We demonstrate a standardised simulation training framework employing a 3D printed flexor tendon simulation model, shown to improve the skills of residents, especially during their early learning curve, and paving the way to more universal, standardised and validated training across hand surgery.
Affiliation(s)
- Theodora Papavasiliou
- Department of Plastic Surgery, Guy's and St Thomas' Hospitals, Westminster Bridge Rd, Lambeth SE1 7EH, London
- Rebecca Nicholas
- Department of Plastic Surgery, Guy's and St Thomas' Hospitals, Westminster Bridge Rd, Lambeth SE1 7EH, London
- Lilli Cooper
- Department of Plastic Surgery, Guy's and St Thomas' Hospitals, Westminster Bridge Rd, Lambeth SE1 7EH, London
- Jeffrey C Y Chan
- Department of Plastic Surgery, Guy's and St Thomas' Hospitals, Westminster Bridge Rd, Lambeth SE1 7EH, London
- Javier Ibanez
- Department of Plastic Surgery, Guy's and St Thomas' Hospitals, Westminster Bridge Rd, Lambeth SE1 7EH, London
- Charles J Bain
- Department of Plastic Surgery, Guy's and St Thomas' Hospitals, Westminster Bridge Rd, Lambeth SE1 7EH, London
- Lauren Uppal
- Department of Plastic Surgery, Guy's and St Thomas' Hospitals, Westminster Bridge Rd, Lambeth SE1 7EH, London
17
Poole C, Patterson A. Fostering the development of professional identity within healthcare education-interdisciplinary innovation. J Med Imaging Radiat Sci 2021; 52:S45-S50. [PMID: 34483083 DOI: 10.1016/j.jmir.2021.08.012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Revised: 07/06/2021] [Accepted: 08/12/2021] [Indexed: 11/28/2022]
Abstract
INTRODUCTION Historical theories on the development of professionalism are no longer sufficient in modern radiation therapy or radiography curricula, with the focus moving from 'virtues-based professionalism' to 'professional identity formation'. Professional identity formation is a new concept, described as a transformative journey from being a layperson to 'becoming' a professional. Knowledge, values, and behaviours are transformative and unique to each individual. The overall aim is to produce a consensus statement outlining evidence-based programme initiatives to support healthcare students' professional identity formation. METHODS 'Think tank' methodology was used for individual and final combined group reflective tasks to enable the creation of an evidence-based consensus statement. Participants discussed their personal views and beliefs regarding the process of professional identity formation for teaching, learning, assessment, and evaluation. Discussions were recorded, transcribed, and analysed using thematic analysis from an interpretivist perspective. 'Think tank' participants were asked to attend masterclasses led by leading experts to gain a greater understanding of professional identity formation before the final combined 'think tanks'. RESULTS Faculty and students across all disciplines (N = 22) within the school of medicine attended the 'think tank' sessions. DISCUSSION During each student's transformative process of professional identity formation, healthcare educators need to create evidence-based pedagogic opportunities to support them. It is no longer sufficient to leave this to chance within a 'hidden' or 'informal' curriculum. Professional identity is more than a set of learned behaviours assessed within the clinical environment.
CONCLUSION The development of this consensus statement is an innovative educational strategy that will ultimately enhance the education of professionalism in the clinical environment for radiographers and radiation therapists. Through seeking an understanding of the educational needs of students and faculty, the multidisciplinary team were able to create a tailored approach to professional identity formation within the institution. This student-faculty partnership is unique and beneficial to all parties involved and is an effective method of seeking a shared understanding.
Affiliation(s)
- Claire Poole
- Applied Radiation Therapy Trinity, Discipline of Radiation Therapy, School of Medicine, Trinity College, Dublin 2, Ireland
- Aileen Patterson
- Trinity College Dublin, School of Medicine, The Trinity Biomedical Sciences Institute, Dublin 2, Ireland
18
A transparent and defensible process for applicant selection within a Canadian emergency medicine residency program. CAN J EMERG MED 2021; 22:215-223. [PMID: 31941560 DOI: 10.1017/cem.2019.460] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVES The Canadian Resident Matching Service (CaRMS) selection process has come under scrutiny due to the increasing number of unmatched medical graduates. In response, we outline our residency program's selection process including how we have incorporated best practices and novel techniques. METHODS We selected file reviewers and interviewers to mitigate gender bias and increase diversity. Four residents and two attending physicians rated each file using a standardized, cloud-based file review template to allow simultaneous rating. We interviewed applicants using four standardized stations with two or three interviewers per station. We used heat maps to review rating discrepancies and eliminated rating variance using Z-scores. The number of person-hours that we required to conduct our selection process was quantified and the process outcomes were described statistically and graphically. RESULTS We received between 75 and 90 CaRMS applications during each application cycle between 2017 and 2019. Our overall process required 320 person-hours annually, excluding attendance at the social events and administrative assistant duties. Our preliminary interview and rank lists were developed using weighted Z-scores and modified through an organized discussion informed by heat mapped data. The difference between the Z-scores of applicants surrounding the interview invitation threshold was 0.18-0.3 standard deviations. Interview performance significantly impacted the final rank list. CONCLUSIONS We describe a rigorous resident selection process for our emergency medicine training program which incorporated simultaneous cloud-based rating, Z-scores, and heat maps. This standardized approach could inform other programs looking to adopt a rigorous selection process while providing applicants guidance and reassurance of a fair assessment.
19
Thomas D, Khalifa S, Sreedharan J, Bond R. Inter-rater Reliability of Preceptors on Clinical Pharmacy Competency Evaluation. CURRENT DRUG THERAPY 2021. [DOI: 10.2174/1574885515999201209202624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
BACKGROUND Clinical competence of pharmacy students is better evaluated at their practice sites than in the classroom. A clinical pharmacy competency evaluation rubric, such as that of the American College of Clinical Pharmacy (ACCP), is an effective assessment tool for clinical skills and can be used to show item reliability. Preceptors should be trained in how to use the rubrics, as many inherent factors can influence inter-rater reliability. OBJECTIVE To evaluate inter-rater reliability among preceptors in evaluating the clinical competence of pharmacy students, before and after a group discussion intervention. MATERIALS AND METHODS In this quasi-experimental study in a United Arab Emirates teaching hospital, seven clinical pharmacy preceptors rated the clinical pharmacy competencies of ten recent PharmD graduates, referring to their portfolios and preceptorship. Clinical pharmacy competencies were adopted from the ACCP and slightly modified for the local setting. RESULTS Inter-rater reliability (Cronbach's alpha) among preceptors, who had practised at a single site for 2-4 years, was reasonable. At the domain level, inter-rater reliability ranged from 0.79-0.93 before the intervention and 0.94-0.99 after. For certain competency elements, inter-rater reliability was poor before the intervention (0.31-0.61) but improved to 0.79-0.97 after. The intra-class correlation coefficient improved among all individual preceptors, who were reliable with each other after the group discussion, though some showed no reliability with each other before it. CONCLUSION Group discussion among preceptors at the training site was effective in improving inter-rater reliability on all elements of the clinical pharmacy competency evaluation. Removing a preceptor from the analysis did not affect inter-rater reliability after the group discussion.
Affiliation(s)
- Dixon Thomas
- Clinical, College of Pharmacy, Gulf Medical University, Ajman, United Arab Emirates
- Sherief Khalifa
- Clinical, College of Pharmacy, Gulf Medical University, Ajman, United Arab Emirates
- Jayadevan Sreedharan
- Clinical, College of Pharmacy, Gulf Medical University, Ajman, United Arab Emirates
- Rucha Bond
- Department of Pharmacotherapy and Outcomes Sciences, Virginia Commonwealth University, Richmond, United States
20
Amiel JM, Andriole DA, Biskobing DM, Brown DR, Cutrer WB, Emery MT, Mejicano GC, Ryan MS, Swails JL, Wagner DP. Revisiting the Core Entrustable Professional Activities for Entering Residency. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S14-S21. [PMID: 34183597 DOI: 10.1097/acm.0000000000004088] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The Core EPAs for Entering Residency Pilot project aimed to test the feasibility of implementing 13 entrustable professional activities (EPAs) at 10 U.S. medical schools and to gauge whether the use of the Core EPAs could improve graduates' performance early in residency. In this manuscript, the authors (members of the pilot institutions and Association of American Medical Colleges staff supporting the project evaluation) describe the schools' capacity to collect multimodal evidence about their students' performance in each of the Core EPAs and the ability of faculty committees to use those data to make decisions regarding learners' readiness for entrustment. In reviewing data for each of the Core EPAs, the authors reflected on how each activity performed as an EPA informed by how well it could be assessed and entrusted. For EPAs that did not perform well, the authors examined whether there are underlying practical and/or theoretical issues limiting its utility as a measure of student performance in medical school.
Affiliation(s)
- Jonathan M Amiel
- J.M. Amiel is associate professor of psychiatry, senior associate dean for curricular affairs, and interim co-vice dean for education, Columbia University Vagelos College of Physicians & Surgeons, New York, New York; ORCID: https://orcid.org/0000-0003-4027-6397
- Dorothy A Andriole
- D.A. Andriole is senior director, Medical Education Research, Association of American Medical Colleges, Washington, DC; ORCID: https://orcid.org/0000-0001-8902-1227
- Diane M Biskobing
- D.M. Biskobing is professor of medicine and associate dean for pre-clinical medical education, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- David R Brown
- D.R. Brown is associate professor, chief, Family and Community Medicine, and interim chair, Humanities, Health, and Society, Florida International University Herbert Wertheim College of Medicine, Miami, Florida; ORCID: https://orcid.org/0000-0002-5361-6664
- William B Cutrer
- W.B. Cutrer is associate dean for undergraduate medical education and associate professor of pediatrics, Critical Care Medicine, Vanderbilt University School of Medicine, Nashville, Tennessee; ORCID: https://orcid.org/0000-0003-1538-9779
- Matthew T Emery
- M.T. Emery is associate professor of emergency medicine and medical director for simulation, Michigan State University College of Human Medicine, East Lansing, Michigan
- George C Mejicano
- G.C. Mejicano is professor of medicine and senior associate dean for Education, Oregon Health & Science University School of Medicine, Portland, Oregon; ORCID: https://orcid.org/0000-0002-6087-3730
- Michael S Ryan
- M.S. Ryan is associate professor of pediatrics and assistant dean for clinical medical education, Virginia Commonwealth University School of Medicine, Richmond, Virginia; ORCID: https://orcid.org/0000-0003-3266-9289
- Jennifer L Swails
- J.L. Swails is associate professor of internal medicine, McGovern Medical School, University of Texas Health Science Center, Houston, Texas; ORCID: https://orcid.org/0000-0002-6102-831X
- Dianne P Wagner
- D.P. Wagner is professor of medicine, associate dean for undergraduate medical education, and interim senior associate dean for academic affairs, Michigan State University College of Human Medicine, East Lansing, Michigan
21
The evolution of a national, advanced airway management simulation-based course for anaesthesia trainees. Eur J Anaesthesiol 2021; 38:138-145. [PMID: 32675701 DOI: 10.1097/eja.0000000000001268] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
BACKGROUND Needs analyses involving patient complaints and anaesthesiologists' confidence levels in difficult airway management procedures in Denmark have shown a need for training in both technical and non-technical skills. OBJECTIVE To provide an example of how to design, implement and evaluate a national simulation-based course in advanced airway management for trainees within a compulsory, national specialist training programme. DESIGN AND RESULTS A national working group, established by the Danish Society for Anaesthesiology and Intensive Care Medicine, designed a standardised simulation course in advanced airway management for anaesthesiology trainees based on the six-step approach. Learning objectives are grounded in the curriculum and needs analyses, in terms of knowledge, skills and attitudes, including the non-technical (cognitive and social) skills necessary for safe and effective performance. A total of 28 courses for 800 trainees have been conducted. Evaluation has been positive, and pre- and post-tests have indicated a positive effect on learning. CONCLUSION The course was successfully designed and implemented within the national training programme for trainees. Important factors for success were the involvement of all stakeholders, thorough planning, selection of the most important learning objectives, the use of interactive educational methods and the training of facilitators.
22
Kumaravel B, Stewart C, Ilic D. Development and evaluation of a spiral model of assessing EBM competency using OSCEs in undergraduate medical education. BMC MEDICAL EDUCATION 2021; 21:204. [PMID: 33838686 PMCID: PMC8035769 DOI: 10.1186/s12909-021-02650-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Accepted: 04/05/2021] [Indexed: 05/04/2023]
Abstract
BACKGROUND Medical students often struggle to understand the relevance of Evidence Based Medicine (EBM) to their clinical practice, yet it is a competence that all students must develop prior to graduation. Objective structured clinical examinations (OSCEs) are a valued tool for assessing critical components of EBM competency, particularly different levels of mastery as students progress through the course. This study developed and evaluated EBM-based OSCE stations, with the aim of establishing a spiral approach to EBM OSCE stations for undergraduate medical students. METHODS OSCE stations were developed with increasingly complex EBM tasks. OSCE stations were classified according to the classification rubric for EBP assessment tools (CREATE) framework and mapped against the recently published core competencies for evidence-based practice (EBP). Performance data evaluation was undertaken using Classical Test Theory, analysing mean scores, pass rates, and station item-total correlation (ITC) using SPSS. RESULTS Six EBM-based OSCE stations assessing various stages of EBM were created for use in high-stakes summative OSCEs for different year groups across the undergraduate medical degree. All OSCE stations except one had good item-total correlation coefficients, ranging from 0.21 to 0.49. The domain mean score ranged from 13.33 to 16.83 out of 20. High reliability was demonstrated for each of the summative OSCE circuits (Cronbach's alpha = 0.67-0.85). In the CREATE framework these stations assessed the knowledge, skills, and behaviour of medical students in asking, searching, appraising, and integrating evidence in practice. The OSCE stations were useful in assessing six core evidence-based practice competencies, which are intended to be practised with exercises. A spiral model of OSCEs of increasing complexity was proposed to assess EBM competency as students progressed through the MBChB course. CONCLUSIONS The use of OSCEs is a feasible method of authentically assessing learner EBM performance and behaviour in a high-stakes assessment setting. Valid and reliable EBM-based OSCE stations provide evidence for the continued development of a hierarchy for assessing scaffolded learning and mastery of EBM competency. Further work is needed to assess their predictive validity.
Affiliation(s)
- B Kumaravel
- The University of Buckingham Medical School, Hunter Street, Buckingham, MK18 1EG, UK.
- C Stewart
- University of Nottingham, Nottingham, UK
- D Ilic
- School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia

23
Homer M, Russell J. Conjunctive standards in OSCEs: The why and the how of number of stations passed criteria. MEDICAL TEACHER 2021; 43:448-455. [PMID: 33290124 DOI: 10.1080/0142159x.2020.1856353] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
INTRODUCTION Many institutions require candidates to achieve a minimum number of OSCE stations passed (MNSP) in addition to the aggregate pass mark. The stated rationale is usually that this conjunctive standard prevents excessive degrees of compensation across an assessment. However, there is a lack of consideration and discussion of this practice in the medical education literature. METHODS We consider the motivations for the adoption of the MNSP from the assessment designer perspective, outlining potential concerns about the complexity of what the OSCE is trying to achieve, particularly around the blueprinting process and the limitations of scoring instruments. We also introduce four potential methods for setting an examinee-centred MNSP standard, and highlight briefly the theoretical advantages and disadvantages of these approaches. DISCUSSION AND CONCLUSION There are psychometric arguments for and against the limiting of compensation in OSCEs, but it is clear that many stakeholders value the application of an MNSP standard. This paper adds to the limited literature on this important topic and notes that current MNSP practices are often problematic in high stakes settings. More empirical work is needed to develop understanding of the impact on pass/fail decision-making of the proposed standard setting methods developed in this paper.
Affiliation(s)
- Matt Homer
- Leeds Institute of Medical Education, School of Medicine, University of Leeds, Leeds, UK
- Jen Russell
- Leeds Institute of Medical Education, School of Medicine, University of Leeds, Leeds, UK

24
Boursicot K, Kemp S, Wilkinson T, Findyartini A, Canning C, Cilliers F, Fuller R. Performance assessment: Consensus statement and recommendations from the 2020 Ottawa Conference. MEDICAL TEACHER 2021; 43:58-67. [PMID: 33054524 DOI: 10.1080/0142159x.2020.1830052] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
INTRODUCTION In 2011 the Consensus Statement on Performance Assessment was published in Medical Teacher. That paper was commissioned by AMEE (Association for Medical Education in Europe) as part of the series of Consensus Statements following the 2010 Ottawa Conference. In 2019, it was recommended that a working group be reconvened to review and consider developments in performance assessment since the 2011 publication. METHODS Following review of the original recommendations in the 2011 paper and shifts in the field across the past 10 years, the group identified areas of consensus and yet-to-be-resolved issues for performance assessment. RESULTS AND DISCUSSION This paper addresses developments in performance assessment since 2011, reiterates relevant aspects of the 2011 paper, and summarises contemporary best-practice recommendations for OSCEs and workplace-based assessments (WBAs), fit-for-purpose methods for performance assessment in the health professions.
Affiliation(s)
- Katharine Boursicot
- Department of Assessment and Progression, Duke-National University of Singapore, Singapore, Singapore
- Sandra Kemp
- Curtin Medical School, Curtin University, Perth, Australia
- Tim Wilkinson
- Dean's Department, University of Otago, Christchurch, New Zealand
- Ardi Findyartini
- Department of Medical Education, Universitas Indonesia, Jakarta, Indonesia
- Claire Canning
- Department of Assessment and Progression, Duke-National University of Singapore, Singapore, Singapore
- Francois Cilliers
- Department of Health Sciences Education, University of Cape Town, Cape Town, South Africa

25
Mathur P. Introduction of direct observation of procedural skills as workplace-based assessment tool in department of anesthesiology: Evaluation of students’ and teachers’ perceptions. BALI JOURNAL OF ANESTHESIOLOGY 2021. [DOI: 10.4103/bjoa.bjoa_59_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
26
[Simulation curricular content in postgraduate emergency medicine: A multicentre Delphi study]. CAN J EMERG MED 2020; 21:667-675. [PMID: 31084629 DOI: 10.1017/cem.2019.348] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVES There is increasing evidence to support integration of simulation into medical training; however, no national emergency medicine (EM) simulation curriculum exists. Using Delphi methodology, we aimed to identify and establish content validity for adult EM curricular content best suited for simulation-based training, to inform national postgraduate EM training. METHODS A national panel of experts in EM simulation iteratively rated potential curricular topics, on a 4-point scale, to determine those best suited for simulation-based training. After each round, responses were analyzed. Topics scoring <2/4 were removed and remaining topics were resent to the panel for further ratings until consensus was achieved, defined as Cronbach α ≥ 0.95. At conclusion of the Delphi process, topics rated ≥ 3.5/4 were considered "core" curricular topics, while those rated 3.0-3.5 were considered "extended" curricular topics. RESULTS Forty-five experts from 13 Canadian centres participated. Two hundred eighty potential curricular topics, in 29 domains, were generated from a systematic literature review, relevant educational documents and Delphi panellists. Three rounds of surveys were completed before consensus was achieved, with response rates ranging from 93-100%. Twenty-eight topics, in eight domains, reached consensus as "core" curricular topics. Thirty-five additional topics, in 14 domains, reached consensus as "extended" curricular topics. CONCLUSIONS Delphi methodology allowed for achievement of expert consensus and content validation of EM curricular content best suited for simulation-based training. These results provide a foundation for improved integration of simulation into postgraduate EM training and can be used to inform a national simulation curriculum to supplement clinical training and optimize learning.
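The stopping and classification rules this abstract describes (drop topics rated below 2/4, stop when Cronbach's α across panellists reaches 0.95, then label topics rated ≥ 3.5 "core" and 3.0-3.5 "extended") can be sketched as follows. This is a hypothetical illustration with simulated ratings, not the study's data:

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Alpha over a (topics x panellists) matrix, panellists treated as items."""
    k = ratings.shape[1]
    return (k / (k - 1)) * (1 - ratings.var(axis=0, ddof=1).sum()
                            / ratings.sum(axis=1).var(ddof=1))

def delphi_round(ratings: np.ndarray, threshold: float = 0.95):
    """One rating round: drop topics scoring < 2/4, check the consensus rule."""
    means = ratings.mean(axis=1)
    keep = means >= 2.0
    consensus = cronbach_alpha(ratings[keep]) >= threshold
    return ratings[keep], means[keep], consensus

# Simulated 4-point ratings: 40 candidate topics, 10 panellists
rng = np.random.default_rng(1)
merit = rng.uniform(1.0, 4.0, size=40)
ratings = np.clip(merit[:, None] + rng.normal(scale=0.25, size=(40, 10)), 1, 4)

kept, means, consensus = delphi_round(ratings)
core = means >= 3.5                        # "core" curricular topics
extended = (means >= 3.0) & (means < 3.5)  # "extended" curricular topics
```

In the study itself this loop ran over three survey rounds; the sketch only shows the arithmetic applied after each round.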
27
Hennel EK, Subotic U, Berendonk C, Stricker D, Harendza S, Huwendiek S. A German-language competency-based multisource feedback instrument for residents: development and validity evidence. BMC MEDICAL EDUCATION 2020; 20:357. [PMID: 33046060 PMCID: PMC7552497 DOI: 10.1186/s12909-020-02259-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/22/2019] [Accepted: 09/28/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND In medical settings, multisource feedback (MSF) is a recognised method of formative assessment. It collects feedback on a doctor's performance from several perspectives in the form of questionnaires. Yet, no validated MSF questionnaire has been publicly available in German. Thus, we aimed to develop a German MSF questionnaire based on the CanMEDS roles and to investigate the evidence of its validity. METHODS We developed a competency-based MSF questionnaire in German, informed by the literature and expert input. Four sources of validity evidence were investigated: (i) Content was examined based on MSF literature, blueprints of competency, and expert-team discussions. (ii) The response process was supported by analysis of a think-aloud study, narrative comments, "unable to comment" ratings and evaluation data. (iii) The internal structure was assessed by exploratory factor analysis, and inter-rater reliability by generalisability analysis. Data were collected during two runs of MSF, in which 47 residents were evaluated once (first run) or several times (second and third run) on 81 occasions of MSF. (iv) To investigate consequences, we analysed the residents' learning goals and the progress as reported via MSF. RESULTS Our resulting MSF questionnaire (MSF-RG) consists of 15 items and one global rating, each rated on a scale and accompanied by a field for narrative comments, together covering a construct of a physician's competence. Additionally, there are five open questions for further suggestions.
Investigation of validity evidence revealed that: (i) The expert group agreed that the content comprehensively addresses clinical competence; (ii) The response processes indicated that the questions are understood as intended and supported the acceptance and usability; (iii) For the second run, factor analysis showed a one-factor solution, a Cronbach's alpha of 0.951 and an inter-rater reliability of 0.797 with 12 raters; (iv) There are indications that residents benefitted, considering their individual learning goals and based on their ratings reported via MSF itself. CONCLUSIONS To support residency training with multisource feedback, we developed a German MSF questionnaire (MSF-RG), which is supported by four sources of validity evidence. This MSF questionnaire may be useful to implement MSF in residency training in German-speaking regions.
Affiliation(s)
- Eva K. Hennel
- Department for Assessment and Evaluation (AAE), Institute for Medical Education, University of Bern, Mittelstrasse 43, 3012 Bern, Switzerland
- Ulrike Subotic
- University Children’s Hospital Basel, Spitalstrasse 33, 4056 Basel, Switzerland
- Christoph Berendonk
- Department for Assessment and Evaluation (AAE), Institute for Medical Education, University of Bern, Mittelstrasse 43, 3012 Bern, Switzerland
- Daniel Stricker
- Department for Assessment and Evaluation (AAE), Institute for Medical Education, University of Bern, Mittelstrasse 43, 3012 Bern, Switzerland
- Sigrid Harendza
- Department of Internal Medicine, University Medical Centre Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Sören Huwendiek
- Department for Assessment and Evaluation (AAE), Institute for Medical Education, University of Bern, Mittelstrasse 43, 3012 Bern, Switzerland

28
Alpine LM, O'Connor A, McGuinness M, Barrett EM. Performance-based assessment during clinical placement: Cross-sectional investigation of a training workshop for practice educators. Nurs Health Sci 2020; 23:113-122. [PMID: 32803810 DOI: 10.1111/nhs.12768] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 07/28/2020] [Accepted: 08/13/2020] [Indexed: 11/29/2022]
Abstract
Performance-based assessment evaluates a health professional student's performance as they integrate their knowledge and skills into clinical practice. Performance-based assessment grades, however, are reported to be highly variable due to the complexity of decision-making in the clinical environment. The aim of this study was to evaluate the impact of a training workshop based on frame-of-reference principles on grading of student performance by physiotherapy practice educators. This was a prospective cross-sectional study which used a single group pre-test, post-test design. Fifty-three practice educators rated two video vignettes depicting a poor and very good student performance, using a subsection of a physiotherapy performance-based assessment tool before and after training. Overall, results showed that participants amended their scores on approximately half of all scoring occasions following training, with the majority decreasing the scores awarded. This impacted positively on scoring for the poor performance video, bringing scores more in line with the true score. This study provides evidence of the benefit of a training workshop to influence decision-making in performance-based assessment as part of a wider education program for practice educators.
Affiliation(s)
- Lucy M Alpine
- Discipline of Physiotherapy, School of Medicine, Trinity College Dublin, The University of Dublin, Dublin, Ireland
- Anne O'Connor
- School of Allied Health, Health Sciences Building, University of Limerick, Limerick, Ireland
- Emer M Barrett
- Discipline of Physiotherapy, School of Medicine, Trinity College Dublin, The University of Dublin, Dublin, Ireland

29
Suhoyo Y, Schönrock-Adema J, Emilia O, Kuks JBM, Cohen-Schotanus J. How students and specialists appreciate the mini-clinical evaluation exercise (mini-CEX) in Indonesian clerkships. BMC MEDICAL EDUCATION 2020; 20:144. [PMID: 32384888 PMCID: PMC7206730 DOI: 10.1186/s12909-020-02062-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/16/2019] [Accepted: 05/01/2020] [Indexed: 06/01/2023]
Abstract
BACKGROUND Cultural differences might challenge the acceptance of the implementation of assessment formats that are developed in other countries. Acceptance of assessment formats is essential for their effectiveness; therefore, we explored the views of students and specialists on the practicality and impact on learning of these formats. This study was conducted to explore Indonesian students' and specialists' appreciation of the implementation of the Mini-Clinical Evaluation Exercise (mini-CEX) in Indonesian clerkships. METHODS This study was conducted at the Universitas Gadjah Mada, Indonesia. Participants were 52 students and 21 specialists in neurology and 78 students and 50 specialists in internal medicine. They were asked to complete a 19-item questionnaire that covered characteristics of the mini-CEX such as its practicality and its impact on learning and professional development. We used a Mann-Whitney U test to analyse the data. RESULTS In total, 124 students (46 from neurology and 78 from internal medicine) and 38 specialists (13 from neurology and 25 from internal medicine) participated in this study. Students and specialists were positive about the practicality of the mini-CEX and the impact of this assessment format on learning and on professional development. The Mann-Whitney U test showed that there were no significant differences between students' and specialists' opinions on the mini-CEX, except for 2 items: specialists' appreciation of direct observation (mean rank = 93.16) was statistically significantly higher than students' appreciation of it (mean rank = 77.93; z = 2.065; p < 0.05), but students' appreciation of the item that students' past mini-CEX results affected their recent mini-CEX outcomes (mean rank = 85.29) was significantly higher than specialists' appreciation of it (mean rank = 69.12; z = 2.140; p < 0.05).
CONCLUSION Students and specialists were positive about the mini-CEX in Indonesian clerkships, although it was developed and validated in another culture. We found only small differences between their appreciations, which may be explained by patterns of specialist-student interaction in Indonesian culture, a culture characterised by large power distance and low individualism.
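The item-level comparison this abstract describes (a Mann-Whitney U test on students' versus specialists' Likert ratings, reported as mean ranks and z) can be sketched with SciPy. The ratings below are simulated for illustration; only the group sizes follow the study:

```python
import numpy as np
from scipy.stats import mannwhitneyu, rankdata

rng = np.random.default_rng(2)
# Simulated 5-point Likert ratings of one questionnaire item
students = rng.integers(3, 6, size=124)      # 124 students rate 3-5
specialists = rng.integers(4, 6, size=38)    # 38 specialists rate 4-5

u_stat, p_value = mannwhitneyu(specialists, students, alternative="two-sided")

# Mean ranks across the pooled sample, as reported in the abstract
ranks = rankdata(np.concatenate([specialists, students]))
mean_rank_specialists = ranks[:38].mean()
mean_rank_students = ranks[38:].mean()
```

The test is rank-based, which suits ordinal Likert data; the mean ranks indicate the direction of any group difference.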
Affiliation(s)
- Yoyo Suhoyo
- Department of Medical Education, Faculty of Medicine, Public Health and Nursing, Universitas Gadjah Mada, Gd. Prof. Drs. Med. R. Radiopoetro, Lt. 6 Sayap Barat, Jl. Farmako, Sekip Utara, Yogyakarta, 55281 Indonesia
- Center for Education Development and Research in Health Professions, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- Johanna Schönrock-Adema
- Center for Education Development and Research in Health Professions, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- Ova Emilia
- Department of Obstetrics and Gynaecology, Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Jan B. M. Kuks
- Institute for Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- Department of Neurology, University Medical Center Groningen, Groningen, The Netherlands
- Janke Cohen-Schotanus
- Center for Education Development and Research in Health Professions, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands

30
Cutrer WB, Russell RG, Davidson M, Lomis KD. Assessing medical student performance of Entrustable Professional Activities: A mixed methods comparison of Co-Activity and Supervisory Scales. MEDICAL TEACHER 2020; 42:325-332. [PMID: 31714166 DOI: 10.1080/0142159x.2019.1686135] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Introduction: Observations of medical student participation in entrustable professional activities (EPAs) provide insight into the student's ability to synthesize competencies across domains and effectively function in different clinical scenarios. Both Supervisory and Co-Activity Assessment Scales have been recommended for use with medical students. Methods: Students were assessed on EPAs during Acting Internships in Medicine and Pediatrics. Two rating scales were modified based on expert review and included throughout the 2017-18 academic year. Statistical analysis was conducted to clarify relationships between the scales. Raters were interviewed to explore their interpretations and response processes. Results: The results of the McNemar test suggest that the scales are different (p-value <.01). Co-Activity and Supervisory EPA ratings are related, but not interchangeable. This finding concurs with themes that emerged from response process interviews: (1) the scales are not directly parallel; (2) rater preference depends on diverse factors; and (3) rater comments are crucial for guiding students' future learning. Conclusion: The modified Chen Supervisory Scale and the modified Ottawa Co-Activity Scale are measuring different aspects of the entrustable professional activity landscape. Both scales can provide useful information to the learner and the assessment system, but they should not be treated as interchangeable assessments.
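The McNemar test this abstract mentions compares paired dichotomous ratings (for example, the same encounter dichotomised as "entrustable" or not on each scale) and, in its exact form, reduces to a sign test on the discordant pairs. A minimal sketch with SciPy, using hypothetical counts rather than the study's data:

```python
import numpy as np
from scipy.stats import binomtest

# Hypothetical paired 2x2 table of dichotomised ratings:
#                      Co-Activity: yes   Co-Activity: no
table = np.array([[60, 25],    # Supervisory: yes
                  [8, 40]])    # Supervisory: no

# Exact McNemar test: a two-sided binomial (sign) test on the discordant cells.
b, c = int(table[0, 1]), int(table[1, 0])
p_value = binomtest(min(b, c), n=b + c, p=0.5).pvalue
```

Only the discordant cells (25 and 8 here) drive the test; the concordant cells are ignored, which is why McNemar suits the question of whether two paired scales systematically disagree.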
Affiliation(s)
- Mario Davidson
- School of Medicine, Vanderbilt University, Nashville, TN, USA

31
Aryal KR, Currow C, Downey S, Praseedom R, Seager A. Work-Based Assessments in Higher General Surgical Training Program: A Mixed Methods Study Exploring Trainers' and Trainees' Views and Experiences. Surg J (N Y) 2020; 6:e49-e61. [PMID: 32158953 PMCID: PMC7062550 DOI: 10.1055/s-0040-1708062] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Accepted: 01/20/2020] [Indexed: 11/25/2022] Open
Abstract
Introduction In the United Kingdom, work-based assessments (WBAs), including procedure-based assessments (PBAs), case-based discussions (CBDs), clinical evaluation exercises (CEXs), and direct observation of procedural skills (DOPS), have been used in the Higher General Surgical Training Program (HGSTP) since the introduction of Modernising Medical Careers. Although the Intercollegiate Surgical Curriculum Project states that they should be used for the formative development of trainees through feedback and reflection, no study has examined perceptions of their usefulness and the barriers to using them, particularly in the HGSTP. The aim of this study was to investigate trainers' and trainees' perceptions of their usefulness, the barriers to using them, and the way forward for their improvement in the HGSTP. Methods This was a mixed methods study. In phase I, after ethics committee approval, an online survey was sent to 83 trainers and 104 trainees, with response rates of 33 and 37%, respectively, using Online Surveys (formerly Bristol Online Survey) from July 2018 to December 2018. After analysis of these results, in phase II, semistructured interviews were conducted with five trainees and five trainers who had volunteered during phase I. Thematic analysis was performed to develop overarching themes. Results For professional formative development, 15% of the trainers and 53% of the trainees felt that WBAs had a low value. Among the four WBAs (CEX, CBD, PBA, and DOPS), the PBA was thought to be the most useful by 52% of trainers and 74% of trainees. More trainers than trainees felt that it was being used as a formative tool (33 vs. 16%). The total number of WBAs thought to be required was between 20 and 40 per year, with 46% of the trainers and 53% of the trainees preferring these numbers.
The thematic analysis generated four themes with subthemes in each: theme 1, "factors affecting usefulness," including the mode of validation, trainer/trainee engagement, and time spent in validating; theme 2, "doubt on utility," due to doubt about validity and use as a tick-box exercise; theme 3, "pitfalls/difficulties," due to lack of time to validate, late validation, e-mail rather than face-to-face validation, trainer and trainee behavior, variability in feedback given, and emphasis on number rather than quality; and theme 4, "improvement strategies." Conclusions The WBAs are not being used in the way they are supposed to be used. Trainers' perception of educational impact (Kirkpatrick levels 1 and 2) was more optimistic than trainees'. Improvements can be made by giving/finding more time, trainer training, more face-to-face validation, and better trainer-trainee interactions.
Affiliation(s)
- Kamal Raj Aryal
- Department of General Surgery, James Paget University Hospital, Great Yarmouth, United Kingdom
- Department of General Surgery, University of East Anglia, Norwich, United Kingdom
- Chelise Currow
- Department of General Surgery, Luton and Dunstable University Hospital, Luton, United Kingdom
- Sarah Downey
- Department of General Surgery, James Paget University Hospital, Great Yarmouth, United Kingdom
- Raaj Praseedom
- Department of Hepatobiliary Surgery, Addenbrookes Hospital, Cambridge, United Kingdom
- Alexander Seager
- Department of General Surgery, Peterborough City Hospital, Peterborough, United Kingdom

32
Keijser WA, Handgraaf HJM, Isfordink LM, Janmaat VT, Vergroesen PPA, Verkade JMJS, Wieringa S, Wilderom CPM. Development of a national medical leadership competency framework: the Dutch approach. BMC MEDICAL EDUCATION 2019; 19:441. [PMID: 31779632 PMCID: PMC6883542 DOI: 10.1186/s12909-019-1800-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Accepted: 09/09/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND The concept of medical leadership (ML) can enhance physicians' inclusion in efforts for higher-quality healthcare. Despite ML's spiking popularity, only a few countries have built a national taxonomy to facilitate ML competency education and training. In this paper we discuss the development of the Dutch ML competency framework with two objectives: to account for the framework's making and to complement known approaches to developing such frameworks. METHODS We designed a research approach and analyzed data from multiple sources based on Grounded Theory. Facilitated by the Royal Dutch Medical Association, a group of 14 volunteer researchers met over a period of 2.5 years to perform: 1) literature review; 2) individual interviews; 3) focus groups; 4) online surveys; 5) international framework comparison; and 6) comprehensive data synthesis. RESULTS The developmental processes that led to the framework provided a taxonomic depiction of ML in the Dutch perspective. It can be seen as a canonical 'knowledge artefact' created by a community of practice and comprises a contemporary definition of ML and 12 domains, each entailing four distinct ML competencies. CONCLUSIONS This paper demonstrates how a new language for ML can be created in a healthcare system. The success of our approach to capturing insights, expectations and demands relating to leadership by Dutch physicians depended on close involvement of the Dutch national medical associations and a nationally active community of practice; the voluntary work of diverse researchers and medical practitioners; and an appropriate research design that used multiple methods and strategies to circumvent the reverberation of established opinions and conventionalisms. IMPLICATIONS The experiences reported here may provide inspiration and guidance for those anticipating similar work in other countries to develop a tailored approach to creating a ML framework.
Affiliation(s)
- Wouter A. Keijser
- Faculty of Behavioural, Management and Social Sciences (BMS) Change, Management and Organizational Behavior (CMOB), University Twente, Enschede, The Netherlands
- DIRMI Foundation, Utrecht, The Netherlands
- Liz M. Isfordink
- Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- Vincent T. Janmaat
- Erasmus Medical Center, Wytemaweg 80, 3015 CP Rotterdam, The Netherlands
- Pieter-Paul A. Vergroesen
- Department of Orthopaedic Surgery, University Medical Center Utrecht, Utrecht Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
- Sietse Wieringa
- Institute of Health and Society, University of Oslo, Oslo, Norway
- Department of Continuing Education, University of Oxford, Oxford, OX1 2JD UK
- Celeste P. M. Wilderom
- Faculty of Behavioural, Management and Social Sciences (BMS) Change, Management and Organizational Behavior (CMOB), University Twente, Enschede, The Netherlands

33
Simulation in surgical training: Prospective cohort study of access, attitudes and experiences of surgical trainees in the UK and Ireland. Int J Surg 2019; 67:94-100. [DOI: 10.1016/j.ijsu.2019.04.004] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2018] [Revised: 03/13/2019] [Accepted: 04/10/2019] [Indexed: 11/22/2022]
34
Müller S, Koch I, Settmacher U, Dahmen U. How the introduction of OSCEs has affected the time students spend studying: results of a nationwide study. BMC MEDICAL EDUCATION 2019; 19:146. [PMID: 31092236 PMCID: PMC6521539 DOI: 10.1186/s12909-019-1570-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/08/2018] [Accepted: 04/23/2019] [Indexed: 06/01/2023]
Abstract
BACKGROUND Medical schools globally now use objective structured clinical examinations (OSCEs) for assessing a student's clinical performance. In Germany, almost all of the 36 medical schools have incorporated at least one summative OSCE into their clinical curriculum. This nationwide study aimed to examine whether the introduction of OSCEs shifted studying time. The authors explored what resources were important for studying in preparation for OSCEs, how much time students spent studying, and how they performed; each compared to traditionally used multiple choice question (MCQ) tests. METHODS The authors constructed a questionnaire comprising two identical sections, one for each assessment method. Each section contained a list of 12 study resources requesting preferences on a 5-point scale, and two open-ended questions about average studying time and average grades achieved. During springtime of 2015, medical schools in Germany were asked to administer the web-based questionnaire to their students in years 3-6. Statistical analysis compared the responses on the open-ended questions between the OSCE and MCQs using a paired t-test. RESULTS The sample included 1131 students from 32 German medical schools. Physical examination courses were most important in preparation for OSCEs, followed by class notes/logs and the skills lab. Other activities in clinical settings (e.g. medical clerkships) and collaborative strategies ranked next. Conversely, resources for gathering knowledge (e.g. lectures or textbooks) were of minor importance when studying for OSCEs. Reported studying time was lower for OSCEs compared to MCQ tests. The reported average grade, however, was better on OSCEs. CONCLUSIONS The study findings suggest that the introduction of OSCEs shifted studying time. When preparing for OSCEs students focus on the acquisition of clinical skills and need less studying time to achieve the expected level of competence/performance, as compared to the MCQ tests.
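The paired t-test this abstract describes compares each student's two self-reports (study time for OSCEs vs. MCQ tests) rather than two independent groups. A minimal sketch with SciPy on simulated data, constructed so OSCE preparation takes less time, as the study found:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n = 1131                                     # respondents, as in the study
# Simulated self-reported study hours per exam type (illustrative values)
mcq_hours = rng.normal(loc=40, scale=10, size=n)
osce_hours = mcq_hours - rng.normal(loc=8, scale=5, size=n)

t_stat, p_value = ttest_rel(osce_hours, mcq_hours)
```

Pairing on the student removes between-student variability in overall study habits, which is why the paired test is the appropriate choice for this within-subject design.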
Affiliation(s)
- Stefan Müller
- Department of General, Visceral and Vascular Surgery, Universitätsklinikum Jena, Am Klinikum 1, 07747 Jena, Germany
- Ines Koch
- Department of Gynaecology and Reproductive Medicine, Universitätsklinikum Jena, Am Klinikum 1, 07747 Jena, Germany
- Utz Settmacher
- Department of General, Visceral and Vascular Surgery, Universitätsklinikum Jena, Am Klinikum 1, 07747 Jena, Germany
- Uta Dahmen
- Department of General, Visceral and Vascular Surgery, Experimental Transplantation Surgery, Universitätsklinikum Jena, Drackendorfer Str. 1, 07747 Jena, Germany

35
Weersink K, Hall AK, Rich J, Szulewski A, Dagnone JD. Simulation versus real-world performance: a direct comparison of emergency medicine resident resuscitation entrustment scoring. Adv Simul (Lond) 2019; 4:9. [PMID: 31061721 PMCID: PMC6492388 DOI: 10.1186/s41077-019-0099-4] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2018] [Accepted: 04/15/2019] [Indexed: 11/10/2022] Open
Abstract
Background Simulation is increasingly being used in postgraduate medical education as an opportunity for competency assessment. However, there is limited direct evidence that supports performance in the simulation lab as a surrogate of workplace-based clinical performance for non-procedural tasks such as resuscitation in the emergency department (ED). We sought to directly compare entrustment scoring of resident performance in the simulation environment to clinical performance in the ED. Methods The resuscitation assessment tool (RAT) was derived from the previously implemented and studied Queen's simulation assessment tool (QSAT) via a modified expert review process. The RAT uses an anchored global assessment scale to generate an entrustment score and narrative comments. Emergency medicine (EM) residents were assessed using the RAT on cases in simulation-based examinations and in the ED during resuscitation cases from July 2016 to June 2017. Resident mean entrustment scores were compared using Pearson's correlation coefficient to determine the relationship between entrustment in simulation cases and in the ED. Inductive thematic analysis of written commentary was conducted to compare workplace-based with simulation-based feedback. Results There was a moderate, positive correlation found between mean entrustment scores in the simulated and workplace-based settings, which was statistically significant (r = 0.630, n = 17, p < 0.01). Further, qualitative analysis demonstrated overall management and leadership themes were more common narratives in the workplace, while more specific task-based feedback predominated in the simulation-based assessment. Both workplace-based and simulation-based narratives frequently commented on communication skills. 
Conclusions In this single-center study with a limited sample size, assessment of residents using entrustment scoring in simulation settings was demonstrated to have a moderate positive correlation with assessment of resuscitation competence in the workplace. This study suggests that resuscitation performance in simulation settings may be an indicator of competence in the clinical setting. However, multiple factors contribute to this complicated and imperfect relationship. It is imperative to consider narrative comments in supporting the rationale for numerical entrustment scores in both settings and to include both simulation and workplace-based assessment in high-stakes decisions of progression.
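The study's headline statistic is Pearson's r between mean entrustment scores in the two settings. As a hedged illustration only, the calculation might look like the sketch below; the per-resident scores are invented, not the study's data:

```python
# Illustrative only: hypothetical per-resident mean entrustment scores
# (e.g. on a 1-5 entrustment scale); not the study's actual data.
from scipy import stats

sim_scores = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0, 3.6]  # simulation-based assessment
ed_scores = [3.0, 4.3, 3.1, 3.7, 4.6, 2.9, 3.8]   # workplace-based (ED) assessment

# Pearson's correlation between the two sets of mean scores
r, p = stats.pearsonr(sim_scores, ed_scores)
print(f"r = {r:.3f}, p = {p:.4f}")
```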
Affiliation(s)
- Kristen Weersink, Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center, c/o 76 Stuart St, Kingston, ON K7L 2V7, Canada
- Andrew K Hall, Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center, c/o 76 Stuart St, Kingston, ON K7L 2V7, Canada
- Jessica Rich, Faculty of Education, Queen's University, Kingston, ON, Canada
- Adam Szulewski, Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center, c/o 76 Stuart St, Kingston, ON K7L 2V7, Canada
- J Damon Dagnone, Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center, c/o 76 Stuart St, Kingston, ON K7L 2V7, Canada
36
Bansal M. Introduction of Directly Observed Procedural Skills (DOPS) as a Part of Competency-Based Medical Education in Otorhinolaryngology. Indian J Otolaryngol Head Neck Surg 2019; 71:161-166. [PMID: 31275823; DOI: 10.1007/s12070-019-01624-y]
Abstract
The Directly Observed Procedural/Practical Skill (DOPS) is a relatively new but reliable tool for formative assessment. The lack of awareness of DOPS among the otorhinolaryngologists of India prompted us to conduct this study. The aim of the study was to introduce DOPS in the Oto-rhino-laryngology Department. The objectives of the study were: (1) to prepare lists of Oto-rhino-laryngology procedures for DOPS; (2) to conduct an orientation programme on DOPS for the participants; (3) to prepare a structured list of items for the rating scale; and (4) to facilitate and conduct DOPS encounters for different Oto-rhino-laryngology procedures. The study was conducted in a tertiary care medical college hospital from April 2018 to August 2018. Thirty-three trainees and 5 trainers participated. The 421 DOPS encounters involved 41 Oto-rhino-laryngology procedures. The nonparametric χ2 test was used to check the association between average time and clinical settings for both Oto-rhino-laryngology procedures and DOPS encounters. Male trainees (63.63%) outnumbered female trainees. Most trainees (91%) were aged 22-25 years. Approximately half (49%) of the Oto-rhino-laryngology procedures (20/41) and about nine-tenths (86.22%) of the DOPS encounters (363/421) were conducted in the outpatient department (OPD). The average time taken to complete the procedures and DOPS encounters was 15 min or less in the majority of cases (38/41 procedures, 91%; 414/421 encounters, 98%). DOPS was introduced as a learning tool in the Oto-rhino-laryngology Department of our medical college. For assessing the "competency level" of trainees in E.N.T. procedures, DOPS is a high-quality instrument as it tests the candidate at the "does" level.
Affiliation(s)
- Mohan Bansal, CU Shah Medical College, C-23 Doctors Quarters, Dudhrej Road, Surendranagar, Gujarat, India
37
Kirwan GW, Clark CR, Dalton M. Rating of physiotherapy student clinical performance: is it possible to gain assessor consistency? BMC Med Educ 2019; 19:32. [PMID: 30678662; PMCID: PMC6346544; DOI: 10.1186/s12909-019-1459-4]
Abstract
BACKGROUND Reliable interpretation of the Assessment of Physiotherapy Practice (APP) tool is necessary for consistent assessment of physiotherapy students in the clinical setting. However, since the APP was implemented, no study has reassessed how consistently a student's performance is evaluated against the threshold standards. Therefore, the primary aim of this study was to determine the consistency among physiotherapy educators when assessing a student's performance using the APP tool. METHODS Physiotherapists (n = 153) from Australia with a minimum of 3 years' clinical experience who had supervised a physiotherapy student within the past 12 months were recruited. Three levels of performance (not adequate, adequate, good/excellent) were scripted and filmed across four clinical areas: outpatient musculoskeletal, neurorehabilitation, cardiorespiratory and inpatient musculoskeletal. In the initial phase of the study, scripts were written by academic staff and reviewed by an expert panel (n = 8) to ensure face and content validity as well as clinical relevance prior to filming. In the second phase, pilot testing of the vignettes was performed by clinical academics (n = 16) from Australian universities to confirm the validity of each vignette. In the final phase, study participants reviewed one randomly allocated vignette in their nominated clinical area and rated the student performance, providing a rationale for their decision. Participants were blinded to the performance level. Percentage agreement between participants was calculated for each vignette, with an a priori percentage agreement of 75% considered acceptable. RESULTS Consensus among educators across all areas was observed when assessing a performance at either the 'not adequate' (97%) or the 'good/excellent' level (89%). When assessing a student at the 'adequate' level, consensus dropped to 43%. Similarly, consensus at the 'not adequate' and 'good/excellent' levels ranged from 83% to 100% across each clinical area, while agreement was between 33% and 46% for the 'adequate' level. Percentage agreement between clinical educators was 89% when differentiating 'not adequate' from 'adequate' or better. CONCLUSION Consistency is achievable for 'not adequate' and 'good/excellent' performances, although variability exists at the 'adequate' level. Consistency remained when differentiating an 'adequate' or better performance from a 'not adequate' one.
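The study's central metric, percentage agreement among raters, can be sketched as follows; the function and the ratings are hypothetical, for illustration only:

```python
# Minimal sketch of percentage agreement: the share of raters who
# choose the modal rating category. The ratings below are invented.
from collections import Counter

def percent_agreement(ratings):
    """Percentage of raters selecting the most common category."""
    counts = Counter(ratings)
    return 100.0 * counts.most_common(1)[0][1] / len(ratings)

# hypothetical ratings of one 'adequate'-level vignette by 12 educators
ratings = ["adequate"] * 5 + ["not adequate"] * 4 + ["good/excellent"] * 3
agreement = percent_agreement(ratings)
print(f"{agreement:.1f}% agreement")  # well below a 75% a priori threshold
```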
Affiliation(s)
- Garry W. Kirwan, Physiotherapy Department, QEII Jubilee Hospital, Metro South Health, Coopers Plains, QLD 4109, Australia; Menzies Health Institute, School of Allied Health Sciences, Griffith University, Gold Coast Campus, Southport, 4222, Australia
- Courtney R. Clark, Menzies Health Institute, School of Allied Health Sciences, Griffith University, Gold Coast Campus, Southport, 4222, Australia
- Megan Dalton, School of Physiotherapy, Australian Catholic University, Sydney, Australia
38
Affiliation(s)
- Pat Lilley, AMEE and Medical Teacher, Dundee, UK
39
Frawley HC, Neumann P, Delany C. An argument for competency-based training in pelvic floor physiotherapy practice. Physiother Theory Pract 2018; 35:1117-1130. [DOI: 10.1080/09593985.2018.1470706]
Affiliation(s)
- Helena C Frawley, Department of Physiotherapy, School of Primary and Allied Health Care, Faculty of Medicine, Nursing and Health Sciences, Monash University, Frankston, Australia; Centre for Allied Health Research and Education, Cabrini Health, Malvern, Australia
- Patricia Neumann, School of Health Sciences, University of South Australia, Adelaide, Australia
- Clare Delany, Department of Medical Education, School of Medicine, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Carlton, Australia
40
Escudier MP, Woolford MJ, Tricio JA. Assessing the application of knowledge in clinical problem-solving: the structured professional reasoning exercise. Eur J Dent Educ 2018; 22:e269-e277. [PMID: 28804939; DOI: 10.1111/eje.12286]
Abstract
INTRODUCTION Clinical reasoning is a fundamental and core clinical competence of healthcare professionals. The study aimed to investigate the utility of the Structured Professional Reasoning Exercise (SPRE), a new competence assessment method designed to measure dental students' clinical reasoning in simulated scenarios covering the clinical areas of Oral Disease, Primary Dental Care and Restorative Dentistry, Child Dental Health, and Dental Practice and Clinical Governance. MATERIALS AND METHODS A total of 313 year-5 students sat the assessment. Students spent 45 minutes assimilating the scenarios before rotating through four pairs of examiners, drawn from 39 trained examiners, with each pair independently assessing a single scenario over a ten-minute period using a structured marking sheet. After the assessment, all students and examiners were invited to complete an anonymous questionnaire on their perceptions of the exercise. These questionnaires and the examination scores were statistically analysed. RESULTS AND DISCUSSION Oral Disease showed the lowest scores; Dental Practice and Governance the highest. The overall Intraclass Correlation Coefficient (ICC) was 0.770, whilst examiner training helped to increase the ICC from 0.716 in 2013 to 0.835 in 2014. Exploratory factor analysis revealed one major factor with an eigenvalue of 2.75 (68.8% of total variance). The Generalizability coefficient was consistent at 0.806. A total of 295 students and 32 examiners completed the perception questionnaire. Students' lowest-rated perceptions of the examination were that it was an "unpleasant" and "unenjoyable" experience, whilst the highest-rated were "interesting", "valuable" and "important". The majority of students and examiners reported the assessment as acceptable, fair and valid. CONCLUSION The SPRE offers a reliable, valid and acceptable assessment method, provided it comprises at least four scenarios, each with two trained assessors marking independently.
Affiliation(s)
- M P Escudier, King's College London Dental Institute, London, UK
- M J Woolford, King's College London Dental Institute, London, UK
- J A Tricio, King's College London Dental Institute, London, UK; Faculty of Dentistry, University of the Andes, Santiago, Chile
41
Nair BKR, Moonen-van Loon JMW, Parvathy M, Jolly BC, van der Vleuten CPM. Composite reliability of workplace-based assessment of international medical graduates. Med J Aust 2017; 207:453. [DOI: 10.5694/mja17.00130]
Affiliation(s)
- Balakrishnan (Kichu) R Nair, Centre for Medical Professional Development, John Hunter Hospital, Newcastle, NSW; University of Newcastle, Newcastle, NSW
- Mulavana Parvathy, Centre for Medical Professional Development, John Hunter Hospital, Newcastle, NSW
42
Sturt R, Burge AT, Harding P, Sayer J. Physiotherapists' perceptions of workplace competency: a mixed-methods observational study. Aust Health Rev 2017. [PMID: 28637577; DOI: 10.1071/AH16148]
Abstract
Objectives Workplace-based competency is increasingly considered fundamental to patient safety and quality healthcare. The aim of the present study was to identify and describe physiotherapists' perceptions of workplace competency. Methods The present study was a mixed-methods cross-sectional observational study. Thematic and descriptive analyses of qualitative and survey data were undertaken. Forty-six physiotherapists employed at a metropolitan acute public hospital participated in interviews or focus groups; a subgroup of 31 participants also completed an online survey. Results Five main themes were identified: (1) despite the availability of workplace learning opportunities and supports, less-experienced staff reported limited confidence; (2) assessment and feedback around workplace competency were limited, predominantly informal and unstructured, with less than half of the cohort (42%) agreeing that the feedback received was useful for improving their workplace skills; (3) practising within individual scope is an important aspect of workplace competency as a physiotherapist; (4) most (81%) agreed it was important for them to self-determine their learning and development goals, and they relied primarily on informal discussion to achieve these goals; and (5) physiotherapists felt motivated regarding workplace learning, with 97% interested in developing their workplace skills; however, nearly half (45%) did not feel they had sufficient time to do so. Conclusions The perceptions of physiotherapists working in a metropolitan acute public hospital are reflected in five themes. These themes elucidate how workplace competency is supported, maintained and developed among physiotherapists in this setting. They also highlight key challenges of workplace learning faced by this cohort and suggest methods that may assist with improving feedback mechanisms and knowledge acquisition. What is known about this topic? Studies investigating employee perceptions of workplace competency, knowledge, skills and learning are found across a range of industries. Workplace-based competency is increasingly considered fundamental to patient safety and quality healthcare. Little is known about physiotherapists' perceptions of workplace competency. What does this paper add? This study has identified and described themes around physiotherapists' perceptions of their workplace knowledge and skills. What are the implications for practitioners? The themes identified provide support for the development, implementation and evaluation of a workplace-based competency framework for physiotherapists.
Affiliation(s)
- Rodney Sturt, Alfred Health, The Alfred, 55 Commercial Road, Melbourne, Vic. 3004, Australia
- Angela T Burge, Alfred Health, The Alfred, 55 Commercial Road, Melbourne, Vic. 3004, Australia
- Paula Harding, Alfred Health, The Alfred, 55 Commercial Road, Melbourne, Vic. 3004, Australia
- James Sayer, Alfred Health, The Alfred, 55 Commercial Road, Melbourne, Vic. 3004, Australia
43
Mortsiefer A, Karger A, Rotthoff T, Raski B, Pentzek M. Examiner characteristics and interrater reliability in a communication OSCE. Patient Educ Couns 2017; 100:1230-1234. [PMID: 28139274; DOI: 10.1016/j.pec.2017.01.013]
Abstract
OBJECTIVE To identify inter-individual examiner factors associated with interrater reliability in a summative communication OSCE in the fourth study year. METHODS The OSCE consists of four stations assessed with a 4-item, 5-point global rating instrument. A bivariate secondary analysis of interrater reliability in relation to four examiner factors (gender, profession, OSCE experience, examiner training) was conducted. Intraclass correlation coefficients (ICC) were calculated and compared between examiner dyads of differing similarity. RESULTS 169 pairwise ratings from 19 different examiners in 16 dyads were analysed. Interrater reliability was significantly higher in examiner dyads of the same vs. different gender (ICC = 0.76 (95% CI = 0.65-0.83) vs. ICC = 0.41 (95% CI = 0.21-0.57)), in dyads of two clinicians vs. non-clinical/mixed professions (ICC = 0.72 (95% CI = 0.56-0.83) vs. ICC = 0.57 (95% CI = 0.41-0.69)), and in dyads with high vs. low/mixed OSCE experience (ICC = 0.73 (95% CI = 0.50-0.87) vs. ICC = 0.56 (95% CI = 0.41-0.69)). Participation in recent examiner training had no influence on ICCs. CONCLUSION Better concordance of ratings between clinically active examiners may hint at the context specificity of good communication. Higher interrater reliability between examiners of the same gender may indicate gender-specific communication concepts. PRACTICE IMPLICATIONS Medical faculties introducing summative assessment of communication competence should consider the influence of examiner characteristics on interrater reliability.
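As a hedged sketch of how an intraclass correlation coefficient of this kind can be computed: the code below uses a one-way random-effects ICC(1,1), which is one common variant (the abstract does not specify its ICC model), and an invented rating matrix:

```python
# Illustrative ICC(1,1) (one-way random effects, Shrout & Fleiss case 1).
# Rows are examinees, columns are the two examiners in a dyad; the
# ratings are invented, not the study's data.
import numpy as np

def icc_oneway(x):
    """ICC(1,1) from an n_targets x k_raters rating matrix."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)         # between-target mean square
    msw = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))  # within-target mean square
    return (msb - msw) / (msb + (k - 1) * msw)

ratings = np.array([[4, 4], [2, 3], [5, 5], [3, 3], [1, 2], [4, 3]])
icc = icc_oneway(ratings)
print(f"ICC(1,1) = {icc:.3f}")
```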
Affiliation(s)
- Achim Mortsiefer, Heinrich-Heine-University Düsseldorf, Medical Faculty, Institute of General Practice, Werdener Str. 7, 40227 Düsseldorf, Germany
- André Karger, Heinrich-Heine-University Düsseldorf, Medical Faculty, Clinical Institute of Psychosomatic Medicine and Psychotherapy, Moorenstr. 5, 40225 Düsseldorf, Germany
- Thomas Rotthoff, Heinrich-Heine-University Düsseldorf, Medical Faculty, Deanery of Study and Department for Endocrinology, Diabetes, and Rheumatology, Moorenstr. 5, 40225 Düsseldorf, Germany
- Bianca Raski, Heinrich-Heine-University Düsseldorf, Medical Faculty, Deanery of Study and Clinical Institute of Psychosomatic Medicine and Psychotherapy, Moorenstr. 5, 40225 Düsseldorf, Germany
- Michael Pentzek, Heinrich-Heine-University Düsseldorf, Medical Faculty, Institute of General Practice, Werdener Str. 7, 40227 Düsseldorf, Germany
44
Thomsen ASS. Intraocular surgery - assessment and transfer of skills using a virtual-reality simulator. Acta Ophthalmol 2017. [PMID: 28626885; DOI: 10.1111/aos.13505]
Affiliation(s)
- Ann Sofia Skou Thomsen, Department of Ophthalmology, Rigshospitalet - Glostrup, University of Copenhagen, Copenhagen, Denmark
45
Simulation in Canadian postgraduate emergency medicine training – a national survey. Can J Emerg Med 2017; 20:132-141. [DOI: 10.1017/cem.2017.24]
Abstract
OBJECTIVES Simulation-based education (SBE) is an important training strategy in emergency medicine (EM) postgraduate programs. This study sought to characterize the use of simulation in FRCPC-EM residency programs across Canada. METHODS A national survey was administered to residents and knowledgeable program representatives (PRs) at all Canadian FRCPC-EM programs. Survey question themes included simulation program characteristics, the frequency of resident participation, the location and administration of SBE, institutional barriers, interprofessional involvement, content, assessment strategies, and attitudes about SBE. RESULTS Resident and PR response rates were 63% (203/321) and 100% (16/16), respectively. Residents reported a median of 20 (range 0–150) hours of annual simulation training, with 52% of residents indicating that the time dedicated to simulation training met their needs. PRs reported the frequency of SBE sessions ranging from weekly to every 6 months, with 15 (94%) programs having an established simulation curriculum. Two (13%) of the programs used simulation for resident assessment, although 15 (94%) of PRs indicated that they would be comfortable with simulation-based assessment. The most common PR-identified barriers to administering simulation were a lack of protected faculty time (75%) and a lack of faculty experience with simulation (56%). Interprofessional involvement in simulation was strongly valued by both residents and PRs. CONCLUSIONS SBE is frequently used by Canadian FRCPC-EM residency programs. However, there exists considerable variability in the structure, frequency, and timing of simulation-based activities. As programs transition to competency-based medical education, national organizations and collaborations should consider the variability in how SBE is administered.
46
Lomis KD, Russell RG, Davidson MA, Fleming AE, Pettepher CC, Cutrer WB, Fleming GM, Miller BM. Competency milestones for medical students: design, implementation, and analysis at one medical school. Med Teach 2017; 39:494-504. [PMID: 28281837; DOI: 10.1080/0142159X.2017.1299924]
Abstract
Competency-based assessment seeks to align measures of performance directly with desired learning outcomes based upon the needs of patients and the healthcare system. Recognizing that assessment methods profoundly influence student motivation and effort, it is critical to measure all desired aspects of performance throughout an individual's medical training. The Accreditation Council for Graduate Medical Education (ACGME) defined domains of competency for residency; the subsequent Milestones Project seeks to describe each learner's progress toward competence within each domain. Because the various clinical disciplines defined unique competencies and milestones within each domain, it is difficult for undergraduate medical education to adopt existing GME milestones language. This paper outlines the process undertaken by one medical school to design, implement and improve competency milestones for medical students. A team of assessment experts developed milestones for a set of focus competencies; these have now been monitored in medical students over two years. A unique digital dashboard enables individual, aggregate and longitudinal views of student progress by domain. Validation and continuous quality improvement cycles are based upon expert review, user feedback, and analysis of variation between students and between assessors. Experience to date indicates that milestone-based assessment has significant potential to guide the development of medical students.
Affiliation(s)
- Kimberly D Lomis, Office of Undergraduate Medical Education, Vanderbilt University School of Medicine, Nashville, TN, USA
- Regina G Russell, Office of Undergraduate Medical Education, Vanderbilt University School of Medicine, Nashville, TN, USA
- Mario A Davidson, Department of Biostatistics, Vanderbilt University School of Medicine, Nashville, TN, USA
- Amy E Fleming, Office of Medical Student Affairs, Vanderbilt University School of Medicine, Nashville, TN, USA
- Cathleen C Pettepher, Office of Undergraduate Medical Education, Vanderbilt University School of Medicine, Nashville, TN, USA
- William B Cutrer, Division of Pediatric Critical Care, Vanderbilt University Medical Center, Nashville, TN, USA
- Geoffrey M Fleming, Division of Pediatric Critical Care, Vanderbilt University Medical Center, Nashville, TN, USA
- Bonnie M Miller, Office of Health Sciences Education, Vanderbilt University School of Medicine, Nashville, TN, USA
47
Suhoyo Y, Van Hell EA, Kerdijk W, Emilia O, Schönrock-Adema J, Kuks JBM, Cohen-Schotanus J. Influence of feedback characteristics on perceived learning value of feedback in clerkships: does culture matter? BMC Med Educ 2017; 17:69. [PMID: 28381280; PMCID: PMC5382527; DOI: 10.1186/s12909-017-0904-5]
Abstract
BACKGROUND Various feedback characteristics have been suggested to positively influence student learning. It is not clear how these feedback characteristics contribute to students' perceived learning value of feedback in cultures classified low on the cultural dimension of individualism and high on power distance. This study was conducted to validate the influence of five feedback characteristics on students' perceived learning value of feedback in an Indonesian clerkship context. METHODS We asked clerks in Neurology (n = 169) and Internal Medicine (n = 132) to assess on a 5-point Likert scale the learning value of the feedback they received. We asked them to record whether the feedback provider (1) informed the student what went well, (2) mentioned which aspects of performance needed improvement, (3) compared the student's performance to a standard, (4) further explained or demonstrated the correct performance, and (5) prepared an action plan with the student to improve performance. Data were analyzed using multilevel regression. RESULTS A total of 250 students participated in this study, 131 from Internal Medicine (response rate 99%) and 119 from Neurology (response rate 70%). Of these participants, 225 respondents (44% males, 56% females) completed the form and reported 889 feedback moments. Students perceived feedback as more valuable when the feedback provider mentioned their weaknesses (β = 0.153, p < 0.01), compared their performance to a standard (β = 0.159, p < 0.01), explained or demonstrated the correct performance (β = 0.324, p < 0.001) and prepared an action plan with the student (β =0.496, p < 0.001). Appraisal of good performance did not influence the perceived learning value of feedback. No gender differences were found for perceived learning value. CONCLUSIONS In Indonesia, we could validate four out of the five characteristics for effective feedback. 
We argue that our findings relate to culture, in particular to the levels of individualism and power distance. The recognized characteristics of what constitutes effective feedback should be validated across cultures.
Affiliation(s)
- Yoyo Suhoyo, Department of Medical Education, Faculty of Medicine, Universitas Gadjah Mada, Gd. Prof. Drs. Med. R. Radiopoetro, Lt. 6 Sayap Barat, Jl. Farmako, Sekip Utara, Yogyakarta, 55281, Indonesia; Institute for Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- Elisabeth A. Van Hell, Institute for Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- Wouter Kerdijk, Department of Public and Individual Oral Health, Center for Dentistry and Oral Hygiene, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands; Education and Research Department, Hanze University of Applied Sciences, Groningen, The Netherlands
- Ova Emilia, Department of Obstetrics and Gynaecology, Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Johanna Schönrock-Adema, Institute for Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- Jan B. M. Kuks, Department of Public and Individual Oral Health, Center for Dentistry and Oral Hygiene, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands; Department of Neurology, University Medical Center Groningen, Groningen, The Netherlands
- Janke Cohen-Schotanus, Institute for Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
48
Palermo C, Gibson SJ, Dart J, Whelan K, Hay M. Programmatic Assessment of Competence in Dietetics: A New Frontier. J Acad Nutr Diet 2017; 117:175-179. [DOI: 10.1016/j.jand.2016.03.022]
49
Gormley GJ, Hodges BD, McNaughton N, Johnston JL. The show must go on? Patients, props and pedagogy in the theatre of the OSCE. Med Educ 2016; 50:1237-1240. [PMID: 27873404; DOI: 10.1111/medu.13016]
Abstract
According to Shakespeare, all the world's a stage, and all the men and women merely players. The objective structured clinical examination (OSCE), that most ubiquitous form of assessment in health professions education, offers us a particular instance of this maxim. Comprising at first glance a world of psychometric data, detailed checklists and global rating scales, the OSCE sets out to facilitate the assessment of a candidate's competence in a highly standardised and objective fashion. Despite this clear intention, OSCEs also offer a rich vein of (often unacknowledged) social and cultural processes. In this commentary, we draw on Goffman's dramaturgy metaphor and our experiences to undertake a wry examination of some of the least intended consequences of OSCEs. We take a satirical look at both the potential impact on patients and the pedagogical implications of this form of assessment. We now urge you to sit back, settle in and enjoy the show, as we raise the curtain on this one-night-only performance!
Affiliation(s)
- Gerard J Gormley, Centre for Medical Education, Queen's University Belfast, Belfast, UK
- Brian D Hodges, Department of Education, University Health Network, University of Toronto, Toronto, Ontario, Canada
- Nancy McNaughton, Standardised Patient Programme, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
50
Kiessling C, Bauer J, Gartmeier M, Iblher P, Karsten G, Kiesewetter J, Moeller GE, Wiesbeck A, Zupanic M, Fischer MR. Development and validation of a computer-based situational judgement test to assess medical students' communication skills in the field of shared decision making. Patient Educ Couns 2016; 99:1858-1864. [PMID: 27345253; DOI: 10.1016/j.pec.2016.06.006]
Abstract
OBJECTIVE To develop a computer-based test (CBT) measuring medical students' communication skills in the field of shared decision making (SDM) and to evaluate its construct validity. METHODS The CBT was developed in the context of an experimental study comparing three different trainings for SDM (including e-learning and/or role-play) and a control group. Assessment included a CBT (Part A: seven context-poor questions, Part B: 15 context-rich questions) and interviews with two simulated patients (SP-assessment). Cronbach's α was used to test the internal consistency. Correlations between CBT and SP-assessment were used to further evaluate construct validity of the CBT. RESULTS Seventy-two students took part in the study. Mean value for the CBT score was 72% of the total score. Cronbach's α was 0.582. After eliminating three items, Cronbach's α increased to 0.625. Correlations between the CBT and SP-assessment were low to moderate. The control group scored significantly lower than the training settings (p<0.001). CONCLUSION The CBT was reliable enough to test for group differences. For summative assessment purposes, considerably more questions would be needed. PRACTICE IMPLICATIONS We encourage teachers who particularly work with large student numbers to consider CBT as a feasible assessment method for cognitive aspects of communication skills.
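Cronbach's α, the internal-consistency statistic this abstract reports, can be computed as in the sketch below; the function and the score matrix are illustrative assumptions, not the study's items:

```python
# Illustrative Cronbach's alpha:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
# The response matrix is invented (rows = students, columns = test items).
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha from an n_respondents x k_items score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

scores = [[4, 5, 4, 4], [3, 3, 2, 3], [5, 5, 5, 4],
          [2, 3, 2, 2], [4, 4, 5, 5], [3, 2, 3, 3]]
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.3f}")
```

A fully consistent matrix (every item giving the same ranking) yields α = 1, which is a quick sanity check on the formula.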
Affiliation(s)
- Claudia Kiessling, Institut für Didaktik und Ausbildungsforschung in der Medizin am Klinikum der Universität München, Germany; Assessment Department, Brandenburg Medical School Theodor Fontane, Germany
- Johannes Bauer, TUM School of Education, Technische Universität München, Germany
- Martin Gartmeier, TUM School of Education, Technische Universität München, Germany
- Peter Iblher, Department of Anesthesiology, Universitätsklinikum Schleswig-Holstein, Germany
- Gudrun Karsten, Centre for Medical Education, Dean's Office of Education, Christian-Albrechts-Universität zu Kiel, Germany
- Jan Kiesewetter, Institut für Didaktik und Ausbildungsforschung in der Medizin am Klinikum der Universität München, Germany
- Grit E Moeller, Centre for Medical Education, Dean's Office of Education, Christian-Albrechts-Universität zu Kiel, Germany
- Anne Wiesbeck, TUM School of Education, Technische Universität München, Germany
- Michaela Zupanic, Office for Student Affairs, Fakultät für Gesundheit, Universität Witten/Herdecke, Germany
- Martin R Fischer, Institut für Didaktik und Ausbildungsforschung in der Medizin am Klinikum der Universität München, Germany