1. Muacevic A, Adler JR, Danforth D, Fine L, Foster J, Jacomino M, Johnson M, Keller B, Mendez P, Saunders JM, Scalese R, Schocken DM, Stalvey C, Stevens M, Suchak N, Syms S, Uchiyama E, Velazquez M. The Florida Clinical Skills Collaborative: A New Regional Consortium for the Assessment of Clinical Skills. Cureus 2022; 14:e31263. PMID: 36514606; PMCID: PMC9733824; DOI: 10.7759/cureus.31263.
Abstract
Discontinuation of the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) exam and the Comprehensive Osteopathic Medical Licensing Examination (COMLEX) Level 2 Performance Evaluation (2-PE) raised questions about the ability of medical schools to ensure the clinical skills competence of graduating students. In February 2021, representatives from all allopathic and osteopathic medical schools in Florida, United States, initiated a collaboration to address this critically important issue in the evolving landscape of medical education. A 5-point Likert scale survey of all members (n=18/20 individuals, representing 10/10 institutions) revealed that initial interest in joining the collaboration was high among both individuals (mean 4.78, SD 0.43) and institutions (mean 4.69, SD 0.48). Most individuals (mean 4.78, SD 0.55) and institutions (mean 4.53, SD 0.72) were highly satisfied with their decision to join. Members most commonly cited a "desire to establish a shared assessment in place of Step 2 CS/2-PE" as their most important reason for joining. Experienced benefits of membership were ranked as follows: 1) networking, 2) shared resources for curriculum implementation, 3) scholarship, and 4) work towards a shared assessment in place of Step 2 CS/2-PE. Challenges of membership were ranked as follows: 1) logistics such as scheduling and technology, 2) agreement on common goals, 3) total time commitment, and 4) large group size. Members cited the "administration of a joint assessment pilot" as the highest priority for the coming year. Florida has successfully launched a regional consortium for the assessment of clinical skills competency with high levels of member satisfaction, which may serve as a model for future regional consortia.
2. Narayanan A, Greco M, Janamian T, Fraser T, Archer J. Are there differences between SIMG surgeons and locally trained surgeons in Australia and New Zealand, as rated by colleagues and themselves? BMC Medical Education 2022; 22:516. PMID: 35778704; PMCID: PMC9250230; DOI: 10.1186/s12909-022-03560-y.
Abstract
BACKGROUND Representation of specialist international medical graduates (SIMGs) in specific specialties such as surgery can be expected to grow as doctor shortages are predicted in the context of additional care provision for aging populations and limited local supply. Many national medical boards and colleges provide pathways to medical registration and fellowship for SIMGs that may include examinations and short-term training. There is currently very little understanding of how SIMGs are perceived by colleagues and whether their performance is perceived to be comparable to that of locally trained medical specialists. It is also not known how SIMGs perceive their own capabilities in comparison to local specialists. The aim of this study was to explore the relationships between colleague feedback and self-evaluation in the specialist area of surgery, to identify possible methods for enhancing registration and follow-up training within the jurisdiction of Australia and New Zealand. METHODS Feedback on 96 SIMG surgeons from 1728 colleagues and on 25 locally trained Fellow surgeons from 406 colleagues was collected, resulting in 2134 responses on 121 surgeons in total. Additionally, 98 SIMGs and 25 Fellows provided self-evaluation scores (123 in total). Questionnaire and data reliability were calculated before analysis of variance, principal component analysis and network analysis were performed to identify differences between colleague evaluations and self-evaluations by surgeon type. RESULTS Colleagues rated both SIMGs and Fellows in the 'very good' to 'excellent' range. Fellows received slightly higher average scores than SIMGs, a small but statistically significant difference, especially in areas dealing with medical skills and expertise. However, SIMGs received higher scores where there was motivation to demonstrate working well with colleagues. Colleagues rated SIMGs along a single dimension but rated Fellows along three, identifiable as clinical management skills, inter-personal communication skills and self-management skills. On self-evaluation, both SIMGs and Fellows gave themselves significantly lower average scores than their colleagues gave them, with SIMGs rating themselves significantly higher than Fellows rated themselves. CONCLUSIONS Colleagues rate SIMGs and Fellows highly. The results of this study indicate that SIMGs tend to self-assess more highly but, according to colleagues, do not display the same level of differentiation between clinical management, inter-personal and self-management skills. Further research is required to confirm these provisional findings and to identify possible reasons for the lack of differentiation, if it exists. Depending on the outcome, support mechanisms can be explored that may bring SIMG performance closer to that of locally trained graduates of Australia and New Zealand across these three dimensions.
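The one-dimension-versus-three finding rests on principal component analysis of the colleague rating items. Below is a minimal sketch of that kind of analysis in Python; the synthetic data stand in for the survey responses, and the item count, latent structure and Kaiser retention rule are illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch (not the authors' code): estimating how many
# dimensions underlie a set of colleague ratings using PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=1)

# Synthetic stand-in for 406 colleague responses across 12 rating items,
# generated from three latent factors to mimic the structure reported
# for Fellows (clinical management, inter-personal communication,
# self-management).
latent = rng.normal(size=(406, 3))
loadings = rng.uniform(0.5, 1.0, size=(3, 12))
ratings = latent @ loadings + rng.normal(scale=0.5, size=(406, 12))

# Standardize items so that PCA operates on the correlation matrix.
standardized = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)

pca = PCA().fit(standardized)

# Kaiser criterion: retain components with eigenvalue > 1.
n_dimensions = int((pca.explained_variance_ > 1.0).sum())
print(f"retained dimensions: {n_dimensions}")  # three, by construction
print("explained variance ratios:", np.round(pca.explained_variance_ratio_[:4], 2))
```

Running the same decomposition on ratings of SIMGs would, on the paper's account, retain a single component instead of three.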
Affiliation(s)
- Ajit Narayanan
- Auckland University of Technology, Auckland, New Zealand
- Michael Greco
- School of Medicine, Griffith University, Brisbane, QLD, Australia
- CFEP Surveys, Everton Park, QLD, Australia
- Tina Janamian
- CFEP Surveys, Everton Park, QLD, Australia
- School of Business, University of Queensland, St Lucia, QLD, Australia
- Education and Innovation, Australian General Practice Accreditation Limited (AGPAL), Brisbane, QLD, Australia
- Tamieka Fraser
- Australian General Practice Accreditation Limited (AGPAL), Brisbane, QLD, Australia
- Julian Archer
- School of Medicine and Dentistry, Griffith University, Brisbane, Australia
3. Price T, Lynn N, Coombes L, Roberts M, Gale T, de Bere SR, Archer J. The International Landscape of Medical Licensing Examinations: A Typology Derived From a Systematic Review. Int J Health Policy Manag 2018; 7:782-790. PMID: 30316226; PMCID: PMC6186476; DOI: 10.15171/ijhpm.2018.32.
Abstract
BACKGROUND National licensing examinations (NLEs) are large-scale examinations usually taken by medical doctors close to the point of graduation from medical school. Where NLEs are used, success is usually required to obtain a license for full practice. Approaches to national licensing, and the evidence that supports their use, vary significantly across the globe. This paper aims to develop a typology of NLEs, based on candidacy, to explore the implications of different examination types for workforce planning. METHODS The study comprised a systematic review of the published literature and medical licensing body websites, an electronic survey of all medical licensing bodies in highly developed nations, and a survey of medical regulators. RESULTS The evidence gleaned through this systematic review highlights four approaches to NLEs: where graduating medical students wishing to practice in their national jurisdiction must pass a national licensing exam before they are granted a license to practice; where all prospective doctors, whether from the national jurisdiction or international medical graduates, are required to pass a national licensing exam in order to practice within that jurisdiction; where international medical graduates are required to pass a licensing exam if their qualifications are not acknowledged to be comparable with those of students from the national jurisdiction; and where there are no NLEs in operation. This typology facilitates comparison across systems and highlights the implications of different licensing systems for workforce planning. CONCLUSION The issue of national licensing cannot be viewed in isolation from workforce planning; future research on the efficacy of national licensing systems to drive up standards should be integrated with research on the implications of such systems for the mobility of doctors to cross borders.
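Because the typology is defined by who must sit the examination, it can be expressed as a small data structure for comparing jurisdictions. A sketch in Python; the type names are paraphrases and the example assignments are illustrative, not classifications taken from the paper.

```python
# Minimal sketch: the four NLE candidacy types as an enum.
from enum import Enum

class NLEType(Enum):
    NATIONAL_GRADUATES = "graduating students in the jurisdiction must pass an NLE"
    ALL_CANDIDATES = "all prospective doctors, local and international, must pass an NLE"
    IMG_ONLY = "only IMGs without recognized comparable qualifications must pass an exam"
    NO_NLE = "no national licensing examination in operation"

# Illustrative assignments (assumptions for the demonstration):
jurisdictions = {
    "United States": NLEType.ALL_CANDIDATES,  # USMLE applies to local and international graduates
    "United Kingdom": NLEType.IMG_ONLY,       # PLAB is taken by IMGs only
}
for country, nle_type in jurisdictions.items():
    print(f"{country}: {nle_type.value}")
```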
Affiliation(s)
- Tristan Price
- Collaboration for the Advancement of Medical Education Research & Assessment (CAMERA), University of Plymouth, Plymouth, UK
- Peninsula Schools of Medicine & Dentistry, University of Plymouth, Plymouth, UK
- Nick Lynn
- Collaboration for the Advancement of Medical Education Research & Assessment (CAMERA), University of Plymouth, Plymouth, UK
- Peninsula Schools of Medicine & Dentistry, University of Plymouth, Plymouth, UK
- Lee Coombes
- School of Medicine, Cardiff University, Wales, UK
- Martin Roberts
- Collaboration for the Advancement of Medical Education Research & Assessment (CAMERA), University of Plymouth, Plymouth, UK
- Peninsula Schools of Medicine & Dentistry, University of Plymouth, Plymouth, UK
- Tom Gale
- Collaboration for the Advancement of Medical Education Research & Assessment (CAMERA), University of Plymouth, Plymouth, UK
- Peninsula Schools of Medicine & Dentistry, University of Plymouth, Plymouth, UK
- Sam Regan de Bere
- Collaboration for the Advancement of Medical Education Research & Assessment (CAMERA), University of Plymouth, Plymouth, UK
- Peninsula Schools of Medicine & Dentistry, University of Plymouth, Plymouth, UK
- Julian Archer
- Collaboration for the Advancement of Medical Education Research & Assessment (CAMERA), University of Plymouth, Plymouth, UK
- Peninsula Schools of Medicine & Dentistry, University of Plymouth, Plymouth, UK
4. Archer J, Lynn N, Coombes L, Roberts M, Gale T, Price T, Regan de Bere S. The impact of large scale licensing examinations in highly developed countries: a systematic review. BMC Medical Education 2016; 16:212. PMID: 27543269; PMCID: PMC4992286; DOI: 10.1186/s12909-016-0729-7.
Abstract
BACKGROUND To investigate the existing evidence base for the validity of large-scale licensing examinations, including evidence of their impact. METHODS A systematic review against a validity framework, exploring Embase (Ovid Medline), Medline (EBSCO), PubMed, Wiley Online, ScienceDirect and PsycINFO from 2005 to April 2015. Papers were included when they discussed national or large regional (state-level) examinations for clinical professionals, linked to examinations in early careers or near the point of graduation, where success was required to subsequently be able to practice. Using a standardized data extraction form, two independent reviewers extracted study characteristics, with the rest of the team resolving any disagreement. The validity framework developed by the American Educational Research Association, American Psychological Association and National Council on Measurement in Education was used to evaluate each paper's evidence supporting or refuting the validity of national licensing examinations. RESULTS Twenty-four published articles provided evidence of validity across the five domains of the validity framework. Most papers (n = 22) provided evidence of national licensing examinations' relationships to other variables and their consequential validity. Overall, there was evidence that those who do well on earlier or subsequent examinations also do well on national licensing examinations. There is a correlation between NLE performance and some patient outcomes and rates of complaints, but no causal link has been established. CONCLUSIONS The debate around licensure examinations is strong on opinion but weak on validity evidence. This is especially true of the wider claims that licensure examinations improve patient safety and practitioner competence.
Affiliation(s)
- Julian Archer
- Collaboration for the Advancement of Medical Education Research and Assessment, Plymouth University Peninsula Schools of Medicine & Dentistry, Plymouth, Devon, UK
- Nick Lynn
- Collaboration for the Advancement of Medical Education Research and Assessment, Plymouth University Peninsula Schools of Medicine & Dentistry, Plymouth, Devon, UK
- Lee Coombes
- Centre for Medical Education, Cardiff University School of Medicine, Heath Park, Cardiff, UK
- Martin Roberts
- Collaboration for the Advancement of Medical Education Research and Assessment, Plymouth University Peninsula Schools of Medicine & Dentistry, Plymouth, Devon, UK
- Tom Gale
- Collaboration for the Advancement of Medical Education Research and Assessment, Plymouth University Peninsula Schools of Medicine & Dentistry, Plymouth, Devon, UK
- Tristan Price
- Collaboration for the Advancement of Medical Education Research and Assessment, Plymouth University Peninsula Schools of Medicine & Dentistry, Plymouth, Devon, UK
- Sam Regan de Bere
- Collaboration for the Advancement of Medical Education Research and Assessment, Plymouth University Peninsula Schools of Medicine & Dentistry, Plymouth, Devon, UK
5. van Vught AJAH, Hettinga AM, Denessen EJPG, Gerhardus MJT, Bouwmans GAM, van den Brink GTWJ, Postma CT. Analysis of the level of general clinical skills of physician assistant students using an objective structured clinical examination. J Eval Clin Pract 2015; 21:971-975. PMID: 26376735; DOI: 10.1111/jep.12418.
Abstract
RATIONALE, AIMS AND OBJECTIVES The physician assistant (PA) is trained to perform clinical tasks traditionally performed by medical doctors (MDs). Previous research showed no difference in the level of clinical skills of PAs compared with MDs within a specific niche, that is, the specialty in which they are employed. However, MDs as well as PAs working within a specialty have to be able to recognize medical problems across the full scope of medicine. The objective was to examine PA students' level of general clinical skills across the breadth of clinical cases. METHOD A cross-sectional study was conducted. PA students and recently graduated MDs in the Netherlands were observed on their clinical skills by means of an objective structured clinical examination comprising five stations with common medical cases. The levels of mastery of history taking, physical examination, communication and clinical reasoning among PA students and MDs were described as means and standard deviations. Cohen's d was used to present effect sizes. RESULTS PA students and MDs scored about equally on history taking (PA 5.8 ± 0.8 vs. MD 5.7 ± 0.7), physical examination (PA 4.8 ± 1.3 vs. MD 5.4 ± 0.8) and communication (PA 8.2 ± 0.8 vs. MD 8.6 ± 0.5) across the full scope of medicine. On the quality of the report, including the patient management plan, PA students scored a mean of 6.0 ± 0.6 and MDs 6.8 ± 0.6. CONCLUSIONS In this setting in the Netherlands, PA students and MDs scored about equally in the appraisal of common cases in medical practice. The slightly lower scores of PA students on clinical reasoning across the full scope of clinical care may warrant attention from medical teams working with PAs and from PA training programmes.
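The effect sizes behind these comparisons follow directly from the reported means and standard deviations. A minimal sketch of the calculation; the pooled standard deviation assumes equal group sizes, which the abstract does not state.

```python
# Cohen's d from summary statistics, assuming equal group sizes.
import math

def cohens_d(mean_1: float, sd_1: float, mean_2: float, sd_2: float) -> float:
    """Standardized mean difference with a pooled SD for equal-sized groups."""
    pooled_sd = math.sqrt((sd_1**2 + sd_2**2) / 2)
    return (mean_1 - mean_2) / pooled_sd

# Quality of the report: PA students 6.0 ± 0.6 vs. MDs 6.8 ± 0.6
print(f"report quality: d = {cohens_d(6.0, 0.6, 6.8, 0.6):.2f}")  # -1.33, a large effect
# Physical examination: PA students 4.8 ± 1.3 vs. MDs 5.4 ± 0.8
print(f"physical exam:  d = {cohens_d(4.8, 1.3, 5.4, 0.8):.2f}")  # -0.56, a medium effect
```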
Affiliation(s)
- Anneke J A H van Vught
- Faculty of Health and Social Studies, HAN University of Applied Sciences, Nijmegen, The Netherlands
- Agatha M Hettinga
- Institute for (Bio) Medical Education, Radboud University Medical Center, Nijmegen, The Netherlands
- Eddie J P G Denessen
- Behavioural Science Institute, Radboud University Nijmegen, Nijmegen, The Netherlands
- Martin J T Gerhardus
- Faculty of Health and Social Studies, HAN University of Applied Sciences, Nijmegen, The Netherlands
- Department of Primary and Community Care, Radboud University Medical Center, Nijmegen, The Netherlands
- Geert A M Bouwmans
- Institute for (Bio) Medical Education, Radboud University Medical Center, Nijmegen, The Netherlands
- Cornelis T Postma
- Institute for (Bio) Medical Education, Radboud University Medical Center, Nijmegen, The Netherlands
- Department of Internal Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
6. Bouwmans GAM, Denessen E, Hettinga AM, Michels C, Postma CT. Reliability and validity of an extended clinical examination. Medical Teacher 2015; 37:1072-1077. PMID: 25683172; DOI: 10.3109/0142159X.2015.1009423.
Abstract
INTRODUCTION An extended clinical examination (ECE) was administered to 85 final-year medical students at the Radboud University Medical Centre in the Netherlands. The aim of the study was to determine the psychometric quality and the suitability of the ECE as a measurement tool for assessing clinical proficiency in eight separate clinical skills. METHODS Generalizability studies were conducted to determine the generalizability coefficient and the sources of variance of the ECE. An additional D-study was performed to estimate the generalizability coefficients for altering numbers of stations. RESULTS The largest sources of variance were skill difficulties (36.18%), the general error term (26.76%) and the rank ordering of skill difficulties across the stations (21.89%). The generalizability coefficient of the entire ECE was above the 0.70 lower bound (G = 0.74). D-studies showed that sufficient G coefficients could be obtained for seven out of eight separate skills if the ECE were lengthened from 8 to 14 stations. DISCUSSION The ECE proved to be a reliable clinical assessment that enables examinees to compose a clinical reasoning path through self-obtained data. The ECE can also be used as an assessment tool for separate clinical skills.
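In a D-study, reliability at hypothetical test lengths is projected from estimated variance components: G equals the person variance divided by the person variance plus the residual error averaged over stations. A minimal sketch of that projection; the variance components below are assumed for illustration and are not the paper's estimates.

```python
# D-study projection for a person x station design.
def g_coefficient(var_person: float, var_residual: float, n_stations: int) -> float:
    """G = person variance / (person variance + residual error / number of stations)."""
    return var_person / (var_person + var_residual / n_stations)

VAR_PERSON, VAR_RESIDUAL = 0.30, 1.20  # assumed variance components

for n_stations in (8, 10, 12, 14):
    g = g_coefficient(VAR_PERSON, VAR_RESIDUAL, n_stations)
    print(f"{n_stations:2d} stations: G = {g:.2f}")
```

With these assumed components, G rises from about 0.67 at 8 stations to about 0.78 at 14, mirroring the finding that lengthening the ECE lifts skill-level reliability above the 0.70 bound.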
Affiliation(s)
- G A M Bouwmans
- Radboud University Medical Centre, The Netherlands
- E Denessen
- Radboud University Nijmegen, The Netherlands
- A M Hettinga
- Radboud University Medical Centre, The Netherlands
- C Michels
- Radboud University Nijmegen, The Netherlands
- C T Postma
- Radboud University Medical Centre, The Netherlands
7. Hettinga AM, Denessen E, Postma CT. Checking the checklist: a content analysis of expert- and evidence-based case-specific checklist items. Medical Education 2010; 44:874-883. PMID: 20716097; DOI: 10.1111/j.1365-2923.2010.03721.x.
Abstract
OBJECTIVES Research on objective structured clinical examinations (OSCEs) is extensive. However, relatively little has been written on the development of case-specific checklists on history taking and physical examination. Background information on the development of these checklists is a key element of the assessment of their content validity. Usually, expert panels are involved in the development of checklists. The objective of this study is to compare expert-based items on OSCE checklists with evidence-based items identified in the literature. METHODS Evidence-based items covering both history taking and physical examination for specific clinical problems and diseases were identified in the literature. Items on nine expert-based checklists for OSCE examination stations were evaluated by comparing them with items identified in the literature. The data were grouped into three categories: (i) expert-based items; (ii) evidence-based items, and (iii) evidence-based items with a specific measure of their relevance. RESULTS Out of 227 expert-based items, 58 (26%) were not found in the literature. Of 388 evidence-based items found in the literature, 219 (56%) were not included in the expert-based checklists. Of these 219 items, 82 (37%) had a specific measure of importance, such as an odds ratio for a diagnosis, making that diagnosis more or less probable. CONCLUSIONS Expert-based, case-specific checklist items developed for OSCE stations do not coincide with evidence-based items identified in the literature. Further research is needed to ascertain what this inconsistency means for test validity.
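The reported percentages are simple overlap ratios between the expert-based and evidence-based item sets. A quick check of that arithmetic, using only the counts given in the abstract:

```python
# Overlap arithmetic from the abstract's counts (the item lists themselves
# are not reproduced here).
expert_items = 227            # items on the expert-based checklists
expert_only = 58              # expert items not found in the literature
evidence_items = 388          # evidence-based items found in the literature
evidence_only = 219           # evidence items missing from the checklists
evidence_only_weighted = 82   # missing items with a specific measure, e.g. an odds ratio

print(f"expert items without literature support: {expert_only / expert_items:.0%}")        # 26%
print(f"evidence items missing from checklists: {evidence_only / evidence_items:.0%}")     # 56%
print(f"missing items with a relevance measure: {evidence_only_weighted / evidence_only:.0%}")  # 37%
```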
Affiliation(s)
- Agatha M Hettinga
- Radboud University Nijmegen Medical Centre, Academic Educational Institute, Nijmegen, the Netherlands
- Eddie Denessen
- Behavioural Science Institute, Radboud University Nijmegen, Nijmegen, the Netherlands
- Cornelis T Postma
- Department of General Internal Medicine and Academic Educational Institute, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands