1
Hays RB, Wilkinson T, Green-Thompson L, McCrorie P, Bollela V, Nadarajah VD, Anderson MB, Norcini J, Samarasekera DD, Boursicot K, Malau-Aduli BS, Mandache ME, Nadkar AA. Managing assessment during curriculum change: Ottawa Consensus Statement. Medical Teacher 2024; 46:874-884. [PMID: 38766754] [DOI: 10.1080/0142159x.2024.2350522]
Abstract
Curriculum change is relatively frequent in health professional education. Formal, planned curriculum review must be conducted periodically to incorporate new knowledge and skills, changing teaching and learning methods or changing roles and expectations of graduates. Unplanned curriculum evolution arguably happens continually, usually taking the form of "minor" changes that in combination over time may produce a substantially different programme. However, reviewing assessment practices is less likely to be a major consideration during curriculum change, overlooking the potential for unintended consequences for learning. This includes potentially undermining or negating the impact of even well-designed and important curriculum changes. Changes to any component of the curriculum "ecosystem" - graduate outcomes, content, delivery or assessment of learning - should trigger an automatic review of the whole ecosystem to maintain constructive alignment. Consideration of potential impact on assessment is essential to support curriculum change. Powerful contextual drivers of a curriculum include national examinations and programme accreditation, so each assessment programme sits within its own external context. Internal drivers are also important, such as adoption of new learning technologies and learning preferences of students and faculty. Achieving optimal and sustainable outcomes from a curriculum review requires strong governance and support, stakeholder engagement, curriculum and assessment expertise and internal quality assurance processes. This consensus paper provides guidance on managing assessment during curriculum change, building on evidence and the contributions of previous consensus papers.
Affiliation(s)
- Richard B Hays
- College of Medicine and Dentistry, James Cook University, Townsville, Australia
- Tim Wilkinson
- Christchurch School of Medicine & Health Sciences, University of Otago, Christchurch, New Zealand
- Peter McCrorie
- Centre for Medical and Healthcare Education, St George's, University of London, London, United Kingdom of Great Britain and Northern Ireland
- Valdes Bollela
- Medical Education, Universidade Cidade de São Paulo, Sao Paulo, Brazil
- Bunmi S Malau-Aduli
- College of Medicine and Dentistry, James Cook University, Townsville, Australia
- School of Medicine and Public Health, The University of Newcastle College of Health Medicine and Wellbeing, New South Wales, Australia
- Azhar Adam Nadkar
- Department of Medicine and Health Sciences, Stellenbosch University, Cape Town, South Africa
2
de Laat JM, van der Horst-Schrivers AN, Appelman-Dijkstra NM, Bisschop PH, Dreijerink KM, Drent ML, van de Klauw MM, de Ranitz WL, Stades AM, Stikkelbroeck NM, Timmers HJ, ten Cate O. Assessment of Entrustable Professional Activities Among Dutch Endocrine Supervisors. Journal of CME 2024; 13:2360137. [PMID: 38831939] [PMCID: PMC11146265] [DOI: 10.1080/28338073.2024.2360137]
Abstract
Entrustable Professional Activities (EPAs) are an important tool to support individualisation of medical training in a competency-based setting and are increasingly implemented in clinical speciality training for endocrinologists. This study aims to assess inter-rater agreement and factors that potentially impact EPA scores. Five known factors that affect entrustment decisions in health professions training (capability, integrity, reliability, humility, agency) were used in this study. A case-vignette study was conducted using standardised written cases. Case vignettes (n = 6) on the topics thyroid disease, pituitary disease, adrenal disease, calcium and bone disorders, diabetes mellitus, and gonadal disorders were written by two endocrinologists and a medical education expert and assessed by endocrinologists experienced in the supervision of residents in training. The primary outcome was the inter-rater agreement of entrustment decisions for endocrine EPAs among raters. Secondary outcomes included the dichotomous inter-rater agreement (entrusted vs. non-entrusted) and an exploration of factors that impact decision-making. The study protocol was registered and approved by the Ethical Review Board of the Netherlands Association for Medical Education (NVMO-ERB # 2020.2.5). Nine endocrinologists from six different academic regions participated. Overall, the Fleiss kappa measure of agreement was 0.11 (95% CI: 0.03-0.22) for the EPA level and 0.24 (95% CI: 0.11-0.37) for the entrustment decision. Of the five features that impacted the entrustment decision, capability was ranked as the most important by a majority of raters (56%-67%) in every case. There is considerable discrepancy between the EPA levels assigned by different raters. These findings emphasise the need to base entrustment decisions on multiple observations, made by a team of supervisors and enriched with factors other than direct medical competence.
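For readers who want to see the agreement statistic in action, a multi-rater kappa of the kind reported above can be computed in a few lines of Python. This is an illustrative sketch only, not the study's code: the 6 x 9 matrix of EPA levels is invented, and a supervision level of 4 or higher is assumed to stand for an entrustment decision.

```python
# Illustrative sketch only (invented data, not the study's code):
# Fleiss' kappa for nine raters scoring six case vignettes, via statsmodels.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = case vignettes, columns = raters; values = EPA supervision level (1-5)
ratings = np.array([
    [3, 4, 3, 2, 3, 4, 3, 3, 2],
    [4, 4, 5, 4, 3, 4, 4, 5, 4],
    [2, 3, 2, 2, 4, 3, 2, 2, 3],
    [5, 4, 4, 5, 5, 4, 5, 4, 4],
    [3, 2, 3, 4, 3, 3, 2, 3, 3],
    [4, 5, 4, 4, 4, 5, 5, 4, 4],
])

# aggregate_raters turns per-rater levels into a subjects x categories count table
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa (EPA level): {fleiss_kappa(table):.2f}")

# Dichotomised agreement, assuming level >= 4 counts as "entrusted"
table_bin, _ = aggregate_raters((ratings >= 4).astype(int))
print(f"Fleiss' kappa (entrusted vs. non-entrusted): {fleiss_kappa(table_bin):.2f}")
```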
Affiliation(s)
- Joanne M. de Laat
- Department of Internal Medicine, Division of Endocrinology, Radboud University Medical Center, Nijmegen, The Netherlands
- Utrecht Center for Research and Development of Health Professions Education, University Medical Center Utrecht, Utrecht, The Netherlands
- Peter H. Bisschop
- Department of Endocrinology and Metabolism, Amsterdam UMC, Location Academic Medical Center, Amsterdam, The Netherlands
- Koen M.A. Dreijerink
- Department of Internal Medicine, Amsterdam UMC, Location VU University Medical Center, Amsterdam, The Netherlands
- Madeleine L. Drent
- Department of Internal Medicine, Amsterdam UMC, Location VU University Medical Center, Amsterdam, The Netherlands
- Melanie M. van de Klauw
- Department of Endocrinology, University Medical Center Groningen, Groningen, The Netherlands
- Wendela L. de Ranitz
- Department of Endocrinology, University Medical Center Utrecht, Utrecht, The Netherlands
- Aline M.E. Stades
- Department of Endocrinology, University Medical Center Utrecht, Utrecht, The Netherlands
- Nike M.M.L. Stikkelbroeck
- Department of Internal Medicine, Division of Endocrinology, Radboud University Medical Center, Nijmegen, The Netherlands
- Henri J.L.M. Timmers
- Department of Internal Medicine, Division of Endocrinology, Radboud University Medical Center, Nijmegen, The Netherlands
- Olle ten Cate
- Utrecht Center for Research and Development of Health Professions Education, University Medical Center Utrecht, Utrecht, The Netherlands
3
Hoffman KR, Swanson D, Lane S, Nickson C, Brand P, Ryan AT. The reliability of the College of Intensive Care Medicine of Australia and New Zealand "Hot Case" examination. BMC Medical Education 2024; 24:527. [PMID: 38734603] [PMCID: PMC11088756] [DOI: 10.1186/s12909-024-05516-w]
Abstract
BACKGROUND High stakes examinations used to credential trainees for independent specialist practice should be evaluated periodically to ensure defensible decisions are made. This study aims to quantify the College of Intensive Care Medicine of Australia and New Zealand (CICM) Hot Case reliability coefficient and evaluate contributions to variance from candidates, cases and examiners. METHODS This retrospective, de-identified analysis of CICM examination data used descriptive statistics and generalisability theory to evaluate the reliability of the Hot Case examination component. Decision studies were used to project generalisability coefficients for alternate examination designs. RESULTS Examination results from 2019 to 2022 included 592 Hot Cases, totalling 1184 individual examiner scores. The mean examiner Hot Case score was 5.17 (standard deviation 1.65). The correlation between candidates' two Hot Case scores was low (0.30). The overall reliability coefficient for the Hot Case component consisting of two cases observed by two separate pairs of examiners was 0.42. Sources of variance included candidate proficiency (25%), case difficulty and case specificity (63.4%), examiner stringency (3.5%) and other error (8.2%). To achieve a reliability coefficient of > 0.8 a candidate would need to perform 11 Hot Cases observed by two examiners. CONCLUSION The reliability coefficient for the Hot Case component of the CICM second part examination is below the generally accepted value for a high stakes examination. Modifications to case selection and introduction of a clear scoring rubric to mitigate the effects of variation in case difficulty may be helpful. Increasing the number of cases and overall assessment time appears to be the best way to increase the overall reliability. Further research is required to assess the combined reliability of the Hot Case and viva components.
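The decision-study projections above follow directly from the reported variance shares. The sketch below reproduces that arithmetic under the assumption that examiner stringency and residual error average out over all case-examiner observations; it is an illustration of the generalisability formula, not the authors' analysis code.

```python
# Sketch of a generalisability decision study using the variance shares
# reported above (candidate 25%, case 63.4%, examiner 3.5%, error 8.2%).
# Assumes examiner and residual variance are nested within cases.

def g_coefficient(n_cases: int, n_examiners: int,
                  v_candidate=25.0, v_case=63.4,
                  v_examiner=3.5, v_error=8.2) -> float:
    """Projected generalisability coefficient for a given exam design."""
    error_variance = (v_case / n_cases
                      + v_examiner / (n_cases * n_examiners)
                      + v_error / (n_cases * n_examiners))
    return v_candidate / (v_candidate + error_variance)

# Current design: 2 Hot Cases, each observed by a separate pair of examiners
print(f"2 cases x 2 examiners:  G = {g_coefficient(2, 2):.2f}")

# Projected design needed to exceed a reliability of 0.8
print(f"11 cases x 2 examiners: G = {g_coefficient(11, 2):.2f}")
```

With these assumptions the current design yields G ≈ 0.42 and the 11-case design yields G ≈ 0.80, matching the figures reported above.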
Affiliation(s)
- Kenneth R Hoffman
- Intensive Care Unit, The Alfred Hospital, Melbourne, Australia.
- Department of Epidemiology and Preventative Medicine, School of Public Health, Monash University, Melbourne, Australia.
- David Swanson
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Australia
- Stuart Lane
- Sydney Medical School, The University of Sydney, Sydney, Australia
- Chris Nickson
- Intensive Care Unit, The Alfred Hospital, Melbourne, Australia
- Department of Epidemiology and Preventative Medicine, School of Public Health, Monash University, Melbourne, Australia
- Paul Brand
- College of Intensive Care Medicine of Australia and New Zealand, Melbourne, Australia
- Anna T Ryan
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Australia
4
Rodriguez RA, Sparks MA, Conway PT, Gavhane A, Reddy S, Awdishu L, Waheed S, Davidson S, Adey DB, Lea JP, Lieske JC, McDonald FS. American Board of Internal Medicine Nephrology Procedure Requirements for Initial Certification: Time for a Change and Pursuing Consensus in the Nephrology Community. Am J Kidney Dis 2024:S0272-6386(24)00720-0. [PMID: 38640993] [DOI: 10.1053/j.ajkd.2024.03.014]
Abstract
In 1988, the American Board of Internal Medicine (ABIM) defined essential procedural skills in nephrology, and candidates for ABIM certification were required to present evidence of possessing the skills necessary for placement of temporary dialysis vascular access, hemodialysis, peritoneal dialysis, and percutaneous renal biopsy. In 1996, continuous renal replacement therapy was added to the list of nephrology requirements. These procedure requirements have not been modified since 1996, while the practice of nephrology has changed dramatically. In March 2021, the ABIM Nephrology Board embarked on a policy journey to revise the procedure requirements for nephrology certification. With the guidance of nephrology diplomates, training program directors, professional and patient organizations, and other stakeholders, the ABIM Nephrology Board revised the procedure requirements to reflect current practice and national priorities. The approved changes include the Opportunity to Train standard for placement of temporary dialysis catheters, percutaneous kidney biopsies, and home hemodialysis, which better reflects the current state of training in most training programs; the new requirements for home dialysis therapies training align with the national priority to address the underuse of home dialysis therapies. This perspective details the ABIM process for considering changes to the certification procedure requirements and how ABIM collaborated with the larger nephrology community in considering revisions and additions to these requirements.
Affiliation(s)
- Rudolph A Rodriguez
- Department of Medicine, University of Washington, Seattle, Washington; Hospital and Specialty Medicine, VA Puget Sound Health Care System, Seattle, Washington.
- Matthew A Sparks
- Division of Nephrology, Department of Medicine, School of Medicine, Duke University, Durham, North Carolina; Renal Section, Durham VA Health Care System, Durham, North Carolina
- Paul T Conway
- American Association of Kidney Patients, Tampa, Florida
- Anamika Gavhane
- American Board of Internal Medicine, Philadelphia, Pennsylvania
- Siddharta Reddy
- American Board of Internal Medicine, Philadelphia, Pennsylvania
- Linda Awdishu
- Division of Clinical Pharmacy, San Diego Skaggs School of Pharmacy and Pharmaceutical Sciences, University of California, La Jolla, California
- Sana Waheed
- Department of Medicine, Renal Division, School of Medicine, Emory University, Atlanta, Georgia
- Sandra Davidson
- American Board of Internal Medicine, Philadelphia, Pennsylvania
- Deborah B Adey
- Department of Medicine, Kidney Transplant Service, Division of Nephrology, University of California, San Francisco, California
- Janice P Lea
- Department of Medicine, Renal Division, School of Medicine, Emory University, Atlanta, Georgia
- John C Lieske
- Division of Nephrology and Hypertension, Mayo Clinic, Rochester, Minnesota
- Furman S McDonald
- American Board of Internal Medicine, Philadelphia, Pennsylvania; Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; College of Medicine and Science, Mayo Clinic, Rochester, Minnesota
5
Sidhu NS, Fleming S. Re-examining single-moment-in-time high-stakes examinations in specialist training: A critical narrative review. Medical Teacher 2024; 46:528-536. [PMID: 37740944] [DOI: 10.1080/0142159x.2023.2260081]
Abstract
In this critical narrative review, we challenge the belief that single-moment-in-time high-stakes examinations (SMITHSEx) are an essential component of contemporary specialist training. We explore the arguments both for and against SMITHSEx, examine potential alternatives, and discuss the barriers to change. SMITHSEx are viewed as the "gold standard" assessment of competence but focus excessively on knowledge assessment rather than capturing essential competencies required for safe and competent workplace performance. Contrary to popular belief, regulatory bodies do not mandate SMITHSEx in specialist training. Though acting as significant drivers of learning and professional identity formation, these attributes are not exclusive to SMITHSEx. Skills such as crisis management, procedural skills, professionalism, communication, collaboration, lifelong learning, reflection on practice, and judgement are often overlooked by SMITHSEx. Their inherent design raises questions about the validity and objectivity of SMITHSEx as a measure of workplace competence. They have a detrimental impact on trainee well-being, contributing to burnout and differential attainment. Alternatives to SMITHSEx include continuous low-stakes assessments throughout training, ongoing evaluation of competence in the workplace, and competency-based medical education (CBME) concepts. These aim to provide a more comprehensive and context-specific assessment of trainees' competence while also improving trainee welfare. Specialist training colleges should evolve from exam providers to holistic education sources. Assessments should emphasise essential practical knowledge over trivia, align with clinical practice, aid learning, and be part of a diverse toolkit. Eliminating SMITHSEx from specialist training will foster a competency-based approach, benefiting future medical professionals' well-being and success.
Affiliation(s)
- Navdeep S Sidhu
- Department of Anaesthesiology, School of Medicine, University of Auckland, Auckland, New Zealand
- Department of Anaesthesia and Perioperative Medicine, North Shore Hospital, Auckland, New Zealand
- Simon Fleming
- Department of Hand Surgery, Royal North Shore Hospital, Sydney, New South Wales, Australia
6
Ras T, Stander Jenkins L, Lazarus C, van Rensburg JJ, Cooke R, Senkubuge F, N Dlova A, Singaram V, Daitz E, Buch E, Green-Thompson L, Burch V. "We just don't have the resources": Supervisor perspectives on introducing workplace-based assessments into medical specialist training in South Africa. BMC Medical Education 2023; 23:832. [PMID: 37932732] [PMCID: PMC10629100] [DOI: 10.1186/s12909-023-04840-x]
Abstract
BACKGROUND South Africa (SA) is on the brink of implementing workplace-based assessments (WBA) in all medical specialist training programmes in the country. Although competency-based medical education (CBME) has been in place for about two decades, WBA offers new and interesting challenges. The literature indicates that WBA has resource, regulatory, educational and social complexities. Implementing WBA would therefore require a careful approach to this complex challenge. To date, insufficient exploration of WBA practices, experiences, perceptions, and aspirations in healthcare has been undertaken in South Africa or Africa. The aim of this study was to identify factors that could impact WBA implementation from the perspectives of medical specialist educators. The outcomes reported here are themes derived from potential barriers and enablers to WBA implementation in the SA context. METHODS This paper reports on the qualitative data generated from a mixed methods study that employed a parallel convergent design, utilising a self-administered online questionnaire to collect data from participants. Data was analysed thematically and inductively. RESULTS The themes that emerged were: structural readiness for WBA; staff capacity to implement WBA; quality assurance; and the social dynamics of WBA. CONCLUSIONS Participants demonstrated impressive levels of insight into their respective working environments, producing an extensive list of barriers and enablers. Despite significant structural and social barriers, this cohort perceives the impending implementation of WBA to be a positive development in registrar training in South Africa. We make recommendations for future research and for medical specialist educational leaders in SA.
Affiliation(s)
- Tasleem Ras
- University of Cape Town, Cape Town, South Africa.
- Richard Cooke
- Witwatersrand University, Johannesburg, South Africa
- Emma Daitz
- University of Cape Town, Cape Town, South Africa
- Eric Buch
- Colleges of Medicine of South Africa, Johannesburg, South Africa
- Lionel Green-Thompson
- University of Cape Town & South African Committee Of Medical Deans, Cape Town, South Africa
- Vanessa Burch
- Colleges of Medicine of South Africa, Johannesburg, South Africa
7
Holmboe ES, Osman NY, Murphy CM, Kogan JR. The Urgency of Now: Rethinking and Improving Assessment Practices in Medical Education Programs. Academic Medicine 2023; 98:S37-S49. [PMID: 37071705] [DOI: 10.1097/acm.0000000000005251]
Abstract
Assessment is essential to professional development. Assessment provides the information needed to give feedback, support coaching and the creation of individualized learning plans, inform progress decisions, determine appropriate supervision levels, and, most importantly, help ensure patients and families receive high-quality, safe care in the training environment. While the introduction of competency-based medical education has catalyzed advances in assessment, much work remains to be done. First, becoming a physician (or other health professional) is primarily a developmental process, and assessment programs must be designed using a developmental and growth mindset. Second, medical education programs must have integrated programs of assessment that address the interconnected domains of implicit, explicit and structural bias. Third, improving programs of assessment will require a systems-thinking approach. In this paper, the authors first address these overarching issues as key principles that must be embraced so that training programs may optimize assessment to ensure all learners achieve desired medical education outcomes. The authors then explore specific needs in assessment and provide suggestions to improve assessment practices. This paper is by no means inclusive of all medical education assessment challenges or possible solutions. However, there is a wealth of current assessment research and practice that medical education programs can use to improve educational outcomes and help reduce the harmful effects of bias. The authors' goal is to help improve and guide innovation in assessment by catalyzing further conversations.
Affiliation(s)
- Eric S Holmboe
- E.S. Holmboe is chief, Research, Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
- Nora Y Osman
- N.Y. Osman is associate professor of medicine, Harvard Medical School, and director of undergraduate medical education, Brigham and Women's Hospital Department of Medicine, Boston, Massachusetts; ORCID: https://orcid.org/0000-0003-3542-1262
- Christina M Murphy
- C.M. Murphy is a fourth-year medical student and president, Medical Student Government at Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0003-3966-5264
- Jennifer R Kogan
- J.R. Kogan is associate dean, Student Success and Professional Development, and professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
8
Westein MPD, Koster AS, Daelmans HEM, Bouvy ML, Kusurkar RA. How progress evaluations are used in postgraduate education with longitudinal supervisor-trainee relationships: a mixed method study. Advances in Health Sciences Education 2023; 28:205-222. [PMID: 36094680] [PMCID: PMC9992254] [DOI: 10.1007/s10459-022-10153-3]
Abstract
The combination of measuring performance and giving feedback creates tension between formative and summative purposes of progress evaluations and can be challenging for supervisors. There are conflicting perspectives and evidence on the effects supervisor-trainee relationships have on assessing performance. The aim of this study was to learn how progress evaluations are used in postgraduate education with longitudinal supervisor-trainee relationships. Progress evaluations in a two-year community-pharmacy specialization program were studied with a mixed-method approach. An adapted version of the Canadian Medical Education Directives for Specialists (CanMEDS) framework was used. The validity of the performance evaluation scores of 342 trainees was analyzed using repeated measures ANOVA. Semi-structured interviews were held with fifteen supervisors to investigate their response processes, the utility of the progress evaluations, and the influence of supervisor-trainee relationships. Time and CanMEDS roles affected the three-monthly progress evaluation scores. Interviews revealed that supervisors varied in their response processes. They were more committed to stimulating development than to scoring actual performance. Progress evaluations were utilized to discuss and give feedback on trainee development and to add structure to the learning process. A positive supervisor-trainee relationship was seen as the foundation for feedback, and supervisors preferred the roles of educator, mentor, and coach over the role of assessor. We found that progress evaluations are a good method for directing feedback in longitudinal supervisor-trainee relationships. The reliability of scoring performance was low. We recommend that progress evaluations be independent of formal assessments in order to minimize role conflicts for supervisors.
Affiliation(s)
- Marnix P D Westein
- Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584, CG, Utrecht, The Netherlands.
- Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, The Netherlands.
- The Royal Dutch Pharmacists Association (KNMP), The Hague, The Netherlands.
- A S Koster
- Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584, CG, Utrecht, The Netherlands
- H E M Daelmans
- Programme Director Master of Medicine, Faculty of Medicine Vrije Universiteit, Amsterdam, The Netherlands
- M L Bouvy
- Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584, CG, Utrecht, The Netherlands
- R A Kusurkar
- Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, The Netherlands
9
Bi L, Jiang T. Science Popularization Interventions on Rational Medication in Patients with Hyperuricemia. Am J Health Behav 2023; 47:153-164. [PMID: 36945088] [DOI: 10.5993/ajhb.47.1.16]
Abstract
Objective: This research aimed to explore science popularization interventions in the rational medication treatment of hyperuricemia patients in China. The research model was designed to determine interventions from three dimensions of science popularization (empirical evidence, logical reasoning, and skeptical attitude). Methods: The data for this research were collected from hyperuricemia patients in China with a survey-based questionnaire. A partial least squares-structural equation modeling statistical method was used for data evaluation. Results: The research found that science popularization can strongly influence hyperuricemia patients' rational medication through empirical evidence, logical reasoning, and a skeptical attitude. Furthermore, the research asserted that more focus on the scientific knowledge of hyperuricemia patients can further improve their health. Conclusion: Theoretically, this research has wider implications. First, the research model was based on science popularization interventions, a novel contribution to the relationship with rational medication. Second, the practical implications of this study lie in science popularization interventions improving rational medication for hyperuricemia patients. Finally, this research suggested future directions for scholars to determine the impact of further variables and enhance the model of science popularization in relation to rational medication.
Affiliation(s)
- Lingling Bi
- Department of Pharmacy, Shandong Wendeng Orthopedic Yantai Hospital, Yantai, China
- Tingting Jiang
- Department of Pharmacy, Shandong Wendeng Orthopedic Yantai Hospital, Yantai, China
10
Lockyer J, Sargeant J. Multisource feedback: an overview of its use and application as a formative assessment. Canadian Medical Education Journal 2022; 13:30-35. [PMID: 36091727] [PMCID: PMC9441111] [DOI: 10.36834/cmej.73775]
Abstract
Multisource feedback (MSF), often termed 360-degree feedback, is a formative performance assessment in which data about an individual's observable workplace behaviors are collected through questionnaires from those interacting with the individual; data are aggregated for anonymity and confidentiality; the aggregated data, along with self-assessment if available, are provided to the individual; and the recipient meets with a trusted individual to review the data and develop an action plan. It is used along the continuum of medical education. This article provides an overview of MSF's utility, its evidence base and cautions.
Affiliation(s)
- Jocelyn Lockyer
- Department of Community Health Sciences, Cumming School of Medicine, Alberta, Canada
- Joan Sargeant
- Continuing Professional Development and Medical Education, Faculty of Medicine, Dalhousie University, Nova Scotia, Canada
11
Westein MPD, Koster AS, Daelmans HEM, Collares CF, Bouvy ML, Kusurkar RA. Validity evidence for summative performance evaluations in postgraduate community pharmacy education. Currents in Pharmacy Teaching & Learning 2022; 14:701-711. [PMID: 35809899] [DOI: 10.1016/j.cptl.2022.06.014]
Abstract
INTRODUCTION Workplace-based assessment of competencies is complex. In this study, the validity of summative performance evaluations (SPEs) made by supervisors in a two-year longitudinal supervisor-trainee relationship was investigated in a postgraduate community pharmacy specialization program in the Netherlands. The construct of competence was based on an adapted version of the 2005 Canadian Medical Education Directive for Specialists (CanMEDS) framework. METHODS The study had a case study design. Both quantitative and qualitative data were collected. The year 1 and year 2 SPE scores of 342 trainees were analyzed using confirmatory factor analysis and generalizability theory. Semi-structured interviews were held with 15 supervisors and the program director to analyze the inferences they made and the impact of SPE scores on the decision-making process. RESULTS A good model fit was found for the adapted CanMEDS based seven-factor construct. The reliability/precision of the SPE measurements could not be completely isolated, as every trainee was trained in one pharmacy and evaluated by one supervisor. Qualitative analysis revealed that supervisors varied in their standards for scoring competencies. Some supervisors were reluctant to fail trainees. The competency scores had little impact on the high-stakes decision made by the program director. CONCLUSIONS The adapted CanMEDS competency framework provided a valid structure to measure competence. The reliability/precision of SPE measurements could not be established and the SPE measurements provided limited input for the decision-making process. Indications of a shadow assessment system in the pharmacies need further investigation.
Affiliation(s)
- Marnix P D Westein
- Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands; Royal Dutch Pharmacists Association (KNMP); Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands.
- Andries S Koster
- Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands.
- Hester E M Daelmans
- Master's programme of Medicine, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands.
- Carlos F Collares
- Maastricht University Faculty of Health Medicine and Life Sciences, Maastricht, the Netherlands.
- Marcel L Bouvy
- Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands.
- Rashmi A Kusurkar
- Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands.
12
Yilmaz Y, Jurado Nunez A, Ariaeinejad A, Lee M, Sherbino J, Chan TM. Harnessing Natural Language Processing to Support Decisions Around Workplace-Based Assessment: Machine Learning Study of Competency-Based Medical Education. JMIR Medical Education 2022; 8:e30537. [PMID: 35622398] [PMCID: PMC9187970] [DOI: 10.2196/30537]
Abstract
BACKGROUND Residents receive a numeric performance rating (eg, 1-7 scoring scale) along with narrative (ie, qualitative) feedback based on their performance in each workplace-based assessment (WBA). Aggregated qualitative data from WBA can be overwhelming to process and fairly adjudicate as part of a global decision about learner competence. Current approaches with qualitative data require a human rater to maintain attention and appropriately weigh various data inputs within the constraints of working memory before rendering a global judgment of performance. OBJECTIVE This study explores natural language processing (NLP) and machine learning (ML) applications for identifying trainees at risk using a large WBA narrative comment data set associated with numerical ratings. METHODS NLP was performed retrospectively on a complete data set of narrative comments (ie, text-based feedback to residents based on their performance on a task) derived from WBAs completed by faculty members from multiple hospitals associated with a single, large residency program at McMaster University, Canada. Narrative comments were vectorized using the bag-of-n-grams technique with 3 input types: unigrams, bigrams, and trigrams. Supervised ML models using linear regression were trained with the quantitative ratings, performed binary classification, and output a prediction of whether a resident fell into the category of at risk or not at risk. Sensitivity, specificity, and accuracy metrics are reported. RESULTS The database comprised 7199 unique direct observation assessments, containing both narrative comments and a rating between 3 and 7 in an imbalanced distribution (scores 3-5: 726 ratings; scores 6-7: 4871 ratings). A total of 141 unique raters from 5 different hospitals and 45 unique residents participated over the course of 5 academic years. When comparing the 3 different input types for diagnosing whether a trainee would be rated low (ie, 1-5) or high (ie, 6 or 7), our accuracy for trigrams was 87%, bigrams 86%, and unigrams 82%. We also found that all 3 input types had better prediction accuracy when using a bimodal cut (eg, lower or higher) compared with predicting performance along the full 7-point rating scale (50%-52%). CONCLUSIONS The ML models can accurately identify underperforming residents via narrative comments provided for WBAs. The words generated in WBAs can be a worthy data set to augment human decisions for educators tasked with processing large volumes of narrative assessments.
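A minimal sketch of the pipeline described above, using scikit-learn; it is not the authors' code, and the comments and ratings below are invented placeholders. A linear model is fitted on the 1-7 ratings and its predictions are thresholded at 5 to yield the binary at-risk classification, with one model per n-gram input type.

```python
# Illustrative bag-of-n-grams sketch (invented data, not the study's code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder WBA narrative comments and their numeric ratings (1-7 scale)
comments = ["struggled to prioritise the resuscitation and needed prompting",
            "excellent handover with a clear and safe management plan"] * 50
ratings = [4, 7] * 50

X_train, X_test, y_train, y_test = train_test_split(
    comments, ratings, test_size=0.2, random_state=0)

for name, ngram_range in [("unigrams", (1, 1)),
                          ("bigrams", (2, 2)),
                          ("trigrams", (3, 3))]:
    vec = CountVectorizer(ngram_range=ngram_range)
    model = LinearRegression().fit(vec.fit_transform(X_train), y_train)
    # threshold the regression output: rating <= 5 means "at risk"
    pred_at_risk = model.predict(vec.transform(X_test)) <= 5
    true_at_risk = [r <= 5 for r in y_test]
    print(f"{name}: accuracy = {accuracy_score(true_at_risk, pred_at_risk):.2f}")
```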
Affiliation(s)
- Yusuf Yilmaz
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Department of Medical Education, Ege University, Izmir, Turkey
- Program for Faculty Development, Office of Continuing Professional Development, McMaster University, Hamilton, ON, Canada
- Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Alma Jurado Nunez
- Department of Medicine and Masters in eHealth Program, McMaster University, Hamilton, ON, Canada
- Ali Ariaeinejad
- Department of Medicine and Masters in eHealth Program, McMaster University, Hamilton, ON, Canada
- Mark Lee
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Jonathan Sherbino
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Education and Innovation, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Teresa M Chan
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Program for Faculty Development, Office of Continuing Professional Development, McMaster University, Hamilton, ON, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Education and Innovation, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
13
Shitu K, Adugna A, Kassie A, Handebo S. Application of Health Belief Model for the assessment of COVID-19 preventive behavior and its determinants among students: A structural equation modeling analysis. PLoS One 2022; 17:e0263568. [PMID: 35312697] [PMCID: PMC8936445] [DOI: 10.1371/journal.pone.0263568]
Abstract
BACKGROUND COVID-19 is a new pandemic that poses a threat to people globally. In Ethiopia, where classrooms are limited, students are at higher risk for COVID-19 unless they take consistent preventive actions. However, there is a lack of evidence in the study area regarding student compliance with COVID-19 preventive behavior (CPB) and its predictors. OBJECTIVE This study aimed to assess CPB and its predictors among students based on the perspective of the Health Belief Model (HBM). METHOD AND MATERIALS A school-based cross-sectional survey was conducted from November to December 2020 to evaluate the determinants of CPB among high school students using a self-administered structured questionnaire. A total of 370 participants were selected using stratified simple random sampling. Descriptive statistics were used to summarize the data, and partial least squares structural equation modeling (PLS-SEM) analyses were used to evaluate the measurement and structural models proposed by the HBM and to identify associations between HBM variables. A T-value of > 1.96 with 95% CI and a P-value of < 0.05 were used to declare the statistical significance of path coefficients. RESULT A total of 370 students participated, with a response rate of 92%. The median (interquartile range) age of the participants (51.9% females) was 18 (2) years. Only 97 (26.2%), 121 (32.7%), and 108 (29.2%) of the students had good practice in keeping physical distance, frequent hand washing, and facemask use, respectively. The HBM explained 43% of the variance in CPB. Perceived barriers (β = -0.15, p < 0.001) and self-efficacy (β = 0.51, p < 0.001) were significant predictors of student compliance with CPB. Moreover, the measurement model demonstrated that the instrument had acceptable reliability and validity. CONCLUSION AND RECOMMENDATIONS COVID-19 prevention practice is quite low among students. The HBM demonstrated adequate predictive utility in predicting CPB among students, where perceived barriers and self-efficacy emerged as significant predictors. According to the findings of this study, theory-based behavioral change interventions are urgently required for students to improve their prevention practice. Furthermore, these interventions will be effective if they are designed to remove barriers to CPB and improve students' self-efficacy in taking preventive measures.
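The significance rule quoted above (|T| > 1.96 for path coefficients) can be illustrated without a full PLS-SEM implementation. The sketch below deliberately substitutes a plain standardised OLS path with bootstrapped standard errors for the PLS structural model; all data are simulated, so the coefficients will not match the study's.

```python
# Bootstrap significance of structural paths, shown with OLS as a stand-in
# for PLS-SEM. Simulated data only; betas will not reproduce the study.
import numpy as np

rng = np.random.default_rng(1)
n = 370
self_efficacy = rng.normal(size=n)
barriers = rng.normal(size=n)
# simulated outcome: CPB rises with self-efficacy, falls with barriers
cpb = 0.5 * self_efficacy - 0.15 * barriers + rng.normal(scale=0.8, size=n)

X = np.column_stack([self_efficacy, barriers])

def path_coefficients(X, y):
    """Standardised coefficients from an OLS fit (proxy for PLS paths)."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    return np.linalg.lstsq(Xs, ys, rcond=None)[0]

beta = path_coefficients(X, cpb)
# bootstrap resampling to estimate standard errors of each path
boot = np.array([path_coefficients(X[idx], cpb[idx])
                 for idx in rng.integers(0, n, size=(5000, n))])
t_values = beta / boot.std(axis=0)

for name, b, t in zip(["self-efficacy", "barriers"], beta, t_values):
    print(f"{name}: beta = {b:.2f}, T = {t:.1f}, significant: {abs(t) > 1.96}")
```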
Affiliation(s)
- Kegnie Shitu
- Department of Health Education and Behavioral Sciences, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia
- Asmamaw Adugna
- Department of Health Education and Behavioral Sciences, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia
- Ayenew Kassie
- Department of Health Education and Behavioral Sciences, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia
- Simegnew Handebo
- School of Public Health, St. Paul's Hospital Millennium Medical College, Addis Ababa, Ethiopia
14
Nair B, Moonen-van Loon JW. Programmatic assessment – What are we waiting for? Archives of Medicine and Health Sciences 2022. [DOI: 10.4103/amhs.amhs_259_22]
15
Rivera-Sepulveda A, Isona M. Assessing Resident Diagnostic Skills Using a Modified Bronchiolitis Score. Pediatric Oncall Journal 2021; 18:11-16. [PMID: 33679039] [DOI: 10.7199/ped.oncall.2021.10]
Abstract
Background Resident milestones are objective instruments that assess the resident's growth, progression in knowledge, and clinical diagnostic reasoning, but they rely on the subjective appraisal of the supervising attending. Little is known about the use of standardized instruments that may complement the evaluation of resident diagnostic skills in the academic setting. Objectives To evaluate a modified bronchiolitis severity assessment tool by appraising the inter-rater variability and reliability between pediatric attendings and pediatric residents. Methods Cross-sectional study of children under 24 months of age who presented to a Community Hospital's Emergency Department with bronchiolitis between January-June 2014. A paired pediatric attending and resident evaluated each patient. Evaluation included age-based respiratory rate (RR), retractions, peripheral oxygen saturation (SpO2), and auscultation. Cohen's kappa (K) measured inter-rater agreement. Inter-rater reliability (IRR) was assessed using a one-way random, average-measures intra-class correlation (ICC) to evaluate the degree of consistency and magnitude of disagreement between raters. A value of >0.6 was considered substantial for kappa and good internal consistency for ICC. Results Twenty patients were evaluated. Analysis showed fair agreement for the presence of retractions (K=0.31), auscultation (K=0.33), and total score (K=0.3). The RR (ICC=0.97), SpO2 (ICC=1.0), auscultation (ICC=0.77), and total score (ICC=0.84) were scored similarly across both raters, indicating excellent IRR. Identification of retractions had the least agreement across all statistical analyses. Conclusion The use of a standardized instrument, in conjunction with a trained resident-teaching staff, can help identify deficiencies in clinical competencies among residents and facilitate the learning process for the identification of pertinent clinical findings.
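The two agreement statistics used above can be illustrated in Python. This sketch uses invented paired ratings: Cohen's kappa comes from scikit-learn, while the one-way random, average-measures ICC (often written ICC(1,k)) is computed from its ANOVA definition, (MSB - MSW) / MSB.

```python
# Illustrative sketch of attending-resident agreement (invented data).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
attending = rng.integers(0, 3, 20)                            # e.g. retraction severity 0-2
resident = np.clip(attending + rng.integers(-1, 2, 20), 0, 2) # correlated second rater
print(f"Cohen's kappa: {cohen_kappa_score(attending, resident):.2f}")

def icc_1k(scores: np.ndarray) -> float:
    """One-way random, average-measures ICC: (MSB - MSW) / MSB."""
    n, k = scores.shape
    grand = scores.mean()
    msb = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / msb

scores = np.column_stack([attending, resident]).astype(float)
print(f"ICC(1,k): {icc_1k(scores):.2f}")
```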
Affiliation(s)
- Andrea Rivera-Sepulveda
- Pediatrics, Emergency Medicine, Nemours Children's Hospital, Orlando, FL, United States.
- University of Puerto Rico Medical Sciences Campus, School of Health Professions and School of Medicine, San Juan, Puerto Rico
- Muguette Isona
- San Juan City Hospital, Emergency Department, San Juan, Puerto Rico
16
Boursicot K, Kemp S, Wilkinson T, Findyartini A, Canning C, Cilliers F, Fuller R. Performance assessment: Consensus statement and recommendations from the 2020 Ottawa Conference. Medical Teacher 2021; 43:58-67. [PMID: 33054524] [DOI: 10.1080/0142159x.2020.1830052]
Abstract
INTRODUCTION In 2011 the Consensus Statement on Performance Assessment was published in Medical Teacher. That paper was commissioned by AMEE (Association for Medical Education in Europe) as part of the series of Consensus Statements following the 2010 Ottawa Conference. In 2019, it was recommended that a working group be reconvened to review and consider developments in performance assessment since the 2011 publication. METHODS Following review of the original recommendations in the 2011 paper and shifts in the field across the past 10 years, the group identified areas of consensus and yet to be resolved issues for performance assessment. RESULTS AND DISCUSSION This paper addresses developments in performance assessment since 2011, reiterates relevant aspects of the 2011 paper, and summarises contemporary best practice recommendations for OSCEs and WBAs, fit-for-purpose methods for performance assessment in the health professions.
Affiliation(s)
- Katharine Boursicot
- Department of Assessment and Progression, Duke-National University of Singapore, Singapore, Singapore
- Sandra Kemp
- Curtin Medical School, Curtin University, Perth, Australia
- Tim Wilkinson
- Dean's Department, University of Otago, Christchurch, New Zealand
- Ardi Findyartini
- Department of Medical Education, Universitas Indonesia, Jakarta, Indonesia
- Claire Canning
- Department of Assessment and Progression, Duke-National University of Singapore, Singapore, Singapore
- Francois Cilliers
- Department of Health Sciences Education, University of Cape Town, Cape Town, South Africa
17
Schuwirth LWT, van der Vleuten CPM. A history of assessment in medical education. Advances in Health Sciences Education 2020; 25:1045-1056. [PMID: 33113056] [DOI: 10.1007/s10459-020-10003-0]
Abstract
The way the quality of assessment has been perceived and assured has changed considerably over the past five decades. Originally, assessment was mainly seen as a measurement problem with the aim of telling people apart - the competent from the not competent. Logically, reproducibility or reliability and construct validity were seen as necessary and sufficient for assessment quality, and the role of human judgement was minimised. Later, assessment moved back into the authentic workplace with various workplace-based assessment (WBA) methods. Although originally approached from the same measurement framework, WBA and other assessments gradually became assessment processes that included or embraced human judgement, based on good support and assessment expertise. Currently, assessment is treated as a whole-system problem in which competence is evaluated from an integrated rather than a reductionist perspective. Current research therefore focuses on how to support and improve human judgement, how to triangulate assessment information meaningfully, and how to construct fairness, credibility and defensibility from a systems perspective. But, given the rapid changes in society, education and healthcare, yet another evolution in our thinking about good assessment is likely to lurk around the corner.
Affiliation(s)
- Lambert W T Schuwirth
- FHMRI: Prideaux Research in Health Professions Education, College of Medicine and Public Health, Flinders University, Sturt Road, Bedford Park, South Australia, 5042, GPO Box 2100, Adelaide, SA, 5001, Australia.
- Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands.
- Cees P M van der Vleuten
- FHMRI: Prideaux Research in Health Professions Education, College of Medicine and Public Health, Flinders University, Sturt Road, Bedford Park, South Australia, 5042, GPO Box 2100, Adelaide, SA, 5001, Australia
- Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
18
Jafri L, Siddiqui I, Khan AH, Tariq M, Effendi MUN, Naseem A, Ahmed S, Ghani F, Alidina S, Shah N, Majid H. Fostering teaching-learning through workplace based assessment in postgraduate chemical pathology residency program using virtual learning environment. BMC Medical Education 2020; 20:383. [PMID: 33097037] [PMCID: PMC7582426] [DOI: 10.1186/s12909-020-02299-8]
Abstract
BACKGROUND The principle of workplace based assessment (WBA) is to assess trainees at work, with feedback integrated into the program simultaneously. A student-driven WBA model was introduced, and this teaching method was subsequently evaluated through feedback from the faculty as well as the postgraduate trainees (PGs) of a residency program. METHODS A descriptive multimethod study was conducted. A WBA program was designed for PGs in Chemical Pathology on Moodle, and the forms utilized were case-based discussion (CBD), direct observation of practical skills (DOPS) and evaluation of clinical events (ECE). Consenting assessors and PGs were trained on WBA through a workshop. Pretests and posttests to assess PGs' knowledge before and after WBA were conducted. Every time a WBA form was filled, the perceptions of PGs and assessors towards WBA, the time taken to conduct a single WBA, and feedback were recorded. Qualitative feedback from faculty and PGs on their perception of WBA was taken via interviews. WBA tools data and qualitative feedback were used to evaluate the acceptability and feasibility of the new tools. RESULTS Six eligible PGs and seventeen assessors participated in this study. A total of 79 CBDs (assessors n = 7 and PGs n = 6), 12 ECEs (assessors n = 6 and PGs n = 5), and 20 DOPS (assessors n = 6 and PGs n = 6) were documented. The PGs' average pretest score of 55.6% improved to 96.4% in the posttest (p value < 0.05). Scores of the annual assessment before and after implementation of WBA also showed significant improvement (p value 0.039). Overall mean time taken to evaluate PGs was 12.6 ± 9.9 min and feedback time was 9.2 ± 7.4 min. Mean WBA process satisfaction of assessors and PGs on a Likert scale of 1 to 10 was 8 ± 1 and 8.3 ± 0.8, respectively. CONCLUSION Both assessors and fellows were satisfied with the introduction and implementation of WBA. It gave the fellows the opportunity to interact with assessors more often and learn from their rich experience. Gain in knowledge of PGs was identified from the statistically significant improvement in PGs' assessment scores after WBA implementation.
Affiliation(s)
- Lena Jafri
- Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi, 74800, Pakistan.
- Imran Siddiqui
- Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi, 74800, Pakistan
- Aysha Habib Khan
- Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi, 74800, Pakistan
- Muhammed Tariq
- Department of Medicine, Aga Khan University, Karachi, Pakistan
- Muhammad Umer Naeem Effendi
- Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi, 74800, Pakistan
- Azra Naseem
- Blended & Digital Learning Network, Aga Khan University, Karachi, Pakistan
- Sibtain Ahmed
- Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi, 74800, Pakistan
- Farooq Ghani
- Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi, 74800, Pakistan
- Shahnila Alidina
- Department of Pathology and Laboratory Medicine, Aga Khan University, Karachi, Pakistan
- Nadir Shah
- eLearning Developer, Department of I.T. Academics and Computing, Aga Khan University, Karachi, Pakistan
- Hafsa Majid
- Section of Chemical Pathology, Department of Pathology and Laboratory Medicine, Aga Khan University, Stadium Road, Karachi, 74800, Pakistan
19
Nathoo NA, Sidhu R, Gingerich A. Educational Impact Drives Feasibility of Implementing Daily Assessment in the Workplace. Teaching and Learning in Medicine 2020; 32:389-398. [PMID: 32129088] [DOI: 10.1080/10401334.2020.1729162]
Abstract
Construct: Authors investigated the perspectives of stakeholders on feasibility elements of workplace-based assessments (WBA) with varying designs. Background: In the transition to competency-based medical education, WBA are taking a more prominent role in assessment programs. However, the increased demand for WBA leads to new challenges for implementing suitable WBA tools with published validity evidence, while also being feasible and useful in practice. Despite the availability of published WBA tools, implementation does not necessarily occur; a fuller understanding of the perspectives of stakeholders who are ultimately the end-users of these tools, as well as the system factors that either deter or support their use, could help to explain why evidence-based assessment tools may not be incorporated into residency programs. Approach: We examined the perspectives of two groups of stakeholders, surgical teachers and resident learners, during an assessment intervention that varied the assessment tools while keeping the assessment process constant. We chose diverse exemplars from published assessment tools that each represented a different response format: global rating scales, step-by-step surgical rubrics, and an entrustability scale. The primary purpose was to investigate how stakeholders are impacted by WBA tools with varying response formats to better understand their feasibility for assessment of cataract surgery. Secondarily, we were able to explore the culture of assessment in cataract surgery education including stakeholders' perceptions of WBA unrelated to assessment form design. Semi-structured interviews with teachers and a focus group with the residents enabled discussion of their perspectives on dimensions of the tools such as acceptability, demand, implementation, practicality, adaptation, and integration. Findings: Three themes summarize teachers' and residents' experiences with the assessment tools: (1) Feedback is the priority; (2) Forms informing coaching; and (3) Forcing the conversation. The tools helped to facilitate the feedback conversation by serving as a reminder to initiate the conversation, a framework to structure the conversation, and a memory aid for providing detailed feedback. Surgical teachers preferred the assessment tool with a design that best aligned with their approach to teaching and how they wanted to provide feedback. Orientation to the tools, combined with established remediation pathways, may help preceptors to better use assessment tools and improve their ability to give critical feedback. Conclusions: Feedback, more so than assessment, dominated the comments provided by both teachers and residents after using the various WBA tools. Our typical assessment design efforts focus on the creation or selection of a robust assessment tool according to good design and measurement principles, but the current findings would encourage us to also prioritize the coaching relationship and include efforts to design WBA tools to function as a mediator to augment teaching, learning, and feedback exchange within that relationship in the workplace.
Collapse
Affiliation(s)
- Nawaaz A Nathoo
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Centre for Health Education Scholarship, University of British Columbia, Vancouver, Canada
| | - Ravi Sidhu
- Centre for Health Education Scholarship, University of British Columbia, Vancouver, Canada
| | - Andrea Gingerich
- Centre for Health Education Scholarship, University of British Columbia, Vancouver, Canada
- Northern Medical Program, University of Northern British Columbia, Prince George, Canada
| |
Collapse
|
20
|
Oudkerk Pool A, Jaarsma ADC, Driessen EW, Govaerts MJB. Student perspectives on competency-based portfolios: Does a portfolio reflect their competence development? PERSPECTIVES ON MEDICAL EDUCATION 2020; 9:166-172. [PMID: 32274650 PMCID: PMC7283408 DOI: 10.1007/s40037-020-00571-7] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
INTRODUCTION Portfolio-based assessments require that learners' competence development is adequately reflected in portfolio documentation. This study explored how students select and document performance data in their portfolios and the extent to which they perceive these data to be representative of their competence development. METHODS Students uploaded performance data into a competency-based portfolio. During one clerkship period, twelve students also recorded an audio diary in which they reflected on experiences and feedback that they perceived to be indicators of their competence development. Afterwards, these students were interviewed to explore the extent to which the performance documentation in the portfolio corresponded with what they considered illustrative evidence of their development. The interviews were analyzed using thematic analysis. RESULTS Portfolios provide an accurate but fragmented picture of student development. Portfolio documentation was influenced by tensions between learning and assessment, student beliefs about the goal of portfolios, student performance evaluation strategies, the learning environment, and portfolio structure. DISCUSSION This study confirms the importance of taking student perceptions into account when implementing a competency-based portfolio. Students would benefit from coaching on how to select meaningful experiences and performance data for documentation in their portfolios. Flexibility in portfolio structure and requirements is essential to ensure an optimal fit between students' experienced competence development and portfolio content.
Collapse
Affiliation(s)
- Andrea Oudkerk Pool
- School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands.
- Department of Educational Development and Research, Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands.
| | - A Debbie C Jaarsma
- Center for Education Development and Research in Health Professions (CEDAR), Faculty of Medical Sciences, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Erik W Driessen
- Department of Educational Development and Research, Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
| | - Marjan J B Govaerts
- Department of Educational Development and Research, Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
| |
Collapse
|
21
|
Westein MP, de Vries H, Floor A, Koster AS, Buurma H. Development of a Postgraduate Community Pharmacist Specialization Program Using CanMEDS Competencies, and Entrustable Professional Activities. AMERICAN JOURNAL OF PHARMACEUTICAL EDUCATION 2019; 83:6863. [PMID: 31507284 PMCID: PMC6718509 DOI: 10.5688/ajpe6863] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2017] [Accepted: 02/20/2018] [Indexed: 05/09/2023]
Abstract
Objectives. To develop and implement a postgraduate, workplace-based curriculum for community pharmacy specialists in the Netherlands, conduct a thorough evaluation of the program, and revise any deficiencies found. Methods. The experiences of the Dutch Advisory Board for Postgraduate Curriculum Development for Medical Specialists were used as a guideline for the development of a competency-based postgraduate education program for community pharmacists. To ensure that community pharmacists achieved competence in 10 task areas and seven roles defined by the Canadian Medical Education Directions for Specialists (CanMEDS), a two-year workplace-based curriculum was built. A development path along four milestones was constructed using 40 entrustable professional activities (EPAs). The assessment program consisted of 155 workplace-based assessments, with the supervisor serving as the main assessor. Also, 360-degree feedback and 22 days of classroom courses were included in the curriculum. In 2014, the curriculum was evaluated by two focus groups and a review committee. Results. Eighty-two first-year trainees enrolled in the community pharmacy specialist program in 2012. That number increased to 130 trainees by 2016 (a 59% increase). In 2015, based on feedback from pharmacy supervisors, trainees, and other stakeholders, 22.5% of the EPAs were changed and the number of workplace-based assessments was reduced by 48.5%. Conclusion. Using design approaches from the medical field in the development of postgraduate workplace-based pharmacy education programs proved to be feasible and successful. How to address the concerns and challenges encountered in developing and maintaining competency-based postgraduate pharmacy education programs merits further research.
Collapse
Affiliation(s)
- Marnix P.D. Westein
- Royal Dutch Pharmacists Association (KNMP), Hague, Netherlands
- Utrecht Institute of Pharmaceutical Sciences, Utrecht University, Utrecht, Netherlands
| | - Harry de Vries
- HPC the Human Perspective in Consulting, Hague, Netherlands
| | - Annemieke Floor
- Royal Dutch Pharmacists Association (KNMP), Hague, Netherlands
- SIR Institute for Pharmacy Practice and Policy, Leiden, Netherlands
| | - Andries S. Koster
- Utrecht Institute of Pharmaceutical Sciences, Utrecht University, Utrecht, Netherlands
| | - Henk Buurma
- Royal Dutch Pharmacists Association (KNMP), Hague, Netherlands
- SIR Institute for Pharmacy Practice and Policy, Leiden, Netherlands
| |
Collapse
|
22
|
de Jong LH, Bok HGJ, Kremer WDJ, van der Vleuten CPM. Programmatic assessment: Can we provide evidence for saturation of information? MEDICAL TEACHER 2019; 41:678-682. [PMID: 30707848 DOI: 10.1080/0142159x.2018.1555369] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Purpose: According to the principles of programmatic assessment, a valid high-stakes assessment of students' performance should, among other things, be based on multiple data points, supposedly leading to saturation of information. Saturation of information is reached when an additional data point no longer adds important information for the assessor. In establishing saturation of information, institutions often set minimum requirements for the number of assessment data points to be included in the portfolio. Methods: In this study, we aimed to provide validity evidence for saturation of information by investigating the relationship between the number of data points exceeding the minimum requirements in a portfolio and the consensus between two independent assessors. Data were analyzed using a multiple logistic regression model. Results: The results showed no relation between the number of data points and the consensus. This suggests either that consensus is predicted by other factors only or, more likely, that assessors had already reached saturation of information. This study took a first step in investigating saturation of information; further research is necessary to gain in-depth insight into this matter in relation to the complex process of decision-making.
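As a concrete illustration of the kind of analysis described above, a minimal Python sketch (illustrative only; the data, variable names, and effect sizes are invented, not taken from the study): a binary consensus indicator is regressed on the number of data points a portfolio holds beyond the minimum requirement.

```python
# Minimal sketch of a logistic regression of assessor consensus on the
# number of portfolio data points above the minimum; all data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
extra_points = rng.poisson(4, size=200)        # data points above the minimum
consensus = rng.binomial(1, 0.8, size=200)     # 1 = the two assessors agreed

X = sm.add_constant(extra_points.astype(float))
fit = sm.Logit(consensus, X).fit(disp=False)
print(fit.params, fit.pvalues)
# A slope near zero with a large p-value would mirror the paper's finding
# that extra data points did not predict assessor consensus.
```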
Collapse
Affiliation(s)
- Lubberta H de Jong
- Faculty of Veterinary Medicine, Centre for Quality Improvement in Veterinary Education, Utrecht University, Utrecht, The Netherlands
| | - Harold G J Bok
- Faculty of Veterinary Medicine, Centre for Quality Improvement in Veterinary Education, Utrecht University, Utrecht, The Netherlands
| | - Wim D J Kremer
- Faculty of Veterinary Medicine, Centre for Quality Improvement in Veterinary Education, Utrecht University, Utrecht, The Netherlands
| | - Cees P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
| |
Collapse
|
23
|
Woolf K, Page M, Viney R. Assessing professional competence: a critical review of the Annual Review of Competence Progression. J R Soc Med 2019; 112:236-244. [PMID: 31124405 DOI: 10.1177/0141076819848113] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The Annual Review of Competence Progression is used to determine whether trainee doctors in the United Kingdom are safe and competent to progress to the next training stage. In this article we provide evidence to inform recommendations to enhance the validity of the summative and formative elements of the Annual Review of Competence Progression. The work was commissioned as part of a Health Education England review. We systematically searched the peer-reviewed and grey literature, synthesised findings with information from national, local and specialty-specific Annual Review of Competence Progression guidance, and critically evaluated the findings in the context of the literature on assessing competence in medical education. National guidance lacked detail, resulting in variability across locations and specialties and threatening validity and reliability. Trainees and trainers were concerned that the Annual Review of Competence Progression only reliably identifies the most poorly performing trainees. Feedback is not routinely provided, which can leave those with performance difficulties unsupported and high performers demotivated. Variability in the provision and quality of feedback can negatively affect learning. The Annual Review of Competence Progression functions as a high-stakes assessment, likely to have a significant impact on patient care. It should be subject to the same rigorous evaluation as other high-stakes assessments; there should be consistency in procedures across locations, specialties and grades; and all trainees should receive high-quality feedback.
Collapse
Affiliation(s)
- Katherine Woolf
- Research Department of Medical Education, UCL Medical School, Royal Free Hospital, London NW3 2PF, UK
| | - Michael Page
- Research Department of Medical Education, UCL Medical School, Royal Free Hospital, London NW3 2PF, UK
| | - Rowena Viney
- Research Department of Medical Education, UCL Medical School, Royal Free Hospital, London NW3 2PF, UK
| |
Collapse
|
24
|
Bonnard G, Cohen-Aubart F, Steichen O, Bourgarit A, Abad S, Ranque B, Pouchot J, Dossier A, Espitia-Thibault A, Jego P, Granel B, Launay D, Rivière E, Le Jeunne C, Mouthon L, Pottier P. [Reliability and validity of a workbook for assessment of professional competencies of internal medicine residents]. Rev Med Interne 2019; 40:419-426. [PMID: 30871866 DOI: 10.1016/j.revmed.2019.02.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Revised: 01/25/2019] [Accepted: 02/10/2019] [Indexed: 11/26/2022]
Abstract
INTRODUCTION Although several assessment tools for residents' professional skills based on direct observation in the workplace have been validated, they remain scarcely used in France. The objective of this study was to evaluate the reliability and validity of a workbook comprising several assessment forms for different components of professional competency. METHODS Three assessment forms were tested over a period of 6 months in a multicenter study involving 12 French internal medicine departments: the French version of the mini-CEX, an interpersonal skills assessment form (OD_CR) and a multisource feedback form (E_360). Reliability was assessed using the intraclass correlation coefficient (ICC) and Cronbach's alpha coefficient. Validity evidence was gathered by examining the ability of the forms to detect an increase in scores over time and with the resident's level of experience. RESULTS Twenty-five residents were included. Cronbach's alpha was 0.90 with the mini-CEX (n=70), 0.89 with the OD_CR (n=62) and 0.77 with the E_360 (n=86). ICCs varied widely across items of the mini-CEX and the OD_CR, probably owing to the small number of observations per resident. Scores on most items of these two forms increased between month 1 and month 6. E_360 scores were high, ranging from 7.3±0.8 to 8.3±2.4 (maximum 9), and did not vary with the level of experience. CONCLUSION This study suggests that ensuring sufficient reliability for professional skills assessment with these tools would be difficult given currently available human and material resources. However, these assessment forms could be added to the resident portfolio to support debriefing and to document residents' progression during training.
Collapse
Affiliation(s)
- G Bonnard
- Service de médecine interne, Hôtel-Dieu, CHU de Nantes, 44093 Nantes, France
| | - F Cohen-Aubart
- Service de médecine interne 2, Sorbonne Université, hôpital de la Pitié-Salpêtrière, Assistance publique-Hôpitaux de Paris (AP-HP), 75013 Paris, France
| | - O Steichen
- Service de Médecine Interne, université Paris-VI Pierre-et-Marie-Curie, hôpital Tenon, AP-HP, 75970 Paris, France
| | - A Bourgarit
- Service de médecine interne, hôpital Jean Verdier, AP-HP, 93140 Bondy, France
| | - S Abad
- Service de médecine interne, hôpital Avicenne, AP-HP, 93000 Bobigny, France
| | - B Ranque
- Service de médecine interne, hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
| | - J Pouchot
- Service de médecine interne, hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
| | - A Dossier
- Service de médecine interne, hôpital Bichat, Université Paris Diderot, PRES Sorbonne Paris Cité, AP-HP, 75877 Paris, France
| | - A Espitia-Thibault
- Service de médecine interne, Hôtel-Dieu, CHU de Nantes, 44093 Nantes, France
| | - P Jego
- Service de médecine interne, CHU de Rennes, 35200 Rennes, France
| | - B Granel
- Service de médecine interne, CHU Nord, Assistance publique-Hôpitaux de Marseille, 13015 Marseille, France
| | - D Launay
- Département de médecine interne et immunologie clinique, CHU Lille, 59037 Lille, France
| | - E Rivière
- Service de médecine interne et maladies infectieuses, hôpital Haut-Lévêque, CHU de Bordeaux, 33600 Pessac, France
| | - C Le Jeunne
- Service de médecine interne, hôpital Cochin, université Paris Descartes, AP-HP, 75014 Paris, France
| | - L Mouthon
- Service de médecine interne, hôpital Cochin, université Paris Descartes, AP-HP, 75014 Paris, France
| | - P Pottier
- Service de médecine interne, Hôtel-Dieu, CHU de Nantes, 44093 Nantes, France; SPHERE U1246, Inserm, université de Nantes-université de Tours, 44000 Nantes, France.
| | | |
Collapse
|
25
|
Duijn CCMA, Dijk EJV, Mandoki M, Bok HGJ, Cate OTJT. Assessment Tools for Feedback and Entrustment Decisions in the Clinical Workplace: A Systematic Review. JOURNAL OF VETERINARY MEDICAL EDUCATION 2019; 46:340-352. [PMID: 31460844 DOI: 10.3138/jvme.0917-123r] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
BACKGROUND: Entrustable Professional Activities (EPAs) combine feedback and evaluation with permission to act under a specified level of supervision and the possibility of scheduling learners for clinical service. This literature review aims to identify workplace-based assessment tools that indicate progression toward unsupervised practice and are suitable for entrustment decisions and feedback to learners. METHODS: A systematic search was performed in the PubMed, Embase, ERIC, and PsycINFO databases. Based on title/abstract and full text, articles were selected using predetermined inclusion and exclusion criteria. Information on workplace-based assessment tools was extracted using data coding sheets. The methodological quality of studies was assessed using the Medical Education Research Study Quality Instrument (MERSQI). RESULTS: The search yielded 6,371 articles (180 were evaluated in full text). In total, 80 articles were included, identifying 67 assessment tools. Only a few studies explicitly mentioned assessment tools used as a resource for entrustment decisions. Validity evidence was frequently reported, and the mean MERSQI score was 10.0. CONCLUSIONS: Many workplace-based assessment tools were identified that could potentially support learners with feedback on their development and support supervisors in providing that feedback. As expected, only a few articles referred to entrustment decisions. Nevertheless, the existing tools, or the principles underlying them, could be used to inform entrustment decisions, supervision levels, or autonomy.
Collapse
|
26
|
Ribeiro AM, Ferla AA, Amorim JSCD. Objective structured clinical examination in physiotherapy teaching: a systematic review. FISIOTERAPIA EM MOVIMENTO 2019. [DOI: 10.1590/1980-5918.032.ao14] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Introduction: Problems related to the clinical abilities of physiotherapy students are not always identified during the educational process and might only become apparent when these future professionals have to treat patients. The Objective Structured Clinical Examination (OSCE) includes a problematization approach and can be used in health sciences teaching to help identify such problems before internship practices. However, there are few studies on its use in physiotherapy. Objective: To gather evidence on the use of the OSCE to evaluate clinical abilities in physiotherapy teaching. Method: Articles published from 2000 to 2016 were surveyed in the Biblioteca Virtual em Saúde (BVS) (Virtual Health Library), Centro Latino-Americano e do Caribe de Informação em Ciências da Saúde (BIREME) (Latin-American and Caribbean Health Sciences Information Center), PubMed, SciELO and Web of Science, using the descriptors “educational assessment”, “assessment methods”, “objective structured clinical examination”, “clinical competence”, “professional competence”, “clinical skills”, “student competence”, “student skills” and “physiotherapy”, combined with the Boolean operators “OR” and “AND”. Results: The initial number of identified publications was 3,242; of these, seven were included in this review. Two studies were conducted in Brazil, four in Australia and one in Canada. The studies scored 7 to 12 for methodological quality, and 1B or 2B for scientific evidence. Conclusion: Students’ clinical abilities were grouped into three classes (cognitive, psychomotor and affective), and four studies described their use. There is very little evidence on the use of the OSCE, but the instrument can be applied to evaluate skills and competences in physiotherapy teaching.
Collapse
|
27
|
Young JQ, Irby DM, Kusz M, O'Sullivan PS. Performance Assessment of Pharmacotherapy: Results from a Content Validity Survey of the Psychopharmacotherapy-Structured Clinical Observation (P-SCO) Tool. ACADEMIC PSYCHIATRY : THE JOURNAL OF THE AMERICAN ASSOCIATION OF DIRECTORS OF PSYCHIATRIC RESIDENCY TRAINING AND THE ASSOCIATION FOR ACADEMIC PSYCHIATRY 2018; 42:765-772. [PMID: 29380145 DOI: 10.1007/s40596-017-0876-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/21/2017] [Accepted: 12/22/2017] [Indexed: 06/07/2023]
Abstract
OBJECTIVE The Psychopharmacotherapy-Structured Clinical Observation (P-SCO) tool is designed to assess performance of a medication management visit and to enhance feedback. Prior research indicated that the P-SCO was feasible to implement in a resident clinic and generated behaviorally specific, high-quality feedback. This research also highlighted problems with some of the instrument's items. This study seeks to improve those items. METHODS The authors initially revised the P-SCO items based on the problems identified in the prior study. Next, these items were iteratively modified by experts in clinical pharmacotherapy and educational assessment. Forty-five items emerged. Finally, faculty attending an annual department education retreat rated each item on its relevance (4-point scale) and provided comments on how the item might be revised. For final inclusion, an item had to meet a quantitative threshold (i.e., content validity index equal to or greater than 0.8 and the lower end of the asymmetric confidence interval equal to or greater than 3.0) and receive supportive comments. RESULTS Forty-one of the 45 items had strong quantitative support. However, the comments endorsed combining a number of items to decrease overlap and shorten the instrument. This process resulted in the further elimination of 15 items. CONCLUSIONS The revised 26-item P-SCO builds on prior evidence of feasibility and utility and now possesses additional evidence of content validity. Use of the tool should enhance feedback and improve the capacity of educational programs to assess performance.
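The item-level content validity index used as the inclusion threshold above is conventionally computed as the proportion of raters scoring an item 3 or 4 on a 4-point relevance scale; a minimal sketch under that assumption, with invented ratings (not the study data):

```python
# Hypothetical illustration of an item-level content validity index (I-CVI):
# the share of raters giving an item 3 or 4 on a 4-point relevance scale.
# The 0.8 cutoff matches the threshold cited in the abstract.
import numpy as np

ratings = np.array([   # rows = items, columns = faculty raters (1-4 scale)
    [4, 4, 3, 4, 3, 4],
    [2, 3, 2, 3, 2, 2],
])
i_cvi = (ratings >= 3).mean(axis=1)
for idx, cvi in enumerate(i_cvi, start=1):
    verdict = "retain" if cvi >= 0.8 else "revise/drop"
    print(f"item {idx}: I-CVI = {cvi:.2f} -> {verdict}")
```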
Collapse
Affiliation(s)
- John Q Young
- Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA.
| | | | - Martin Kusz
- Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
| | | |
Collapse
|
28
|
Young JQ, Rasul R, O'Sullivan PS. Evidence for the Validity of the Psychopharmacotherapy-Structured Clinical Observation Tool: Results of a Factor and Time Series Analysis. ACADEMIC PSYCHIATRY : THE JOURNAL OF THE AMERICAN ASSOCIATION OF DIRECTORS OF PSYCHIATRIC RESIDENCY TRAINING AND THE ASSOCIATION FOR ACADEMIC PSYCHIATRY 2018; 42:759-764. [PMID: 29951950 DOI: 10.1007/s40596-018-0928-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/28/2017] [Accepted: 04/18/2018] [Indexed: 06/08/2023]
Abstract
OBJECTIVE The Psychopharmacotherapy-Structured Clinical Observation (P-SCO) is a direct observation tool designed to assess resident performance of a medication visit. This study examines two dimensions of validity for the P-SCO: internal structure and how scores correlate with another variable associated with competence (experience). METHODS The faculty completed 601 P-SCOs over 4 years. Multilevel exploratory factor analysis was performed with minimum thresholds for eigenvalue (≥ 1.0) and proportion of variance explained (≥ 5.0%). Internal reliability was assessed with Cronbach alpha. To examine how scores changed with experience, mean ratings (1-4 scale) were calculated for each factor by quarter of the academic year. Separate linear mixed models were also performed. RESULTS The analysis yielded three factors that explained 50% of the variance and demonstrated high internal reliability: affective tasks (alpha = 0.90), cognitive tasks (alpha = 0.84), and hard tasks (alpha = 0.74). Items within "hard tasks" were assessment of substance use, violence risk, and adherence, and inquiry about interactions with other providers. Monitoring adverse effects did not load on the hard task factor but also had overall low mean ratings. Compared to the first quarter, fourth quarter scores for affective tasks (b = 0.54, p < 0.01) and hard tasks (b = 0.46, p = 0.02) were significantly improved while cognitive tasks had a non-significant increase. For the hard tasks, the proportion of residents with a low mean rating improved but was still over 30% during the fourth quarter. CONCLUSIONS The results provide evidence for the validity of the P-SCO with respect to its internal structure and how scores correlate with experience. Curricular implications are explored, especially for the tasks that were hard to learn.
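Cronbach's alpha, reported above for each factor, is defined as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); a short self-contained sketch with simulated ratings (not the study data):

```python
# Cronbach's alpha for a set of items assumed to load on one factor.
# Data are simulated; in the study each row would be a completed P-SCO and
# each column an item within a factor (e.g., the "affective tasks" items).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = observations, columns = items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(size=(300, 1))
items = true_score + rng.normal(scale=0.7, size=(300, 5))  # 5 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```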
Collapse
Affiliation(s)
- John Q Young
- Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA.
| | | | | |
Collapse
|
29
|
O'Connor A, Cantillon P, Parker M, McCurtin A. Juggling roles and generating solutions; practice-based educators' perceptions of performance-based assessment of physiotherapy students. Physiotherapy 2018; 105:446-452. [PMID: 30871892 DOI: 10.1016/j.physio.2018.11.008] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2018] [Accepted: 11/04/2018] [Indexed: 11/19/2022]
Abstract
OBJECTIVES Physiotherapy lacks the significant body of evidence that underpins performance-based assessment in disciplines such as medicine and nursing. In particular, very few studies have examined stakeholder perspectives on the process. This study set out to explore the perceptions of clinicians who undertake student assessment in the workplace in order to inform the further development of performance-based assessment in physiotherapy. DESIGN A qualitative, descriptive design was employed in which focus group interviews were used for data collection. Inductive thematic analysis was used to analyse the data. PARTICIPANTS Clinical educator and practice tutor volunteers affiliated with three Irish universities participated in one of seven focus groups (n=46). RESULTS Two themes were identified: (1) tensions in the clinical learning environment, and (2) an optimal performance-based assessment (PBA) process. The first theme describes clinical educators' struggle to juggle multiple roles and highlights the challenges of sustaining a balance between student mentoring and patient care. The second theme outlines factors perceived to contribute to an optimal performance-based assessment process; these include maintaining aspects of the current process and expanding the use of dedicated educational roles in the workplace. CONCLUSION Our findings illustrate a complex working environment for clinicians involved in student supervision and assessment. A dedicated educational role was perceived to provide a more standardised and rigorous approach to performance-based assessment. These findings provide critical stakeholder-centred insights, which may inform development of the process by addressing aspects deemed to facilitate and challenge clinical educators' roles as assessors.
Collapse
Affiliation(s)
- A O'Connor
- School of Allied Health, Health Sciences Building, University of Limerick, Castletroy, Limerick V94 T9PX, Ireland; Health Research Institute, University of Limerick, Castletroy, Limerick V94 T9PX, Ireland.
| | - P Cantillon
- Discipline of General Practice, Clinical Science Institute, National University of Ireland, Galway H91 TK33, Ireland.
| | - M Parker
- Department of Physical Education and Sports Sciences, University of Limerick, Castletroy, Limerick V94 T9PX, Ireland.
| | - A McCurtin
- School of Allied Health, Health Sciences Building, University of Limerick, Castletroy, Limerick V94 T9PX, Ireland; Health Research Institute, University of Limerick, Castletroy, Limerick V94 T9PX, Ireland.
| |
Collapse
|
30
|
Castanelli DJ, Moonen-van Loon JMW, Jolly B, Weller JM. The reliability of a portfolio of workplace-based assessments in anesthesia training. Can J Anaesth 2018; 66:193-200. [PMID: 30430441 DOI: 10.1007/s12630-018-1251-7] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Revised: 10/01/2018] [Accepted: 10/02/2018] [Indexed: 10/27/2022] Open
Abstract
PURPOSE Competency-based anesthesia training programs require robust assessment of trainee performance and commonly combine different types of workplace-based assessment (WBA) covering multiple facets of practice. This study measured the reliability of WBAs in a large existing database and explored how they could be combined to optimize reliability for assessment decisions. METHODS We used generalizability theory to measure the composite reliability of four different types of WBAs used by the Australian and New Zealand College of Anaesthetists: mini-Clinical Evaluation Exercise (mini-CEX), direct observation of procedural skills (DOPS), case-based discussion (CbD), and multi-source feedback (MSF). We then modified the number and weighting of WBA combinations to optimize reliability with fewer assessments. RESULTS We analyzed 67,405 assessments from 1,837 trainees and 4,145 assessors. We assumed acceptable reliability for interim (intermediate stakes) and final (high stakes) decisions of 0.7 and 0.8, respectively. Depending on the combination of WBA types, 12 assessments allowed the 0.7 threshold to be reached where one assessment of any type has the same weighting, while 20 were required for reliability to reach 0.8. If the weighting of the assessments is optimized, acceptable reliability for interim and final decisions is possible with nine (e.g., two DOPS, three CbD, two mini-CEX, two MSF) and 15 (e.g., two DOPS, eight CbD, three mini-CEX, two MSF) assessments respectively. CONCLUSIONS Reliability is an important factor to consider when designing assessments, and measuring composite reliability can allow the selection of a WBA portfolio with adequate reliability to provide evidence for defensible decisions on trainee progression.
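For intuition about how the number of assessments maps to reliability, a simplified sketch: the paper's generalizability analysis weights a composite of multiple WBA types, which this single-facet Spearman-Brown approximation does not reproduce, and the single-assessment reliability below is illustrative rather than a value from the study.

```python
# Simplified sketch of reliability projection for a growing portfolio of
# assessments, using the Spearman-Brown relation. Illustrates the principle
# only; the paper's weighted composite analysis is more elaborate.
import math

def projected_reliability(r1: float, n: int) -> float:
    """Spearman-Brown: reliability of the mean of n parallel assessments."""
    return n * r1 / (1 + (n - 1) * r1)

def n_needed(r1: float, target: float) -> int:
    """Smallest n whose projected reliability reaches the target."""
    return math.ceil(target * (1 - r1) / (r1 * (1 - target)))

r1 = 0.16  # illustrative single-assessment reliability, not from the paper
for target in (0.7, 0.8):
    n = n_needed(r1, target)
    print(f"target {target}: {n} assessments "
          f"(projected R = {projected_reliability(r1, n):.2f})")
```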
Collapse
Affiliation(s)
- Damian J Castanelli
- School of Clinical Sciences at Monash Health, Monash University, Clayton, VIC, Australia; Department of Anaesthesia and Perioperative Medicine, Monash Health, Clayton, VIC, Australia
| | - Joyce M W Moonen-van Loon
- Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
| | - Brian Jolly
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Newcastle, NSW, Australia
| | - Jennifer M Weller
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand; Department of Anaesthesia, Auckland City Hospital, Auckland, New Zealand
| |
Collapse
|
31
|
Park YS, Hicks PJ, Carraccio C, Margolis M, Schwartz A. Does Incorporating a Measure of Clinical Workload Improve Workplace-Based Assessment Scores? Insights for Measurement Precision and Longitudinal Score Growth From Ten Pediatrics Residency Programs. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:S21-S29. [PMID: 30365426 DOI: 10.1097/acm.0000000000002381] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
PURPOSE This study investigates the impact of incorporating observer-reported workload into workplace-based assessment (WBA) scores on (1) the psychometric characteristics of WBA scores and (2) the measurement of changes in performance over time, using workload-unadjusted versus workload-adjusted scores. METHOD Structured clinical observations and multisource feedback instruments were used to collect WBA data from first-year pediatrics residents at 10 residency programs between July 2016 and June 2017. Observers completed items in 8 subcompetencies associated with the Pediatrics Milestones. Faculty and resident observers assessed workload using a sliding scale ranging from low to high; all item scores were rescaled to a 1-5 scale to facilitate analysis and interpretation. Workload-adjusted WBA scores were calculated at the item level using three different approaches and aggregated for analysis at the competency level. Mixed-effects regression models were used to estimate variance components. Longitudinal growth curve analyses examined patterns of developmental score change over time. RESULTS On average, participating residents (n = 252) were assessed 5.32 times (standard deviation = 3.79) by different raters during the data collection period. Adjusting for workload yielded better discrimination of learner performance and higher reliability, reducing measurement error by 28%. Reliability projections indicated that up to twice as many raters would be needed when workload-unadjusted scores were used. Longitudinal analysis showed an increase in scores over time, with a significant interaction between workload and time; workload also increased significantly over time. CONCLUSIONS Incorporating a measure of observer-reported workload could improve the measurement properties of WBA scores and the ability to interpret them.
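A sketch of why reducing error variance improves reliability and cuts the required number of raters, using the standard generalizability coefficient G = var_person / (var_person + var_error / n); all variance components below are invented for illustration, not estimates from the study.

```python
# Illustration of how shrinking error variance (as workload adjustment did
# above) lowers the number of raters needed to hit a reliability threshold.
def g_coefficient(var_person: float, var_error: float, n_raters: int) -> float:
    """Generalizability coefficient for the mean of n_raters observations."""
    return var_person / (var_person + var_error / n_raters)

var_person = 0.20  # invented person (true-score) variance component
for var_error, label in [(1.00, "unadjusted"), (0.72, "error reduced 28%")]:
    n = 1
    while g_coefficient(var_person, var_error, n) < 0.70:
        n += 1
    print(f"{label}: {n} raters needed for G >= 0.70")
```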
Collapse
Affiliation(s)
- Yoon Soo Park
- Y.S. Park is associate professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335. P.J. Hicks is professor of clinical pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0003-3781-780X. C. Carraccio is vice president of competency-based assessment programs, American Board of Pediatrics, Chapel Hill, North Carolina; ORCID: https://orcid.org/0000-0001-5473-8914. M. Margolis is senior measurement scientist, National Board of Medical Examiners, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-6548-7273. A. Schwartz is Michael Reese Endowed Professor of Medical Education, Department of Medical Education, and research professor, Department of Pediatrics, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0003-3809-6637
| | | | | | | | | |
Collapse
|
32
|
O'Connor A, Cantillon P, McGarr O, McCurtin A. Navigating the system: Physiotherapy student perceptions of performance-based assessment. MEDICAL TEACHER 2018; 40:928-933. [PMID: 29256736 DOI: 10.1080/0142159x.2017.1416071] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
BACKGROUND Performance-based assessment (PBA) is an integral component of health professional education as it determines students' readiness for independent practice. Stakeholder input can provide valuable insight regarding its challenges, facilitators, and impact on student learning, which may further its evolution. Currently, evidence of stakeholder opinion is limited. Thus, we aimed to explore physiotherapy students' perceptions of performance-based assessment in their capacity as its central stakeholders. METHODS A qualitative interpretive constructivist approach was employed using focus group interviews for data collection. Six focus groups were completed (n = 33). Inductive thematic analysis was used to explore the data. RESULTS Two themes were identified. The first outlined perceived inconsistencies within the process, and how these impacted on student learning. The second described how students used their experiential knowledge to identify strategies to manage these challenges thus identifying key areas for improvement. CONCLUSION Inconsistencies outlined within the current physiotherapy performance-based assessment process encourage an emphasis on grades rather than on learning. It is timely that the physiotherapy academic and clinical communities consider these findings alongside evidence from other health professions to improve assessment procedures and assure public confidence and patient safety.
Collapse
Affiliation(s)
- Anne O'Connor
- Department of Clinical Therapies, University of Limerick, Limerick, Ireland
| | - Peter Cantillon
- Department of General Practice, National University of Ireland Galway, Galway, Ireland
| | - Oliver McGarr
- School of Education, University of Limerick, Limerick, Ireland
| | - Arlene McCurtin
- Department of Clinical Therapies, University of Limerick, Limerick, Ireland
- Health Research Institute, University of Limerick, Limerick, Ireland
| |
Collapse
|
33
|
Oudkerk Pool A, Govaerts MJB, Jaarsma DADC, Driessen EW. From aggregation to interpretation: how assessors judge complex data in a competency-based portfolio. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2018; 23:275-287. [PMID: 29032415 PMCID: PMC5882626 DOI: 10.1007/s10459-017-9793-y] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/06/2017] [Accepted: 09/11/2017] [Indexed: 05/11/2023]
Abstract
While portfolios are increasingly used to assess competence, the validity of such portfolio-based assessments has hitherto remained unconfirmed. The purpose of the present research is therefore to further our understanding of how assessors form judgments when interpreting the complex data included in a competency-based portfolio. Eighteen assessors appraised one of three competency-based mock portfolios while thinking aloud, before taking part in semi-structured interviews. A thematic analysis of the think-aloud protocols and interviews revealed that assessors reached judgments through a 3-phase cyclical cognitive process of acquiring, organizing, and integrating evidence. Upon conclusion of the first cycle, assessors reviewed the remaining portfolio evidence to look for confirming or disconfirming evidence. Assessors were inclined to stick to their initial judgments even when confronted with seemingly disconfirming evidence. Although assessors reached similar final (pass-fail) judgments of students' professional competence, they differed in their information-processing approaches and the reasoning behind their judgments. Differences sprung from assessors' divergent assessment beliefs, performance theories, and inferences about the student. Assessment beliefs refer to assessors' opinions about what kind of evidence gives the most valuable and trustworthy information about the student's competence, whereas assessors' performance theories concern their conceptualizations of what constitutes professional competence and competent performance. Even when using the same pieces of information, assessors furthermore differed with respect to inferences about the student as a person as well as a (future) professional. Our findings support the notion that assessors' reasoning in judgment and decision-making varies and is guided by their mental models of performance assessment, potentially impacting feedback and the credibility of decisions. Our findings also lend further credence to the assertion that portfolios should be judged by multiple assessors who should, moreover, thoroughly substantiate their judgments. Finally, it is suggested that portfolios be designed in such a way that they facilitate the selection of and navigation through the portfolio evidence.
Collapse
Affiliation(s)
- Andrea Oudkerk Pool
- Department of Educational Development and Research, Maastricht University, Universiteitssingel 60, 6229 ER, Maastricht, The Netherlands.
| | - Marjan J B Govaerts
- Department of Educational Development and Research, Maastricht University, Universiteitssingel 60, 6229 ER, Maastricht, The Netherlands
| | - Debbie A D C Jaarsma
- Center for Education Development and Research in Health Professions (CEDAR), Faculty of Medical Sciences, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Erik W Driessen
- Department of Educational Development and Research, Maastricht University, Universiteitssingel 60, 6229 ER, Maastricht, The Netherlands
| |
Collapse
|
34
|
Fahim C, Wagner N, Nousiainen MT, Sonnadara R. Assessment of Technical Skills Competence in the Operating Room: A Systematic and Scoping Review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:794-808. [PMID: 28953567 DOI: 10.1097/acm.0000000000001902] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
PURPOSE While academic accreditation bodies continue to promote competency-based medical education (CBME), the feasibility of conducting regular CBME assessments remains challenging. The purpose of this study was to identify evidence pertaining to the practical application of assessments that aim to measure technical competence for surgical trainees in a nonsimulated, operative setting. METHOD In August 2016, the authors systematically searched Medline, Embase, and the Cochrane Database of Systematic Reviews for English-language, peer-reviewed articles published in or after 1996. The title, abstract, and full text of identified articles were screened. Data regarding study characteristics, psychometric and measurement properties, implementation of assessment, competency definitions, and faculty training were extracted. The findings from the systematic review were supplemented by a scoping review to identify key strategies related to faculty uptake and implementation of CBME assessments. RESULTS A total of 32 studies were included. The majority of studies reported reasonable scores of interrater reliability and internal consistency. Seven articles identified minimum scores required to establish competence. Twenty-five articles mentioned faculty training. Many of the faculty training interventions focused on timely completion of assessments or scale calibration. CONCLUSIONS There are a number of diverse tools used to assess competence for intraoperative technical skills and a lack of consensus regarding the definition of technical competence within and across surgical specialties. Further work is required to identify when and how often trainees should be assessed and to identify strategies to train faculty to ensure timely and accurate assessment.
Collapse
Affiliation(s)
- Christine Fahim
- C. Fahim is a PhD candidate, Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada. N. Wagner is a PhD candidate, Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada. M.T. Nousiainen is orthopedic surgeon and assistant professor, Sunnybrook Hospital, Department of Surgery, University of Toronto, Toronto, Ontario, Canada. R. Sonnadara is director of education science and associate professor, Department of Surgery, McMaster University, Hamilton, Ontario, Canada, and associate professor, Department of Surgery, University of Toronto, Toronto, Ontario, Canada; ORCID: http://orcid.org/0000-0001-8318-5714
| | | | | | | |
Collapse
|
35
|
Chan T, Sebok‐Syer S, Thoma B, Wise A, Sherbino J, Pusic M. Learning Analytics in Medical Education Assessment: The Past, the Present, and the Future. AEM EDUCATION AND TRAINING 2018; 2:178-187. [PMID: 30051086 PMCID: PMC6001721 DOI: 10.1002/aet2.10087] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Accepted: 01/30/2018] [Indexed: 05/09/2023]
Abstract
With the implementation of competency-based medical education (CBME) in emergency medicine, residency programs will amass substantial amounts of qualitative and quantitative data about trainees' performances. This increased volume of data will challenge traditional processes for assessing trainees and remediating training deficiencies. At the intersection of trainee performance data and statistical modeling lies the field of medical learning analytics. At a local training program level, learning analytics has the potential to assist program directors and competency committees with interpreting assessment data to inform decision making. On a broader level, learning analytics can be used to explore system questions and identify problems that may impact our educational programs. Scholars outside of health professions education have been exploring the use of learning analytics for years and their theories and applications have the potential to inform our implementation of CBME. The purpose of this review is to characterize the methodologies of learning analytics and explore their potential to guide new forms of assessment within medical education.
Collapse
Affiliation(s)
- Teresa Chan
- McMaster Program for Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
| | - Stefanie Sebok-Syer
- Centre for Education Research & Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
| | - Brent Thoma
- Department of Emergency Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
| | - Alyssa Wise
- Steinhardt School of Culture, Education, and Human Development, New York University, New York, NY
| | - Jonathan Sherbino
- Faculty of Health Science, Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- McMaster Program for Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
| | - Martin Pusic
- Department of Emergency Medicine, NYU School of Medicine, New York, NY
| |
Collapse
|
36
|
Gruppen LD, Ten Cate O, Lingard LA, Teunissen PW, Kogan JR. Enhanced Requirements for Assessment in a Competency-Based, Time-Variable Medical Education System. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:S17-S21. [PMID: 29485482 DOI: 10.1097/acm.0000000000002066] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Competency-based, time-variable medical education has reshaped the perceptions and practices of teachers, curriculum designers, faculty developers, clinician educators, and program administrators. This increasingly popular approach highlights the fact that learning among different individuals varies in duration, foundation, and goal. Time variability places particular demands on the assessment data that are so necessary for making decisions about learner progress. These decisions may be formative (e.g., feedback for improvement) or summative (e.g., decisions about advancing a student). This article identifies challenges to collecting assessment data and to making assessment decisions in a time-variable system. These challenges include managing assessment data, defining and making valid assessment decisions, innovating in assessment, and modeling the considerable complexity of assessment in real-world settings and richly interconnected social systems. There are hopeful signs of creativity in assessment both from researchers and practitioners, but the transition from a traditional to a competency-based medical education system will likely continue to create much controversy and offer opportunities for originality and innovation in assessment.
Collapse
Affiliation(s)
- Larry D Gruppen
- L.D. Gruppen is professor, Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan. O. ten Cate is professor of medical education, Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, the Netherlands. L.A. Lingard is professor, Department of Medicine, and director, Centre for Education Research & Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada. P.W. Teunissen is professor, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and maternal fetal medicine specialist, VU University Medical Center, Amsterdam, the Netherlands. J.R. Kogan is professor of medicine, Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
| | | | | | | | | |
Collapse
|
37
|
Patel US, Tonni I, Gadbury-Amyot C, Van der Vleuten CPM, Escudier M. Assessment in a global context: An international perspective on dental education. EUROPEAN JOURNAL OF DENTAL EDUCATION : OFFICIAL JOURNAL OF THE ASSOCIATION FOR DENTAL EDUCATION IN EUROPE 2018; 22 Suppl 1:21-27. [PMID: 29601682 DOI: 10.1111/eje.12343] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 03/12/2018] [Indexed: 05/08/2023]
Abstract
Assessments are widely used in dental education to record the academic progress of students and ultimately determine whether they are ready to begin independent dental practice. Whilst some would consider this a "rite of passage" of learning, the concept of assessment in education is being challenged to allow the evolution of "assessment for learning." This makes economical use of learning resources whilst allowing learners to prove their knowledge and skills and demonstrate competence. The Association for Dental Education in Europe and the American Dental Education Association held a joint international meeting in London in May 2017, allowing experts in dental education to come together for the purposes of Shaping the Future of Dental Education. Assessment in a Global Context was one topic through which international leaders could discuss different methods of assessment, identifying the positives and the pitfalls and critiquing methods of implementation to determine the optimum assessment for a learner studying to be a healthcare professional. A post-workshop survey identified that educators were thinking differently about assessment: instead of individuals providing isolated assessments, the general consensus was that a longitudinally orientated, systematic and programmatic approach to assessment provides greater reliability and improves the ability to demonstrate learning.
Collapse
Affiliation(s)
- U S Patel
- School of Dentistry, University of Birmingham, Birmingham, UK
| | - I Tonni
- Department of Orthodontics, University of Brescia, Brescia, Italy
| | - C Gadbury-Amyot
- The University of Missouri-Kansas City (UMKC), Kansas City, MO, USA
| | - C P M Van der Vleuten
- Department of Educational Development and Research in the Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
| | - M Escudier
- Department of Clinical and Diagnostic Sciences, King's College London Dental Institute, London, UK
| |
Collapse
|
38
|
Ricotta DN, Smith CC, McSparron JI, Chaudhry SI, McDonald FS, Huang GC. When Old Habits Train a New Generation: Findings From a National Survey of Internal Medicine Program Directors on Procedural Training. Am J Med Qual 2017; 33:383-390. [PMID: 29185357 DOI: 10.1177/1062860617743786] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Resident physicians routinely perform bedside procedures that pose substantial risk to patients. However, no standard programmatic approach to supervision and procedural competency assessment among residents currently exists. The authors performed a national survey of internal medicine (IM) program directors to examine the procedural assessment and supervision practices of IM residency programs. The procedures chosen were those commonly performed by medicine residents at the bedside. Of the 368 IM programs, 226 (61%) completed the survey. Programs reported their predominant training methods as apprenticeship (171 programs; 74%) and module based (106 programs; 46%). The majority of programs used direct observation to attest to competence, with 55% to 62% relying on credentialed residents. Most programs also relied on a minimum number of procedures to determine competence (64%-88%), 72% of which reported 5 procedures (a lapsed historical standard). This national survey demonstrates that procedural assessment practices for IM residents are insufficiently robust and may put patients at undue risk.
Collapse
|
39
|
Pottier P, Cohen Aubart F, Steichen O, Desprets M, Pha M, Espitia A, Georgin-Lavialle S, Morel A, Hardouin JB. [Validity and reproducibility of two direct observation assessment forms for evaluation of internal medicine residents' clinical skills]. Rev Med Interne 2017; 39:4-9. [PMID: 29157753 DOI: 10.1016/j.revmed.2017.10.424] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2017] [Revised: 09/05/2017] [Accepted: 10/18/2017] [Indexed: 10/18/2022]
Abstract
INTRODUCTION The reform of the third cycle of French medical studies is intended to be competency-based. In internal medicine, theoretical and practical knowledge will be assessed online through e-learning and an e-portfolio. In parallel, work on clinical skills assessment forms is currently ongoing. In this context, our aim was to assess the reproducibility and validity of two assessment forms based on direct clinical observation. METHOD A prospective, multicenter study was conducted from November 2015 to October 2016 to evaluate the French translations of the Mini-Clinical Evaluation Exercise (mini-CEX) and the Standardized Patient Satisfaction Questionnaire (SPSQ). Included residents were assessed twice over a period of 6 months by the same pair of judges. RESULTS Nineteen residents were included. Inter-judge reproducibility was satisfactory for the mini-CEX (intraclass correlation coefficients [ICC] between 0.4 and 0.8) and moderate for the SPSQ (ICC between 0.2 and 0.7), with good internal consistency for both questionnaires (Cronbach's alpha between 0.92 and 0.94). Significant differences between the distributions of the scores given by the judges, and significant inter-center variability, were found. CONCLUSION While the absolute value of the scores should not be relied on in the evaluation process given its high variability, it could be of interest for following residents' progression in competencies. These forms could support resident debriefing based on the general trends indicated by the scores.
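For reference, a minimal sketch of the two-way random-effects ICC(2,1) for absolute agreement that underlies inter-judge reproducibility figures like those above, computed from ANOVA mean squares on invented scores for 19 residents rated by two judges:

```python
# Two-way random-effects ICC(2,1), absolute agreement, from ANOVA mean
# squares (Shrout & Fleiss form). Scores are simulated, not the study data.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """scores: rows = subjects (residents), columns = raters (judges)."""
    n, k = scores.shape
    grand = scores.mean()
    ms_r = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_c = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    resid = (scores - scores.mean(axis=1, keepdims=True)
                    - scores.mean(axis=0, keepdims=True) + grand)
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(2)
true = rng.normal(7, 1, size=(19, 1))                # 19 residents, as above
scores = true + rng.normal(scale=0.8, size=(19, 2))  # two judges per resident
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```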
Collapse
Affiliation(s)
- P Pottier
- Service de médecine interne, Hôtel-Dieu, CHU de Nantes, place Alexis-Ricordeau, 44093 Nantes, France; SPHERE U1246, Inserm, université de Nantes-université de Tours, 44000 Nantes, France.
| | - F Cohen Aubart
- Service de médecine interne 2, hôpital de la Pitié-Salpêtrière, université Paris-VI - Pierre-et-Marie-Curie, Assistance publique-Hôpitaux de Paris, 75013 Paris, France
| | - O Steichen
- Service de médecine interne, hôpital Tenon, UPMC université Paris 06, Sorbonne universités, AP-HP, 75970 Paris, France
| | - M Desprets
- Service de médecine interne, Hôtel-Dieu, CHU de Nantes, place Alexis-Ricordeau, 44093 Nantes, France
| | - M Pha
- Service de médecine interne 2, hôpital de la Pitié-Salpêtrière, université Paris-VI - Pierre-et-Marie-Curie, Assistance publique-Hôpitaux de Paris, 75013 Paris, France
| | - A Espitia
- Service de médecine interne, Hôtel-Dieu, CHU de Nantes, place Alexis-Ricordeau, 44093 Nantes, France
| | - S Georgin-Lavialle
- Service de médecine interne, hôpital Tenon, UPMC université Paris 06, Sorbonne universités, AP-HP, 75970 Paris, France
| | - A Morel
- SPHERE U1246, Inserm, université de Nantes-université de Tours, 44000 Nantes, France
| | - J B Hardouin
- SPHERE U1246, Inserm, université de Nantes-université de Tours, 44000 Nantes, France
| |
Collapse
|
40
|
Nair BKR, Moonen-van Loon JMW, Parvathy M, Jolly BC, van der Vleuten CPM. Composite reliability of workplace-based assessment of international medical graduates. Med J Aust 2017; 207:453. [DOI: 10.5694/mja17.00130] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2017] [Accepted: 09/25/2017] [Indexed: 11/17/2022]
Affiliation(s)
- Balakrishnan (Kichu) R Nair
- Centre for Medical Professional Development, John Hunter Hospital, Newcastle, NSW
- University of Newcastle, Newcastle, NSW
| | | | - Mulavana Parvathy
- Centre for Medical Professional Development, John Hunter Hospital, Newcastle, NSW
| | | | | |
Collapse
|
41
|
Harris P, Bhanji F, Topps M, Ross S, Lieberman S, Frank JR, Snell L, Sherbino J. Evolving concepts of assessment in a competency-based world. MEDICAL TEACHER 2017; 39:603-608. [PMID: 28598736 DOI: 10.1080/0142159x.2017.1315071] [Citation(s) in RCA: 86] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Competency-based medical education (CBME) is an approach to the design of educational systems or curricula that focuses on graduate abilities or competencies. It has been adopted in many jurisdictions, and in recent years an explosion of publications has examined its implementation and provided a critique of the approach. Assessment in a CBME context is often based on observations or judgments about an individual's level of expertise; it emphasizes frequent, direct observation of performance along with constructive and timely feedback to ensure that learners, including clinicians, have the expertise they need to perform entrusted tasks. This paper explores recent developments since the publication in 2010 of Holmboe and colleagues' description of CBME assessment. Seven themes regarding assessment that arose at the second invitational summit on CBME, held in 2013, are described: competency frameworks, the reconceptualization of validity, qualitative methods, milestones, feedback, assessment processes, and assessment across the medical education continuum. Medical educators interested in CBME, or assessment more generally, should consider the implications for their practice of the review of these emerging concepts.
Collapse
Affiliation(s)
- Peter Harris
- Office of Medical Education, University of New South Wales, Sydney, Australia
| | - Farhan Bhanji
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Centre for Medical Education and Department of General Internal Medicine, McGill University, Montreal, Quebec, Canada
| | - Maureen Topps
- Cumming School of Medicine, University of Calgary, Calgary, Canada
| | - Shelley Ross
- Department of Family Medicine, University of Alberta, Edmonton, Canada
| | - Steven Lieberman
- Office of the Dean of Medicine, University of Texas Medical Branch, Galveston, TX, USA
| | - Jason R Frank
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
| | - Linda Snell
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Centre for Medical Education and Department of General Internal Medicine, McGill University, Montreal, Quebec, Canada
| | - Jonathan Sherbino
- Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Canada
| |
Collapse
|
42
|
Driessen E. Do portfolios have a future? ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2017; 22:221-228. [PMID: 27025510 PMCID: PMC5306426 DOI: 10.1007/s10459-016-9679-4] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/23/2016] [Accepted: 03/22/2016] [Indexed: 05/04/2023]
Abstract
While portfolios have seen an unprecedented surge in popularity, they have also become the subject of controversy: learners often perceive little gain from writing reflections as part of their portfolios; scholars question the ethics of such obligatory reflection; and students, residents, teachers and scholars alike condemn the bureaucracy surrounding portfolio implementation in competency-based education. It could be argued that mass adoption without careful attention to purpose and format may well jeopardize portfolios' viability in health sciences education. This paper explores this proposition by addressing the following three main questions: (1) Why do portfolios meet with such resistance from students and teachers, while educationalists love them? (2) Is it ethical to require students to reflect and then grade their reflections? (3) Does competency-based education empower or hamper the learner during workplace-based learning? Twenty-five years of portfolio research reveal a clear story: without mentoring, portfolios have no future and are nothing short of bureaucratic hurdles in our competency-based education programs. Moreover, comprehensive portfolios, which are integrated into the curriculum and much more diverse in content than reflective portfolios, can function much like patient charts, which provide doctor and patient with useful information to discuss well-being and treatment. In this sense, portfolios are learner charts that comprehensively document progress along a learning trajectory, a process lubricated by meaningful dialogue between learner and mentor within a trusting relationship that fosters learning. If we are able to make such comprehensive and meaningful use of portfolios, then, yes, portfolios do have a bright future in medical education.
Collapse
Affiliation(s)
- Erik Driessen
- Department of Educational Development & Research, Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands.
| |
Collapse
|
43
|
Abstract
BACKGROUND The increasing use of workplace-based assessments (WBAs) in competency-based medical education has led to large data sets that assess resident performance longitudinally. With large data sets, problems that arise from missing data are increasingly likely. OBJECTIVE The purpose of this study is to examine (1) whether data are missing at random across various WBAs, and (2) the relationship between resident performance and the proportion of missing data. METHODS During 2012-2013, a total of 844 WBAs of CanMEDS Roles were completed for 9 second-year emergency medicine residents. To identify whether missing data were randomly distributed across the various WBAs, the total number of missing data points was calculated for each Role. To examine whether the amount of missing data was related to resident performance, 5 faculty members rank-ordered the residents based on performance. A median rank score was calculated for each resident and correlated with the proportion of missing data. RESULTS More data were missing for the Health Advocate and Professional WBAs than for other competencies (P < .001). Furthermore, resident rankings were not related to the proportion of missing data points (r = 0.29, P > .05). CONCLUSIONS The results of the present study illustrate that some CanMEDS Roles are less likely to be assessed than others. At the same time, the amount of missing data did not correlate with resident performance, suggesting that lower-performing residents are no more likely to have missing data than their higher-performing peers. This article discusses several approaches to dealing with missing data.
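The paper's own analysis is not reproduced in this record; as a loose sketch of the two checks described above (per-Role missingness and its relation to performance rank), using synthetic data and hypothetical names throughout:

import numpy as np
import pandas as pd
from scipy.stats import chisquare, spearmanr

rng = np.random.default_rng(42)
roles = ["MedicalExpert", "Communicator", "Collaborator",
         "HealthAdvocate", "Professional"]

# Synthetic stand-in for 844 WBA forms; NaN marks a Role left unassessed.
scores = pd.DataFrame(rng.integers(1, 6, (844, len(roles))).astype(float),
                      columns=roles)
scores = scores.mask(rng.random(scores.shape) < 0.15)

# (1) Is missingness spread evenly across Roles? Chi-square on missing counts.
print(chisquare(scores.isna().sum()))

# (2) Does missingness track performance? Correlate each resident's rank
# with their proportion of missing data (9 synthetic residents).
resident = np.arange(844) % 9
prop_missing = scores.isna().mean(axis=1).groupby(resident).mean()
median_rank = rng.permutation(9) + 1.0   # placeholder faculty rank order
print(spearmanr(median_rank, prop_missing))

A chi-square result with P < .001, as reported above, indicates that some Roles go unassessed far more often than others, while a non-significant rank correlation matches the paper's finding that missingness is unrelated to performance.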
Collapse
Affiliation(s)
| | | | - Teresa M. Chan
- Corresponding author: Teresa M. Chan, BEd, MD, FRCPC, Hamilton General Hospital, McMaster Clinic, Room 254, 237 Barton Street East, Hamilton, ON L8L 2X2, Canada,
| |
Collapse
|
44
|
van Loon KA, Teunissen PW, Driessen EW, Scheele F. The Role of Generic Competencies in the Entrustment of Professional Activities: A Nationwide Competency-Based Curriculum Assessed. J Grad Med Educ 2016; 8:546-552. [PMID: 27777665 PMCID: PMC5058587 DOI: 10.4300/jgme-d-15-00321.1] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Entrustable professional activities (EPAs) seek to translate essential physician competencies into clinical practice. To date, it is not known whether EPA-based curricula offer enhanced assessment and feedback to trainees. OBJECTIVE This study examined program directors' and senior residents' justifications for entrustment decisions and the role that generic, cross-specialty competencies (such as communication skills, collaboration, and understanding health care systems) play in these decisions. METHODS Entrustment decisions for all Dutch obstetrics and gynecology residents between January 2010 and April 2014 were retrieved from their electronic portfolios. Justifications for entrustment were divided into 4 categories: the resident's experience, his or her technical performance, the presence of a generic competency, and training. Template analysis was used to analyze in depth the types of justifications that play a role in entrustment decisions. RESULTS A total of 5139 entrustment decisions for 375 unique residents were extracted and analyzed. In 59% of all entrustment decisions, entrusting a professional task to a resident was justified by the resident's experience. Generic competencies were mentioned in 0.5% of all entrustment decisions. Template analysis revealed that the amount of exposure and technical skill are the leading factors, while the quality of the performance was not reported to have any influence. CONCLUSIONS Entrustment decisions are only rarely based on generic competencies, despite the introduction of competency frameworks and EPAs. For program directors, a leading factor in entrustment decisions is a resident's exposure to an activity; the quality of a resident's performance appears to play only a minor role.
Collapse
Affiliation(s)
- Karsten A. van Loon
- Corresponding author: Karsten A. van Loon, MSc, OLVG West, Jan Tooropstraat 164, 1061 AE, Amsterdam, the Netherlands,
| | | | | | | |
Collapse
|
45
|
Schwartz A, Margolis MJ, Multerer S, Haftel HM, Schumacher DJ. A multi-source feedback tool for measuring a subset of Pediatrics Milestones. MEDICAL TEACHER 2016; 38:995-1002. [PMID: 27027428 DOI: 10.3109/0142159x.2016.1147646] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
BACKGROUND The Pediatrics Milestones Assessment Pilot employed a new multisource feedback (MSF) instrument to assess nine Pediatrics Milestones among interns and subinterns in the inpatient context. OBJECTIVE To report validity evidence for the MSF tool for informing milestone classification decisions. METHODS We obtained MSF instruments completed by different raters for each learner in each rotation. We present evidence for validity based on the unified validity framework. RESULTS One hundred and ninety-two interns and 41 subinterns at 18 Pediatrics residency programs received a total of 1084 MSF forms from faculty (40%), senior residents (34%), nurses (22%), and other staff (4%). Variance in ratings was associated primarily with rater (32%) and learner (22%). The milestone factor structure fit the data better than simpler structures. In all domains except professionalism, ratings by nurses were significantly lower than those by faculty, and ratings by other staff were significantly higher. Ratings were higher when the rater had observed the learner for longer periods and held a positive global opinion of the learner. Ratings of interns and subinterns did not differ, except for ratings by senior residents. MSF-based scales correlated with summative milestone scores. CONCLUSION We obtained moderately reliable MSF ratings of interns and subinterns in the inpatient context to inform some milestone assignments.
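For orientation only (this is a generic generalizability-theory decomposition, not necessarily the exact model the authors fitted), the variance shares quoted above can be read as components of a single rating:

\sigma^2_{\text{rating}} = \sigma^2_{\text{learner}} + \sigma^2_{\text{rater}} + \sigma^2_{\text{residual}}

with \sigma^2_{\text{rater}} / \sigma^2_{\text{rating}} \approx 0.32 and \sigma^2_{\text{learner}} / \sigma^2_{\text{rating}} \approx 0.22. A rater share larger than the learner share means that who fills in the form matters more than who is being rated, which is why multiple raters per learner are pooled before any milestone decision.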
Collapse
Affiliation(s)
- Alan Schwartz
- Department of Medical Education, University of Illinois at Chicago, Chicago, IL, USA
- Association of Pediatric Program Directors, Longitudinal Educational Assessment Research Network, McLean, VA, USA
| | - Melissa J Margolis
- Measurement Consulting Services, National Board of Medical Examiners, Philadelphia, PA, USA
| | - Sara Multerer
- Department of Pediatrics, University of Louisville, Louisville, KY, USA
| | - Hilary M Haftel
- Department of Pediatrics, University of Michigan, Ann Arbor, MI, USA
| | - Daniel J Schumacher
- Department of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
| |
Collapse
|
46
|
|
47
|
Nair BKR, Moonen-van Loon JM, Parvathy MS, van der Vleuten CP. Composite reliability of workplace-based assessment for international medical graduates. Med J Aust 2016; 205:212-6. [PMID: 27581267 DOI: 10.5694/mja16.00069] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2016] [Accepted: 06/03/2016] [Indexed: 11/17/2022]
Abstract
OBJECTIVE The fitness to practise of international medical graduates (IMGs) is usually evaluated with standardised assessment tests. Practising doctors should, however, be assessed on their performance rather than their competency, for which reason workplace-based assessment (WBA) has gained increasing attention. Our aim was to assess the composite reliability of WBA instruments for assessing the performance of IMGs. DESIGN AND SETTING Between June 2010 and April 2015, 142 IMGs were assessed by 99 calibrated assessors; each cohort was assessed at their workplace over 6 months. The IMGs completed 970 case-based discussions (CBDs), 1741 Mini-Clinical Evaluation Exercises (mini-CEX) and 1020 multisource feedback (MSF) sessions. PARTICIPANTS 103 male and 39 female candidates based in urban and rural hospitals of the Hunter New England Health region, from 28 countries across Africa, Asia, Europe, South America and the South Pacific. MAIN OUTCOME MEASURES The reliability of the three WBA tools; the composite reliability of the tools as a group. RESULTS The composite reliability of our WBA toolbox program was good: the composite reliability coefficient for five CBDs and 12 mini-CEX was 0.895 (standard error of measurement, 0.138). When the six MSF results were included, the composite reliability coefficient was 0.899 (standard error of measurement, 0.125). CONCLUSIONS WBA is a reliable method for assessing IMGs when multiple tools and assessors are used over a period of time. This form of assessment meets the criteria for "good assessment" (reliability ≥ 0.8) and can be applied in other settings.
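The composite reliability coefficient combines the per-instrument reliabilities of a weighted battery into one figure. One classical formulation is Mosier's formula for the reliability of a weighted composite (offered here as background; the paper's exact model may differ):

\rho_C = 1 - \frac{\sum_i w_i^2 \sigma_i^2 (1 - \rho_i)}{\sigma_C^2},
\qquad
\sigma_C^2 = \sum_i \sum_j w_i w_j \sigma_{ij}

where w_i is the weight given to instrument i (e.g. CBD, mini-CEX or MSF), \sigma_i^2 its score variance, \rho_i its individual reliability, and \sigma_{ij} the covariance between instruments i and j. Adding instruments of modest individual reliability can still raise \rho_C, which is consistent with the small 0.895 to 0.899 shift reported above when MSF was added.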
Collapse
Affiliation(s)
| | | | - Mulavana S Parvathy
- Centre for Medical Professional Development, John Hunter Hospital, Newcastle, NSW
| | | |
Collapse
|
48
|
Fraser AB, Stodel EJ, Chaput AJ. Curriculum reform for residency training: competence, change, and opportunities for leadership. Can J Anaesth 2016; 63:875-84. [DOI: 10.1007/s12630-016-0637-7] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2015] [Revised: 01/23/2016] [Accepted: 03/22/2016] [Indexed: 11/30/2022] Open
|
49
|
Moore K, Vaughan B. Assessment of Australian osteopathic learners' clinical competence during workplace learning. INT J OSTEOPATH MED 2016. [DOI: 10.1016/j.ijosm.2015.06.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
50
|
Abstract
BACKGROUND For resident doctors, the acquisition of technical and professional competence is decisive for the successful practice of their specialty. Residents' competency and professional development benefit from regular self-reflection and assessment by peers. While often promoted and recommended by national educational authorities, the implementation of a robust evaluation process in clinical routine is often counteracted by several factors. OBJECTIVE The aim of the study was to test a self-developed digital evaluation system for the assessment of radiology residents at our institute, with regard to practicality and impact on radiology training. MATERIAL AND METHODS The intranet-based evaluation system was implemented in January 2014, allowing all radiology consultants to submit a structured assessment of the radiology residents according to standardized criteria. It comprised 7 areas of competency and 31 questions, as well as a self-assessment module, both completed electronically every 3 months on a 10-point scale with the opportunity for free-text comments. The results of the residents' mandatory self-evaluation were displayed alongside the supervisor's evaluation. Access to the results was restricted, and the quarterly discussions with the residents were conducted confidentially and individually. RESULTS AND DISCUSSION The system was considered practical to use and stable in its functionality. A centrally conducted, anonymous national survey of residents revealed a noticeable improvement in satisfaction with the institute's assessment on the criterion "regular feedback" compared with the national average. Since its implementation the system has been further developed and extended, and is now available to other institutions.
Collapse
Affiliation(s)
- O Kolokythas
- Institut für Radiologie und Nuklearmedizin, Kantonsspital Winterthur, Brauerstr. 15, Postfach 834, 8401, Winterthur, Schweiz.
| | - R Patzwahl
- Institut für Radiologie und Nuklearmedizin, Kantonsspital Winterthur, Brauerstr. 15, Postfach 834, 8401, Winterthur, Schweiz
| | - M Straka
- Institut für Radiologie und Nuklearmedizin, Kantonsspital Winterthur, Brauerstr. 15, Postfach 834, 8401, Winterthur, Schweiz
| | - C Binkert
- Institut für Radiologie und Nuklearmedizin, Kantonsspital Winterthur, Brauerstr. 15, Postfach 834, 8401, Winterthur, Schweiz
| |
Collapse
|