1
Jiyed O, Alami A, Maskour L, El Batri B, Benjelloun N, Zaki M. Students' approaches to learning (SALs): Validation and psychometric properties of a tool measurement. Journal of Education and Health Promotion 2023; 12:228. [PMID: 37727427] [PMCID: PMC10506742] [DOI: 10.4103/jehp.jehp_203_23]
Abstract
BACKGROUND Deep learning is an important outcome of higher education and is largely determined by students' approaches to learning (SALs). The latest version of the Study Process Questionnaire (SPQ) is one of the most widely used instruments for assessing SALs. Many studies from various contexts have validated or used this well-known tool, but, to the best of our knowledge, none stems from the Moroccan tertiary context. The current study fills this gap by first producing a local translation of the questionnaire following a standardized methodological process and then re-examining the validity and psychometric properties of the construct. MATERIALS AND METHODS An Arabic back-translation was performed, and data were collected from tertiary science students. Descriptive statistics, Cronbach's coefficient alpha, and confirmatory factor analysis were carried out in SPSS version 22. RESULTS The two-factor construct (deep and surface) showed a strong fit, whereas the hierarchical models fit poorly. CONCLUSIONS Following psychometric validation standards, this Arabic version should be used only as a first-order factor model to evaluate the deep and surface approaches in the Moroccan tertiary education context.
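For readers unfamiliar with the reliability statistic reported here, Cronbach's coefficient alpha can be computed from an item-score matrix in a few lines; the response matrix below is hypothetical, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

# Hypothetical responses: 4 students x 3 Likert items
responses = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
])
```

With these invented responses, `cronbach_alpha(responses)` gives roughly 0.89, above the conventional 0.7 threshold.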
Affiliation(s)
- Omar Jiyed
- LIMOME, Department of Chemistry, Faculty of Sciences Dhar Mahraz, Sidi Mohammed Ben Abdellah University, Fez, Morocco
- Anouar Alami
- LIMOME, Department of Chemistry, Faculty of Sciences Dhar Mahraz, Sidi Mohammed Ben Abdellah University, Fez, Morocco
- Lhoussaine Maskour
- LRST, High School of Education and Training (ESEF), Ibn Zohr University, Agadir, Morocco
- Bouchta El Batri
- Regional Center for Education and Training Professions (CRMEF Fez-Meknes), Fez, Morocco
- Nadia Benjelloun
- LISAC, Departments of Physics and Mathematics, Faculty of Sciences Dhar Mahraz, Sidi Mohammed Ben Abdellah University, Fez, Morocco
- Moncef Zaki
- LISAC, Departments of Physics and Mathematics, Faculty of Sciences Dhar Mahraz, Sidi Mohammed Ben Abdellah University, Fez, Morocco
2
Jamieson J, Gibson S, Hay M, Palermo C. Teacher, Gatekeeper, or Team Member: supervisor positioning in programmatic assessment. Advances in Health Sciences Education: Theory and Practice 2022. [PMID: 36469231] [DOI: 10.1007/s10459-022-10193-9]
Abstract
Competency-based assessment is undergoing an evolution with the popularisation of programmatic assessment. Fundamental to programmatic assessment are the attributes and buy-in of the people participating in the system. Our previous research revealed unspoken, yet influential, cultural and relationship dynamics that interact with programmatic assessment to influence success. Pulling at this thread, we conducted a secondary analysis of focus groups and interviews (n = 44 supervisors) using the critical lens of Positioning Theory to explore how workplace supervisors experienced and perceived their positioning within programmatic assessment. We found that supervisors positioned themselves in two of three ways. First, supervisors universally positioned themselves as a Teacher, describing an inherent duty to educate students. Enactment of this position was dichotomous: some supervisors ascribed a passive and disempowered position onto students, while others empowered students by cultivating an egalitarian teaching relationship. Second, two mutually exclusive positions were described, either Gatekeeper or Team Member. Supervisors positioning themselves as Gatekeepers had a duty to protect the community and were vigilant in detecting inadequate student performance. Programmatic assessment challenged this positioning by reorienting supervisor rights and duties, which diminished their perceived authority and led to frustration and resistance. In contrast, Team Members enacted a right to make a valuable contribution to programmatic assessment and felt liberated from the burden of assessment, enabling them to accept power shifts towards students and the university. Identifying supervisor positions revealed how programmatic assessment challenged traditional structures and ideologies, impeding success, and provided insights into supporting supervisors in programmatic assessment.
Affiliation(s)
- Janica Jamieson
- Monash University, Melbourne, Australia.
- School of Medical and Health Sciences, Edith Cowan University, 270 Joondalup Drive, Joondalup, WA, 6027, Australia.
3
Andrews J, Chartash D, Hay S. Gender bias in resident evaluations: Natural language processing and competency evaluation. Medical Education 2021; 55:1383-1387. [PMID: 34224606] [DOI: 10.1111/medu.14593]
Abstract
BACKGROUND Research shows that female trainees experience evaluation penalties for gender non-conforming behaviour during medical training. Studies of medical education evaluations and performance scores do reflect a gender bias, though studies vary in methodology and results have not been consistent. OBJECTIVE We sought to examine differences in word use, competency themes and length within written evaluations of internal medicine residents at scale, considering the impact of both faculty and resident gender. We hypothesised that female internal medicine residents receive more negative, and thematically different, feedback than male residents. METHODS This study utilised a corpus of 3864 individual responses to positive and negative questions over six years (2012-2018) within Yale University School of Medicine's internal medicine residency. Researchers developed a sentiment model to assess the valence of evaluation responses. We then used natural language processing (NLP) to evaluate whether female or male residents received more positive or negative feedback, and whether that feedback focussed on different Accreditation Council for Graduate Medical Education (ACGME) core competencies based on their gender. Evaluator-evaluatee gender dyads were analysed for their impact on the quantity and quality of feedback. RESULTS We found that female and male residents did not have substantively different numbers of positive or negative comments. While certain competencies were discussed more than others, gender did not appear to influence which competencies were discussed. Neither group of trainees received more written feedback, though female evaluators tended to write longer evaluations. CONCLUSIONS We conclude that, when examined at scale, quantitative gender differences are not as prevalent as has been seen in qualitative work. We suggest that further investigation of linguistic phenomena (such as context) is warranted to reconcile this finding with prior work.
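The valence-scoring idea can be illustrated with a deliberately simple lexicon-based scorer; the word lists and sample comments are invented for illustration and are far cruder than the sentiment model the authors built.

```python
# Hypothetical lexicons; a real model would be trained on labelled evaluations.
POSITIVE = {"excellent", "strong", "thorough", "reliable", "compassionate"}
NEGATIVE = {"weak", "disorganized", "late", "unprepared", "abrasive"}

def valence(comment: str) -> int:
    """Net valence: +1 per positive token, -1 per negative token."""
    tokens = [t.strip(".,") for t in comment.lower().split()]
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
```

For example, `valence("Thorough and reliable, but occasionally late.")` nets +1 (two positive hits minus one negative), the kind of per-comment score that can then be aggregated by resident or evaluator gender.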
Affiliation(s)
- Jane Andrews
- Department of Internal Medicine, The University of Texas Health Science Center at Houston John P and Katherine G McGovern Medical School, Houston, TX, USA
- David Chartash
- Center for Medical Informatics, Yale University School of Medicine, New Haven, CT, USA
- Seonaid Hay
- Department of Internal Medicine, Yale University School of Medicine, New Haven, CT, USA
4
Tavares W, Hodwitz K, Rowland P, Ng S, Kuper A, Friesen F, Shwetz K, Brydges R. Implicit and inferred: on the philosophical positions informing assessment science. Advances in Health Sciences Education: Theory and Practice 2021; 26:1597-1623. [PMID: 34370126] [DOI: 10.1007/s10459-021-10063-w]
Abstract
Assessment practices have been increasingly informed by a range of philosophical positions. While generally beneficial, the addition of options can lead to misalignment in the philosophical assumptions associated with different features of assessment (e.g., the nature of constructs and competence, ways of assessing, validation approaches). Such incompatibility can threaten the quality and defensibility of researchers' claims, especially when left implicit. We investigated how authors state and use their philosophical positions when designing and reporting on performance-based assessments (PBA) of intrinsic roles, as well as the (in)compatibility of assumptions across assessment features. Using a representative sample of studies examining PBA of intrinsic roles, we used qualitative content analysis to extract data on how authors enacted their philosophical positions across three key assessment features: (1) construct conceptualizations, (2) assessment activities, and (3) validation methods. We also examined patterns in philosophical positioning across features and studies. In reviewing 32 papers from established peer-reviewed journals, we found that (a) authors rarely reported their philosophical positions, meaning underlying assumptions could only be inferred; (b) authors approached features of assessment in variable ways that could be informed by, or associated with, different philosophical assumptions; and (c) we experienced uncertainty in determining the (in)compatibility of philosophical assumptions across features. Authors' philosophical positions were often vague or absent in the selected contemporary assessment literature. Leaving such details implicit may lead to misinterpretation by knowledge users wishing to implement, build on, or evaluate the work. As such, assessing claims' quality and defensibility may increasingly depend more on who is interpreting than on what is being interpreted.
Affiliation(s)
- Walter Tavares
- The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, Institute for Health Policy, Management and Evaluation, University of Toronto/University Health Network, Toronto, Ontario, Canada.
- Kathryn Hodwitz
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Ontario, Canada
- Paula Rowland
- The Wilson Centre, Temerty Faculty of Medicine, Department of Occupational Therapy and Occupational Science, University of Toronto/University Health Network, Toronto, Ontario, Canada
- Stella Ng
- The Wilson Centre, Temerty Faculty of Medicine, Department of Speech-Language Pathology, University of Toronto; Centre for Faculty Development, Unity Health Toronto, Toronto, Ontario, Canada
- Ayelet Kuper
- The Wilson Centre, University Health Network/University of Toronto; Division of General Internal Medicine, Sunnybrook Health Sciences Centre; Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Farah Friesen
- Centre for Faculty Development, Temerty Faculty of Medicine, University of Toronto at Unity Health Toronto, Toronto, Ontario, Canada
- Katherine Shwetz
- Department of English, University of Toronto, Toronto, Ontario, Canada
- Ryan Brydges
- The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, Unity Health Toronto, University of Toronto, Toronto, Ontario, Canada
5
Pearce J, Tavares W. A philosophical history of programmatic assessment: tracing shifting configurations. Advances in Health Sciences Education: Theory and Practice 2021; 26:1291-1310. [PMID: 33893881] [DOI: 10.1007/s10459-021-10050-1]
Abstract
Programmatic assessment is now well entrenched in medical education, allowing us to reflect on when it first emerged and how it evolved into the form we know today. Drawing upon the intellectual tradition of historical epistemology, we provide a philosophically oriented historiographical study of programmatic assessment. Our goal is to trace its relatively short historical trajectory by describing shifting configurations in its scene of inquiry, focusing on questions, practices, and philosophical presuppositions. We identify three historical phases: emergence, evolution, and entrenchment. For each, we describe the configurations of the scene, examine the underlying philosophical presuppositions driving changes, and detail the upshots for assessment practice. We find that programmatic assessment emerged in response to positivist 'turmoil' prior to 2005, driven by utility considerations and implicit pragmatist undertones. Once introduced, it evolved with notions of diversity and learning being underscored, and a constructivist ontology developing at its core. More recently, programmatic assessment has become entrenched as its own sub-discipline: rich narratives have been emphasised, but philosophical underpinnings have been blurred. We hope to shed new light on current assessment practices in the medical education community by interrogating the history of programmatic assessment from this philosophical vantage point. Making philosophical presuppositions explicit highlights the perspectival nature of aspects of programmatic assessment and suggests reasons for perceived benefits, as well as potential tensions, contradictions, and vulnerabilities in the approach today. We conclude by offering some reflections on important points to emerge from our historical study and suggest 'what next' for programmatic assessment in light of this endeavour.
Affiliation(s)
- J Pearce
- Tertiary Education (Assessment), Australian Council for Educational Research, 19 Prospect Hill Road, Camberwell, VIC, 3124, Australia.
- W Tavares
- The Wilson Centre and Post-MD Education, University Health Network and University of Toronto, Toronto, ON, Canada
6
On Educational Assessment Theory: A High-Level Discussion of Adolphe Quetelet, Platonism, and Ergodicity. Philosophies 2021. [DOI: 10.3390/philosophies6020046]
Abstract
Educational assessments, specifically standardized and normalized exams, owe most of their foundations to psychological test theory in psychometrics. While the theoretical assumptions of these practices are widespread and relatively uncontroversial in the testing community, there are at least two that are philosophically and mathematically suspect and have troubling implications in education. Assumption 1 is that repeated assessment measures, aggregated into an arithmetic mean, represent some real, stable, quantitative psychological trait or ability plus some error. Assumption 2 is that aggregated, group-level educational data collected from assessments can be interpreted to make inferences about a given individual over time without explicit justification. It is argued that the former assumption cannot be taken for granted; it is also argued that, while typically attributed to 20th-century thought, the assumption in a rigorous form can be traced back at least to the 1830s via an unattractive Platonistic statistical thesis offered by one of the founders of the social sciences, the Belgian mathematician Adolphe Quetelet (1796-1874). While contemporary research has moved away from using his work directly, it is demonstrated that cognitive psychology still preserves assumption 1, which is increasingly challenged by current paradigms that frame human cognition as a dynamical, complex system. However, how to deal with assumption 1, and whether it is broadly justified, is left as an open question. It is then argued that assumption 2 is justified only when assessments have ergodic properties, a criterion rarely met in education; specifically, some forms of normalized standardized exams are intrinsically non-ergodic and should be considered invalid for drawing conclusions about individual students and their capability. The article closes with a call for introducing dynamical mathematics into educational assessment at a conceptual level (e.g., through Bayesian networks), for critical analysis of several key psychological testing assumptions, and for introducing dynamical language into philosophical discourse. These prima facie distinct areas ought to inform each other more closely in educational studies.
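The non-ergodicity argument can be made concrete with a toy simulation (all parameters invented): when each student carries a stable individual offset, the ensemble mean across students on one occasion says little about any single student's time average, and the spread of time averages across students stays large.

```python
import random

def simulate(n_students=500, n_occasions=200, seed=7):
    """Ensemble mean on one occasion vs. spread of per-student time averages."""
    rng = random.Random(seed)
    abilities = [rng.gauss(0, 5) for _ in range(n_students)]  # stable offsets
    scores = [[a + rng.gauss(0, 1) for _ in range(n_occasions)]
              for a in abilities]
    time_means = [sum(s) / n_occasions for s in scores]       # per student
    ensemble_mean = sum(s[0] for s in scores) / n_students    # one occasion
    spread = sum((m - ensemble_mean) ** 2 for m in time_means) / n_students
    return ensemble_mean, spread
```

In an ergodic process the spread of time averages would shrink toward the noise variance divided by the number of occasions (about 0.005 here); with stable individual differences it stays near the between-student variance (about 25), so group-level means cannot stand in for individual trajectories.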
7
Cain-Shields LR, Johnson DA, Glover L, Sims M. The association of goal-striving stress with sleep duration and sleep quality among African Americans in the Jackson Heart Study. Sleep Health 2020; 6:117-123. [PMID: 31734287] [PMCID: PMC6995417] [DOI: 10.1016/j.sleh.2019.08.007]
Abstract
BACKGROUND African Americans (AAs) report a higher frequency of certain stressors over their lifetimes, which may affect biological processes that impair sleep. For this reason, goal-striving stress (GSS), the difference between aspiration and achievement, weighted by disappointment, may contribute to poor sleep quality and suboptimal sleep duration among AAs. METHODS We completed a cross-sectional analysis using exam 1 data (2000-2004) from the Jackson Heart Study (JHS) (n = 4943). GSS was self-reported and categorized into tertiles of low, moderate, and high. Participants self-reported the number of hours they slept each night and rated their sleep quality from (1) very poor to (5) excellent. Sleep duration categories were: short sleep (≤6 hours), normal sleep (7-8 hours), and long sleep (≥9 hours). Sleep quality was categorized as high (good/very good/excellent) or low (fair/poor). Relative risk ratios (RRRs) with 95% confidence intervals (CIs) were estimated for sleep duration and sleep quality categories by GSS using logistic regression. RESULTS After full adjustment, there were no significant associations between GSS and sleep duration categories. However, participants who reported high (versus low) GSS had a 20% greater risk (RRR = 1.20, 95% CI: 1.01, 1.43) of low (versus high) sleep quality in the fully adjusted model. CONCLUSION Stress due to the deficit between goal aspiration and achievement was associated with poor sleep quality. Future investigations should examine whether changes in GSS are associated with changes in sleep duration and sleep quality.
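The risk-ratio arithmetic behind such results can be sketched from a 2x2 table; the counts below are hypothetical, and unlike the study's RRRs (which come from adjusted regression models) this crude ratio is unadjusted for covariates.

```python
# Hypothetical counts (not JHS data): per GSS level,
# (n with low sleep quality, n with high sleep quality).
counts = {"low": (300, 1200), "high": (420, 1180)}

def risk(low_quality: int, high_quality: int) -> float:
    """Proportion of the group with the outcome (low sleep quality)."""
    return low_quality / (low_quality + high_quality)

def risk_ratio(exposed, reference) -> float:
    """Crude risk ratio: outcome risk in exposed vs. reference group."""
    return risk(*exposed) / risk(*reference)
```

Here the high-GSS group's risk is 420/1600 = 0.2625 against 300/1500 = 0.20 in the low-GSS group, a crude ratio of about 1.31.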
Affiliation(s)
- Loretta R Cain-Shields
- Department of Data Science, John D Bower School of Population Health, University of Mississippi Medical Center, 2500 North State St., Jackson, MS, 39216, USA.
- Dayna A Johnson
- Department of Epidemiology, Rollins School of Public Health, Emory University, 1518 Clifton Road NE, Atlanta, GA, 30322, USA
- LáShauntá Glover
- Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27516, USA
- Mario Sims
- Department of Medicine, School of Medicine, University of Mississippi Medical Center, 2500 North State Street, Jackson, MS, 39216, USA
8
Pearce J. In defence of constructivist, utility-driven psychometrics for the 'post-psychometric era'. Medical Education 2020; 54:99-102. [PMID: 31867758] [DOI: 10.1111/medu.14039]
Affiliation(s)
- Jacob Pearce
- Australian Council for Educational Research - Assessment and Psychometric Research, Camberwell, Victoria, Australia
9
Park YS, Morales A, Ross L, Paniagua M. Reporting Subscore Profiles Using Diagnostic Classification Models in Health Professions Education. Evaluation & the Health Professions 2019; 43:149-158. [PMID: 31462073] [DOI: 10.1177/0163278719871090]
Abstract
Learners and educators in the health professions have called for more fine-grained information (subscores) from assessments, beyond a single overall test score. However, owing to concerns over reliability, subscores have seen limited use in practice. Recent advances in latent class analysis have contributed to subscore reporting through diagnostic classification models (DCMs), which allow reliable classification of examinees into fine-grained proficiency levels (subscore profiles). This study examines the innovative and practical application of the DCM framework to health professions educational assessments using retrospective large-scale assessment data from the basic and clinical sciences: National Board of Medical Examiners Subject Examinations in pathology (n = 2,006) and medicine (n = 2,351). DCMs were fit and analyzed to generate subscores and subscore profiles of examinees. Model fit indices, classification reliability, and parameter estimates indicated that the DCMs had good psychometric properties, including consistent classification of examinees into subscore profiles. Results showed a range of useful information, including varying levels of subscore distributions. The DCM framework is a promising approach for reporting subscores in health professions education: consistency of classification was high, demonstrating reliable results at fine-grained subscore levels and allowing targeted, specific feedback to learners.
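A minimal sketch of how a DCM classifies examinees into mastery profiles, here using DINA-style slip/guess parameters; the Q-matrix, parameter values, and item set are illustrative, not those fitted to the NBME data.

```python
from itertools import product

# Q-matrix: which of 2 skills each of 3 items requires (illustrative).
Q = [(1, 0), (0, 1), (1, 1)]
SLIP, GUESS = 0.1, 0.2  # P(wrong | mastered), P(right | not mastered)

def p_correct(profile, item_skills):
    """DINA item-response rule: full mastery of required skills or guessing."""
    mastered = all(p >= s for p, s in zip(profile, item_skills))
    return 1 - SLIP if mastered else GUESS

def classify(responses):
    """Return the skill-mastery profile maximising the response likelihood."""
    best, best_l = None, -1.0
    for profile in product((0, 1), repeat=2):
        l = 1.0
        for r, skills in zip(responses, Q):
            p = p_correct(profile, skills)
            l *= p if r == 1 else 1 - p
        if l > best_l:
            best, best_l = profile, l
    return best
```

For example, an examinee answering only item 1 correctly is classified as mastering skill 1 but not skill 2, the kind of fine-grained profile the study reports in place of a single score.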
Affiliation(s)
- Yoon Soo Park
- Department of Medical Education, College of Medicine, University of Illinois at Chicago, IL, USA
- Amy Morales
- National Board of Medical Examiners, Philadelphia, PA, USA
- Linette Ross
- National Board of Medical Examiners, Philadelphia, PA, USA
10
Pearce J. Psychometrics in action, science as practice. Advances in Health Sciences Education: Theory and Practice 2018; 23:653-663. [PMID: 28752439] [DOI: 10.1007/s10459-017-9789-7]
Abstract
Practitioners in health sciences education and assessment regularly use a range of psychometric techniques to analyse data, evaluate models, and make crucial progression decisions regarding student learning. However, a recent editorial entitled "Is Psychometrics Science?" highlighted some core epistemological and practical problems in psychometrics, and brought its legitimacy into question. This paper attempts to address these issues by applying some key ideas from history and philosophy of science (HPS) discourse. I present some of the conceptual developments in HPS that have bearing on the psychometrics debate. Next, by shifting the focus onto what constitutes the practice of science, I discuss psychometrics in action. Some incorrectly conceptualize science as an assemblage of truths, rather than an assemblage of tools and goals. Psychometrics, however, seems to be an assemblage of methods and techniques. Psychometrics in action represents a range of practices using specific tools in specific contexts. This does not render the practice of psychometrics meaningless or futile. Engaging in debates about whether or not we should regard psychometrics as 'scientific' is, however, a fruitless enterprise. The key question and focus should be whether, on what grounds, and in what contexts, the existing methods and techniques used by psychometricians can be justified or criticized.
Affiliation(s)
- Jacob Pearce
- Assessment and Psychometric Research, Australian Council for Educational Research, 19 Prospect Hill Rd, Camberwell, VIC, 3124, Australia.
- History and Philosophy of Science, School of Historical and Philosophical Studies, University of Melbourne, Parkville, VIC, 3010, Australia.
11
12
Fahim C, Wagner N, Nousiainen MT, Sonnadara R. Assessment of Technical Skills Competence in the Operating Room: A Systematic and Scoping Review. Academic Medicine 2018; 93:794-808. [PMID: 28953567] [DOI: 10.1097/acm.0000000000001902]
Abstract
PURPOSE While academic accreditation bodies continue to promote competency-based medical education (CBME), the feasibility of conducting regular CBME assessments remains challenging. The purpose of this study was to identify evidence pertaining to the practical application of assessments that aim to measure technical competence for surgical trainees in a nonsimulated, operative setting. METHOD In August 2016, the authors systematically searched Medline, Embase, and the Cochrane Database of Systematic Reviews for English-language, peer-reviewed articles published in or after 1996. The title, abstract, and full text of identified articles were screened. Data regarding study characteristics, psychometric and measurement properties, implementation of assessment, competency definitions, and faculty training were extracted. The findings from the systematic review were supplemented by a scoping review to identify key strategies related to faculty uptake and implementation of CBME assessments. RESULTS A total of 32 studies were included. The majority reported acceptable interrater reliability and internal consistency. Seven articles identified minimum scores required to establish competence. Twenty-five articles mentioned faculty training; many of the faculty training interventions focused on timely completion of assessments or scale calibration. CONCLUSIONS Diverse tools are used to assess competence in intraoperative technical skills, and there is a lack of consensus on the definition of technical competence within and across surgical specialties. Further work is required to identify when and how often trainees should be assessed and to identify strategies for training faculty to ensure timely and accurate assessment.
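Interrater reliability of the kind summarised here is often reported as Cohen's kappa, which corrects raw agreement for chance; the ratings below are hypothetical.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgements on the same cases."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Two attendings' hypothetical competent(1)/not-yet(0) ratings of 4 trainees
print(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]))  # -> 0.5
```

Raw agreement here is 75%, but because both raters mostly say "not yet" the chance-corrected kappa drops to 0.5, which is why reviews report kappa rather than percent agreement alone.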
Affiliation(s)
- Christine Fahim
- C. Fahim is a PhD candidate, Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada. N. Wagner is a PhD candidate, Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada. M.T. Nousiainen is an orthopedic surgeon and assistant professor, Sunnybrook Hospital, Department of Surgery, University of Toronto, Toronto, Ontario, Canada. R. Sonnadara is director of education science and associate professor, Department of Surgery, McMaster University, Hamilton, Ontario, Canada, and associate professor, Department of Surgery, University of Toronto, Toronto, Ontario, Canada; ORCID: http://orcid.org/0000-0001-8318-5714
13
de Jonge LPJWM, Timmerman AA, Govaerts MJB, Muris JWM, Muijtjens AMM, Kramer AWM, van der Vleuten CPM. Stakeholder perspectives on workplace-based performance assessment: towards a better understanding of assessor behaviour. Advances in Health Sciences Education: Theory and Practice 2017; 22:1213-1243. [PMID: 28155004] [PMCID: PMC5663793] [DOI: 10.1007/s10459-017-9760-7]
Abstract
Workplace-Based Assessment (WBA) plays a pivotal role in present-day competency-based medical curricula. Validity in WBA depends mainly on how stakeholders (e.g. clinical supervisors and learners) use the assessments, rather than on the intrinsic qualities of instruments and methods. Current research on assessment in clinical contexts suggests that the variable behaviours of both assessors and learners during performance assessment may well reflect their respective beliefs about and perspectives on WBA. We therefore performed a Q methodological study to explore the perspectives underlying stakeholders' behaviours in WBA in a postgraduate medical training program. Five distinct perspectives on performance assessment were extracted: Agency, Mutuality, Objectivity, Adaptivity and Accountability. These perspectives reflect both differences and similarities in stakeholder perceptions and preferences regarding the utility of WBA. In comparing and contrasting the various perspectives, we identified two key areas of disagreement: the locus of regulation of learning (self-regulated versus externally regulated learning) and the extent to which assessment should be standardised (tailored versus standardised assessment). Differing perspectives may variously affect stakeholders' acceptance, use, and consequently the effectiveness, of assessment programmes. Continuous interaction between all stakeholders is essential to monitor, adapt and improve assessment practices and to stimulate the development of a shared mental model. A better understanding of underlying stakeholder perspectives could be an important step in bridging the gap between psychometric and socio-constructivist approaches to WBA.
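The first computational step of a Q methodological analysis is correlating participants' Q-sorts person by person; the resulting matrix is then factored to extract shared perspectives. The sorts below are invented for illustration.

```python
import numpy as np

# Each row is one participant's forced ranking of 5 statements (hypothetical).
sorts = np.array([
    [ 2,  1,  0, -1, -2],   # participant A
    [ 2,  0,  1, -2, -1],   # participant B (similar to A)
    [-2, -1,  0,  1,  2],   # participant C (mirror image of A)
])

# Correlate people with each other (rows as variables), not items:
person_corr = np.corrcoef(sorts)
```

A and B correlate strongly (0.8) and would likely load on the same factor, i.e. share a perspective, while C's correlation of -1.0 with A marks an opposing viewpoint.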
Affiliation(s)
- Laury P J W M de Jonge
- Department of Family Medicine, FHML, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands.
- Angelique A Timmerman
- Department of Family Medicine, FHML, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Marjan J B Govaerts
- Department of Educational Research and Development, FHML, Maastricht University, Maastricht, The Netherlands
- Jean W M Muris
- Department of Family Medicine, FHML, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Arno M M Muijtjens
- Department of Educational Research and Development, FHML, Maastricht University, Maastricht, The Netherlands
- Anneke W M Kramer
- Department of Family Medicine, Leiden University, Leiden, The Netherlands
- Cees P M van der Vleuten
- Department of Educational Research and Development, FHML, Maastricht University, Maastricht, The Netherlands
14
Schoenherr JR. Scientific integrity in research methods. Frontiers in Psychology 2015; 6:1562. [PMID: 26578994] [PMCID: PMC4630561] [DOI: 10.3389/fpsyg.2015.01562]