1. Cleland J, Blitz J, Cleutjens KBJM, Oude Egbrink MGA, Schreurs S, Patterson F. Robust, defensible, and fair: The AMEE guide to selection into medical school: AMEE Guide No. 153. Medical Teacher 2023;45:1071-1084. PMID: 36708606. DOI: 10.1080/0142159x.2023.2168529.
Abstract
Selection is the first assessment of medical education and training. Medical schools must select from a pool of academically successful applicants and ensure that the way in which they choose future clinicians is robust, defensible, fair to all who apply and cost-effective. However, there is no comprehensive and evidence-informed guide to help those tasked with setting up or rejuvenating their local selection process. To address this gap, our guide draws on the latest research, international case studies and consideration of common dilemmas to provide practical guidance for designing, implementing and evaluating an effective medical school selection system. We draw on a model from the field of instructional design, the ADDIE model, to frame the many different activities involved. ADDIE provides a systematic framework of Analysis (of the outcomes to be achieved by the selection process, and the barriers and facilitators to achieving these), Design (what tools and content are needed so that the goals of selection are achieved), Development (what materials and resources are needed and available), Implementation (plan [including piloting], do, study and adjust) and Evaluation (quality assurance is embedded throughout, but the last step involves extensive evaluation of the entire process and its outcomes).
HIGHLIGHTS: Robust, defensible and fair selection into medical school is essential. This guide systematically covers the processes required to achieve this, from needs analysis through design, development and implementation, to evaluation of the success of a selection process.
Affiliation(s)
- J Cleland: Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- J Blitz: Centre for Health Professions Education, Faculty of Medicine and Health Sciences, Stellenbosch University, Stellenbosch, South Africa
- K B J M Cleutjens: School of Health Professions Education, Maastricht University, the Netherlands
- M G A Oude Egbrink: School of Health Professions Education, Maastricht University, the Netherlands
- S Schreurs: School of Health Professions Education, Maastricht University, the Netherlands; Centrum for Evidence Based Education, University of Utrecht, the Netherlands
2. Renaud JS, Bourget M, St-Onge C, Eva KW, Tavares W, Salvador Loye A, Leduc JM, Homer M. Effect of station format on the psychometric properties of Multiple Mini Interviews. Medical Education 2022;56:1042-1050. PMID: 35701388. DOI: 10.1111/medu.14855.
Abstract
BACKGROUND Given the widespread use of Multiple Mini Interviews (MMIs), their impact on the selection of candidates and the considerable resources invested in preparing and administering them, it is essential to ensure their quality. Given the variety of station formats in use and the degree to which this factor lies within the control of training programmes, how little is known about the effect of format on MMI quality is a considerable oversight. This study assessed the effect of two popular station formats (interview vs. role-play) on the psychometric properties of MMIs. METHODS We analysed candidate data from the first 8 years of the Integrated French MMIs (IF-MMI) (2010-2017, n = 11,761 applicants), an MMI organised yearly by three francophone universities and administered at four testing sites located in two Canadian provinces. There were 84 role-play and 96 interview stations administered, totalling 180 stations. Mixed-design analyses of variance (ANOVAs) were used to test the effect of station format on candidates' scores and stations' discrimination. Cronbach's alpha coefficients for interview and role-play stations were also compared. Predictive validity of both station formats was estimated with a mixed multiple linear regression model testing the relation of interview and role-play scores with average clerkship performance for those who gained entry to medical school (n = 462). RESULTS Role-play stations (M = 20.67, standard deviation [SD] = 3.38) had a slightly lower mean score than interview stations (M = 21.36, SD = 3.08), p < 0.01, Cohen's d = 0.2. The correlation between role-play and interview station scores was r = 0.5 (p < 0.01). Discrimination coefficients, Cronbach's alpha and predictive validity statistics did not vary by station format. CONCLUSION Interview and role-play stations have comparable psychometric properties, suggesting that the two formats are interchangeable. Programmes should select station format based on its match to the personal qualities for which they are trying to select.
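The two statistics at the heart of this comparison are easy to state concretely. The sketch below computes Cronbach's alpha across stations and Cohen's d between two station formats in plain Python; the scores are invented for illustration and are not the IF-MMI data.

```python
# Sketch: Cronbach's alpha across stations and Cohen's d between two
# station formats. Data are invented toy scores, NOT the IF-MMI data.

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    # Sample variance (n - 1 denominator).
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one score list per station, same candidates in each."""
    k = len(items)
    totals = [sum(st[i] for st in items) for i in range(len(items[0]))]
    item_var = sum(variance(st) for st in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

def cohens_d(a, b):
    """Standardized mean difference using a pooled SD."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled ** 0.5

# Toy data: 6 candidates scored on 3 stations (rows = stations).
stations = [
    [21, 18, 24, 19, 22, 20],
    [20, 17, 23, 18, 21, 19],
    [22, 18, 25, 20, 23, 21],
]
alpha = cronbach_alpha(stations)

interview = [21, 22, 20, 23, 21, 22]
role_play = [20, 21, 19, 22, 20, 21]
d = cohens_d(interview, role_play)
```

With real MMI data each row would hold one station's scores across all candidates, and `d` would be computed between format-level mean scores as in the study.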
Affiliation(s)
- Jean-Sébastien Renaud: Department of Family and Emergency Medicine, Office of Education and Continuing Professional Development, VITAM Research Center, Université Laval, Quebec City, Quebec, Canada
- Martine Bourget: Department of Psychiatry and Neurosciences, Université Laval, Quebec City, Quebec, Canada
- Christina St-Onge: Department of Medicine, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Kevin W Eva: Centre for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada
- Walter Tavares: Wilson Center, University of Toronto, Toronto, Ontario, Canada
- Jean-Michel Leduc: Faculty of Medicine, Université de Montréal, Montreal, Quebec, Canada
- Matt Homer: School of Education, University of Leeds, Leeds, UK
3. Kennedy AB, Riyad CNY, Ellis R, Fleming PR, Gainey M, Templeton K, Nourse A, Hardaway V, Brown A, Evans P, Natafgi N. Evaluating a Global Assessment Measure Created by Standardized Patients for the Multiple Mini Interview in Medical School Admissions: Mixed Methods Study. J Particip Med 2022;14:e38209. PMID: 36040776. PMCID: PMC9472042. DOI: 10.2196/38209.
Abstract
BACKGROUND Standardized patients (SPs) are essential stakeholders in the multiple mini interviews (MMIs) that are increasingly used to assess medical school applicants' interpersonal skills. However, there is little evidence for their inclusion in the development of assessment instruments. OBJECTIVE This study aimed to describe the process, and evaluate the impact, of having SPs co-design and cocreate a global measurement question that assesses medical school applicants' readiness for medical school and predicts acceptance status. METHODS This study used an exploratory sequential mixed methods design. First, we evaluated the initial MMI program and determined the next quality improvement steps. Second, we held a collaborative workshop with SPs to codevelop the assessment question and response options. Third, we evaluated the created question and the additional MMI rubric items through statistical tests based on data from 1084 applicants across 3 cohorts, starting in the 2018-2019 academic year. The internal reliability of the MMI was measured using Cronbach α, and its prediction of admission status was tested using forward stepwise binary logistic regression. RESULTS Program evaluation indicated the need for an additional quantitative question to assess applicant readiness for medical school. In total, 3 simulation specialists, 2 researchers, and 21 SPs participated in a workshop leading to a final global assessment question and response options. Cronbach α was >.8 overall and in each cohort year. The final stepwise logistic model for all cohorts combined was statistically significant (P<.001), explained 9.2% (R2) of the variance in acceptance status, and correctly classified 65.5% (637/972) of cases. The final model consisted of 3 variables: empathy, rank of readiness, and opening the encounter. CONCLUSIONS The collaborative nature of this project between stakeholders, including nonacademics and researchers, was vital to its success. The SP-created question had a significant impact on the final model predicting acceptance to medical school. This finding indicates that SPs bring a critical perspective that can improve the process of evaluating medical school applicants.
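The model family used here, binary logistic regression predicting acceptance from rubric scores, can be sketched as follows. This is a minimal gradient-descent fit on invented, mean-centered toy scores, not the study's forward stepwise model or its data; the three feature names merely echo the variables retained in the final model.

```python
# Sketch: logistic regression predicting acceptance (0/1) from MMI
# rubric scores, fit by plain gradient descent. The tiny dataset is
# hypothetical (mean-centered scores); the study used forward stepwise
# selection over 3 cohorts (n = 1084).

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """X: feature rows, y: 0/1 labels. Returns [bias, w1, ..., wk]
    minimizing log loss by batch gradient descent."""
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            grad[0] += err
            for j in range(k):
                grad[j + 1] += err * xi[j]
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Hypothetical rows: [empathy, readiness rank, opening-the-encounter],
# centered around the cohort mean; 0 = rejected, 1 = accepted.
X = [[-1, -2, -1], [-2, -1, -2], [-2, -2, -1],
     [1, 2, 1], [2, 1, 2], [1, 1, 2]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
accuracy = sum((predict(w, xi) >= 0.5) == bool(yi)
               for xi, yi in zip(X, y)) / len(y)
```

On real data one would report the model's classification rate against a held-out set rather than training accuracy, and stepwise selection would add terms one at a time.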
Affiliation(s)
- Ann Blair Kennedy: Biomedical Sciences Department, School of Medicine Greenville, University of South Carolina, Greenville, SC, United States; Patient Engagement Studio, University of South Carolina, Greenville, SC, United States; Family Medicine Department, Prisma Health, Greenville, SC, United States
- Cindy Nessim Youssef Riyad: School of Medicine Greenville, University of South Carolina, Greenville, SC, United States; Hospital Based Accreditation, Accreditation Council of Graduate Medical Education, Chicago, IL, United States
- Ryan Ellis: School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- Perry R Fleming: Patient Engagement Studio, University of South Carolina, Greenville, SC, United States; School of Medicine Columbia, University of South Carolina, Columbia, SC, United States
- Mallorie Gainey: School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- Kara Templeton: Prisma Health-Upstate Simulation Center, School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- Anna Nourse: Patient Engagement Studio, University of South Carolina, Greenville, SC, United States
- Virginia Hardaway: Admissions and Registration, School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- April Brown: Prisma Health-Upstate Simulation Center, School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- Pam Evans: Patient Engagement Studio, University of South Carolina, Greenville, SC, United States; Prisma Health-Upstate Simulation Center, School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- Nabil Natafgi: Patient Engagement Studio, University of South Carolina, Greenville, SC, United States; Health Services, Policy, Management Department, Arnold School of Public Health, University of South Carolina, Columbia, SC, United States
4. Leduc JM, Béland S, Renaud JS, Bégin P, Gagnon R, Ouellet A, Bourdy C, Loye N. Are different station formats assessing different dimensions in multiple mini-interviews? Findings from the Canadian integrated French multiple mini-interviews. BMC Medical Education 2022;22:616. PMID: 35962381. PMCID: PMC9375358. DOI: 10.1186/s12909-022-03681-4.
Abstract
BACKGROUND Multiple mini-interviews (MMI) are used to assess non-academic attributes for selection in medicine and other healthcare professions. It remains unclear whether different MMI station formats (discussions, role-plays, collaboration) assess different dimensions. METHODS Based on the station formats of the 2018 and 2019 Integrated French MMI (IFMMI), which comprised five discussion, three role-play and two collaboration stations, the authors performed confirmatory factor analysis (CFA) using the lavaan 0.6-5 R package and compared a one-factor solution to a three-factor solution for scores of the 2018 (n = 1438) and 2019 (n = 1440) cohorts of the IFMMI across three medical schools in Quebec, Canada. RESULTS The three-factor solution was retained, with discussion, role-play and collaboration stations all loading adequately on their respective factors. Furthermore, all three factors had moderate-to-high covariance (range 0.44 to 0.64). The model fit was also excellent, with a Comparative Fit Index (CFI) of 0.983 (good if > 0.9), a Tucker-Lewis Index (TLI) of 0.976 (good if > 0.95), a Standardized Root Mean Square Residual (SRMR) of 0.021 (good if < 0.08) and a Root Mean Square Error of Approximation (RMSEA) of 0.023 (good if < 0.08) for 2018, with similar results for 2019. In comparison, the single-factor solution showed a poorer fit (CFI = 0.819, TLI = 0.767, SRMR = 0.049 and RMSEA = 0.070). CONCLUSIONS The IFMMI assessed three dimensions that were related to station formats, a finding that was consistent across two cohorts. This suggests that different station formats may be assessing different skills, which has implications for the choice of appropriate reliability metrics and the interpretation of scores. Further studies should try to characterize the underlying constructs associated with each station format and look for differential predictive validity according to these formats.
Affiliation(s)
- Jean-Michel Leduc: Centre de recherche du Centre intégré universitaire de santé et de services sociaux du Nord-de-l’Île-de-Montréal, Hôpital du Sacré-Cœur de Montréal, 5400 boul. Gouin ouest, Montréal, QC H4J 1C5, Canada; Department of Microbiology, Infectious Diseases and Immunology, Faculty of Medicine, Université de Montréal, 2900 boul. Edouard-Montpetit, Montréal, QC H3T 1J4, Canada
- Sébastien Béland: Department of Education Administration and Foundations, Faculty of Education Sciences, Université de Montréal, 90 avenue Vincent-D’Indy, Montréal, QC H2V 2S9, Canada
- Jean-Sébastien Renaud: Department of Family Medicine and Emergency Medicine, Office of Education and Professional Development, Faculty of Medicine, Université Laval, 1050 Avenue de la Médecine, Quebec, QC G1V 0A6, Canada
- Philippe Bégin: Department of Medicine, Faculty of Medicine, Université de Montréal, 2900 boul. Edouard-Montpetit, Montréal, QC H3T 1J4, Canada
- Robert Gagnon: Office of Assessment and Evaluation, Faculty of Medicine, Université de Montréal, 2900 boul. Edouard-Montpetit, Montréal, QC H3T 1J4, Canada
- Annie Ouellet: Department of Obstetrics and Gynecology, Faculty of Medicine and Health Sciences, Université de Sherbrooke, 3001 12 Ave N Immeuble X1, Sherbrooke, QC J1H 5N4, Canada
- Christian Bourdy: Department of Family Medicine and Emergency Medicine, Faculty of Medicine, Université de Montréal, 2900 boul. Edouard-Montpetit, Montréal, QC H3T 1J4, Canada
- Nathalie Loye: Department of Education Administration and Foundations, Faculty of Education Sciences, Université de Montréal, 90 avenue Vincent-D’Indy, Montréal, QC H2V 2S9, Canada
5. Towaij C, Gawad N, Alibhai K, Doan D, Raîche I. Trust Me, I Know Them: Assessing Interpersonal Bias in Surgery Residency Interviews. J Grad Med Educ 2022;14:289-294. PMID: 35754644. PMCID: PMC9200259. DOI: 10.4300/jgme-d-21-00882.1.
Abstract
BACKGROUND Residency selection integrates objective and subjective data sources. Interviews help assess characteristics such as insight and communication but have the potential for bias. Structured multiple mini-interviews may mitigate some elements of bias; however, a halo effect has been described in assessments of medical trainees, and the degree of familiarity with applicants may remain a source of bias in interviews. OBJECTIVE To investigate the extent of interviewer bias that results from pre-interview knowledge of the applicant by comparing file review and interview scores for known versus unknown applicants. METHODS File review and interview scores of applicants to the University of Ottawa General Surgery Residency Training Program from 2019 to 2021 were gathered retrospectively. Applicants were categorized as "home" if from the institution, "known" if they had completed an elective at the institution, or "unknown." The Kruskal-Wallis H test was used to compare median interview scores between groups, and Spearman's rank-order correlation (rs) to determine the correlation between file review and interview scores. RESULTS Over the 3-year period, 169 applicants were interviewed; 62% were unknown, 31% were known, and 6% were home applicants. There was a statistically significant difference (P=.01) between the median interview scores of home, known, and unknown applicants. Comparison of groups demonstrated higher positive correlations between file review and interview scores with increasing applicant familiarity (rs=0.15, 0.36, and 0.55 in unknown, known, and home applicants, respectively). CONCLUSIONS There is an increased positive correlation between file review and interview scores with applicant familiarity. The interview process may carry inherent bias that is insufficiently mitigated by the current structure.
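Spearman's rank-order correlation, used above to relate file-review and interview scores within each familiarity group, is simply a Pearson correlation computed on ranks. A minimal sketch in plain Python, with invented scores rather than the study's applicant data:

```python
# Sketch: Spearman rank-order correlation between file-review and
# interview scores for one applicant group. Scores are invented; the
# study's per-group values (0.15 / 0.36 / 0.55) came from real data.

def ranks(xs):
    """Average ranks (1-based); ties share their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def spearman(a, b):
    return pearson(ranks(a), ranks(b))

file_review = [70, 65, 80, 75, 60, 85]
interview   = [72, 61, 79, 77, 64, 88]
rs = spearman(file_review, interview)
```

Computing `rs` separately for the unknown, known, and home groups and comparing the three values reproduces the structure of the study's main result.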
Affiliation(s)
- Chelsea Towaij, MD, is a Postgraduate Year 5 Resident, Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, The Ottawa Hospital, Ottawa, ON, Canada
- Nada Gawad, MD, MAEd, is a Surgical Fellow, Division of General Surgery, Department of Surgery, Faculty of Medicine, Department of Innovation in Medical Education (DIME), University of Ottawa, The Ottawa Hospital, Ottawa, ON, Canada
- Kameela Alibhai, BSc, is a Third-Year Medical Student, Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Danielle Doan, MAEd, is a Program Administrator, Division of General Surgery, Department of Surgery, Faculty of Medicine, Eric Poulin Office of Education, University of Ottawa, The Ottawa Hospital, Ottawa, ON, Canada
- Isabelle Raîche, MD, MAEd, is an Assistant Professor of Surgery, Division of General Surgery, Department of Surgery, Faculty of Medicine, DIME, University of Ottawa, The Ottawa Hospital, Ottawa, ON, Canada
6. Callwood A, Groothuizen JE, Lemanska A, Allan H. The predictive validity of Multiple Mini Interviews (MMIs) in nursing and midwifery programmes: Year three findings from a cross-discipline cohort study. Nurse Education Today 2020;88:104320. PMID: 32193067. DOI: 10.1016/j.nedt.2019.104320.
Abstract
BACKGROUND Education literature worldwide is replete with studies evaluating the effectiveness of Multiple Mini Interviews (MMIs) in admissions to medicine, but fewer than 1% of published studies have been conducted on selection to nursing and midwifery programmes. OBJECTIVES To examine the predictive validity of MMIs using end-of-programme clinical and academic performance indicators of pre-registration adult, child and mental health nursing and midwifery students. DESIGN AND SETTING A cross-sectional cohort study at one university in the United Kingdom. PARTICIPANTS A non-probability consecutive sampling strategy was used, whereby all applicants to the September 2015 pre-registration adult, child and mental health nursing and midwifery programmes were invited to participate. Of the 354 students who commenced year one, 225 (64%) completed their three-year programme and agreed to take part (adult 120, child 32, mental health nursing 30 and midwifery 43). METHODS All applicants were interviewed using MMIs, with six-station (nursing) and seven-station (midwifery) four-minute models. Associations between MMI scores and the cross-discipline programme performance indicators available for each student at the end of year three, clinical practice (assessed by mentors) and academic attainment (dissertation mark), were explored using multiple linear regression adjusting for applicant age, academic entry level, discipline and number of MMI stations. RESULTS In the adjusted models, students with higher admissions MMI scores (at six and seven stations) performed better in clinical practice (p < 0.001) but not in academic attainment (p = 0.122) at the end of their three-year programme. CONCLUSION These findings provide the first report of the predictive validity of MMIs for performance in clinical practice using six- and seven-station models in nursing and midwifery programmes. Further evidence is required, from both clinical and academic perspectives, from larger multi-site evaluations.
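The adjusted models described above are ordinary multiple linear regressions: the outcome is regressed on MMI score together with the covariates, so the MMI coefficient reflects the association after adjustment. A minimal sketch solving the normal equations in plain Python on invented data (the real analysis also adjusted for academic entry level, discipline and station count):

```python
# Sketch: multiple linear regression of an end-of-programme outcome on
# MMI score while adjusting for a covariate (age), solved via the
# normal equations (X'X) b = X'y. Data are invented toy values.

def solve(A, b):
    """Gaussian elimination with partial pivoting; returns x with Ax = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

def ols(X, y):
    """Least-squares coefficients for y ~ 1 + X (intercept first)."""
    Z = [[1.0] + row for row in X]
    k = len(Z[0])
    XtX = [[sum(r[i] * r[j] for r in Z) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(Z, y)) for i in range(k)]
    return solve(XtX, Xty)

# Columns: [MMI score, age]; outcome: end-of-programme clinical mark.
X = [[18, 19], [22, 25], [20, 21], [25, 30], [19, 20], [24, 22]]
y = [60.0, 68.0, 64.0, 75.0, 62.0, 72.0]
beta = ols(X, y)  # [intercept, b_mmi, b_age]
```

Here `beta[1]` is the age-adjusted MMI slope, the analogue of the association the study tested against clinical practice marks.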
Affiliation(s)
- Alison Callwood: School of Health Sciences, University of Surrey, Guildford, UK
- Helen Allan: Centre for Critical Research in Nursing and Midwifery, School of Health and Education, Middlesex University, UK
7. Breil SM, Forthmann B, Hertel-Waszak A, Ahrens H, Brouwer B, Schönefeld E, Marschall B, Back MD. Construct validity of multiple mini interviews - Investigating the role of stations, skills, and raters using Bayesian G-theory. Medical Teacher 2020;42:164-171. PMID: 31591917. DOI: 10.1080/0142159x.2019.1670337.
Abstract
Background: One popular procedure in the medical student selection process is the multiple mini-interview (MMI), which is designed to assess social skills (e.g., empathy) by means of brief interview and role-play stations. However, it remains unclear whether MMIs reliably measure the desired social skills or rather general performance differences that do not depend on specific social skills. Here, we provide a detailed investigation into the construct validity of MMIs, including the identification and quantification of performance facets (social skill-specific performance, station-specific performance, general performance) and their relations with other selection measures. Methods: We used data from three MMI samples (N = 376 applicants, 144 raters) that included six interview and role-play stations and multiple assessed social skills. Results: Bayesian generalizability analyses show that the largest amount of reliable MMI variance was accounted for by station-specific and general performance differences between applicants. Furthermore, there were low or no correlations with other selection measures. Discussion: Our findings suggest that MMI ratings are less social skill-specific than originally conceptualized and reflect general performance differences (across and within stations) to a greater degree. Future research should focus on the development of skill-specific MMI stations and on behavioral analyses of the extent to which performance differences are based on desirable skills versus undesired aspects.
Affiliation(s)
- Mitja D Back: Psychology, University of Münster, Münster, Germany
8. Clark JR, Miller CA, Garwood EL. Rethinking the Admissions Interview: Piloting Multiple Mini-Interviews in a Graduate Psychology Program. Psychol Rep 2019;123:1869-1886. PMID: 31865837. DOI: 10.1177/0033294119896062.
Abstract
Health profession programs routinely utilize traditional interviews in admissions as a means of assessing important non-academic characteristics (e.g., critical thinking, interpersonal skills, judgment) of candidates. However, the reliability and validity of traditional interviews are highly questionable. Given this, multiple health profession programs (e.g., medicine, nursing, pharmacy, physical therapy) have implemented multiple mini-interviews as an alternative for assessing non-academic characteristics. This paper describes the development and implementation of multiple mini-interviews in the admissions process for a doctoral clinical psychology program, one of the health professions yet to use them. This paper also examines the feasibility and acceptability of the multiple mini-interviews in this program. Results of a mixed-method survey of all 120 candidates who participated in admissions days are presented, along with discussion of factors associated with satisfaction and dissatisfaction. Recommendations for program refinement, and for application to other graduate psychology programs seeking improved admissions processes, are discussed.
9. Ali S, Sadiq Hashmi MS, Umair M, Beg MA, Huda N. Multiple Mini-Interviews: Current Perspectives on Utility and Limitations. Advances in Medical Education and Practice 2019;10:1031-1038. PMID: 31849557. PMCID: PMC6913247. DOI: 10.2147/amep.s181332.
Abstract
The growing role of healthcare professionals has prompted admissions committees to restructure their selection processes and assess key personal attributes rather than academic achievements only. Multiple mini interviews (MMIs) were designed in 2002 to assess such domains in prospective healthcare professionals. As a high-stakes assessment, the utility and limitations of the MMI need to be explored. The purpose of this article is to review the available evidence to establish its utility. The claim of reliability is examined through studies assessing the effect of the number of stations, the duration of stations, the format and scoring systems of stations, and the number of raters assessing the applicants. Similarly, validity is assessed by gathering evidence concerning content validity, convergent/divergent correlations and predictive ability. Finally, its acceptability and feasibility, along with limitations, are discussed. The article concludes by providing recommendations for further work required to address the limitations and enhance the MMI's utility.
Affiliation(s)
- Sobia Ali: Department of Health Professions Education, Liaquat National Hospital & Medical College, Karachi 74800, Pakistan
- Mehnaz Umair: Department of Health Professions Education, Liaquat National Hospital & Medical College, Karachi 74800, Pakistan
- Mirza Aroosa Beg: Department of Medical Education, Sindh Institute of Urology and Transplantation (SIUT), Karachi 74200, Pakistan
- Nighat Huda: Department of Health Professions Education, Liaquat National Hospital & Medical College, Karachi 74800, Pakistan
10
|
Benbassat J. Assessments of Non-academic Attributes in Applicants for Undergraduate Medical Education: an Overview of Advantages and Limitations. MEDICAL SCIENCE EDUCATOR 2019; 29:1129-1134. [PMID: 34457592 PMCID: PMC8368911 DOI: 10.1007/s40670-019-00791-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Affiliation(s)
- Jochanan Benbassat: Smokler Center for Health Policy Research, Myers-JDC-Brookdale Institute, PO Box 3886, 91037 Jerusalem, Israel
11. Juster FR, Baum RC, Zou C, Risucci D, Ly A, Reiter H, Miller DD, Dore KL. Addressing the Diversity-Validity Dilemma Using Situational Judgment Tests. Academic Medicine 2019;94:1197-1203. PMID: 31033603. DOI: 10.1097/acm.0000000000002769.
Abstract
PURPOSE To examine the magnitudes of score differences across different demographic groups for three academic (grade point average [GPA], old Medical College Admission Test [MCAT], and MCAT 2015) and one nonacademic (situational judgment test [SJT]) screening measures and one nonacademic (multiple mini-interview [MMI]) interview measure (analysis 1), and the demographic implications of including an SJT in the screening stage for the pool of applicants who are invited to interview (analysis 2). METHOD The authors ran the analyses using data from New York Medical College School of Medicine applicants from the 2015-2016 admissions cycle. For analysis 1, effect sizes (Cohen d) were calculated for GPA, old MCAT, MCAT 2015, CASPer (an online SJT), and MMI. Comparisons were made across gender, race, ethnicity (African American, Hispanic/Latino), and socioeconomic status (SES). For analysis 2, a series of simulations were conducted to estimate the number of underrepresented in medicine (UIM) applicants who would have been invited to interview with different weightings of GPA, MCAT, and CASPer scores. RESULTS A total of 9,096 applicants were included in analysis 1. Group differences were significantly smaller or reversed for CASPer and MMI compared with the academic assessments (MCAT, GPA) across nearly all demographic variables/indicators. The simulations suggested that a higher weighting of CASPer may help increase gender, racial, and ethnic diversity in the interview pool; results for low-SES applicants were mixed. CONCLUSIONS The inclusion of an SJT in the admissions process has the potential to widen access to medical education for a number of UIM groups.
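The simulation logic of analysis 2 (re-weighting the screening measures and counting who lands in the interview pool) can be sketched as below. The applicant pool, group labels and weightings are entirely synthetic and only illustrate the mechanism, not the NYMC data or results.

```python
# Sketch: re-weight GPA/MCAT/SJT z-scores and count how many applicants
# from an underrepresented group make the top-n invite list. The pool
# is a deterministic toy: academic scores show a group gap, SJT does not.

applicants = [
    # (uim, gpa_z, mcat_z, sjt_z)
    (True,  -0.5, -0.6,  0.8),
    (True,  -0.2, -0.4,  0.5),
    (True,  -0.8, -0.5,  0.9),
    (True,  -0.1, -0.3,  0.2),
    (False,  0.6,  0.5, -0.2),
    (False,  0.4,  0.7,  0.1),
    (False,  0.2,  0.3, -0.5),
    (False,  0.9,  0.8,  0.0),
    (False, -0.3,  0.1,  0.4),
    (False,  0.1,  0.2, -0.1),
]

def invited_uim(w_acad, w_sjt, n_invite=5):
    """Count UIM applicants in the top-n by weighted composite score."""
    composite = [(w_acad * (g + m) / 2 + w_sjt * s, u)
                 for (u, g, m, s) in applicants]
    top = sorted(composite, key=lambda t: t[0], reverse=True)[:n_invite]
    return sum(u for _, u in top)

academic_heavy = invited_uim(0.8, 0.2)  # screening dominated by GPA/MCAT
sjt_heavy      = invited_uim(0.4, 0.6)  # SJT weighted more heavily
```

Because the toy SJT scores carry no group gap, shifting weight toward the SJT pulls more UIM applicants into the invite list, which is the directional effect the study reports for CASPer.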
Affiliation(s)
- F.R. Juster was senior associate dean and associate professor of clinical pediatrics, New York Medical College School of Medicine, Valhalla, New York, at the time of this study. She is currently senior associate dean emeritus, New York Medical College School of Medicine, Valhalla, New York, and graduate student, Health Sciences Education Master's Program, David Braley Health Science Centre, McMaster University, Hamilton, Ontario, Canada.
- R.C. Baum is assistant dean of admissions, New York Medical College School of Medicine, Valhalla, New York.
- C. Zou is research scientist, Altus Assessments, Toronto, Ontario, Canada.
- D. Risucci is assistant dean for assessment and evaluation and professor of surgery, New York Medical College School of Medicine, Valhalla, New York.
- A. Ly is former director of analytics, Academic Administration, New York Medical College School of Medicine, Valhalla, New York.
- H. Reiter is professor, Department of Oncology, McMaster University, Hamilton, Ontario, Canada.
- D.D. Miller is former dean and professor of medicine, New York Medical College School of Medicine, Valhalla, New York.
- K.L. Dore is associate professor of medicine, McMaster University, Hamilton, Ontario, Canada.
12. Knorr M, Meyer H, Sehner S, Hampe W, Zimmermann S. Exploring sociodemographic subgroup differences in multiple mini-interview (MMI) performance based on MMI station type and the implications for the predictive fairness of the Hamburg MMI. BMC Medical Education 2019;19:243. PMID: 31269937. PMCID: PMC6610801. DOI: 10.1186/s12909-019-1674-z.
Abstract
BACKGROUND Sociodemographic subgroup differences in multiple mini-interview (MMI) performance have been extensively studied within the MMI research literature, but heterogeneous findings demand a closer look at how specific aspects of MMI design (such as station type) affect these differences. So far, it has not been investigated whether sociodemographic subgroup differences imply that an MMI is biased, particularly in terms of its predictive validity. METHODS Between 2010 and 2017, the University Medical Center Hamburg-Eppendorf (UKE) tested 1438 candidates in an MMI who also provided sociodemographic data and agreed to participate in this study. Of these, 400 candidates were admitted and underwent a first objective structured clinical examination (OSCE) after one and a half years, including one station assessing communication skills. First, we analyzed the relationship between gender, age, native language, and medical family background and MMI station performance, including interaction terms with MMI station type (simulation, interview, and group), in a hierarchical linear model. Second, we tested whether the prediction of OSCE overall performance, and of communication station performance in particular, differed depending on sociodemographic background by adding interaction terms between MMI performance and gender, age, and medical family background in a linear regression model. RESULTS Young female candidates performed better than young male candidates both at interview and simulation stations. The gender difference was smaller (simulation) or non-significant (interview) in older candidates. There were no gender or age effects in MMI group station performance. All effects were very small, with the overall model explaining only 0.6% of the variance. MMI performance was not related to OSCE overall performance but significantly predicted OSCE communication station performance, with no differences in the prediction for sociodemographic subgroups.
CONCLUSIONS The Hamburg MMI is fair in its prediction of OSCE communication scores. Differences in MMI station performance for gender and age and their interaction with MMI station type can be related to the dimensions assessed at different station types and thus support the validity of the MMI. Rather than being threats to fairness, these differences could be useful for decisions relating to the design and use of an MMI.
Affiliation(s)
- Mirjana Knorr
- Institute of Biochemistry and Molecular Cell Biology, University Medical Center Hamburg-Eppendorf (UKE), N30, Martinistraße 52, 20246 Hamburg, Germany
- Hubertus Meyer
- Institute of Biochemistry and Molecular Cell Biology, University Medical Center Hamburg-Eppendorf (UKE), N30, Martinistraße 52, 20246 Hamburg, Germany
- Susanne Sehner
- Institute of Medical Biometry and Epidemiology, University Medical Center Hamburg-Eppendorf (UKE), W34, Martinistraße 52, 20246 Hamburg, Germany
- Wolfgang Hampe
- Institute of Biochemistry and Molecular Cell Biology, University Medical Center Hamburg-Eppendorf (UKE), N30, Martinistraße 52, 20246 Hamburg, Germany
- Stefan Zimmermann
- Institute of Biochemistry and Molecular Cell Biology, University Medical Center Hamburg-Eppendorf (UKE), N30, Martinistraße 52, 20246 Hamburg, Germany
13
Lillis S, Lack L, Mbita A, Ashford M. Using the Multiple Mini Interview for selection into vocational general practice training. J Prim Health Care 2019; 11:75-79. [PMID: 31039992 DOI: 10.1071/hc18085] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2018] [Accepted: 03/07/2019] [Indexed: 11/23/2022] Open
Abstract
INTRODUCTION Interviews for selection into postgraduate training courses are an accepted method of selection, presumed to be fair to both candidates and the training scheme. AIM Due to concerns over unconscious bias and a desire to move to best practice, the Royal New Zealand College of General Practitioners introduced the Multiple Mini Interview (MMI) as the selection process for doctors wishing to enter vocational training in general practice. METHODS Aspects assessed during the interviews were developed through wide consultation and included: reason for wanting to undertake training, managing poor compliance, addressing issues of equity, managing complaints, insight, and understanding of the role of general practitioners in chronic care. There were 218 applicants who took the MMI; demographic data as well as scores were collected. RESULTS The MMI process has good reliability and performs well in several aspects of validity. All three interview venues had similar results. There was no gender difference in overall result or scores. New Zealand graduates scored higher than overseas graduates. Of the 218 candidates, 12 were considered not yet ready to enter training. DISCUSSION The MMI process appears to have acceptable reliability and good validity. The structure of the MMI is likely to have reduced unconscious bias. Further research will study the predictive validity of the MMI for this cohort of candidates.
Affiliation(s)
- Steven Lillis
- Royal New Zealand College of General Practitioners, Level 4, 50 Customhouse Quay, Wellington 6143, New Zealand; corresponding author
- Liza Lack
- Royal New Zealand College of General Practitioners, Level 4, 50 Customhouse Quay, Wellington 6143, New Zealand
- Allan Mbita
- Royal New Zealand College of General Practitioners, Level 4, 50 Customhouse Quay, Wellington 6143, New Zealand
- Melissa Ashford
- Royal New Zealand College of General Practitioners, Level 4, 50 Customhouse Quay, Wellington 6143, New Zealand
14
Eva KW, Macala C, Fleming B. Twelve tips for constructing a multiple mini-interview. MEDICAL TEACHER 2019; 41:510-516. [PMID: 29373943 DOI: 10.1080/0142159x.2018.1429586] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Health professions the world over value various competencies in their practitioners that are not easily captured by academic measures of performance. As a result, many programs have begun using multiple mini-interviews (MMIs) to facilitate the selection of candidates who are most likely to demonstrate and further develop such qualities. In this twelve-tips article, the authors offer evidence- and experience-based advice on how to construct an MMI that is fit for purpose. The tips are provided chronologically, offering guidance on how to conceptualize the goals of creating an MMI, how to establish a database of context-appropriate stations, and how to prepare both candidates and examiners for their task. While MMIs have been shown to have utility in many instances, the authors urge caution against over-generalization by stressing the importance of post-MMI considerations, including data monitoring and integration between one's admissions philosophy and one's curricular efforts.
Affiliation(s)
- Kevin W Eva
- Department of Medicine, University of British Columbia, Vancouver, Canada
- Catherine Macala
- Department of Medicine, University of British Columbia, Vancouver, Canada
- Bruce Fleming
- Department of Medicine, University of British Columbia, Vancouver, Canada
15
Satterfield CA, Dacso MM, Patel P. Using multiple mini interviews as a pre-screening tool for medical student candidates completing international health electives. MEDICAL EDUCATION ONLINE 2018; 23:1483694. [PMID: 29912657 PMCID: PMC6008579 DOI: 10.1080/10872981.2018.1483694] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/15/2017] [Accepted: 05/24/2018] [Indexed: 06/08/2023]
Abstract
There continues to be an increase in the number of learners who participate in international health electives (IHEs). However, not all learners enter IHEs with the same level of knowledge, attitude, and previous experience, which puts undue burden on host supervisors and poses risks to student and patient safety. The Multiple Mini-Interview (MMI) has become a popular selection method for undergraduate and postgraduate health science admissions programs. This paper describes the MMI process used by our program to screen first-year medical students applying for pre-clinical IHEs. Two country-specific cases were developed to assess non-cognitive skills. All of the students (n = 48) and interviewers (n = 10) who participated in MMIs completed anonymous surveys on their experience. The majority of students (>90%) rated the scenarios as realistic; 96% found the MMI format fair and balanced; 96% felt that they were able to clearly articulate their thoughts; 75% stated that they had a general understanding of how the MMIs worked; only 33% would have preferred a traditional one-to-one interview. Feedback from both interviewers and students was positive toward the MMI experience, and no students were identified as unfit for participation. Ultimately, 43 students participated in pre-clinical IHEs in 2016. In this paper, we outline our MMI process, detail its shortcomings, and discuss our next steps to screen medical students for IHEs.
Affiliation(s)
- Caley A. Satterfield
- Center for Global Health Education, University of Texas Medical Branch, Galveston, Texas, USA
- Matthew M. Dacso
- Center for Global Health Education, University of Texas Medical Branch, Galveston, Texas, USA
- Premal Patel
- Center for Global Health Education, University of Texas Medical Branch, Galveston, Texas, USA
16
Terregino CA, Copeland HL, Laumbach SG, Mehan D, Dunleavy D, Geiger T. How good are we at selecting students that meet our mission? Outcomes of the 2011 and 2012 entering classes selected by a locally developed multiple mini interview. MEDICAL TEACHER 2018; 40:1300-1305. [PMID: 29457915 DOI: 10.1080/0142159x.2018.1436165] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
BACKGROUND Can a locally developed multiple mini interview (MMI) process lead to outcomes reflective of local values and mission? METHODS In 2017, the authors performed a retrospective analysis of the relationship of MMI scores with multiple-choice-based and non-multiple-choice-based outcomes, including clerkship competencies, OSCE performance, scholarship/service/leadership, academic honor society induction, peer and faculty humanism nominations, and overall performance at graduation, for two entering classes whose acceptance decisions were based exclusively on a locally developed MMI. RESULTS There was no association between MMI scores and performance on multiple-choice-based examinations. For other outcomes, the effect size of MMI for OSCE performance was small, and leadership/service and scholarship did not correlate with MMI score. For clerkship competencies, there was a medium effect size for patient care, practice-based learning and improvement, interpersonal and communication skills, and cultural competence. Highest and lowest quartile MMI scorers did not differ in academic honor society induction; however, top quartile MMI scorers received more humanism votes than last quartile scorers and were more likely to be rated outstanding or excellent graduates. CONCLUSIONS Local development of an MMI, and of admissions processes relying solely on the MMI for final acceptance decisions, will not affect academic preparation/medical school performance in multiple-choice-based assessments but can lead to locally desired attributes in students.
Affiliation(s)
- Carol A Terregino
- Rutgers Robert Wood Johnson Medical School, Piscataway, NJ, USA
- H Liesel Copeland
- Rutgers Robert Wood Johnson Medical School, Piscataway, NJ, USA
- Daniel Mehan
- Rutgers Robert Wood Johnson Medical School, Piscataway, NJ, USA
- Dana Dunleavy
- Association of American Medical Colleges, Washington, DC, USA
- Thomas Geiger
- Association of American Medical Colleges, Washington, DC, USA
17
Patterson F, Roberts C, Hanson MD, Hampe W, Eva K, Ponnamperuma G, Magzoub M, Tekian A, Cleland J. 2018 Ottawa consensus statement: Selection and recruitment to the healthcare professions. MEDICAL TEACHER 2018; 40:1091-1101. [PMID: 30251906 DOI: 10.1080/0142159x.2018.1498589] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Selection and recruitment into healthcare education and practice is a key area of interest for educators, with significant developments in research, policy, and practice in recent years. This updated consensus statement, developed through a multi-stage process, examines future opportunities and challenges in selection and recruitment. There is both a gap in the literature around, and a compelling case for, further theoretical and empirical work to underpin the development of overall selection philosophies and policies and their enactment. More consistent evidence has emerged regarding the quality of different selection methods. Approaches to selection are context-dependent, requiring consideration of an institution's philosophy regarding what it is trying to achieve and the communities it purports to serve, along with the system within which selection methods are used. Diversity and globalization issues continue to be critically important topics. Further research is required to explore differential attainment and to explain why there are substantial differences in culturally acceptable ways of approaching diversity and widening access. More sophisticated evaluation approaches using multi-disciplinary theoretical frameworks are required to address these issues. Following a discussion of these areas, 10 recommendations are presented to guide future research and practice and to encourage debate between colleagues across the globe.
Affiliation(s)
- F Patterson
- Work Psychology Group, Derby, UK
- C Roberts
- Northern Clinical School, University of Sydney, Sydney, New South Wales, Australia
- M D Hanson
- Department of Psychiatry, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- W Hampe
- Department of Biochemistry and Molecular Cell Biology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- K Eva
- Centre for Health Education Scholarship and Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- G Ponnamperuma
- Centre for Medical Education, Yong Loo Lin School of Medicine, Singapore
- M Magzoub
- Department of Medical Education, College of Medicine, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- A Tekian
- Department of Medical Education, University of Illinois at Chicago, Chicago, Illinois, USA
- J Cleland
- Centre for Healthcare Research and Innovation (CHERI), University of Aberdeen, UK