1
Lin JC, Lokhande A, Margo CE, Greenberg PB. Best practices for interviewing applicants for medical school admissions: a systematic review. Perspectives on Medical Education 2022;11:239-246. PMID: 36136234; PMCID: PMC9510545; DOI: 10.1007/s40037-022-00726-8
Abstract
INTRODUCTION Interviews are commonly used to select applicants for medical school, residency, and fellowship. However, interview techniques vary in acceptability, feasibility, reliability, and validity. This systematic review investigated the effectiveness of different interview methods in selecting the best qualified applicants for admission to medical school and developed a logic model to implement best practices for interviewing.
METHODS Five electronic literature databases were searched for comparative studies related to interviewing in medical schools from inception through February 1, 2021. Inclusion criteria were English-language publications that compared different methods of conducting a selection interview in medical schools with a controlled trial design. General study characteristics, measurement methodologies, and outcomes were reviewed. Quality appraisal was performed using the Medical Education Research Study Quality Instrument (MERSQI) and the Oxford Risk of Bias Scale. Based on these findings, a logic model was constructed using content analysis.
RESULTS Thirteen studies were included. The multiple mini-interview (MMI) was reliable, unbiased, and predicted clinical and academic performance; the virtual MMI increased reliability and lowered costs. For unstructured interviews, blinding interviewers to academic scores reduced bias towards higher scorers; student and faculty interviewers rated applicants similarly. Applicants preferred structured over unstructured interviews. Study quality was above average per the MERSQI, risk of bias was high per the Oxford scale, and between-study heterogeneity was substantial.
DISCUSSION There were few high-quality studies on interviewing applicants for admission to medical school; the MMI appears to offer a reliable method of interviewing. A logic model can provide a conceptual framework for conducting evidence-based admissions interviews.
Affiliation(s)
- John C Lin
- Program in Biology, Brown University, Providence, RI, USA
- Division of Ophthalmology, Alpert Medical School, Brown University, Providence, RI, USA
- Anagha Lokhande
- Division of Ophthalmology, Alpert Medical School, Brown University, Providence, RI, USA
- Curtis E Margo
- Department of Ophthalmology, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
- Paul B Greenberg
- Division of Ophthalmology, Alpert Medical School, Brown University, Providence, RI, USA
- Section of Ophthalmology, Providence VA Medical Center, Providence, RI, USA
- Office of Academic Affiliations, US Department of Veterans Affairs, Washington, DC, USA
2
Renaud JS, Bourget M, St-Onge C, Eva KW, Tavares W, Salvador Loye A, Leduc JM, Homer M. Effect of station format on the psychometric properties of Multiple Mini Interviews. Medical Education 2022;56:1042-1050. PMID: 35701388; DOI: 10.1111/medu.14855
Abstract
BACKGROUND Given the widespread use of Multiple Mini Interviews (MMIs), their impact on the selection of candidates, and the considerable resources invested in preparing and administering them, it is essential to ensure their quality. Station format varies widely and lies within the control of training programmes, yet little is known about it, so the lack of evidence on format's effect on MMI quality is a considerable oversight. This study assessed the effect of two popular station formats (interview vs. role-play) on the psychometric properties of MMIs.
METHODS We analysed candidate data from the first 8 years of the Integrated French MMI (IF-MMI; 2010-2017, n = 11,761 applicants), an MMI organised yearly by three francophone universities and administered at four testing sites in two Canadian provinces. In total, 84 role-play and 96 interview stations were administered (180 stations). Mixed-design analyses of variance (ANOVAs) were used to test the effect of station format on candidates' scores and stations' discrimination. Cronbach's alpha coefficients for interview and role-play stations were also compared. The predictive validity of both station formats was estimated with a mixed multiple linear regression model relating interview and role-play scores to average clerkship performance for applicants who gained entry to medical school (n = 462).
RESULTS Role-play stations (M = 20.67, standard deviation [SD] = 3.38) had a slightly lower mean score than interview stations (M = 21.36, SD = 3.08), p < 0.01, Cohen's d = 0.2. The correlation between role-play and interview station scores was r = 0.5 (p < 0.01). Discrimination coefficients, Cronbach's alpha, and predictive validity statistics did not vary by station format.
CONCLUSION Interview and role-play stations have comparable psychometric properties, suggesting the formats are interchangeable. Programmes should select station format based on its match to the personal qualities for which they are trying to select.
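The two statistics at the heart of this comparison, Cronbach's alpha per station format and Cohen's d between format means, can be illustrated with a minimal sketch. The score matrices, station counts, and means below are synthetic stand-ins, not the IF-MMI data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_candidates, n_stations) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

rng = np.random.default_rng(0)
ability = rng.normal(0, 1, 500)  # latent candidate ability (simulated)
# Five stations per format: shared ability signal + station-specific noise.
interview = ability[:, None] + rng.normal(0, 1.2, (500, 5)) + 21.4
role_play = ability[:, None] + rng.normal(0, 1.2, (500, 5)) + 20.7

print(f"alpha (interview): {cronbach_alpha(interview):.2f}")
print(f"alpha (role-play): {cronbach_alpha(role_play):.2f}")
print(f"Cohen's d (interview vs role-play): "
      f"{cohens_d(interview.mean(1), role_play.mean(1)):.2f}")
```

Comparing the two alpha values mirrors the study's reliability check; the d value quantifies the mean-score gap between formats in pooled-SD units.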
Affiliation(s)
- Jean-Sébastien Renaud
- Department of Family and Emergency Medicine, Office of Education and Continuing Professional Development, VITAM Research Center, Université Laval, Quebec City, Quebec, Canada
- Martine Bourget
- Department of Psychiatry and Neurosciences, Université Laval, Quebec City, Quebec, Canada
- Christina St-Onge
- Department of Medicine, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Kevin W Eva
- Centre for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada
- Walter Tavares
- Wilson Center, University of Toronto, Toronto, Ontario, Canada
- Jean-Michel Leduc
- Faculty of Medicine, Université de Montréal, Montreal, Quebec, Canada
- Matt Homer
- School of Education, University of Leeds, Leeds, UK
3
Leduc JM, Béland S, Renaud JS, Bégin P, Gagnon R, Ouellet A, Bourdy C, Loye N. Are different station formats assessing different dimensions in multiple mini-interviews? Findings from the Canadian integrated French multiple mini-interviews. BMC Medical Education 2022;22:616. PMID: 35962381; PMCID: PMC9375358; DOI: 10.1186/s12909-022-03681-4
Abstract
BACKGROUND Multiple mini-interviews (MMI) are used to assess non-academic attributes for selection in medicine and other healthcare professions. It remains unclear whether different MMI station formats (discussions, role-plays, collaboration) assess different dimensions.
METHODS Based on the station formats of the 2018 and 2019 Integrated French MMI (IFMMI), which comprised five discussion, three role-play and two collaboration stations, the authors performed confirmatory factor analysis (CFA) using the lavaan 0.6-5 R package and compared a one-factor solution to a three-factor solution for the scores of the 2018 (n = 1438) and 2019 (n = 1440) cohorts of the IFMMI across three medical schools in Quebec, Canada.
RESULTS The three-factor solution was retained, with discussion, role-play and collaboration stations all loading adequately on their respective factors. Furthermore, all three factors had moderate-to-high covariance (range 0.44 to 0.64). The model fit was also excellent for 2018, with a Comparative Fit Index (CFI) of 0.983 (good if > 0.9), a Tucker-Lewis Index (TLI) of 0.976 (good if > 0.95), a Standardized Root Mean Square Residual (SRMR) of 0.021 (good if < 0.08) and a Root Mean Square Error of Approximation (RMSEA) of 0.023 (good if < 0.08), with similar results for 2019. In comparison, the single-factor solution showed a poorer fit (CFI = 0.819, TLI = 0.767, SRMR = 0.049 and RMSEA = 0.070).
CONCLUSIONS The IFMMI assessed three dimensions related to station formats, a finding that was consistent across two cohorts. This suggests that different station formats may assess different skills, which has implications for the choice of appropriate reliability metrics and the interpretation of scores. Further studies should characterize the underlying constructs associated with each station format and look for differential predictive validity by format.
Affiliation(s)
- Jean-Michel Leduc
- Centre de recherche du Centre intégré universitaire de santé et de services sociaux du Nord-de-l'Île-de-Montréal, Hôpital du Sacré-Cœur de Montréal, 5400 boul. Gouin ouest, Montréal, QC H4J 1C5, Canada
- Department of Microbiology, Infectious Diseases and Immunology, Faculty of Medicine, Université de Montréal, 2900 boul. Edouard-Montpetit, Montréal, QC H3T 1J4, Canada
- Sébastien Béland
- Department of Education Administration and Foundations, Faculty of Education Sciences, Université de Montréal, 90 avenue Vincent-D'Indy, Montréal, QC H2V 2S9, Canada
- Jean-Sébastien Renaud
- Department of Family Medicine and Emergency Medicine, Office of Education and Professional Development, Faculty of Medicine, Université Laval, 1050 Avenue de la Médecine, Quebec, QC G1V 0A6, Canada
- Philippe Bégin
- Department of Medicine, Faculty of Medicine, Université de Montréal, 2900 boul. Edouard-Montpetit, Montréal, QC H3T 1J4, Canada
- Robert Gagnon
- Office of Assessment and Evaluation, Faculty of Medicine, Université de Montréal, 2900 boul. Edouard-Montpetit, Montréal, QC H3T 1J4, Canada
- Annie Ouellet
- Department of Obstetrics and Gynecology, Faculty of Medicine and Health Sciences, Université de Sherbrooke, 3001 12 Ave N, Immeuble X1, Sherbrooke, QC J1H 5N4, Canada
- Christian Bourdy
- Department of Family Medicine and Emergency Medicine, Faculty of Medicine, Université de Montréal, 2900 boul. Edouard-Montpetit, Montréal, QC H3T 1J4, Canada
- Nathalie Loye
- Department of Education Administration and Foundations, Faculty of Education Sciences, Université de Montréal, 90 avenue Vincent-D'Indy, Montréal, QC H2V 2S9, Canada
4
Nenad MW. The multiple mini-interview and dental hygiene admissions: A feasible option? J Dent Educ 2020;84:634-641. DOI: 10.1002/jdd.12114
Affiliation(s)
- Monica Williamson Nenad
- Director of Faculty Development, Accreditation, and Continuing Dental Education, A. T. Still University, Arizona School of Dentistry & Oral Health, Mesa, Arizona, USA
5
Breil SM, Forthmann B, Hertel-Waszak A, Ahrens H, Brouwer B, Schönefeld E, Marschall B, Back MD. Construct validity of multiple mini interviews - Investigating the role of stations, skills, and raters using Bayesian G-theory. Medical Teacher 2020;42:164-171. PMID: 31591917; DOI: 10.1080/0142159x.2019.1670337
Abstract
Background: One popular procedure in the medical student selection process is the multiple mini-interview (MMI), which is designed to assess social skills (e.g., empathy) by means of brief interview and role-play stations. However, it remains unclear whether MMIs reliably measure the desired social skills or rather general performance differences that do not depend on specific social skills. Here, we provide a detailed investigation of the construct validity of MMIs, including the identification and quantification of performance facets (social skill-specific performance, station-specific performance, general performance) and their relations with other selection measures.
Methods: We used data from three MMI samples (N = 376 applicants, 144 raters) that included six interview and role-play stations and multiple assessed social skills.
Results: Bayesian generalizability analyses show that the largest amount of reliable MMI variance was accounted for by station-specific and general performance differences between applicants. Furthermore, there were low or no correlations with other selection measures.
Discussion: Our findings suggest that MMI ratings are less social skill-specific than originally conceptualized and reflect general performance differences (across and within stations) to a greater degree. Future research should focus on developing skill-specific MMI stations and on behavioral analyses of the extent to which performance differences reflect desirable skills versus undesired aspects.
Affiliation(s)
- Mitja D Back
- Psychology, University of Münster, Münster, Germany
6
Ali S, Sadiq Hashmi MS, Umair M, Beg MA, Huda N. Multiple Mini-Interviews: Current Perspectives on Utility and Limitations. Advances in Medical Education and Practice 2019;10:1031-1038. PMID: 31849557; PMCID: PMC6913247; DOI: 10.2147/amep.s181332
Abstract
The expanding role of healthcare professionals has prompted admissions committees to restructure their selection processes and assess key personal attributes rather than academic achievement alone. Multiple mini-interviews (MMIs) were designed in 2002 to assess such domains in prospective healthcare professionals. Because the MMI is a high-stakes assessment, its utility and limitations need to be explored; the purpose of this article is to review the available evidence to establish that utility. The claim of reliability is examined through studies assessing the effects of the number of stations, the duration of stations, station format and scoring systems, and the number of raters assessing applicants. Similarly, validity is examined through evidence concerning content validity, convergent/divergent correlations, and predictive ability. Finally, acceptability and feasibility are discussed along with limitations. The article concludes with recommendations for further work to address these limitations and enhance the MMI's utility.
Affiliation(s)
- Sobia Ali
- Department of Health Professions Education, Liaquat National Hospital & Medical College, Karachi 74800, Pakistan
- Mehnaz Umair
- Department of Health Professions Education, Liaquat National Hospital & Medical College, Karachi 74800, Pakistan
- Mirza Aroosa Beg
- Department of Medical Education, Sindh Institute of Urology and Transplantation (SIUT), Karachi 74200, Pakistan
- Nighat Huda
- Department of Health Professions Education, Liaquat National Hospital & Medical College, Karachi 74800, Pakistan
7
Manuel RS, Dickens L, Young K. Qualitative Analysis of Multiple Mini Interview Interviewer Comments. Medical Science Educator 2019;29:941-945. PMID: 34457570; PMCID: PMC8368679; DOI: 10.1007/s40670-019-00778-2
Abstract
OBJECTIVE Qualitative studies of the Multiple Mini Interview (MMI) have investigated the attitudes and thoughts of prospective students and interviewers (i.e., raters), but none have examined raters' written assessments. Concerns about what the MMI measures, especially across and within each interview, have sparked investigations into how and what raters are measuring. Raters communicate their evaluations of students through numerical ratings and written comments that provide context for the scores. This study explores raters' written comments to better understand the specific information gathered during the MMI process that contributes to interviewee evaluations.
METHODS Randomized data from two US medical schools were examined, with no numerical scores or other information about the interviewee provided to reviewers. In reviewing the rater comments, common words and phrases were identified to help construct themes that characterized the content (domains). The authors reviewed each other's notes and comments regarding themes and worked together to verify the themes for accuracy.
RESULTS Using a directed approach to content analysis of the raters' comments, the results indicate that raters focus on seven domains: perspective taking, presentation, qualities, communication, coherence, comprehension, and non-verbal. Many rater comments contained multiple themes.
CONCLUSION Raters' MMI comments provide the context for numerical scores, allowing admissions committees to more fully understand a candidate's strengths and weaknesses. Identifying the themes in rater comments can help admissions committees understand more comprehensively which assessment elements raters use and consider important during the MMI evaluation.
Affiliation(s)
- R. Stephen Manuel
- University of Mississippi Medical Center, 2500 North State St., Jackson, MS 39216, USA
- Lesley Dickens
- University of Mississippi Medical Center, 2500 North State St., Jackson, MS 39216, USA
- Kathleen Young
- University of Mississippi Medical Center, 2500 North State St., Jackson, MS 39216, USA
8
Yusoff MSB. Multiple Mini Interview as an admission tool in higher education: Insights from a systematic review. J Taibah Univ Med Sci 2019;14:203-240. PMID: 31435411; PMCID: PMC6695046; DOI: 10.1016/j.jtumed.2019.03.006
Abstract
Objectives Multiple Mini Interviews (MMI) have been conducted across the globe in student selection, particularly in health professions education. This paper reports the validity evidence for the MMI in various educational settings.
Methods A literature search was carried out in the Scopus, Science Direct, Google Scholar, PubMed, and EBSCOhost databases using specific search terms. Each article was appraised by title, abstract, and full text. The selected articles were critically appraised, and relevant information supporting the validity of the MMI in various educational settings was synthesized. The review followed the PRISMA guideline to ensure consistency in reporting systematic review results.
Results A majority of the studies were from Canada (41.54%), followed by the United Kingdom (25.39%), the United States (13.85%), and Australia (9.23%); the remainder (9.24%) were from Germany, Ireland, the United Arab Emirates, Japan, Pakistan, Taiwan, and Malaysia. Most MMIs comprised seven to 12 stations of 10 min each (including a 2-min gap between stations).
Conclusion The results suggest that the content, response process, and internal structure of the MMI were well supported by evidence; however, the relationship of the MMI to important outcome variables, and its consequences, were inconsistently supported. The evidence shows that the MMI is an unbiased, practical, feasible, reliable, and content-valid admission tool. However, further research on its impact on non-cognitive outcomes is required.
Affiliation(s)
- Muhamad S Bahri Yusoff
- Department of Medical Education, School of Medical Sciences, Universiti Sains Malaysia, Kota Bharu, Malaysia
9
Eva KW, Macala C, Fleming B. Twelve tips for constructing a multiple mini-interview. Medical Teacher 2019;41:510-516. PMID: 29373943; DOI: 10.1080/0142159x.2018.1429586
Abstract
Health professions the world over value various competencies in their practitioners that are not easily captured by academic measures of performance. As a result, many programs have begun using multiple mini-interviews (MMIs) to facilitate the selection of candidates who are most likely to demonstrate and further develop such qualities. In this twelve-tips article, the authors offer evidence- and experience-based advice regarding how to construct an MMI that is fit for purpose. The tips are provided chronologically, offering guidance regarding how one might conceptualize their goals for creating an MMI, how to establish a database of stations that are context appropriate, and how to prepare both candidates and examiners for their task. While MMIs have been shown to have utility in many instances, the authors urge caution against over-generalization by stressing the importance of post-MMI considerations including data monitoring and integration between one's admissions philosophy and one's curricular efforts.
Affiliation(s)
- Kevin W Eva
- Department of Medicine, University of British Columbia, Vancouver, Canada
- Catherine Macala
- Department of Medicine, University of British Columbia, Vancouver, Canada
- Bruce Fleming
- Department of Medicine, University of British Columbia, Vancouver, Canada
10
Mirghani I, Mushtaq F, Balkhoyor A, Al-Saud L, Osnes C, Keeling A, Mon-Williams M, Manogue M. The factors that count in selecting future dentists: sensorimotor and soft skills. Br Dent J 2019;226:417-421. DOI: 10.1038/s41415-019-0030-3
11
Castanelli DJ, Moonen-van Loon JMW, Jolly B, Weller JM. The reliability of a portfolio of workplace-based assessments in anesthesia training. Can J Anaesth 2018;66:193-200. PMID: 30430441; DOI: 10.1007/s12630-018-1251-7
Abstract
PURPOSE Competency-based anesthesia training programs require robust assessment of trainee performance and commonly combine different types of workplace-based assessment (WBA) covering multiple facets of practice. This study measured the reliability of WBAs in a large existing database and explored how they could be combined to optimize reliability for assessment decisions.
METHODS We used generalizability theory to measure the composite reliability of four different types of WBAs used by the Australian and New Zealand College of Anaesthetists: mini-Clinical Evaluation Exercise (mini-CEX), direct observation of procedural skills (DOPS), case-based discussion (CbD), and multi-source feedback (MSF). We then modified the number and weighting of WBA combinations to optimize reliability with fewer assessments.
RESULTS We analyzed 67,405 assessments from 1,837 trainees and 4,145 assessors. We assumed acceptable reliability for interim (intermediate-stakes) and final (high-stakes) decisions of 0.7 and 0.8, respectively. When every assessment of any type carried the same weight, 12 assessments allowed the 0.7 threshold to be reached, depending on the combination of WBA types, while 20 were required to reach 0.8. With optimized weighting, acceptable reliability for interim and final decisions is possible with nine (e.g., two DOPS, three CbD, two mini-CEX, two MSF) and 15 (e.g., two DOPS, eight CbD, three mini-CEX, two MSF) assessments, respectively.
CONCLUSIONS Reliability is an important factor to consider when designing assessments, and measuring composite reliability can allow the selection of a WBA portfolio with adequate reliability to provide evidence for defensible decisions on trainee progression.
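The trade-off at the centre of this study, how many assessments are needed to reach a reliability threshold, can be approximated with the Spearman-Brown prophecy formula. This is a simplification of the composite generalizability analysis the authors actually performed, and the single-assessment coefficient below is an illustrative assumption, not a figure from the paper:

```python
def spearman_brown(r_single: float, n: int) -> float:
    """Projected reliability of the mean of n parallel assessments,
    given the reliability of a single assessment."""
    return n * r_single / (1 + (n - 1) * r_single)

def n_needed(r_single: float, target: float) -> int:
    """Smallest number of assessments whose projected reliability
    reaches the target threshold."""
    n = 1
    while spearman_brown(r_single, n) < target:
        n += 1
    return n

r1 = 0.163  # assumed single-WBA reliability (illustrative, not from the paper)
print(n_needed(r1, 0.7))  # assessments needed for interim (intermediate-stakes) decisions
print(n_needed(r1, 0.8))  # assessments needed for final (high-stakes) decisions
```

With this assumed coefficient, the sketch reproduces the qualitative pattern reported above: far more assessments are needed for the 0.8 threshold than for 0.7, and the count falls if more informative assessment types are weighted more heavily.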
Affiliation(s)
- Damian J Castanelli
- School of Clinical Sciences at Monash Health, Monash University, Clayton, VIC, Australia
- Department of Anaesthesia and Perioperative Medicine, Monash Health, Clayton, VIC, Australia
- Joyce M W Moonen-van Loon
- Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Brian Jolly
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Newcastle, NSW, Australia
- Jennifer M Weller
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand
- Department of Anaesthesia, Auckland City Hospital, Auckland, New Zealand
12
Kelly ME, Patterson F, O'Flynn S, Mulligan J, Murphy AW. A systematic review of stakeholder views of selection methods for medical schools admission. BMC Medical Education 2018;18:139. PMID: 29907112; PMCID: PMC6002997; DOI: 10.1186/s12909-018-1235-x
Abstract
BACKGROUND The purpose of this paper is to systematically review the literature on stakeholder views of selection methods for medical school admissions.
METHODS An electronic search of nine databases was conducted covering January 2000 to July 2014. Two reviewers independently assessed all titles (n = 1017) and retained abstracts (n = 233) for relevance. The methodological quality of quantitative papers was assessed using the MERSQI instrument. The overall quality of evidence in this field was low. Evidence was synthesised in a narrative review.
RESULTS Applicants support interviews, particularly multiple mini-interviews (MMIs). There is emerging evidence that situational judgement tests (SJTs) and selection centres (SCs) are also well regarded, but aptitude tests less so. Selectors endorse the use of interviews in general and MMIs in particular, judging them to be fair, relevant, and appropriate, with emerging evidence of similarly positive reactions to SCs. Aptitude tests and academic records were valued in decisions about whom to call to interview. Medical students prefer interview-based selection to cognitive aptitude tests and are unconvinced about the transparency and veracity of written applications. Perceptions of organisational justice, which describe views of fairness in organisational processes, appear to be highly influential on stakeholders' views of the acceptability of selection methods. In particular, procedural justice (the perceived fairness of selection tools in terms of job relevance and test characteristics) and distributive justice (the perceived fairness of selection outcomes in terms of equal opportunity and equity) appear to be important considerations when judging the acceptability of selection methods. There were significant gaps with respect to both key stakeholder groups and the range of selection tools assessed.
CONCLUSIONS Notwithstanding the observed limitations in the quality of research in this field, there appears to be broad concordance of views on the various selection methods across the diverse stakeholder groups. This review highlights the need for better standards, more appropriate methodologies, and a broader scope of stakeholder research.
Affiliation(s)
- M. E. Kelly
- Discipline of General Practice, Clinical Science Institute, National University of Ireland, Galway, Ireland
- J. Mulligan
- Discipline of General Practice, Clinical Science Institute, National University of Ireland, Galway, Ireland
- A. W. Murphy
- Discipline of General Practice, Clinical Science Institute, National University of Ireland, Galway, Ireland
13
Eva KW. Cognitive Influences on Complex Performance Assessment: Lessons from the Interplay between Medicine and Psychology. Journal of Applied Research in Memory and Cognition 2018. DOI: 10.1016/j.jarmac.2018.03.008
14
Anglim J, Bozic S, Little J, Lievens F. Response distortion on personality tests in applicants: comparing high-stakes to low-stakes medical settings. Advances in Health Sciences Education: Theory and Practice 2018;23:311-321. PMID: 29022186; DOI: 10.1007/s10459-017-9796-8
Abstract
The current study examined the degree to which applicants applying for medical internships distort their responses to personality tests and assessed whether this response distortion led to reduced predictive validity. The applicant sample (n = 530) completed the NEO Personality Inventory whilst applying for one of 60 positions as first-year post-graduate medical interns. Predictive validity was assessed using university grades, averaged over the entire medical degree. Applicant responses for the Big Five (i.e., neuroticism, extraversion, openness, conscientiousness, and agreeableness) and 30 facets of personality were compared to a range of normative samples where personality was measured in standard research settings including medical students, role model physicians, current interns, and standard young-adult test norms. Applicants had substantially higher scores on conscientiousness, openness, agreeableness, and extraversion and lower scores on neuroticism with an average absolute standardized difference of 1.03, when averaged over the normative samples. While current interns, medical students, and especially role model physicians do show a more socially desirable personality profile than standard test norms, applicants provided responses that were substantially more socially desirable. Of the Big Five, conscientiousness was the strongest predictor of academic performance in both applicants (r = .11) and medical students (r = .21). Findings suggest that applicants engage in substantial response distortion, and that the predictive validity of personality is modest and may be reduced in an applicant setting.
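The "average absolute standardized difference" reported here is a simple mean of per-trait gaps expressed in normative-SD units. A sketch with hypothetical T-score-style means (the trait values below are invented for illustration, not taken from the study):

```python
import numpy as np

def standardized_difference(applicant_mean: float,
                            norm_mean: float,
                            norm_sd: float) -> float:
    """Absolute difference between applicant and normative means,
    expressed in normative-sample SD units."""
    return abs(applicant_mean - norm_mean) / norm_sd

# Hypothetical T-score means (applicant mean, norm mean, norm SD):
# applicants present as more conscientious and agreeable, less neurotic.
traits = {
    "conscientiousness": (62.0, 50.0, 10.0),
    "neuroticism":       (41.0, 50.0, 10.0),
    "agreeableness":     (58.0, 50.0, 10.0),
}
diffs = [standardized_difference(*v) for v in traits.values()]
print(f"mean |d| across traits: {np.mean(diffs):.2f}")  # prints 0.97 here
```

Averaging such per-trait values over several normative samples yields a summary figure of the kind the study reports (1.03 SD), indicating substantial response distortion in the applicant setting.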
Affiliation(s)
- Jeromy Anglim
- School of Psychology, Deakin University, Locked Bag 20000, Geelong, 3220, Australia
- Stefan Bozic
- School of Psychology, Deakin University, Locked Bag 20000, Geelong, 3220, Australia
- Filip Lievens
- Department of Personnel Management, Work and Organizational Psychology, Ghent University, Ghent, Belgium
15
Callwood A, Jeevaratnam K, Kotronoulas G, Schneider A, Lewis L, Nadarajah VD. Personal domains assessed in multiple mini interviews (MMIs) for healthcare student selection: A narrative synthesis systematic review. Nurse Education Today 2018;64:56-64. PMID: 29459193; DOI: 10.1016/j.nedt.2018.01.016
Abstract
OBJECTIVES To examine the personal domains multiple mini interviews (MMIs) are being designed to assess, explore how they were determined, and contextualise such domains in current and future healthcare student selection processes. DESIGN A systematic review of empirical research reporting on MMI model design, conducted from database inception to November 2017. DATA SOURCES Twelve electronic bibliographic databases. REVIEW METHODS Evidence was extracted from original studies and integrated in a narrative synthesis guided by the PRISMA statement for reporting systematic reviews. Personal domains were clustered into themes using a modified Delphi technique. RESULTS A total of 584 articles were screened. Sixty-five unique studies (80 articles) matched our inclusion criteria, of which seven were conducted within nursing/midwifery faculties. Six in ten studies featured applicants to medical school. Across selection processes, we identified 32 personal domains assessed by MMIs, the most frequent being communication skills (84%), teamwork/collaboration (70%), and ethical/moral judgement (65%). Domains capturing the ability to cope with stressful situations (14%), make decisions (14%), and resolve conflict in the workplace (13%) featured in fewer than ten studies overall. Intra- and inter-disciplinary inconsistencies in domain profiles were noted, as well as differences by entry level. MMIs deployed in nursing and midwifery assessed compassion and decision-making more frequently than those in all other disciplines. A programme's own philosophy and professional body guidance were the most frequently cited (~50%) sources for personal domains; a blueprinting process was reported in only 8% of studies. CONCLUSIONS Nursing, midwifery and allied healthcare professionals should develop their theoretical frameworks for MMIs to ensure they are evidence-based and fit for purpose. We suggest a re-evaluation of domain priorities to ensure that selected students not only have the capacity to offer the highest standards of care but are also able to maintain these standards when facing clinical practice and organisational pressures.
Affiliation(s)
- Alison Callwood
- School of Health Sciences, University of Surrey, Guildford, Surrey GU2 7XH, UK.
- Kamalan Jeevaratnam
- School of Veterinary Medicine, University of Surrey, Guildford, Surrey GU2 7XH, UK
16
Roberts C, Khanna P, Rigby L, Bartle E, Llewellyn A, Gustavs J, Newton L, Newcombe JP, Davies M, Thistlethwaite J, Lynam J. Utility of selection methods for specialist medical training: A BEME (best evidence medical education) systematic review: BEME guide no. 45. MEDICAL TEACHER 2018; 40:3-19. [PMID: 28847200 DOI: 10.1080/0142159x.2017.1367375] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
BACKGROUND Selection into specialty training is a high-stakes and resource-intensive process. While substantial literature exists on selection into medical schools, and there are individual studies in postgraduate settings, there seems to be a paucity of evidence concerning selection systems and the utility of selection tools in postgraduate training environments. AIM To explore, analyze and synthesize the evidence related to selection into postgraduate medical specialty training. METHOD Core bibliographic databases, including PubMed, Ovid Medline, Embase, CINAHL, ERIC and PsycINFO, were searched, and a total of 2640 abstracts were retrieved. After removing duplicates and screening against the inclusion criteria, 202 full papers were coded, of which 116 were included. RESULTS Gaps in underlying selection frameworks were illuminated. Frameworks defined by locally derived selection criteria and heavily weighted towards academic parameters seem to be giving way to the evidencing of competency-based selection approaches in some settings. Regarding selection tools, we found favorable psychometric evidence for multiple mini-interviews, situational judgment tests and clinical problem-solving tests, although the bulk of evidence was mostly limited to the United Kingdom. The evidence around the robustness of curricula vitae, letters of recommendation and personal statements was equivocal. The findings on predictors of past performance were limited to academic criteria, with a paucity of long-term evaluations. The evidence around nonacademic criteria was inadequate to make an informed judgment. CONCLUSIONS While much has been gained in understanding the utility of individual selection methods, the evidence around many of them is equivocal, and the underlying theoretical and conceptual frameworks for designing holistic and equitable selection systems are yet to be developed.
Affiliation(s)
- Chris Roberts
- Primary Care and Medical Education, Sydney Medical School, University of Sydney, New South Wales, Australia
- Priya Khanna
- The Royal Australasian College of Physicians, New South Wales, Australia
- Louise Rigby
- Health Education and Training Institute, New South Wales, Australia
- Emma Bartle
- School of Dentistry, University of Queensland, Queensland, Australia
- Anthony Llewellyn
- Hunter New England Local Health District, New Lambton, Australia
- Health Education and Training Institute, University of Newcastle, Newcastle, Australia
- Julie Gustavs
- The Royal Australasian College of Physicians, New South Wales, Australia
- Libby Newton
- The Royal Australasian College of Physicians, New South Wales, Australia
- Mark Davies
- Royal Brisbane and Women's Hospital, Queensland, Australia
- Jill Thistlethwaite
- School of Communication, University of Technology Sydney, New South Wales, Australia
- James Lynam
- Calvary Mater Newcastle, University of Newcastle, New South Wales, Australia
17
St-Onge C, Young M, Eva KW, Hodges B. Validity: one word with a plurality of meanings. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2017; 22:853-867. [PMID: 27696103 DOI: 10.1007/s10459-016-9716-3] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/13/2016] [Accepted: 09/26/2016] [Indexed: 06/06/2023]
Abstract
Validity is one of the most debated constructs in our field; debates abound about what is legitimate and what is not, and the word continues to be used in ways that are explicitly disavowed by current practice guidelines. The resultant tensions have not been well characterized, yet their existence suggests that different uses may retain some value for the user that needs to be better understood. We conducted an empirical form of Discourse Analysis to document the multiple ways in which validity is described, understood, and used in the health professions education field. We created and analyzed an archive of texts identified from multiple sources, including formal databases such as PubMed, ERIC and PsycINFO as well as the authors' personal assessment libraries. An iterative analytic process was used to identify, discuss, and characterize emerging discourses about validity. Three discourses of validity were identified. Validity as a test characteristic is underpinned by the notion that validity is an intrinsic property of a tool and can, therefore, be seen as content- and context-independent. Validity as an argument-based evidentiary chain emphasizes the importance of supporting the interpretation of assessment results with ongoing analysis, such that validity does not belong to the tool/instrument itself; the emphasis is on process-based validation (the journey rather than the goal). Validity as a social imperative foregrounds the consequences of assessment at the individual and societal levels, be they positive or negative. The existence of different discourses may explain, in part, the results observed in recent systematic reviews that highlighted discrepancies and tensions between recommendations for practice and the validation practices that are actually adopted and reported. Some of these practices, despite contravening accepted validation 'guidelines', may nevertheless respond to different and somewhat unarticulated needs within health professional education.
Affiliation(s)
- Kevin W Eva
- University of British Columbia, Vancouver, Canada
18
Dore KL, Reiter HI, Kreuger S, Norman GR. CASPer, an online pre-interview screen for personal/professional characteristics: prediction of national licensure scores. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2017; 22:327-336. [PMID: 27873137 DOI: 10.1007/s10459-016-9739-9] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/26/2016] [Accepted: 11/16/2016] [Indexed: 05/15/2023]
Abstract
Typically, only a minority of applicants to health professional training are invited to interview. However, pre-interview measures of cognitive skills predict national licensure scores (Gauer et al. in Med Educ Online 21, 2016), and licensure scores in turn predict performance in practice (Tamblyn et al. in JAMA 288(23):3019-3026, 2002; Tamblyn et al. in JAMA 298(9):993-1001, 2007). Assessment of personal and professional characteristics, with the same psychometric rigour as measures of cognitive abilities, is needed upstream in selection to health profession training programs. To fill that need, the Computer-based Assessment for Sampling Personal characteristics (CASPer), an online, video-based screening test, was created. In this paper, we examine the correlation between CASPer and Canadian national licensure examination outcomes in 109 doctors who took CASPer at the time of selection to medical school. Specifically, CASPer scores were correlated against performance on cognitive and 'non-cognitive' subsections of the Medical Council of Canada Qualifying Examination (MCCQE) Part I (end of medical school) and Part II (18 months into specialty training). Unlike most national licensure exams, the MCCQE has specific subcomponents examining personal/professional qualities, providing a unique opportunity for comparison. The results demonstrated moderate predictive validity of CASPer for national licensure outcomes of personal/professional characteristics three to six years after admission to medical school. Disattenuated correlations of this magnitude (r = 0.3-0.5) are not otherwise achieved by traditional screening measures. These data support the ability of a computer-based strategy to screen applicants in a feasible, reliable test that has now demonstrated predictive validity, lending evidence to its validity for medical school applicant selection.
Affiliation(s)
- Kelly L Dore
- Department of Medicine, PERD, 5003/C David Braley Health Sciences Centre, McMaster University, 1280 Main St. W., Hamilton, ON, L8S 4K1, Canada.
- Harold I Reiter
- Department of Oncology, PERD, McMaster University, Hamilton, ON, L8S 4K1, Canada
- Sharyn Kreuger
- PERD, McMaster University, Hamilton, ON, L8S 4K1, Canada
- Geoffrey R Norman
- Department of Clinical Epidemiology and Biostatistics, PERD, McMaster University, Hamilton, ON, L8S 4K1, Canada
19
Yamada T, Sato J, Yoshimura H, Okubo T, Hiraoka E, Shiga T, Kubota T, Fujitani S, Machi J, Ban N. Reliability and acceptability of six station multiple mini-interviews: past-behavioural versus situational questions in postgraduate medical admission. BMC MEDICAL EDUCATION 2017; 17:57. [PMID: 28302124 PMCID: PMC5356352 DOI: 10.1186/s12909-017-0898-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2016] [Accepted: 03/10/2017] [Indexed: 06/06/2023]
Abstract
BACKGROUND The multiple mini-interview (MMI) is increasingly used for postgraduate medical admissions and in undergraduate settings. MMIs use mostly Situational Questions (SQs) rather than Past-Behavioural Questions (PBQs). A previous study of MMIs in this setting, where PBQs and SQs were asked in the same order, reported that the reliability of PBQs was non-inferior to that of SQs and that SQs were more acceptable to candidates. The order in which the questions are asked may affect the reliability and acceptability of an MMI. This study investigated the reliability of an MMI using both PBQs and SQs while minimising question-order bias. The acceptability of PBQs and SQs was also assessed. METHODS Forty candidates applying for postgraduate medical admission for 2016-2017 were included; 24 examiners took part. The MMI consisted of six stations with one examiner per station; a PBQ and an SQ were asked at every station, and the order of questions was alternated between stations. Reliability was analysed for scores obtained for PBQs or SQs separately, and for both questions combined. A post-MMI survey was used to assess the acceptability of PBQs and SQs. RESULTS The generalisability (G) coefficients for PBQs only, SQs only, and both questions were 0.87, 0.96, and 0.80, respectively. Decision studies suggested that a four-station MMI would also be sufficiently reliable (G-coefficients 0.82 and 0.94 for PBQs and SQs, respectively). In total, 83% of participants were satisfied with the MMI. In terms of face validity, PBQs were more acceptable than SQs for candidates (p = 0.01) but equally acceptable for examiners (88% vs. 83% positive responses for PBQs vs. SQs; p = 0.377). Candidates preferred PBQs to SQs when asked to choose one, though this difference was not significant (p = 0.081); examiners showed a clear preference for PBQs (p = 0.007). CONCLUSIONS The reliability and acceptability of the six-station MMI were good among 40 postgraduate candidates; modelling suggested that four stations would also be reliable. SQs were more reliable than PBQs, while candidates found PBQs more acceptable than SQs, and examiners preferred PBQs when asked to choose between the two. Our findings suggest that it is better to ask both PBQs and SQs during an MMI to maximise acceptability.
Affiliation(s)
- Toru Yamada
- Department of General Medicine/Family & Community Medicine, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Department of Internal Medicine, Tokyo Bay Urayasu Ichikawa Medical Center, Urayasu, Chiba, Japan
- Juichi Sato
- Department of General Medicine/Family & Community Medicine, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Hiroshi Yoshimura
- Educational Committee, Prefectural Okinawa Nanbu and Children's Medical Centre, Haebaru Town, Okinawa, Japan
- Tomoya Okubo
- Research Division, The National Center for University Entrance Examinations, Tokyo, Japan
- Eiji Hiraoka
- Department of Internal Medicine, Tokyo Bay Urayasu Ichikawa Medical Center, Urayasu, Chiba, Japan
- Takashi Shiga
- Department of Emergency and Critical Care Medicine, Tokyo Bay Urayasu Ichikawa Medical Center, Urayasu, Chiba, Japan
- Tadao Kubota
- Department of Surgery, Tokyo Bay Urayasu Ichikawa Medical Center, Urayasu, Chiba, Japan
- Shigeki Fujitani
- Educational Committee, Tokyo Bay Urayasu Ichikawa Medical Center, Urayasu, Chiba, Japan
- Emergency Medicine and Critical Care Medicine, St. Marianna University, Kawasaki, Kanagawa, Japan
- Junji Machi
- Educational Committee, Tokyo Bay Urayasu Ichikawa Medical Center, Urayasu, Chiba, Japan
- Department of Surgery, University of Hawaii, Honolulu, HI, USA
- Nobutaro Ban
- Department of General Medicine/Family & Community Medicine, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
20
Leduc JM, Rioux R, Gagnon R, Bourdy C, Dennis A. Impact of sociodemographic characteristics of applicants in multiple mini-interviews. MEDICAL TEACHER 2017; 39:285-294. [PMID: 28024439 DOI: 10.1080/0142159x.2017.1270431] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
BACKGROUND Multiple mini-interviews (MMIs) are commonly used for medical school admission. This study aimed to assess whether sociodemographic characteristics are associated with MMI performance, and how they may act as barriers or enablers to communication in MMIs. METHODS This mixed-methods study combined data from a sociodemographic questionnaire, MMI scores, and semi-structured interviews and focus groups with applicants and assessors. Quantitative and qualitative data were analyzed using multiple linear regression and a thematic framework analysis. RESULTS 1099 applicants responded to the questionnaire. A regression model (R2 = 0.086) demonstrated that being aged 25-29 (β = 0.11, p = 0.001), female, and a French speaker (β = 0.22, p = 0.003) were associated with better MMI scores. Having an Asian-born parent was associated with a lower score (β = -0.12, p < 0.001). Candidates reporting a higher family income had higher MMI scores. In the qualitative data, participants discussed how maturity and financial support improved life experiences, how language could act as a barrier, and how ethnocultural differences could lead to misunderstandings. CONCLUSION Age, gender, ethnicity, socioeconomic status and language seem to be associated with applicants' MMI scores because of perceived differences in communication skills and life experiences. Monitoring these associations may provide guidance to improve the fairness of MMI stations.
Affiliation(s)
- Jean-Michel Leduc
- Division of Medical Microbiology and Infectious Diseases, Hôpital du Sacré-Coeur de Montréal, Montréal, Canada
- Richard Rioux
- Health and Society Institute, Université du Québec à Montréal, Montréal, Canada
- Robert Gagnon
- Center of Pedagogy Applied to Health Sciences, Université de Montréal, Montréal, Canada
- Christian Bourdy
- Department of Family Medicine and Emergency Medicine, Université de Montréal, Montréal, Canada
- Ashley Dennis
- Centre for Medical Education, University of Dundee, Dundee, UK
21
Yeung E, Kulasagarem K, Woods N, Dubrowski A, Hodges B, Carnahan H. Validity of a new assessment rubric for a short-answer test of clinical reasoning. BMC MEDICAL EDUCATION 2016; 16:192. [PMID: 27461249 PMCID: PMC4962495 DOI: 10.1186/s12909-016-0714-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/19/2015] [Accepted: 07/23/2016] [Indexed: 06/06/2023]
Abstract
BACKGROUND The validity of high-stakes decisions derived from assessment results is of primary concern to candidates and certifying institutions in the health professions. In the field of orthopaedic manual physical therapy (OMPT), there is a dearth of documented validity evidence to support the certification process, particularly for short-answer tests. To address this need, we examined the internal structure of the Case History Assessment Tool (CHAT), a new assessment rubric developed to appraise written responses to a short-answer test of clinical reasoning in post-graduate OMPT certification in Canada. METHODS Fourteen physical therapy students (novices) and 16 physical therapists (PTs) with minimal and substantial OMPT training, respectively, completed a mock examination. Four pairs of examiners (n = 8) appraised the written responses using the CHAT. We conducted separate generalizability studies (G studies) for all participants and also by level of OMPT training. Internal consistency was calculated for test questions with more than two assessment items. Decision studies were also conducted to determine the optimal application of the CHAT for OMPT certification. RESULTS The overall reliability of CHAT scores was moderate; however, reliability estimates for the novice group suggest that the scale was incapable of accommodating the scores of novices. Internal consistency estimates indicate item redundancies for several test questions, which will require further investigation. CONCLUSION Future validity studies should consider discriminating the clinical reasoning competence of OMPT trainees strictly at the post-graduate level. Although rater variance was low, the large variance attributed to error sources not incorporated in our G studies warrants further investigation into other threats to validity. Future examination of examiner stringency is also warranted.
Affiliation(s)
- Euson Yeung
- Department of Rehabilitation Sciences, University of Toronto, 160-500 University Avenue, Toronto, ON M5G 1V7, Canada
- The Wilson Centre for Research in Education, University Health Network, Toronto, Canada
- Kulamakan Kulasagarem
- Department of Family and Community Medicine, University of Toronto, Toronto, Canada
- The Wilson Centre for Research in Education, University Health Network, Toronto, Canada
- Nicole Woods
- Department of Surgery, University of Toronto, Toronto, Canada
- The Wilson Centre for Research in Education, University Health Network, Toronto, Canada
- Adam Dubrowski
- Division of Emergency Medicine, Memorial University of Newfoundland, St John's, Canada
- Brian Hodges
- Faculty of Medicine, University of Toronto, Toronto, Canada
- The Wilson Centre for Research in Education, Richard and Elizabeth Currie Chair in Health Professions Education Research, University Health Network, Toronto, Canada
- Heather Carnahan
- School of Human Kinetics and Recreation, Memorial University of Newfoundland, St John's, Canada
22
How Different Medical School Selection Processes Call upon Different Personality Characteristics. PLoS One 2016; 11:e0150645. [PMID: 26959489 PMCID: PMC4784968 DOI: 10.1371/journal.pone.0150645] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2015] [Accepted: 02/16/2016] [Indexed: 11/19/2022] Open
Abstract
Background Research indicates that certain personality traits relate to performance in the medical profession. Yet, personality testing during selection seems ineffective. In this study, we examine the extent to which different medical school selection processes call upon desirable personality characteristics in applicants. Methods 1019 of all 1055 students who entered the Dutch Bachelor of Medicine at the University of Groningen, the Netherlands, in 2009, 2010 and 2011 were included in this study. Students were admitted based on top pre-university grades (n = 139), acceptance in a voluntary multifaceted selection process (n = 286), or a lottery weighted for pre-university GPA. Within the lottery group, we distinguished between students who had not participated (n = 284) and students who were initially rejected (n = 310) in the voluntary selection process. Two months after admission, personality was assessed with the NEO-FFI, a measure of the five-factor model of personality. We performed ANCOVA modelling with gender as a covariate to examine personality differences between the four groups. Results The multifaceted selection group scored higher on extraversion than all other groups (p < 0.01), higher on conscientiousness than both lottery-admitted groups (p < 0.01), and lower on neuroticism than the lottery-admitted group that had not participated in the voluntary selection process. The latter group scored lower on conscientiousness than all other groups (p < 0.05) and lower on agreeableness than the multifaceted selection group and the top pre-university group (p < 0.01). Conclusions Differences between the four admission groups, though statistically significant, were relatively small. Personality scores in the group admitted through the voluntary multifaceted selection process seemed most fit for the medical profession; scores in the lottery-admitted group that had not participated in this process seemed least fit. It seems that, in order to select applicants with suitable personalities, an admission process that calls upon desirable personality characteristics is beneficial.
23
Patterson F, Knight A, Dowell J, Nicholson S, Cousans F, Cleland J. How effective are selection methods in medical education? A systematic review. MEDICAL EDUCATION 2016; 50:36-60. [PMID: 26695465 DOI: 10.1111/medu.12817] [Citation(s) in RCA: 251] [Impact Index Per Article: 31.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/05/2014] [Revised: 11/13/2014] [Accepted: 06/08/2015] [Indexed: 05/20/2023]
Abstract
CONTEXT Selection methods used by medical schools should reliably identify whether candidates are likely to be successful in medical training and ultimately become competent clinicians. However, there is little consensus regarding methods that reliably evaluate non-academic attributes, and longitudinal studies examining predictors of success after qualification are insufficient. This systematic review synthesises the extant research evidence on the relative strengths of various selection methods. We offer a research agenda and identify key considerations to inform policy and practice in the next 50 years. METHODS A formalised literature search was conducted for studies published between 1997 and 2015. A total of 194 articles met the inclusion criteria and were appraised in relation to: (i) selection method used; (ii) research question(s) addressed, and (iii) type of study design. RESULTS Eight selection methods were identified: (i) aptitude tests; (ii) academic records; (iii) personal statements; (iv) references; (v) situational judgement tests (SJTs); (vi) personality and emotional intelligence assessments; (vii) interviews and multiple mini-interviews (MMIs), and (viii) selection centres (SCs). The evidence relating to each method was reviewed against four evaluation criteria: effectiveness (reliability and validity); procedural issues; acceptability, and cost-effectiveness. CONCLUSIONS Evidence shows clearly that academic records, MMIs, aptitude tests, SJTs and SCs are more effective selection methods and are generally fairer than traditional interviews, references and personal statements. However, achievement in different selection methods may differentially predict performance at the various stages of medical education and clinical practice. Research into selection has been over-reliant on cross-sectional study designs and has tended to focus on reliability estimates rather than validity as an indicator of quality. 
A comprehensive framework of outcome criteria should be developed to allow researchers to interpret empirical evidence and compare selection methods fairly. This review highlights gaps in evidence for the combination of selection tools that is most effective and the weighting to be given to each tool.
Affiliation(s)
- Fiona Patterson
- Department of Organisational Psychology, City University, London, UK
- Jon Dowell
- School of Medicine, University of Dundee, Dundee, UK
- Sandra Nicholson
- Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, UK
- Jennifer Cleland
- School of Medicine and Dentistry, University of Aberdeen, Aberdeen, UK
24
Yoshimura H, Kitazono H, Fujitani S, Machi J, Saiki T, Suzuki Y, Ponnamperuma G. Past-behavioural versus situational questions in a postgraduate admissions multiple mini-interview: a reliability and acceptability comparison. BMC MEDICAL EDUCATION 2015; 15:75. [PMID: 25890189 PMCID: PMC4427914 DOI: 10.1186/s12909-015-0361-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2014] [Accepted: 03/30/2015] [Indexed: 05/18/2023]
Abstract
BACKGROUND The Multiple Mini-Interview (MMI) mostly uses 'Situational' Questions (SQs) as the interview format within a station, rather than 'Past-Behavioural' Questions (PBQs), which are most frequently adopted in traditional single-station personal interviews (SSPIs) for non-medical and medical selection. This study investigated the reliability and acceptability of a postgraduate admissions MMI using both PBQ and SQ interview formats within MMI stations. METHODS Twenty-six Japanese medical graduates, who had first completed the two-year national obligatory initial postgraduate clinical training programme, applied to three specialty training programmes (internal medicine, general surgery, and emergency medicine) in a Japanese teaching hospital, where they underwent an Accreditation Council for Graduate Medical Education (ACGME) competency-based MMI. This MMI contained five stations, with two examiners per station. In each station, a PBQ and then an SQ were asked consecutively. The PBQ and SQ formats were not separated into different stations, nor was the order of questioning varied between stations, owing to a lack of space and experienced examiners. Reliability was analysed for the scores of these two MMI question types. Candidates and examiners were surveyed on their experience. RESULTS The PBQ and SQ formats had generalisability coefficients of 0.822 and 0.821, respectively. With one examiner per station, seven stations could produce a reliability of more than 0.80 in both formats. More than 60% of both candidates and examiners felt positive about the assessment of overall candidate ability. All participants liked the fairness of this MMI compared with the previously experienced SSPI. SQs were perceived more favourably by candidates; in contrast, PBQs were perceived as more relevant by examiners. CONCLUSIONS Both PBQs and SQs are equally reliable and acceptable as station interview formats in the postgraduate admissions MMI. However, using the two formats within the same station, in a fixed order, is not optimal for maximising the MMI's utility as an admission test. Future studies are required to evaluate how best SQs and PBQs should be combined as station interview formats to enhance the reliability, feasibility, acceptability and predictive validity of the MMI.
Affiliation(s)
- Hiroshi Yoshimura
- Educational Committee, Prefectural Okinawa Nanbu and Children's Medical Centre, Haebaru Town, Okinawa Prefecture, Japan.
- Educational Committee, Tokyo Bay Urayasu-Ichikawa Medical Centre, Urayasu City, Chiba Prefecture, Japan.
- Department of Surgery, University of Hawaii, John A. Burns School of Medicine, Honolulu, State of Hawaii, USA.
- Hidetaka Kitazono
- Educational Committee, Tokyo Bay Urayasu-Ichikawa Medical Centre, Urayasu City, Chiba Prefecture, Japan.
- Shigeki Fujitani
- Educational Committee, Tokyo Bay Urayasu-Ichikawa Medical Centre, Urayasu City, Chiba Prefecture, Japan.
- Junji Machi
- Educational Committee, Tokyo Bay Urayasu-Ichikawa Medical Centre, Urayasu City, Chiba Prefecture, Japan.
- Department of Surgery, University of Hawaii, John A. Burns School of Medicine, Honolulu, State of Hawaii, USA.
- Takuya Saiki
- Medical Education Development Centre, Faculty of Medicine, Gifu University, Gifu City, Gifu Prefecture, Japan.
- Yasuyuki Suzuki
- Medical Education Development Centre, Faculty of Medicine, Gifu University, Gifu City, Gifu Prefecture, Japan.
- Gominda Ponnamperuma
- Faculty of Medicine, University of Colombo, Colombo, Western Province, Sri Lanka.
25
Phillips AW, Garmel GM. Does the multiple mini-interview address stakeholder needs? An applicant's perspective. Ann Emerg Med 2014; 64:316-9. [PMID: 24743102 DOI: 10.1016/j.annemergmed.2014.01.021] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2013] [Revised: 12/18/2013] [Accepted: 01/21/2014] [Indexed: 11/29/2022]
Affiliation(s)
- Gus M Garmel
- Stanford/Kaiser Emergency Medicine Residency Program, Stanford, CA; Kaiser Permanente Medical Center, Santa Clara, CA