1. Grierson L, Lee M, Mahmud M, Profetto J, Sibbald M, Whyte R, Vanstone M. A survey of medical school aspirant perceptions of an unexpected lottery-facilitated admissions adaptation. J Eval Clin Pract 2024; 30:678-686. PMID: 38622886. DOI: 10.1111/jep.13994.
Abstract
INTRODUCTION Due to the COVID-19 pandemic, the Undergraduate Medical Doctor (MD) Programme at McMaster University (Hamilton, Canada) was unable to run in-person medical school interviews in March 2020, prompting an alternate solution that maximised admission opportunities for Indigenous applicants, prioritised admission for those rated most highly in the interview determination process, and allocated subsequent offers via lottery. METHODS A short survey was administered to applicants who had been offered an admissions interview and were subsequently affected by the admissions adaptations. The survey elicited perceptions of the adaptation through Likert-scale ratings and free-text responses. Survey data were analysed via a sequential (quantitative to qualitative) mixed-methods design. RESULTS Of 552 potential participants, 196 completed the survey. Across quantitative and qualitative analyses, respondents reported that the adaptation had a negative impact on their professional development and personal lives. Negative perceptions were rated more strongly by those who did not receive an offer than by those who accepted or declined one. Free-text responses conveyed considerable criticism of the lottery portion of the adaptation and displeasure that the effort invested in constructing applications was less relevant than anticipated. DISCUSSION The negative responses to this unexpected change highlight the profound upstream impact admission policies have on the preapplication behaviours of aspiring medical students. The outcomes support a refined understanding of the value candidates place on the interview in appraising their own suitability for a career as a physician.
Affiliation(s)
- Lawrence Grierson
- Department of Family Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- McMaster Education Research, Innovation, and Theory (MERIT), Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Mark Lee
- McMaster Education Research, Innovation, and Theory (MERIT), Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Meera Mahmud
- Department of Family Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Jason Profetto
- Department of Family Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Undergraduate MD Program, Michael G. DeGroote School of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Matthew Sibbald
- McMaster Education Research, Innovation, and Theory (MERIT), Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Undergraduate MD Program, Michael G. DeGroote School of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Division of Cardiology, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Robert Whyte
- Undergraduate MD Program, Michael G. DeGroote School of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Department of Anesthesia, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Meredith Vanstone
- Department of Family Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
2. Kulasegaram K, Baxan V, Giannone E, Latter D, Hanson MD. Adapting the Admissions Interview During COVID-19: A Comparison of In-Person and Video-Based Interview Validity Evidence. Acad Med 2022; 97:200-206. PMID: 34348379. PMCID: PMC8779599. DOI: 10.1097/ACM.0000000000004331.
Abstract
COVID-19 physical distancing limited many medical schools' abilities to conduct in-person interviews for the 2020 admissions cycle. The University of Toronto (U of T) Temerty Faculty of Medicine was already in the midst of its interview process, with two-thirds of applicants having completed the in-person modified personal interview (MPI). As the university and surrounding region were shut down, the shift was made in the middle of the application cycle to a semisynchronous video-based MPI (vMPI) approach. U of T undertook the development, deployment, and evaluation of the 2 approaches mid-admissions cycle. Existing resources and tools were used to create a tailored interview process with the assistance of applicants. The vMPI was similar in content and process to the MPI: a 4-station interview with each station mapped to attributes relevant to medical school success. Instead of live interviews, applicants recorded 5-minute responses to questions for each station using their own hardware. These responses were later assessed asynchronously by raters. Of the 627 applicants, 232 completed the vMPI. Validity evidence was generated for the vMPI and compared with that for the MPI on internal structure, relationships to other variables, and consequential validity, including applicant and interviewer acceptability. Overall, the vMPI demonstrated reliability and factor structure similar to the MPI's. As with the MPI, applicant performance was predicted by nonacademic screening tools but not academic measures. Applicant acceptance of the vMPI was positive. Most interviewers found the vMPI acceptable and reported confidence in their ratings. Continuing physical-distancing concerns mean admissions committees will require multiple options for selecting medical students. The vMPI is an example of a customized approach that schools can implement, and it may have advantages for selection beyond the COVID-19 pandemic. Future evaluation will examine additional validity evidence for the tool.
Affiliation(s)
- Kulamakan Kulasegaram
- K. Kulasegaram is associate professor, Department of Family and Community Medicine, and scientist, MD Program and The Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Victorina Baxan
- V. Baxan is associate registrar for admissions, MD Program, Temerty Faculty of Medicine, and lecturer, Ontario Institute for Studies in Education, University of Toronto, Toronto, Ontario, Canada
- Elicia Giannone
- E. Giannone is enrolment coordinator, Enrolment Services—Undergraduate Medical Education, MD Program and The Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- David Latter
- D. Latter is professor of surgery and director, MD Admissions and Student Finances, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Mark D. Hanson
- M.D. Hanson is professor of psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; ORCID: https://orcid.org/0000-0002-0820-4521
3. Peeters MJ. Moving beyond Cronbach's Alpha and Inter-Rater Reliability: A Primer on Generalizability Theory for Pharmacy Education. Innov Pharm 2021; 12. PMID: 34007684. PMCID: PMC8102977. DOI: 10.24926/iip.v12i1.2131.
Abstract
BACKGROUND When available, empirical evidence should help guide decision-making. Following each administration of a learning assessment, data become available for analysis. For learning assessments, Kane's Framework for Validation can helpfully categorize evidence by inference (i.e., scoring, generalization, extrapolation, implications). Especially for test scores used within a high-stakes setting, generalization evidence is critical. While reporting Cronbach's alpha, inter-rater reliability, and other reliability coefficients for a single source of measurement error is somewhat common in pharmacy education, dealing with multiple concurrent sources of measurement error within complex learning assessments is not. Performance-based assessments (e.g., OSCEs) that use raters are inherently complex learning assessments. PRIMER Generalizability Theory (G-Theory) can account for multiple sources of measurement error. G-Theory is a powerful tool that can provide a composite reliability (i.e., generalization evidence) for more complex learning assessments, including performance-based assessments. It can also help educators explore ways to make a learning assessment more rigorous if needed, as well as suggest ways to better allocate resources (e.g., staffing, space, funds). A brief review of G-Theory, focused on pharmacy education, is provided herein. MOVING FORWARD G-Theory has been common and useful in medical education, though it has rarely been used in pharmacy education. Given the similarities in assessment methods among the health professions, G-Theory should prove helpful in pharmacy education as well. Within this Journal and accompanying this Idea Paper, there are multiple reports that demonstrate the use of G-Theory in pharmacy education.
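As a concrete illustration of the kind of decomposition this primer describes, the sketch below estimates variance components for the simplest G-Theory design — persons fully crossed with raters, one score per cell — and computes a relative generalizability coefficient. This is a minimal sketch for illustration only; the data and function name are hypothetical, not taken from the article.

```python
# Minimal G-Theory sketch: persons (p) crossed with raters (r), one
# observation per cell. Variance components come from the two-way
# ANOVA mean squares; the relative G coefficient is
#   sigma2_p / (sigma2_p + sigma2_pr_e / n_r).

def g_coefficient(scores):
    """scores: list of rows, one row of rater scores per person."""
    n_p = len(scores)        # number of persons
    n_r = len(scores[0])     # number of raters
    grand = sum(sum(row) for row in scores) / (n_p * n_r)
    p_means = [sum(row) / n_r for row in scores]
    r_means = [sum(scores[p][r] for p in range(n_p)) / n_p
               for r in range(n_r)]

    ms_p = n_r * sum((m - grand) ** 2 for m in p_means) / (n_p - 1)
    ss_res = sum((scores[p][r] - p_means[p] - r_means[r] + grand) ** 2
                 for p in range(n_p) for r in range(n_r))
    ms_pr = ss_res / ((n_p - 1) * (n_r - 1))

    sigma2_p = max((ms_p - ms_pr) / n_r, 0.0)  # true person variance
    sigma2_pr_e = ms_pr                        # interaction + error
    # Relative error averages the interaction/error term over raters.
    # (The rater main effect would enter an absolute Phi coefficient,
    # omitted here for brevity.)
    return sigma2_p / (sigma2_p + sigma2_pr_e / n_r)

# Hypothetical OSCE-style ratings: 3 candidates scored by 2 raters.
print(round(g_coefficient([[1, 2], [5, 4], [9, 9]]), 3))  # → 0.982
```

Adding raters shrinks the error term `sigma2_pr_e / n_r`, which is exactly the kind of "what if we doubled the raters?" decision-study question the primer points to.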
4. Kulkarni CA, Rasasingham R, Woods NN, Gorman DA, Szatmari P, Hanson MD. Case Report: Defining Applicant Attributes to Be Prioritized in the Selection of Child and Adolescent Psychiatry Subspecialty Residents at the University of Toronto. Front Psychiatry 2021; 12:650317. PMID: 33959054. PMCID: PMC8093509. DOI: 10.3389/fpsyt.2021.650317.
Abstract
Background/Objectives: The child and adolescent psychiatry (CAP) subspecialty training program at the University of Toronto was among the first fully accredited CAP programs in Canada. As one of Canada's largest CAP subspecialty programs, we attract many excellent applicants annually. While objectivity and transparency in the selection of candidates have been valued, it was unclear which applicant attributes should be prioritized. This quality-improvement project was undertaken to identify the key applicant attributes that should be prioritized for admission to the program. Materials/Methods: An initial list of attributes was compiled by project team members, and feedback was solicited. Through iterative design, this list was categorized into "end products," "branding attributes," and "generic attributes." The "end products" were removed, as these represented outputs of training rather than attributes on which applicant selection should be based. Subsequent steps involved only the "branding" and "generic" attributes. A consensus-building exercise led to the creation of two short-lists of five attributes within each category. Finally, a paired-comparison forced-choice methodology was used to rank these attributes in order of importance when assessing applicants. Results: The final lists of "generic" and "branding" attributes developed through the consensus-building exercise are presented in rank order based on the paired-comparison methodology. The overall response rate for the forced-choice electronic survey was 49% of faculty and learners. Conclusions/Discussion: This project used an iterative process of consensus building and pairwise comparison to prioritize key attributes for assessing trainee selection to the program. Going forward, these attributes will be incorporated into the file-review and interview portions of our admissions process. In addition to emphasizing these priority attributes in admissions, there are implications for other aspects of the program, including curriculum and faculty development, as well as guiding the overall mission and vision for the Division. A similar process could be undertaken by other training programs seeking to identify priority attributes for admission to their programs.
Affiliation(s)
- Chetana A Kulkarni
- Department of Psychiatry, University of Toronto, Toronto, ON, Canada; Department of Psychiatry, Hospital for Sick Children (SickKids), University of Toronto, Toronto, ON, Canada
- Raj Rasasingham
- Department of Psychiatry, University of Toronto, Toronto, ON, Canada; Humber River Regional Hospital, Toronto, ON, Canada
- Nicole N Woods
- The Wilson Centre, University Health Network, Toronto, ON, Canada
- Daniel A Gorman
- Department of Psychiatry, University of Toronto, Toronto, ON, Canada; Department of Psychiatry, Hospital for Sick Children (SickKids), University of Toronto, Toronto, ON, Canada
- Peter Szatmari
- Department of Psychiatry, University of Toronto, Toronto, ON, Canada; Department of Psychiatry, Hospital for Sick Children (SickKids), University of Toronto, Toronto, ON, Canada; Centre for Addiction & Mental Health, Toronto, ON, Canada
- Mark D Hanson
- Department of Psychiatry, University of Toronto, Toronto, ON, Canada; Department of Psychiatry, Hospital for Sick Children (SickKids), University of Toronto, Toronto, ON, Canada
5.
Abstract
Medical school interviews are critical for screening candidates for admission. Traditionally, the panel format has been used for this process, although its low reliability sparked the creation of the highly reliable multiple mini-interview (MMI). However, the MMI's feasibility issues made it unappealing to some institutions, such as the University of Toronto, which created the modified personal interview (MPI) as a more feasible alternative. The lack of literature about the MPI, however, prevents the medical community from determining whether this interview format achieves that goal. Therefore, evidence for the MPI was compiled and critically appraised using Kane's validity framework, which enables analysis of four levels of inference (Scoring, Generalization, Extrapolation, Implication). Upon examining each level, it was concluded that assumptions made at the 'Scoring' and 'Generalization' levels had the least support. Based on these findings, it was recommended that in-person rater training become mandatory and that the number of stations be doubled from four to eight. Moreover, the following research initiatives were suggested to improve understanding of and evidence for the MPI: (1) formally blueprint each station; (2) conduct predictive validity studies of the MPI; and (3) relate MPI-based admission to medical school to downstream medical error rates. By making these changes and studying these initiatives, the MPI can become a more feasible yet equally effective alternative to the MMI, with more evidence to justify its implementation at other medical schools.
Affiliation(s)
- Dilshan Pieris
- Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada.
6. Svicher A, Cosci F, Giannini M, Pistelli F, Fagerström K. Item Response Theory analysis of Fagerström Test for Cigarette Dependence. Addict Behav 2018; 77:38-46. PMID: 28950117. DOI: 10.1016/j.addbeh.2017.09.005.
Abstract
INTRODUCTION The Fagerström Test for Cigarette Dependence (FTCD) and the Heaviness of Smoking Index (HSI) are the gold-standard measures for assessing cigarette dependence. However, the FTCD's reliability and factor structure have been questioned, and the HSI's psychometric properties are in need of further investigation. The present study examined the psychometric properties of the FTCD and the HSI via Item Response Theory. METHODS The study was a secondary analysis of data collected from 862 Italian daily smokers. Confirmatory factor analysis was run to evaluate the dimensionality of the FTCD. A Graded Response Model was applied to the FTCD and the HSI to assess fit to the data. Both item and test functioning were analyzed, and item statistics, the Test Information Function, and scale reliabilities were calculated. Mokken Scale Analysis was applied to estimate homogeneity, and Loevinger's coefficients were calculated. RESULTS The FTCD showed unidimensionality and homogeneity for most of the items and for the total score. It also showed high sensitivity and good reliability from medium to high levels of cigarette dependence, although problems related to some items (i.e., items 3 and 5) were evident. The HSI had good homogeneity, adequate item functioning, and high reliability from medium to high levels of cigarette dependence. Significant Differential Item Functioning was found for items 1, 4, and 5 of the FTCD and for both items of the HSI. CONCLUSIONS The HSI seems well suited to clinical settings focused on heavy smokers, while the FTCD may be preferable for smokers whose level of cigarette dependence ranges from low to high.
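The Graded Response Model applied in this study expresses, for each polytomous item, the probability of responding in an ordered category as the difference between adjacent cumulative logistic curves. The sketch below illustrates those category probabilities; the discrimination and threshold values are hypothetical, not the FTCD estimates from the article.

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Samejima's graded response model: return P(X = k | theta) for
    each ordered category of one item, given discrimination a and
    ordered thresholds b_1 < ... < b_{K-1}."""
    def p_star(b):
        # Cumulative probability of responding at or above the
        # category boundary b (a 2PL logistic curve).
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    # Probability of landing exactly in category k is the difference
    # of adjacent cumulative probabilities.
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# Hypothetical 4-category item: discrimination 1.5, thresholds -1, 0, 1.
probs = grm_category_probs(theta=0.5, a=1.5, thresholds=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])  # four probabilities summing to 1
```

The item information that underlies the Test Information Function mentioned above is derived from these same category curves, which is why items with flat curves (such as the flagged FTCD items) contribute little measurement precision.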