1. Grier DD, Turner L, Prichard TJ, Oaks A, Nolan D, Shomo AS, Dunlavy D, Batisky DL. Virtual and In-Person Multiple Mini-interviews: A Comparison of Two Modalities in Regard to Bias. Med Sci Educ 2024; 34:1479-1485. PMID: 39758481; PMCID: PMC11699074; DOI: 10.1007/s40670-024-02142-5.
Abstract
Purpose: To examine differences in performance between applicant-reported gender identity and racial groups on the virtual multiple mini-interview (vMMI) versus the in-person multiple mini-interview (ipMMI).
Methods: Retrospective multiple mini-interview (MMI) data were analyzed from two vMMI cycles (2021 and 2022) comprising 627 applicants and four ipMMI cycles (2017-2020) comprising 2248 applicants. Applicant subgroups were compared by reported gender (male and female) and minority status (underrepresented in medicine [URiM] and non-URiM). A three-way analysis of variance (ANOVA) was conducted to examine the effects of gender, URiM status, and interview modality (in-person vs. virtual) on MMI scores.
Results: There were no significant overall differences between annual ipMMI and vMMI scores. A significant main effect of gender was observed, with females scoring higher than males overall, and an interaction between gender and URiM status was found. In the virtual format, URiM applicants on average scored higher than non-URiM applicants, although this difference was not statistically significant. In both formats, URiM males tended to score lower than non-URiM males, a difference that was also not statistically significant. URiM females scored higher than non-URiM females in the vMMI, and this difference was statistically significant.
Conclusions: The switch to the vMMI produced no significant overall differences between the in-person and virtual formats; however, the stronger performance of URiM females in the virtual setting is a novel finding. Its cause is unknown but most likely reflects a complex interaction between race and gender. This finding warrants further study and adds to the evidence that the MMI is an admissions tool that can mitigate bias.
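The three-way factorial analysis described above is easy to illustrate. Below is a minimal Python sketch using pandas and statsmodels, assuming an applicant-level table with hypothetical columns score, gender, urim, and modality; the synthetic data stand in for the study's actual MMI records.

```python
# Minimal sketch of a three-way ANOVA of MMI scores on gender, URiM status,
# and interview modality, as described in the abstract. Column names and the
# synthetic data are assumptions for illustration, not the authors' dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "gender": rng.choice(["male", "female"], n),
    "urim": rng.choice(["URiM", "non-URiM"], n),
    "modality": rng.choice(["in-person", "virtual"], n),
    "score": rng.normal(70, 8, n),  # placeholder MMI scores
})

# Full factorial model: main effects plus all two- and three-way interactions.
model = smf.ols("score ~ C(gender) * C(urim) * C(modality)", data=df).fit()
print(anova_lm(model, typ=2))  # Type II sums of squares
```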
Affiliation(s)
- David D. Grier
  - Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
  - Division of Pathology, Cincinnati Children’s Hospital Medical Center, 3333 Burnet Avenue, MLC 1035, Cincinnati, OH 45229-3029, USA
- Laurah Turner
  - University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Andrea Oaks
  - University of Cincinnati College of Medicine, Cincinnati, OH, USA
- David Nolan
  - University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Anisa S. Shomo
  - Department of Family and Community Medicine, University of Cincinnati, Cincinnati, OH, USA
- Dustin Dunlavy
  - University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Donald L. Batisky
  - Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
  - University of Cincinnati College of Medicine, Cincinnati, OH, USA
  - Division of Nephrology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
2. Ellison HB, Grabowski CJ, Schmude M, Costa JB, Naemi B, Schmidt M, Patel D, Westervelt M. Evaluating a Situational Judgment Test for Use in Medical School Admissions: Two Years of AAMC PREview Exam Administration Data. Acad Med 2024; 99:183-191. PMID: 37976531; DOI: 10.1097/acm.0000000000005548.
Abstract
PURPOSE: To examine the relationship between Association of American Medical Colleges (AAMC) Professional Readiness Exam (PREview) scores and other admissions data, group differences in mean PREview scores, and whether adding a new assessment tool affected the volume and composition of applicant pools.
METHOD: Data from the 2020 and 2021 PREview exam administrations were analyzed. Two U.S. schools participated in the PREview pilot in 2020 and six U.S. schools participated in 2021. PREview scores were paired with data from the American Medical College Application Service (undergraduate grade point averages [GPAs], Medical College Admission Test [MCAT] scores, race, and ethnicity) and from participating schools (interview ratings).
RESULTS: Data included 19,525 PREview scores from 18,549 unique examinees. Correlations between PREview scores and undergraduate GPAs (r = .16) and MCAT scores (r = .29) were small and positive. Correlations between PREview scores and interview ratings were also small and positive, ranging from .09 to .14 after correcting for range restriction. Small group differences in mean PREview scores were observed between White and Black or African American examinees and between White and Hispanic, Latino, or of Spanish origin examinees. The addition of the PREview exam did not substantially change the volume or composition of participating schools' applicant pools.
CONCLUSIONS: Results suggest the PREview exam measures knowledge of competencies distinct from those captured by other assessments used in medical school admissions. Observed group differences were smaller than those seen with traditional academic assessments and evaluations. The addition of the PREview exam did not substantially change the overall volume of applications or the proportions of out-of-state, underrepresented in medicine, or lower socioeconomic status applicants. While more research is needed, these results suggest the PREview exam may provide unique information to the admissions process without adversely affecting applicant pools.
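The abstract notes that interview-rating correlations were corrected for range restriction. A standard approach is Thorndike's Case II correction for direct range restriction; the sketch below implements it in Python, though whether the AAMC analysis used this exact formula is an assumption, and the example numbers are hypothetical.

```python
# Thorndike's Case II correction for direct range restriction: a common way
# to adjust a correlation observed in a selected (restricted) sample. The
# abstract says only that correlations were "corrected for range restriction";
# using this particular formula here is an assumption.
import math

def correct_range_restriction(r_restricted: float,
                              sd_unrestricted: float,
                              sd_restricted: float) -> float:
    """Return the correlation corrected for direct range restriction."""
    u = sd_unrestricted / sd_restricted  # > 1 when the sample's range is restricted
    return (r_restricted * u) / math.sqrt(1 - r_restricted**2 + (r_restricted * u)**2)

# Hypothetical numbers: an observed correlation of .08 in an interviewed pool
# whose PREview-score SD is 70% of the full applicant pool's SD.
print(round(correct_range_restriction(0.08, 1.0, 0.7), 3))  # ~0.114
```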
3. Kennedy AB, Riyad CNY, Ellis R, Fleming PR, Gainey M, Templeton K, Nourse A, Hardaway V, Brown A, Evans P, Natafgi N. Evaluating a Global Assessment Measure Created by Standardized Patients for the Multiple Mini Interview in Medical School Admissions: Mixed Methods Study. J Particip Med 2022; 14:e38209. PMID: 36040776; PMCID: PMC9472042; DOI: 10.2196/38209.
Abstract
BACKGROUND: Standardized patients (SPs) are essential stakeholders in the multiple mini interviews (MMIs) increasingly used to assess medical school applicants' interpersonal skills; however, there is little evidence of their inclusion in the development of assessment instruments.
OBJECTIVE: This study aimed to describe the process, and evaluate the impact, of having SPs co-design and co-create a global measurement question assessing applicants' readiness for medical school, and to test the question's relationship to acceptance status.
METHODS: This study used an exploratory, sequential, mixed methods design. First, the initial MMI program was evaluated to determine the next quality improvement steps. Second, a collaborative workshop was held with SPs to co-develop the assessment question and its response options. Third, the created question and the additional MMI rubric items were evaluated statistically using data from 1084 applicants across 3 cohorts, starting with the 2018-2019 academic year. Internal reliability of the MMI was measured with a Cronbach α test, and its prediction of admission status was tested with a forward stepwise binary logistic regression.
RESULTS: Program evaluation indicated the need for an additional quantitative question to assess applicant readiness for medical school. In total, 3 simulation specialists, 2 researchers, and 21 SPs participated in a workshop that produced the final global assessment question and its responses. Cronbach α was >.80 overall and within each cohort year. The final stepwise logistic model for all cohorts combined was statistically significant (P<.001), explained 9.2% of the variance in acceptance status (R²), and correctly classified 65.5% (637/972) of cases. The final model consisted of 3 variables: empathy, rank of readiness, and opening the encounter.
CONCLUSIONS: The collaborative nature of this project between stakeholders, including nonacademics and researchers, was vital to its success. The SP-created question contributed significantly to the final model predicting acceptance to medical school, indicating that SPs bring a critical perspective that can improve the evaluation of medical school applicants.
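For readers who want to see how the two analyses named above fit together, here is a brief Python sketch of Cronbach α and a forward stepwise binary logistic regression. The rubric-item names, the acceptance outcome, and the p < .05 entry criterion are assumptions for illustration, not details taken from the study.

```python
# Sketch of the abstract's two analyses: Cronbach's alpha for internal
# reliability and a forward stepwise logistic regression predicting
# acceptance. All variable names and the entry criterion are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def forward_stepwise_logit(X: pd.DataFrame, y: pd.Series, enter_p: float = 0.05):
    """Repeatedly add the predictor with the smallest p-value below enter_p."""
    selected = []
    while True:
        candidates = [c for c in X.columns if c not in selected]
        pvals = {}
        for c in candidates:
            fit = sm.Logit(y, sm.add_constant(X[selected + [c]])).fit(disp=0)
            pvals[c] = fit.pvalues[c]
        if not pvals or min(pvals.values()) >= enter_p:
            break
        selected.append(min(pvals, key=pvals.get))
    return sm.Logit(y, sm.add_constant(X[selected])).fit(disp=0)

# Demo on synthetic data: 8 rubric items driven by a shared latent trait,
# and a binary acceptance flag correlated with that trait.
rng = np.random.default_rng(1)
trait = rng.normal(3.0, 0.8, 200)
items = pd.DataFrame({f"item{i}": trait + rng.normal(0, 0.8, 200)
                      for i in range(1, 9)})
accepted = pd.Series(((trait + rng.normal(0, 0.5, 200)) > 3.0).astype(int))
print(f"alpha = {cronbach_alpha(items):.2f}")   # high, since items share a trait
print(forward_stepwise_logit(items, accepted).params)
```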
Affiliation(s)
- Ann Blair Kennedy
  - Biomedical Sciences Department, School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
  - Patient Engagement Studio, University of South Carolina, Greenville, SC, United States
  - Family Medicine Department, Prisma Health, Greenville, SC, United States
- Cindy Nessim Youssef Riyad
  - School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
  - Hospital Based Accreditation, Accreditation Council of Graduate Medical Education, Chicago, IL, United States
- Ryan Ellis
  - School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- Perry R Fleming
  - Patient Engagement Studio, University of South Carolina, Greenville, SC, United States
  - School of Medicine Columbia, University of South Carolina, Columbia, SC, United States
- Mallorie Gainey
  - School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- Kara Templeton
  - Prisma Health-Upstate Simulation Center, School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- Anna Nourse
  - Patient Engagement Studio, University of South Carolina, Greenville, SC, United States
- Virginia Hardaway
  - Admissions and Registration, School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- April Brown
  - Prisma Health-Upstate Simulation Center, School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- Pam Evans
  - Patient Engagement Studio, University of South Carolina, Greenville, SC, United States
  - Prisma Health-Upstate Simulation Center, School of Medicine Greenville, University of South Carolina, Greenville, SC, United States
- Nabil Natafgi
  - Patient Engagement Studio, University of South Carolina, Greenville, SC, United States
  - Health Services, Policy, Management Department, Arnold School of Public Health, University of South Carolina, Columbia, SC, United States