1
Mahoney MR, Gayoso ME, Belsky NA, Crook TW, Parekh KP. Observed Structured Teaching Experiences (OSTEs) in a Students as Teachers Course. Medical Science Educator 2024;34:13-18. [PMID: 38510411] [PMCID: PMC10948636] [DOI: 10.1007/s40670-023-01952-3]
Abstract
Introduction: Teaching is an important competency in graduate medical education (GME). Many residency programs have implemented curricula to develop residents' teaching skills, and observed structured teaching experiences (OSTEs) have been used to assess these skills. There is an increasing focus on building teaching skills earlier in the medical education continuum; however, there is limited literature on assessing medical students' teaching skills. The authors developed an OSTE for medical students enrolled in a students-as-teachers course to address this gap and provide formative feedback on teaching skills.
Materials and Methods: OSTEs were conducted for fourth-year medical students (M4s) enrolled in a Students as Teachers Advanced Elective at a US medical school. An M4 observed a first-year medical student (M1) during a simulated encounter with a standardized patient. The M4 gave feedback and a chalk talk. A physician observer assessed the M4's teaching using the modified Stanford Faculty Development Program (SFDP) questionnaire. The M1s and M4s also completed the SFDP. The M4 completed pre- and post-OSTE self-efficacy surveys (score range 6-30) and a post-OSTE acceptability survey.
Results: All (30/30) M4s completed the OSTE. The SFDP identified common teaching strengths and areas for growth. ANOVA tests demonstrated significant differences between the mean (SD) scores from physician assessors, M1s, and M4s [4.56 (0.63) vs. 4.87 (0.35) vs. 4.08 (0.74), p<0.001]. There was a statistically significant difference in mean (SD) self-efficacy scores pre- and post-OSTE [18.72 (3.39) vs. 23.83 (3.26), p<0.001]. All M4s (30/30) somewhat or strongly agreed with all three OSTE acceptability questions.
Lessons Learned: The authors successfully conducted an OSTE in an M4 advanced elective. The OSTE was highly acceptable to participants, and M4s demonstrated improved teaching self-efficacy. Further research should explore the validity of the OSTE to measure medical students' teaching skills and the long-term impact of developing teaching skills in medical school.
Supplementary Information: The online version contains supplementary material available at 10.1007/s40670-023-01952-3.
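For readers who want to see the shape of the two comparisons reported in this abstract, the sketch below (Python, randomly generated placeholder scores, not the study's data) runs a one-way ANOVA across the three rater groups and a paired comparison of pre- versus post-OSTE self-efficacy totals. The abstract does not name the paired test used, so a paired t-test is assumed here.

```python
# Minimal sketch with hypothetical data; group means/SDs only echo the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder per-encounter mean SFDP ratings (1-5 scale) for 30 OSTEs.
physician_raters = rng.normal(4.56, 0.63, 30).clip(1, 5)
m1_raters = rng.normal(4.87, 0.35, 30).clip(1, 5)
m4_self_ratings = rng.normal(4.08, 0.74, 30).clip(1, 5)

f_stat, p_anova = stats.f_oneway(physician_raters, m1_raters, m4_self_ratings)
print(f"ANOVA across rater groups: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Placeholder paired pre-/post-OSTE self-efficacy totals (range 6-30).
# The abstract does not state which paired test was used; a paired t-test is assumed.
pre = rng.normal(18.72, 3.39, 30).clip(6, 30)
post = rng.normal(23.83, 3.26, 30).clip(6, 30)
t_stat, p_paired = stats.ttest_rel(pre, post)
print(f"Pre- vs. post-OSTE self-efficacy: t = {t_stat:.2f}, p = {p_paired:.4f}")
```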
Affiliation(s)
- Margaret R. Mahoney
- Vanderbilt University School of Medicine, 2209 Garland Ave, Nashville, TN 37232, USA
- Matthew E. Gayoso
- Vanderbilt University School of Medicine, 2209 Garland Ave, Nashville, TN 37232, USA
- Natasha A. Belsky
- Vanderbilt University School of Medicine, 2209 Garland Ave, Nashville, TN 37232, USA
- Travis W. Crook
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Kendra P. Parekh
- Vanderbilt University School of Medicine, 2209 Garland Ave, Nashville, TN 37232, USA
- Department of Emergency Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
2
Malmut L, Ng A. Near-peer teaching in simulation. The Clinical Teacher 2023;20:e13645. [PMID: 37632300] [DOI: 10.1111/tct.13645]
Abstract
BACKGROUND: Development, implementation and evaluation of a simulation curriculum is time and resource intensive. Limited faculty time and training are cited as primary barriers to adopting simulation into medical education. Near-peer teaching is a potential solution to manage the increased teaching demands that occur with simulation use.
APPROACH: In 2022, we implemented a near-peer simulation curriculum for teaching junior physical medicine and rehabilitation (PM&R) residents high-acuity, low-opportunity events commonly seen on the inpatient rehabilitation unit. The curriculum was taught by senior residents to supplement faculty lectures. Senior residents completed facilitator training on simulator logistics, debriefing and formative assessment.
EVALUATION: Residents completed an end-of-course questionnaire evaluating teaching effectiveness and perceived knowledge acquisition. All items were scored on a 5-point Likert-type scale. Learners rated their near-peers as having good clinical teaching effectiveness (mean [SD], 4.66 [0.38]). Senior residents (n = 6) reported feeling knowledgeable about the topics they instructed (baseline 3.9 [3.2-4.4]; after 4.6 [4.1-4.9]; p = 0.19), and junior residents (n = 6) felt they gained knowledge and improved their ability to manage patients as a result of the near-peer curriculum (baseline 2.4 [2.3-2.5]; after 3.9 [3.5-4.2]; p = 0.005).
IMPLICATIONS: This educational programme is an example of how near-peer teaching can be used in simulation. Our near-peer-taught simulation curriculum was rated by learners as well taught and educational. Research is needed that directly compares the effectiveness of near-peer teaching to faculty instruction. We hope that by sharing our work, educators will feel inspired to use near-peer teachers for simulation instruction when faculty availability for teaching is scarce.
Affiliation(s)
- Laura Malmut
- MedStar National Rehabilitation Hospital, Washington, DC, USA
- Georgetown University School of Medicine, Washington, DC, USA
- Alvin Ng
- MedStar National Rehabilitation Hospital, Washington, DC, USA
- Georgetown University School of Medicine, Washington, DC, USA
3
A Competency-based Tool for Resident Evaluation of Pediatric Emergency Department Faculty. West J Emerg Med 2023;24:59-63. [PMID: 36602497] [PMCID: PMC9897249] [DOI: 10.5811/westjem.2022.11.57686]
4
Ijaz H, Stull M, McDonough E, Paulsen R, Hill J. A behaviorally anchored assessment tool for bedside teaching in the emergency department. AEM Education and Training 2022;6:e10789. [PMID: 35979341] [PMCID: PMC9366581] [DOI: 10.1002/aet2.10789]
Abstract
Evaluating a resident's development as a bedside educator in the emergency department (ED) is challenging. Teaching consults, where trainees are observed and assessed in their teaching skills, have been used to improve bedside teaching. Within emergency medicine, there are a few assessment tools to evaluate a clinician's bedside teaching, with the majority focusing on faculty. A user-friendly assessment tool adapted to the ED that emphasizes behaviorally anchored, milestone-based evaluations for residents has yet to be developed. We sought to develop such an assessment tool for evaluating residents' bedside teaching in the ED. Using a nominal-group consensus-building technique, we derived the bedside teaching assessment tool. The consensus-building panel was composed of clinician-educators with extensive experience in resident education. The teaching consult process consisted of the consultant, a faculty member with a focus on medical education, directly observing a resident's bedside teaching throughout their shift while filling out the evaluation form based on observed behaviors. A total of 35 consults were provided to 30 individual residents. The mean (±SD) scores for the 35 consults for the learning climate, content teaching, supervision, feedback and evaluation, and self-assessment were 3.84 (±0.75), 3.56 (±0.58), 3.70 (±0.60), 3.64 (±0.77), and 3.92 (±0.45), respectively. The median scores for the above domains were 4, 3.5, 4, 3.5, and 4, respectively. The tool has acceptable internal consistency with a Cronbach's alpha of 0.723 (95% CI 0.469-0.839). Eleven of 13 (85%) residents who provided feedback agreed or strongly agreed that the quantitative feedback provided by the assessment tool was useful. Twelve of 13 (92%) residents found the consultation process to be unobtrusive to their clinical performance. In conclusion, this novel behaviorally anchored assessment tool for bedside teaching can serve as a useful adjunct to a teaching consult and provide feedback for the development of residents' bedside teaching skills.
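The reliability figure quoted in this abstract (Cronbach's alpha of 0.723 with a 95% CI) is a standard computation; the minimal sketch below shows how the point estimate could be reproduced from a consult-by-domain score matrix, assuming the five domain scores are treated as the scale items. All values in the sketch are invented placeholders, not the study's ratings.

```python
# Minimal Cronbach's alpha sketch on hypothetical data (35 consults x 5 domains).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = respondents (consults), columns = items (domains)."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Placeholder scores drawn from the upper end of a 1-5 scale (values 3-5).
ratings = rng.integers(3, 6, size=(35, 5)).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.3f}")
```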
Affiliation(s)
- Hamza Ijaz
- University of Cincinnati, Cincinnati, Ohio, USA
- Matthew Stull
- University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
5
Evaluating chief resident readiness for the teaching assistant role: The Teaching Evaluation Assessment of the Chief Resident (TEACh-R) instrument. Am J Surg 2021;222:1112-1119. [PMID: 34600735] [DOI: 10.1016/j.amjsurg.2021.09.026]
Abstract
BACKGROUND: The American Board of Surgery has mandated that chief residents complete 25 cases in the teaching assistant (TA) role. We developed a structured instrument, the Teaching Evaluation and Assessment of the Chief Resident (TEACh-R), to determine readiness and provide feedback for residents in this role.
METHODS: Senior (PGY3-5) residents were scored on technical and teaching performance by faculty observers using the TEACh-R instrument in the simulation lab. Residents were provided with their TEACh-R scores and surveyed on their experience.
RESULTS: Scores in the technical (p < 0.01) and teaching (p < 0.01) domains increased with PGY. Higher technical, but not teaching, scores correlated with attending-rated readiness for operative independence (p = 0.02). Autonomy mismatch was inversely correlated with teaching competence (p < 0.01). Residents reported satisfaction with TEACh-R feedback and a desire for use of this instrument in operating room settings.
CONCLUSION: Our TEACh-R instrument is an effective way to assess technical and teaching performance in the TA role.
6
Bartlett AD, Um IS, Luca EJ, Krass I, Schneider CR. Measuring and assessing the competencies of preceptors in health professions: a systematic scoping review. BMC Medical Education 2020;20:165. [PMID: 32448239] [PMCID: PMC7247189] [DOI: 10.1186/s12909-020-02082-9]
Abstract
BACKGROUND: In healthcare, preceptors act as role models and supervisors, thereby facilitating the socialisation and development of the preceptee into a professional fit to practice. To ensure a consistent approach to every preceptorship experience, preceptor competencies should be measured or assessed to ensure that the desired outcomes are achieved. Defining these competencies would ensure quality management and could inform the development of a preceptor competency framework. This review aimed to evaluate the evidence for preceptor competencies and their assessment in health professions.
METHODS: This study followed the PRISMA-ScR scoping review guidelines. A database search was conducted in Embase, Medline, CINAHL and IPA in 2019. Articles were included if they defined criteria for competency, measured or assessed competency, or described performance indicators of preceptors. A modified GRADE-CERQual approach and CASP quality assessment were used to appraise identified competencies, performance indicators and confidence in the evidence.
RESULTS: Forty-one studies identified 17 evidence-based competencies, of which 11 had an associated performance indicator. The competency of preceptors was most commonly measured using a preceptee-completed survey (moderate to high confidence as per CERQual), followed by preceptor self-assessment and peer assessment. Preceptee outcomes as a measure of preceptor performance had good but limited evidence.
CONCLUSIONS: Competencies with defined performance indicators allow for effective measurement and may be modifiable with training. To measure preceptor competency, the preceptor perspective, as well as peer and preceptee assessment, is recommended. These findings can provide the basis for a common preceptor competency framework in the health professions.
Affiliation(s)
- Andrew D Bartlett
- School of Pharmacy, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, 2006, Australia
- Irene S Um
- School of Pharmacy, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, 2006, Australia
- Edward J Luca
- University Library, The University of Sydney, Sydney, Australia
- Ines Krass
- School of Pharmacy, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, 2006, Australia
- Carl R Schneider
- School of Pharmacy, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, 2006, Australia
7
Carlson K. Peer Coaching as a Faculty Development Tool: A Mixed Methods Evaluation. J Grad Med Educ 2020;12:168-175. [PMID: 32322350] [PMCID: PMC7161339] [DOI: 10.4300/jgme-d-19-00250.1]
Abstract
BACKGROUND: In the era of competency-based assessment, medical education faculty are frequently challenged to develop unique teaching approaches. One method to address faculty development needs in a real-time clinical learning environment is peer coaching.
OBJECTIVE: We implemented and evaluated a faculty development program involving peer observation and feedback for attending physicians.
METHODS: Hospital internal medicine faculty assigned to a teaching service were recruited for the study. Participants voluntarily agreed to observe and be observed by a peer attending physician during a 2-week block of teaching rounds. When serving in the coaching role, faculty were asked to observe on 4 separate occasions, using an observation tool based on the Stanford Faculty Development Program framework to guide feedback. An outside consultant facilitated a focus group and completed a qualitative content analysis to categorize all participants' experiences during the faculty development activity.
RESULTS: Of the 22 eligible faculty, 14 (64%) agreed to participate by committing to 6 to 8 hours observing another faculty member during rounds, 2 feedback sessions, and 90 minutes to provide program feedback during a focus group. The analysis of the focus group revealed favorable reactions to the faculty development program, including (1) the observed attending's awareness of unrecognized habits; (2) personalized teaching tips for the observed attending to improve teaching quality based on individual style/preferences; and (3) exposure to new teaching techniques.
CONCLUSIONS: An inpatient-based peer-coaching faculty development program was acceptable and feasible for a majority of faculty and may improve individual teaching effectiveness among conventionally trained physicians.
8
van der Meulen MW, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Lombarts KMJMH. Exploring Validity Evidence Associated With Questionnaire-Based Tools for Assessing the Professional Performance of Physicians: A Systematic Review. Academic Medicine 2019;94:1384-1397. [PMID: 31460937] [DOI: 10.1097/acm.0000000000002767]
Abstract
PURPOSE: To collect and examine, using an argument-based validity approach, the validity evidence for questionnaire-based tools used to assess physicians' clinical, teaching, and research performance.
METHOD: In October 2016, the authors conducted a systematic search of the literature for articles about questionnaire-based tools for assessing physicians' professional performance published from inception to October 2016. They included studies reporting validity evidence for tools used to assess physicians' clinical, teaching, and research performance. Using Kane's validity framework, they conducted data extraction based on four inferences in the validity argument: scoring, generalization, extrapolation, and implications.
RESULTS: They included 46 articles on 15 tools assessing clinical performance and 72 articles on 38 tools assessing teaching performance. They found no studies on research performance tools. Only 12 of the tools (23%) gathered evidence on all four components of Kane's validity argument. Validity evidence focused mostly on the generalization and extrapolation inferences. Scoring evidence showed mixed results. Evidence on implications was generally missing.
CONCLUSIONS: Based on the argument-based approach to validity, not all questionnaire-based tools seem to support their intended use. Evidence concerning the implications of questionnaire-based tools is mostly lacking, weakening the argument for using these tools for formative and, especially, summative assessments of physicians' clinical and teaching performance. More research on implications is needed to strengthen the argument and to support decisions based on these tools, particularly high-stakes, summative decisions. To meaningfully assess academic physicians in their tripartite role as doctor, teacher, and researcher, additional assessment tools are needed.
Affiliation(s)
- Mirja W van der Meulen
- PhD candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-3636-5469
- A. Smirnova
- PhD graduate and researcher, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-4491-3007
- S. Heeneman
- Professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-6103-8075
- M.G.A. oude Egbrink
- Professor, Department of Physiology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-5530-6598
- C.P.M. van der Vleuten
- Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119
- K.M.J.M.H. Lombarts
- Professor, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0001-6167-0620
9
Kassis K, Wallihan R, Hurtubise L, Goode S, Chase M, Mahan JD. Milestone-Based Tool for Learner Evaluation of Faculty Clinical Teaching. MedEdPORTAL 2017;13:10626. [PMID: 30800827] [PMCID: PMC6374742] [DOI: 10.15766/mep_2374-8265.10626]
Abstract
INTRODUCTION: Traditional normative Likert-type evaluations of faculty teaching have several drawbacks, including lack of granular feedback, potential for inflation, and the halo effect. To provide more meaningful data to faculty on their teaching skills and encourage educator self-reflection and skill development, we designed and implemented a milestone-based faculty clinical teaching evaluation tool.
METHODS: The evaluation tool contains 10 questions that assess clinical teaching skills with descriptive milestone behavior anchors. Nine of these items are based on the Stanford Faculty Development Clinical Teaching Model and annual Accreditation Council for Graduate Medical Education (ACGME) resident survey questions; the tenth was developed to address professionalism at our institution. The tool was developed with input from residency program leaders, residents, and the faculty development committee and piloted with graduate medical education learners before implementation.
RESULTS: More than 7,200 faculty evaluations by learners and 550 faculty self-evaluations have been collected. Learners found the form easy to use and preferred it to previous Likert-based evaluations. Over the 2 years that faculty self-evaluations have been collected, their scores have been similar to the learner evaluation scores. The feedback provided faculty with more meaningful data on teaching skills and opportunities for reflection and skill improvement and was used in constructing faculty teaching skills programs at the institutional level.
DISCUSSION: This innovation provides an opportunity to give faculty members more meaningful teaching evaluations and feedback. It should be easy for other institutions and programs to implement. It leverages a familiar milestone construct and incorporates important ACGME annual resident survey information.
Affiliation(s)
- Karyn Kassis
- Assistant Professor of Pediatrics, Nationwide Children's Hospital
- Assistant Professor of Pediatrics, The Ohio State University College of Medicine
- Director, Center for Faculty Development, Nationwide Children's Hospital
- Director, Center for Faculty Development, The Ohio State University College of Medicine
- Rebecca Wallihan
- Assistant Professor of Pediatrics, Nationwide Children's Hospital
- Assistant Professor of Pediatrics, The Ohio State University College of Medicine
- Associate Program Director, Pediatric Residency Program, Nationwide Children's Hospital
- Associate Program Director, Pediatric Residency Program, The Ohio State University College of Medicine
- Vice Chair for Education, Nationwide Children's Hospital
- Vice Chair for Education, The Ohio State University College of Medicine
- Larry Hurtubise
- Associate Director, Center for Faculty Development, Nationwide Children's Hospital
- Adjunct Associate Professor of Biomedical Education and Anatomy, The Ohio State University College of Medicine
- Sara Goode
- Program Administrator, Office of Graduate Medical Education, Nationwide Children's Hospital
- Margaret Chase
- Associate Professor of Pediatrics, Nationwide Children's Hospital
- Associate Professor of Pediatrics, The Ohio State University College of Medicine
- Program Director of the Internal Medicine/Pediatrics Residency, Nationwide Children's Hospital
- Program Director of the Internal Medicine/Pediatrics Residency, The Ohio State University College of Medicine
- John D. Mahan
- Professor of Pediatrics, Nationwide Children's Hospital
- Professor of Pediatrics, The Ohio State University College of Medicine
- Program Director, Pediatric Residency Program and Pediatric Nephrology Fellowship Program, Nationwide Children's Hospital
- Program Director, Pediatric Residency Program and Pediatric Nephrology Fellowship Program, The Ohio State University College of Medicine