1
Siemerkus J, Petrescu AS, Köchli L, Stephan KE, Schmidt H. Using standardized patients for undergraduate clinical skills training in an introductory course to psychiatry. BMC Medical Education 2023; 23:159. PMID: 36922802; PMCID: PMC10016160; DOI: 10.1186/s12909-023-04107-5.
Abstract
BACKGROUND: The goal of this study was to assess the value and acceptance of Standardized or Simulated Patients (SPs) for training clinically inexperienced undergraduate medical students in psychiatric history taking, psychopathological assessment, and communication with psychiatric patients.
METHODS: As part of a newly developed introductory course to psychiatry, pairs of 3rd-year medical students conducted psychiatric assessments of SPs, including history and psychopathological state, under the supervision of a clinical lecturer. Prior to the assessment, students attended introductory lectures on communication in psychiatry and on psychopathology but were clinically inexperienced. After the interview, the students' summary of their findings was discussed with other students and the lecturer. Students, lecturers, and actors were invited to a survey after the course. Questions for the students included self-reports about perceived learning success and the authenticity of the interviews.
RESULTS: 41 students, 6 actors, and 8 lecturers completed the survey (response rates of 48%, 50%, and 100%, respectively). The survey results indicated that, despite their lack of clinical experience, students learned how to conduct a psychiatric interview, communicate in a non-judgmental and empathetic manner, take a psychiatric history, and perform a psychopathological examination. SPs were perceived as authentic. The survey results suggested that this setting allowed for an enjoyable, non-distressing, and motivating learning experience within a restricted time frame of just two afternoons.
CONCLUSION: The results indicated that the SP approach presented is useful for teaching clinical skills in psychiatry to students with limited previous clinical experience and knowledge of psychiatry. We argue that SPs can be used to teach practical psychiatric skills even during an early phase of the curriculum. Limitations of our study include a limited sample size, a temporal gap between the course and the survey, reliance on self-reports, and the lack of a comparison to alternative interventions.
Affiliation(s)
- Jakob Siemerkus
  - Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich and ETH Zurich, Zurich, Switzerland
- Ana-Stela Petrescu
  - Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich and ETH Zurich, Zurich, Switzerland
- Laura Köchli
  - Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich and ETH Zurich, Zurich, Switzerland
- Klaas Enno Stephan
  - Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich and ETH Zurich, Zurich, Switzerland
  - Max Planck Institute for Metabolism Research, Cologne, Germany
- Helen Schmidt
  - Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich and ETH Zurich, Zurich, Switzerland
2
Walden D, Rawls M, Santen SA, Feldman M, Vinnikova A, Dow A. Rapid Feedback: Assessing Pre-clinical Teaching in the Era of Online Learning. Medical Science Educator 2022; 32:819-826. PMID: 35729989; PMCID: PMC9198414; DOI: 10.1007/s40670-022-01573-2.
Abstract
INTRODUCTION: Medical schools vary in their approach to providing feedback to faculty. The purpose of this study was to test the effects of rapid student feedback in a course utilizing novel virtual learning methods.
METHODS: Second-year medical students were supplied with an optional, short questionnaire at the end of each class session and asked to provide feedback within 48 h. At the close of each survey, results were emailed to faculty. After the course, students and faculty were asked to rate the effectiveness of this method. This study did not affect administration of the usual end-of-course summative evaluations.
RESULTS: Ninety-one percent of students who participated noted increased engagement in the feedback process, but only 18% on average chose to participate. Faculty rated rapid feedback as more actionable than summative feedback (67%), 50% rated it as more specific, and 42% rated it as more helpful. Some wrote that comments were too granular, and others noted a negative personal emotional response.
CONCLUSION: Rapid feedback engaged students, provided actionable feedback, and increased communication between students and instructors, suggesting that this approach added value. Care must be taken to reduce the student burden and support relational aspects of the process.
Affiliation(s)
- Daniel Walden
  - Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Meagan Rawls
  - Office of Assessment, Evaluation, and Scholarship, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Sally A. Santen
  - Office of Assessment, Evaluation, and Scholarship, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
  - University of Cincinnati College of Medicine, Cincinnati, USA
- Moshe Feldman
  - Office of Assessment, Evaluation, and Scholarship, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Anna Vinnikova
  - Department of Internal Medicine, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Alan Dow
  - Department of Internal Medicine, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
3
Franco R, Ament Giuliani Franco C, de Carvalho Filho MA, Severo M, Amelia Ferreira M. Use of portfolios in teaching communication skills and professionalism for Portuguese-speaking medical students. International Journal of Medical Education 2020; 11:37-46. PMID: 32061170; PMCID: PMC7252446; DOI: 10.5116/ijme.5e2a.fa68.
Abstract
OBJECTIVES: This study aimed to analyse the effect of a portfolio with three activities fostering students' reflection and self-efficacy and supporting the teaching of communication skills and professionalism.
METHODS: A cross-sectional study was conducted with a sample of third- and fourth-year medical students at one Portuguese and three Brazilian universities. A three-activity portfolio (course evaluation and learning, a self-efficacy activity, and free reflective writing) was used during a two-month course on communication skills and professionalism. The 69 students enrolled in the course were invited to complete the three-activity portfolio via Likert-type questionnaires, open-ended questions, and narrative. Content and lexical analysis and the Reflection Evaluation for Learners' Enhanced Competencies Tool (REFLECT) were used to assess the qualitative data. The questionnaires were evaluated using principal components analysis and Cronbach's α. Pearson's correlation was applied to the portfolio activities.
RESULTS: Of the 69 participants, 85.5% completed at least one activity. When reflecting on what they had learned in the communication module, the students did not mention professionalism themes. In the self-efficacy activity on communication, 25% of the fragments were related to professionalism themes. There was a negative correlation between students' self-efficacy and the REFLECT rubric score (r(19) = -0.744, p < 0.0001).
CONCLUSIONS: Teachers must consider the activity's influence on the reflections when assessing the portfolio. This model of a three-activity portfolio provided diverse ways of encouraging and assessing reflections, supporting teaching improvement and adaptation, evaluating students' self-efficacy, and showing that higher reflective capacity may be accompanied by lower feelings of self-efficacy.
Affiliation(s)
- Renato Franco
  - School of Medicine, Pontifical Catholic University of Paraná, Curitiba, Brazil
- Marco Antonio de Carvalho Filho
  - Center for Education Development and Research in Health Professions - Research Group LEARN - Lifelong Learning, Education & Assessment Research, University of Groningen, Groningen, The Netherlands
- Milton Severo
  - Department of Medical Education and Simulation, Faculty of Medicine, University of Porto, Portugal
- Maria Amelia Ferreira
  - Department of Public Health and Forensic Sciences, and Medical Education, Faculty of Medicine of the University of Porto, Portugal
4
Schiekirka S, Raupach T. A systematic review of factors influencing student ratings in undergraduate medical education course evaluations. BMC Medical Education 2015; 15:30. PMID: 25853890; PMCID: PMC4391198; DOI: 10.1186/s12909-015-0311-8.
Abstract
BACKGROUND: Student ratings are a popular source of course evaluations in undergraduate medical education. Data on the reliability and validity of such ratings have mostly been derived from studies unrelated to medical education. Since medical education differs considerably from other higher education settings, an analysis of factors influencing overall student ratings with a specific focus on medical education was needed.
METHODS: For the purpose of this systematic review, online databases (PubMed, PsycInfo and Web of Science) were searched up to August 1st, 2013. Original research articles on the use of student ratings in course evaluations in undergraduate medical education were eligible for inclusion. Included studies considered the format of evaluation tools and assessed the association of independent and dependent (i.e., overall course ratings) variables. Inclusion and exclusion criteria were checked by two independent reviewers, and results were synthesised in a narrative review.
RESULTS: Twenty-five studies met the inclusion criteria. Qualitative research (2 studies) indicated that overall course ratings are mainly influenced by student satisfaction with teaching and exam difficulty rather than by objective determinants of high-quality teaching. Quantitative research (23 studies) yielded various influencing factors related to four categories: student characteristics, exposure to teaching, satisfaction with examinations, and the evaluation process itself. Female gender, greater initial interest in course content, higher exam scores, and higher satisfaction with exams were associated with more positive overall course ratings.
CONCLUSIONS: Due to the heterogeneity and methodological limitations of the included studies, results must be interpreted with caution. Medical educators need to be aware of various influences on student ratings when developing data collection instruments and interpreting evaluation results. More research into the reliability and validity of overall course ratings as typically used in the evaluation of undergraduate medical education is warranted.
Affiliation(s)
- Sarah Schiekirka
  - Department of Cardiology and Pneumology, University Hospital Göttingen, Göttingen, Germany
  - Study Deanery of Göttingen Medical School, Göttingen, Germany
- Tobias Raupach
  - Department of Cardiology and Pneumology, University Hospital Göttingen, Göttingen, Germany
  - Department of Clinical, Educational and Health Psychology, University College London, London, UK
5
Reed DA. Nimble approaches to curriculum evaluation in graduate medical education. Journal of Graduate Medical Education 2011; 3:264-6. PMID: 22655156; PMCID: PMC3184902; DOI: 10.4300/jgme-d-11-00081.1.
6
McNulty JA, Gruener G, Chandrasekhar A, Espiritu B, Hoyt A, Ensminger D. Are online student evaluations of faculty influenced by the timing of evaluations? Advances in Physiology Education 2010; 34:213-216. PMID: 21098389; DOI: 10.1152/advan.00079.2010.
Abstract
Student evaluations of faculty are important components of the medical curriculum and faculty development. To improve the effectiveness and timeliness of student evaluations of faculty in the physiology course, we investigated whether evaluations submitted during the course differed from those submitted after completion of the course. A secure web-based system was developed to collect student evaluations that included numerical rankings (1-5) of faculty performance and a section for comments. The grades that students received in the course were added to the data, which were sorted according to the time of submission of the evaluations and analyzed by Pearson's correlation and Student's t-test. Only 26% of students elected to submit evaluations before completion of the course, and the average faculty ratings from these evaluations were highly correlated (r(14) = 0.91) with those from evaluations submitted after completion of the course. Faculty evaluations were also significantly correlated with those from the previous year (r(14) = 0.88). Concurrent evaluators provided more comments, which were statistically longer and subjectively scored as more "substantive." Students who submitted their evaluations during the course and who included comments had significantly higher final grades in the course. In conclusion, the numeric ratings that faculty received were not influenced by the timing of student evaluations. However, students who submitted early evaluations tended to be more engaged, as evidenced by their more substantive comments and their better performance on exams. The consistency of faculty evaluations from year to year, and between concurrent and end-of-course submissions, suggests that faculty tend not to make significant adjustments in response to student evaluations.
Affiliation(s)
- John A McNulty
  - Department of Cell and Molecular Physiology, Stritch School of Medicine, Loyola University, Maywood, IL 60153, USA
7
Stieger S, Burger C. Let's Go Formative: Continuous Student Ratings with Web 2.0 Application Twitter. Cyberpsychology, Behavior, and Social Networking 2010; 13:163-7. DOI: 10.1089/cyber.2009.0128.
Affiliation(s)
- Stefan Stieger
  - Department of Basic Psychological Research, School of Psychology, University of Vienna, Austria
- Christoph Burger
  - Department of Basic Psychological Research, School of Psychology, University of Vienna, Austria
8
McOwen KS, Bellini LM, Morrison G, Shea JA. The development and implementation of a health-system-wide evaluation system for education activities: build it and they will come. Academic Medicine 2009; 84:1352-9. PMID: 19881421; DOI: 10.1097/acm.0b013e3181b6c996.
Abstract
Academic health centers (AHCs) use education evaluation data for multiple purposes, and they also use multiple methods to collect data in an effort to evaluate the quality of student and faculty performance. Collecting evaluation data in a standardized manner that enables collation and subsequent assessment and interpretation is critically important if the information is to be maximally useful. A case study is presented of PENN Medicine's education evaluation program and the complex task of developing a multiprogram, multipurpose evaluation system, developed and implemented from 2003 to 2007. The proposed solution is generalizable to other comparable AHCs. The article begins with a structured analysis of needs, continues with a description of the conceptual evaluation model guiding the system, and offers a summary of the amounts and types of data collected in the years leading to full implementation. It concludes with a brief list of needs that emerged during implementation and suggestions for future growth. The resulting system is described as supporting the evaluation of clinical teaching of more than 1,200 clinical faculty, students, residents, and fellows across 18 clinical departments with a common set of items. For the 2006-2007 academic year, more than 30,000 faculty evaluations were collected, combined, and then presented in a Web-based teaching dossier. A by-product of this effort was the creation of an ever-expanding data set that supports medical education research.
Affiliation(s)
- Katherine S McOwen
  - GME Evaluation and Research, PENN Medicine, Philadelphia, Pennsylvania, USA