1. Chiu H, Wood TJ, Garber A, Halman S, Rekman J, Gofton W, Dudek N. The Ottawa resident observation form for nurses (O-RON): evaluation of an assessment tool's psychometric properties in different specialties. BMC Medical Education 2024;24:487. PMID: 38698352; PMCID: PMC11067073; DOI: 10.1186/s12909-024-05476-1.
Abstract
BACKGROUND Workplace-based assessment (WBA) used in post-graduate medical education relies on physician supervisors' feedback. However, in a training environment where supervisors are unavailable to assess certain aspects of a resident's performance, nurses are well-positioned to do so. The Ottawa Resident Observation Form for Nurses (O-RON) was developed to capture nurses' assessment of trainee performance, and results have demonstrated strong evidence for validity in Orthopedic Surgery. However, different clinical settings may impact a tool's performance. This project studied the use of the O-RON in three different specialties at the University of Ottawa. METHODS O-RON forms were distributed on Internal Medicine, General Surgery, and Obstetrical wards at the University of Ottawa over nine months. Validity evidence related to quantitative data was collected. Exit interviews with nurse managers were performed and the content was thematically analyzed. RESULTS 179 O-RONs were completed on 30 residents. With four forms per resident, the O-RON's reliability was 0.82. The global judgement response and the frequency of concerns were correlated (r = 0.627, P < 0.001). CONCLUSIONS Consistent with the original study, the findings demonstrated strong evidence for validity. However, the number of forms collected was less than expected. Exit interviews identified factors impacting form completion, which included clinical workloads and interprofessional dynamics.
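The abstracts in this list report reliability as a function of the number of completed forms per resident (0.82 with four forms here; 0.80 with eight forms in the original O-RON study below) without restating the underlying method. As a rough, hypothetical illustration of how the reliability of an averaged score scales with the number of forms, the Spearman-Brown prophecy formula can be sketched in Python; the per-form reliability value below is invented to roughly reproduce the reported figure, not taken from the study.

# Illustrative sketch only (not the authors' analysis): the Spearman-Brown prophecy
# formula projects the reliability of the mean of k parallel forms from the
# reliability of a single form.

def spearman_brown(single_form_reliability: float, k: int) -> float:
    """Projected reliability of the average of k parallel forms."""
    r = single_form_reliability
    return (k * r) / (1 + (k - 1) * r)

if __name__ == "__main__":
    r_single = 0.53  # hypothetical per-form reliability chosen so that four forms give ~0.82
    for k in (1, 4, 8):
        print(f"{k} form(s) -> projected reliability {spearman_brown(r_single, k):.2f}")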
Affiliation(s)
- Hedva Chiu
- Department of Medicine, Division of Physical Medicine & Rehabilitation, University of Ottawa, Ottawa, Canada
- Timothy J Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Canada
- Adam Garber
- Department of Obstetrics and Gynecology, University of Ottawa, Ottawa, Canada
- Samantha Halman
- Department of Medicine, Division of General Internal Medicine, University of Ottawa, Ottawa, Canada
- Janelle Rekman
- Department of Surgery, Division of General Surgery, University of Ottawa, Ottawa, Canada
- Wade Gofton
- Department of Surgery, Division of Orthopedic Surgery, University of Ottawa, Ottawa, Canada
- Nancy Dudek
- Department of Medicine, Division of Physical Medicine & Rehabilitation, The Ottawa Hospital, University of Ottawa, Ottawa, ON, Canada
2. Rahim KA, Lakhdir MPA, Afzal N, Merchant AAH, Shaikh NQ, Noorali AA, Tariq U, Ahmad R, Bakhshi SK, Mahmood SBZ, Khan MR, Tariq M, Haider AH. Leveraging the vantage point - exploring nurses' perception of residents' communication skills: a mixed-methods study. BMC Medical Education 2023;23:148. PMID: 36869344; PMCID: PMC9985286; DOI: 10.1186/s12909-023-04114-6.
Abstract
INTRODUCTION Effective communication is key to a successful patient-doctor interaction and improved healthcare outcomes. However, communication skills training in residency is often subpar, leading to inadequate patient-physician communication. There is a dearth of studies exploring the observations of nurses - key members of healthcare teams with a special vantage point from which to observe the impact of residents' communication with patients. Thus, we aimed to gauge nurses' perceptions of residents' communication skills. METHODS This study employed a sequential mixed-methods design and was conducted at an academic medical center in South Asia. Quantitative data were collected via a REDCap survey using a structured, validated questionnaire. Ordinal logistic regression was applied. For qualitative data, in-depth interviews were conducted with nurses using a semi-structured interview guide. RESULTS A total of 193 survey responses were obtained from nurses from various specialties, including Family Medicine (n = 16), Surgery (n = 27), Internal Medicine (n = 22), Pediatrics (n = 27), and Obstetrics/Gynecology (n = 93). Nurses rated long working hours, infrastructural deficits, and human failings as the main barriers to effective patient-resident communication. Residents working in in-patient settings were more likely to have inadequate communication skills (P-value = 0.160). Qualitative analysis of nine in-depth interviews revealed two major themes: the existing status quo of residents' communication skills (including deficient verbal and non-verbal communication, bias in patient counselling, and challenging patients) and recommendations for improving patient-resident communication. CONCLUSION The findings from this study highlight significant gaps in patient-resident communication from the perception of nurses and identify the need for creating a holistic curriculum for residents to improve patient-physician interaction.
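The abstract names ordinal logistic regression as the quantitative analysis but, as an abstract, does not show the model. A minimal sketch of that kind of model, using simulated data rather than the study's dataset, might look like the following; the variable names and values are hypothetical.

# Hypothetical sketch (simulated data, not the study's dataset): ordinal logistic
# regression of nurses' ratings of resident communication on an in-patient indicator.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 193  # same order of magnitude as the survey sample
df = pd.DataFrame({
    "inpatient": rng.integers(0, 2, size=n),   # 1 = resident works in an in-patient setting
    "rating": rng.integers(1, 5, size=n),      # ordinal rating from 1 (poor) to 4 (excellent)
})

model = OrderedModel(df["rating"], df[["inpatient"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())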
Affiliation(s)
- Komal Abdul Rahim
- Dean's Office, Medical College, Aga Khan University, Stadium Road, P. O. Box 3500, Karachi, 74800, Pakistan
- Department of Community Health Sciences, Aga Khan University, Stadium Road, P. O. Box 3500, Karachi, 74800, Pakistan
- Internal Medicine, Aga Khan University, Stadium Road, P. O. Box 3500, Karachi, 74800, Pakistan
- Maryam Pyar Ali Lakhdir
- Department of Community Health Sciences, Aga Khan University, Stadium Road, P. O. Box 3500, Karachi, 74800, Pakistan
- Noreen Afzal
- Dean's Office, Medical College, Aga Khan University, Stadium Road, P. O. Box 3500, Karachi, 74800, Pakistan
- Namra Qadeer Shaikh
- Dean's Office, Medical College, Aga Khan University, Stadium Road, P. O. Box 3500, Karachi, 74800, Pakistan
- Ali Aahil Noorali
- Dean's Office, Medical College, Aga Khan University, Stadium Road, P. O. Box 3500, Karachi, 74800, Pakistan
- Umar Tariq
- Medical College, Aga Khan University, Karachi, Pakistan
- Rida Ahmad
- Dean's Office, Medical College, Aga Khan University, Stadium Road, P. O. Box 3500, Karachi, 74800, Pakistan
- Saqib Kamran Bakhshi
- Dean's Office, Medical College, Aga Khan University, Stadium Road, P. O. Box 3500, Karachi, 74800, Pakistan
- Medical College, Aga Khan University, Karachi, Pakistan
- Saad Bin Zafar Mahmood
- Dean's Office, Medical College, Aga Khan University, Stadium Road, P. O. Box 3500, Karachi, 74800, Pakistan
- Adil H Haider
- Medical College, Aga Khan University, Karachi, Pakistan
3. Dunbar KS, Chiel LE, Doherty DP, Winn AS, Marcus CH. A Unique Lens: Understanding What Nurses Are Best Positioned to Assess About Residents. J Grad Med Educ 2022;14:687-695. PMID: 36591435; PMCID: PMC9765917; DOI: 10.4300/jgme-d-22-00317.1.
Abstract
BACKGROUND Resident feedback is generally elicited from attending physicians, although nurses can also provide feedback on distinct domains. Physicians may be hesitant to accept feedback from nurses if they perceive that nurses are being asked about areas outside their expertise. Understanding the specific resident behaviors that nurses are best suited to assess is critical to successful implementation of feedback from nurses to residents. OBJECTIVE To understand the specific resident behaviors nurses are uniquely positioned to assess, from the perspectives of both nurses and residents. METHODS We performed a qualitative study using thematic analysis of 5 focus groups with 20 residents and 5 focus groups with 17 nurses at a large free-standing children's hospital in 2020. Two reviewers developed a codebook and subsequently analyzed all transcripts. Codes were organized into themes and subthemes. Thematic saturation was achieved prior to analyzing the final transcript. RESULTS We identified 4 major themes. Nurses are positioned to provide feedback: (1) on residents' interprofessional collaborative practice; (2) on residents' communication with patients and their families; and (3) on behalf of patients and their families. Within each of these, we identified subthemes noting specific behaviors on which nurses can provide feedback. The fourth theme encompassed topics that may not be best suited for nursing feedback: medical decision-making and resident responsiveness. CONCLUSIONS Nurses and residents described specific resident behaviors that nurses were best positioned to assess.
Affiliation(s)
- Kimiko S. Dunbar
- Kimiko S. Dunbar, MD, is a Pediatric Hospital Medicine Fellow, Department of Hospital Medicine, Children's Hospital Colorado
- Laura E. Chiel
- Laura E. Chiel, MD, is a Pediatric Pulmonary Fellow, Division of Pulmonary Medicine, Department of Pediatrics, Boston Children's Hospital
- Dennis P. Doherty
- Dennis P. Doherty, PhD, RN, NPD-BC, is Senior Professional Development Specialist, Clinical Education, Informatics, Practice and Quality, Nursing Patient Care, Boston Children's Hospital
- Ariel S. Winn
- Ariel S. Winn, MD, is Associate Program Director, Department of Pediatrics, Boston Children's Hospital and Harvard Medical School
- Carolyn H. Marcus
- Carolyn H. Marcus, MD, is Associate Program Director, Department of Pediatrics, Boston Children's Hospital and Harvard Medical School
4. Yousefi M, Ebrahimi Z, Fazaeli S, Mashhadi L. Continuous multidimensional assessment system of medical residents in teaching hospitals. Health Sci Rep 2022;5:e573. PMID: 35415274; PMCID: PMC8982702; DOI: 10.1002/hsr2.573.
Abstract
Background and aims Evaluation of learners is considered one of the most important principles in education, especially in clinical fields. Continuous assessment can be used to provide appropriate feedback to students about their strengths and weaknesses. In this regard, this study aimed to design a system of continuous assessment of medical residents (SCMAR). Methods This study was performed using a combination of qualitative methods, including focus group discussion, an expert panel, and the Delphi technique. The study population consisted of all the stakeholders involved in the evaluation process of medical residents in the Imam Reza Hospital Complex (IRHC) in Iran. The study was conducted in three phases: identification of subthemes and indicators, production of a primary framework for the SCMAR, and agreement on the subthemes of the SCMAR. The nominal group technique was used for generating priority information. Data analysis was performed during the agreement review stage with Excel 2016. Results The finalized SCMAR consisted of 10 main themes and 38 subthemes. The themes included objectives, evaluators, areas and indicators of evaluation, evaluation periods, evaluation requirements, data collection, data sources, point assignment and data analysis methods, reporting, and feedback dissemination methods. Five areas of evaluation and 11 indicators of evaluation were proposed. Conclusion A comprehensive assessment system that continuously evaluates the performance of medical residents can be used as a stimulus to improve the quality of educational processes. The present study aimed to address this need by designing a framework for such a system.
Affiliation(s)
- Mehdi Yousefi
- I.R. Iran's National Institute of Health Research, Tehran University of Medical Sciences, Tehran, Iran
- Department of Health Economics and Management Science, Mashhad University of Medical Sciences, Mashhad, Iran
- Zahra Ebrahimi
- Department of Management, Islamic Azad University, North Tehran Branch, Tehran, Iran
- Somayeh Fazaeli
- Department of Medical Records and Health Information Technology, Mashhad University of Medical Sciences, Mashhad, Iran
- Leila Mashhadi
- Department of Anesthesiology, Mashhad University of Medical Sciences, Mashhad, Iran
5. Bhat C, LaDonna KA, Dewhirst S, Halman S, Scowcroft K, Bhat S, Cheung WJ. Unobserved Observers: Nurses' Perspectives About Sharing Feedback on the Performance of Resident Physicians. Academic Medicine 2022;97:271-277. PMID: 34647919; DOI: 10.1097/acm.0000000000004450.
Abstract
PURPOSE Postgraduate training programs are incorporating feedback from registered nurses (RNs) to facilitate holistic assessments of resident performance. RNs are a potentially rich source of feedback because they often observe trainees during clinical encounters when physician supervisors are not present. However, RN perspectives about sharing feedback have not been deeply explored. This study investigated RN perspectives about providing feedback and explored the facilitators and barriers influencing their engagement. METHOD Constructivist grounded theory methodology was used in interviewing 11 emergency medicine and 8 internal medicine RNs at 2 campuses of a tertiary care academic medical center in Ontario, Canada, between July 2019 and March 2020. Interviews explored RN experiences working with and observing residents in clinical practice. Data collection and analysis were conducted iteratively. Themes were identified using constant comparative analysis. RESULTS RNs felt they could observe authentic day-to-day behaviors of residents often unwitnessed by supervising physicians and offer unique feedback related to patient advocacy, communication, leadership, collaboration, and professionalism. Despite a strong desire to contribute to resident education, RNs were apprehensive about sharing feedback and reported barriers related to hierarchy, power differentials, and a fear of overstepping professional boundaries. Although infrequent, a key stimulus that enabled RNs to feel safe in sharing feedback was an invitation from the supervising physician to provide input. CONCLUSIONS Perceived hierarchy in academic medicine is a critical barrier to engaging RNs in feedback for residents. Accessing RN feedback on authentic resident behaviors requires dismantling the negative effects of hierarchy and fostering a collaborative interprofessional working environment. A critical step toward this goal may require supervising physicians to model feedback-seeking behavior by inviting RNs to share feedback. Until a workplace culture is established that validates nurses' input and creates safe opportunities for them to contribute to resident education, the voices of nurses will remain unheard.
Affiliation(s)
- Chirag Bhat
- C. Bhat is a resident physician, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada; ORCID: https://orcid.org/0000-0003-3198-6450
- Kori A LaDonna
- K.A. LaDonna is assistant professor, Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Sebastian Dewhirst
- S. Dewhirst is a lecturer, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada; ORCID: https://orcid.org/0000-0003-1996-6692
- Samantha Halman
- S. Halman is assistant professor, Department of Medicine, University of Ottawa and the Ottawa Hospital, Ottawa, Ontario, Canada; ORCID: http://orcid.org/0000-0002-5474-9696
- Katherine Scowcroft
- K. Scowcroft is a research assistant, Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Silke Bhat
- S. Bhat is a registered nurse, Department of Emergency Medicine, the Ottawa Hospital, Ottawa, Ontario, Canada
- Warren J Cheung
- W.J. Cheung is associate professor, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada; ORCID: https://orcid.org/0000-0002-2730-8190
6. Dudek N, Duffy MC, Wood TJ, Gofton W. The Ottawa Resident Observation Form for Nurses (O-RON): Assessment of Resident Performance through the Eyes of the Nurses. Journal of Surgical Education 2021;78:1666-1675. PMID: 34092533; DOI: 10.1016/j.jsurg.2021.03.014.
Abstract
OBJECTIVE Most workplace-based assessment relies on physician supervisors making observations of residents. Many areas of performance are not directly observed by physicians but rather by other healthcare professionals, most often nurses. Assessment of resident performance by nurses is captured with multi-source feedback tools. However, these tools combine the assessments of nurses with those of other healthcare professionals, and so the nursing perspective can be lost. A novel tool was developed and implemented to assess resident performance on a hospital ward from the perspective of the nurses. DESIGN Through a nominal group technique, nurses identified dimensions of performance that are reflective of high-quality physician performance on a hospital ward. These were included as items in the Ottawa Resident Observation Form for Nurses (O-RON). The O-RON was voluntarily completed during an 11-month period. Validity evidence related to quantitative and qualitative data was collected. SETTING The Orthopedic Surgery Residency Program at the University of Ottawa. PARTICIPANTS 49 nurses on the Orthopedic Surgery wards at The Ottawa Hospital (tertiary care). RESULTS The O-RON has 15 items rated on a 3-point frequency scale, one global judgment yes/no question regarding whether the nurse would want the resident on their team, and a space for comments. 1079 O-RONs were completed on 38 residents. There was an association between the response to the global judgment question and the frequency of concerns (p < 0.01). With 8 forms per resident, the reliability of the O-RON was 0.80. Open-ended responses referred to aspects of interpersonal skills, responsiveness, dependability, communication skills, and knowledge. CONCLUSIONS The O-RON demonstrates promise as a workplace-based assessment tool to provide residents and training programs with feedback on aspects of their performance on a hospital ward through the eyes of the nurses. It appears to be easy to use, has solid evidence for validity, and can provide reliable data with a small number of completed forms.
Affiliation(s)
- Nancy Dudek
- Department of Medicine (Division of Physical Medicine & Rehabilitation) and The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
- Melissa C Duffy
- Department of Educational Studies, College of Education, University of South Carolina, Wardlaw College, Columbia, South Carolina
- Timothy J Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Wade Gofton
- Department of Surgery (Division of Orthopedic Surgery) and The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
7. Bowman A, Harreveld RB, Lawson C. Factors influencing the rating of sonographer students' clinical performance. Radiography (Lond) 2021;28:8-16. PMID: 34332858; DOI: 10.1016/j.radi.2021.07.009.
Abstract
INTRODUCTION Little is known about the factors influencing clinical supervisor-assessors' ratings of sonographer students' performance. This study identifies these influential factors and relates them to professional competency standards, with the aim of raising awareness and improving assessment practice. METHODS This study used archived written comments from 94 clinical assessors describing 174 sonographer students' performance one month into their initial clinical practice (2015-2016). Qualitative mixed-methods analysis revealed factors influencing assessor ratings of student performance and provided an estimate of the valency, association, and frequency of these factors. RESULTS Assessors provided written comments for 93% (n = 162/174) of students. Comments totaled 7190 words (mean of 44 words/student). One-third of comment paragraphs were wholly positive, two-thirds were equivocal. None were wholly negative. Thematic analysis revealed eleven factors, and eight sub-factors, influencing assessor impressions of five dimensions of performance. Of the factors mentioned, 84.6% (n = 853/1008) related to professional competencies, while 15.4% (n = 155/1008) were unrelated to competencies, instead reflecting humanistic factors such as student motivation, disposition, approach to learning, prospects, and impact on supervisors and staff. Factors were prioritised and combined independently, although some were associated. CONCLUSION Clinical assessors formed impressions based on student performance, humanistic behaviours, and personal qualities not necessarily outlined in educational outcomes or professional competency standards. Their presence, and interrelations, impact success in clinical practice through their contribution to, and indication of, competence. IMPLICATIONS FOR PRACTICE Sonographer student curricula and assessor training should raise awareness of the factors influencing performance ratings and judgements of clinical competence, particularly the importance of humanistic factors. Inclusion of narrative comments, multiple assessors, and broad performance dimensions would enhance clinical assessment of sonographer student performance.
Affiliation(s)
- A Bowman
- School of Graduate Research, Central Queensland University, Cairns, Australia
- R B Harreveld
- School of Education and the Arts, Central Queensland University, Rockhampton, Australia
- C Lawson
- School of Education and the Arts, Central Queensland University, Rockhampton, Australia
8. Garcia Popov A, Hall AK, Chaplin T. Multisource Feedback in the Trauma Context: Priorities and Perspectives. AEM Education and Training 2021;5:e10533. PMID: 34099987; PMCID: PMC8166304; DOI: 10.1002/aet2.10533.
Abstract
OBJECTIVES Trauma resuscitations require competence in both clinical and nonclinical skills, but these can be difficult to observe and assess. Multisource feedback (MSF) is workplace-based, involves the direct observation of learners, and can provide feedback on nonclinical skills. We sought to compare and contrast the priorities of multidisciplinary trauma team members when assessing resident trauma team captain (TTC) performance. Additionally, we aimed to describe the nature of the assessment and the perceived utility of incorporating MSF into the trauma context. METHODS A convenience sample of 10 trauma team activations was observed. Following each activation, the attending physician trauma team leader (TTL), the TTC, and a registered nurse (RN) participated in a semistructured interview. MSF was not provided to the TTC for the purposes of this study, both because MSF was not part of the assessment process for TTCs at the time and because maintaining anonymity may have encouraged more honest interview responses. Transcripts from each assessor group (TTL, TTC, RN) were coded and assigned to one of the five crisis resource management skills: leadership, communication, situational awareness, resource utilization, and problem-solving. Comments were also coded as positive, negative, or neutral as interpreted by the coder. RESULTS All assessor groups mentioned communication skills most frequently. After communication, the RN and TTC groups commented on situational awareness most frequently, comprising 15% and 29% of their total responses, respectively, whereas 31% of the TTL comments focused on leadership skills. The RN and TTL groups provided positive assessments, with 51% and 42% of their respective comments coded as positive. Forty-five percent of self-assessment comments in the TTC group were negative. All (100%) of the TTC and TTL respondents felt that incorporating MSF would add to the quality of feedback, whereas only 66% of the RN group felt that way. CONCLUSIONS We found that each assessor group brings a unique focus and perspective to the assessment of resident TTC performance. The future inclusion of MSF in the trauma team context has the potential to enhance the learning environment in a clinical arena that is difficult to directly observe and assess.
Affiliation(s)
- Andrei Garcia Popov
- Department of Emergency Medicine, Lennox and Addington County General Hospital, Napanee, Ontario, Canada
- Andrew K. Hall
- Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Timothy Chaplin
- Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
9. Sieplinga K, Disbrow E, Triemstra J, van de Ridder M. Off to a Jump Start: Using Immersive Activities to Integrate Continuity Clinic and Advocacy. Journal of Medical Education and Curricular Development 2021;8:23821205211059652. PMID: 34926827; PMCID: PMC8671658; DOI: 10.1177/23821205211059652.
Abstract
BACKGROUND Training in advocacy is an important component of graduate medical education. Several models have been implemented by residency programs to address this objective. Little has been published regarding the application of immersive advocacy activities integrated into continuity clinic. OBJECTIVE To create an Integrated Community Health and Child Advocacy Curriculum (ICHCA) by integrating advocacy activities that were immersive and contextualized in a continuity clinic setting, and to familiarize interns with continuity clinic immediately at the beginning of their training. METHODS We utilized a socio-constructivist lens, Kern's six-step approach to curriculum development, and a published curriculum mapping tool to create the curriculum. Twenty residents completed ICHCA in 2019. Evaluations from key stakeholders, including participants, support staff, and attendings, were analyzed across the four levels of Kirkpatrick's model. We compared results before the intervention, immediately following the intervention, and ten months following the intervention. RESULTS We demonstrated improvement in learner satisfaction, knowledge, and behaviors with respect to advocacy in the clinical environment. The response rate was 70% (7/10) for attendings, 75% (15/20) for support staff, and 72.5% (29/40) for residents. Our intervention was feasible, cost-free, and required no additional materials or training, as it relied on learning in real time. CONCLUSIONS An integrated advocacy curriculum utilizing the mapping tool for curricular design and evaluation is feasible and has value, as demonstrated by improvements in reaction, knowledge, and behaviors. This model improves understanding of social responsibility and can be implemented similarly in other residency programs.
Affiliation(s)
- Kira Sieplinga
- Michigan State University College of Human Medicine, Pediatric Residency, Spectrum Health/Helen DeVos Children's Hospital, Grand Rapids, MI, USA
- Emily Disbrow
- Michigan State University College of Human Medicine, Pediatric Residency, Sparrow Hospital, Lansing, MI, USA
- Justin Triemstra
- Michigan State University College of Human Medicine, Pediatric Residency, Spectrum Health/Helen DeVos Children's Hospital, Grand Rapids, MI, USA
- Monica van de Ridder
- Michigan State University College of Human Medicine, Pediatric Residency, Spectrum Health/Helen DeVos Children's Hospital, Grand Rapids, MI, USA
10. Ragsdale JW, Berry A, Gibson JW, Herber-Valdez CR, Germain LJ, Engle DL. Evaluating the effectiveness of undergraduate clinical education programs. Medical Education Online 2020;25:1757883. PMID: 32352355; PMCID: PMC7241512; DOI: 10.1080/10872981.2020.1757883.
Abstract
Medical schools should use a variety of measures to evaluate the effectiveness of their clinical curricula. Both outcome measures and process measures should be included, and these can be organized according to the four-level training evaluation model developed by Donald Kirkpatrick. Managing evaluation data requires the institution to employ deliberate strategies to monitor signals in real time and to aggregate data so that informed decisions can be made. Future steps in program evaluation include increased emphasis on patient outcomes and multi-source feedback, as well as better integration of existing data sources.
Affiliation(s)
- John W. Ragsdale
- Assistant Dean for Clinical Education, University of Kentucky College of Medicine, Lexington, KY, USA
- Andrea Berry
- Executive Director of Faculty Life, University of Central Florida College of Medicine, Orlando, FL, USA
- Jennifer W. Gibson
- Director, Office of Medical Education, Tulane University School of Medicine, New Orleans, LA, USA
- Christiane R. Herber-Valdez
- Assistant Professor, Department of Medical Education, Paul L. Foster School of Medicine, Texas Tech University Health Sciences Center at El Paso, El Paso, TX, USA
- Managing Director, Office of Institutional Research and Effectiveness, Texas Tech University Health Sciences Center at El Paso, El Paso, TX, USA
- Lauren J. Germain
- Director of Evaluation, Assessment and Research; Assistant Professor, Public Health and Preventive Medicine, SUNY Upstate Medical University, Syracuse, NY, USA
- Deborah L. Engle
- Assistant Dean, Assessment and Evaluation, Duke University School of Medicine, Durham, NC, USA
- Representing the Program Evaluation Special Interest Group of the Southern Group on Educational Affairs (SGEA) within the Association of American Medical Colleges (AAMC)
11. Barry ES, Dong T, Durning SJ, Schreiber-Gregory D, Torre D, Grunberg NE. Faculty Assessments in a Military Medical Field Practicum: Rater Experience and Gender Do Not Appear to Influence Scoring. Mil Med 2020;185:e358-e363. PMID: 31925445; DOI: 10.1093/milmed/usz364.
Abstract
INTRODUCTION Any implicit and explicit biases that exist may alter our interpretation of people and events. Within the context of assessment, it is important to determine whether biases exist and to decrease any existing biases, especially when rating student performance, to provide meaningful, fair, and useful input. The purpose of this study was to determine whether the experience and gender of faculty members contribute to their ratings of students in a military medical field practicum. This information is important for fair ratings of students. Three research questions were addressed: Were there differences between new versus experienced faculty raters? Were there differences in assessments provided by female and male faculty members? Did the gender of faculty raters impact ratings of female and male students? MATERIALS AND METHODS This study examined trained faculty evaluators' ratings of three cohorts of medical students during a medical field practicum in 2015-2017. Female (n = 80) and male (n = 161) faculty and female (n = 158) and male (n = 311) students were included. Within this dataset, there were 469 students and 241 faculty, resulting in 5,599 ratings for each of six outcome variables that relate to overall leader performance, leader competence, and leader communication. Descriptive statistics were computed for all variables for the first four observations of each student. Descriptive analyses were performed for evaluator experience status and gender differences for each of the six variables. A multivariate analysis of variance (MANOVA) was performed to examine whether there were differences between the gender of faculty and the gender of students. RESULTS Descriptive analyses of the experience status of faculty revealed no significant differences between means on any of the rating elements. Descriptive analyses of faculty gender revealed no significant differences between female and male faculty ratings of the students. The overall MANOVA found no statistically significant difference between female and male students on the combined dependent variables of leader performance for any of the four observations. CONCLUSIONS The study revealed that there were no differences in ratings of student leader performance based on faculty experience. In addition, there were no differences in ratings of student leader performance based on faculty gender.
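The abstract describes a MANOVA across leader-performance outcome variables but gives no analysis detail. A hypothetical sketch of that style of test on simulated data (three invented outcome variables, not the study's dataset) could look like this.

# Hypothetical sketch (simulated data): MANOVA testing whether student gender relates
# to a set of leader-performance ratings, analogous to the analysis described above.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(42)
n = 469  # number of students reported in the abstract
df = pd.DataFrame({
    "student_gender": rng.choice(["female", "male"], size=n),
    "leader_performance": rng.normal(4.0, 0.5, size=n),
    "leader_competence": rng.normal(4.1, 0.5, size=n),
    "leader_communication": rng.normal(3.9, 0.5, size=n),
})

manova = MANOVA.from_formula(
    "leader_performance + leader_competence + leader_communication ~ student_gender",
    data=df,
)
print(manova.mv_test())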
Affiliation(s)
- Erin S Barry
- Department of Military & Emergency Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd, Bethesda, MD 20814
- Ting Dong
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd, Bethesda, MD 20814
- Steven J Durning
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd, Bethesda, MD 20814
- Deanna Schreiber-Gregory
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd, Bethesda, MD 20814
- Dario Torre
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd, Bethesda, MD 20814
- Neil E Grunberg
- Department of Military & Emergency Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, 4301 Jones Bridge Rd, Bethesda, MD 20814
12. Brucker K, Whitaker N, Morgan ZS, Pettit K, Thinnes E, Banta AM, Palmer MM. Exploring Gender Bias in Nursing Evaluations of Emergency Medicine Residents. Acad Emerg Med 2019;26:1266-1272. PMID: 31373086; DOI: 10.1111/acem.13843.
Abstract
OBJECTIVES Nursing evaluations are an important component of residents' professional development, as nurses are present for interactions with patients and nonphysician providers. Despite this, there have been few prior studies on the benefits, harms, or effectiveness of using nursing evaluations to help guide emergency medicine residents' development. We hypothesized that gender bias exists in nursing evaluations and that female residents, compared to their male counterparts, would receive more negative feedback on the perception of their interpersonal communication skills. METHODS Data were drawn from nursing evaluations of residents between March 2013 and April 2016. All comments were coded if they contained words falling into four main categories: standout, ability, grindstone, and interpersonal. This methodology and the list of words that guided coding were based on the work of prior scholars. Names and gendered pronouns were obscured, and each comment was manually reviewed and coded for valence (positive, neutral, negative) and strength (certain or tentative) by at least two members of the research team. Following the qualitative coding, quantitative analysis was performed to test for differences. To evaluate whether any measurable differences in ability between male and female residents existed, we compiled and compared American Board of Emergency Medicine in-training examination scores and relevant milestone evaluations between female and male residents from the same period in which the residents were evaluated by nursing staff. RESULTS Of 1,112 nursing evaluations, 30% contained comments. Chi-square tests on the distribution of valence (positive, neutral, or negative) indicated statistically significant differences in the ability and grindstone categories based on the gender of the resident. A total of 51% of ability comments about female residents were negative, compared to 20% of those about male residents (χ2 = 11.83, p < 0.01). A total of 57% of grindstone comments about female residents were negative, as opposed to 24% of those about male residents (χ2 = 6.03, p < 0.01). CONCLUSIONS Our findings demonstrate that, despite the lack of difference in ability or competence as measured by in-service examination scores and milestone evaluations, nurses evaluated female residents lower on their abilities and work ethic compared with male residents.
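The abstract reports chi-square tests on the valence distribution of comments by resident gender. A minimal sketch of such a test, using invented counts rather than the study's data, is shown below.

# Hypothetical sketch (invented counts, not the study's data): chi-square test of the
# valence distribution (positive / neutral / negative) of comments by resident gender.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: female residents, male residents; columns: positive, neutral, negative comments.
observed = np.array([
    [20, 15, 36],   # hypothetical counts for female residents
    [40, 24, 16],   # hypothetical counts for male residents
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")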
Affiliation(s)
- Krista Brucker
- Department of Emergency Medicine, Indiana University, Indianapolis, IN
- Nash Whitaker
- Department of Emergency Medicine, Indiana University, Indianapolis, IN
- Katie Pettit
- Department of Emergency Medicine, Indiana University, Indianapolis, IN
- Erynn Thinnes
- Department of Emergency Medicine, Indiana University, Indianapolis, IN
- Alison M. Banta
- Department of Emergency Medicine, Indiana University, Indianapolis, IN
- Megan M. Palmer
- Department of Emergency Medicine, Indiana University, Indianapolis, IN
13. Bing-You R, Varaklis K, Hayes V, Trowbridge R, Kemp H, McKelvy D. The Feedback Tango: An Integrative Review and Analysis of the Content of the Teacher-Learner Feedback Exchange. Academic Medicine 2018;93:657-663. PMID: 28991848; DOI: 10.1097/acm.0000000000001927.
Abstract
PURPOSE To conduct an integrative review and analysis of the literature on the content of feedback to learners in medical education. METHOD Following completion of a scoping review in 2016, the authors analyzed a subset of articles published through 2015 describing the analysis of feedback exchange content in various contexts: audiotapes, clinical examination, feedback cards, multisource feedback, videotapes, and written feedback. Two reviewers extracted data from these articles and identified common themes. RESULTS Of the 51 included articles, about half (49%) were published since 2011. Most involved medical students (43%) or residents (43%). A leniency bias was noted in many (37%), as there was frequent reluctance to provide constructive feedback. More than one-quarter (29%) indicated the feedback was low in quality (e.g., too general, limited amount, no action plans). Some (16%) indicated faculty dominated conversations, did not use feedback forms appropriately, or provided inadequate feedback, even after training. Multiple feedback tools were used, with some articles (14%) describing varying degrees of use, completion, or legibility. Some articles (14%) noted the impact of the gender of the feedback provider or learner. CONCLUSIONS The findings reveal that the exchange of feedback is troubled by low-quality feedback, leniency bias, faculty deficient in feedback competencies, challenges with multiple feedback tools, and gender impacts. Using the tango dance form as a metaphor for this dynamic partnership, the authors recommend ways to improve feedback for teachers and learners willing to partner with each other and engage in the complexities of the feedback exchange.
Affiliation(s)
- Robert Bing-You
- R. Bing-You is professor, Tufts University School of Medicine, Boston, Massachusetts, and vice president for medical education, Maine Medical Center, Portland, Maine. K. Varaklis is clinical associate professor, Tufts University School of Medicine, Boston, Massachusetts, and designated institutional official, Maine Medical Center, Portland, Maine. V. Hayes is clinical assistant professor, Tufts University School of Medicine, Boston, Massachusetts, and faculty member, Department of Family Medicine, Maine Medical Center, Portland, Maine. R. Trowbridge is associate professor, Tufts University School of Medicine, Boston, Massachusetts, and director of undergraduate medical education, Department of Medicine, Maine Medical Center, Portland, Maine. H. Kemp is medical librarian, Maine Medical Center, Portland, Maine. D. McKelvy is manager of library and knowledge services, Maine Medical Center, Portland, Maine.
14. Palis AG, Golnik KC, Mayorga EP, Filipe HP, Garg P. The International Council of Ophthalmology 360-degree assessment tool: development and validation. Can J Ophthalmol 2017;53:145-149. PMID: 29631826; DOI: 10.1016/j.jcjo.2017.09.002.
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education and other organizations recommend 360-degree assessments for evaluation of interpersonal and communication skills, professional behaviours, and some aspects of patient care and system-based practice. No such tool has been developed for ophthalmology or received international content validation. OBJECTIVE To develop a valid, internationally applicable, ophthalmology-specific 360-degree assessment tool. DESIGN Exploratory study. METHODS A literature review was conducted. Individual 360-degree evaluation items from several publications were catalogued and classified according to different groups of assessors. A panel of international authors reviewed the list and voted on items that were most appropriate for international use. The list was trimmed to reduce redundancy and to make it as brief as possible while still capturing the essential components for each category. A second panel of international ophthalmic educators reviewed the international applicability and appropriateness of this collated list; relevant comments and suggestions were incorporated. RESULTS A tool for the evaluation of interpersonal and communication skills, professionalism, and system-based practice was developed. The tool has face and content validity. CONCLUSION This assessment tool can be used internationally for giving formative feedback based on the opinions of the different groups of people who interact with residents.
Affiliation(s)
- Ana Gabriela Palis
- Ophthalmology Department, Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
- Karl Clifford Golnik
- University of Cincinnati, Cincinnati, OH; The Cincinnati Eye Institute, Blue Ash, OH
- Eduardo Pedro Mayorga
- Ophthalmology Department, Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
- Helena Prior Filipe
- Ophthalmology Department, Hospital das Forças Armadas, Lisboa, Portugal; Hospital dos SAMS, Lisboa, Portugal
- Prashant Garg
- Tej Kohli Cornea Institute, L V Prasad Eye Institute, Hyderabad, India
15. Kinnear B, Bensman R, Held J, O'Toole J, Schauer D, Warm E. Critical Deficiency Ratings in Milestone Assessment: A Review and Case Study. Academic Medicine 2017;92:820-826. PMID: 28557948; DOI: 10.1097/acm.0000000000001383.
Abstract
PURPOSE The Accreditation Council for Graduate Medical Education (ACGME) requires programs to report learner progress using specialty-specific milestones. It is unclear how milestones can best identify critical deficiencies (CDs) in trainee performance. Specialties developed milestones independently of one another; not every specialty included CDs within milestone ratings. This study examined the proportion of ACGME milestone sets that include CD ratings and described one residency program's experiences using CD ratings in assessment. METHOD The authors reviewed ACGME milestones for all 99 specialties in November 2015, determining which rating scales contained CDs. The authors also reviewed three years of data (July 2012-June 2015) from the University of Cincinnati Medical Center (UCMC) internal medicine residency assessment system based on observable practice activities mapped to ACGME milestones. Data were analyzed by postgraduate year, assessor type, rotation, academic year, and core competency. The Mantel-Haenszel chi-square test was used to test for changes over time. RESULTS Specialties demonstrated heterogeneity in accounting for CDs in ACGME milestones, with 22% (22/99) of specialties having no language describing CDs in milestone assessment. Thirty-three percent (63/189) of UCMC internal medicine residents received at least one CD rating, with CDs accounting for 0.18% (668/364,728) of all assessment ratings. The authors identified CDs across multiple core competencies and rotations. CONCLUSIONS Despite some specialties not accounting for CDs in milestone assessment, UCMC's experience demonstrates that a significant proportion of residents may be rated as having a CD during training. Identification of CDs may allow programs to develop remediation and improvement plans.
Affiliation(s)
- Benjamin Kinnear
- B. Kinnear is assistant professor and residency assistant program director, Medicine-Pediatrics, Department of Internal Medicine, University of Cincinnati College of Medicine, and Department of Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio. R. Bensman is clinical fellow, Department of Pediatrics, Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio. J. Held is assistant professor and residency associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio. J. O'Toole is associate professor and residency associate program director, Medicine-Pediatrics, Department of Internal Medicine, University of Cincinnati College of Medicine, and Department of Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio. D. Schauer is associate professor and residency associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio. E. Warm is Richard W. Vilter Professor of Medicine and residency program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
16. Munyewende PO, Levin J, Rispel LC. An evaluation of the competencies of primary health care clinic nursing managers in two South African provinces. Glob Health Action 2016;9:32486. PMID: 27938631; PMCID: PMC5149665; DOI: 10.3402/gha.v9.32486.
Abstract
Background Managerial competencies to enhance individual and organisational performance have gained currency in global efforts to strengthen health systems. Competent managers are essential in the implementation of primary health care (PHC) reforms that aim to achieve universal health coverage. Objective To evaluate the competencies of PHC clinic nursing managers in two South African provinces. Design A cross-sectional study was conducted in two South African provinces. Using stratified random sampling, 111 PHC clinic nursing managers were selected. All supervisors (n=104) and subordinate nurses (n=383) were invited to participate in the survey on the day of data collection. Following informed consent, the nursing managers, their supervisors, and subordinate nurses completed a 40-item, 360-degree competency assessment questionnaire, with six domains: communication, leadership and management, staff management, financial management, planning and priority setting, and problem-solving. Standard deviations, medians, and inter-quartile ranges (IQRs) were computed separately for PHC nursing managers, supervisors, and subordinate nurses for competencies in the six domains. The Tinsley and Weiss index was used to assess agreement between each of the three possible pairs of raters. Results A 95.4% response rate was obtained, with 105 nursing managers in Gauteng and Free State completing the questionnaires. There was a lack of agreement about nursing managers’ competencies among the three groups of raters. Overall, clinic nursing managers rated themselves high on the five domains of communication (8.6), leadership and management (8.67), staff management (8.75), planning and priority setting (8.6), and problem-solving (8.83). The exception was financial management with a median score of 7.94 (IQR 6.33–9.11). Compared to the PHC clinic managers, the supervisors and subordinate nurses gave PHC nursing managers lower ratings on all six competency domains, with the lowest rating for financial management (supervisor median rating 6.56; subordinate median rating 7.31). Conclusion The financial management competencies of PHC clinic nursing managers need to be prioritised in continuing professional development programmes.
Affiliation(s)
- Pascalia O Munyewende
- Centre for Health Policy & Medical Research Council Health Policy Research Group, School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Jonathan Levin
- Division of Epidemiology and Biostatistics, School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
- Laetitia C Rispel
- Centre for Health Policy & Medical Research Council Health Policy Research Group, School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa; DST/NRF South African Research Chair, School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa
17. Sadeghi T, Loripoor M. Usefulness of 360 degree evaluation in evaluating nursing students in Iran. Korean J Med Educ 2016;28:195-200. PMID: 26913770; PMCID: PMC4951738; DOI: 10.3946/kjme.2016.22.
Abstract
PURPOSE This study aimed to evaluate clinical nursing students using 360-degree evaluation. METHODS In this descriptive cross-sectional study, conducted between September 2014 and February 2015, 28 students were selected by census from those completing the last semester of the Nursing BSc program at Rafsanjan University of Medical Sciences. Data collection tools included a demographic questionnaire and a student evaluation questionnaire assessing "professional behavior" and "clinical skills" in the pediatric ward. Each student was evaluated from the points of view of the clinical instructor, the students themselves, peers, clinical nurses, and children's mothers. Data analysis was performed with descriptive and analytic statistical tests, including the Pearson correlation coefficient, using SPSS version 18.0. RESULTS The mean evaluation scores were as follows: students, 89.74±6.17; peers, 94.12±6.87; children's mothers, 92.87±6.21; clinical instructor, 84.01±8.81; and the nurses, 94.87±6.35. The results showed a significant correlation between the evaluation scores of peers, the clinical instructor, and self-evaluation (Pearson coefficient, p<0.001), but the correlation between the nurses' evaluation score and that of the clinical instructor was not significant (Pearson coefficient, p=0.052). CONCLUSION 360-degree evaluation can provide additional useful information on student performance and evaluation from different perspectives of care. The use of this method is recommended for the clinical evaluation of nursing students.
Affiliation(s)
- Tabandeh Sadeghi
- Pediatric Department, School of Nursing and Midwifery, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
- Marzeyeh Loripoor
- Midwifery and Reproductive Health Department, School of Nursing and Midwifery, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
18. van Schaik SM, Regehr G, Eva KW, Irby DM, O'Sullivan PS. Perceptions of Peer-to-Peer Interprofessional Feedback Among Students in the Health Professions. Academic Medicine 2016;91:807-812. PMID: 26556298; DOI: 10.1097/acm.0000000000000981.
Abstract
PURPOSE Interprofessional teamwork should include interprofessional feedback to optimize performance and collaboration. Social identity theory predicts that hierarchy and stereotypes may limit receptiveness to interprofessional feedback, but literature on this is sparse. This study explores perceptions among health professions students regarding interprofessional peer feedback received after a team exercise. METHOD In 2012-2013, students from seven health professions schools (medicine, pharmacy, nursing, dentistry, physical therapy, dietetics, and social work) participated in a team-based interprofessional exercise early in clinical training. Afterward, they wrote anonymous feedback comments for each other. Each student subsequently completed an online survey to rate the usefulness and positivity (on five-point scales) of feedback received and guessed each comment's source. Data analysis included analysis of variance to examine interactions (on usefulness and positivity ratings) between profession of feedback recipients and providers. RESULTS Of 353 study participants, 242 (68.6%) accessed the feedback and 221 (62.6%) completed the survey. Overall, students perceived the feedback as useful (means across professions = 3.84-4.27) and positive (means = 4.17-4.86). There was no main effect of profession of the feedback provider, and no interactions between profession of recipient and profession of provider regardless of whether the actual or guessed provider profession was entered into the analysis. CONCLUSIONS These findings suggest that students have positive perceptions of interprofessional feedback without systematic bias against any specific group. Whether students actually use interprofessional feedback for performance improvement and remain receptive toward such feedback as they progress in their professional education deserves further study.
Collapse
Affiliation(s)
- Sandrijn M van Schaik
- S.M. van Schaik is director, Kanbar Center for Simulation, Clinical Skills and Telemedicine Education, and associate professor, Clinical Pediatrics, Pediatric Critical Care Medicine, University of California, San Francisco, San Francisco, California. G. Regehr is associate director of research, Center for Health Education Scholarship, and professor of surgery, University of British Columbia, Vancouver, British Columbia, Canada. K.W. Eva is senior scientist, Center for Health Education Scholarship, and professor of medicine, University of British Columbia, Vancouver, British Columbia, Canada. D.M. Irby is professor of medicine, University of California, San Francisco, San Francisco, California. P.S. O'Sullivan is director of research and development in medical education and professor of medicine, University of California, San Francisco, San Francisco, California
Collapse
|
19
|
Abstract
PURPOSE We examined the evaluations given by nurses to obstetrics and gynecology residents to estimate whether gender bias was evident. BACKGROUND Women receive more negative feedback and evaluations than men, from both sexes. Some suggest that, to be successful in traditionally male roles such as surgeon, women must manifest a warmth-related (communal) rather than competence-related (agentic) demeanor. Compared with male residents, female residents experience more interpersonal difficulties and less help from female nurses. We examined feedback provided to residents by female nurses. METHODS We examined Professional Associate Questionnaires (2006-2014) using a mixed-methods design. We compared scores per training year by gender using Mann-Whitney tests and linear regression adjusting for resident and nurse cohorts. Using grounded theory analysis, we developed a coding system for blinded comments based on principles of effective feedback, medical learners' evaluation, and impression management. Chi-square tests examined the proportions of negative and positive and of communal and agentic comments between genders. RESULTS We examined 2,202 evaluations: 397 (18%) for 10 men and 1,805 (82%) for 34 women. Twenty-three compliments (e.g., "Great resident!") were excluded. Evaluations per training year varied: men n=77-134; women n=384-482. Postgraduate year (PGY)-1, PGY-2, and PGY-4 women had lower mean ratings (P<.035); when adjusted, the difference remained significant in PGY-2 (mean for women 1.5±0.6 compared with 1.7±0.5 for men; P=.001). PGY-1 women received disproportionately fewer positive and more negative agentic comments than PGY-1 men (positive 17.3% compared with 40%, negative 17.3% compared with 3.3%, respectively; P=.041). CONCLUSION Evidence of gender bias in evaluations emerged: although subtle, women received harsher feedback as lower-level residents than men did. Training in effective evaluation and gender bias management is warranted.
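The two quantitative comparisons described in this abstract can be sketched as follows. The numbers, group sizes, and use of Python/SciPy are assumptions for illustration only; the study's actual data are not reproduced.

```python
# Hedged sketch of the analyses named above: a Mann-Whitney U test comparing ratings
# by resident gender within one training year, and a chi-square test on counts of
# positive vs. negative agentic comments. All numbers are invented placeholders.
from scipy import stats

# hypothetical PGY-2 mean item ratings by resident gender
pgy2_women = [1.4, 1.6, 1.5, 1.3, 1.7, 1.5, 1.6, 1.4]
pgy2_men   = [1.8, 1.6, 1.7, 1.9, 1.6, 1.7]
u, p = stats.mannwhitneyu(pgy2_women, pgy2_men, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.3f}")

# hypothetical counts of agentic comments: rows = gender, cols = [positive, negative]
table = [[13, 13],   # PGY-1 women
         [12,  1]]   # PGY-1 men
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```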
Collapse
|
20
|
Chesluk BJ, Reddy S, Hess B, Bernabeo E, Lynn L, Holmboe E. Assessing interprofessional teamwork: pilot test of a new assessment module for practicing physicians. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2015; 35:3-10. [PMID: 25799967 DOI: 10.1002/chp.21267] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
INTRODUCTION Teamwork is a basic component of all health care, and substantial research links the quality of teamwork to safety and quality of care. The TEAM (Teamwork Effectiveness Assessment Module) is a new Web-based teamwork assessment module for practicing hospital physicians. The module combines self-assessment, multisource feedback from members of other professions and specialties with whom the physician works as part of a team, and a structured review of those data with a peer to develop an improvement plan. METHODS We conducted a pilot test of this module with hospitalist physicians to evaluate the feasibility and usefulness of the module in practice, focusing on these specific questions: Would physicians in hospitals of different types and sizes be able to use the module; would the providers identified as raters respond to the request for feedback; would the physicians be able to identify one or more "trusted peers" to help analyze the feedback; and how would physicians experience the module process overall? RESULTS 20 of 25 physicians who initially volunteered for the pilot completed all steps of the TEAM, including identifying interprofessional teammates, soliciting feedback from their team, and identifying a peer to help review data. Module users described the feedback they received as helpful and actionable, and indicated this was information they would not have otherwise received. CONCLUSIONS The results suggest that a module combining self-assessment, multisource feedback, and a guided process for interpreting these data can help practicing hospital physicians understand and potentially improve their interprofessional teamwork skills and behaviors.
Collapse
|
21
|
Hayward MF, Curran V, Curtis B, Schulz H, Murphy S. Reliability of the interprofessional collaborator assessment rubric (ICAR) in multi source feedback (MSF) with post-graduate medical residents. BMC MEDICAL EDUCATION 2014; 14:1049. [PMID: 25551678 PMCID: PMC4318203 DOI: 10.1186/s12909-014-0279-9] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/04/2014] [Accepted: 12/16/2014] [Indexed: 05/14/2023]
Abstract
BACKGROUND Increased attention on collaboration and teamwork competency development in medical education has raised the need for valid and reliable approaches to the assessment of collaboration competencies in post-graduate medical education. The purpose of this study was to evaluate the reliability of a modified Interprofessional Collaborator Assessment Rubric (ICAR) in a multi-source feedback (MSF) process for assessing post-graduate medical residents' collaborator competencies. METHODS Post-graduate medical residents (n = 16) received ICAR assessments from three different rater groups (physicians, nurses and allied health professionals) over a four-week rotation. Internal consistency, inter-rater reliability, inter-group differences and relationship between rater characteristics and ICAR scores were analyzed using Cronbach's alpha, one-way and two-way repeated measures ANOVA, and logistic regression. RESULTS Missing data decreased from 13.1% using daily assessments to 8.8% utilizing an MSF process, p = .032. High internal consistency measures were demonstrated for overall ICAR scores (α = .981) and individual assessment domains within the ICAR (α = .881 to .963). There were no significant differences between scores of physician, nurse, and allied health raters on collaborator competencies (F(2,5) = 1.225, p = .297, η² = .016). Rater gender was the only significant factor influencing scores, with female raters scoring residents significantly lower than male raters (6.12 vs. 6.82; F(1,5) = 7.184, p = .008, η² = .045). CONCLUSION The study findings suggest that the use of the modified ICAR in a MSF assessment process could be a feasible and reliable assessment approach to providing formative feedback to post-graduate medical residents on collaborator competencies.
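One of the reliability statistics named above, Cronbach's alpha, can be computed directly from an items-by-residents score matrix. The sketch below uses invented scores and plain NumPy; it illustrates the formula only and makes no claim about the study's dataset.

```python
# Minimal sketch of Cronbach's alpha for a set of rubric items.
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = residents, columns = rubric items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# hypothetical data: 6 residents x 4 items
scores = np.array([
    [7, 8, 7, 6],
    [6, 6, 7, 6],
    [8, 8, 9, 8],
    [5, 6, 6, 5],
    [7, 7, 8, 7],
    [6, 7, 6, 6],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
```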
Collapse
Affiliation(s)
- Mark F Hayward
- Patient Research Center, Faculty of Medicine, Memorial University, St. John’s, NL A1B 3V6, Canada
| | - Vernon Curran
- Patient Research Center, Faculty of Medicine, Memorial University, St. John’s, NL A1B 3V6, Canada
| | - Bryan Curtis
- Patient Research Center, Faculty of Medicine, Memorial University, St. John’s, NL A1B 3V6, Canada
| | - Henry Schulz
- Patient Research Center, Faculty of Medicine, Memorial University, St. John’s, NL A1B 3V6, Canada
| | - Sean Murphy
- Patient Research Center, Faculty of Medicine, Memorial University, St. John’s, NL A1B 3V6, Canada
| |
Collapse
|
22
|
White JS, Sharma N. "Who writes what?" Using written comments in team-based assessment to better understand medical student performance: a mixed-methods study. BMC MEDICAL EDUCATION 2012; 12:123. [PMID: 23249445 PMCID: PMC3558404 DOI: 10.1186/1472-6920-12-123] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/09/2012] [Accepted: 12/04/2012] [Indexed: 05/16/2023]
Abstract
BACKGROUND Observation of the performance of medical students in the clinical environment is a key part of assessment and learning. To date, few authors have examined written comments provided to students and considered what aspects of observed performance they represent. The aim of this study was to examine the quantity and quality of written comments provided to medical students by different assessors using a team-based model of assessment, and to determine the aspects of medical student performance on which different assessors provide comments. METHODS Medical students on a 7-week General Surgery & Anesthesiology clerkship received written comments on 'Areas of Excellence' and 'Areas for Improvement' from physicians, residents, nurses, patients, peers and administrators. Mixed methods were used to analyze the quality and quantity of comments provided and to generate a conceptual framework of observed student performance. RESULTS 1,068 assessors and 127 peers provided 2,988 written comments for 127 students, a median of 188 words per student, divided into 26 "Areas of Excellence" and 5 "Areas for Improvement". Physicians provided the most comments (918), followed by patients (692) and peers (586); administrators provided the fewest (91). The conceptual framework generated contained four major domains: 'Student as Physician-in-Training', 'Student as Learner', 'Student as Team Member', and 'Student as Person.' CONCLUSIONS A wide range of observed medical student performance is recorded in written comments provided by members of the surgical healthcare team. Different groups of assessors provide comments on different aspects of student performance, suggesting that comments provided from a single viewpoint may potentially under-represent or overlook some areas of student performance. We hope that the framework presented here can serve as a basis to better understand what medical students do every day, and how they are perceived by those with whom they work.
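The descriptive tallies reported in this abstract (comments per assessor group, words per student) can be sketched with a small amount of pandas code. The records, group labels, and comment text below are invented placeholders; this is not the study dataset.

```python
# Illustrative sketch: tallying written comments by assessor group and computing
# the median number of comment words per student. All records are hypothetical.
import pandas as pd

comments = pd.DataFrame({
    "student":  ["s01", "s01", "s01", "s02", "s02", "s03", "s03", "s03"],
    "assessor": ["physician", "nurse", "patient", "physician", "peer",
                 "physician", "administrator", "patient"],
    "text": ["Clear handover notes", "Kind with families", "Listened carefully",
             "Strong suturing technique", "Helpful on call", "Reads around cases",
             "Paperwork always complete", "Explained the plan well"],
})

print(comments["assessor"].value_counts())                  # comments per assessor group
words = comments.assign(n_words=comments["text"].str.split().str.len())
print(words.groupby("student")["n_words"].sum().median())   # median words per student
```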
Collapse
Affiliation(s)
- Jonathan Samuel White
- Department of Surgery, Faculty of Medicine & Dentistry, University of Alberta, 10240 Kingsway Avenue, Edmonton, AB T5H 3V9, Canada
| | - Nishan Sharma
- Department of Surgery, Faculty of Medicine & Dentistry, University of Alberta, 10240 Kingsway Avenue, Edmonton, AB T5H 3V9, Canada
| |
Collapse
|
23
|
Wong RK, Ventura CV, Espiritu MJ, Yonekawa Y, Henchoz L, Chiang MF, Lee TC, Chan RVP. Training fellows for retinopathy of prematurity care: a Web-based survey. J AAPOS 2012; 16:177-81. [PMID: 22525176 PMCID: PMC3338950 DOI: 10.1016/j.jaapos.2011.12.154] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/26/2011] [Revised: 12/14/2011] [Accepted: 12/14/2011] [Indexed: 11/30/2022]
Abstract
PURPOSE To characterize the training received by pediatric ophthalmology and retina fellows in retinopathy of prematurity (ROP) management. METHODS Pediatric ophthalmology and retina fellowship programs were emailed a Web-based survey to assess fellowship training in ROP management. RESULTS Of 140 programs contacted, 42 (30%) participated, resulting in 87 surveys for analysis. Of the 87 respondents, 25 (29%) reported that two-thirds or less of ROP examinations performed by fellows were also seen by an attending. When stratified by specialty, this trend was statistically different between pediatric ophthalmology and retina fellows (P = 0.03). Additionally, pediatric ophthalmology fellows performed fewer laser photocoagulation procedures than retina fellows (P < 0.001). Regarding fellows' perceived competency in ROP management, 3 of 51 (6%) felt competent at the start of their fellowship and 43 of 51 (84%) felt competent at the time of the survey. Only 7% of respondents reported the use of formal evaluations at their programs to assess fellow competence in ROP examination. CONCLUSIONS Training programs for fellows in pediatric ophthalmology and retina vary greatly with respect to ROP training and the quality of clinical care. Many clinical ROP examinations are being performed by pediatric ophthalmology and retina fellows without involvement and/or direct supervision by attending ophthalmologists. Our findings have important implications for the development of a future workforce for ROP management.
Collapse
Affiliation(s)
- Ryan K Wong
- Department of Ophthalmology and Visual Science, Yale University School of Medicine, New Haven, Connecticut, USA
Collapse
|
24
|
Evaluating the Performance of the Academic Coordinator/Director of Clinical Education: Tools to Solicit Input From Program Directors, Academic Faculty, and Students. ACTA ACUST UNITED AC 2011. [DOI: 10.1097/00001416-201101000-00006] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
25
|
Data Envelopment Analysis Model for the Appraisal and Relative Performance Evaluation of Nurses at an Intensive Care Unit. J Med Syst 2010; 35:1039-62. [DOI: 10.1007/s10916-010-9570-4] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2009] [Accepted: 07/21/2010] [Indexed: 10/19/2022]
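Entry 25 applies data envelopment analysis (DEA) to nurse appraisal. As a hedged sketch of the general technique only (not the paper's specific model, which is not described here), the code below solves a standard input-oriented CCR efficiency score with SciPy's linear programming solver; the nurse input/output figures are invented placeholders.

```python
# Sketch of an input-oriented CCR DEA model: for each nurse (DMU), minimize theta
# such that a nonnegative combination of peers uses at most theta * her inputs while
# producing at least her outputs. Data below are invented for illustration.
import numpy as np
from scipy.optimize import linprog

X = np.array([[40.0, 5.0], [38.0, 2.0], [42.0, 8.0], [36.0, 4.0]])    # inputs, e.g. hours, overtime
Y = np.array([[30.0, 55.0], [28.0, 60.0], [35.0, 50.0], [25.0, 48.0]])  # outputs, e.g. patients, tasks

def ccr_efficiency(o: int) -> float:
    """Efficiency score in (0, 1] for DMU o (1.0 = efficient)."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                            # minimize theta
    A_in = np.hstack([-X[[o]].T, X.T])                     # sum_j lam_j * x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])   # sum_j lam_j * y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(X.shape[0]):
    print(f"nurse {o}: efficiency = {ccr_efficiency(o):.3f}")
```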
|