Abstract
OBJECTIVE
Measuring teamwork is essential in critical care, yet few observational measurement systems exist for this environment. The objective of this study was to evaluate the reliability and validity of a behavioral marker system for measuring teamwork in ICUs.
DESIGN
Two raters observed instances of teamwork across three tasks: multidisciplinary rounds, nurse-to-nurse handoffs, and retrospective videos of medical students and instructors performing simulated codes. Intraclass correlation coefficients (ICCs) were calculated to assess interrater reliability. Generalizability theory was applied to estimate systematic sources of variance across the three observed team tasks, including variance associated with instances of teamwork, raters, competencies, and tasks.
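The abstract does not specify which ICC form was used. As an illustrative sketch only, the following computes ICC(2,1) — two-way random effects, absolute agreement, single rater — a common choice when the same fixed pair of raters scores every instance; the function name and NumPy implementation are assumptions, not taken from the study:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: 2D array-like, rows = observed teamwork instances,
             columns = raters.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-instance means
    col_means = x.mean(axis=0)   # per-rater means

    # Two-way ANOVA decomposition of total variability
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((x - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1) formula
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

With identical ratings from both raters the coefficient is 1.0; a constant offset between raters lowers it, because ICC(2,1) penalizes absolute disagreement, not just inconsistency in rank order.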
SETTING
A 15-bed surgical ICU at a large academic hospital.
SUBJECTS
One hundred thirty-eight instances of teamwork were observed. Specifically, we observed 88 multidisciplinary rounds, 25 nurse-to-nurse handoffs, and 25 simulated code exercises.
INTERVENTIONS
No intervention was conducted for this study.
MEASUREMENTS AND MAIN RESULTS
Rater reliability for each overall task ranged from good to excellent (intraclass correlation coefficient, 0.64-0.81), although for specific competencies reliability was fair in seven cases and poor in one. Findings from the generalizability studies indicated that the marker system dependably distinguished among teamwork competencies, providing evidence of construct validity.
CONCLUSIONS
Teamwork in critical care is complex, which complicates the judgment of behaviors. The marker system showed strong potential for differentiating competencies, but the findings also suggest that more context-specific guidance may be needed to improve rater reliability.