We would like to thank Dr. Edler for her interest in our editorial.1 In an innovative study that measured teamwork during simulated obstetric emergencies, Morgan et al. found that the team scores were not reliable.2 The investigators attributed the limited reliability to the use of the Human Factors Rating Scale. Dr. Edler correctly indicates that the limited reliability could be the result of several factors and that a generalizability analysis partitioning the variance associated with scoring method, raters, and scenario would help to clarify the findings. Regardless of the cause of the limited reliability, Morgan et al.'s main conclusion would still stand.

“Simulation education” or, perhaps more appropriately, simulation-based assessment, has stimulated interest in interpreting participant or team scores. The reliability and validity of scores obtained during a simulation (or any performance assessment) depend on the authenticity of the event and environment, how effectively the scoring instrument captures the skills of interest, and whether the raters consistently observe and record the performance. Morgan et al.'s experimental design offers an important first step in evaluating teamwork in that it includes high-fidelity simulated obstetric emergencies that can be managed by a multidisciplinary team. We hope that, rather than discouraging educators, the challenges of developing assessment instruments that yield valid and reliable team scores will inspire additional investigations. Such research would help to establish the role of simulation-based assessment as a method to investigate and evaluate team performance.

*Washington University School of Medicine, St. Louis, Missouri. murrayd@wustl.edu

1. Murray D, Enarson C: Communication and teamwork: Essential to learn but difficult to measure. Anesthesiology 2007; 106:895–6
2. Morgan PJ, Pittini R, Regehr G, Marrs C, Haley MF: Evaluating teamwork in a simulated obstetric environment. Anesthesiology 2007; 106:907–15