DURING the past 15 yr, there has been increasing interest in using newer technologies to enhance the education and training of medical personnel. In this issue of Anesthesiology, Morgan et al.1 and Birnbach et al.2 give two examples of how to evaluate the impact of new approaches to teaching.

Morgan et al. conducted a careful study comparing faculty-led sessions using either an “exemplar” video of proper practice or demonstrations with a high-fidelity patient simulator to teach final-year Canadian medical students some key points of the medical responses to specific intraoperative events. Although the students preferred the simulation sessions, there was no difference between the groups in the students' ability to respond to the events when tested in the simulator. Although the authors are circumspect in their claims, others might view this as proof that simulators are not worth their nontrivial price. At most, however, such a view would be justified only for the very restricted set of questions asked in this study.

To assess the value of an educational or training modality, we must consider various factors, including the target population, the goal, and the overall costs of the intervention. Typical target populations for simulation activities have ranged from outreach programs involving children and lay adults, to preclinical and clinical students in medicine, nursing, and allied health professions, to highly experienced physicians and nurses. Not all purposes and goals are equally applicable to all target populations. We should also distinguish between education and training. The goal of education is typically to teach or improve conceptual understanding or to introduce individuals to skills. The goal of training is to implement or improve specific skills and behaviors needed to accomplish a real-world job. Medicine especially has emphasized education, leaving training largely to an apprenticeship model.

Morgan et al. chose a target population of final-year medical students. The goal of the intervention must be inferred to be education about intraoperative critical events rather than training, because no one would expect these students to be able to perform this task adequately in real patient care. This is reflected in the substantial simplification of the task in the demonstrations and the test relative to that encountered in real clinical situations. Given such a restricted goal and task, it may not be surprising that the students who had intensive faculty teaching using either the exemplar videos or the exemplar simulations improved their understanding and abilities versus their baseline but did not differ in their performance depending on the modality used to teach them.

Further, was this really a comparison between a $100 intervention (the video) and a $150,000 intervention (the simulator)? Making a good training video can itself be expensive and may require a simulator to create the clinical scenarios. Moreover, in assessing the costs of the simulator intervention, one cannot attribute to any single activity the capital costs of the simulator and the accompanying space and infrastructure. Nearly all simulation centers have a diverse set of users from different departments, for different target populations, and for different purposes. The fixed expenses of the center must be amortized over a number of years and across all the users. Although substantially greater than the cost of a video player, the simulation center usage costs attributable specifically to the intervention studied by Morgan et al. might not be that high. This is especially true because the major cost of simulation training is faculty time. Morgan et al. acknowledged that for both video- and simulation-based teaching, a roughly equal—and substantial—amount of faculty time was required.
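
To make the amortization point concrete, consider a rough back-of-the-envelope sketch. Only the $150,000 capital cost comes from the comparison above; the service life and session volume are purely illustrative assumptions, not figures from Morgan et al. or from any particular center:

\[
\text{capital cost per session} \approx \frac{\$150{,}000}{5~\text{yr} \times 300~\text{sessions/yr}} = \$100~\text{per session}
\]

Under such illustrative assumptions, the amortized capital share of a single session is of the same order as the $100 video, and it is small next to the recurring cost of faculty time, which both modalities required in roughly equal measure.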

Within the limits that they posed for themselves, Morgan et al. demonstrated that it is possible to conduct a careful test of different educational modalities. The conundrum is that measuring the results of the intervention requires the ability to assess performance. Although this proved feasible for the simplified tasks expected of students, it will be more difficult to do so for more complex tasks and behaviors expected of experienced personnel. However, in some cases, studying the details of even a restricted task has important ramifications for safe and efficient patient care. Birnbach et al. showed that most aspects of epidural catheter placement can be assessed robustly by reviewing videotapes of clinicians performing the task. Although it is a relatively simple act, successful placement of an epidural catheter is a crucial task in clinical domains such as obstetric analgesia and anesthesia. Therefore, the results of the study by Birnbach et al., though limited in scope, may be more relevant to clinical practice than those of Morgan et al.—although whether improvement of catheter placement skill through video analysis has practical outcome benefit for patients remains to be seen.

Another key lesson from both of these studies is that video can be a powerful teaching tool, especially when it is applied (as by Birnbach et al.) to specific performances of those under instruction. Nonetheless, in both studies, the use of videotapes was coupled with expert teaching by motivated faculty. This reinforces a common belief that modern technologies provide tools that can enhance, but not substitute for, skilled and dedicated teachers.

In my view, by comparison with other industries such as aviation, the greatest promise for simulators and other training modalities to improve patient safety lies not in the education of early learners on simplified tasks, but rather in the initial and recurrent training of advanced trainees and experienced practitioners on much more complex tasks. For these challenging settings, tests that are easy to score unambiguously will rarely replicate or capture the demands of real patient care, whereas tests that do address the complexity of real care will suffer from greater subjectivity. It will therefore be more difficult to assess the impact of novel training for complex real-world job skills. Any such studies are likely to be expensive to conduct because of the high interindividual variability, the need for multiple experienced raters, and the imprecision of existing or proposed metrics of complex performance. Nonetheless, in a recent assessment of the evidence base for a variety of patient safety interventions sponsored by the Agency for Healthcare Research and Quality, the authors concluded the following regarding patient simulators3:

Definitive experiments to improve our understanding of their effects on training will allow them to be used more intelligently to improve provider performance, reduce errors and ultimately, promote patient safety. Although such experiments will be difficult and costly, they may be justified to determine how this technology can best be applied.

For simulation and for video analysis, the work of Morgan et al. and Birnbach et al. is only the beginning of a very long road.

1. Morgan PJ, Cleave-Hogg D, McIlroy J, Devitt JM: Simulation technology: A comparison of experiential and visual learning for undergraduate medical students. Anesthesiology 2002; 96:10–6
2. Birnbach CJ, Santos AC, Bourlier RA, Meadows WE, Datta S, Stein DJ, Kuroda MM, Thys DM: The effectiveness of video technology as an adjunct to teach and evaluate epidural anesthesia performance skills. Anesthesiology 2002; 96:5–9
3. Jha AK, Duncan BW, Bates DW: Simulator-based training and patient safety, Making Health Care Safer: A Critical Analysis of Patient Safety Practices (Evidence Report/Technology Assessment No. 43, AHRQ Publication 01-E058). Edited by Shojania KG, Duncan BW, McDonald KM, Wachter RM. Rockville, Agency for Healthcare Research and Quality, 2001, pp 510–7