To the Editor:
We read with interest the study by Ferrero et al.1 in the recent edition of Anesthesiology. There has been considerable interest in the utility of echocardiography simulators to assist and accelerate the acquisition of echocardiography knowledge and skills. Indeed, we reviewed the global reach and value of simulation within echocardiography training, with particular reference to anesthesia and critical care, in the previous edition of Anesthesiology.2 Although we applaud the authors’ attempt to extend our understanding and further evaluate this technology, we would like to raise a number of issues with this study.
First, the authors state, “Bose et al. published the only investigation assessing the utility of mannequin-based transesophageal echocardiography teaching.”3 Bose et al. did indeed study this subject, but the authors have overlooked our study, which randomized United Kingdom residents to didactic teaching methods or a Web-based transesophageal echocardiography learning resource and then assessed the benefit of supplemental simulator teaching in both groups.4 Although our endpoint was acquisition of knowledge rather than technical performance, we showed an advantage of simulation-based transesophageal echocardiography teaching in both groups.
Second, we would question the study design, in which didactic teaching methods were used to train the control group of participants in image acquisition. Our echocardiography training programs have demonstrated to us that image acquisition is a technical skill that can only be taught successfully by practical demonstration, whether by simulation or by real-time operating room demonstration. The more interesting question is whether structured echocardiography simulation teaching may be superior in some respects to real-time operating room instruction in the acquisition of transesophageal echocardiography knowledge and skills.
Third, we would question the validity of the scoring system used to grade the images. The authors assessed the reliability of the scoring system by independent expert evaluation of the images from the two study groups plus those of the faculty anesthesiologist, and then inferred that the absence of interrater discrepancy validated the scoring system. We would emphasize that reliability is not equivalent to validity. We would further hypothesize that no preexisting scoring system for image quality exists precisely because of the difficulty of adequately validating such a system. Further work is required to establish the validity of this quality metric, and we remain unconvinced of its ability to distinguish accurately between the groups. Developing and validating such a scoring system would be an important step in assessing the performance and teaching of echocardiography.
Competing Interests
The authors declare no competing interests.