Background:

This study describes anesthesiologists’ practice improvements undertaken during the first 3 yr of simulation activities for the Maintenance of Certification in Anesthesiology Program.

Methods:

A stratified sampling of 3 yr (2010–2012) of participants’ practice improvement plans was coded, categorized, and analyzed.

Results:

Using the sampling scheme, 634 of 1,275 participants in Maintenance of Certification in Anesthesiology Program simulation courses were evaluated from the following practice settings: 41% (262) academic, 54% (339) community, and 5% (33) military/other. A total of 1,982 plans were analyzed for completion, target audience, and topic. On follow-up, 79% (1,558) were fully completed, 16% (310) were partially completed, and 6% (114) were not completed within the 90-day reporting period. Plans targeted the reporting individual (89% of plans) and others (78% of plans): anesthesia providers (50%), non-anesthesia physicians (16%), and non-anesthesia non-physician providers (26%). From the plans, 2,453 improvements were categorized as work environment or systems changes (33% of improvements), teamwork skills (30%), personal knowledge (29%), handoff (4%), procedural skills (3%), or patient communication (1%). The median word count was 63 (interquartile range, 30 to 126) for each participant’s combined plans and 147 (interquartile range, 52 to 257) for improvement follow-up reports.

Conclusions:

After making a commitment to change, 94% of anesthesiologists participating in a Maintenance of Certification in Anesthesiology Program simulation course successfully implemented some or all of their planned practice improvements. This compares favorably with rates reported in other studies. Simulation experiences stimulate active learning and motivate personal and collaborative practice improvement changes. Further evaluation will assess the impact of the improvements and help refine the program.

In a review of 634 Maintenance of Certification in Anesthesiology Program simulation course participants, 94% successfully implemented some or all of their planned practice improvements, which focused mostly on work environment or systems changes, teamwork skills, and personal knowledge.

What We Already Know about This Topic
  • The Maintenance of Certification in Anesthesiology Program (MOCA) requires assessment and improvement of practice performance

  • Simulation courses established under the aegis of the American Society of Anesthesiologists include follow-up evaluation of whether these courses affected subsequent practice

What This Article Tells Us That Is New
  • In a review of 634 MOCA simulation course participants, 94% successfully implemented some or all of their planned practice improvements, which focused mostly on work environment or systems changes, teamwork skills, and personal knowledge

In 2010, simulation programs endorsed by the American Society of Anesthesiologists (ASA) began offering a high-fidelity, mannequin-based simulation experience to satisfy the American Board of Anesthesiology's requirement for the Maintenance of Certification in Anesthesiology Program (MOCA®) simulation course, part of the Practice Performance Assessment and Improvement (PPAI) element.1*

The American Board of Medical Specialties requires member boards to include a PPAI element in the Program for Maintenance of Certification (ABMS MOC®). For other disciplines, for example, primary care specialties, PPAI may be accomplished by improvements derived from chart review. A more realistic contextual framework was deemed necessary for anesthesiologists because they care for patients in a dynamic, stressful environment requiring quick decisions.2  Simulation, which has been shown to create a realistic environment similar to a patient care setting,3,4  was chosen as a required PPAI activity to stimulate practice improvement. It allows anesthesiologists to experience and reflect on their performance, particularly during crises and high-acuity situations, when patient care is most critical. MOCA case scenarios, often drawn from real closed claims cases, include life-threatening, sometimes rare, conditions requiring urgent patient management and teamwork skills for optimal outcome.

The addition of a simulation course supports the American Board of Medical Specialties’ desired evolution of MOC from a recertification program to a lifelong learning and self-assessment program.5–7  This use of simulation, specifically in the PPAI element of MOCA, deliberately incorporates an experiential strategy to activate the learners to reflect on ways to improve their practice, especially concerning management of challenging situations. Most MOCA simulation courses confer continuing medical education (CME) credit consistent with changes in CME that emphasize practice improvement.8–10 

Maintenance of Certification in Anesthesiology Program simulation courses are offered at simulation programs endorsed by the ASA.1  To qualify for MOCA credit, participants in these programs are required to propose practice improvement changes prompted by course participation. They are also required to complete a follow-up report within 90 days of the course on their actions and the status of meeting the improvement goals they had set. In this report, we present the results of 3 yr of course data with respect to the practice improvements proposed by participating anesthesiologists and their success in implementing those plans. Specifically, our primary aim is to assess the frequency and type of improvements that were completed and any factors that influence completion. Secondary aims are to assess the number of improvements that were deemed measurable and to analyze the frequency and type of non-anesthesiologist healthcare providers targeted by the anesthesiologists’ improvement plans.

Study Design

We conducted a retrospective mixed-methods analysis of practice improvement plans proposed and implemented after simulation course participation.

Data Resources/Materials

With University of California, Los Angeles Institutional Review Board (Los Angeles, California) approval, we reviewed de-identified self-reported data collected from participants after a daylong MOCA simulation course taken at an ASA-endorsed simulation center. Data from the first 3 yr of MOCA simulation courses, from January 2010 to December 2012, were compiled and analyzed. The practice improvement reports from all participants enrolled in MOCA simulation courses during this time period were eligible for inclusion in this study. The course logistics and program development are described in detail in a previous article.1  We have provided the postcourse data collection forms in appendices 1 and 2. In brief, course participants were asked to list at least three practice improvement plans that they would implement after the course. ASA contacted each person via e-mail on a monthly basis, asking them to provide a follow-up report on whether each plan was completed (yes/no/partially). Participants were also asked to write about the details of implementation success or obstacles encountered. Such a follow-up report was required within 90 days of the course for a participant to receive credit for this part of their MOCA requirements.

Sampling

Of 1,275 course participants eligible for analysis, we randomly sampled 50% for review and coding. To ensure comprehensive representation, sampling was stratified as follows (an illustrative code sketch of this scheme appears after the list):

  1. By simulation center: The number of sampled participants per center was proportional to the total number of course participants per center, except at centers that enrolled fewer than five participants. In those centers, all participants were included in the sample.

  2. By years of practice: Equal numbers of participants were selected from among those above and below the median practice duration (7 yr for the entire pool of available participants).

  3. By practice setting: The number of sampled participants in academic, community, and military/other practice was proportional to the number from those settings in the total pool of eligible participants.
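For illustration, a minimal sketch of this stratification in Python (pandas) follows. The roster DataFrame and its column names (center, years_in_practice, setting) are hypothetical, and the published sampling was not necessarily implemented in code:

```python
import pandas as pd

def stratified_half_sample(roster: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Sample roughly 50% of participants, stratified as described above.

    `roster` is assumed to have hypothetical columns 'center',
    'years_in_practice', and 'setting' (academic/community/military-other).
    """
    # Centers that enrolled fewer than five participants are included in full.
    center_sizes = roster["center"].value_counts()
    small = roster[roster["center"].map(center_sizes) < 5]
    large = roster[roster["center"].map(center_sizes) >= 5]

    # Stratify the remaining participants above/below the overall median years
    # in practice, then sample 50% within each center x experience x setting
    # cell, keeping the sample proportional to center size and practice setting.
    median_years = roster["years_in_practice"].median()
    large = large.assign(senior=large["years_in_practice"] > median_years)

    sampled = large.groupby(["center", "senior", "setting"], group_keys=False).apply(
        lambda g: g.sample(frac=0.5, random_state=seed)
    )
    return pd.concat([small, sampled.drop(columns="senior")])
```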

Development of Coding Scheme

Using an iterative process, we developed a coding scheme for characterizing the practice improvement plans and follow-up reports. We used an analytic process of coding that is consistent with grounded theory and qualitative research.11–13  Each codeable unit of text (a phrase/statement that was determined by the investigators to convey a single distinct idea) was categorized according to our coding scheme. Items coded included categories and subcategories (topic themes and subtopics), target (whether improvements were directed toward the participants themselves or involved others), measurability (whether the plan was specific and sufficiently detailed to allow observable or quantifiable measurement of progress), and completion (whether the plan was implemented). Each participant submitted at least three plans for analysis. Many participants proposed two distinct improvements within a single plan; these were each coded and counted separately. Measurability and completion were determined per plan rather than per improvement.

For the first step, four study investigators (A.R.B., Y.M.H., R.H.S., and J.B.C.) reviewed and discussed the first 10 participants’ practice improvement plans together to get a general sense of emerging themes (fig. 1). We agreed to code plans as either measurable or not measurable and assigned the following targets: self, other anesthesia providers, other non-anesthesia physicians (e.g., surgeons), and other non-anesthesia non-physician personnel (e.g., pharmacists, operating room/intensive care unit/postanesthesia care unit nurses). To code themes from the narratives, the investigators independently reviewed 25 participants’ plans to develop keywords and to generate coding categories. These were discussed again as a group to calibrate interpretation of the written comments. Discrepancies were noted and resolved by consensus. We had few disagreements on determining the keyword coding. For entries that were vague or nonspecific, such as one- or two-word statements (e.g., “Communication” or “Intraosseous access”), we agreed to code them on a broad topic/category basis, rate them as “not measurable,” and assign the target as “self” (as opposed to colleagues).

Fig. 1. Flow diagram of coding process. An illustration of the iterative process of coding scheme development. IRR = interrater reliability.

To reach saturation of themes, two investigators (A.R.B. and Y.M.H.) independently reviewed an additional 200 plans to identify further topic categories for the coding scheme. Each separately generated a list of categories and subcategories and included examples to provide an operational definition for the coding scheme. After evaluation of 200 participants’ plans, we had reached saturation in terms of new coding categories, and the lists were consolidated. Authors R.H.S. and J.B.C. reviewed the consolidated categories with A.R.B. and Y.M.H. and made further refinements to clarify the wording of the coding categories and their definitions. All discrepancies were resolved by group discussion, acknowledging that, because responses were de-identified, we could not contact participants to clarify their responses. Finally, four authors (A.R.B., Y.M.H., R.H.S., and J.B.C.) coded the same 20 participants’ data with the consolidated, final coding scheme (appendix 3) to assess whether consensus could be reached and whether the keywords identified were comprehensive. We calculated interrater reliability for categories and measurability and determined a priori that a κ value of 0.75 would be considered acceptable interrater reliability to finalize the coding scheme among the three raters who analyzed the data. The interrater reliability for pairwise comparisons was 0.92 (average κ for category; range, 0.83 to 1.0) and 0.89 (average κ for measurability; range, 0.78 to 1.0).
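The pairwise κ computation can be expressed compactly; the sketch below uses scikit-learn's cohen_kappa_score with hypothetical rater column names (the software actually used for the κ calculation is not specified in the text):

```python
from itertools import combinations

import pandas as pd
from sklearn.metrics import cohen_kappa_score

def pairwise_kappa(codes: pd.DataFrame) -> dict:
    """Cohen's kappa for every pair of raters.

    `codes` holds one column of category labels per rater (hypothetical
    columns such as 'rater_A', 'rater_B', 'rater_C'), one row per coded plan.
    """
    return {
        (a, b): cohen_kappa_score(codes[a], codes[b])
        for a, b in combinations(codes.columns, 2)
    }

# Example: accept the scheme only if every pair meets the a priori 0.75 threshold.
# kappas = pairwise_kappa(df[["rater_A", "rater_B", "rater_C"]])
# assert all(k >= 0.75 for k in kappas.values())
```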

Statistical Analysis

We performed content analysis on all textual data. Three investigators (A.R.B., Y.M.H., and R.H.S.) each independently coded one third of the written narratives from the sample using the final coding scheme for categorization coding. We resolved questions through discussion and consensus among all three coders. Using an Excel (Microsoft, USA) spreadsheet function, we counted the words for the combined plans of each participant and for their implementation follow-up report. We computed frequency distributions of text length (histograms and the 25th, 50th, and 75th percentiles) as one indicator of the effort and thought put into the plans.
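The word-count tabulation is straightforward to reproduce. A brief sketch (pandas rather than Excel, with hypothetical column names plan_text and followup_text) returns the count distribution and the quartiles reported in the Results:

```python
import pandas as pd

def word_count_summary(df: pd.DataFrame, text_col: str) -> pd.Series:
    """Per-participant word counts with the 25th, 50th, and 75th percentiles."""
    counts = df[text_col].fillna("").str.split().str.len()
    return counts.describe(percentiles=[0.25, 0.5, 0.75])

# Hypothetical usage:
# word_count_summary(participants, "plan_text")      # combined improvement plans
# word_count_summary(participants, "followup_text")  # implementation follow-up reports
```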

Using JMP v11 (SAS Institute Inc., USA), SPSS v22 (IBM Corporation, USA), and Stata/IC v.13.1 (StataCorp LP, USA), we conducted a descriptive analysis to determine the frequencies of categories for the improvement plans. We examined how practice setting, measurability, years of experience, targets of plans, and simulation center affected completion using univariable and multivariable models. We recoded the three-category completion variable into a dichotomous variable, combining the categories of partially completed and completed into a single category, which was used as the outcome variable in a series of random-intercept logistic regressions. We used random-intercept models to account for the fact that participants had more than one plan and traditional (fixed-effects) models for variables that were participant, and not plan, specific (setting, experience, and center). We used the following predictor variables: setting, measurability, experience, target, and simulation center. Setting was coded as a three-level variable: academic, community, and military/other; measurability, as a binary variable (yes or no); and experience, as a continuous variable (number of years). Target was coded as a continuous variable consisting of the number of targeted groups among four categories: self, anesthesia providers (e.g., other anesthesiologists, certified nurse anesthetists), non-anesthesia physicians (e.g., surgeons), and non-anesthesia non-physician providers (e.g., nurses, pharmacists, allied health professionals). Simulation centers with fewer than 10 sampled participants were collapsed into a single group, which was treated as one center during analysis.
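To make the modeling step concrete, the sketch below (Python/statsmodels) shows the dichotomization and one univariable model. It approximates the published random-intercept logistic regression with an ordinary logistic regression and participant-clustered standard errors, so it illustrates the workflow rather than reproducing the reported estimates; the file and column names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical plan-level data: one row per plan, with columns 'completed'
# (yes/partial/no), 'measurable' (0/1), 'experience' (years), 'n_targets'
# (number of targeted groups, 0-4), and 'participant_id'.
plans = pd.read_csv("plans.csv")

# Dichotomize the outcome: completed or partially completed -> 1, not completed -> 0.
plans["completed_any"] = (plans["completed"] != "no").astype(int)

# Univariable model for measurability, with standard errors clustered by
# participant to acknowledge that each participant contributed several plans
# (a simplification of the random-intercept model used in the published analysis).
model = smf.logit("completed_any ~ measurable", data=plans).fit(
    cov_type="cluster", cov_kwds={"groups": plans["participant_id"]}
)
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% CIs on the odds-ratio scale
```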

Predictors with unadjusted P values less than 0.20 in the univariable analysis were retained in the multivariable analysis. A Bonferroni correction was made to the P value of the variables in the multivariable model to account for multiple comparisons (adjusted P value = unadjusted P value times the number of variables in the multivariable model). An α value of 0.05 was used for all tests of statistical significance.
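The adjustment itself is a single multiplication capped at 1; for example, with three variables retained in the multivariable model, an unadjusted P value of 0.012 becomes 0.036:

```python
def bonferroni_adjust(p_unadjusted: float, n_variables: int) -> float:
    """Adjusted P = unadjusted P x number of variables in the model, capped at 1."""
    return min(1.0, p_unadjusted * n_variables)

# bonferroni_adjust(0.012, 3) -> 0.036
```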

Results

Between January 2010 and December 2012, 1,275 individuals enrolled in 303 MOCA simulation courses at 29 different simulation centers located in 20 states. Fourteen participants were excluded because their follow-up reports were not available at the time of analysis. Stratified sampling identified 634 course participants (50% of 1,261) for coding and analysis (fig. 2).

Fig. 2. Study sample. Descriptive statistics and study sample demographics are outlined based on the sampling schema. Each participant was required to submit at least three practice improvement plans and some included two distinct improvements within each plan. ASA = American Society of Anesthesiologists; MOCA = Maintenance of Certification in Anesthesiology Program.

Of the 634 participants analyzed, approximately 41% (262) were from academic settings, 54% (339) were from community practice settings, and 5% (33) were from military or other settings. These proportions were representative of the entire pool of course participants. Participants who characterized their work environment as a combination of two or more of these areas were placed in the category labeled “other.” The participants’ median number of years in practice was seven, with an interquartile range of 4 to 9 yr (range, 1 to 43 yr). The number sampled from each of the 29 simulation centers varied (range, 2 to 59 participants) because centers differed in their total number of MOCA participants during the study period. All but 10 of the 634 were enrolled in MOCA.

Based on the sampling scheme, a total of 1,982 plans (554 participants had three plans and 80 had four plans) were analyzed for text length, categorization, target, completion, and measurability. On analysis, these plans contained a total of 2,453 improvements (some plans contained two improvements).

Practice Improvement Plans Submitted and Implemented

Each of the 2,453 improvements was assigned to one of seven categories (table 1). Of the seven categories, improvements were most often categorized as related to the work environment (“system,” 33% of 2,453 improvements), teamwork skills (30%), or personal knowledge (29%). Other categories were substantially less frequent: handoff (4%), procedural skills (3%), patient communication (1%), and other (<0.2%). Examples of notable implemented improvements are shown in table 2.14

Table 1. Most Prevalent Practice Improvement Plans within Each Category

Table 2. Examples of Implemented Plans

Completion, Measurability, Targets, and Word Count of Plans

Table 3 summarizes completion, measurability, and targets of the practice improvement plans. Of 1,982 plans rated for measurability, 74% (n = 1,467) were considered specific enough to qualify as measurable. Based on follow-up reports, 79% (n = 1,558) of plans were fully completed, 16% (n = 310) were partially completed, and 6% (n = 114) were not completed within the 90-day reporting period. The target of the plan included self in 89% of plans and others in 78% of plans. Of those that involved others, the following were included: other anesthesiologists/anesthesia providers (50% of plans), non-anesthesia physicians (16% of plans), and non-anesthesia non-physician providers (e.g., nurses, 26% of plans). The median word count of plans was 63 (interquartile range, 30 to 126; minimum, 5; maximum, 444) and of the follow-up reports was 147 (interquartile range, 52 to 257; minimum, 1; maximum 842) (fig. 3, A and B).

Table 3. Completion, Measurability, and Targets of Practice Improvement Plans
Fig. 3. (A) Practice improvement plan total text length. The distribution of word count for the sum of each participant’s three to four practice improvement plans submitted after a Maintenance of Certification in Anesthesiology Program (MOCA) simulation course. Median: 63 (interquartile range [IQR], 30–126; minimum, 5; maximum, 444). (B) Practice improvement plan follow-up total text length. The distribution of word count as a proxy for degree of effort for the follow-up response. Median: 147 (IQR, 52–257; minimum, 1; maximum, 842).

Predictors That Affect Completion

In the univariable models, center, practice setting (academic, community, and military/other), and years of experience did not predict completion (table 4, individual center data not shown). If plans were deemed measurable, the odds of completion increased by a factor of 1.95 (odds ratio, 1.95; 95% CI, 1.01 to 3.76; P = 0.047). When participants targeted other anesthesia providers or interprofessional colleagues (e.g., surgeons, nurses, or pharmacists), they were more likely to complete their plans (table 4). Participants who targeted only themselves, and no one else, were less likely to complete their plans (odds ratio, 0.29; 95% CI, 0.11 to 0.78; P = 0.015) than participants who targeted any other group.

Table 4. Univariate Analysis with Completion as Outcome

Predictors with P value less than 0.20 in the univariable analysis (measurability, experience, and target) were retained in the multivariable analysis; setting and center were dropped. In the multivariable model, measurability no longer predicted the likelihood of completion (odds ratio, 1.57; 95% CI, 0.79 to 3.08; P = 0.591; table 5). Experience was not statistically significant in the multivariable model (P = 0.276). However, after controlling for the other predictors, target remained significant: participants who targeted more groups of interprofessional colleagues in their plans had increased odds of completing their plans (odds ratio, 1.29; 95% CI, 1.06 to 1.57; P = 0.036).

Table 5. Multivariable Analysis with Completion as Outcome

Discussion

In this analysis of practice improvement plans, 94% of participants reported implementing at least one improvement within 3 months after an MOCA simulation course, and 79% implemented three or more practice improvements within this period (table 3). Practitioners identified personal, staff, surgeon, and/or institution-related challenges to completing their plans, similar to other reports.15  However, these barriers did not prevent a high rate of success in implementation. Importantly, plans that targeted interprofessional team members were more likely to be completed. Perhaps these participants feel more accountable when colleagues from other disciplines are targeted, or perhaps those who attempt interprofessional interventions are self-selected as more motivated to conduct practice improvement.

To date, other specialties have not incorporated mannequin-based simulation in MOC, so we compare our results with those of other CME programs. However, the impact of CME programs has been questioned, leading to widespread calls for CME reform.16,17  Many CME participants do not change their practice as a result of the activities. Lecture-based programs are particularly unlikely to change performance.18

Purkis19  described a “commitment to change” (CTC) strategy for CME, suggesting that adults are more likely to implement what they identify as relevant. In research studies examining CTC compliance, CME participants had a 47 to 87% rate of implementation of their stated goals.7,20–22  Participants were self-selected, and the proportion sampled varied considerably, so these results may not be generalizable.

In another highly selected sample, a study enrolled 144 primary care clinicians (physicians, nurse practitioners, and physician assistants) from among 800 participants. In this randomized controlled trial, 32% of lecture attendees in the control group (who were not asked to make a CTC) reported changes 7 days later, compared with 91% in the CTC group.23  Among 352 Canadian family physicians who attended daylong interactive courses at 21 centers, 57% provided follow-up data that contained 935 commitment statements. Of these, 67% were completely implemented 6 months after the course.24 

Because such data are self-reported, the actual implementation of plans after these activities is unknown. However, in a study that evaluated prescribing practices after an educational intervention, self-reported change was a valid means of assessing CME outcomes.25  Implementation after MOCA simulation was higher than after other CME activities.1,9  As suggested in a review of the impact of formal CME, interactive CME appears more likely than didactic sessions to effect change in practice.18 

Change in Practice after Maintenance of Certification Activities

To our knowledge, we are the first to combine a mandatory CTC approach with the use of full-scale high-fidelity simulation as an educational intervention for MOC and CME. The Australian and New Zealand College of Anaesthetists developed the Effective Management of Anaesthetic Crises course in 2002, which uses simulation to provide Maintenance of Professional Standards credit. Their first-year results from a 3- to 12-month postcourse survey indicated that 55% of respondents (trainees and practicing physicians) reported making changes to their practice.26  A subsequent survey of 216 participants yielded 98 responses, with 86% of respondents making changes to their clinical practice as a result of the course.27  The American Board of Family Medicine uses self-assessment modules (screen-based clinical simulations) and practice performance modules, both of which are required as part of its 7-yr MOC cycle. An analysis of the first year of self-assessment module implementation revealed that 55% of participants agreed that they would make changes as a result of completing the online modules, but no follow-up of implementation was done.28  A subsequent retrospective study showed greater improvements in patient care in physicians who completed self-assessment module/practice performance modules compared with those who did not.29

Because the American Board of Anesthesiology required a follow-up report about attempted implementation of the improvement plans after the MOCA simulation course, the high completion rates may not be surprising. Simulation was specifically chosen for its ability to actively engage participants, facilitate reflection, and create a sense of urgency likely to foster change. The combined features of engagement and reflective learning from simulation, coupled with the goal setting and follow-up attributes of a CTC approach, resulted in high compliance and implementation.

The fact that over three-quarters of the participants identified colleagues and other team members as targets of their plans is noteworthy, particularly because it was not a specific requirement of practice improvement. MOCA simulation courses generally emphasize nontechnical skills, teamwork, and systems improvement in addition to clinical management. This suggests that the courses are creating real motivation to improve, because involving others in improvement plans adds considerable effort to the process. Analysis of the word count of the plans and follow-up responses suggests that the majority of participants made more than a cursory effort in their written reports. The median follow-up text was over twice as long as the plans, and more than 25% of follow-up reports were over 250 words.

The improvement plans addressed topics encountered during the simulation courses, with the plans divided nearly evenly among three categories: system issues, teamwork/communication skills (nontechnical skills), and personal (technical) knowledge, which implies that the courses stimulate reflection in all of these areas.

Relevance of MOCA Simulation to Patient Safety

Despite past efforts in anesthesiology to improve safety, such as the Closed Claims Project and ASA Standards, Guidelines and Statements, patients continue to be harmed by practitioners’ failure to act in accordance with specific management guidelines and/or by a variety of individual or team errors.30–37  The simulation courses directly address events of high acuity and consequence, which are associated with mortality and morbidity.31,33  In particular, the MOCA simulation program requires prioritization of teamwork in crisis situations involving cardiovascular compromise and hypoxemia.

The field of anesthesiology has decades of experience with simulation. The combination of compelling scenarios encountered in a realistic clinical environment, coupled with postevent debriefing, creates an educational stimulus to trigger practitioner reflection and improve patient safety.38–41 

Lessons Learned

From the experience of reviewing participants’ practice improvement plans, we learned a number of lessons that will help refine the MOCA Simulation Program. From the outset, we intentionally gave participants considerable latitude in creating their plans to foster creativity and to avoid unduly directing the process. However, it was apparent that not all participants had prior experience in creating improvement plans. Seeing plans that lacked specificity, we learned that it may be necessary to practice plan development during the course. Some sites use the mnemonic SMART (specific, measurable, attainable, realistic, and timely) to guide participants in generating high-quality plans.42  The fact that the courses stimulated a high degree of practice improvement effort among 634 representative participants spread across 29 U.S. simulation centers suggests that the individual programs collectively achieved the mission of facilitating practice change.

Although our analysis suggests that simulation-based training is stimulating reflection and practice improvement, our analytical approach has limitations. Self-reporting is the standard for Part 4 MOC in other specialties and for follow-up of CME activities, but it is impossible to know how accurate the reports are. The word counts of the plans and follow-up reports suggest that many participants made more than a cursory effort, and many described implementing compelling plans that exceeded the scope of their initial descriptions.

Because all of the participants in this analysis enrolled in a simulation course, there is no control group, which limits the ability to attribute causality to the intervention. Whether educational methodologies other than simulation would have achieved similar results, with less effort (or expense), is unclear and beyond the scope of this article.

Allowing participants to describe their plans using open text fields permitted richness in detail that might not be possible using a more structured reporting form (checklist or dropdown menu). In some instances, participants packed substantial meaning into a short report (table 2, item 1). In other instances, the lack of structured follow-up resulted in extreme brevity (e.g., “yes”) and vagueness. Had we used structured forms to categorize the individuals affected by the plans (the plans’ “targets”), it might have biased the results.

In addition, our coders could have misinterpreted plan categories, subcategories, targets, completion, and measurability during their assessments. Nonetheless, the high agreement between coders suggests reliable interpretations.

Importantly, the impact of the improvements on actual patient care or patient outcomes is unknown, which is typical for most educational programs. It would be very difficult or even impossible to determine whether practice improvement plans triggered by the MOCA simulation requirement produce significant change in patient outcome after uncommon events.

In conclusion, we have characterized some aspects of the perceived impact of a new program that uses simulation as a stimulus for practice improvement. The impact of the program, as determined by the fraction of participants who reported having implemented practice change, appears to be substantial. Future work will help delineate the barriers and enablers to plan implementation. Ultimately, if possible, the impact of resulting changes on patient outcomes should be assessed.

The authors thank the American Society of Anesthesiologists (ASA) (Schaumburg, Illinois) and American Board of Anesthesiology (Raleigh, North Carolina) for providing de-identified data, support, and feedback; the ASA Simulation Editorial Board for their review and input for the article; and Christine Wells, Ph.D., at the UCLA Statistical Consulting Group, Los Angeles, California, for her help with data analysis.

This study was supported by the authors’ institutions: Department of Anesthesiology, David Geffen School of Medicine at the University of California, Los Angeles (UCLA); UCLA Simulation Center, Los Angeles, California; Department of Anesthesiology, Cooper Medical School of Rowan University, Camden, New Jersey; Center for Medical Simulation, Boston, Massachusetts; Harvard Medical School and Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, and the Stanford University School of Medicine, Palo Alto, California.

The views and opinions contained in this article are those of the authors and do not necessarily reflect the views of the American Society of Anesthesiologists (ASA), the American Board of Anesthesiology, or the Department of Veterans Affairs. Drs. Steadman, Gaba, and Huang receive compensation for serving as Maintenance of Certification in Anesthesiology Program (MOCA) course instructors. Drs. Steadman, Burden, Gaba, and Cooper are members of the ASA Editorial Board for Simulation-based Training (Park Ridge, Illinois), and Drs. Gaba and Cooper are members of the Executive Committee of the Anesthesia Patient Safety Foundation (Indianapolis, Indiana).

* The American Board of Anesthesiology Maintenance of Certification in Anesthesiology Program (MOCA) Web site. Available at: http://www.theaba.org/Home/anesthesiology_maintenance. Accessed April 6, 2014.

American Society of Anesthesiologists Standards, Guidelines, Statements, and Other Documents Web site. Available at: https://www.asahq.org/For-Members/Standards-Guidelines-and-Statements.aspx. Accessed April 6, 2014.

References

1. McIvor W, Burden A, Weinger MB, Steadman R: Simulation for maintenance of certification in anesthesiology: The first two years. J Contin Educ Health Prof 2012; 32:236–42

2. Gaba DM, Fish KJ, Howard SK: Crisis Management in Anesthesiology, 1st edition. New York, Churchill Livingstone, 1993, pp 5–29

3. Weller J, Henderson R, Webster CS, Shulruf B, Torrie J, Davies E, Henderson K, Frampton C, Merry AF: Building the evidence on simulation validity: Comparison of anesthesiologists’ communication patterns in real and simulated cases. Anesthesiology 2014; 120:142–8

4. Weinger MB, Burden AR, Steadman RH, Gaba DM: This is not a test!: Misconceptions surrounding the maintenance of certification in anesthesiology simulation course. Anesthesiology 2014; 121:655–9

5. Drazen JM, Weinstein DF: Considering recertification. N Engl J Med 2010; 362:946–7

6. Iglehart JK, Baron RB: Ensuring physicians’ competence—Is maintenance of certification the answer? N Engl J Med 2012; 367:2543–9

7. Shershneva MB, Wang MF, Lindeman GC, Savoy JN, Olson CA: Commitment to practice change: An evaluator’s perspective. Eval Health Prof 2010; 33:256–75

8. Wakefield J, Herbert CP, Maclure M, Dormuth C, Wright JM, Legare J, Brett-MacLean P, Premi J: Commitment to change statements can predict actual change in practice. J Contin Educ Health Prof 2003; 23:81–93

9. Wakefield JG: Commitment to change: Exploring its role in changing physician behavior through continuing education. J Contin Educ Health Prof 2004; 24:197–204

10. Marinopoulos SS, Dorman T, Ratanawongsa N, Wilson LM, Ashar BH, Magaziner JL, Miller RG, Thomas PA, Prokopowicz GP, Qayyum R, Bass EB: Effectiveness of continuing medical education. Evid Rep Technol Assess (Full Rep) 2007; 149:1–69

11. Corbin J, Strauss A: Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, 3rd edition. Thousand Oaks, Sage, 2007, pp 159–94

12. Patton MQ: Enhancing the quality and credibility of qualitative analysis. Health Serv Res 1999; 34(5 Pt 2):1189–208

13. Bradley EH, Curry LA, Devers KJ: Qualitative data analysis for health services research: Developing taxonomy, themes, and theory. Health Serv Res 2007; 42:1758–72

14. Chung F, Subramanyam R, Liao P, Sasaki E, Shapiro C, Sun Y: High STOP-Bang score indicates a high probability of obstructive sleep apnoea. Br J Anaesth 2012; 108:768–75

15. Bhandari M, Montori V, Devereaux PJ, Dosanjh S, Sprague S, Guyatt GH: Challenges to the practice of evidence-based medicine during residents’ surgical training: A qualitative study using grounded theory. Acad Med 2003; 78:1183–90

16. Spivey BE: Continuing medical education in the United States: Why it needs reform and how we propose to accomplish it. J Contin Educ Health Prof 2005; 25:134–43

17. Mansouri M, Lockyer J: A meta-analysis of continuing medical education effectiveness. J Contin Educ Health Prof 2007; 27:6–15

18. Davis D, O’Brien MA, Freemantle N, Wolf FM, Mazmanian P, Taylor-Vaisey A: Impact of formal continuing medical education: Do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA 1999; 282:867–74

19. Purkis IE: Commitment for changes: An instrument for evaluating CME courses. J Med Educ 1982; 57:61–3

20. Dolcourt JL: Commitment to change: A strategy for promoting educational effectiveness. J Contin Educ Health Prof 2000; 20:156–63

21. Mazmanian PE, Daffron SR, Johnson RE, Davis DA, Kantrowitz MP: Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change. Acad Med 1998; 73:882–6

22. Martin KO, Mazmanian PE: Anticipated and encountered barriers to change in CME: Tools for planning and evaluation. J Contin Educ Health Prof 1991; 11:301–18

23. Domino FJ, Chopra S, Seligman M, Sullivan K, Quirk ME: The impact on medical practice of commitments to change following CME lectures: A randomized controlled trial. Med Teach 2011; 33:e495–500

24. Lockyer JM, Fidler H, Ward R, Basson RJ, Elliott S, Toews J: Commitment to change statements: A way of understanding how participants use information and skills taught in an educational session. J Contin Educ Health Prof 2001; 21:82–9

25. Curry L, Purkis IE: Validity of self-reports of behavior changes by participants after a CME course. J Med Educ 1986; 61:579–84

26. Weller J, Wilson L, Robinson B: Survey of change in practice following simulation-based training in crisis management. Anaesthesia 2003; 58:471–3

27. Weller J, Morris R, Watterson L, Garden A, Flanagan B, Robinson B, Thompson W, Jones R: Effective management of anaesthetic crises: Development and evaluation of a college-accredited simulation-based course for anaesthesia education in Australia and New Zealand. Simul Healthc 2006; 1:209–14

28. Hagen MD, Ivins DJ, Puffer JC, Rinaldo J, Roussel GH, Sumner W, Xu J: Maintenance of certification for family physicians (MC-FP) self assessment modules (SAMs): The first year. J Am Board Fam Med 2006; 19:398–403

29. Galliher JM, Manning BK, Petterson SM, Dickinson LM, Brandt EC, Staton EW, Phillips RL, Pace WD: Do professional development programs for Maintenance of Certification (MOC) affect quality of patient care? J Am Board Fam Med 2014; 27:19–25

30. Cheney FW: The American Society of Anesthesiologists Closed Claims Project: What have we learned, how has it affected practice, and how will it affect practice in the future? Anesthesiology 1999; 91:552–6

31. Cheney FW: The American Society of Anesthesiologists closed claims project: The beginning. Anesthesiology 2010; 113:957–60

32. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA: The quality of health care delivered to adults in the United States. N Engl J Med 2003; 348:2635–45

33. Porter ME, Teisberg EO: How physicians can change the future of health care. JAMA 2007; 297:1103–11

34. Forrest JB, Cahalan MK, Rehder K, Goldsmith CH, Levy WJ, Strunin L, Bota W, Boucek CD, Cucchiara RF, Dhamee S, Domino KB, Dudman AJ, Hamilton WK, Kampine J, Kotrly KJ, Maltby JR, Mazloomdoost M, MacKenzie RA, Melnick BM, Motoyama E, Muir JJ, Munshi C: Multicenter study of general anesthesia. II. Results. Anesthesiology 1990; 72:262–8

35. Lee LA, Domino KB: The Closed Claims Project. Has it influenced anesthetic practice and outcome? Anesthesiol Clin North America 2002; 20:485–501

36. Oken A, Rasmussen MD, Slagle JM, Jain S, Kuykendall T, Ordonez N, Weinger MB: A facilitated survey instrument captures significantly more anesthesia events than does traditional voluntary event reporting. Anesthesiology 2007; 107:909–22

37. Studdert DM, Mello MM, Gawande AA, Gandhi TK, Kachalia A, Yoon C, Puopolo AL, Brennan TA: Claims, errors, and compensation payments in medical malpractice litigation. N Engl J Med 2006; 354:2024–33

38. Fanning RM, Gaba DM: The role of debriefing in simulation-based learning. Simul Healthc 2007; 2:115–25

39. Gaba DM: The pharmaceutical analogy for simulation: A policy perspective. Simul Healthc 2010; 5:5–7

40. Murray DJ, Boulet JR, Avidan M, Kras JF, Henrichs B, Woodhouse J, Evers AS: Performance of residents and anesthesiologists in a simulation-based skill assessment. Anesthesiology 2007; 107:705–13

41. Rudolph JW, Simon R, Raemer DB, Eppich WJ: Debriefing as formative assessment: Closing performance gaps in medical education. Acad Emerg Med 2008; 15:1010–6

42. Reed VA, Schifferdecker KE, Turco MG: Motivating learning and assessing outcomes in continuing medical education using a personal learning plan. J Contin Educ Health Prof 2012; 32:287–94

Appendix 1. Post Simulation Commitment to Change Form

Appendix 2. Post Simulation Course 30-, 60-, and 90-day Follow-up E-mail

Appendix 3. Coding Scheme: Categorization of Themes (numbered items 1–8) and Subthemes (lettered items with Roman numeral subclassifications) with Representative Examples