Background

The authors used continuous quality improvement (CQI) program data to investigate trends in quality of anesthesia care associated with changing staffing patterns in a university hospital.

Methods

The monthly proportion of cases performed by solo attending anesthesiologists versus attending-resident teams or attending-certified registered nurse anesthetist (CRNA) teams was used to measure staffing patterns. Anesthesia team productivity was measured as mean monthly surgical anesthesia hours billed per attending anesthesiologist per clinical day. Supervisory ratios (concurrency) were measured as mean monthly number of cases supervised concurrently by attending anesthesiologists. Quality of anesthesia care was measured as monthly rates of critical incidents, patient injury, escalation of care, operational inefficiencies, and human errors per 10,000 cases. Trends in quality at increasing productivity and concurrency levels from 1992 to 1997 were analyzed by the one-sided Jonckheere-Terpstra test.

Results

Productivity was positively correlated with concurrency (r = 0.838; P<0.001). Productivity levels ranged from 10 to 17 h per anesthesiologist per clinical day. Concurrency ranged from 1.6 to 2.2 cases per attending anesthesiologist. At higher productivity and concurrency levels, solo anesthesiologists conducted a smaller percentage of cases, and the proportion of cases with CRNA team members increased. The patient injury rate decreased with increased productivity levels (P = 0.002), whereas the critical incident rate increased (P = 0.001). Changes in operational inefficiency, escalation of care, and human error rates were not statistically significant (P = 0.072, 0.345, 0.320, respectively).

Conclusions

Most aspects of quality of anesthesia care were apparently not affected by changing anesthesia team composition or increased productivity and concurrency. Only team performance was measured; the role of individuals (attending anesthesiologist, resident, or CRNA) in quality of care was not directly measured. Further research is needed to explain lower patient injury rates and increases in critical incident reporting at higher concurrency and productivity levels.

Healthcare cost-containment efforts in the last two decades have resulted in changes in the structure of the healthcare delivery system. Managed care, especially in capitated systems, requires a strong balance of productivity and quality if a service provider is to be chosen by healthcare purchasers and remain competitive in the current healthcare market. In addition to market forces, teaching hospitals have faced the difficulty of maintaining or increasing service in the face of decreased compensation for the costs of teaching.

In our academic medical center, the anesthesia service has responded to these changes by increasing productivity. We improved operating room (OR) turnover time and monitored block time usage versus scheduling to optimize prime time OR use. We optimized attending anesthesiologist supervision of residents and certified registered nurse anesthetists (CRNAs) to more consistently meet our staffing ratio goal of one attending anesthesiologist supervising one or two residents concurrently (depending on resident experience). We also scheduled two additional anesthesiologists to work later each day, expanding the use of the OR. This made it possible to complete more cases in the same facilities. However, any increase in productivity is accompanied by the risk of a decrease in production quality. We used data from our continuous quality improvement (CQI) program to investigate trends in the quality of anesthesia care during this period of increasing anesthesia service productivity.

Methods

This investigation was approved by the University of Washington Human Subjects Committee. The study was conducted at the University of Washington Medical Center, Seattle, Washington, a tertiary care teaching hospital with 350 beds, 17 ORs, and 10 other anesthetizing locations (labor and delivery, cystoscopy, diagnostic radiology, radiation oncology, gastrointestinal endoscopy, intensive care unit, pain service, and dental clinics). Anesthesia services are provided in a team model, with attending anesthesiologists supervising residents in all 3 yr of clinical anesthesia training, fellows, and CRNAs.

Data Collection 

Data from 1992 to 1997 were collected and analyzed retrospectively. We derived productivity and concurrency data from the Department of Anesthesiology clinical activity database.1 This database consists of data from the anesthesia record and serves as the basis for departmental third-party billing. It is also the administrative mechanism for recording the clinical activity of anesthesiologists, CRNAs, and residents. Data regarding quality of anesthesia care were abstracted from the Department of Anesthesiology CQI Program database, which contains peer-reviewed analysis of adverse events and outcomes reported to the departmental quality improvement program.2

The anesthesia CQI program is based on context-sensitive self-reported adverse events and adverse outcomes (fig. 1). The current study is restricted to events and outcomes related to anesthesia management (rather than surgery, nursing, or patient factors), as determined by peer review and discussion. An adverse event or outcome is classified as related to anesthesia management if any of the following conditions apply:

Fig. 1. Overview of the continuous quality improvement (CQI) program. Any adverse event or adverse outcome during anesthesia care generates a CQI report. Peer review of events and outcomes (left arrows) and the role of human error (right path) is conducted for every CQI report. The symbols (triangles, squares, circle) in each of the five quality-indicator circles are keyed to the corresponding categories on figures 3–5. For definitions of quality indicators, see text.

  1. some aspect of anesthesia management caused or contributed significantly to the event or outcome;

  2. anesthesia management prevented, mitigated, or ameliorated an adverse outcome, regardless of whether the event was related to anesthesia management; or

  3. the event or outcome involved an aspect of care for which the anesthesia provider customarily is considered to be ultimately responsible.

Only the methods relevant to cases related to anesthesia management will be described.

Adverse events and adverse outcomes reported to the CQI program are context-sensitive rather than standardized indicators. Following the model developed by Cooper et al.,3 anesthesia providers are asked to report a CQI event if, within the context of the particular case, an event was unexpected or undesirable and had the potential to cause an adverse outcome. All adverse outcomes should be reported. Providers indicate whether an adverse event or outcome occurred by marking Y (for yes) or N (for no) in the CQI box on the anesthesia record. Any member of the perioperative patient care team (i.e., attending anesthesiologist, resident, CRNA, OR nurse, postanesthesia care unit nurse) may report an event or outcome to the CQI coordinator, who investigates all reports within 1–5 days. The CQI coordinator obtains a synopsis of events from providers and gathers any relevant records. These CQI case reports undergo standardized, nonpunitive peer review on a weekly basis. This peer review establishes the categorization of adverse events and outcomes and determines whether anesthesia management contributed. Peer review of anesthesia management issues requires the participation of three or more attending anesthesiologists who were not members of the care team for the case being reviewed. Residents and CRNAs are encouraged but not required to participate in the weekly peer-review meeting. The size and composition of the weekly peer-review group vary from week to week. The peer review includes analysis of human factors in anesthesia management using the Cooper et al.4 typology of human error (technical, vigilance, or judgmental). Cases involving judgmental error undergo a second peer review. The peer review itself consists of discussion by all attendees of the meeting, with decisions made by consensus. If consensus is not reached, a vote is taken and a simple majority rules.

Adverse events are problems encountered during the process of patient care that caused or had the potential to cause an adverse outcome. Adverse events that do not result in adverse outcomes are classified as critical incidents (fig. 1). An example of a critical incident would be failure to resume mechanical ventilation for a brief period after cardiopulmonary bypass, with recognition and correction of the problem before injury to the patient. Any adverse event may have a single adverse outcome, multiple adverse outcomes, or no adverse outcome (i.e., a critical incident).

Adverse outcomes are defined as (1) patient injury, (2) escalation of care, or (3) operational inefficiencies (fig. 1). Patient injuries include death, brain damage, nerve damage, soft tissue injury, myocardial infarction, and any other significant change in the patient's physical status. Transient changes in vital signs (e.g., transient ischemia or hypotension that resolved without treatment) are excluded from the definition of patient injury. Escalation of care is defined as any unanticipated increase in patient care beyond the initial anesthesia plan, such as conversion from regional to general anesthesia; use of additional drugs, tests, or procedures; prolonged intubation; reintubation; specialty consultation; or unscheduled intensive care unit admission. Operational inefficiencies are problems with the flow of the anesthesia care delivery system and include delay or cancellation of surgery caused by such anesthesia problems as difficult block placement, incomplete preanesthesia evaluation, or equipment malfunction.

For each adverse event and outcome, the peer-review process is used to determine whether human error was a contributing factor. Every case related to anesthesia management undergoes error analysis regardless of outcome, whether it involved a critical incident, patient injury, escalation of care, or operational inefficiency. Human error is defined as a deviation from ideal performance5 in the areas of judgment, technique, or vigilance.

Statistical Analysis 

Although all members of the anesthesia care team may contribute to productivity, only the attending anesthesiologist, as leader of the care team, supervises multiple cases concurrently; residents and CRNAs do not practice without attending anesthesiologist supervision. For this reason, and because it corresponded with the available data, we used attending anesthesiologist productivity as the measure of anesthesia team productivity. This approach also corresponds to our CQI data for quality of care, which are based on the anesthesia care team as the unit of analysis.

We calculated productivity on a monthly basis by dividing the total attending anesthesia hours (time units) by the sum of clinical days worked by all attending anesthesiologists. This measure captures increased caseload and increased intensity of attending anesthesiologist workload that results from decreased slack time (room turnover, empty rooms, late starts) and increased supervisory responsibilities. To differentiate the effect of increased supervisory responsibilities from other factors affecting productivity, we measured changes in concurrency of case supervision on a monthly basis. Concurrency is measured as the number of cases an anesthesiologist supervises during overlapping time periods. If an attending anesthesiologist supervises two cases during overlapping time periods, the concurrency is two. If an anesthesiologist supervises three cases in whole or in part at the same time, concurrency is three. Any portion of a case that is supervised during supervision of any part of another case is counted as concurrent. If an anesthesiologist is supervising only one case or working alone (which may happen occasionally), the concurrency is one. The mean monthly concurrency was calculated as a weighted mean of cases at each level of concurrency.
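
As a concrete illustration of these two workload measures, the following sketch computes monthly productivity and weighted mean concurrency from case-level records. The column names, table layout, and toy values are assumptions made for illustration; they are not the schema of our clinical activity database.

import pandas as pd

# Hypothetical case-level records; column names and values are illustrative only,
# not the actual clinical activity database schema.
cases = pd.DataFrame({
    "month":        ["1997-01", "1997-01", "1997-01", "1997-02", "1997-02"],
    "billed_hours": [3.5, 2.0, 4.0, 5.0, 2.5],  # surgical anesthesia time billed per case
    "concurrency":  [2, 2, 1, 3, 2],            # cases supervised during overlapping periods
})

# Hypothetical staffing log: one row per attending anesthesiologist per clinical day worked.
staffing = pd.DataFrame({
    "month":        ["1997-01", "1997-01", "1997-02"],
    "attending_id": ["A1", "A2", "A1"],
})

# Productivity: total attending hours billed in the month divided by the sum of
# clinical days worked by all attending anesthesiologists that month.
monthly_hours = cases.groupby("month")["billed_hours"].sum()
monthly_clinical_days = staffing.groupby("month").size()
productivity = monthly_hours / monthly_clinical_days

# Mean monthly concurrency: a weighted mean of cases at each concurrency level,
# equivalent to the per-case mean of the concurrency values.
mean_concurrency = cases.groupby("month")["concurrency"].mean()

print(productivity.round(1))
print(mean_concurrency.round(1))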

As a descriptive measure of changes in staffing, we analyzed the personnel mix of anesthesia care teams using data from the clinical activity database. Each anesthesia record in this database identifies one attending anesthesiologist and, if applicable, one resident or CRNA who assisted in the case. To calculate the mean proportion of cases with resident or CRNA team members each month, the number of cases with a resident identified on the record and the number of cases with a CRNA identified on the record were each divided by the total number of cases. Records with no second provider identified were counted as solo attending anesthesiologist cases. Level of CRNA experience was calculated as years since graduation (in whole numbers).
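
A minimal sketch of this team-mix calculation follows, again with invented field names and data: resident and CRNA case counts are each divided by the monthly case total, and records with no second provider are treated as solo attending cases.

import pandas as pd

# Hypothetical records: second_provider is "resident", "crna", or None (solo attending case).
records = pd.DataFrame({
    "month":           ["1997-01"] * 4 + ["1997-02"] * 3,
    "second_provider": ["resident", "crna", None, "resident", "crna", None, "crna"],
})

totals = records.groupby("month").size()
mix = (
    records.assign(second_provider=records["second_provider"].fillna("solo"))
           .groupby(["month", "second_provider"]).size()
           .unstack(fill_value=0)
           .div(totals, axis=0)   # proportion of that month's cases in each category
)
print(mix)  # columns: crna, resident, solo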

Quality of anesthesia care was measured using five indicators: monthly CQI report rates of critical incidents (adverse events with no adverse outcome), patient injury, escalation of care, operational inefficiencies, and human error. Rates were calculated by dividing the number of indicators each month by the total number of anesthetics administered that month. Rates are expressed per 10,000 cases. Any statistically significant increase in critical incident, patient injury, escalation of care, operational inefficiency, or human error report rates at higher productivity or concurrency levels could indicate a decrease in some aspect of the quality of anesthesia care.

The internal validity of our productivity measure was assessed by calculating the Pearson correlation coefficient between productivity and concurrency for the 72 months of the study, using a one-tailed test of the hypothesis that the two measures are positively correlated. Productivity levels were constructed by rounding monthly productivity to the nearest full hour (integer). Concurrency levels were constructed by rounding to the nearest tenth. Monthly critical incident, patient injury, escalation of care, operational inefficiency, and human error rates at different levels of productivity and concurrency were analyzed by the one-sided Jonckheere-Terpstra test to determine whether any of these quality indicators increased with higher levels of productivity and concurrency (suggesting a decrease in quality of care).6 Any results suggesting lower quality-indicator rates at higher productivity or concurrency levels (i.e., better quality with higher productivity) were analyzed with a two-tailed Jonckheere-Terpstra test. The probability of type I error (alpha) was calculated by the asymptotic method for the Pearson correlation and estimated by the Monte Carlo method (10,000 permutations, 99% confidence intervals) for the Jonckheere-Terpstra tests. A P value ≤ 0.05 was considered statistically significant.
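
Because the Jonckheere-Terpstra test is not available in every standard statistics library, the sketch below implements the statistic directly and estimates a one-sided (increasing-trend) P value by Monte Carlo permutation, loosely mirroring the analysis described above. The monthly rates, their grouping by productivity level, and the correlation inputs are invented for illustration and are not study data.

import numpy as np
from scipy.stats import pearsonr

def jt_statistic(groups):
    # Jonckheere-Terpstra statistic for groups ordered by increasing level:
    # over every pair of groups (i < j), count the observation pairs in which the
    # value from the higher-level group exceeds the value from the lower-level
    # group (ties count 1/2).
    jt = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            a = np.asarray(groups[i], dtype=float)[:, None]
            b = np.asarray(groups[j], dtype=float)[None, :]
            jt += (b > a).sum() + 0.5 * (b == a).sum()
    return jt

def jt_monte_carlo_p(groups, n_perm=10_000, seed=0):
    # One-sided Monte Carlo P value for an increasing trend across ordered groups.
    rng = np.random.default_rng(seed)
    split_points = np.cumsum([len(g) for g in groups])[:-1]
    pooled = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    observed = jt_statistic(groups)
    hits = 0
    for _ in range(n_perm):
        permuted = np.split(rng.permutation(pooled), split_points)
        hits += jt_statistic(permuted) >= observed
    return (hits + 1) / (n_perm + 1)  # add-one correction for a valid permutation P value

# Invented example: monthly critical incident rates (per 10,000 cases) grouped by
# ascending productivity level (10-13 h per attending per clinical day).
rates_by_level = [
    [35, 41, 28],       # level 10
    [44, 52, 60, 39],   # level 11
    [58, 63, 71],       # level 12
    [70, 85, 92, 66],   # level 13
]
print("one-sided P =", jt_monte_carlo_p(rates_by_level))

# Internal validity check analogous to the productivity-concurrency correlation
# (two-sided P shown; the study used a one-tailed test of a positive correlation).
productivity = [10.5, 11.2, 11.9, 12.4, 13.1, 13.8]
concurrency = [1.6, 1.7, 1.8, 1.9, 2.0, 2.2]
r, p = pearsonr(productivity, concurrency)
print(f"r = {r:.3f}, two-sided P = {p:.3g}")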

Results

The total number of anesthetics administered annually increased by 12% between 1992 and 1997 (table 1). The distribution of the American Society of Anesthesiologists physical status of patients remained stable during the 6 yr studied (table 1). During this same period, attending anesthesiologist productivity measured by hours billed per attending anesthesiologist per clinical day increased from 11.2 (±0.55) in 1992 to 15.1 (±1.0) in 1997 (P < 0.001; fig. 2). Productivity levels ranged from 10 h per anesthesiologist per clinical day to 17 h per anesthesiologist per clinical day (table 2). Productivity was positively correlated with concurrency of case supervision (r = 0.838; P < 0.001; fig. 2). Mean monthly concurrency ranged from 1.6 to 2.2. Mean monthly concurrency was rounded to the nearest tenth to create seven levels for analysis.

Table 1. Caseload and ASA Physical Status

Fig. 2. The mean monthly productivity (hours billed per attending anesthesiologist per clinical day) is indicated by the shaded area and left axis. The mean number of cases supervised concurrently by attending anesthesiologists is represented by the broken black line and right axis. A solid line at the level corresponding to two cases supervised concurrently (read from right axis) is provided for reference. All months are illustrated but not labeled. Concurrency was strongly correlated with productivity (r = 0.838, P < 0.001).

Table 2. Productivity and Concurrency Levels and CRNA Experience

The composition of anesthesia care teams changed at different levels of productivity and concurrency (figs. 3 and 4). At higher productivity and concurrency levels, a smaller percentage of cases were done by solo attending anesthesiologists, and the proportion of cases with CRNA team members increased. The experience level of the CRNAs on anesthesia teams was lower at higher levels of productivity (table 2). At the lowest productivity levels, 11–18% of CRNAs had ≤ 3 yrs’ prior experience. At the highest productivity levels, 29–31% of CRNAs had ≤ 3 yrs’ experience (table 2).

Fig. 3. Mean proportion of cases by solo anesthesiologists (broken line), attending–resident teams (shaded area), and attending–certified registered nurse anesthetist teams (solid line) at different concurrency levels.

Fig. 4. Mean proportion of cases by solo anesthesiologists (broken line), attending–resident teams (shaded area), and attending–certified registered nurse anesthetist teams (solid line) at different productivity levels.

Anesthesia care quality indicators at different productivity levels are illustrated in Figures 5–7. The patient injury rate decreased from 134 per 10,000 cases to 38 per 10,000 cases (P = 0.002) while the critical incident rate increased from 36 per 10,000 cases to 92 per 10,000 cases (P = 0.001) between the lowest and highest levels of productivity (fig. 5). The rates of operational inefficiencies (mean 73 per 10,000 cases), escalation of care (mean 289 per 10,000 cases), and human errors (mean 47 per 10,000 cases) did not exhibit any statistically significant relation with levels of productivity (P = 0.074 and 0.345, fig. 6; and P = 0.320, fig. 7, respectively).

Fig. 5. Mean monthly rates of patient injury (filled triangles, pointed down) and critical incidents (open triangles, pointed up) at each productivity level. The 95% confidence intervals of the means are displayed as error bars. The injury rate decreased (P = 0.002) and the critical incident rate increased (P = 0.001) at higher productivity levels.

Fig. 6. Mean monthly rates of escalation of care (open squares) and operational inefficiencies (filled squares) at each productivity level. The 95% confidence intervals of the means are displayed as error bars. These quality indicators did not differ significantly as productivity increased.

Fig. 7. Mean monthly rates of human error (filled circles) at each productivity level. The 95% confidence intervals of the means are displayed as error bars. The trend toward lower human error rates at higher productivity levels was not statistically significant (P = 0.320).

Anesthesia care quality indicators at different concurrency levels followed patterns similar to those observed at different productivity levels. The patient injury rate decreased (P = 0.001) and the critical incident rate increased (P = 0.002) at higher concurrency levels. Changes in the escalation of care and human error rates were not statistically significant (P = 0.392 and 0.069, respectively). The rate of operational inefficiencies decreased at higher concurrency levels (P = 0.019).

Discussion

Over a 6-yr period of changing staffing patterns and increasing anesthesia productivity in our academic medical center, most indicators of the quality of anesthesia care did not appear to decrease. On the contrary, rates of patient injury were lower at higher productivity and concurrency levels, suggesting an overall improvement in patient safety. Rates of human error were relatively constant. Operational inefficiency and escalation of care rates did not increase at higher productivity levels, and operational inefficiencies actually decreased at higher concurrency levels, suggesting that the increased workload of the attending anesthesia staff did not have concomitant hidden monetary costs for the patient, anesthesia service, or hospital. These results reflect a sufficient level of attending anesthesia staff and support personnel for the caseload and case intensity of our anesthesia service.7

The rate of critical incidents did increase at higher levels of productivity. Critical incidents, representing adverse events in the absence of an adverse outcome, can be considered an anesthesia patient-safety early-warning system. Because adverse outcomes are relatively rare in anesthesia, critical incidents have been used as indicators of patient safety based on the assumption that their causes are similar to the causes of events that do generate adverse outcomes.8,9 Each critical incident represents an opportunity for an adverse outcome. A reduction in critical incidents, according to this line of thought, may result in a corresponding reduction in adverse outcomes. The results of our study are inconsistent with this theory. We saw an increase in critical incidents concurrent with a decrease in patient injury. This does not mean that critical incidents are not useful in measuring quality of anesthesia care. It has long been recognized that quality includes components of structure, process, and outcome of care. Critical incidents may be a useful measure of the quality of the care-delivery process, even if this aspect of quality is not positively correlated with outcomes.

Although we did see an increase in critical incidents, we did not see an increase in human errors. This latter finding is consistent with Gaba et al.,10 who found no correlation between human errors stemming from fatigue and work hours or caseload. Experimental studies have shown associations between workload and human error that may be applicable to the performance of anesthesiologists.11 Although we found that our attending anesthesiologists were supervising more cases and staffing more anesthesia hours per clinical day, we did not directly measure fatigue, task complexity, or production pressure. We do not know whether our anesthesiologists were relatively fatigued or otherwise experiencing workload-related stress during the study period. We do not know whether case supervisory responsibilities (concurrency and team composition) are correlated with stress or fatigue. Because data were available only at the level of the anesthesia team but not the individual team members, we do not know whether increasing anesthesia team productivity influenced the workload of individual team members equally. We observed attending anesthesiologists spending more of their clinical hours performing direct patient care rather than nonclinical activities (e.g., coffee breaks), but this use of time was not measured directly. We can only infer that the increases in workload (concurrency and productivity) did not push anesthesia care teams beyond the limits of safe job performance. Whether quality could be sustained with further increases in workload is unknown.

It may seem puzzling that most of our measures of quality (patient injury, escalation of care, operational inefficiency, and human error) remained constant or improved, yet the rate of critical incidents rose with increasing productivity levels. It is possible that critical incidents do not reflect the same types of human performance problems as adverse outcomes and are not a valid indicator of anesthesia patient safety. Alternatively, workload pressures may have created more adverse events while anesthesia team members became more successful at rescuing these cases and preventing adverse outcomes. The successful avoidance of adverse outcomes in the face of increasing events could be a result of the Hawthorne effect, i.e., improvement as a byproduct of the act of studying a production process rather than actual improvement in the process itself. It seems unlikely to us, however, that a Hawthorne effect could be sustained for a period of 6 yr.

It may be that the changes in quality indicators were independent of changes in staffing and productivity. The relative reduction in adverse outcomes could be the result of quality-improvement efforts that occurred in response to CQI reports during this time period. A future investigation will explore this possibility. It is also possible that the increase in critical incidents reflects improved compliance with CQI reporting guidelines as the nonpunitive nature of the CQI program withstood the test of time and as the CQI process became integrated into the culture of the workplace. The CQI process may also have served as a vehicle for expressing concerns about changes in the work environment, with increased reporting of critical incidents reflecting dissatisfaction with increasing demands. Unfortunately, we can only speculate about these possible associations because we do not have the appropriate data to investigate these hypotheses.

Our measures of the quality of anesthesia care are selective, focusing primarily on patient safety and operational efficiency. These measures are consistent with the definition of quality put forth by the Institute of Medicine:

Quality of care is the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge … How care is provided should reflect appropriate use of the most current knowledge about scientific, clinical, technical, interpersonal, manual, cognitive, and organizational and management elements of health care.12

Because surgical anesthesia care facilitates the treatment of the patient's disease but is not expected to directly produce the desired health outcome of the surgical procedure, the avoidance of undesirable side effects or outcomes is a fundamental objective.13 The contribution of anesthesia care to any undesired health outcomes may serve as an indicator of quality problems. We used patient injury as an indicator of an undesirable health outcome of anesthesia. Escalation of care represents increased discomfort, expense, and possible risk of injury and is therefore also an undesirable outcome. The rate of human judgmental errors reflects the consistency of anesthesia care with current scientific and clinical knowledge and technique. Problems with the technical and manual elements of anesthesia care are captured by monitoring technical errors. Inappropriate application of our knowledge of human cognitive functions to the delivery of anesthesia care results in vigilance errors. Operational inefficiencies represent inappropriate organizational and management elements of anesthesia care delivery. We did not obtain quality measures, such as patient satisfaction, directly from our patients. This study, and our current CQI program, focus on quality measures obtained from perioperative healthcare providers.

This study is limited by drawing data from a single academic institution with retrospective data collection and reliance on voluntary self-reporting of adverse events and outcomes. Our anesthesia service is based on a team model of service delivery. We did not measure productivity of residents and CRNAs (who provide the bulk of the hands-on anesthesia care) independent of the team context. Our quality measures are also aggregated at the team level and cannot be validly interpreted to reflect the relative quality of care provided by solo anesthesiologists versus  attending–resident or attending–CRNA care teams.

Questions have been raised concerning the effectiveness of voluntary self-reporting of adverse events during anesthesia as a method of data collection.14–16 For example, Sanborn et al.14 detected changes in intraoperative vital signs at a rate of 796 per 10,000 by electronic monitoring but only 33 per 10,000 by self-report. However, these results are based on physiologic criteria that differ significantly from our context-sensitive event definitions, making direct comparison inappropriate. In contrast, other medical specialties have found that house staff voluntarily reported as many adverse events as were uncovered by chart review.17,18 Although different adverse events were captured by each system, the events reported by physicians were more likely to be preventable by quality-improvement efforts than the events detected by retrospective medical record review.19 Lagasse et al.20 reported an increase in the self-report rate from 65% to 88% in an anesthesia CQI program over a 1-yr period. Another recent anesthesia study found that among 734 adverse outcomes detected by anesthesiologist self-report, incident report, or chart review, 61% of the cases involving human error were self-reported by the attending or resident anesthesiologist who administered the anesthesia care in the case. Nearly all (92%) cases involving human error that resulted in disabling injury were voluntarily reported by the physician. Our program involves all care providers in event reporting, including attending physicians and house staff, CRNAs, and postanesthesia care unit and intensive care unit nurses. We believe that our CQI program is designed around the fundamental concepts that encourage good self-report compliance: nonpunitive yet thorough investigation of adverse events and outcomes, incorporating analysis of system problems rather than focusing on seeking blame.21 Our definitions of adverse events and outcomes are sensitive to the context of care, taking into account current knowledge of anesthesia and its complications and the particular patient and planned surgical procedure. This approach is consistent with recognized theory and practice in the study of human performance.22 Experimental investigation of the critical-incident technique has found the information collected to be valid and reliable.23

We have not identified comparable studies of the relation between productivity increases and quality of care. Comparison of rates of adverse events, adverse outcomes, and human error in our hospital with rates reported by other investigators is complicated by differences in reporting criteria, data sources, peer-review processes, and general program design. Our human error rate of 47 per 10,000 anesthetics is similar to the 45 per 10,000 rate of human error and equipment problems reported by Kumar et al.24 and lower than the 61 per 10,000 rate estimated from Short et al.25 Although our human error rate is higher than the 24 per 10,000 rate of Ender et al.,19 this could be explained by the limitation to errors associated with adverse outcomes in the latter study. The current study includes reports of human error associated with critical incidents as well as non-injury adverse outcomes. Short et al.25 reported a critical incident rate of 76 per 10,000 anesthetics, similar to our mean annual rate of 70 per 10,000. Galletly and Mushet26 observed a critical incident rate of 135 per 10,000, nearly twice the mean rate in our study, although their reported rate of patient injury or escalation of care was lower. This difference may be explained, at least in part, by the relatively small size of Galletly and Mushet's sample (3,546 anesthetics in a 3-month period). The results reported by Galletly and Mushet26 are within the range of monthly measurements observed in our practice. Chopra et al.27 reported 13 injuries or adverse events per 10,000 cases between 1978 and 1987, and Lagasse et al.20 observed a rate of 88.9 per 10,000 cases for 13 Joint Commission on Accreditation of Healthcare Organizations anesthesia clinical indicators in 1992. These results are not directly comparable to ours because they grouped adverse events and patient injuries, whereas we counted adverse events and injuries separately.

In conclusion, it appears that most aspects of the quality of anesthesia patient care in our teaching hospital did not decrease with changes in staffing patterns, supervision, and increasing productivity levels in our anesthesia service. Further investigation into factors associated with decreasing rates of patient injury during this time period, and investigation of the increase in critical incident reporting, may provide additional insight about relations among productivity, CQI activities, and patient safety in anesthesia care.

References

1. Bashein G, Barna CR: A comprehensive computer system for anesthetic record retrieval. Anesth Analg 1985; 64: 425–31
2. Posner KL, Kendall-Gallagher D, Wright IH, Glosten B, Gild WM, Cheney FW: Linking process and outcome of care in a continuous quality improvement program for anesthesia services. Am J Med Qual 1994; 9: 129–37
3. Cooper JB, Cullen DJ, Nemeskal R, Hoaglin DC, Gevirtz CC, Csete M, Venable C: Effects of information feedback and pulse oximetry on the incidence of anesthesia complications. Anesthesiology 1987; 67: 686–94
4. Cooper JB, Newbower RS, Kitz RJ: An analysis of major errors and equipment failures in anesthesia management: Considerations for prevention and detection. Anesthesiology 1984; 60: 34–42
5. Allnutt MF: Human factors in accidents. Br J Anaesth 1987; 59: 856–64
6. Mehta CR, Patel NR: SPSS Exact Tests 7.0 for Windows. Chicago, SPSS, Inc., 1996, pp 115–34
7. Robertson RH, Dowd SB, Hassan M: Skill-specific staffing intensity and the cost of hospital care. Health Care Manage Rev 1997; 22: 61–71
8. Cooper JB, Newbower RS, Long CD, McPeek B: Preventable anesthesia mishaps: A study of human factors. Anesthesiology 1978; 49: 399–406
9. Beckmann U, Runciman WB: The role of incident reporting in continuous quality improvement in the intensive care setting. Anaesth Intensive Care 1996; 24: 311–3
10. Gaba DM, Howard SK, Jump B: Production pressure in the work environment: California anesthesiologists' attitudes and experiences. Anesthesiology 1994; 81: 488–500
11. Weinger MB, Englund CE: Ergonomic and human factors affecting anesthetic vigilance and monitoring performance in the operating room environment. Anesthesiology 1990; 73: 995–1021
12. Lohr KN: Medicare: A Strategy for Quality Assurance, Vol I. Washington, DC, Institute of Medicine, National Academy Press, 1990, p 5
13. Caplan RA, Posner KL, Cheney FW: Effect of outcome on physician judgments of appropriateness of care. JAMA 1991; 265: 1957–60
14. Sanborn KV, Castro J, Kuroda M, Thys DM: Detection of intraoperative incidents by electronic scanning of computerized anesthesia records: Comparison with voluntary reporting. Anesthesiology 1996; 85: 977–87
15. Cooper JB: Is voluntary reporting of critical events effective for quality assurance? Anesthesiology 1996; 85: 961–4
16. Mackenzie CF, Jefferies NJ, Hunter WA, Bernhard WN, Xiao Y: Comparison of self-reporting of deficiencies in airway management with video analyses of actual performance. LOTAS Group: Level One Trauma Anesthesia Simulation. Hum Factors 1996; 38: 623–35
17. O'Neil AC, Petersen LA, Cook F, Bates DW, Lee TH, Brennan TA: Physician reporting compared with medical record review to identify adverse medical events. Ann Intern Med 1993; 119: 370–6
18. Welsh CH, Pedot R, Anderson RJ: Use of morning report to enhance adverse event detection. J Gen Intern Med 1996; 11: 454–60
19. Ender JI, Katz RI, Lagasse RS: Factors affecting voluntary physician reporting of adverse perioperative outcomes (abstract). Anesth Analg 1998; 86: S167
20. Lagasse RS, Steinberg ES, Katz RI, Saubermann AJ: Defining quality of perioperative care by statistical process control of adverse outcomes. Anesthesiology 1995; 82: 1181–8
21. Feldman SE, Roblin DW: Medical accidents in hospital care: Applications of failure analysis to hospital quality appraisal. Jt Comm J Qual Improv 1997; 23: 567–80
22. Runciman WB, Sellen A, Webb RK, Williamson JA, Currie M, Morgan C, Russell WJ: The Australian Incident Monitoring Study: Errors, incidents and accidents in anaesthetic practice. Anaesth Intensive Care 1993; 21: 506–19
23. Andersson BE, Nilsson SG: Studies in the reliability and validity of the critical incident technique. J Appl Psychol 1964; 48: 398–403
24. Kumar V, Barcellos WA, Mehta MP, Carter JG: An analysis of critical incidents in a teaching department for quality assurance: A survey of mishaps during anaesthesia. Anaesthesia 1988; 43: 879–83
25. Short TG, O'Regan A, Lew J, Oh TE: Critical incident reporting in an anaesthetic department quality assurance programme. Anaesthesia 1993; 48: 3–7
26. Galletly DC, Mushet NN: Anaesthesia system errors. Anaesth Intensive Care 1991; 19: 66–73
27. Chopra V, Bovill JG, Spierdijk J: Accidents, near accidents and complications during anaesthesia: A retrospective analysis of a 10-year period in a teaching hospital. Anaesthesia 1990; 45: 3–6