PHYSICIANS are of two minds about performance benchmarking. On the one hand, we have at least a curiosity about how our clinical quality compares with that of our peers and a strong desire to be the best. On the other hand, we have a deep distrust of the growing number of public report cards about physicians. The investment that the American Society of Anesthesiologists (ASA) is making in a national clinical database is motivated by a desire to address both considerations.

The launch of the Anesthesia Quality Institute was inspired by the investment in quality databases by anesthesiologists everywhere. Numerous institutions have measured rates of adverse events and outcomes at the local level for years.1 Several national anesthesia subspecialty societies have made substantial investments in multi-institutional or national clinical registries. Actions speak louder than words, and these efforts demonstrate the commitment of anesthesiologists to data-driven performance improvement.

The ASA’s lengthy experience in the development of practice parameters and guidelines has made clear the widespread gaps in our ability to rigorously define best practices. An astonishing number of recommended practices in these publications are based on expert consensus after our task forces have come up empty-handed in their quest for high-quality evidence. Our members, as well as healthcare quality watchdogs, rightfully demand more than opinion as the basis for guidelines against which practitioner performance is compared.

The potential for nationwide data aggregation to power clinical research and drive improved outcomes is brilliantly illustrated by the 20-yr experience of the Society of Thoracic Surgeons (STS) national database.2 The U.S. Congress has cited STS as the exemplar of quality-driven healthcare. It is this quality agenda that will be the focus of Medicare policy for the foreseeable future. The STS registry has produced scores of published outcomes studies‡ and supported numerous national and regional quality improvement initiatives.‡ These are credited with reducing surgical mortality. In addition, the STS registry has now become the authoritative source of publicly reported quality data in cardiothoracic surgery.

Many of the key research priorities discussed in the previously published editorials in this series can be advanced through the use of a national registry.3–5 A rotating set of variable data elements, added to a core data set and chosen to power research on specific clinical topics, can rapidly generate answers to important questions or suggest new hypotheses for future testing. For example, such a registry could facilitate identification of the characteristics of anesthetic management that predispose patients to chronic postsurgical pain or cognitive dysfunction across diverse practice settings. The Anesthesia Quality Institute directly addresses Lagasse’s admonition that development of “methods for ongoing national surveillance for anesthesia exposure and outcomes is imperative.”6

Although physicians may have mixed feelings about performance measurement, the public is not conflicted. The desire for transparency drives public reporting of performance data and a demand that this information be incorporated into the credentialing of physicians. Medical specialty boards, including the American Board of Anesthesiology, have responded by instituting performance assessment and improvement components in their maintenance of certification programs. As the Federation of State Medical Boards contemplates requirements for maintenance of licensure, benchmarking is likely to play a role. Where are the tools to satisfy these emerging requirements for performance measurement? This need demands a national solution, and meeting it is an essential role for our national specialty society.

When confronted by the prospect of physician report cards from Zagat§ or Angie’s List|| (or even from state health departments), we must consider whether to defer to these parties for the development of metrics and data or to take the initiative to assemble and own the data ourselves. The Anesthesia Quality Institute’s mission is to do the latter and to ensure that the metrics by which the public judges anesthesiologists are legitimate and meaningful. The early history of the STS database is closely linked to widespread shortcomings in the public quality reports from state health departments in New York and Pennsylvania. The criteria by which data are used for confidential physician-peer benchmarking may differ from the criteria for public reporting. Indeed, protecting our ability to provide confidential reports, safe from discoverability when appropriate, will be an objective of the Anesthesia Quality Institute.

The challenges in this endeavor are daunting. Creating participation opportunities for small, busy practices demands intelligent technology and logistics. Developing consistent and well-understood definitions of data elements and harmonizing numerous existing data sets will require cooperation. The use of rigorous risk adjustment to eliminate concerns that “my patients are sicker” and to justify confidence in the results of data aggregation is complex and difficult, as is separating out factors that reflect the performance of the surgeon or institution rather than that of the anesthesiologist alone. Establishing connections between an anesthesia registry and emerging registries in organizations such as our subspecialty societies, the American College of Surgeons, and the Association of periOperative Registered Nurses (formerly the Association of Operating Room Nurses) is demanding but critical to obtaining synergy and maximum value from all of these efforts. Doing all of this while guaranteeing the security of patient information, and at a cost that is sustainable for ASA and our members, will be a test of our commitment and resourcefulness.

The good news is that ASA volunteers have begun work on almost all of these aspects of building a successful national database. Furthermore, we can look to STS and others to demonstrate that it can be done. Their experience leaves no doubt that it will be a lengthy and costly process, but it will also advance patient care and the stature of anesthesiology as a medical specialty.

Initial certification and maintenance of certification in anesthesiology (MOCA) are becoming increasingly important goals for those practicing the specialty in the United States. A number of the health care reform bills currently pending in Congress include provisions that offer a reimbursement advantage to physicians who actively participate in maintenance of certification processes. Many large anesthesia groups and health care facilities, especially in metropolitan areas, now require board certification and participation in the MOCA program as conditions of employment or privileges.

A major requirement of the MOCA program is review and assessment of personal practices. Inherent in this requirement is the need for comparative data, allowing practitioners to compare their practice data and outcomes with those of others in their specialty. Anesthesiologists do not currently have national databases that provide these comparative data. This lack of information puts anesthesiologists at a disadvantage compared with many other physicians. The Anesthesia Quality Institute will help tremendously in addressing this issue.

Will the Institute have a strong database to help with MOCA overnight? Obviously not. But we are fortunate that others outside our specialty have substantial experience that can jump-start database development and help us avoid many of the pitfalls that accompany such large undertakings.

The American Board of Anesthesiology understands that the Anesthesia Quality Institute will take time to develop its database and provide practical and valid comparative data to anesthesiologists. It has worked closely with the ASA to ensure that anesthesiologists, both members of ASA and those who are not, will have opportunities to meet MOCA requirements in the interim. In the long term, however, comparative data appear necessary to document the quality of personal performance compared with that of peers. Therefore, a robust, comprehensive Anesthesia Quality Institute becomes important to all anesthesiologists involved in the MOCA program.

Like it or not, agree or disagree, the federal government and other payers, ostensibly to promote high-quality patient care, will demand that individual anesthesiologists or groups of anesthesiologists in the United States document their performance metrics and outcomes against benchmark data. They will use reimbursement to prod anesthesiologists to develop the benchmark data and to carefully assess and improve their practices. All physicians involved in maintenance of certification from member boards of the American Board of Medical Specialties, including the American Board of Anesthesiology, will require similar information. Thus, strong and validated anesthesia process and outcome databases, such as the Anesthesia Quality Institute database, will become a necessity for future practice. It is an imperative that, we hope, will improve patient safety and the quality of care that we provide.

*Anesthesia Department, Newton-Wellesley Hospital, Newton, Massachusetts. ahannenberg@partners.org. †Department of Anesthesiology, Mayo Clinic, Rochester, Minnesota.

1. Haller G, Stoelwinder J, Myles PS, McNeil J: Quality and safety indicators in anesthesia. Anesthesiology 2009; 110:1158–75
2. Edwards FH: Evolution of the Society of Thoracic Surgeons national cardiac surgery database. J Invasive Cardiol 1998; 10:485–8
3. Sessler DI: Long-term consequences of anesthetic management. Anesthesiology 2009; 111:1–4
4. Devereaux PJ: Can attenuation of the perioperative stress response prevent intermediate or long-term cardiovascular outcomes among patients undergoing noncardiac surgery? Anesthesiology 2009; 111:223–6
5. DeKock M: Expanding our horizons: Transition of acute postoperative pain to persistent pain and establishment of chronic postsurgical pain services. Anesthesiology 2009; 111:461–3
6. Lagasse RS: Innocent prattle. Anesthesiology 2009; 110:698–9