We thank our colleagues for their interest in the published work1 and their thoughtful comments. The study of clinical decision support, process of care measures, and clinical outcomes is a complex area that demands increased attention from the peer-reviewed literature, academic institutions, and industry.
We concur with Dr. Freundlich et al. that multicenter studies of clinical decision support are necessary to advance the field. To maximize generalizability and reproducibility, multicenter research is a natural step in the evolution of evidence-based practice change. Donabedian’s classic “structure-process-outcomes” framework clearly identifies that the context within which care is delivered must be incorporated into clinical and health services research.2 A decision support system that is associated with clinical impact in one health system or structure of care may demonstrate no value in another setting, or vice versa. Multicenter pragmatic clinical trials and Randomized, Embedded, Multifactorial, Adaptive Platform (also known as “REMAP”) trial designs are potential cost-effective avenues to study clinical decision support.3 With support from a multitude of national and international anesthesiology organizations, the Multicenter Perioperative Outcomes Group, on behalf of its more than 40 contributing member organizations, has invested in the Initiative for Multicenter Perioperative Clinical Trials (IMPACT).4 We look forward to working with centers and investigators from around the world who wish to use this infrastructure for pragmatic perioperative trials.
We also share Doctoral Candidate Patel’s interest in the complexity of data elements and factors involved in clinical research. We struggle, however, to identify a specific way forward offered by Patel et al., or any recommendation underlying their correspondence. The “curse of dimensionality” is not unique to the study of decision support systems, or even to healthcare research in general. Nor do we view pharmaceutical research, with its billions of dollars expended each year on unsuccessful drug targets, as an appropriate model for future work. In fact, much of current drug development involves hypothesis-generating phenome-wide or genome-wide association studies that provide hundreds of candidate targets for a given disease state; these potential targets are then evaluated at scale using computational, theoretical models. Although increased sample size and data breadth may address some issues with analyses of multifactorial clinical outcomes, data heterogeneity and quality issues often arise in concert. An increased sample size might have provided adequate power to detect a difference in infrequent outcomes such as stage 2 acute kidney injury.
In summary, we concur that new data types and approaches are needed to advance the science of decision support system evaluation. Although our current analysis demonstrated an impact on process of care, it could not evaluate the impact on many crucial clinical outcomes that are inconsistently recorded in the electronic health record, such as surgical site infection, pulmonary complications, delirium, or patient satisfaction. Finally, the positive impact on process of care measures and some resource outcomes should offer encouragement to clinicians and researchers seeking to advance care for their patients. Changing clinician behavior remains a goal almost as elusive as improving clinical outcomes.
Support provided by the Department of Anesthesiology, University of Michigan Medical School, Ann Arbor, Michigan.
Dr. Tremper is the founder of, and has an equity interest in, AlertWatch (Ann Arbor, Michigan), the company that developed the decision support system being evaluated.