“Every clinician who uses an [anesthesia information management system] is at ‘ground zero’ for artifact generation … and should recognize and consider ways to reduce the artifacts that might produce faulty and potentially harmful research and quality improvement conclusions.”

“Doveryai no proveryai”

— Trust, but verify. (Russian proverb)


A NONINVASIVE blood pressure cuff is applied proximal to a pulse oximetry probe, causing the oxygen saturation value measured by pulse oximetry to drop transiently with every cuff inflation cycle. Electrocautery causes the displayed heart rate to double with every buzzing zap. Anesthesia providers observe (and lament) vital sign artifacts such as these, whereas the anesthesia information management system (AIMS) dutifully stores the vital sign values that might later be used for research or quality improvement purposes. For these initiatives to be valid, they must be based on data of high quality: that is, data that are accurate, reliable, consistent, and meticulously collected and recorded.1 However, physiologic monitors routinely send artifactual data to AIMS.2 Failure to recognize or minimize these artifacts can degrade data quality and the secondary uses of the data and can lead to inaccurate conclusions ("garbage in, garbage out").3 Thus, verification of data quality is crucial and must precede data analysis.4,5 In this issue of Anesthesiology, Hoorweg et al.6 present a novel prospective observational cohort study of the artifactual physiologic data in their locally developed AIMS. They collected their data in a pediatric population undergoing procedures under general anesthesia at a tertiary pediatric hospital.
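The cuff-inflation artifact described above is amenable to simple context-aware filtering: if the AIMS also records when the noninvasive cuff is inflating, oxygen saturation samples falling inside those intervals can be masked. The sketch below is purely illustrative; the function name, timestamps, and data layout are hypothetical and do not describe any cited system.

```python
# Hypothetical sketch: mask SpO2 samples recorded while a noninvasive
# blood pressure (NIBP) cuff on the same limb is inflating, the
# artifact mechanism described in the text.

def mask_spo2_during_nibp(spo2_samples, inflation_windows):
    """spo2_samples: list of (time_s, spo2_pct) pairs.
    inflation_windows: list of (start_s, end_s) cuff-inflation intervals.
    Returns the samples with any reading inside a window set to None."""
    def inflating(t):
        return any(start <= t <= end for start, end in inflation_windows)
    return [(t, None if inflating(t) else s) for t, s in spo2_samples]

samples = [(0, 99), (10, 98), (20, 82), (30, 97)]  # transient dip at t = 20 s
windows = [(15, 25)]                               # cuff inflated 15-25 s
print(mask_spo2_during_nibp(samples, windows))
# [(0, 99), (10, 98), (20, None), (30, 97)]
```

The design choice here, nulling rather than deleting the sample, preserves the record's timeline so that downstream analyses can distinguish "artifact suppressed" from "never measured."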

The authors should be applauded for prospectively measuring the occurrence of data artifacts in their AIMS, because research on this phenomenon is scarce. This study was thoughtfully designed and well conducted and builds on some of the authors' previous work in an adult population.7 The study reported in this issue defined artifactual measurements through concurrent observation by a single investigator and the attending anesthesiologist. The major finding was that artifacts were present in a substantial number of recordings and that they were associated with deviating status of the measurement, phase of anesthesia, and anesthetic technique. For example, end-tidal carbon dioxide artifacts were more common during an inhalation induction than during an intravenous induction, and human errors were responsible for a remarkable number of artifacts. The study informs other pediatric hospitals of areas of potential concern for artifactual AIMS data, as well as methods by which those institutions can prospectively scrutinize their own generation of AIMS artifacts.

The authors state their limitations clearly. They used an institution-specific AIMS with one set of monitoring equipment that uses filtering methods that differ from those of other pediatric hospitals. For example, the heart rate was derived from values obtained by either electrocardiogram or pulse plethysmograph, depending on the quality of the signal from either monitor. Thus, the authors' conclusions are not fully generalizable to AIMS-based research at other pediatric institutions, because of the wide variety of practice styles, equipment, and AIMS implementations. The authors acknowledge that many AIMSs allow for editing of vital sign data and annotation of artifacts, whereas their AIMS does not. Manual editing of automatically recorded AIMS data is common at other institutions, and AIMSs with this feature might see a significantly different pattern of data artifacts.8 The authors relied on attending anesthesiologists confirming that an artifact had indeed occurred, which is subject to individual variation and the Hawthorne effect. Finally, it is important to recall that the selected definitions for artifacts can have a significant impact on the reported incidence of those artifacts during automated data collection.9

How can clinical researchers investigate and improve the quality of their data, other than having an independent observer watch every case and question every artifact? Individuals who use AIMS data for research or quality improvement purposes likely have neither the time nor the resources to conduct a prospective observational study similar to that of Hoorweg et al.6  Visualizing and exploring large AIMS data sets using visual analytical tools can facilitate the data validation process.3  Automated, real-time annotation of artifacts can be employed to reduce the need for manual annotation, and advanced analytics techniques such as machine learning algorithms and neural networks can be used on physiologic data patterns to detect clinical events that might have been manually documented erroneously.10,11  Although access to these techniques is not ubiquitous, all researchers and quality improvement leaders who use AIMS data should dedicate time and effort to both discerning what aspects of typical clinical workflow might increase artifact generation and conducting manual review and validation of AIMS data to gain insight into outliers and aberrant patterns.
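One form the automated annotation mentioned above can take is a rule-based screen that flags physiologically implausible values and abrupt jumps before analysis. The sketch below is a minimal illustration under assumed thresholds; the function name and limits are hypothetical and are not drawn from the cited annotation or machine learning systems.

```python
# Hypothetical sketch: rule-based artifact flagging on an AIMS
# vital-sign series. Thresholds are illustrative, not clinical advice.

def flag_artifacts(values, lo, hi, max_jump):
    """Return indices of samples that are missing, outside a plausible
    physiologic range, or an implausible jump from the last retained sample."""
    flagged = set()
    prev = None
    for i, v in enumerate(values):
        if v is None or not (lo <= v <= hi):
            flagged.add(i)      # missing or out of physiologic range
            continue
        if prev is not None and abs(v - prev) > max_jump:
            flagged.add(i)      # abrupt jump, e.g. electrocautery-induced doubling
            continue
        prev = v                # only plausible samples update the baseline
    return sorted(flagged)

# Heart rate series with an electrocautery-like doubling at index 3
hr = [92, 94, 95, 188, 96, 97]
print(flag_artifacts(hr, lo=30, hi=220, max_jump=40))  # [3]
```

Such flags are best treated as annotations for human review rather than automatic deletions, since a true clinical deterioration can also produce abrupt changes.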

What message can practicing clinicians take from the report of Hoorweg et al.6  of their AIMS artifacts? Clinicians should always practice with a questioning attitude, not only in the operating room, but also when reading research articles or participating in quality improvement projects based on AIMS data. Every clinician who uses an AIMS is at “ground zero” for artifact generation (not only from automated sources but also manually entered “garbage in”) and should recognize and consider ways to reduce the artifacts that might produce faulty and potentially harmful research and quality improvement conclusions. Clinicians should feel obligated and empowered to annotate or manually correct data artifacts when they occur. The study of data artifacts by Hoorweg et al.6  in their own institution has provided an excellent example of why every clinician should not simply trust but also verify the AIMS data upon which many current clinical research and quality improvement projects are built.

The authors are not supported by, nor maintain any financial interest in, any commercial activity that may be associated with the topic of this article.

1. Lanier WL: Using database research to affect the science and art of medicine. Anesthesiology 2010; 113:268–70
2. Takla G, Petre JH, Doyle DJ, Horibe M, Gopakumaran B: The problem of artifacts in patient monitor data during surgery: A clinical and methodological review. Anesth Analg 2006; 103:1196–204
3. Simpao AF, Ahumada LM, Rehman MA: Big data and visual analytics in anaesthesia and health care. Br J Anaesth 2015; 115:350–6
4. Eisenach JC, Kheterpal S, Houle TT: Reporting of observational research in Anesthesiology: The importance of the analysis plan. Anesthesiology 2016; 124:998–1000
5. Fleischut PM, Mazumdar M, Memtsoudis SG: Perioperative database research: Possibilities and pitfalls. Br J Anaesth 2013; 111:532–4
6. Hoorweg AJ, Pasma W, van Wolfswinkel L, de Graaff JC: Incidence of artifacts and deviating values in research data obtained from an anesthesia information management system in children. Anesthesiology 2018; 128:293–304
7. Kool NP, van Waes JA, Bijker JB, Peelen LM, van Wolfswinkel L, de Graaff JC, van Klei WA: Artifacts in research data obtained from an anesthesia information and management system. Can J Anaesth 2012; 59:833–41
8. Wax DB, Beilin Y, Hossain S, Lin HM, Reich DL: Manual editing of automatically recorded data in an anesthesia information management system. Anesthesiology 2008; 109:811–5
9. Bijker JB, van Klei WA, Kappen TH, van Wolfswinkel L, Moons KG, Kalkman CJ: Incidence of intraoperative hypotension as a function of the chosen definition: Literature definitions applied to a retrospective cohort using automated data collection. Anesthesiology 2007; 107:213–20
10. Gostt RK, Rathbone GD, Tucker AP: Real-time pulse oximetry artifact annotation on computerized anaesthetic records. J Clin Monit Comput 2002; 17:249–57
11. Gálvez JA, Jalali A, Ahumada L, Simpao AF, Rehman MA: Neural network classifier for automatic detection of invasive versus noninvasive airway management technique based on respiratory monitoring parameters in a pediatric anesthesia. J Med Syst 2017; 41:153