The authors hypothesized that the electroencephalogram (EEG) during higher anesthetic concentrations would show more "order" and less "randomness" than at lower anesthetic concentrations. "Approximate entropy" is a new statistical parameter derived from the Kolmogorov-Sinai entropy formula that quantifies the amount of regularity in data. It quantifies the predictability of subsequent amplitude values of the EEG from knowledge of the previous amplitude values. The authors investigated the dose-response relation of EEG approximate entropy during desflurane anesthesia in comparison with spectral edge frequency 95, median frequency, and bispectral index.
Twelve female patients were studied during gynecologic laparotomies. Between opening and closure of the peritoneum, end-tidal desflurane concentrations were varied between 0.5 and 1.6 minimum alveolar concentration (MAC). The EEG approximate entropy, median EEG frequency, spectral edge frequency 95, and bispectral index were determined, and the performance of each in predicting the desflurane effect compartment concentration, obtained by simultaneous pharmacokinetic-pharmacodynamic modeling, was compared.
Electroencephalogram approximate entropy decreased continuously over the observed concentration range of desflurane. The performance of the approximate entropy (prediction probability PK = 0.86 ± 0.06) as an indicator of desflurane concentration was similar to that of spectral edge frequency 95 (PK = 0.86 ± 0.06) and bispectral index (PK = 0.82 ± 0.06) and was statistically significantly better than that of median frequency (PK = 0.78 ± 0.06).
The amount of regularity in the EEG increases with increasing desflurane concentrations. The approximate entropy could be a useful EEG measure of anesthetic drug effect.
Automatic analysis of the electroencephalogram (EEG) is increasingly used for monitoring of electrical cortical activity during anesthesia to reduce the complexity of the EEG pattern to a single parameter associated with anesthetic drug effect.
EEG signals are commonly analyzed in the frequency domain using a fast Fourier transformation, and indices such as median EEG frequency,1,2 spectral edge frequency,3 δ power,4 or a complex parameter like the bispectral index,2,4,5 which is composed of a combination of time domain, frequency domain, and higher-order spectral subparameters,6 are derived.
Neuronal systems at some level have been shown to exhibit various nonlinear behaviors.7 Thus, it has been suggested to regard the EEG waveform not as a sum of sine waves but as a chaotic pattern.8 It therefore seems reasonable to apply methods from the theory of nonlinear dynamics to the EEG signal.9 Entropy is a concept that addresses system randomness and predictability.10 Various entropy algorithms have been developed in the past two decades to characterize chaotic behavior in time series data,11,12 because entropy has been shown to characterize chaotic behavior.13 The algorithm of approximate entropy was published in 199114 and offers rapid calculation of the regularity of biologic signals, which could potentially be applied for on-line analysis during anesthesia.
The approximate entropy quantifies the predictability of subsequent amplitude values of a data series (e.g., the EEG) based on knowledge of the previous amplitude values. In a perfectly regular data series, knowledge of the previous values enables prediction of the subsequent value, and the approximate entropy value is zero. For example, in the perfectly regular data series 0, 0, 1, 0, 0, 1, … , knowing that the two previous values were 0 and 0 enables the prediction that the subsequent value will be 1.
With increasing irregularity, the prediction of the subsequent value gets worse even when the previous values are known, and the approximate entropy value increases. In an entirely irregular data series, even knowing the previous values does not allow any prediction of the subsequent value. The approximate entropy value of an entirely irregular data series depends on the length of the epoch and on the number of previous values used for the prediction of the subsequent value. Simulations with data series of random numbers showed that, for an epoch length of 1,024 data points, as used in this study, the approximate entropy value of an irregular data series is about 1.7.
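For illustration, the following minimal sketch (our own code, not part of the study software) computes the approximate entropy of a perfectly regular and of an entirely irregular series. The value for the random series should fall near the 1.7 quoted above; the exact figure depends on the random-number source.

```python
import math
import random

def approximate_entropy(u, m, r_frac=0.2):
    """ApEn(m, r, N): compares the regularity of length-m and length-(m + 1)
    patterns; r is the filter level, expressed as a fraction of the SD."""
    n = len(u)
    mean = sum(u) / n
    r = r_frac * math.sqrt(sum((x - mean) ** 2 for x in u) / n)

    def phi(m):
        count = n - m + 1
        total = 0.0
        # O(N^2) pairwise comparison; takes a few seconds in pure Python
        for i in range(count):
            matches = sum(
                all(abs(u[i + k] - u[j + k]) <= r for k in range(m))
                for j in range(count)
            )
            total += math.log(matches / count)
        return total / count

    return phi(m) - phi(m + 1)

random.seed(1)
regular = ([0.0, 0.0, 1.0] * 342)[:1024]                   # 0, 0, 1, 0, 0, 1, ...
irregular = [random.gauss(0.0, 1.0) for _ in range(1024)]  # random numbers
print(approximate_entropy(regular, m=2))    # near 0
print(approximate_entropy(irregular, m=2))  # near 1.7
```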
The aim of this study was to investigate the relation between the regularity of EEG pattern and the concentration of desflurane.
Materials and Methods
Patients and Anesthesia
After approval by the local ethics committee, written informed consent was obtained from 12 female patients aged 33–65 yr scheduled for gynecologic laparotomies. All participants were American Society of Anesthesiologists physical status 1 or 2. Patients with neurologic deficits, thyroid dysfunction, or pregnancy were not included. The patients received 7.5 mg midazolam per os 2 h before surgery. Anesthesia was induced with 2 mg/kg propofol. Vecuronium was administered for neuromuscular block. After the trachea was intubated, anesthesia was maintained with desflurane as the sole anesthetic agent. End-tidal desflurane concentrations were measured with a Capnomac anesthetic gas analyzer (Datex, Copenhagen, Denmark). The patients' lungs were ventilated with oxygen and air (fraction of inspired oxygen = 0.4). End-tidal carbon dioxide tension was measured continuously and kept at 35 mmHg. Blood pressure and heart rate were measured noninvasively with a Dinamap Vital Data Monitor (Criticon, Tampa, FL) at 3-min intervals.
EEG Analysis
The EEG was recorded continuously with a frontal montage (Fp1–Fpz, Fp2–Fpz; international 10–20 system; Sirecust 404, Siemens, Erlangen, Germany). The raw signal was filtered between 0.5 and 32 Hz, digitized at a rate of 125 Hz, divided into epochs of 8.192-s duration (1,024 data points per epoch), and stored on a hard drive for further "off-line" analysis. The approximate entropy (discussed later), the median EEG frequency (50% quantile of the power spectrum), and the spectral edge frequency 95 (95% quantile of the power spectrum) were calculated. A moving average over seven epochs (the current epoch plus the three preceding and the three following epochs) was used for data smoothing of approximate entropy, median EEG frequency, and spectral edge frequency 95. In addition, we used the Aspect A-1000 (Aspect, Natick, MA) to determine the bispectral index. The bispectral index was internally averaged over 60 s. For each EEG epoch, the corresponding end-tidal desflurane concentration was recorded.
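As an illustration (our own sketch; the windowing and spectral-estimation details of the clinical devices are not reproduced here), both spectral parameters are quantiles of the power spectrum of a single 1,024-point epoch:

```python
import numpy as np

def spectral_quantile(epoch, fs=125.0, q=0.95, lo=0.5, hi=32.0):
    """Frequency below which a fraction q of the band-limited EEG power lies:
    q = 0.5 gives the median frequency, q = 0.95 the spectral edge frequency 95."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    band = (freqs >= lo) & (freqs <= hi)   # 0.5-32 Hz analysis band, as above
    cum = np.cumsum(power[band])
    idx = np.searchsorted(cum, q * cum[-1])
    return freqs[band][idx]

epoch = np.random.randn(1024)              # one 8.192-s epoch sampled at 125 Hz
print(spectral_quantile(epoch, q=0.5))     # median EEG frequency
print(spectral_quantile(epoch, q=0.95))    # spectral edge frequency 95
```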
Only the EEG data from the time interval between opening and closure of the peritoneum were used for further analysis. In all cases a waiting period of at least 60 min after induction of anesthesia was maintained to minimize the effects of the propofol administered for induction. After opening of the peritoneum, the end-tidal desflurane concentration was decreased until 0.5 minimum alveolar concentration (MAC; corresponding to approximately 1.3 · MAC-awake) was achieved or until the attending anesthesiologist no longer considered the depth of anesthesia to be clinically adequate. Subsequently, the desflurane vaporizer setting was increased until an end-tidal desflurane concentration of 1.6 MAC was achieved or until the attending anesthesiologist considered the level of anesthesia to be too deep, and the vaporizer setting was then decreased again. Several of these cycles were performed in each patient.
Approximate Entropy
The approximate entropy was calculated off-line on a personal computer according to the following algorithm14:

$$\mathrm{ApEn}(m, r, N) = \Phi^{m}(r) - \Phi^{m+1}(r)$$

where $\Phi^{m}(r)$ is defined as

$$\Phi^{m}(r) = \frac{1}{N - m + 1} \sum_{i=1}^{N-m+1} \ln C_{i}^{m}(r)$$

and $C_{i}^{m}(r)$ is defined as

$$C_{i}^{m}(r) = \frac{\text{number of } j \leq N - m + 1 \text{ such that } d[x(i), x(j)] \leq r}{N - m + 1}, \qquad d[x(i), x(j)] = \max_{k=1,\ldots,m} |u(i+k-1) - u(j+k-1)|$$

where x(i) and x(j) are vectors defined by

$$x(i) = [u(i), \ldots, u(i + m - 1)], \qquad x(j) = [u(j), \ldots, u(j + m - 1)]$$

from a time series of data u(1), u(2), … , u(N).
The value of the approximate entropy as a relative (not absolute) measure depends on three parameters: the length of the epoch (N), the number of previous values used for the prediction of the subsequent value (m), and a filtering level (r). The noise filter r was used as a relative quantity, expressed as a fraction of the SD of the N amplitude values. We started our analysis with the parameters m = 2, r = 20% of the SD of the amplitude values, and N = 1,024, because theoretical considerations suggested these parameters as a good starting point.10,14,15
In a separate step, we evaluated the influence of different values of m (1, 2, and 3), r (between 0% and 90% of the SD in increments of 10%), and N (256, 512, and 2,048) on the prediction probability PK16 of the EEG approximate entropy. A step-by-step procedure to calculate approximate entropy, a short example, and a VBasic program can be found in the Appendix.
Pharmacodynamic Analysis
The relation between end-tidal concentrations of desflurane and approximate entropy was described by a fractional sigmoid Emax model17 with input from a first-order effect site model18:

$$\frac{dC_{\mathrm{eff}}}{dt} = k_{e0} \, (C_{\mathrm{et}} - C_{\mathrm{eff}})$$

where Ceff, Cet = effect compartment and end-tidal concentrations of desflurane, and ke0 = first-order rate constant determining the efflux from the effect compartment.

$$\text{approximate entropy} = \text{approximate entropy}_{0} \left( 1 - \frac{C_{\mathrm{eff}}^{\gamma}}{C_{\mathrm{eff}}^{\gamma} + C_{50}^{\gamma}} \right)$$

where approximate entropy0 = baseline value (approximate entropy in the awake state) averaged from 12 awake volunteers, C50 = concentration associated with 50% of the maximum effect, and γ = steepness factor determining the slope of the concentration–response relation.
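A minimal sketch of these two model components follows (our code; all parameter values are illustrative placeholders, not the study's estimates):

```python
import numpy as np

def effect_site(c_et, t, ke0):
    """First-order effect compartment: dCeff/dt = ke0 * (Cet - Ceff)."""
    c_eff = np.zeros_like(c_et, dtype=float)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        # exact update over dt, treating Cet as constant on the interval
        c_eff[i] = c_et[i] + (c_eff[i - 1] - c_et[i]) * np.exp(-ke0 * dt)
    return c_eff

def fractional_emax(c_eff, apen0, c50, gamma):
    """Fractional sigmoid Emax model: ApEn falls from apen0 toward zero."""
    return apen0 * (1.0 - c_eff**gamma / (c_eff**gamma + c50**gamma))

# a rising-then-falling end-tidal profile (vol%, minutes; values assumed)
t = np.arange(0.0, 60.0, 0.5)
c_et = np.interp(t, [0.0, 20.0, 40.0, 60.0], [3.0, 9.0, 9.0, 3.0])
c_eff = effect_site(c_et, t, ke0=0.8)      # ke0 in 1/min (assumed)
apen_pred = fractional_emax(c_eff, apen0=1.3, c50=6.0, gamma=2.0)
```

Plotting apen_pred against c_et traces a hysteresis loop, whereas plotting it against c_eff collapses onto a single sigmoid curve.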
We used the prediction probability PK16 to determine which EEG parameter (median frequency, spectral edge frequency 95, bispectral index, or approximate entropy) predicted desflurane effect compartment concentrations most accurately. To achieve the optimal correlation between effect compartment concentration and the EEG parameters, we determined ke0 and the pharmacodynamic parameters independently for each parameter and each individual. The computations were performed on a spreadsheet using the Excel software program (Microsoft, Redmond, WA). The parameters were optimized with the Solver tool within Excel using nonlinear regression with ordinary least squares.
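An equivalent sketch of this per-individual nonlinear least-squares fit in Python (our code, reusing effect_site, fractional_emax, t, c_et, and apen_pred from the sketch above) might look as follows:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, t, c_et, apen_obs, apen0):
    ke0, c50, gamma = params
    c_eff = effect_site(c_et, t, ke0)      # helper defined above
    return fractional_emax(c_eff, apen0, c50, gamma) - apen_obs

# simulated noisy observations for one individual (illustrative only)
apen_obs = apen_pred + np.random.default_rng(0).normal(0.0, 0.06, t.shape)
fit = least_squares(residuals, x0=[0.5, 5.0, 1.5],
                    args=(t, c_et, apen_obs, 1.3), bounds=(1e-6, np.inf))
ke0_hat, c50_hat, gamma_hat = fit.x        # one parameter set per individual
```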
In addition, we estimated population values of ke0 and the pharmacodynamic parameters based on approximate entropy and end-tidal desflurane concentration measurements from the entire population. These calculations were performed with NONMEM.19 Initially, the typical values of the structural parameters were determined with Ω2 set to zero (naive pooled data [NPD] approach). Thereafter, the typical values were fixed to their NPD values and the variance parameters were determined.
A proportional model was used to describe the interindividual variability of the parameters:

$$\theta_{n,i} = \bar{\theta}_{n} \, (1 + \eta_{n,i})$$

where θ(n,i) refers to the individual value of the nth parameter in the ith individual, θ̄(n) is the typical value in the population of the nth parameter, and η varies randomly between individuals with mean zero and diagonal variance–covariance matrix Ω2.
An additive error model was chosen for modeling residual variability:

$$\text{approximate entropy}_{\mathrm{obs}} = \text{approximate entropy}_{\mathrm{exp}} + \varepsilon$$

where approximate entropy(obs) refers to the observed value of the approximate entropy, approximate entropy(exp) refers to the value predicted based on desflurane concentration, time, and the individual ke0 and pharmacodynamic parameters, and ε is a normally distributed random variable with mean zero and variance σ2.
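A short sketch of how the two variability models combine to generate one observed value (our code; the typical values and variances are placeholders, not the estimates in table 1):

```python
import numpy as np

rng = np.random.default_rng(42)
theta_pop = {"ke0": 0.8, "c50": 6.0, "gamma": 2.0}  # typical values (assumed)
omega2 = {"ke0": 0.09, "c50": 0.04, "gamma": 0.04}  # diagonal of Omega^2 (assumed)
sigma = 0.06                                        # residual SD (assumed)

# proportional interindividual variability: theta_i = typical * (1 + eta)
theta_i = {name: tv * (1.0 + rng.normal(0.0, np.sqrt(omega2[name])))
           for name, tv in theta_pop.items()}

# additive residual error: observed ApEn = predicted ApEn + epsilon
apen_exp = 0.9                                      # model prediction (placeholder)
apen_obs = apen_exp + rng.normal(0.0, sigma)
```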
Statistical Analysis
The prediction probability PK16 was used to determine the predictive performance of the derived EEG parameters (approximate entropy, EEG median frequency, spectral edge frequency 95, and bispectral index) for desflurane effect compartment concentrations. The best approximate entropy parameter set (m and r) was chosen using the prediction probability PK values for the approximate entropy values calculated with the different parameter sets.
Differences in prediction probability PK values between approximate entropy and median EEG frequency, spectral edge frequency 95, and bispectral index were tested using the Wilcoxon test. Statistical significance was assumed at probability levels of P < 0.05.
Results
Approximate Entropy (m = 2, r = 0.2, N = 1,024)
With increasing desflurane concentrations, the EEG approximate entropy decreased (and vice versa) with a time delay (fig. 1). Plotting approximate entropy versus end-tidal desflurane concentration revealed hysteresis (fig. 2A), which was collapsed by introduction of an effect compartment (fig. 2B).
Fig. 1. End-tidal desflurane concentration and electroencephalogram approximate entropy (m = 2, r = 0.2 · SD, N = 1,024) versus time between opening and closure of the peritoneum for patient no. 1.
Fig. 2. Relation between electroencephalogram (EEG) approximate entropy (m = 2, r = 0.2 · SD, N = 1,024) and end-tidal desflurane concentrations (left) and between EEG approximate entropy (m = 2, r = 0.2 · SD, N = 1,024) and desflurane effect compartment concentrations (right) for patient no. 1.
The parameter estimates for the sigmoid fractional Emax model are displayed in table 1. The resulting prediction based on the typical parameter values is shown in figure 3 (line). Superimposed on the prediction are the approximate entropy values calculated for each individual epoch (dots). The fit adequately describes the approximate entropy data.
Table 1. Pharmacodynamic Parameters
The typical values of the parameters were calculated with the NPD method and subsequently fixed for calculation of interindividual variability. The baseline value was fixed to the approximate entropy value for awake subjects as indicated in methods. The mean residual error (additive error model) was 0.094 without and 0.064 with accounting for interindividual variability.
IIV = interindividual variability.

Fig. 3. Relation between electroencephalogram (EEG) approximate entropy (m = 2, r = 0.2 · SD, N = 1,024) and desflurane effect compartment concentrations. Dots refer to all values calculated from the raw EEG (“measured” approximate entropy), and the line represents the naive pooled data (NPD) fit through the data.
The performance of approximate entropy (prediction probability PK = 0.86 ± 0.06, mean ± SD) as an index of desflurane effect compartment concentrations is equal to that of spectral edge frequency 95 (PK = 0.86 ± 0.06), statistically significantly (P < 0.05) better than that of EEG median frequency (PK = 0.78 ± 0.06), and slightly but not significantly better than that of bispectral index (PK = 0.82 ± 0.06; fig. 4).
Fig. 4. Prediction probability PK comparison. PK values for all four measures (approximate entropy, spectral edge frequency 95, bispectral index, and electroencephalogram [EEG] median frequency) of desflurane EEG effect. Each black dot represents a patient.
EEG effect (approximate entropy, spectral edge frequency 95, bispectral index, and EEG median frequency) versus desflurane effect compartment concentration relations in patients with the median and the worst prediction probability (PK) value for approximate entropy are shown in figures 5 and 6.
Fig. 5. Effect (vertical axis) versus desflurane effect compartment concentration (horizontal axis) relations in the patient with the median prediction probability (PK) value for approximate entropy. The relations for all four electroencephalogram (EEG) measures (approximate entropy, spectral edge frequency 95, bispectral index, and EEG median frequency) in this patient are shown. Each graph also shows the value for prediction probability PK.
Fig. 6. Effect (vertical axis) versus desflurane effect compartment concentration (horizontal axis) relations in the patient with the lowest prediction probability (PK) value for approximate entropy. The relationships for all four electroencephalogram (EEG) measures (approximate entropy, spectral edge frequency 95, bispectral index, and EEG median frequency) in this patient are shown. Each graph also shows the value for prediction probability PK.
Approximate Entropy (m, r, N)
Calculating EEG approximate entropy for 2,048 data points (m = 2, r = 0.2 · SD, N = 2,048) did not improve prediction probability (PK = 0.86 ± 0.06) compared with 1,024 data points (discussed previously; PK = 0.86 ± 0.06). Decreasing the number of data points by 50% (or 75%) worsened the prediction probability slightly (N = 512: PK = 0.85 ± 0.06; N = 256: PK = 0.83 ± 0.06).
The influence of different combinations of m (the number of previous values used for the prediction of the subsequent value) and r (the filtering level) with N = 1,024 on the prediction probability PK16 for approximate entropy as an index of desflurane effect compartment concentrations is displayed in table 2 and figure 7. The combinations m = 2, r = 0.2 · SD and m = 3, r = 0.2 · SD yielded the best PK.
Table 2. Prediction Probability, PK, of Electroencephalogram Approximate Entropy with Different Parameter Sets: Different Number of Previous Values Used for the Prediction of the Subsequent Value (m) and Different Filter Levels (r)
Values are mean ± SD. The best prediction probabilities, PK, are printed in bold. A value of 1 means the indicator predicts the anesthetic concentration perfectly; a value of 0.5 means the indicator predicts no better than a 50:50 chance.16

Fig. 7. Prediction probability PK of electroencephalogram approximate entropy with different parameter sets: different lengths of compared runs (m) and different filter levels (r). Each dot represents the mean prediction probability PK over the 12 patients for a combination of r (horizontal axis) and m (different dotted lines). A value of 1 means that the indicator predicts the anesthetic concentration perfectly; a value of 0.5 means that the indicator predicts no better than a 50:50 chance.16
Except for m = 1, approximate entropy calculated without a filtering level (r = 0 · SD) could not predict the desflurane effect compartment concentration (PK = 0.5, i.e., prediction no better than a 50:50 chance).16 Choosing filtering levels r between 0.2 · SD and 0.9 · SD yielded different approximate entropy values but changed PK only slightly.
Discussion
In this study we demonstrated a close correlation between approximate entropy values and desflurane effect compartment concentrations. During anesthesia with high desflurane concentrations, the EEG showed more "order" and less "randomness" (i.e., lower approximate entropy values) than at lower desflurane concentrations. Approximate entropy detected differences in EEG dynamics over the observed concentration range as well as (in the case of spectral edge frequency 95 and bispectral index) or better than (in the case of EEG median frequency) conventional EEG measures of anesthetic drug effect, as judged by the PK value. Given two randomly selected data points with distinct anesthetic drug concentrations, the prediction probability PK is the probability that the EEG parameter correctly predicts which of the two is the one with the higher (or lower) anesthetic drug concentration. In our opinion, this is an adequate performance criterion for a measure of anesthetic drug effect. The correlation coefficient r2 for approximate entropy, spectral edge frequency 95, bispectral index, and median EEG frequency was also computed and confirmed our findings with PK.
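For concreteness, the following minimal sketch implements this pairwise reading of PK (our own illustration of the published definition16; indicator ties are counted as half-correct, and the indicator is assumed to decrease with increasing concentration, as approximate entropy does):

```python
from itertools import combinations

def prediction_probability(conc, indicator):
    """PK: probability that the indicator correctly orders a randomly
    chosen pair of observations with distinct drug concentrations."""
    correct = ties = total = 0
    for (c1, e1), (c2, e2) in combinations(zip(conc, indicator), 2):
        if c1 == c2:
            continue          # equal concentrations carry no ordering information
        total += 1
        if (e1 - e2) * (c1 - c2) < 0:   # inverse ordering counted as correct
            correct += 1
        elif e1 == e2:
            ties += 1
    return (correct + 0.5 * ties) / total

# a perfectly inverse indicator yields PK = 1.0
print(prediction_probability([3, 6, 9, 12], [1.5, 1.2, 0.9, 0.4]))
```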
The use of the spectral edge frequency 95 and the EEG median frequency as measures of anesthetic drug effect is limited by a biphasic response at low anesthetic concentrations2 and by the misclassification of burst suppression as lightening of anesthesia.20 The bispectral index is a univariate descriptor of anesthetic drug effect devoid of these shortcomings. However, it is calculated from components of the EEG signal with empirically obtained weighting coefficients, which are proprietary and unpublished.5,6 Therefore, a further search for an "optimal" EEG parameter as a measure of anesthetic drug effect appears to be justified.
Concept of Approximate Entropy
Neuronal systems at some level have been shown to exhibit a variety of nonlinear behaviors.7 Therefore, it seems reasonable to hypothesize that some of the complex behavior of certain macroscopic neural systems, such as the human EEG, reflects the effects of those underlying nonlinear mechanisms.21 Measures from nonlinear dynamics of the EEG should, in principle, allow insight into the evolution of complexity of underlying brain activity.22
Entropy is a concept that addresses system randomness and predictability. Greater entropy is often associated with more randomness and less system order. Classically, entropy has been part of the modern quantitative development of thermodynamics, statistical mechanics, and information theory. Interest in entropy has grown since it was shown to be a parameter that characterizes chaotic behavior.13
Approximate entropy was constructed not to test for a particular model form, such as deterministic chaos, but to provide a widely applicable, statistically valid formula for data analysis that can distinguish data sets by a measure of regularity.14,15 The approximate entropy has three technical advantages in comparison with other entropy algorithms11,12 for statistical usage:
1. The approximate entropy is nearly unaffected by noise of magnitude below the filter level, r, and therefore robust to occasional very large or small artifacts.
2. The approximate entropy gives meaningful information with a reasonable number of data points.
3. The approximate entropy is finite for both stochastic and deterministic processes. These properties are key in the current context, because physiologic time series such as the EEG are probably composed of both stochastic and deterministic components.
The approximate entropy has been used in endocrinology to characterize normal and abnormal pulsatility of hormones23–32 and in ECG data to determine complexity in heart rate variability33–37 (e.g., in postinfarction patients,38 in experimental human endotoxemia,39 and after cardiac surgery40).
The quantification of regularity in the EEG by approximate entropy was suggested as early as 1991.14 Recently, Sleigh and Donovan published a comparison of bispectral index, spectral edge frequency 95, and "approximate entropy" of the EEG during induction of general anesthesia.41 Unfortunately, they did not attempt to optimize the performance of their approximate entropy parameter. The absolute value of the approximate entropy depends on the filtering level (r), the number of data points per epoch (N), and the number of previous values used for the prediction of the subsequent value (m), all of which should be optimized. We tried to identify the optimal parameters (N, m, r) for the calculation of EEG approximate entropy. Some theoretical considerations regarding the optimal choice of these parameters have been published previously.10,14,15
The input data for approximate entropy is a scalar time series, typically with between 100 and 5,000 numbers. Fewer than 100 numbers will likely yield a less meaningful computation, especially for m = 2 or m = 3.10 Pincus demonstrated the utility of approximate entropy (m = 2, r, N = 1,000) by applying this statistic to three low-dimensional nonlinear deterministic systems.15 The number of previous values used for the prediction of the subsequent value (m) is typically chosen as m = 2 or m = 3.14 It can be demonstrated theoretically that the optimal value of m depends on the number of data points N. According to Wolf et al.,42 the number of data points should range between 10^m and 30^m. Pincus suggested that for m = 2 and N = 1,000, r should range from 0.1 to 0.2 · SD of the u(i) data.15
Therefore we started the calculation of approximate entropy with the parameters m = 2, r = 0.2 · SD, and N = 1,024. In further steps we evaluated the prediction probability PK for the EEG approximate entropy calculated with other values of m, r, and N. The best prediction probability of approximate entropy was observed for the parameter sets N = 1,024, m = 2, r = 0.2 · SD and N = 1,024, m = 3, r = 0.2 · SD. In fact, this is well in accordance with the theoretical considerations of Pincus et al.10,14,15
Unfortunately, Sleigh and Donovan chose arbitrary values for N and r that correspond neither to theoretical considerations nor to our own results. Therefore, values of their "approximate entropy" parameter might differ substantially from ours, limiting comparability. We suggest standardization of the values of m, N, and r to circumvent this problem in the future.
Pincus and Goldberger10 provided a physiologic interpretation of what changing (decreasing) approximate entropy could indicate: a measure such as the EEG represents the output of multiple mechanisms, including coupling ("feedback") interactions and external inputs ("noise") from internal and external sources. In general, approximate entropy increases with greater system coupling and greater external inputs. A decrease of approximate entropy therefore represents system decoupling and/or a lessening of external inputs.
In fact, this is a model for how anesthesia could be described. Based on information theory, approximate entropy should therefore be an appropriate measure of anesthetic drug effect.
Limitations of the Study Design
We chose to perform our investigation in a setting that is as close as possible to the clinical situation. Therefore we studied patients undergoing major surgery instead of volunteers; this necessarily restricted our design options.
First, all patients received midazolam as premedication before desflurane. Because benzodiazepines alter the EEG,4,43 this might have influenced the desflurane concentration–effect curve. However, the concentrations of midazolam necessary to cause EEG changes (> 100 ng/ml43) exceed the peak concentration after oral administration of 7.5 mg (35 ng/ml, as extrapolated from Greenblatt et al.44) by a factor of three.
Second, propofol for induction of anesthesia was administered as a single bolus at least 60 min before data acquisition. Based on pharmacokinetic simulation with parameters from 24 volunteers,45 the mean concentration 60 min after administration was 0.1 μg/ml, far below the EC50 for hypnotic effect (2.2 μg/ml).46,47 Therefore, it is unlikely that propofol contributed to hypnotic effects at the time of data acquisition.
Third, the investigated concentrations were limited to the range of 0.5–1.6 MAC to avoid patient awareness at lower levels and excessive cardiovascular depression at higher levels. In a volunteer trial, researchers could explore concentrations beyond this limited range.
Fourth, a constant level of stimulation during data acquisition was assumed. Although we sampled data only between opening and closure of the peritoneum, which was a period of relatively constant stimulus intensity, changing levels of stimulation cannot be excluded. Because painful stimuli may cause EEG arousal reactions,48 this might have led to a change in the EEG signal despite constant anesthetic concentrations, thereby lowering the correlation between EEG parameters and desflurane concentration. However, the high predictive probabilities of the EEG parameters for desflurane effect compartment concentrations support the assumption of a constant level of stimulation during data acquisition.
Our promising results obtained in patients suggest that a volunteer trial should be done to more precisely define the role of EEG approximate entropy as a measure of anesthetic drug effect.
We conclude that the amount of regularity in the EEG increases with increasing desflurane concentrations. The approximate entropy appears to be a relatively simple and robust EEG measure of anesthetic drug effect.
References
Appendix
The following is a step-by-step procedure to calculate the approximate entropy of a sample with N data points: x1, x2, x3, x4, x5, … , xN. In this study, N = 1,024 was used, but for simplicity, N = 10 is used in the following example.
Step 1
Compute the standard deviation of the points. This is done using the conventional formula for standard deviation.
Step 2
Compute R, the range around any point that will constitute a “match.” This is computed as r (the filter factor) times the standard deviation computed in step 1. For example, if the standard deviation is 10 and r is 0.2, then points within 2 units are considered a match in steps 3 and 4.
Step 3
Step 3 establishes how repetitive the signal is. Find the number of matches between the first sequence of m data points and all sequences of m data points, up to the sequence beginning with the (N − m)th data point (see also step 6). If m = 1, this involves comparing x1 with x1, x2, x3, x4, x5, x6, x7, x8, and x9. For every point within R units of x1, a match is counted. Obviously, there is at least one match, because x1 is compared with itself. If m = 2, this involves comparing (x1, x2) with the sequences (x1, x2), (x2, x3), (x3, x4), (x4, x5), (x5, x6), (x6, x7), (x7, x8), and (x8, x9). In this case, a match counts only if both x1 and x2 match (i.e., are closer than R to) their respective points in the comparison sequence. If m = 3, this involves comparing (x1, x2, x3) with the sequences (x1, x2, x3), (x2, x3, x4), (x3, x4, x5), (x4, x5, x6), (x5, x6, x7), (x6, x7, x8), and (x7, x8, x9). In this case, a match counts only if x1, x2, and x3 all match their respective points in the comparison sequence.
Step 4
Step 4 establishes how well each match identified in step 3 predicts the subsequent data point. Here we find the number of matches between the first sequence of m + 1 data points and all sequences of m + 1 data points. For m = 1, this involves comparing (x1, x2) with the sequences (x1, x2), (x2, x3), (x3, x4), (x4, x5), (x5, x6), (x6, x7), (x7, x8), (x8, x9), and (x9, x10). Note that this is identical to the m = 2 description in step 3. However, by comparing the number of matches in step 4 with the number of matches in step 3, we can see how well the first point predicted the second point. Similarly, if m = 2, this involves comparing (x1, x2, x3) with the sequences (x1, x2, x3), (x2, x3, x4), (x3, x4, x5), (x4, x5, x6), (x5, x6, x7), (x6, x7, x8), (x7, x8, x9), and (x8, x9, x10). This is the same as m = 3 in step 3 above. However, by comparing the number of matches found for m = 2 in step 3 with the number of matches here, we can see how well a pair of adjacent points predicts the third point. Similarly, if m = 3, this involves comparing (x1, x2, x3, x4) with the sequences (x1, x2, x3, x4), (x2, x3, x4, x5), (x3, x4, x5, x6), (x4, x5, x6, x7), (x5, x6, x7, x8), (x6, x7, x8, x9), and (x7, x8, x9, x10). By comparing the number of matches for m = 3 in step 3 with the number of matches here, we can see how well the first three points predicted the fourth point.
Step 5
As mentioned in step 4, we are looking at the ability of a sequence of points to predict the subsequent point. As such, we normalize the number of matches in step 4 by the number of matches (i.e., the maximum possible predictions) in step 3. This is done by dividing the results of step 4 by the results of step 3, yielding a number greater than 0 and less than or equal to 1. We then take the logarithm of that ratio.
Step 6
We then repeat steps 3, 4, and 5, but starting at the second, third, fourth, fifth, sixth, seventh, eighth, and ninth data points, as far as possible. Actually, for m = 2 we cannot go further than the eighth data point, and for m = 3 we cannot go further than the seventh data point. In general, we cannot go further than the (N − m)th data point. In each case, however, the comparison sequences still begin at the first data point (x1), so the analyzed sequence is always compared with itself, assuring at least one match in the analysis.
Step 7
Add together all the logarithms computed in step 5, and divide the sum by (N − m). The result is then multiplied by −1. This is the approximate entropy (m, r, N).
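The following sketch (our code, not the VBasic program referenced at the end of the Appendix) translates steps 1 through 7 literally, including the normalization by N − m used here, which differs negligibly from the formula in Materials and Methods for large N:

```python
import math

def apen_appendix(x, m, r_factor):
    """Approximate entropy computed via the step-by-step procedure above."""
    n = len(x)
    mean = sum(x) / n                                                  # step 1: SD
    big_r = r_factor * math.sqrt(sum((v - mean) ** 2 for v in x) / n)  # step 2: R

    def matches(i, length):
        # count windows of `length` points, starting at the 1st..(N - m)th
        # data points, that match the window beginning at i within R everywhere
        return sum(
            all(abs(x[i + k] - x[j + k]) <= big_r for k in range(length))
            for j in range(n - m)
        )

    total = 0.0
    for i in range(n - m):          # step 6: repeat for every start point
        b = matches(i, m)           # step 3: matches of the m-point sequence
        a = matches(i, m + 1)       # step 4: matches of the (m + 1)-point sequence
        total += math.log(a / b)    # step 5: logarithm of the ratio
    return -total / (n - m)         # step 7: average and negate
```

With r_factor = 0, matching reduces to exact equality, as in the short example that follows.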
Short Example (Without a Filter Level, r)
This example calculates the approximate entropy for the following set of data points:
For simplicity, this example is performed without a filter level (r = 0). Therefore, we start with step 3.
Step 3
We find the number of matches between the first sequence of m data points and all sequences of m data points.
for m = 1: number of matches = 3
(x1 = x1, x1 = x4, x1 = x8)
for m = 2: number of matches = 2
((x1, x2) = (x1, x2); (x1, x2) = (x4, x5))
Step 4
We find the number of matches between the first m + 1 data points and all sequences of m + 1 data points.
for m = 1: number of matches = 2
((x1, x2) = (x1, x2); (x1, x2) = (x4, x5))
for m = 2: number of matches = 1
((x1, x2, x3) = (x1, x2, x3))
Step 5
We divide the results of step 4 by the results of step 3, and then take the logarithm of that ratio.
for m = 1: log(2/3)
for m = 2: log(1/2)
Steps 6 + 7
We repeat steps 3, 4, and 5, but start at the second, third, fourth, fifth, sixth, seventh, eighth, and ninth (only for m = 1) data points. We add together all the logarithms computed in step 5 and divide the sum by (N − m). The result is then multiplied by −1. This is the approximate entropy (m, r, N).
for m = 1:
approximate entropy (1, 0, 10) = −1 · [log(2/3) + log(1/3) + log(1/1) + log(2/3) + log(1/3) + log(1/2) + log(1/3) + log(1/3) + log(1/2)]/9
for m = 2:
approximate entropy (2, 0, 10) = −1 · [log(1/2) + log(1/1) + log(1/1) + log(1/2) + log(1/1) + log(1/1) + log(1/1) + log(1/1)]/8
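Assuming natural logarithms, as in the original formulation,14 these sums evaluate to approximate entropy (1, 0, 10) ≈ 0.73 and approximate entropy (2, 0, 10) ≈ 0.17.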
Table 1. VBasic Program for Computing Approximate Entropy
A Fortran program for computing approximate entropy was published by Pincus et al.14
