Suppression of behavioral and physical responses defines the anesthetized state. This is accompanied, in humans, by characteristic changes in electroencephalogram patterns. However, these measures reveal little about the neuron- or circuit-level physiologic actions of anesthetics or about how information is trafficked between neurons. This study assessed whether entropy-based metrics can differentiate between the awake and anesthetized states in Caenorhabditis elegans and characterize emergence from anesthesia at the level of interneuronal communication.
Volumetric fluorescence imaging measured neuronal activity across a large portion of the C. elegans nervous system at cellular resolution during distinct states of isoflurane anesthesia, as well as during emergence from the anesthetized state. Using a generalized model of interneuronal communication, new entropy metrics were empirically derived that can distinguish the awake and anesthetized states.
This study derived three new entropy-based metrics that distinguish between stable awake and anesthetized states (isoflurane, n = 10) while possessing plausible physiologic interpretations. State decoupling is elevated in the anesthetized state (0%: 48.8 ± 3.50%; 4%: 66.9 ± 6.08%; 8%: 65.1 ± 5.16%; 0% vs. 4%, P < 0.001; 0% vs. 8%, P < 0.001), while internal predictability (0%: 46.0 ± 2.94%; 4%: 27.7 ± 5.13%; 8%: 30.5 ± 4.56%; 0% vs. 4%, P < 0.001; 0% vs. 8%, P < 0.001) and system consistency (0%: 2.64 ± 1.27%; 4%: 0.97 ± 1.38%; 8%: 1.14 ± 0.47%; 0% vs. 4%, P = 0.006; 0% vs. 8%, P = 0.015) are suppressed. These new metrics also resolve to baseline during gradual emergence of C. elegans from moderate levels of anesthesia to the awake state (n = 8). The results of this study show that early emergence from isoflurane anesthesia in C. elegans is characterized by the rapid resolution of an elevation in high-frequency activity (n = 8, P = 0.032). The entropy-based metrics mutual information and transfer entropy, however, did not differentiate well between the awake and anesthetized states.
Novel empirically derived entropy metrics better distinguish the awake and anesthetized states compared to extant metrics and reveal meaningful differences in information transfer characteristics between states.
General anesthesia-induced transitions between distinct brain states rely on dynamic changes within the information flow and content of neural networks
Entropy, a measure of the overall information content of a given signal, is useful in the investigation of communication in the central nervous system
How neural information flow and content change under general anesthesia is an incompletely explored question
Measuring the entropy of calcium transients in simultaneously imaged neurons of Caenorhabditis elegans can evaluate how isoflurane anesthesia alters interneuronal information coupling
These investigations led to the construction of three novel entropy metrics: state decoupling, internal predictability, and system consistency
These new metrics proved to be more sensitive indicators for differentiating the awake state from isoflurane-anesthetized states in C. elegans than the standard metrics of mutual information and transfer entropy
Information theoretic measures calculate how information is stored or shared in a system. If we conceptualize the nervous system as an information processing system, then it follows that these metrics can be applied to the investigation of neuronal dynamics and states. The roots of information theory were laid out in Claude Shannon’s seminal 1948 paper, “A Mathematical Theory of Communication.”1 Shannon established the system-agnostic paradigm for analyzing the transmission of information that now underpins all modern digital communications and machine learning2 and that has since been applied to investigate biologic information encoding in neuroscience and anesthesia.3–8 Shannon’s conceptualization of “entropy” as a measure of the overall information content of a given signal has been particularly useful in the investigation of communication within nervous systems. Entropy measures the information content of an individual signal, but further metrics derive naturally to quantify the information transfer between two signals. Mutual information quantifies the information content shared between two signals,1 and transfer entropy measures the time-dependent transfer of information from one specified signal to another.9
General anesthesia is characterized by transitions between distinct brain states, as observed in humans by electroencephalogram (EEG)10,11 and functional magnetic resonance imaging12 and in model organisms such as Drosophila melanogaster13 and Caenorhabditis elegans.14,15 Essentially all multicellular organisms are susceptible to volatile anesthetic agents at broadly similar concentrations.16,17 However, it remains unclear how the mechanism of action at the neuron alters communication at the nervous system or local network level. The predominant hypothesis is that the behavioral endpoint results from a disruption in coordinated neural activity, thus causing dissociation between the organism and its stimuli.11,15 Mutual information and transfer entropy have been employed to investigate anesthetized states,3,4,7,8 but use of these existing metrics presupposes how information transfer is expected to change under anesthesia. We address this limitation using a novel generalized model of time-dependent information transfer and hypothesize that applying this approach to multineuron activity recordings taken from C. elegans head ganglia15,18 will discriminate between the awake and anesthetized states in the animal and, furthermore, characterize the recovery of the nervous system to the awake state.
Materials and Methods
C. elegans Strains
All experiments were on young adult hermaphrodites of the transgenic strain QW1217 (zfIs124[Prgef-1::GCaMP6s]; otIs355[Prab-3::NLS::tagRFP]). GCaMP6s, a fluorescent calcium reporter, and nuclear-localized red fluorescent protein are both expressed pan-neuronally in this strain (gift of M. Alkema, University of Massachusetts, Worcester, Massachusetts). C. elegans were cultivated at 20°C on nematode growth medium seeded with Escherichia coli OP50.
Imaging of Neuronal Activity
GCaMP6s and red fluorescent protein fluorescence in the C. elegans head ganglia were captured in volumetric stacks using a dual inverted selective plane illumination microscope (Applied Scientific Instrumentation, USA) and a water-immersed 0.8-NA 40× objective (Nikon, USA). GCaMP6s and red fluorescent protein were respectively excited using 5-mW 488-nm and 561-nm lasers (Vortan Laser Technology, USA). Volumes for each fluorescent channel (GCaMP6s and red fluorescent protein) were obtained at a rate of 2 Hz. The animals were immobilized for imaging by encapsulation in a pad of permeable hydrogel consisting of 13.3% polyethylene glycol diacrylate (Advanced BioMatrix, USA) with 0.1% Irgacure (Sigma-Aldrich, USA).19 Hydrogel pads containing animals to be imaged were cured with ultraviolet light onto silanated glass coverslips, which were affixed to the bottom of 50-mm Petri dishes with vacuum grease. Petri dishes were then filled with 50 ml of S-basal buffer (100 mM NaCl, 50 mM KPO4 buffer, and 5 μg/ml cholesterol) as the immersion medium. Tetramisole was added to this buffer at 5 mM to further immobilize the animals.
Extraction of Neuronal Activity
For each animal imaged, 120 neurons were tracked in the head region using the nuclear-localized red fluorescent protein fluorophore, and their activity was extracted using the fluctuations in cytoplasmic GCaMP6s neurofluorescence. This tracking and extraction procedure was performed as a massively parallel computation executed at the Massachusetts Green High Performance Computing Center using computational techniques as previously detailed by Awal et al.15
We make the assumption that the activity state of any given neuron can be approximated by the instantaneous calcium concentration in the soma of that neuron. In reality, however, the state space of a neuron in a functioning network is far more complex, and limitations apply to the use of information theory in a system in which the state space of signals is incompletely observable. Calcium fluorescence does not report, for example, potentially relevant properties of the neuron under anesthesia, such as the level of endocytic activity.20 Furthermore, it remains unknown whether the state of a whole nervous system can be represented as the concatenation of the states of the individual neurons within it. We make these base assumptions due to technical limitations and theoretical gaps in knowledge.
Regimes of Isoflurane Anesthesia in C. elegans
C. elegans become anesthetized on exposure to isoflurane with a minimum alveolar concentration (MAC) value of approximately 3% at room temperature.14 At 17°C, isoflurane is 2.3 times more soluble in tissues such as muscle and fat than at 37°C,21 and consequently absorption of a larger quantity of isoflurane is required to produce a similar chemical potential gradient at cooler temperatures. A concentration of 3% isoflurane at 17°C is thus pharmacodynamically similar, with respect to its physical chemistry, to a concentration of 1.3% at 37°C. Alterations in C. elegans neuronal activity in response to isoflurane exposure were assessed using two experimental regimes: stepwise anesthetization and emergence.
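The equivalence can be restated as a one-line calculation, with the 2.3-fold solubility ratio as the only input:

$$c_{37^{\circ}\mathrm{C}} \;\approx\; \frac{c_{17^{\circ}\mathrm{C}}}{2.3} \;=\; \frac{3\%}{2.3} \;\approx\; 1.3\%$$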
Stepwise Anesthetization
Three 5-min-long neuronal activity recordings were taken from each animal (n = 10) after progressive equilibration to 0%, 4%, and 8% atmospheric isoflurane.14 The 4% and 8% exposures correspond to approximately 1.3 and 2.6 MAC, respectively, concentrations appropriate for obtaining consistent moderate and deep anesthesia. Imaging was performed at t = 30 min, 150 min, and 270 min from the start of the experiment, at isoflurane levels of 0%, 4%, and 8%, respectively. Isoflurane levels were stepped by exchanging the immersion medium for fresh S-basal buffer with the addition of 13 or 26 μl of pipetted isoflurane for 4% and 8% isoflurane, respectively. The atmosphere within the covered Petri dish was then equilibrated, during the time between imaging sequences, to a concentration of 4% or 8%, respectively, using continuous monitoring with an infrared spectrometer (Ohmeda 5250 RGM; GE Healthcare, USA) and instillation of isoflurane via syringe pump as necessary to maintain the targeted concentration.
Emergence
While encapsulated and immersed in 50 ml of S-basal buffer, C. elegans (n = 8) were imaged for 10 min in a preanesthetized state (i.e., isoflurane 0%). Then, 49 ml of buffer was withdrawn from the Petri dish and reserved, and each animal was equilibrated to an atmosphere of 4% isoflurane for 120 min while being immersed in 1 ml of buffer. The Petri dish was then unsealed, and the 49 ml of buffer solution was reintroduced, thus immediately reducing the buffer concentration of isoflurane to 1/50th of its previous level. A 120-min-long neuronal activity recording was initiated immediately after the dilution of buffer, capturing each animal’s emergence from the anesthetized state. Controls (n = 8) were performed by resting animals in room air for 120 min before commencement of postdilution activity recording.
Statistical Methods
Normalization of Neuronal Activity Traces
In recordings from the stepwise equilibration regime, neuronal activity intensity for all three recordings in each animal (0%, 4%, and 8%) was normalized against the average intensity across all neurons and time points in the 0% isoflurane recording. In recordings from the emergence regime, neuronal activity intensity in each recording was normalized to the average intensity across all neurons and time points in that recording. Because the animals are immobilized and encapsulated in a hydrogel for imaging, it is not possible to normalize to an alternative behavioral endpoint such as wakefulness.
Signal Power and Spectral Density Analysis
Signal power was calculated for each neuronal trace in each animal in the stepwise equilibration experimental regime and averaged by exposure condition (0%, 4%, and 8% isoflurane). Means were compared with one-way ANOVA and the two-tailed independent sample t test. The time differential for each neuronal trace in each 120-min-long emergence recording was calculated using total-variation regularization,22 and the power spectral density was calculated over sequential 12-min epochs by Fourier transformation. The power spectra were averaged across all 120 neurons tracked in each animal to produce a mean power spectrum for each animal at each epoch, with cumulative power spectra and median power frequencies calculated from these mean power spectra. Differences in mean spectral median frequencies between animals emerging from 4% isoflurane exposure and control animals were compared using the two-tailed independent sample t test.
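This pipeline can be sketched as follows in Python (the analysis in this study was performed in MATLAB). The trace layout, the substitution of np.gradient for the total-variation-regularized derivative, and all function names are illustrative assumptions:

```python
import numpy as np
from scipy.signal import periodogram

FS = 2.0                    # volumetric imaging rate, Hz
EPOCH = int(12 * 60 * FS)   # samples per 12-min epoch

def median_power_frequencies(traces, fs=FS, epoch=EPOCH):
    """traces: (n_neurons, n_samples) activity array for one animal.
    Returns one median power frequency per 12-min epoch.
    np.gradient stands in for the total-variation-regularized derivative."""
    dtraces = np.gradient(traces, 1.0 / fs, axis=1)
    mpf = []
    for k in range(traces.shape[1] // epoch):
        seg = dtraces[:, k * epoch:(k + 1) * epoch]
        f, pxx = periodogram(seg, fs=fs, axis=1)  # PSD per neuron
        cum = np.cumsum(pxx.mean(axis=0))         # cumulative mean spectrum
        mpf.append(f[np.searchsorted(cum, cum[-1] / 2.0)])
    return np.array(mpf)
```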
Statistical significance was defined at P < 0.05, with further highlighting of results demonstrating P < 0.01 and P < 0.001, respectively. A formal a priori power calculation for the number of experimental animals was not performed; previous experience with this animal model indicated that the number used here would be sufficient to detect significant changes.14,15 Data analysis was performed using MATLAB (MathWorks, USA).
Entropy Calculations and Derived Metrics
Entropy can be intuited as a measure of the “amount of information” present in a given signal. It can be calculated for any signal that consists of a series of discrete states. It is maximized when that signal is maximally disordered, i.e., when all possible discrete states within the signal occur with equal probability. Entropy can also be interpreted as quantifying the level of surprise or unpredictability in a signal. Given multiple signals, the joint entropy reflects the combined entropy of those signals. Mutual information, as introduced by Shannon1 and later named by Fano and Hawkins,23 represents the mutual dependence of two (or more) signals. Mutual information can also be understood as the amount of information provided by one signal about another, a formulation that is frequently applied to the quantification and modeling of coherence between neural signals, from unit spiking to EEG.3–6 The basic mathematical details of entropy, joint entropy, and mutual information are reviewed in the appendix, accompanied by a valuable and extensible graphical analogy.
Transfer entropy is inherently directional and time-dependent. For a system consisting of two signals, X and Y, the transfer entropy from X to Y (TEX→Y) represents the amount of information shared between the past of X and the future of Y that is not shared with the past of Y. It is therefore a measure of the extent to which the past of X causes the future of Y. This dependence on time requires separating each source into past and future, and thus the two-signal system of X and Y becomes a four-signal system consisting of XP and XF (the past and future of X) and YP and YF (the past and future of Y).9 The relationship between the information content in this four-signal model consisting of XP, YP, XF, and YF can be illustrated as the intersections of a four-set Venn diagram (fig. 1A) comprising 15 discrete numbered regions. The mutual information between the pasts of X and Y is as shown in figure 1B: the sum of the information in regions 3, 7, 11, and 15. The transfer entropy from signal X to signal Y, TEX→Y, can then be understood as the intersection of XP with the intersection of YF and the complement of YP (fig. 1C): the information in the past of X shared with the future of Y that is not shared with the past of Y itself, and hence the sum of the information in regions 9 and 13. A rigorous formal algebra connects these equivalent entropy and set representations.24 For simplicity, table 1 shows the full conversion between each set region and the joint entropies of XP, YP, XF, and YF. Thus, the amount of information in region 1 (i.e., the amount of information in XP that is not shared with any of YP, XF, and YF) is given by H(XP,XF,YP,YF) – H(XF,YP,YF). TEX→Y, given by the sum of regions 9 and 13, is H(XP,YP) + H(YP,YF) – H(XP,YP,YF) – H(YP). Once illustrated graphically, it becomes obvious that we are not limited to pre-existing metrics such as mutual information or transfer entropy: we may select any additive combination of the 15 regions of the four-signal model to compare between neuron pairs in the awake and anesthetized states. Thus, we can empirically construct new metrics, with explicit physiologic rationales, that have the potential to better characterize how neuron-to-neuron information transfer is altered in the anesthetized state. The null hypothesis is that there should be no significant difference in entropy measurements at different levels of anesthesia; by extension, no grouping of entropy calculations (e.g., mutual information, transfer entropy) should differ significantly across different levels of anesthesia.
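To make these identities concrete, the following minimal Python sketch (the study's own analysis used MATLAB) computes transfer entropy for a pair of already-quantized signals directly from the joint-entropy identity above; the helper names are illustrative assumptions:

```python
import numpy as np

def H(*signals):
    """Joint Shannon entropy (bits) of one or more aligned discrete signals."""
    states = np.stack(signals, axis=1)          # (n_samples, n_signals)
    _, counts = np.unique(states, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def transfer_entropy(x, y, lag=2):
    """TE_{X->Y} from the joint-entropy identity for regions 9 + 13:
    H(XP,YP) + H(YP,YF) - H(XP,YP,YF) - H(YP).
    lag = 2 samples corresponds to the 1-s shift at the 2-Hz volume rate."""
    xp, yp, yf = x[:-lag], y[:-lag], y[lag:]
    return H(xp, yp) + H(yp, yf) - H(xp, yp, yf) - H(yp)
```

The same H() helper can produce any of the 15 regions via the conversions in table 1; region 1, for example, is H(xp, xf, yp, yf) - H(xf, yp, yf).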
Conversion Table from Combinations of Information Regions into Combinations of Entropy Measures

A four-signal model for entropy-based information transfer between neuronal pairs. (A) Venn diagram of the four-signal model of past and future information content of two sources X and Y. The 15 regions are numbered by binary combinations of XP, YP, XF, and YF with values of 1, 2, 4, and 8, respectively. The region colors are assigned according to shading combinations of the base colors shown in regions 1, 2, 4, and 8. (B) The entropy quantity of mutual information, representing the information that is common between the past states of source X and Y. Graphically, this corresponds to the intersection of the regions XP and YP, i.e., the combination of regions 3, 7, 11, and 15, described in set notation as XP ∩ YP, the information content of which is therefore H(XP) + H(YP) – H(XP,YP). (C) Entropy quantity of transfer entropy from X to Y (TEX→Y), representing the information that is present in the future state of Y (YF) that can be predicted from the past of X (XP) but not the past of Y (YP) and therefore represents causal transmission of information from X to Y. Graphically, this corresponds to the intersection of regions XP and YF but not YP, i.e., the combination of regions 9 and 13 described in set notation as XP ∩ (YF ∩ Y′P), the information content of which is therefore H(XP,YP) + H(YP,YF) – H(XP,YP,YF) – H(YP).
Four-signal Model in C. elegans during Stable Levels of Anesthesia
The four-signal model requires that the continuous traces of C. elegans neuronal activity data be discretized into a set number of quantized activity states. The GCaMP fluorescence data were quantized to four levels using optimum thresholds derived from Otsu's algorithm, producing a two-bit value for each neuron at each imaging time step for each animal.25,26 Thresholds were calculated separately for each individual animal, which compensates for issues such as experimental variability in fluorophore expression between animals or strength of illumination, and the same per-animal thresholds were then used across all states (0%, 4%, and 8% isoflurane). This quantization scheme was chosen due to previous success with this approach.14 Sources X and Y are any pair of neurons drawn from the pool of 120 neurons whose activity was extracted, thus generating 14,400 neuron pairs, with XP, YP, XF, and YF generated by time shifting by 1 s (i.e., by two samples). This four-signal model can assess the tendency for a neuronal signal (i.e., XP) to predict the next 1 s of either that same signal (XF) or of another neuron's signal (YF). Entropy content was also normalized by the joint entropy H(XP,XF,YP,YF), which corresponds to the sum of all 15 regions, or XP ∪ XF ∪ YP ∪ YF; this allows the proportional distribution of entropy to be analyzed independently of changes in total entropy across treatment conditions. We compared the entropy distributions of mutual information, transfer entropy, and new empirically defined metrics by pooling the entropy content of all possible neuron pairs in each animal by treatment condition and comparing the mean entropy content in each information region across conditions with ANOVA, with multiple comparisons performed with a Tukey–Kramer adjustment.
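A sketch of this discretization step, assuming scikit-image's multi-Otsu implementation as a stand-in for the per-animal thresholding described above (the original analysis was in MATLAB; names and shapes are assumptions):

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def quantize_traces(traces):
    """Quantize one animal's fluorescence traces to four levels (two bits)
    with thresholds computed once per animal, as described in the text.
    traces: (n_neurons, n_samples) float array."""
    thresholds = threshold_multiotsu(traces.ravel(), classes=4)
    return np.digitize(traces, thresholds)       # integer levels 0..3

def four_signal_views(x, y, lag=2):
    """Aligned XP, XF, YP, YF views for one neuron pair
    (lag of 2 samples = 1 s at the 2-Hz volume rate)."""
    return x[:-lag], x[lag:], y[:-lag], y[lag:]
```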
Four-signal Model in C. elegans during Emergence
Mutual information, transfer entropy, and other new metrics were calculated as in the preceding section but under a moving window of ± 2 min (i.e., ± 240 samples) across the 120 min of emergence neuronal activity data. Average time courses were time-smoothed using a moving mean operator with a 100-s window to better reveal the trend of the data while suppressing superimposed periodic variations, and 95% CIs were calculated for averaged time courses.
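A sketch of the moving-window evaluation, again in Python; the window step size is not specified in the text and is an illustrative assumption here:

```python
import numpy as np

def sliding_metric(x, y, metric, half_win=240, step=10):
    """Evaluate an entropy metric for one quantized neuron pair under a
    moving window of +/- half_win samples (+/- 2 min at 2 Hz). `metric`
    is any function of two windows, e.g., the transfer_entropy sketch."""
    centers = range(half_win, len(x) - half_win, step)
    return np.array([metric(x[c - half_win:c + half_win],
                            y[c - half_win:c + half_win]) for c in centers])

def moving_mean(values, win):
    """Moving-mean smoothing; with the step above, a 100-s window
    corresponds to win = 100 * 2 / step = 20 points."""
    return np.convolve(values, np.ones(win) / win, mode="same")
```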
Results
Isoflurane Exposure Suppresses Power and the Total Entropy of Neuronal Activity
Figure 2A (panels a through c) shows examples of the gross alterations in neuronal activity that accompany anesthesia with isoflurane at 0 MAC, 1.3 MAC, and 2.6 MAC (0%, 4%, and 8% isoflurane, respectively). The mean signal power in anesthetized animals decreases compared to unanesthetized animals (fig. 2A, panel d). After discretization, the total entropy also reduces with increasing isoflurane levels (fig. 2A, panel e). The lower activation level of the neurons overall results in a smaller number of accessible neurologic states and consequently a lower total entropy.
Isoflurane exposure suppresses power and Shannon entropy of neuronal activity signals, and early emergence from isoflurane exposure is characterized by a quickly resolving high-frequency spectral shift. (A, panels a through c) Fluorescent optical activity of 120 neurons in the head of C. elegans equilibrated to atmospheres of 0%, 4%, and 8% isoflurane, respectively. Color represents normalized GCaMP6s fluorescence activity in a cytoplasmic shell surrounding each tracked neuronal nucleus. (A, panel d) Decreasing total signal power under increasing concentrations of isoflurane. Mean signal power ± SD of neuronal activity traces recorded from animals equilibrated to room air or 4% or 8% isoflurane. (A, panel e) Mean Shannon entropy ± SD of quantized neuronal activity traces recorded from animals equilibrated to room air or 4% or 8% isoflurane. (B, panels a and b) Intensity traces of neuronal activity in 120 neurons in the C. elegans head ganglia before isoflurane exposure and as the animal emerges from equilibration to 4% isoflurane over 120 min. (B, panel c) Mean power spectral density of the time-differentiated neuronal activity traces 0.2 and 0.8 h postexposure to either room air or 4% isoflurane. (B, panel d) Mean spectral median power frequency ± SD of neuronal traces in animals equilibrated to room air or 4% isoflurane, calculated at 12-min epochs postexposure. The error bars show the SD. *P < 0.05; **P < 0.01; ***P < 0.001.
A Resolving High-frequency Spectral Shift Characterizes Early Emergence from Isoflurane
Isoflurane exposure at 4% in C. elegans has previously been demonstrated to cause a shift in the mean spectral power of neuronal activity toward higher frequencies. This effect is grossly visible in figure 2A (panel b): when compared to the activity at 0% in figure 2, A (panel a) and B (panel a), the activity at 4% appears less settled and more jittery, even though the total signal power and entropy are lower. This behavior appears to characterize the general shift from ordered activation and deactivation of suites of networked neurons in the baseline unanesthetized state toward less organized and rapidly shifting dynamics in the isoflurane-induced anesthetized state.15 Figure 2B (panel b) shows the progressive resolution of this neuronal activity in emerging from 4% isoflurane to 0% isoflurane. At 0.2 h, we find that high-frequency content is greatly enriched after exposure to 4% isoflurane compared to controls. However, this phenomenon resolves quickly during anesthesia emergence, and the power spectra of emerging and control animals do not appear different by 0.8 h (fig. 2B, panel c). This result is further quantified by measuring the median power frequency during each 0.2 h epoch during emergence (n = 8 emerging from 4% isoflurane, n = 8 controls). The median power frequency in C. elegans exposed to 4% isoflurane was significantly elevated compared to control animals at 0.2 h postexposure (fig. 2B, panel d; two-tailed independent sample t test, P = 0.032) with no significant differences between treatment groups observed beyond that time point.
Exposure to Isoflurane Significantly Alters the Distribution of Entropy
The proportional entropy content in each information region was averaged across all neuron pairs across all animals within each isoflurane exposure level (0%, 4%, and 8%) as shown in figure 3, with the numbering and coloring of regions consistent with figure 1A. For comparison, supplemental figure S1A (https://links.lww.com/ALN/D133) shows the absolute distribution of entropy content across these regions (which also reflects the decline in overall entropy with deeper levels of anesthesia seen in fig. 2A, panel e), and supplemental figure S1B (https://links.lww.com/ALN/D133) shows the proportional distribution of entropy under an alternative encoding strategy that places greater emphasis on the time course of neuron activity at the expense of a cruder quantization of the activity level into fewer bins.
Exposure to isoflurane significantly alters the distribution of entropy within the four-signal information transfer model. Mean entropy content ± SD in information regions 1 to 15 of the four-signal information transfer model in animals exposed to room air or 4% or 8% isoflurane (n = 10). Entropy content was calculated for each neuron pair recorded in each animal (14,400 neuron pairs/animal/exposure condition), normalized to the total joint entropy of the four-signal model, and then averaged by exposure condition. The error bars show the SD.
Isoflurane exposure at 4% and 8% in C. elegans is associated with significantly elevated proportional entropy in regions 1, 2, 4, and 8, representing the quantity of information contained in each of the respective signals in the four-signal model (XP, YP, XF, YF) that is not shared with any of the other three signals. Proportional entropy content is also significantly reduced in regions 5 and 10 under anesthesia, representing the amount of information in a neuron's past state (XP or YP) that is predictive of its future state (XF or YF, respectively) independent of information present in another neuron (Y or X, respectively). Finally, we observe that the proportional content in region 15 is also lower under anesthesia; this region is intuitively the information shared among all four signals, XP ∩ XF ∩ YP ∩ YF.
From these results, we can empirically construct and name new metrics by grouping those information regions observed to exhibit altered entropy content in anesthetized animals, provided the groupings are conceptually meaningful regarding how information transfer could be altered in the anesthetized state. Three such metrics, implemented in the sketch after this list, are:

- State decoupling. The sum of regions 1, 2, 4, and 8 describes the amount of information present in only one of the signals XP, YP, XF, and YF and hence the extent to which neurons become decoupled from each other and from themselves.
- Internal predictability. The sum of regions 5 and 10 describes the extent to which the past state of a neuron is predictive of its future behavior, independent of other sources of information.
- System consistency. Region 15 alone represents the amount of state information that is present in both the past and future of both sources X and Y and hence the extent to which the system remains in a mechanistically consistent condition. This metric is similar to a system-wide expansion of the concept of multi-information, as previously measured within the circumscribed and well defined group of neurons that comprise the C. elegans motor circuit.14
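Under the assumptions of the earlier sketches (quantized integer signals, Python in place of the study's MATLAB), the three metrics can be written directly as joint-entropy combinations. These identities are reconstructed here from the region definitions above and the set-to-entropy conversions of table 1, not copied from the study's figure 4:

```python
import numpy as np
from itertools import combinations

def H(*signals):
    """Joint Shannon entropy (bits) of one or more aligned discrete signals."""
    states = np.stack(signals, axis=1)
    _, counts = np.unique(states, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def new_metrics(xp, xf, yp, yf):
    """State decoupling (regions 1+2+4+8), internal predictability
    (regions 5+10), and system consistency (region 15), each normalized
    by the total joint entropy H(XP,YP,XF,YF) as in the text."""
    total = H(xp, yp, xf, yf)
    # Unique information per signal, e.g., region 1 = total - H(YP,XF,YF).
    decoupling = (4 * total - H(yp, xf, yf) - H(xp, xf, yf)
                  - H(xp, yp, yf) - H(xp, yp, xf))
    # Regions 5 and 10: I(XP;XF | YP,YF) + I(YP;YF | XP,XF).
    predictability = (H(xp, yp, yf) + H(xf, yp, yf) - total - H(yp, yf)
                      + H(yp, xp, xf) + H(yf, xp, xf) - total - H(xp, xf))
    # Region 15: four-way co-information via inclusion-exclusion.
    sigs = (xp, yp, xf, yf)
    consistency = sum((-1) ** (k + 1) *
                      sum(H(*c) for c in combinations(sigs, k))
                      for k in range(1, 5))
    return tuple(m / total for m in (decoupling, predictability, consistency))
```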
Pre-existing Metrics Poorly Distinguish the Awake versus the Anesthetized State
Mutual information and transfer entropy are represented graphically in figure 4 (A, panel a, and B, panel a, respectively). These metrics were calculated across all possible functional neuron pairs in the cohorts of animals exposed to stepwise equilibrated 0%, 4%, and 8% isoflurane (n = 10). No statistically significant difference is seen in mutual information between anesthetized states and the control state (fig. 4A, panel b; 0% isoflurane: 3.67 ± 1.35%, 4% isoflurane: 2.26 ± 1.71%, 8% isoflurane: 2.29 ± 0.83%; ANOVA, P = 0.043; Tukey–Kramer 0% vs. 4%, P = 0.069; Tukey–Kramer 0% vs. 8%, P = 0.075). Transfer entropy was found to be elevated in the 4% isoflurane-exposed state relative to controls (fig. 4B, panel b; 0% isoflurane: 0.58 ± 0.08%, 4% isoflurane: 0.98 ± 0.27%, 8% isoflurane: 0.71 ± 0.34%; ANOVA, P = 0.004; Tukey–Kramer 0% vs. 4%, P = 0.004; Tukey–Kramer 0% vs. 8%, P = 0.483), but not in the 8% isoflurane-exposed state.
The anesthetized state can be characterized by shifts in novel entropy-based measures of information transfer between neuron pairs. (A through E, panel a) Selected information region clusters in the four-signal model representative of information theoretic metrics: mutual information, transfer entropy, state decoupling, internal predictability, and system consistency. Colored Venn areas represent the combined information regions that compose each metric. The equations used to calculate each metric from joint entropies are also shown. (A through E, panel b) Mean proportional entropy content ± SD in the selected information region clusters in neuronal activity trace pairs recorded from animals exposed to room air or 4% or 8% isoflurane. The error bars show the SD. *P < 0.05; **P < 0.01; ***P < 0.001.
In comparison, figure 4 (C to E) shows the graphical representation and entropy formulae for state decoupling, internal predictability, and system consistency, as well as the effect of isoflurane anesthesia upon these new metrics. State decoupling is elevated in both the 4% and 8% isoflurane exposure groups relative to controls with a high degree of significance (fig. 4C, panel b; 0% isoflurane: 48.8 ± 3.50%, 4% isoflurane: 66.9 ± 6.08%, 8% isoflurane: 65.1 ± 5.16%; ANOVA, P < 0.001; Tukey–Kramer 0% vs. 4%, P < 0.001; Tukey–Kramer 0% vs. 8%, P < 0.001), whereas internal predictability is suppressed, also with a high degree of significance, in both the 4% and 8% isoflurane cohorts relative to controls (fig. 4D, panel b; 0% isoflurane: 46.0 ± 2.94%, 4% isoflurane: 27.7 ± 5.13%, 8% isoflurane: 30.5 ± 4.56%; ANOVA, P < 0.001; Tukey–Kramer 0% vs. 4%, P < 0.001; Tukey–Kramer 0% vs. 8%, P < 0.001). The metric of system consistency (as a generalization of multi-information) is also significantly suppressed under 4% and 8% isoflurane exposure relative to controls (fig. 4E, panel b; 0% isoflurane: 2.64 ± 1.27%, 4% isoflurane: 0.97 ± 1.38%, 8% isoflurane: 1.14 ± 0.47%; ANOVA, P = 0.004; Tukey–Kramer 0% vs. 4%, P = 0.006; Tukey–Kramer 0% vs. 8%, P = 0.015).
Emergence from Isoflurane Anesthesia Is Well Characterized by Novel Entropy-based Metrics
Mutual information, transfer entropy, state decoupling, internal predictability, and system consistency were calculated for all possible functional neuron pairs under a moving window across a 120-min emergence period (n = 8 emerging from 4% isoflurane, n = 8 controls). The average values for these metrics were then calculated as smoothed means with 95% CI over the emergence period as shown in figure 5 for each metric, respectively (4% isoflurane emergence in the purple band and controls in the gray band). We observe that the degree of separation in each metric between the 4% isoflurane and control conditions compares well with the equivalent measurements at stable levels of anesthesia. The separation of the means resolves over the imaging period for all five assessed metrics. However, the separation of means for mutual information and transfer entropy (fig. 5, A and B) is small, and any difference resolves relatively early in the emergence timeline. The novel metrics of state decoupling (fig. 5C) and internal predictability (fig. 5D) demonstrate very distinct separation of means between the anesthetized and control conditions with resolution of these differences appearing much later in the emergence timeline, more than an hour after beginning emergence from anesthesia. System consistency (fig. 5E) begins with a smaller, though clear, separation of means and appears to resolve along a timeline most consistent with that seen for the more classical measurement of median power frequency as shown in figure 2B (panel d).
Alterations in entropy-based measures of neuronal-pair information transfer resolve to baseline levels nonlinearly as animals emerge from isoflurane anesthesia. (A through E) Time-smoothed means ± 95% CI of proportional entropy content in information region clusters corresponding to mutual information, transfer entropy, state decoupling, internal predictability, and system consistency, respectively, in animals emerging from anesthesia with 4% isoflurane and in room air–exposed controls. The metrics as calculated for individual animals are also shown for each measurement type (n = 8 isoflurane exposed, n = 8 controls).
A slow drift is seen in the smoothed means of control animals across all metrics. This is most likely a technical artifact, arising from progressive photobleaching of the GCaMP fluorophores under prolonged imaging coupled with a tendency toward diminishing sensory input and more quiescent neuronal activity in C. elegans specimens that are encapsulated and immobile. The relevant findings are the resolution of the differences between the isoflurane and control populations.
Entropy-based Measures Resolve Nonlinearly as Animals Emerge from Isoflurane Anesthesia
Although the bands of the means of the metrics in figure 5 are smooth, there are considerable periodic fluctuations in the underlying signals of the individual experimental animals, in both the isoflurane and control cohorts, over short time scales. The trends reliably resolve over the emergence period imaged, but this appears to occur nonlinearly with regard to the underlying individual metrics. Indeed, the nonlinearity of emergence is evident by visual inspection of the neuronal traces displayed in the example trial (fig. 2B, panel b), in which the animal clearly passes through periods of relative neuronal quiescence and higher neuronal activity over the 2 h of the recording.
Discussion
We applied a generalized model of neuron-to-neuron information transfer to construct novel entropy metrics beyond the standard forms of mutual information and transfer entropy. Mutual information and transfer entropy fared poorly in distinguishing awake and anesthetized states in C. elegans under isoflurane anesthesia. Rather than rely on preconceptions about how anesthesia affects neuron-to-neuron information transfer, our approach allows for conceptualization of new metrics based on empirical measurements. These new metrics, named state decoupling, internal predictability, and system consistency, better differentiate the awake state from isoflurane-anesthetized states in C. elegans.
What can this tell us about how neuron-to-neuron communication is altered during anesthesia? The increase in state decoupling in anesthetized animals plausibly describes individual neurons becoming decoupled from their previous state and the state of surrounding neurons, representative of an induced disorder of the usual functioning of the neurologic system at the level of neuron-to-neuron communication. This result corresponds well with observations across species that the anesthetized state is associated with apparent informational decoupling. Similarly, the decrease in system consistency between neurons in anesthetized animals can be interpreted as the inverse effect; as the nervous system becomes anesthetized, the amount of information shared between neurons decreases, and the overall predictability of neuronal activity, even by any given neuron’s own past activity, is also reduced.13–15,27,28 This latter effect is captured by a reduction in the metric of internal predictability in the anesthetized state. There is necessarily some concordance between the physiologic interpretations of these new metrics. A gain of relative entropy content in one information region must be made up for by the loss of relative entropy content in others. The elevation of relative entropy content in the information regions defining state decoupling is balanced by the loss of relative entropy content in the regions defining internal predictability and system consistency. Our grouping of certain information regions into particular metrics is a method for expressing and interpreting an overall shift in the shape of entropy content distribution between the awake and anesthetized states. The broader utility of the specific metrics we have defined in this study to explain neuronal dysfunction requires further investigation in other systems.
One important consideration is whether these effects are simply due to an overall reduction in the information content of individual neurons as the animal becomes more deeply anesthetized. Indeed, we do observe that the mean entropy of individual neuronal activity signals in C. elegans significantly decreases in a dose-dependent fashion after exposure to isoflurane, accompanied by a suppression of overall activity. However, all entropy-based metrics calculated were normalized to the total joint entropy of the four-signal model used to generate them. Consequently, we demonstrate that changes in overall entropy content in the system cannot account for observed alterations in the normalized entropy content of information regions or any derivative metrics. Rather, changes in our entropy-based metrics reflect changes in the distribution of entropy among different information regions and therefore alterations in how the information content of each neuronal pair is communicated. We find it fundamentally interesting that our entropy-based metrics strongly distinguish between the awake and anesthetized states in a seemingly binary fashion, while metrics assessing the mean behavior of individual neurons, such as mean entropy and mean signal power, change in a graded fashion as the animal is exposed to greater levels of anesthesia. The differential sensitivity of these metrics suggests that breakdown in neuron-to-neuron communication in the anesthetized state is not merely a linear function of global suppression of neuronal activity.
Transfer entropy was significantly elevated in C. elegans equilibrated to 4% isoflurane but not in those equilibrated to 8% isoflurane when compared to controls. How does this paradoxical result arise? Transfer entropy measures the extent to which the future of a source (e.g., YF) is influenced by the past of another source (e.g., XP) but not by its own past (e.g., YP). If there is an increase in high-frequency activity and instability in Y (i.e., less internal predictability in Y), then the past of X can have proportionally more influence on the future of Y than Y's own past. Hence, transfer entropy can be elevated at 4% isoflurane. Transfer entropy may be lower when there is greater regularity in the nervous system, as at 0% isoflurane, or when neuronal activity becomes comparatively static, as at 8% isoflurane.
A limitation of this current study is that neurons are not individually identified. One of the strengths of C. elegans as a model system is the ability to leverage the availability of a comprehensive neuronal connectome29 and genetic fate mapping data,30–32 making neuronal identification a potentially very powerful approach. Indeed, we previously applied entropy to the analysis of a circuit of five manually identified neurons (AVA, AVB, AVD, AVE, and RIM).14 Recent advances in the polychromatic labeling of C. elegans neurons bring automated neuronal identification at scale to the cusp of practicability.33 Neuronal identification allows for comparison of specific neuron pairs across subjects, precisely identifying information transfer between neurons that are known to be mechanistically linked, or alternatively allows for segmentation of the neuronal population based on activity.15 In the future, we anticipate that it will become possible to reconcile specific neuronal behaviors under anesthesia with the chemical and electrical synaptic density established by neuronal identification and the connectome.14 During deep anesthesia, neuronal dynamics become relatively static, with neurons apparently locked into specific levels of activation. Does this represent a freezing of state in some moment of anesthesia, or is this pattern reproduced across individuals based upon the action of isoflurane on the underlying fixed connectome?29 Neuronal identification will clarify which of these plausible hypotheses is true. This would fundamentally influence our understanding of the mechanism of action.
Neuronal identification would also allow us to further probe the nonlinear nature of anesthetic emergence, including what appear to be periods of global neuronal quiescence. We demonstrate that the anesthetized state and early emergence are characterized by an elevation in high-frequency activity. However, once this effect abates, what remains appears to be periods of suppressed neuronal activity sporadically interspersed with periods of activity. This pattern is evident in figure 2B (panel b); it is present in almost all 4% isoflurane emergence trials but is not present in any control trials. Likewise, we note significant short-term oscillations in the entropy-based metrics of individual trials displayed in figure 5. C. elegans is known to enter periods of lethargus, a physiologic sleep-like state characterized by a lack of physical movement and global neuronal quiescence.34 The periods of broad neuronal quiescence we observe here appear similar to the neuronal activity patterns of C. elegans lethargus, suggesting the animal may be passing in and out of a “sleep-like” state after isoflurane emergence. In the future, neuronal identification within the C. elegans imaging assays will allow us to probe this hypothesis further, as we will be able to identify and measure the activity of specific sleep-associated neurons, such as the interneuron RIS, in anesthetized and emerging animals.
Work in human subjects has been largely based on studies analyzing EEG and functional magnetic resonance imaging data in anesthetized volunteers, although these approaches can only address the question of communication in the nervous system on a region-to-region rather than a neuron-to-neuron basis.35–38 Our work applies an information theoretic approach to anesthesia at a much finer scale, but our findings are broadly consistent with a network inefficiency conceptualization of anesthesia. Network disruption between cortical “nodes” may well be an emergent outcome of network disruption at the subcortical level. We observed, during the emergence trials, that entropy metrics are strikingly nonsmooth in control and anesthetized animals alike, even though stable trends are produced. Neuron-to-neuron information transfer is natively in a state of rapid flux. This finding is particularly interesting in light of studies demonstrating a rhythmic oscillation between brain states, as measured by local field potential and EEG, in animals exposed to steady-state concentrations of anesthetics.39–41 It is known that C. elegans has transient metastable states that encode specific gross behaviors and that C. elegans transitions between these states as it performs different behaviors.42 We previously reproduced these state-space findings in awake worms and demonstrated that under isoflurane anesthesia the ability to form these representative metastable states is lost.15 Our findings lend further credence to approaching emergence from anesthesia to wakefulness as a stochastic process,43 in which the probability of the nervous system being in activity states associated with wakefulness versus sedation increases over time.
Recent investigations on the internal operation of the clinical Bispectral Index monitor have revealed algorithms that it applies to the frontal EEG.44 While the Bispectral Index monitor does not explicitly calculate entropy, an algorithm that it most commonly applies under general anesthesia can be effectively restated in terms of the Wiener entropy across the low gamma EEG band (40 to 47 Hz) relative to the Wiener entropy across the whole power spectrum (0 to 47 Hz).45 It makes sense that information theory should remain an appropriate tool to assess the effects of anesthetics even over the enormous scale difference between the C. elegans nervous system and the primate brain.
Research Support
Supported by National Institutes of Health (Bethesda, Maryland) grants R35 GM145319 and R01 GM121457 and by departmental sources.
Competing Interests
Dr. Connor has consulted for Teleflex, LLC (Wayne, Pennsylvania), on issues regarding airway management and device design and for General Biophysics, LLC (Wayland, Massachusetts), on issues regarding pharmacokinetics. These activities are unrelated to the material in this article. The other authors declare no competing interests.
Supplemental Digital Content
Supplemental Figure 1, https://links.lww.com/ALN/D133
Appendix
Given a signal source X, the entropy H(X) represents the information content of that signal. Assuming that X can be in a finite number of states [x1, x2, x3, …, xn] with probabilities [P(x1), P(x2), P(x3), …, P(xn)], then the information content is given by the Shannon entropy function:

$$H(X) = -\sum_{i=1}^{n} P(x_i)\,\log_2 P(x_i) \tag{A1}$$
where H(X) is in bits if the logarithm is base-2. The entropy is maximal when the states xi are all equally probable, and the entropy drops to 0 if any one state is certain. If a process involves two signal sources, X and Y, then it is natural to ask what information is contained in both sources and whether any information is shared between the two. Assuming X and Y have states x and y, respectively, the entropy of both sources together (i.e., the joint entropy) is

$$H(X,Y) = -\sum_{x}\sum_{y} P(x,y)\,\log_2 P(x,y) \tag{A2}$$
and the mutual information, denoted I(X;Y), which is the information that is common to both sources, is given by

$$I(X;Y) = \sum_{x}\sum_{y} P(x,y)\,\log_2 \frac{P(x,y)}{P(x)\,P(y)} \tag{A3}$$
When both sources X and Y are independent of each other, the mutual information is 0.
Such equations are challenging to manipulate and conceptualize, and so it is helpful to note that it is possible to rewrite equation A3 as a simple combination of the underlying entropies:

$$I(X;Y) = H(X) + H(Y) - H(X,Y) \tag{A4}$$
Problems concerning amounts of information have a dual representation based on additive set functions.23 This dual relationship can be considerably more intuitive. The idea is rather like the equivalence between the time domain and frequency domain of a signal: although these representations are equivalent, some problems are easier to solve in one domain than the other, and thus it is often useful to convert between the two domains. For example, if X and Y represent the corresponding sets for sources X and Y, then the mutual information I(X;Y) shared between the sources X and Y is simply X ∩ Y, and the joint entropy is X ∪ Y. Hence,

$$X \cap Y = X + Y - (X \cup Y) \tag{A5}$$

where addition and subtraction denote the corresponding sums and differences of information content.
Consequently, in its dual representation as an additive function of sets, the mutual information shared between X and Y can be immediately visualized and calculated as

$$I(X;Y) = X \cap Y = H(X) + H(Y) - H(X,Y) \tag{A6}$$
Equations A3 and A6 are equivalent representations, yet the graphical form of equation A6 is significantly more intuitive to manipulate and more useful for constructive reasoning.
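A short numeric sanity check of equations A1 through A4, as a Python toy with all names and data assumed:

```python
import numpy as np

def shannon_entropy(p):
    """Equation A1: H = -sum p_i log2 p_i over nonzero probabilities."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Two correlated binary sources: y copies x 90% of the time.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 100_000)
y = np.where(rng.random(x.size) < 0.9, x, 1 - x)

px = np.bincount(x) / x.size                             # marginal of X
py = np.bincount(y) / y.size                             # marginal of Y
pxy = np.histogram2d(x, y, bins=2)[0].ravel() / x.size   # joint of (X, Y)

hx, hy, hxy = (shannon_entropy(p) for p in (px, py, pxy))
print(hx + hy - hxy)  # mutual information via equation A4, approx. 0.53 bits
```

Setting the copy probability to 0.5 makes X and Y independent and drives the printed mutual information to approximately 0, as noted above.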