A while back, a friend of mine asked me to help out in a longitudinal project trying to predict cognitive decline in (otherwise healthy) older adults (Mulligan, Areshenkoff, & Smart, 2015). One of the measures we used was resting state EEG, but since it’s not obvious where you would look in an EEG signal for predictors of future decline, we decided to look at general signal complexity at multiple scalp locations, using a measure called sample entropy (SampEn).

The general idea behind the approach is this: we have a time series (in this case, the scalp voltage measured at a sensor), which we assume is the output of some underlying nonlinear dynamical system (brain activity). This system is high dimensional, but we measure only a unidimensional result of the system (the voltage at a single location). This is clearly not enough to reconstruct the system, but we can, in some sense, “extract” additional dimensions by looking at the signal at different time lags, and then “projecting” the result into a higher dimensional space.

For example, the voltage at a single time point is 1-dimensional, but if we take $m$ samples separated by a lag $\tau$, then we get the $m$-dimensional vector $(x_t, x_{t+\tau}, \dots, x_{t+(m-1)\tau})$. The value $m$ is then called the embedding dimension, and $\tau$ the time lag. We can then ask the question: if two points are similar in embedding dimension $m$ (that is, closer than some distance $r$), what is the probability that they will remain similar in dimension $m+1$? Letting $A$ be the probability that two points are similar in dimension $m+1$, and $B$ be the probability that two points are similar in dimension $m$, SampEn estimates

$$\text{SampEn} = -\log\frac{A}{B}.$$

As an intuition aid, a constant or strictly periodic signal has zero entropy, whereas a signal with no correlation structure (such as white noise) has maximum entropy (what “maximum entropy” is, exactly, depends on the sample size and the parameters of the algorithm). SampEn thus provides a measure of the complexity of the signal.
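To make this concrete, here is a minimal (and deliberately naive) sketch of the SampEn computation. The package itself is in R, but I'll use Python here for illustration; the function name `sampen` and the defaults are mine, not part of any package:

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """Sample entropy of a 1-d series: -log(A/B), where B counts pairs of
    m-length embedding vectors within Chebyshev distance r, and A counts
    pairs that remain within r when extended to length m+1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Delay-embed with lag 1: each row is (x_i, ..., x_{i+m}), length m+1
    emb = np.array([x[i:i + m + 1] for i in range(n - m)])

    def count_similar(dim):
        # Pairwise Chebyshev distances between the dim-length prefixes
        v = emb[:, :dim]
        d = np.max(np.abs(v[:, None, :] - v[None, :, :]), axis=-1)
        # Count pairs (i < j) within r, excluding self-matches
        return np.sum(d[np.triu_indices(len(v), k=1)] < r)

    B = count_similar(m)
    A = count_similar(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)
sine = np.sin(np.linspace(0, 8 * np.pi, 1000))
# A smooth periodic signal should score far lower than white noise
print(sampen(sine, m=2, r=0.2), sampen(noise, m=2, r=0.2))
```

This brute-force version materializes the full distance matrix, which is fine for a toy example but exactly the memory problem discussed below.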

Multiscale sample entropy (MSSampEn) is an extension of SampEn that examines the complexity of a time series at different time scales. The most common approach is straightforward: we partition the series $x = (x_1, \dots, x_N)$ into consecutive bins of length $\epsilon$, and then compute the mean of $x$ within each bin, giving a new time series of length $\lfloor N/\epsilon \rfloor$ (this procedure is called coarse-graining). SampEn can then be computed for various values of $\epsilon$, estimating the complexity of $x$ at increasing time scales. Time series which are “locally” unpredictable, but which exhibit “low frequency” structure, might then be expected to show increasing entropy at longer time scales. Conversely, white noise is “scale invariant” in some sense, and so would show constant entropy at any time scale. This is easy to show by setting the similarity threshold $r$ at some constant percentage of the signal variance, which we can accomplish by scaling the series to have unit variance after coarse-graining. If the raw series is white noise

$$x_i \overset{\text{iid}}{\sim} \mathcal{N}(0, \sigma^2),$$

then the coarse-grained series is

$$y_j = \frac{1}{\epsilon} \sum_{i=(j-1)\epsilon + 1}^{j\epsilon} x_i \sim \mathcal{N}\!\left(0, \frac{\sigma^2}{\epsilon}\right),$$

but after scaling, this is just more white noise, so SampEn is the same. If the series is not scaled after coarse-graining (or $r$ is not adapted), then, because coarse-graining reduces the signal variance, entropy will decrease monotonically as $\epsilon$ increases.
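The variance argument is easy to verify numerically. A quick sketch (again in Python, for illustration; `coarse_grain` is a hypothetical helper, not part of the package):

```python
import numpy as np

def coarse_grain(x, eps, rescale=False):
    """Partition x into consecutive bins of length eps and average each bin;
    optionally rescale the result to unit variance afterwards."""
    n = (len(x) // eps) * eps
    y = x[:n].reshape(-1, eps).mean(axis=1)
    return y / y.std() if rescale else y

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)

# Averaging eps iid values shrinks the variance by a factor of eps ...
y = coarse_grain(x, eps=4)
print(x.var(), y.var())  # variance drops from ~1 to ~1/4

# ... while the rescaled series is again unit-variance white noise
z = coarse_grain(x, eps=4, rescale=True)
print(z.var())
```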

Ahmed and Mandic (2011) introduced a multivariate extension of MSSampEn, which we’ll call MSMVSampEn, which takes into account the cross-correlations between variables in the time series. This is particularly useful for our purposes, since EEG experiments generally record from multiple sensor sites; in particular, we were interested in comparing entropy across frontal and “control” regions. Ahmed and Mandic helpfully provide a Matlab implementation here, which we ended up using to analyze our data. I don’t like using Matlab, though (since it isn’t open-source), so I ended up writing a more featureful implementation in R, which I made into a package.

### Installation and use

The package can be installed directly from the Github repository using the devtools package:

```r
devtools::install_github('areshenk/MSMVSampEn')
library(MSMVSampEn)
```

The package implements multiscale multivariate sample entropy (MSMVSampEn) as described by Ahmed and Mandic (2011), with a few improvements over the authors’ Matlab code. First, most of the heavy lifting has been rewritten in C++, resulting in an order of magnitude speedup. Second, the authors’ method of estimating the similarity probabilities is extremely RAM intensive, since it computes and stores the full distance matrix between the embedded observations. I just tally the number of observations within the threshold (also in C++), which sidesteps that issue entirely.
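The point is that SampEn only needs the tally, never the distances themselves. A rough illustration of the counting strategy in Python (the package's actual implementation is in C++; `count_within_r` is my name for the idea, not a package function):

```python
import numpy as np

def count_within_r(emb, r):
    """Count pairs of embedding vectors within Chebyshev distance r,
    one row at a time, so only O(n) working memory is needed instead
    of the O(n^2) full distance matrix."""
    n = len(emb)
    total = 0
    for i in range(n - 1):
        # Distances from vector i to all later vectors, then discarded
        d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
        total += int(np.sum(d < r))
    return total

rng = np.random.default_rng(2)
emb = rng.standard_normal((500, 3))
# Same tally as materializing the full 500 x 500 distance matrix,
# without ever storing it
print(count_within_r(emb, r=0.5))
```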

Entropy can then be calculated using the `MSMVSampEn()` function, which accepts a $p \times n$ matrix `mat` containing a $p$-variate time series (one row per variable), a vector `M` specifying the embedding dimension for each variable, a vector `tau` specifying the time lag for each variable, a similarity threshold `r`, and a time scale `eps`. As an example, we can compute the entropy of a $3$-variate white noise series using

```r
n <- 2000
data <- matrix(rnorm(3*n), nrow = 3)
MSMVSampEn(mat = data, M = c(2,2,2), tau = c(1,1,1),
           r = .5, eps = 1, scaleMat = TRUE)
```

As a bonus, I allow the user to specify an arbitrary function for use during coarse-graining. The default is the mean (corresponding to traditional MSMVSampEn), but, as Humeau-Heurtier (2015) points out, higher moments carry useful information; for example, the variance carries information about the volatility of a time series at multiple scales. This is easy enough to do by specifying `fun = var`:

```r
MSMVSampEn(mat = data, M = c(2,2,2), tau = c(1,1,1),
           r = .5, eps = 1, scaleMat = TRUE, fun = var)
```

There are a few issues I hope to address eventually, mostly related to the coarse-graining. As Humeau-Heurtier points out, the averaging procedure used in the standard MSMVSampEn is a very crude low-pass filter with *terrible* spectral properties. Much better performance can be obtained with a proper filter.
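For a sense of what a filter-based alternative might look like, here is a sketch using SciPy's `decimate`, which low-passes with a proper anti-aliasing filter before downsampling (Python again, purely illustrative; this is not something the package currently does):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
x = rng.standard_normal(10_000)
eps = 4

# Standard coarse-graining: block averages, i.e. a crude boxcar FIR
# low-pass with poor stop-band attenuation
block = x[:len(x) // eps * eps].reshape(-1, eps).mean(axis=1)

# Filter-based alternative: apply a proper anti-aliasing low-pass
# filter, then downsample by the same factor
filtered = signal.decimate(x, q=eps, ftype='fir')

print(len(block), len(filtered))
```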

The package can be found here.

### References

Ahmed, M. U., & Mandic, D. P. (2011). Multivariate multiscale entropy: A tool for complexity analysis of multichannel data. *Physical Review E*, 84(6), 061918.

Humeau-Heurtier, A. (2015). The multiscale entropy algorithm and its variants: A review. *Entropy*, 17(5), 3110-3123.

Mulligan, B. P., Areshenkoff, C. N., & Smart, C. M. (2015). EEG entropy predicts intensively-measured cognitive performance in healthy older adults. *Psychophysiology*, 52(S1), S25.