Article

Range Entropy: A Bridge between Signal Complexity and Self-Similarity

Amir Omidvarnia, Mostefa Mesbah, Mangor Pedersen and Graeme Jackson

1 The Florey Institute of Neuroscience and Mental Health, Austin Campus, Heidelberg, VIC 3084, Australia
2 Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, VIC 3010, Australia
3 Department of Electrical and Computer Engineering, Sultan Qaboos University, Muscat 123, Oman
4 Department of Neurology, Austin Health, Melbourne, VIC 3084, Australia
* Author to whom correspondence should be addressed.
Entropy 2018, 20(12), 962; https://doi.org/10.3390/e20120962
Submission received: 1 November 2018 / Revised: 3 December 2018 / Accepted: 6 December 2018 / Published: 13 December 2018

Abstract

Approximate entropy (ApEn) and sample entropy (SampEn) are widely used for temporal complexity analysis of real-world phenomena. However, their relationship with the Hurst exponent as a measure of self-similarity is not widely studied. Additionally, ApEn and SampEn are susceptible to signal amplitude changes. A common practice for addressing this issue is to correct their input signal amplitude by its standard deviation. In this study, we first show, using simulations, that ApEn and SampEn are related to the Hurst exponent through their tolerance r and embedding dimension m parameters. We then propose a modification to ApEn and SampEn called range entropy, or RangeEn. We show that RangeEn is more robust to nonstationary signal changes and has a more linear relationship with the Hurst exponent, compared to ApEn and SampEn. RangeEn is bounded in the tolerance r-plane between 0 (maximum entropy) and 1 (minimum entropy) and needs no signal amplitude correction. Finally, we demonstrate the clinical usefulness of signal entropy measures for characterisation of epileptic EEG data as a real-world example.

1. Introduction

Complexity is a global concept in data analysis that is observed in a wide range of real-world phenomena and systems including biological signals [1,2,3,4,5,6], brain dynamics [7,8,9], mechanical systems [10,11], climate change [12], volcanic eruption [13], earthquakes [14], and financial markets [15]. It is difficult to provide a formal definition for signal complexity. This concept, however, can be approached as a mid-point situation between signal regularity and randomness. From this perspective, complexity can be defined as the amount of nonlinear information that a time series conveys over time. Highly random fluctuations (such as white noise as an extreme case) have very low complexity, because they present no regular pattern in their dynamical behaviour. Real-world phenomena, on the other hand, usually contain spreading patterns of nonlinear ’structured activity’ across their frequency components and temporal scales. Dynamics of the brain or fluctuation of stock markets are examples of complex processes. Despite the importance of complexity in science, its quantification is not straightforward. Time-frequency distributions and wavelet transforms [16] are examples of analysis tools for capturing signal dynamics, but they may be insensitive to nonlinear changes.
A promising avenue for understanding temporal complexity is through signal entropy analysis, a family of methods rooted in information theory. The entropy rate of a random process is defined as the average rate of generation of new information [17]. In this context, independent and identically distributed (i.i.d.) white noise is assumed to have maximal entropy and disorder, because each upcoming time point is independent of the past and therefore conveys new information. On the other hand, a completely periodic signal with a repeating pattern of constant values leads to minimal entropy, as there is no generation of new information. The most prominent types of signal entropy measures include Shannon entropy [17], Renyi entropy [18], Kolmogorov entropy [19,20], Kolmogorov–Sinai entropy [21], Eckmann–Ruelle entropy [22], approximate entropy (ApEn) [23], sample entropy (SampEn) [24], and multi-scale entropy [25]. See [26] for more examples of entropy-based signal measures.
Among the aforementioned signal entropy measures, ApEn and SampEn are two of the most commonly used in contemporary science, especially in the analysis of biological signals [27]. Both measures perform a template-matching search throughout the input signal, governed by two main parameters: the embedding dimension m and the tolerance r. The former sets the length of each segment (template) to be searched, and the latter controls the level of similarity between segments. SampEn stems from ApEn and addresses some of its limitations, including inconsistency over the parameter r and strong dependency on the input signal length [24]. However, both measures still suffer from sensitivity to signal amplitude changes. Another important aspect of these measures is their inverse relationship with the Hurst exponent as a measure of self-similarity in signals [28]. The analysis of this link, however, deserves more attention.
In this study, we investigate the behaviour of ApEn and SampEn in the presence of self-similarity and examine their relationship with the Hurst exponent through their tolerance and embedding dimension parameters. We also address the issue of sensitivity to signal amplitude changes in ApEn and SampEn by developing modified versions called range entropies or RangeEn. We compare RangeEn with ApEn and SampEn from different perspectives using multiple simulations. Finally, we demonstrate the capacity of signal entropy measures for epileptic EEG characterisation. A Python implementation of this study is publicly available at https://github.com/omidvarnia/RangeEn.

2. Materials and Methods

2.1. Signal Complexity Analysis

2.1.1. Reconstructed Phase Space

Numerical computation of signal entropy for a uniformly sampled signal $\mathbf{x} = \{x_1, x_2, \ldots, x_N\}$ can be done through the concept of reconstructed phase space [27]. It represents the dynamical states of a system with state variables $X_i^{m,\tau}$ defined as [29]:

$$X_i^{m,\tau} = \{x_i, x_{i+\tau}, \ldots, x_{i+(m-1)\tau}\}, \quad i = 1, \ldots, N-(m-1)\tau, \tag{1}$$

where m denotes the embedding dimension and $\tau$ is the delay time. $X_i^{m,\tau}$ represents a state vector in an m-dimensional phase space $V_x$. The parameter $\tau$ is also referred to as scale.

Given a reconstructed state vector $X_i^{m,\tau}$, it is possible to partition $V_x$ into small non-overlapping and equisized regions $\varepsilon_k$, so that $\bigcup_k \varepsilon_k = V_x$ and $\bigcap_k \varepsilon_k = \varnothing$. Signal entropy can then be computed by assigning a probability value $p_k$ to each region as the probability of the phase trajectory visiting it [27].

From now on, we consider the special case of $V_x$ where $\tau = 1$. The state vector $X_i^{m,\tau}$ then reduces to the vector sequence of $x_i$ through $x_{i+m-1}$, i.e.,

$$X_i^m = \{x_i, x_{i+1}, \ldots, x_{i+m-1}\}, \quad i = 1, \ldots, N-m+1. \tag{2}$$
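As a concrete illustration, the embedding step of Equations (1) and (2) can be written in a few lines of NumPy. This is a minimal sketch; the helper name `embed` is ours and not part of the paper's released code:

```python
import numpy as np

def embed(x, m, tau=1):
    """Reconstruct the phase space of a 1-D signal x.

    Returns an array of shape (N - (m-1)*tau, m) whose i-th row is the
    state vector X_i = {x_i, x_{i+tau}, ..., x_{i+(m-1)tau}}.
    """
    x = np.asarray(x, dtype=float)
    n_states = len(x) - (m - 1) * tau
    return np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n_states)])

# Example with m = 2 and tau = 1, as used throughout the paper
x = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
print(embed(x, m=2))  # rows: [1,3], [3,2], [2,5], [5,4]
```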

2.1.2. Approximate Entropy

Let each $X_i^m$ in Equation (2) be used as a template to search for neighbouring samples in the reconstructed phase space. Two templates $X_i^m$ and $X_j^m$ are matching if their relative distance is less than a predefined tolerance r. The distance function used in both ApEn and SampEn is the Chebyshev distance, defined as $d_{chebyshev}(X_i^m, X_j^m) := \max_k (|x_{i+k} - x_{j+k}|), \; k = 0, \ldots, m-1$. It leads to an r-neighbourhood conditional probability function $C_i^m(r)$ for any vector $X_i^m$ in the phase space $V_x$:

$$C_i^m(r) = \frac{1}{N-m+1} B_i^m(r), \quad i = 1, \ldots, N-m+1, \tag{3}$$

where

$$B_i^m(r) = \big\{\text{no. of } X_j^m \text{ such that } d_{chebyshev}(X_i^m, X_j^m) \le r\big\}, \quad j = 1, \ldots, N-m+1. \tag{4}$$

Let $\Phi^m(r)$ be the sum of natural logarithms of $C_i^m(r)$; that is,

$$\Phi^m(r) = \sum_{i=1}^{N-m+1} \ln C_i^m(r). \tag{5}$$

The rate of change in $\Phi^m(r)$ along the embedding dimension m is called the Eckmann–Ruelle entropy and is defined as [22]:

$$H_{ER} = \lim_{r \to 0} \, \lim_{m \to \infty} \, \lim_{N \to \infty} \; \Phi^m(r) - \Phi^{m+1}(r). \tag{6}$$

An approximation of $H_{ER}$, proposed by Pincus through fixing r and m in Equation (6), is called approximate entropy (ApEn) [23,30]:

$$ApEn = \lim_{N \to \infty} \; \Phi^m(r) - \Phi^{m+1}(r), \quad r, m \text{ fixed}. \tag{7}$$

ApEn quantifies the mean negative log probability that an m-dimensional state vector will repeat itself at dimension (m + 1). It is recommended that the tolerance be corrected as r × SD (SD being the standard deviation of $\mathbf{x}$) to account for amplitude variations across different signals.
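To make the construction concrete, a brute-force NumPy sketch of ApEn is given below. It is our own illustration rather than the authors' released code (linked in the Introduction); following Pincus' normalisation, $\Phi$ is computed here as the average, rather than the bare sum, of the log-probabilities:

```python
import numpy as np

def _phi(x, m, r):
    """Average of the log conditional probabilities C_i^m(r)."""
    n = len(x) - m + 1
    templates = np.array([x[i:i + m] for i in range(n)])
    # Chebyshev distance between every pair of templates (self-matches included)
    dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
    c = np.count_nonzero(dist <= r, axis=1) / n  # C_i^m(r); always > 0
    return np.mean(np.log(c))

def apen(x, m=2, r=0.2):
    """Approximate entropy with fixed m and r (cf. Equation (7))."""
    x = np.asarray(x, dtype=float)
    return _phi(x, m, r) - _phi(x, m + 1, r)

rng = np.random.default_rng(0)
print(apen(rng.standard_normal(500), m=2, r=0.2))  # white noise: high ApEn
```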

2.1.3. Sample Entropy

As Equations (3) and (4) suggest, ApEn allows for the self-matching of templates $X_i^m$ in the definition of $C_i^m(r)$ to avoid the occurrence of ln(0) in its formulation [30]. However, this results in an unwanted bias that is most pronounced for short signal lengths (small N). Inconsistency of ApEn over the tolerance parameter r has also been reported [24,30]. In order to address these issues, sample entropy was developed by updating $B_i^m(r)$ in Equation (4) [24]:

$$B_i^m(r) = \big\{\text{no. of } X_j^m \text{ such that } d_{chebyshev}(X_i^m, X_j^m) \le r\big\}, \quad j = 1, \ldots, N-m, \; j \ne i, \tag{8}$$

and averaging over time as:

$$B^m(r) = \frac{1}{N-m} \sum_{i=1}^{N-m} B_i^m(r). \tag{9}$$

Sample entropy is then defined as:

$$SampEn = \lim_{N \to \infty} -\ln \frac{B^{m+1}(r)}{B^m(r)}. \tag{10}$$
There are three major differences between SampEn and ApEn:
(1) The conditional probabilities of SampEn, i.e., $B_i^m(r)$ in Equation (8), are obtained without self-matching of the templates $X_i^m$.
(2) Unlike ApEn, which takes the logarithm of each individual probability value (see Equation (5)), SampEn takes the logarithm of the sum of probabilities in the phase space (see Equations (9) and (10)).
(3) ApEn is defined under all circumstances thanks to its self-matching, while SampEn can sometimes be undefined, as $B^m(r)$ and $B^{m+1}(r)$ in Equation (10) are allowed to be zero.
Since $d_{chebyshev}(X_i^m, X_j^m)$ is always smaller than or equal to $d_{chebyshev}(X_i^{m+1}, X_j^{m+1})$, $B^{m+1}(r)$ is less than or equal to $B^m(r)$ for all values of m. Therefore, SampEn is always non-negative [24]. The parameter set of m = 2 and r between 0.2 and 0.6 has been widely used for extracting SampEn in the literature [9,24,28].
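A matching NumPy sketch of Equations (8)–(10) follows. Again, this is our own illustration; it counts template pairs at dimensions m and m + 1 over the same N − m templates, as in Richman and Moorman [24]:

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """Sample entropy (cf. Equation (10)); np.inf marks an undefined result."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def count_matches(dim):
        # N - m templates of length `dim`, so that dimensions m and m + 1
        # are counted over the same number of templates
        templates = np.array([x[i:i + dim] for i in range(N - m)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.count_nonzero(dist <= r) - (N - m)  # exclude self-matches

    a = count_matches(m + 1)  # matched template pairs at dimension m + 1
    b = count_matches(m)      # matched template pairs at dimension m
    if a == 0 or b == 0:
        return np.inf         # SampEn is undefined when either count is zero
    return -np.log(a / b)

rng = np.random.default_rng(0)
print(sampen(rng.standard_normal(500), m=2, r=0.2))
```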

2.2. Signal Self-Similarity Analysis

2.2.1. Self-Similar Processes

A time series x(t) is self-similar, or scale-invariant, if it repeats the same statistical characteristics across multiple temporal scales [31]. In this case, scaling along the time axis by a factor of a requires rescaling along the signal amplitude axis by a factor of $a^H$; that is, $x(at) = a^H x(t)$ for all t > 0, a > 0, and H > 0. The Hurst exponent H is a common measure for quantifying self-similarity in signals: the more self-similar a signal is, the longer its long-term memory. Given the definition of signal entropy as 'the average rate of generation of new information' [17], we expect a link between signal entropy and self-similarity. This can be investigated by computing the signal entropy of time series with known degrees of self-similarity. Fractional Lévy and Brownian motions (fLm and fBm, respectively) are well suited for this purpose. The fBm signal $B_H(t)$ is a continuous-time Gaussian process whose increments are also Gaussian, and its self-similarity level is controlled by its Hurst parameter. It is given by [31]:

$$B_H(t) = \int_{-\infty}^{\infty} \big\{(t-u)_+^{H-1/2} - (-u)_+^{H-1/2}\big\} \, dB(u), \tag{11}$$

where H is the Hurst exponent ($0 < H < 1$) and $(x)_+ := \max(x, 0)$. B(t) is an ordinary Brownian motion, a special case at H = 0.5, whose frequency spectrum follows the $1/f^2$ pattern. fBm has the following covariance function:

$$E\{B_H(t) B_H(s)\} = \frac{1}{2}\big(t^{2H} + s^{2H} - |t-s|^{2H}\big), \quad t, s \ge 0. \tag{12}$$

It represents self-similarity (or long-term memory) for H > 0.5 and anti self-similarity (or short-term memory) for H < 0.5. A more general form of fBm is the fLm, which is defined based on $\alpha$-stable Lévy processes $L_\alpha(t)$ with the following characteristic function (the Fourier transform of the probability density function) [32]:

$$f(x) = \frac{1}{\pi} \int_0^{\infty} e^{-|Ck|^{\alpha}} \cos(kx) \, dk, \tag{13}$$

where $\alpha$ is the Lévy index ($0 < \alpha \le 2$) and C > 0 is the scale parameter controlling the standard deviation of the Gaussian distribution. The fLm signal $Z_H^{\alpha}(t)$ is given by [31]:

$$Z_H^{\alpha}(t) = \int_{-\infty}^{\infty} \big\{(t-u)_+^d - (-u)_+^d\big\} \, dL_\alpha(u), \tag{14}$$

where $d = H - 1/\alpha$. For $\alpha = 2$, Equation (13) reduces to the characteristic function of a Gaussian distribution and $Z_H^{\alpha}(t)$ becomes fBm.
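As an illustration of how Equation (12) can be used directly, the sketch below synthesises fBm by Cholesky decomposition of its covariance matrix, the same construction the paper uses later via the fbm function of the nolds library (Section 3.5); the function name `fbm_cholesky` is our own:

```python
import numpy as np

def fbm_cholesky(n, hurst, rng=None):
    """Simulate fractional Brownian motion on t = 1..n from Equation (12).

    Builds the n x n fBm covariance matrix and multiplies its Cholesky
    factor by a standard Gaussian vector. O(n^3); fine for modest n.
    """
    rng = np.random.default_rng(rng)
    t = np.arange(1, n + 1, dtype=float)
    # E{B_H(t) B_H(s)} = 0.5 * (t^2H + s^2H - |t - s|^2H)
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

x = fbm_cholesky(1000, hurst=0.75, rng=42)  # H > 0.5: persistent, long memory
```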

2.2.2. Rescaled Range Analysis for Self-Similarity Assessment

A commonly used approach for estimating the Hurst exponent of an N-long time series $\mathbf{x} = \{x_1, x_2, \ldots, x_N\}$ is rescaled range analysis [33]. It applies the following multi-step procedure:
(1) Divide $\mathbf{x}$ into n equisized non-overlapping segments $x_n^s$ of length N/n, where s = 1, 2, ..., n and n = 1, 2, 4, .... This process is repeated as long as $x_n^s$ has more than four data points.
(2) For each segment $x_n^s$:
(a) Centre it as $y_n^s = x_n^s - m_n^s$, where $m_n^s$ is the mean of $x_n^s$. $y_n^s$ represents the deviation of $x_n^s$ from its mean.
(b) Compute the cumulative sum of the centred segment $y_n^s$ as $z_n^s(k) = \sum_{i=1}^{k} y_n^s(i)$, for $k = 1, \ldots, N/n$. $z_n^s$ tracks the total sum of $y_n^s$ as it proceeds in time.
(c) Calculate the largest difference within the cumulative sum $z_n^s$, namely,
$$R_n^s = \max_k z_n^s(k) - \min_k z_n^s(k). \tag{15}$$
(d) Calculate the standard deviation of $x_n^s$ as $S_n^s$ and obtain its rescaled range as $R_n^s / S_n^s$.
(3) Compute the average rescaled range at n as $R(n)/S(n) = (1/n) \sum_{s=1}^{n} R_n^s / S_n^s$.
The average rescaled range R(n)/S(n) is modelled as a power law function over n whose asymptotic behaviour gives the Hurst exponent:

$$\lim_{n \to \infty} E\{R(n)/S(n)\} = C n^H. \tag{16}$$

H can be estimated as the slope of the logarithmic plot of the rescaled ranges versus ln(n). The main idea behind rescaled range analysis is to quantify the fluctuations of a signal around its stable mean [33]. Next, we relate ApEn and SampEn to rescaled range analysis through their embedding dimension parameter m. We then introduce a change to these measures that makes them more sensitive to the self-similarity of signals. We will show the link between the entropy measures and the Hurst exponent in Section 3 through simulations of fBm and fLm processes.
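The procedure above translates almost line-for-line into NumPy. The helper below is our own minimal version; nolds.hurst_rs, used in Section 3.6, offers a more careful implementation. Note that the line fit is taken against the logarithm of the segment length:

```python
import numpy as np

def hurst_rs(x, min_len=8):
    """Estimate the Hurst exponent of x by rescaled range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_len, log_rs = [], []
    n = 1
    while N // n >= min_len:                 # n = 1, 2, 4, ... segments
        seg_len = N // n
        rs = []
        for s in range(n):                   # non-overlapping segments
            seg = x[s * seg_len:(s + 1) * seg_len]
            z = np.cumsum(seg - seg.mean())  # cumulative deviation from the mean
            R = z.max() - z.min()            # range of the cumulative sum
            S = seg.std()
            if S > 0:
                rs.append(R / S)             # rescaled range of this segment
        log_len.append(np.log(seg_len))
        log_rs.append(np.log(np.mean(rs)))
        n *= 2
    # H is the slope of ln(R/S) against ln(segment length)
    slope, _ = np.polyfit(log_len, log_rs, 1)
    return slope

rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(4096)))  # i.i.d. noise: H close to 0.5
```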

2.3. Complexity and Self-Similarity Analyses Combined

2.3.1. RangeEn: A Proposed Modification to ApEn and SampEn

Both ApEn and SampEn aim to extract the conditional probabilities $B_i^m(r)$ by computing the Chebyshev distance between two templates (or state vectors) $X_i^m$ and $X_j^m$ in the reconstructed m-dimensional phase space, as shown in Equations (4) and (8). The idea is to estimate the (logarithmic) likelihood that runs of patterns that are close remain close on the next incremental comparison [30]. The closer two states stay together in the reconstructed phase space over time, the less change they introduce into the signal dynamics. The idea of quantifying the Chebyshev distance between two state vectors originated from the seminal paper by Takens [34].

Although $d_{chebyshev}(X_i^m, X_j^m)$ can provide useful information about the variation of state vectors, it has two limitations. First, it is not normalised, as it has no upper limit. This leads to an unbounded range for the tolerance parameter r in the conditional probabilities $B_i^m(r)$ (see Equations (4) and (8)). Second, it only considers the maximum element-wise difference between two state vectors, so it is blind to the lower limit of these differences. To address these issues, we adapt the general idea behind the average rescaled range R(n)/S(n) in Equation (16) and propose an updated distance function for ApEn and SampEn as follows:

$$d_{range}(X_i^m, X_j^m) = \frac{\max_k |x_{i+k} - x_{j+k}| - \min_k |x_{i+k} - x_{j+k}|}{\max_k |x_{i+k} - x_{j+k}| + \min_k |x_{i+k} - x_{j+k}|}, \quad k = 0, \ldots, m-1. \tag{17}$$

In the special case of a two-dimensional reconstructed phase space (m = 2), $d_{range}(X_i^m, X_j^m)$ reduces to the simple form $(a - b)/(a + b)$, where $a = \max\{|x_i - x_j|, |x_{i+1} - x_{j+1}|\}$ and $b = \min\{|x_i - x_j|, |x_{i+1} - x_{j+1}|\}$. In effect, $d_{range}$ considers the stretching of state vectors across time and dimension. In contrast to $d_{chebyshev}(X_i^m, X_j^m)$, the proposed $d_{range}(X_i^m, X_j^m)$ is normalised between 0 and 1. It also recognises the range of element-wise differences between $X_i^m$ and $X_j^m$ by combining the absolute value, min, and max operators. $d_{range}(X_i^m, X_j^m)$ is defined everywhere except for identical m-dimensional segments, where the denominator in Equation (17) becomes zero.

Strictly speaking, $d_{range}(X_i^m, X_j^m)$ is not a distance per se, because it does not satisfy all conditions of a distance function. For any two equilength vectors $v_1$ and $v_2$, these requirements are defined as follows [35]:

$$\begin{aligned} (1)\;& dist(v_1, v_2) \ge 0 && \text{(non-negativity)} \\ (2)\;& dist(v_1, v_2) = dist(v_2, v_1) && \text{(symmetry)} \\ (3)\;& dist(v_1, v_1) = 0 && \text{(reflexivity)}. \end{aligned} \tag{18}$$

$d_{range}(X_i^m, X_j^m)$ violates the first and third conditions, as it is undefined for equal templates. In fact, $d_{range}(X_i^m, X_j^m)$ does not necessarily increase as $X_i^m$ and $X_j^m$ move farther apart by other definitions of distance. For instance, assume all elements of $X_i^m$ are increased by a constant positive value. The numerator of $d_{range}$ then remains unchanged while the denominator increases, leading to a reduction in $d_{range}$, even though the Euclidean distance between $X_i^m$ and $X_j^m$ has increased. With this in mind, we refer to $d_{range}(X_i^m, X_j^m)$ as a distance function throughout this paper for practicality. By replacing $d_{chebyshev}(X_i^m, X_j^m)$ in Equations (4) and (8) with $d_{range}(X_i^m, X_j^m)$, we update ApEn and SampEn as two new range entropy measures, $RangeEn_A$ and $RangeEn_B$, respectively.
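For illustration, a minimal NumPy sketch of $RangeEn_B$ (the SampEn-style variant) built on $d_{range}$ might look as follows. This is our own sketch, not the authors' reference code (linked in the Introduction); the name `rangeen_b` is ours, and undefined results are returned as infinity:

```python
import numpy as np

def rangeen_b(x, m=2, r=0.2):
    """RangeEn_B: sample entropy with d_range (Equation (17)) replacing the
    Chebyshev distance. Returns np.inf where the measure is undefined."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(N - m)])
        diff = np.abs(templates[:, None, :] - templates[None, :, :])
        mx, mn = diff.max(axis=2), diff.min(axis=2)
        with np.errstate(invalid="ignore", divide="ignore"):
            d = (mx - mn) / (mx + mn)  # NaN for identical templates (0/0)
            matched = d <= r           # NaN compares False: self-matches drop out
        return np.count_nonzero(matched)

    a, b = count_matches(m + 1), count_matches(m)
    if a == 0 or b == 0:
        return np.inf
    return -np.log(a / b)

rng = np.random.default_rng(0)
x1 = rng.standard_normal(1000)
# Property 1: a constant gain leaves d_range, and hence RangeEn_B, unchanged
print(rangeen_b(x1, r=0.5), rangeen_b(5 * x1, r=0.5))  # identical values
```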

2.3.2. Properties of RangeEn

Property 1: RangeEn is more robust to nonstationary amplitude changes. Unlike SampEn and ApEn, which are highly sensitive to signal amplitude changes, RangeEn is less affected by variation in the magnitude of signals. This originates from the in-built normalisation in $d_{range}(X_i^m, X_j^m)$, which is applied directly to the amplitudes of all templates.

Property 2: In terms of r, RangeEn is constrained to the interval [0, 1]. This becomes more obvious if we rewrite Equation (4) (and, similarly, Equation (8)) as:

$$B_i^m(r) = \sum_{j=1}^{N-m+1} \Psi\big(r - d_{range}(X_i^m, X_j^m)\big), \tag{19}$$

where $\Psi(\cdot)$ is the Heaviside function defined as:

$$\Psi(a) = \begin{cases} 0 & a < 0 \\ 1 & a \ge 0. \end{cases} \tag{20}$$

Since $d_{range}(X_i^m, X_j^m)$ is normalised, we conclude from Equation (19) that:

$$\begin{aligned} RangeEn_A &: \; B_i^m(r) = N-m+1 \quad \forall \, r \ge 1 \\ RangeEn_B &: \; B_i^m(r) = N-m \quad \forall \, r \ge 1. \end{aligned} \tag{21}$$

This ensures that both the conditional probability function $C_i^m(r)$ in Equation (3) and $B^m(r)$ in Equation (9) always equal 1 for $r \ge 1$, leading to the following property of $RangeEn_A$ and $RangeEn_B$:

$$RangeEn_A(\mathbf{x}, m, r) = RangeEn_B(\mathbf{x}, m, r) = 0 \quad \forall \, r \ge 1. \tag{22}$$

Property 3: RangeEn is more sensitive to changes in the Hurst exponent. We will show through simulations that ApEn, SampEn, and RangeEn all reflect the self-similarity properties of signals in the r and m domains to different extents. However, ApEn and SampEn may become insensitive to self-similarity over a significant interval of Hurst exponents, while RangeEn preserves its link.

2.4. Simulations

We used simulated data in order to test the behaviour of ApEn, SampEn, and RangeEn on random processes.

2.4.1. Synthetic Data

We simulated 100 realisations of Gaussian white noise (N(0,1)), pink noise ($1/f$), and brown noise ($1/f^2$) and extracted their entropy estimates across a range of signal lengths and tolerance parameters r. We used Python's acoustics library (https://pypi.org/project/acoustics/) to generate the noise signals.

We also generated a range of fixed-length fBm and fLm signals (N = 1000) with pre-defined Hurst exponents ranging from 0.01 (minimal self-similarity) to 0.99 (maximal self-similarity) in steps of ΔH = 0.01. We fixed the α parameter of all fractional Lévy motions to 1. We used Python's nolds library (https://pypi.org/project/nolds/) to simulate the fBm time series and the flm library (https://github.com/cpgr/flm) to generate the fLm signals.
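The paper generates the coloured noises with the acoustics package; as a self-contained alternative for readers reproducing the inputs, Gaussian noise with a prescribed $1/f^{\beta}$ spectrum can be synthesised by spectral shaping (our own sketch; β = 0, 1, 2 give white, pink, and brown noise):

```python
import numpy as np

def colored_noise(n, beta, rng=None):
    """Gaussian noise with power spectrum ~ 1/f^beta (0: white, 1: pink, 2: brown)."""
    rng = np.random.default_rng(rng)
    spectrum = np.fft.rfft(rng.standard_normal(n))  # flat (white) spectrum
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                                     # avoid dividing by zero at DC
    x = np.fft.irfft(spectrum / f ** (beta / 2.0), n)
    return x / x.std()                              # unit-variance output

white, pink, brown = (colored_noise(1000, b, rng=0) for b in (0, 1, 2))
```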

2.4.2. Tolerance Parameter r of Entropy and the Hurst Exponent

For each of the fBm and fLm signals at different self-similarity levels, we set the embedding dimension parameter m to 2 (a widely used value across the literature) and computed the entropy measures over a span of tolerance values r from 0.01 to 1 in increments of 0.01. In this way, we investigated the relationship between a systematic increase in self-similarity (modelled by the Hurst exponent) and the tolerance parameter r. For each r-trajectory, we estimated the slope of a line fitted to the entropy measures with respect to ln(r) and call this quantity the r-exponent.
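In code, the r-exponent is a one-line least-squares fit. The sketch below assumes an entropy function with the signature of the earlier snippets (e.g., the hypothetical `rangeen_b`) and mirrors the paper's rule of discarding trajectories with undefined values:

```python
import numpy as np

def r_exponent(x, entropy_fn, m=2, r_values=None):
    """Slope of entropy(x; m, r) against ln(r); NaN if any value is undefined."""
    if r_values is None:
        r_values = np.arange(0.01, 1.01, 0.01)   # r = 0.01, 0.02, ..., 1.0
    ent = np.array([entropy_fn(x, m=m, r=r) for r in r_values])
    if not np.isfinite(ent).all():
        return np.nan  # skip time series with undefined entropy values
    slope, _ = np.polyfit(np.log(r_values), ent, 1)
    return slope
```

The m-exponent of Section 2.4.3 is obtained the same way, fitting against ln(m) for m = 2, ..., 10 at fixed r.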

2.4.3. Embedding Dimension m of Entropy and the Hurst Exponent

Similar to Section 2.4.2, this time we fixed the tolerance parameter r to 0.2 (a common choice in previous studies) and investigated the entropy measures over embedding dimensions m from 2 to 10, thereby examining the relationship between a systematic change in the Hurst exponent and the embedding dimension m. For each m-trajectory, we estimated the slope of a line fitted to the entropy measures with respect to ln(m) and call this quantity the m-exponent.

For both analyses described in Section 2.4.2 and Section 2.4.3, we did not perform line fitting for time series whose entropy measures were undefined for at least one r or m value. We repeated the above tests with and without amplitude correction (i.e., dividing the signal amplitude by its standard deviation). This correction step is recommended for ApEn and SampEn analyses, as it can reduce their sensitivity to differences in signal amplitude (see [24]).

2.5. Epileptic EEG Datasets

We used three out of five datasets of a public epileptic EEG database [36] in our study. Each dataset consists of 100 single-channel EEG segments, each 23.6 s long with a sampling frequency of 173.61 Hz (N = 4097), randomised over recording contacts and subjects. Datasets C, D, and E of [36] are intracranial EEG (iEEG) from five epilepsy patients who underwent epilepsy surgery in the hippocampal area and became seizure-free afterwards. Dataset C was recorded from the hippocampal formation on the opposite (contralateral) side of the seizure focus, while dataset D was recorded from the hippocampal area on the seizure (ipsilateral) side. Both datasets C and D were obtained during interictal (seizure-free) intervals. In contrast, dataset E covers ictal (seizure) intervals only.
All datasets were obtained using a 128-channel EEG recorder with common average referencing. Additionally, eye movement artefacts and strong pathological activities were identified and removed from the signals through visual inspection. A band-pass filter of 0.53–40 Hz was applied to the data. See [36] for more details about these datasets.

3. Results

3.1. Sensitivity to Signal Length

We simulated the three coloured noise types at lengths varying from 50 to 1000 samples in 10-sample steps. One hundred realisations of each noise type were generated. The four entropy measures (ApEn, SampEn, $RangeEn_A$, and $RangeEn_B$) were then computed from the simulated noise signals, with the dimension m fixed to 2 and the tolerance r fixed to 0.2. Figure 1 illustrates the errorbar plot of each entropy measure for the three noise types over different signal lengths. As the figure suggests, the variations of $RangeEn_A$ and $RangeEn_B$ over different lengths are smaller than those of both ApEn and SampEn. Among the four measures, SampEn has the largest standard deviation (poorest repeatability) at each signal length, especially for shorter signals. A common observation across all measures is that their standard deviation increases and their mean decreases as the spectral decay grows: brown noise has a higher spectral exponent than pink noise, which in turn exceeds white noise. ApEn is the most sensitive measure to signal length, as its mean tends to change (almost linearly) with the data length. The RangeEn measures present a more stable mean (in contrast to ApEn) with small variance (in contrast to SampEn).

3.2. The Role of Tolerance r

To investigate the effect of the tolerance r on the entropy measures, we again simulated the three noise types at a fixed length of N = 1000 samples. We computed the measures at m = 2 over a range of tolerance values r from 0.01 to 1 in increments of 0.01. Figure 2 illustrates the entropy patterns in the r-plane for each noise type. Several observations can be drawn from this analysis. First, both $RangeEn_A$ and $RangeEn_B$ reach zero at r = 1; this is not the case for ApEn and SampEn. Second, SampEn shows the highest standard deviation, in particular at low r values (r ≤ 0.3). Third, $RangeEn_A$ has the highest number of undefined values across the four measures (note the missing values of $RangeEn_A$ as vacant points in the figures, especially in the white noise and pink noise results). Finally, the level of exponential decay in the frequency domain appears to be coded in the slope and starting point of the RangeEn trajectories in the r-plane: brown noise ($1/f^2$, with the largest spectral decay amongst the three noise types) has the lowest entropy pattern, while white noise, with no decay in the frequency domain, has the steepest entropy trajectory and the largest starting value of about 4 near r = 0.

3.3. Dependency to Signal Amplitude

To evaluate the effect of signal amplitude on entropy, we simulated a white noise signal $x_1(n)$ with N = 1000 time points and a copy multiplied by 5, i.e., $x_2(n) = 5x_1(n)$ (first and second rows in the top panel of Figure 3, respectively). We then computed ApEn, SampEn, $RangeEn_A$, and $RangeEn_B$ for m = 2 and a range of tolerance values r from 0.01 to 1 with Δr = 0.01. As Figure 3 shows, the $RangeEn_A$ and $RangeEn_B$ values obtained from $x_1(n)$ and $x_2(n)$ are nearly identical, while ApEn and SampEn diverge. In most existing ApEn and SampEn studies, the input signal is divided by its SD to reduce the dependency of the entropy on the signal gain factor. This solution is useful only for stationary changes of signal amplitude, where the SD of the whole signal accurately describes its variability. We therefore designed a more difficult test using a nonstationary signal $x_3(n)$ whose SD is time-varying:

$$x_3(n) = \begin{cases} x_1(n) & n = 1, \ldots, 200 \\ 3x_1(n) & n = 201, \ldots, 400 \\ 10x_1(n) & n = 401, \ldots, 600 \\ 4x_1(n) & n = 601, \ldots, 800 \\ x_1(n) & n = 801, \ldots, 1000. \end{cases} \tag{23}$$

The signal $x_3(n)$ (third row in the top panel of Figure 3) resembles a nonstationary random process generated by the stationary process $x_1(n)$ but affected by a time-varying amplitude change. In order to correct for this amplitude (gain) variation prior to computing ApEn and SampEn, we replaced $x_3(n)$ by $x_3(n)/\sigma_{x_3}$ for these two measures, where $\sigma_{x_3}$ is the standard deviation of $x_3(n)$. As the entropy patterns of Figure 3 suggest, even after this amplitude correction, ApEn and SampEn remain sensitive to the amplitude changes. This is not the case for the RangeEn measures, which are much less affected by this nonstationarity.
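For reference, $x_3(n)$ of Equation (23) and the global-SD correction applied to ApEn and SampEn can be reproduced as follows (a short sketch under the same assumptions as the earlier snippets):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.standard_normal(1000)                      # stationary white noise
gains = np.repeat([1.0, 3.0, 10.0, 4.0, 1.0], 200)  # Equation (23)
x3 = gains * x1                                     # time-varying amplitude

# Global-SD correction, as recommended for ApEn and SampEn: a single global
# standard deviation cannot undo a time-varying gain, which is why these two
# measures remain amplitude-sensitive here while RangeEn is largely unaffected.
x3_corrected = x3 / np.std(x3)
```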

3.4. Relationship with the Hurst Exponent

The results of ApEn and SampEn for fLm signals with different Hurst exponents are summarised in Figure 4. As seen in Figure 4A–D, ApEn and SampEn show a systematic relationship with the Hurst exponent. In particular, SampEn has an inverse monotonic relationship with the Hurst exponent in the r-plane (note the descending colour scale along the y-axis at all r values). Although the relationship between ApEn and Hurst is not as monotonic as that of SampEn, it still shows a systematic change. One way of quantifying these changes is by examining the corresponding m-exponents and r-exponents (i.e., the linear slopes of the entropy patterns versus ln(m) and ln(r), respectively; see Section 2.4.2 and Section 2.4.3). Figure 4E–H suggest that the m- and r-exponents of both ApEn and SampEn are related to the Hurst exponent in a nonlinear way before signal amplitude correction. Additionally, their trajectories reach a plateau in the r and m domains at high self-similarity levels (note the relatively flat regions of red dots in Figure 4E–H). This implies that ApEn and SampEn lose their link with the Hurst exponent in highly self-similar signals. The black dotted plots in Figure 4E–H suggest that signal amplitude correction results in a more linear relationship between the Hurst exponent and the r-exponent, but less so for the m-exponent.

We repeated the same analysis for fBm signals, with the results illustrated in Figure 5. As the figure shows, signal amplitude correction has a more significant impact on the entropy patterns of fBm than of fLm. For example, there is almost no systematic relationship between SampEn and the Hurst exponent without amplitude correction (see Figure 5B versus Figure 5D). Additionally, the number of defined SampEn values is reduced without this correction (note the very low number of red dots in Figure 5F and the absence of any red dots in Figure 5H). Similar to the fLm results in Figure 4, amplitude correction can linearise the relationship between the entropy exponents and the Hurst exponent.

A similar analysis using $RangeEn_A$ and $RangeEn_B$ highlights their properties in contrast to ApEn and SampEn. The results extracted from fLm and fBm signals are summarised in Figure 6 and Figure 7, respectively. Firstly, the patterns of $RangeEn_A$ and $RangeEn_B$ are relatively similar to each other, except that $RangeEn_A$ has a considerable number of missing (undefined) values, especially at low r values. Secondly, the r- and m-exponents of both $RangeEn_A$ and $RangeEn_B$ have a more linear relationship with the Hurst exponent than those of ApEn and SampEn. In particular, the flat regions of their exponents at high H values are shorter than those of ApEn and SampEn (see panels E to H of Figure 6 and Figure 7).

3.5. Linear Scaling of the Covariance Matrix in fBm

As another test of robustness to amplitude variations, we investigated whether the relationship between signal entropy and the Hurst exponent of fBm (Figure 5 and Figure 7) is independent of linear scaling of its covariance matrix, defined in Equation (12). We simulated fBm signals using the Cholesky decomposition method [37] (the fbm function of Python's nolds library) at a Hurst exponent of H = 0.75 and five scaling coefficients D = 0.001, 0.01, 1, 10, and 100, where D = 1 gives the original form of fBm. Figure 8 shows the estimated entropy patterns of the altered fBm. Note that we did not correct the input signals of ApEn and SampEn by their standard deviation, to ensure the same testing conditions for all four measures. The results in Figure 8 show that ApEn and SampEn are more vulnerable to linear scaling of the covariance matrix than RangeEn.

3.6. Analysis of Epileptic EEG

We performed self-similarity analysis of the epileptic EEG datasets by extracting their Hurst exponents through the standard rescaled range approach [38] (the hurst_rs function of Python's nolds library). Figure 9A illustrates the distributions of Hurst exponents for the three datasets. Whilst the interictal segments cluster toward higher self-similarity levels, the ictal segments are distributed across a wider range spanning high and low self-similarity. Figure 9B–E show the patterns of ApEn, SampEn, $RangeEn_A$, and $RangeEn_B$ in the r-plane for the three EEG datasets (corrected amplitudes and fixed m of 2). In all plots, the two interictal r-trajectories are close to each other and follow a trajectory distinct from that of the ictal state.

4. Discussion

In this study, we showed that the signal complexity measures ApEn and SampEn are linked to the self-similar properties of signals quantified by their Hurst exponent. However, they may become insensitive to high levels of self-similarity due to the nonlinear nature of this relationship. We subsequently introduced a modification to ApEn and SampEn (called RangeEn) that not only improves this insensitivity issue but also removes the need for amplitude correction.

Signal complexity analysis can be approached through the concept of state vectors in the reconstructed phase space [23,30,39]. From this perspective, ApEn and SampEn of a random process assess its dynamics in the phase space by quantifying the evolution of its states over time. This is done by computing the Chebyshev distance $d_{chebyshev}$ as a measure of similarity between state vectors and obtaining the conditional probability of space occupancy by the phase trajectories, as detailed in Section 2.1.1, Section 2.1.2 and Section 2.1.3. However, $d_{chebyshev}$ only considers the maximum element-wise difference between two state vectors while ignoring the lower limit of these differences. In addition, it is not normalised, and is thus sensitive to changes in signal magnitude (gain) and defined for all values of the tolerance parameter r (from 0 to ∞). This last issue leads to unbounded values of ApEn and SampEn along the r-axis. In order to alleviate these limitations, we replaced $d_{chebyshev}$ with a normalised distance (called the range distance, $d_{range}$), defined in Equation (17), prior to computing the entropies. This led to modified forms of ApEn and SampEn, namely $RangeEn_A$ and $RangeEn_B$, respectively.

$RangeEn_A$ and $RangeEn_B$ offer a set of desirable characteristics when applied to simulated and experimental data. First, they are more robust to signal amplitude changes than ApEn and SampEn. This property originates from the fact that the distance used in the definition of the proposed entropies is normalised between 0 and 1. Unlike ApEn and SampEn, which require an extra amplitude-regulation step involving multiplication of the tolerance parameter r by the input signal's standard deviation [24], the RangeEn measures need no amplitude correction. This is a valuable feature when analysing real-world signals, which are usually affected by confounding amplitude changes such as artefacts. Figure 3 illustrates two situations where ApEn and SampEn are highly sensitive to variations of signal amplitude, contrary to the RangeEn measures. Investigating the vulnerability of RangeEn to more complicated cases of nonstationarity than those shown in Figure 3 is left for future work.

The second desirable property of RangeEn is that, regardless of the dynamics of the signal, both $RangeEn_A$ and $RangeEn_B$ always reach 0 at the tolerance value r = 1. The explanation is straightforward: r = 1 is the value at which all m-long segments $X_i^m$ and $X_j^m$ match, which leads to the joint conditional probability being 1 (see Equations (19)–(22)).

According to the simulation results for fLm and fBm signals with prescribed Hurst exponents, ApEn, SampEn, and RangeEn are all able to reflect the self-similarity of time series to different extents. However, RangeEn has a more linear relationship with the Hurst exponent. This brings us to the third property of the RangeEn measures, namely a more linear link between their r- and m-exponents and the Hurst exponent, compared to ApEn and SampEn. We evaluated this property by extracting the RangeEn measures from fLm and fBm signals, as their level of self-similarity can be accurately controlled through the Hurst exponent. We simulated these processes for Hurst exponents ranging from 0.01 (very short memory, or high anti self-similarity) to 0.99 (very long memory, or high self-similarity). The simulation results (Figure 4, Figure 5, Figure 6 and Figure 7) reveal an almost linear relationship between the Hurst exponent and the slopes of the RangeEn trajectories versus ln(r) and ln(m). This pattern is more nonlinear, and sometimes non-monotonic, for ApEn and SampEn.

Among the four signal entropy measures investigated in our study, ApEn is the only one that is always defined, owing to the self-matching of state vectors (or templates) in its definition [23]. SampEn and $RangeEn_B$ may return undefined values, as they compute the logarithm of the sum of conditional probabilities $C_i^m(r)$, which can lead to ln(0) (see Section 2.1.3 and Section 2.3.1 for more details). This issue may also affect $RangeEn_A$, as it calculates the sum of log probabilities (i.e., $\ln C_i^m(r)$). However, the number of undefined values in $RangeEn_A$ is usually much higher than in SampEn and $RangeEn_B$. This is because it is more likely that all joint conditional probabilities $C_i^m(r)$ between a single state vector and the rest of the state vectors in the phase space become zero, particularly at small tolerance values r, where the small partitions of the phase space are not visited by any trajectory. Figure 7 provides an exemplary situation with many undefined $RangeEn_A$ values for fBm compared to the other three measures.

EEG signals offer a real-world illustration of signal complexity. EEG conveys information about the electrical activity of neuronal populations within cortical and sub-cortical structures of the brain. Epilepsy research benefits significantly from EEG analysis, as the disease is associated with abnormal EEG patterns such as seizures and interictal epileptiform discharges [40]. Characterisation of abnormal events in epileptic EEG recordings is therefore helpful in the diagnosis, prognosis, and management of epilepsy [27,41,42]. Our results suggest that interictal EEG at the intracranial level is more self-similar than ictal EEG, with Hurst exponents clustered toward 1, whereas the Hurst exponent of ictal EEG spans both high and low self-similarity (see Figure 9A). Self-similarity may therefore not be a discriminative feature of the ictal state. All entropy measures show distinctive trajectories in the r-plane for interictal versus ictal states, with relatively low variance over EEG segments (see Figure 9B–E). This implies that signal complexity analysis may be more useful than self-similarity analysis for epileptic seizure detection and classification. Note that, in the absence of time-varying artefacts, EEG signals can be considered weakly stationary processes [43]; a correction of amplitude changes by the standard deviation of the signal in ApEn and SampEn may therefore yield results comparable to RangeEn.
Multiscale entropy is a generalisation of SampEn in which the delay time (or scale factor) $\tau$ in Equation (1) is expanded to an interval of successive integers starting from 1, through coarse-graining of the input signal [25]. It is straightforward to extend this idea to the RangeEn measures; a minimal coarse-graining sketch follows. Exploring the properties and capacities of multiscale RangeEn is left for future research.
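For completeness, the coarse-graining step that would underlie a multiscale RangeEn is sketched below (our illustration; `coarse_grain` is a hypothetical helper following Costa et al.'s definition [25], and `rangeen_b` refers to the earlier sketch):

```python
import numpy as np

def coarse_grain(x, tau):
    """Average non-overlapping windows of length tau (Costa et al. [25])."""
    x = np.asarray(x, dtype=float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

# A hypothetical multiscale RangeEn: apply the earlier rangeen_b sketch
# to each coarse-grained series:
#   ms_rangeen = [rangeen_b(coarse_grain(x, tau), m=2, r=0.2)
#                 for tau in range(1, 11)]
```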

5. Conclusions

In this study, we proposed modifications to ApEn and SampEn, called $RangeEn_A$ and $RangeEn_B$, respectively. We showed that, compared with ApEn and SampEn, these new signal complexity measures are more sensitive to self-similarity in the data, more robust to changes in signal amplitude, and need no signal amplitude correction. We also showed, in an exemplary application, that signal entropies can differentiate between normal and epileptic brain states in EEG signals. Given the high interest accorded to ApEn and SampEn in different scientific areas (more than 4000 citations each since their introduction), we believe the present study targets a significant problem by addressing some of their important practical limitations.

Author Contributions

Conceptualisation, A.O., M.M., M.P., and G.J.; formal analysis, A.O.; funding acquisition, G.J.; methodology, A.O., M.M., and M.P.; project administration, A.O.; resources, G.J.; software, A.O.; supervision, G.J.; validation, A.O., M.M., and M.P.; visualisation, A.O.; writing—original draft preparation, A.O.; writing—review and editing, A.O., M.M., M.P., and G.J.

Funding

This work was supported by the National Health and Medical Research Council (NHMRC) of Australia (program grant 1091593). G.J. was supported by an NHMRC practitioner fellowship (1060312). The Florey Institute of Neuroscience and Mental Health acknowledges the strong support from the Victorian Government and in particular the funding from the Operational Infrastructure Support Grant. This research was supported by Melbourne Bioinformatics at the University of Melbourne, grant number UOM0042.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lin, P.F.; Tsao, J.; Lo, M.T.; Lin, C.; Chang, Y.C. Symbolic Entropy of the Amplitude rather than the Instantaneous Frequency of EEG Varies in Dementia. Entropy 2015, 17, 560–579.
2. Rodríguez-Sotelo, J.L.; Osorio-Forero, A.; Jiménez-Rodríguez, A.; Cuesta-Frau, D.; Cirugeda-Roldán, E.; Peluffo, D. Automatic Sleep Stages Classification Using EEG Entropy Features and Unsupervised Pattern Analysis Techniques. Entropy 2014, 16, 6573–6589.
3. Pan, W.Y.; Su, M.C.; Wu, H.T.; Lin, M.C.; Tsai, I.T.; Sun, C.K. Multiscale Entropy Analysis of Heart Rate Variability for Assessing the Severity of Sleep Disordered Breathing. Entropy 2015, 17, 231–243.
4. Liu, Q.; Ma, L.; Fan, S.; Abbod, M.; Shieh, J. Sample entropy analysis for the estimating depth of anaesthesia through human EEG signal at different levels of unconsciousness during surgeries. PeerJ 2018, 6, e4817.
5. Chen, C.; Jin, Y.; Lo, I.; Zhao, H.; Sun, B.; Zhao, Q.; Zheng, J.; Zhang, X. Complexity Change in Cardiovascular Disease. Int. J. Biol. Sci. 2017, 13, 1320–1328.
6. Lake, D.; Richman, J.; Griffin, M.; Moorman, J. Sample entropy analysis of neonatal heart rate variability. Am. J. Physiol. Regul. Integr. Comp. Physiol. 2002, 283, R789–R797.
7. Pedersen, M.; Omidvarnia, A.; Walz, J.; Zalesky, A.; Jackson, G. Spontaneous brain network activity: Analysis of its temporal complexity. Netw. Neurosci. 2017, 1, 100–115.
8. McIntosh, A.; Vakorin, V.; Kovacevic, N.; Wang, H.; Diaconescu, A.; Protzner, A. Spatiotemporal dependency of age-related changes in brain signal variability. Cereb. Cortex 2014, 24, 1806–1817.
9. Saxe, G.; Calderone, D.; Morales, L. Brain entropy and human intelligence: A resting-state fMRI study. PLoS ONE 2018, 13, e0191582.
10. Villecco, F.; Pellegrino, A. Evaluation of Uncertainties in the Design Process of Complex Mechanical Systems. Entropy 2017, 19, 475.
11. Villecco, F.; Pellegrino, A. Entropic Measure of Epistemic Uncertainties in Multibody System Models by Axiomatic Design. Entropy 2017, 19, 291.
12. Shao, Z.G. Contrasting the complexity of the climate of the past 122,000 years and recent 2000 years. Sci. Rep. 2017, 7, 4143.
13. Glynn, C.C.; Konstantinou, K.I. Reduction of randomness in seismic noise as a short-term precursor to a volcanic eruption. Sci. Rep. 2016, 6, 37733.
14. Min, L.; Guang, M.; Sarkar, N. Complexity Analysis of 2010 Baja California Earthquake Based on Entropy Measurements. In Vulnerability, Uncertainty, and Risk; American Society of Civil Engineers: Liverpool, UK, 2014; pp. 1815–1822.
15. Zhao, X.; Shang, P.; Wang, J. Measuring information interactions on the ordinal pattern of stock time series. Phys. Rev. E 2013, 87, 022805.
16. Debnath, L. (Ed.) Wavelet Transforms and Time-Frequency Signal Analysis; Birkhäuser: Boston, MA, USA, 2001.
17. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
18. Renyi, A. On Measures of Entropy and Information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics; University of California Press: Berkeley, CA, USA, 1961; pp. 547–561.
19. Kolmogorov, A.N. New Metric Invariant of Transitive Dynamical Systems and Endomorphisms of Lebesgue Space. Dokl. Russ. Acad. Sci. 1958, 119, 861–864.
20. Grassberger, P.; Procaccia, I. Estimation of the Kolmogorov entropy from a chaotic signal. Phys. Rev. A 1983, 28, 2591–2593.
21. Latora, V.; Baranger, M. Kolmogorov–Sinai Entropy Rate versus Physical Entropy. Phys. Rev. Lett. 1999, 82, 520–523.
22. Eckmann, J.; Ruelle, D. Ergodic theory of chaos and strange attractors. Rev. Mod. Phys. 1985, 57, 617–656.
23. Pincus, S. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA 1991, 88, 2297–2301.
24. Richman, J.; Moorman, J. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circ. Physiol. 2000, 278, H2039–H2049.
25. Costa, M.; Goldberger, A.; Peng, C. Multiscale entropy analysis of complex physiologic time series. Phys. Rev. Lett. 2002, 89, 068102.
26. James, R.G.; Ellison, C.J.; Crutchfield, J.P. Anatomy of a Bit: Information in a Time Series Observation. Chaos Interdiscip. J. Nonlinear Sci. 2011, 21, 037109.
27. Gao, J.; Hu, J.; Tung, W. Entropy measures for biological signal analyses. Nonlinear Dyn. 2012, 68, 431–444.
28. Sokunbi, M.; Gradin, V.; Waiter, G.; Cameron, G.; Ahearn, T.; Murray, A.; Steele, D.; Staff, R. Nonlinear Complexity Analysis of Brain fMRI Signals in Schizophrenia. PLoS ONE 2014, 9, e0095146.
29. Takens, F. Detecting strange attractors in turbulence. In Dynamical Systems and Turbulence, Warwick 1980; Rand, D., Young, L.S., Eds.; Springer: Berlin/Heidelberg, Germany, 1981; pp. 366–381.
30. Pincus, S.; Goldberger, A. Irregularity and asynchrony in biologic network signals. Meth. Enzymol. 2000, 321, 149–182.
31. Burnecki, K.; Weron, A. Fractional Lévy stable motion can model subdiffusive dynamics. Phys. Rev. E 2010, 82, 021130.
32. Liu, H.H.; Bodvarsson, G.S.; Lu, S.; Molz, F.J. A Corrected and Generalized Successive Random Additions Algorithm for Simulating Fractional Levy Motions. Math. Geol. 2004, 36, 361–378.
33. Mandelbrot, B.; Wallis, J. Noah, Joseph, and Operational Hydrology. Water Resour. Res. 1968, 4, 909–918.
34. Takens, F. Invariants Related to Dimension and Entropy. Atas do 13 Colóquio Brasileiro de Matemática 1983, 13, 353–359.
35. Deza, M.; Deza, E. Encyclopedia of Distances; Springer: Berlin/Heidelberg, Germany, 2014.
36. Andrzejak, R.; Lehnertz, K.; Mormann, F.; Rieke, C.; David, P.; Elger, C. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 2001, 64, 061907.
37. Dieker, T. Simulation of Fractional Brownian Motion. Master's Thesis, University of Twente, Amsterdam, The Netherlands, 2004.
38. Hurst, H. Long-Term Storage Capacity of Reservoirs. Trans. Am. Soc. Civ. Eng. 1951, 116, 770–799.
39. Grassberger, P.; Procaccia, I. Measuring the strangeness of strange attractors. Phys. D Nonlinear Phenom. 1983, 9, 189–208.
40. Acharya, U.; Sree, S.; Swapna, G.; Martis, R.; Suri, J. Automated EEG analysis of epilepsy: A review. Knowl.-Based Syst. 2013, 45, 147–165.
41. Acharya, U.; Molinari, F.; Sree, S.; Chattopadhyay, S.; Ng, K.; Suri, J. Automated diagnosis of epileptic EEG using entropies. Biomed. Signal Process. Control 2012, 7, 401–408.
42. Kannathal, N.; Choo, M.; Acharya, U.; Sadasivan, P. Entropies for detection of epilepsy in EEG. Comput. Methods Programs Biomed. 2005, 80, 187–194.
43. Blanco, S.; Garcia, H.; Quiroga, R.Q.; Romanelli, L.; Rosso, O.A. Stationarity of the EEG series. IEEE Eng. Med. Biol. Mag. 1995, 14, 395–399.
Figure 1. Variation of the entropy measures over different signal lengths (N in time samples). Each noise type has been simulated 100 times, and errorbars represent the variation over noise realisations. $RangeEn_A$ (in black) and $RangeEn_B$ (in blue) show less deviation around their mean values compared to ApEn (in green) and SampEn (in red), in particular over short signal lengths. In all panels, the x-axis is on a logarithmic scale.
Figure 2. Impact of the tolerance parameter r on the signal entropy measures extracted from three noise types. Note that the RangeEn measures always reach 0 at r = 1, but this is not the case for ApEn and SampEn. In all panels, the entropy measures are illustrated in distinct colours and the x-axis is on a logarithmic scale.
Figure 3. Dependency of the signal entropy measures on stationary and nonstationary amplitude changes. The top panel shows three input signals: white noise ($x_1(t)$, in black), white noise scaled by a constant coefficient ($x_2(t) = 5x_1(t)$, in red), and white noise scaled by a time-varying coefficient ($x_3(t)$, defined in Equation (23), in green). Panels (A–D) show the signal entropy trajectories over the r interval of 0.01 to 1 in increments of 0.01. Note that the patterns of $RangeEn_A$ and $RangeEn_B$ are almost identical for white noise and both of its scaled versions, while ApEn and SampEn change drastically after any change in the amplitude of their input signal.
Figure 4. ApEn and SampEn analyses of fractional Lévy motion (fLm). Panels (A–D) illustrate the entropy trajectories in the r-plane with pre-defined Hurst exponents ranging from 0.01 to 0.99 in steps of ΔH = 0.01. Each analysis has been repeated with and without amplitude correction (i.e., dividing the input signal by its standard deviation). The H values are colour-coded, and missing points are left blank. In all panels, the x-axis is on a logarithmic scale. Panels (E,F) show the scatter plots of the r-exponents (i.e., the slope of the line fitted to the measure versus ln(r)) before and after amplitude correction. Panels (G,H) show the scatter plots of the m-exponents (i.e., the slope of the line fitted to the measure versus ln(m)) before and after amplitude correction.
Figure 5. ApEn and SampEn analyses of fractional Brownian motion (fBm). See the caption of Figure 4 for more details.
Figure 6. $RangeEn_A$ and $RangeEn_B$ analyses of fractional Lévy motion (fLm). See the caption of Figure 4 for more details.
Figure 7. $RangeEn_A$ and $RangeEn_B$ analyses of fractional Brownian motion (fBm). See the caption of Figure 4 for more details.
Figure 8. Sensitivity of the entropy measures to linear scaling of the covariance matrix in fBm (see also Equation (12)). Panels (A–D) illustrate the entropy trajectories in the r-plane at H = 0.75 and five scaling factors of D = 0.001, 0.01, 1, 10, and 100. The D values are colour-coded, and missing points are left blank.
Figure 9. Self-similarity and complexity analyses of epileptic EEG. Datasets C, D, and E are taken from the public EEG database of [36]. In the legends, iEEG stands for intracranial EEG. (A) Distributions of the Hurst exponent extracted from the EEG segments. (B–E) Trajectories of ApEn, SampEn, $RangeEn_A$, and $RangeEn_B$, respectively. For all entropy measures, the embedding dimension parameter m was fixed to 2. In each plot, error bars show one standard deviation.
