Transfer Entropy

The experts below are selected from a list of 3,636 experts worldwide, ranked by the ideXlab platform.

Jun Wang - One of the best experts on this subject based on the ideXlab platform.

  • Multi-scale symbolic Transfer Entropy analysis of EEG
    Physica A: Statistical Mechanics and its Applications, 2017
    Co-Authors: Jun Wang
    Abstract:

    From both global and local perspectives, we symbolize two kinds of EEG and analyze their dynamic and asymmetrical information using multi-scale Transfer Entropy. A multi-scale process with scale factors from 1 to 199 in steps of 2 is applied to EEG from healthy people and from epileptic patients; the coarse-grained sequences are then symbolized either by permutation with embedding dimension 3 or by a global approach. The forward and reverse symbol sequences are taken as the inputs of Transfer Entropy. The scale-factor intervals over which the two kinds of EEG are well separated by Entropy are (37, 57) for permutation and (65, 85) for the global approach. The permutation-based Transfer Entropy of the healthy and epileptic subjects differs most at scale factor 67, with values of 0.1137 and 0.1028; the corresponding values for global symbolization, 0.0641 and 0.0601, occur at scale factor 165. The results show that permutation, which incorporates the contribution of local information, discriminates better between the two groups and is more effective in our multi-scale Transfer Entropy analysis of EEG.
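
    A minimal sketch of the pipeline this abstract describes: coarse-grain a series at a given scale factor, symbolize it with ordinal (permutation) patterns of embedding dimension 3, and estimate symbolic Transfer Entropy between the symbol sequences. This is an illustration under stated assumptions (a plug-in estimator with history length 1, random surrogate data in place of real EEG), not the authors' code.

```python
# Illustrative sketch, not the authors' implementation: multi-scale
# coarse-graining, ordinal (permutation) symbolization with embedding
# dimension 3, and a plug-in symbolic Transfer Entropy with history 1.
import numpy as np
from collections import Counter

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def ordinal_symbols(x, dim=3):
    """Map each window of `dim` samples to an integer encoding its rank order."""
    windows = np.lib.stride_tricks.sliding_window_view(x, dim)
    ranks = np.argsort(windows, axis=1)
    return (ranks * dim ** np.arange(dim)).sum(axis=1)

def symbolic_te(src, tgt):
    """Plug-in symbolic TE(src -> tgt), history length 1, in bits."""
    n = len(tgt) - 1
    trip = Counter(zip(tgt[1:], tgt[:-1], src[:-1]))  # (x_{t+1}, x_t, y_t)
    pair = Counter(zip(tgt[:-1], src[:-1]))           # (x_t, y_t)
    self2 = Counter(zip(tgt[1:], tgt[:-1]))           # (x_{t+1}, x_t)
    hist = Counter(tgt[:-1])                          # x_t
    te = 0.0
    for (x1, x0, y0), c in trip.items():
        p_full = c / pair[(x0, y0)]          # p(x_{t+1} | x_t, y_t)
        p_self = self2[(x1, x0)] / hist[x0]  # p(x_{t+1} | x_t)
        te += (c / n) * np.log2(p_full / p_self)
    return te

# Surrogate data in place of real EEG recordings.
rng = np.random.default_rng(0)
eeg_a, eeg_b = rng.standard_normal(20000), rng.standard_normal(20000)
sym_a = ordinal_symbols(coarse_grain(eeg_a, scale=67))
sym_b = ordinal_symbols(coarse_grain(eeg_b, scale=67))
print(symbolic_te(sym_a, sym_b))
```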

  • Multivariate Symbolic Transfer Entropy Analysis of Different Age Groups
    DEStech Transactions on Biology and Health, 2017
    Co-Authors: Xiao Li, Min Wu, Jun Wang
    Abstract:

    The life activities of the human body are closely related to age, and the life process is constantly associated with changes in Entropy. In this paper, a multivariate symbolic Transfer Entropy algorithm was proposed based on traditional symbolic Transfer Entropy, and a dynamic adaptive method was adopted to symbolize the time series. This algorithm was used to study the relationship between multivariate symbolic Transfer Entropy and age. Experiments showed that the multivariate symbolic Transfer Entropy of middle-aged men is greater than that of the young. Hypothesis testing confirms that multivariate symbolic Transfer Entropy differs significantly between age groups. Thus, the multivariate symbolic Transfer Entropy algorithm can provide a new method for the study of other physiological signals.
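
    The "dynamic adaptive" symbolization is only named here, not specified. As a purely hypothetical illustration of the general idea, not necessarily the authors' scheme, a symbolizer can adapt its thresholds to local signal statistics:

```python
# Purely hypothetical illustration of an adaptive symbolization (the paper
# names but does not specify its method here): each sample is compared to
# a band derived from a trailing window, so the thresholds track local
# signal statistics instead of being fixed in advance.
import numpy as np

def adaptive_symbolize(x, window=50, alpha=0.5):
    """Return symbols 0/1/2: below, inside, or above an adaptive band."""
    symbols = np.empty(len(x) - window, dtype=int)
    for i in range(window, len(x)):
        seg = x[i - window:i]
        lo = seg.mean() - alpha * seg.std()
        hi = seg.mean() + alpha * seg.std()
        symbols[i - window] = 0 if x[i] < lo else (2 if x[i] > hi else 1)
    return symbols

rng = np.random.default_rng(3)
signal = np.cumsum(rng.standard_normal(2000))    # non-stationary toy signal
print(np.bincount(adaptive_symbolize(signal)))   # occupancy of each symbol
```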

  • Analysis of Sleep Staging Based on Multivariate Symbolic Transfer Entropy
    DEStech Transactions on Engineering and Technology Research, 2017
    Co-Authors: Min Wu, Xiao Li, Jun Wang
    Abstract:

    Physiological electrical signals recorded during sleep, such as the ECG, EEG and EMG, contain a great deal of physiological information, and their analysis can provide effective advice for sleep staging or the diagnosis of sleep disorders. The symbolic Transfer Entropy algorithm is applied to study the interaction information of physiological signals. However, because the traditional symbolic Transfer Entropy algorithm only considers the relationship between one or two variables, this paper used multivariate symbolic Transfer Entropy, built on the traditional algorithm, to capture the coupling relationships among multiple variables. When dividing the time series we used two methods: static partitioning and dynamic adaptive partitioning. Comparing the multivariate symbolic Transfer Entropy values in the awake and sleep periods of different subjects shows that the Entropy of the awake period is significantly higher than that of the sleep period, even though the absolute Entropy values vary between subjects. A t-test confirms a significant difference between the two periods, consistent with the theory that the degree of brain disorder, and hence the Entropy, decreases as sleep deepens. The multivariate symbolic Transfer Entropy algorithm is thus effective in distinguishing the human awake period from the sleep period and can provide an effective approach for research on other physiological signals.
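
    The group comparison described above comes down to a two-sample significance test on per-subject Entropy values. A minimal sketch with made-up placeholder numbers (not data from the paper):

```python
# Minimal sketch of the awake-vs-sleep comparison: a two-sample t-test on
# per-subject multivariate symbolic Transfer Entropy values. The numbers
# are made-up placeholders, not data from the paper.
import numpy as np
from scipy import stats

te_awake = np.array([0.42, 0.39, 0.45, 0.41, 0.44])  # hypothetical values
te_sleep = np.array([0.31, 0.28, 0.33, 0.30, 0.29])  # hypothetical values

t_stat, p_value = stats.ttest_ind(te_awake, te_sleep)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # small p => significant difference
```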

  • Analysis of Magnetoencephalography based on symbolic Transfer Entropy
    2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 2017
    Co-Authors: Bihan Zhang, Chuchu Ding, Jun Wang
    Abstract:

    In this paper, we symbolize two kinds of Magnetoencephalography (MEG) channels and analyze their coupling relationships using the symbolic Transfer Entropy algorithm. We recorded MEG signals from six patients with depressive disorder and nine healthy subjects stimulated by positive, neutral, and negative emotional pictures, and explored the coupling relationships between different MEG channels. The results show obvious differences in the correlations between channels MLP32 and MRP32 under positive emotional stimulus, MLP31 and MRP31 under neutral emotional stimulus, and MLP53 and MRP53 under negative emotional stimulus. In general, these channel pairs are more strongly correlated in patients with major depression, which makes it possible to distinguish depression patients from the general population. This shows that symbolic Transfer Entropy analysis of MEG channels can separate normal from case samples, which is of significance for clinical pathological estimation and diagnosis.

  • Multivariate symbol Transfer Entropy analysis on epileptic EEG
    Proceedings of the 2015 6th International Conference on Manufacturing Science and Engineering, 2015
    Co-Authors: Jun Wang
    Abstract:

    Epilepsy is caused by abnormal synchronous discharge of neurons in the brain, which is the main basis for its diagnosis. The use of complexity theory to study epileptic signals has become a hot topic, and symbolic Transfer Entropy, used as a characteristic of epilepsy, plays an increasingly important role in EEG feature extraction for epilepsy research. However, symbolic Transfer Entropy is generally used to measure the dynamic characteristics of, and directional information between, two variables, and it ignores interactions among multiple variables. In this paper, epileptic EEG signals are analyzed using multivariate symbolic Transfer Entropy. By choosing the lead signals and the signal length and analyzing the robustness, we show that the method can distinguish between healthy subjects and patients with epilepsy, and that the algorithm is robust and reliable. The findings will help clinical diagnosis.

Michael Wibral - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Transfer Entropy analysis of non-stationary neural time series
    PLOS ONE, 2014
    Co-Authors: Patricia Wollstadt, Raul Vicente, Mario Martinez-Zarzuela, Francisco Javier Diaz-Pernas, Michael Wibral
    Abstract:

    Information theory allows us to investigate information processing in neural systems in terms of information Transfer, storage and modification. The measure of information Transfer in particular, Transfer Entropy, has seen a dramatic surge of interest in neuroscience. Estimating Transfer Entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of the processes, which allows observations to be pooled over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption can be avoided by estimating Transfer Entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. In this work we therefore combine the ensemble method with a recently proposed Transfer Entropy estimator to make Transfer Entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that copes with the increased computational demand of the ensemble method in practical applications; in particular, we use a massively parallel implementation on a graphics processing unit to handle the computationally heaviest parts of the estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes, and we demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscience data, we expect it to be applicable in a variety of fields concerned with the analysis of information Transfer in complex biological, social, and artificial systems.
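
    The core of the ensemble method is to pool observations across trials at a fixed time point, rather than over time within one recording, so no stationarity assumption is needed. A simplified sketch of that idea, using a coarse binned plug-in estimator instead of the paper's GPU-based nearest-neighbour estimator:

```python
# Simplified sketch of the ensemble idea (not the paper's implementation):
# at a fixed time t, each trial contributes one observation of
# (x_{t+1}, x_t, y_t), so probabilities are estimated over trials.
import numpy as np

def te_across_trials(src, tgt, t, bins=8):
    """TE(src -> tgt) at time t, estimated over the trial ensemble (bits).

    src, tgt: arrays of shape (n_trials, n_samples).
    """
    digitize = lambda a: np.digitize(a, np.histogram_bin_edges(a, bins))
    x1, x0, y0 = digitize(tgt[:, t + 1]), digitize(tgt[:, t]), digitize(src[:, t])
    te = 0.0
    for a, b, c in set(zip(x1, x0, y0)):
        p_joint = np.mean((x1 == a) & (x0 == b) & (y0 == c))
        p_pair = np.mean((x0 == b) & (y0 == c))
        p_self = np.mean((x1 == a) & (x0 == b)) / np.mean(x0 == b)
        te += p_joint * np.log2(p_joint / (p_pair * p_self))
    return te

# Toy ensemble: 500 trials in which the source drives the target at lag 1.
rng = np.random.default_rng(1)
src = rng.standard_normal((500, 100))
tgt = np.roll(src, 1, axis=1) + 0.5 * rng.standard_normal((500, 100))
print(te_across_trials(src, tgt, t=50))
```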

  • Transfer Entropy in neuroscience
    2014
    Co-Authors: Michael Wibral, Raul Vicente, Michael Lindner
    Abstract:

    Information Transfer is a key component of information processing, next to information storage and modification. Information Transfer can be measured by a variety of directed information measures, of which Transfer Entropy is the most popular and the most principled one. This chapter presents the basic concepts behind Transfer Entropy in an intuitive fashion, including graphical depictions of the key concepts. It also includes a special section devoted to the correct interpretation of the measure, especially with respect to concepts of causality. The chapter also provides an overview of estimation techniques for Transfer Entropy and pointers to popular open-source toolboxes, and it introduces recent extensions of Transfer Entropy that serve to estimate the delays involved in information Transfer in a network. By touching upon alternative measures of information Transfer, such as Massey’s directed information and Runge’s momentary information Transfer, it may serve as a frame of reference for more specialised treatments and as an overview of the field of information Transfer studies in general.
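
    For reference, the measure this chapter is built around is commonly defined, following Schreiber's discrete-time formulation, as:

```latex
% Transfer Entropy from a source Y to a target X (Schreiber's
% discrete-time formulation), with target history length k and
% source history length l:
T_{Y \to X} \;=\; \sum_{x_{t+1},\, x_t^{(k)},\, y_t^{(l)}}
p\!\left(x_{t+1},\, x_t^{(k)},\, y_t^{(l)}\right)
\log \frac{p\!\left(x_{t+1} \mid x_t^{(k)},\, y_t^{(l)}\right)}
          {p\!\left(x_{t+1} \mid x_t^{(k)}\right)}
```

    The numerator conditions the next target state on both the target's and the source's past, the denominator only on the target's own past; Transfer Entropy is thus the extra predictive information the source provides.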

  • Revisiting Wiener's principle of causality — interaction-delay reconstruction using Transfer Entropy and multivariate analysis on delay-weighted graphs
    2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2012
    Co-Authors: Michael Wibral, Patricia Wollstadt, Ulrich Meyer, Nicolae Pampu, Viola Priesemann, Raul Vicente
    Abstract:

    To understand the function of networks we have to identify not only the structure of their interactions but also the interaction timing, as compromised timing may disrupt network function. We demonstrate how both questions can be addressed using a modified estimator of Transfer Entropy. Transfer Entropy is an implementation of Wiener's principle of observational causality based on information theory, and it detects arbitrary linear and non-linear interactions. Using a modified estimator that uses delayed states of the driving system and independently optimized delayed states of the receiving system, we show that Transfer Entropy values peak when the assumed delay of the driving system's state equals the true interaction delay. In addition, we show how delays reconstructed from a bivariate Transfer Entropy analysis of a network can be used to label spurious interactions arising from cascade effects, and we apply this approach to local field potential (LFP) and magnetoencephalography (MEG) data.
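
    The delay-reconstruction recipe, scanning a candidate source delay u and taking the TE-maximizing u as the estimated interaction delay, is easy to illustrate. The toy below uses a linear-Gaussian TE (equivalent to Granger causality for Gaussian data) for brevity, rather than the authors' nonlinear estimator; the scan logic is the same:

```python
# Toy sketch of interaction-delay reconstruction: compute TE(src -> tgt)
# for a range of candidate source delays u and take the argmax. Uses a
# linear-Gaussian TE (a ratio of regression residual variances), not the
# authors' nonlinear estimator; the delay-scan logic is identical.
import numpy as np

def gaussian_te(src, tgt, u):
    """Linear-Gaussian TE(src -> tgt) with source delay u, in bits."""
    x1, x0, y = tgt[u:], tgt[u - 1:-1], src[:-u]
    def resvar(target, *regressors):
        A = np.column_stack([np.ones_like(target), *regressors])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.var(target - A @ beta)
    # Half the log ratio of residual variances without vs. with the source.
    return 0.5 * np.log2(resvar(x1, x0) / resvar(x1, x0, y))

rng = np.random.default_rng(2)
drive = rng.standard_normal(5000)
true_delay = 7
recv = np.concatenate([np.zeros(true_delay), drive[:-true_delay]])
recv += 0.4 * rng.standard_normal(recv.size)

delays = list(range(1, 15))
te_vals = [gaussian_te(drive, recv, u) for u in delays]
print("reconstructed delay:", delays[int(np.argmax(te_vals))])  # expect 7
```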

  • Transfer Entropy in magnetoencephalographic data: quantifying information flow in cortical and cerebellar networks
    Progress in Biophysics & Molecular Biology, 2011
    Co-Authors: Michael Wibral, Raul Vicente, Michael Lindner, Benjamin Rahm, Maria Rieder, Jochen Kaiser
    Abstract:

    The analysis of cortical and subcortical networks requires the identification of their nodes, and of the topology and dynamics of their interactions. Exploratory tools for the identification of nodes are available, e.g. magnetoencephalography (MEG) in combination with beamformer source analysis. Competing network topologies and interaction models can be investigated using dynamic causal modelling. However, we lack a method for the exploratory investigation of network topologies that chooses from the very large number of possible network graphs; ideally, such a method should not require a pre-specified model of the interaction. Transfer Entropy – an information-theoretic implementation of Wiener-type causality – is a method for the investigation of causal interactions (or information flow) that is independent of a pre-specified interaction model. We analysed MEG data from an auditory short-term memory experiment to assess whether the reconfiguration of networks involved in this task can be detected using Transfer Entropy. Transfer Entropy analysis of MEG source-level signals detected changes in the network between the different task types. These changes prominently involved the left temporal pole and the cerebellum – structures that have previously been implicated in auditory short-term or working memory. Thus, the analysis of information flow with Transfer Entropy at the source level may be used to derive hypotheses for further model-based testing.

Joseph T Lizier - One of the best experts on this subject based on the ideXlab platform.

  • Deriving pairwise Transfer Entropy from network structure and motifs
    Proceedings of The Royal Society A: Mathematical, Physical and Engineering Sciences, 2020
    Co-Authors: Leonardo Novelli, Joseph T Lizier, Fatihcan M Atay, Jurgen Jost
    Abstract:

    Transfer Entropy (TE) is an established method for quantifying directed statistical dependencies in neuroimaging and complex-systems datasets. The pairwise (or bivariate) TE from a source to a target...

  • Transfer Entropy in continuous time with applications to jump and neural spiking processes
    Physical Review E, 2017
    Co-Authors: Richard Spinney, Mikhail Prokopenko, Joseph T Lizier
    Abstract:

    Transfer Entropy has been used to quantify the directed flow of information between source and target variables in many complex systems. While Transfer Entropy was originally formulated in discrete time, in this paper we provide a framework for considering Transfer Entropy in continuous time systems, based on Radon-Nikodym derivatives between measures of complete path realizations. To describe the information dynamics of individual path realizations, we introduce the pathwise Transfer Entropy, the expectation of which is the Transfer Entropy accumulated over a finite time interval. We demonstrate that this formalism permits an instantaneous Transfer Entropy rate. These properties are analogous to the behavior of physical quantities defined along paths such as work and heat. We use this approach to produce an explicit form for the Transfer Entropy for pure jump processes, and highlight the simplified form in the specific case of point processes (frequently used in neuroscience to model neural spike trains). Finally, we present two synthetic spiking neuron model examples to exhibit the pertinent features of our formalism, namely, that the information flow for point processes consists of discontinuous jump contributions (at spikes in the target) interrupting a continuously varying contribution (relating to waiting times between target spikes). Numerical schemes based on our formalism promise significant benefits over existing strategies based on discrete time formalisms.
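
    Schematically, and with notation simplified relative to the paper, the pathwise construction replaces conditional probability mass functions with Radon-Nikodym derivatives between path measures; the expected log-ratio over an interval is the accumulated Transfer Entropy, and its time derivative gives an instantaneous rate:

```latex
% Schematic only; the paper's precise construction conditions on
% appropriate histories of both processes.
\mathcal{T}_{Y \to X}(t) \;=\;
\mathbb{E}\!\left[\,\ln
\frac{\mathrm{d}\mathbb{P}\!\left(x_{[0,t]} \mid y_{[0,t]}\right)}
     {\mathrm{d}\mathbb{P}\!\left(x_{[0,t]}\right)}
\,\right],
\qquad
\dot{\mathcal{T}}_{Y \to X}(t) \;=\;
\frac{\mathrm{d}}{\mathrm{d}t}\,\mathcal{T}_{Y \to X}(t)
```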

  • An introduction to Transfer Entropy: information flow in complex systems
    2016
    Co-Authors: Terry Bossomaier, Lionel Barnett, Michael Harré, Joseph T Lizier
    Abstract:

    This book considers a relatively new metric in complex systems, Transfer Entropy, derived from a series of measurements, usually a time series. After a qualitative introduction and a chapter that explains the key ideas from statistics required to understand the text, the authors then present information theory and Transfer Entropy in depth. A key feature of the approach is the authors' work to show the relationship between information flow and complexity. The later chapters demonstrate information Transfer in canonical systems, and applications, for example in neuroscience and in finance. The book will be of value to advanced undergraduate and graduate students and researchers in the areas of computer science, neuroscience, physics, and engineering.

  • Miscellaneous Applications of Transfer Entropy
    An Introduction to Transfer Entropy, 2016
    Co-Authors: Terry Bossomaier, Lionel Barnett, Michael Harré, Joseph T Lizier
    Abstract:

    The previous chapters have outlined the use of Transfer Entropy in some of its major application domains: providing insights into canonical complex systems and financial markets.

  • Fisher Transfer Entropy: Quantifying the gain in transient sensitivity
    Proceedings of The Royal Society A: Mathematical, Physical and Engineering Sciences, 2015
    Co-Authors: Mikhail Prokopenko, Joseph T Lizier, Lionel Barnett, Michael Harré, Oliver Obst, X. Rosalind Wang
    Abstract:

    We introduce a novel measure, Fisher Transfer Entropy (FTE), which quantifies a gain in sensitivity to a control parameter of a state transition, in the context of another observable source. The new measure captures both transient and contextual qualities of Transfer Entropy and the sensitivity characteristics of Fisher information. FTE is exemplified for a ferromagnetic two-dimensional lattice Ising model with Glauber dynamics and is shown to diverge at the critical point.

Raul Vicente - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Transfer Entropy analysis of non-stationary neural time series
    PLOS ONE, 2014
    Co-Authors: Patricia Wollstadt, Raul Vicente, Mario Martinez-Zarzuela, Francisco Javier Diaz-Pernas, Michael Wibral
    Abstract:

    (Identical to the abstract listed under Michael Wibral above.)

  • Transfer Entropy in neuroscience
    2014
    Co-Authors: Michael Wibral, Raul Vicente, Michael Lindner
    Abstract:

    (Identical to the abstract listed under Michael Wibral above.)

  • Revisiting Wiener's principle of causality — interaction-delay reconstruction using Transfer Entropy and multivariate analysis on delay-weighted graphs
    2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2012
    Co-Authors: Michael Wibral, Patricia Wollstadt, Ulrich Meyer, Nicolae Pampu, Viola Priesemann, Raul Vicente
    Abstract:

    (Identical to the abstract listed under Michael Wibral above.)

  • Transfer Entropy in magnetoencephalographic data: quantifying information flow in cortical and cerebellar networks
    Progress in Biophysics & Molecular Biology, 2011
    Co-Authors: Michael Wibral, Raul Vicente, Michael Lindner, Benjamin Rahm, Maria Rieder, Jochen Kaiser
    Abstract:

    (Identical to the abstract listed under Michael Wibral above.)

Mikhail Prokopenko - One of the best experts on this subject based on the ideXlab platform.

  • Transfer Entropy in continuous time with applications to jump and neural spiking processes
    Physical Review E, 2017
    Co-Authors: Richard Spinney, Mikhail Prokopenko, Joseph T Lizier
    Abstract:

    (Identical to the abstract listed under Joseph T Lizier above.)

  • Inferring Coupling of Distributed Dynamical Systems via Transfer Entropy
    arXiv: Artificial Intelligence, 2016
    Co-Authors: Oliver M. Cliff, Mikhail Prokopenko, Robert Fitch
    Abstract:

    In this work, we are interested in structure learning for a set of spatially distributed dynamical systems, where individual subsystems are coupled via latent variables and observed through a filter. We represent this model as a directed acyclic graph (DAG) that characterises the unidirectional coupling between subsystems. Standard approaches to structure learning are not applicable in this framework because of the hidden variables; however, we can exploit the properties of certain dynamical systems to formulate exact methods based on state-space reconstruction. We approach the problem by using reconstruction theorems to analytically derive a tractable expression for the KL divergence of a candidate DAG from the observed dataset. We show this measure can be decomposed as a function of two information-theoretic measures, Transfer Entropy and stochastic interaction. We then present two mathematically robust scoring functions based on Transfer Entropy and statistical independence tests. These results support the previously held conjecture that Transfer Entropy can be used to infer effective connectivity in complex networks.

  • Fisher Transfer Entropy: Quantifying the gain in transient sensitivity
    Proceedings of The Royal Society A: Mathematical, Physical and Engineering Sciences, 2015
    Co-Authors: Mikhail Prokopenko, Joseph T Lizier, Lionel Barnett, Michael Harré, Oliver Obst, X. Rosalind Wang
    Abstract:

    (Identical to the abstract listed under Joseph T Lizier above.)

  • Transfer Entropy and transient limits of computation
    Scientific Reports, 2015
    Co-Authors: Mikhail Prokopenko, Joseph T Lizier
    Abstract:

    Transfer Entropy is a recently introduced information-theoretic measure quantifying directed statistical coherence between spatiotemporal processes, and is widely used in diverse fields ranging from finance to neuroscience. However, its relationships to fundamental limits of computation, such as Landauer's limit, remain unknown. Here we show that in order to increase Transfer Entropy (predictability) by one bit, heat flow must match or exceed Landauer's limit. Importantly, we generalise Landauer's limit to bi-directional information dynamics for non-equilibrium processes, revealing that the limit applies to prediction, in addition to retrodiction (information erasure). Furthermore, the results are related to negentropy, and to Bremermann's limit and the Bekenstein bound, producing, perhaps surprisingly, lower bounds on the computational deceleration and information loss incurred during an increase in predictability about the process. The identified relationships set new computational limits in terms of fundamental physical quantities, and establish Transfer Entropy as a central measure connecting information theory, thermodynamics and the theory of computation.
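
    Written out with the standard value of Landauer's limit (k_B T ln 2 of heat per bit at temperature T), the headline bound stated in this abstract reads, as a restatement rather than a derivation:

```latex
% Restatement of the bound above: increasing the Transfer Entropy into a
% process by \Delta T_{Y \to X} bits requires at least Landauer's limit
% of heat flow per bit, at environment temperature T:
\Delta Q \;\ge\; k_B\, T \ln 2 \;\times\; \Delta T_{Y \to X}\ \text{[bits]}
```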

  • On thermodynamic interpretation of Transfer Entropy
    Entropy, 2013
    Co-Authors: Mikhail Prokopenko, Joseph T Lizier, Don Price
    Abstract:

    We propose a thermodynamic interpretation of Transfer Entropy near equilibrium, using a specialised Boltzmann’s principle. The approach relates conditional probabilities to the probabilities of the corresponding state transitions. This in turn characterises Transfer Entropy as a difference of two Entropy rates: the rate for a resultant transition and another rate for a possibly irreversible transition within the system affected by an additional source. We then show that this difference, the local Transfer Entropy, is proportional to the external Entropy production, possibly due to irreversibility. Near equilibrium, Transfer Entropy is also interpreted as the difference in equilibrium stabilities with respect to two scenarios: a default case and the case with an additional source. Finally, we demonstrated that such a thermodynamic treatment is not applicable to information flow, a measure of causal effect.
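
    In standard information-theoretic notation, the familiar counterpart of this decomposition writes Transfer Entropy as the difference between two conditional Entropy rates of the target, without and with conditioning on the source:

```latex
% Transfer Entropy as a difference of two conditional Entropy rates of
% the target X, with target history length k and source history length l:
T_{Y \to X} \;=\; H\!\left(X_{t+1} \mid X_t^{(k)}\right)
\;-\; H\!\left(X_{t+1} \mid X_t^{(k)},\, Y_t^{(l)}\right)
```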