The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform.
Emery N. Brown: one of the best experts on this subject, based on the ideXlab platform.

State-space models for the analysis of neural spike train and behavioral data
Encyclopedia of Computational Neuroscience, 2014. Co-authors: Zhe Chen, Emery N. Brown. Abstract: The state-space paradigm has been adapted to the analysis of neuroscience data in which the observation model is either a point process or a time series of binary observations and the state model is typically a linear Gaussian process. The paradigm has been applied to a number of problems, including neural spike train decoding, analysis of receptive field dynamics, analyses of learning, neural prosthetic control, and control of brain states under anesthesia. The state-space paradigm for analyses of point processes and time series of discrete binary observations was developed for the analysis of neural spike train and behavioral data (Brown et al. 1998; Smith and Brown 2003). The state-space point process (SSPP) paradigm has two standard components: the state equation defines the system dynamics, and the observation equation defines how the system is measured. In the SSPP framework the observations can be point processes or time series of discrete binary responses. Point processes are binary (0-1) events defined in continuous time. They are an ideal framework for analyzing series of neuronal action potentials, where an event is 1 if there is an action potential (spike) and 0 otherwise. Data in behavioral studies commonly consist of time series of binary responses, such as in a learning experiment where 1 denotes a correct response and 0 an incorrect response. In systems neuroscience experiments, the state equation defines the signal, stimulus, or brain state that is represented by the neural activity or the behavior.
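As a concrete illustration of the SSPP recursion for binary behavioral data, here is a minimal sketch in the style of the Smith and Brown learning analysis: a random-walk state observed through Bernoulli (correct/incorrect) trials via a logistic link, with one Newton step per trial for the posterior mode. All function and variable names are our own, not from the cited papers, and the parameter values are illustrative only.

```python
import math

def bernoulli_ssm_filter(obs, sigma2=0.005, x0=0.0, v0=1.0):
    """Recursive filter for a random-walk state observed through
    Bernoulli (correct/incorrect) trials via a logistic link.
    One Newton step per trial approximates the posterior mode."""
    xs, vs = [], []
    x, v = x0, v0
    for y in obs:
        v_pred = v + sigma2              # predict: x_k = x_{k-1} + noise
        p = 1.0 / (1.0 + math.exp(-x))   # predicted response probability
        x = x + v_pred * (y - p)         # update posterior mode
        v = 1.0 / (1.0 / v_pred + p * (1.0 - p))  # posterior variance
        xs.append(x)
        vs.append(v)
    return xs, vs

# a learning experiment: mostly incorrect early, mostly correct late
responses = [0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1]
states, variances = bernoulli_ssm_filter(responses)
probs = [1.0 / (1.0 + math.exp(-x)) for x in states]
```

The filtered probabilities rise across trials, tracking the subject's learning curve; a full analysis would add a backward smoothing pass and estimate sigma2 by EM.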

nSTAT: an open-source neural spike train analysis toolbox for Matlab
PMC, 2012. Co-authors: Emery N. Brown, Iahn Cajigas, W. Q. Malik. Abstract: Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point-process generalized linear model (PPGLM) framework has been applied successfully to problems ranging from neuroendocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PPGLM algorithms, together with the problem-specific modifications required for their use, limits wide application of these techniques. In an effort to make existing PPGLM methods more accessible to the neuroscience community, we have developed nSTAT, an open-source neural spike train analysis toolbox for Matlab. By adopting an object-oriented programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems, including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and supports decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems.
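To make the PPGLM idea concrete, the computational core is a Poisson regression with a log link fit by Newton-Raphson. The sketch below is our own minimal illustration (not the nSTAT API); the covariate and parameter values are invented, and the "counts" are set to their exact means so the maximum-likelihood estimate equals the true parameters.

```python
import math

def fit_poisson_glm(x, y, iters=50):
    """Fit spike-count rates lambda_i = exp(b0 + b1*x_i) by
    Newton-Raphson on the Poisson log-likelihood (the computational
    core of a PPGLM; this is not the nSTAT API)."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        lam = [math.exp(b0 + b1 * xi) for xi in x]
        # score (gradient of the log-likelihood)
        g0 = sum(yi - li for yi, li in zip(y, lam))
        g1 = sum((yi - li) * xi for yi, li, xi in zip(y, lam, x))
        # observed information matrix
        h00 = sum(lam)
        h01 = sum(li * xi for li, xi in zip(lam, x))
        h11 = sum(li * xi * xi for li, xi in zip(lam, x))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# covariate and counts whose means follow exp(0.2 + 0.5*x) exactly,
# so the maximum-likelihood estimate equals the true parameters
xs = [i / 10.0 for i in range(-10, 11)]
ys = [math.exp(0.2 + 0.5 * xi) for xi in xs]
b0, b1 = fit_poisson_glm(xs, ys)
```

In a real spike-train analysis, y would be binned spike counts and x a stimulus or history covariate, and goodness-of-fit would be checked with the time-rescaling theorem.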

State-space analysis of time-varying higher-order spike correlation for multiple neural spike train data
PLOS Computational Biology, 2012. Co-authors: Hideaki Shimazaki, Emery N. Brown, Sonja Grün, Shun-ichi Amari. Abstract: Precise spike coordination between the spiking activities of multiple neurons is suggested as an indication of coordinated network activity in active cell assemblies. Spike correlation analysis aims to identify such cooperative network activity by detecting excess spike synchrony in simultaneously recorded multiple neural spike sequences. Cooperative activity is expected to organize dynamically during behavior and cognition; therefore, currently available analysis techniques must be extended to enable the estimation of multiple time-varying spike interactions between neurons simultaneously. In particular, new methods must take advantage of the simultaneous observations of multiple neurons by addressing their higher-order dependencies, which cannot be revealed by pairwise analyses alone. In this paper, we develop a method for estimating time-varying spike interactions by means of a state-space analysis. Discretized parallel spike sequences are modeled as multivariate binary processes using a log-linear model that provides a well-defined measure of higher-order spike correlation in an information-geometry framework. We construct a recursive Bayesian filter/smoother for the extraction of spike interaction parameters. This method can simultaneously estimate the dynamic pairwise spike interactions of multiple single neurons, thereby extending the Ising/spin-glass model analysis of multiple neural spike train data to a nonstationary analysis. Furthermore, the method can estimate dynamic higher-order spike interactions. To validate the inclusion of the higher-order terms in the model, we construct an approximation method to assess the goodness-of-fit to spike data. In addition, we formulate a test method for the presence of higher-order spike correlation even in nonstationary spike data, e.g., data from awake behaving animals. The utility of the proposed methods is tested using simulated spike data with known underlying correlation dynamics. Finally, we apply the methods to neural spike data simultaneously recorded from the motor cortex of an awake monkey and demonstrate that the higher-order spike correlation organizes dynamically in relation to a behavioral demand.
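For the simplest (pairwise, stationary) case, the log-linear interaction parameter can be computed directly from the joint spike probabilities; the paper's contribution is to estimate its time-varying and higher-order counterparts with a Bayesian filter/smoother. A toy stationary version, with invented data:

```python
import math

def pairwise_theta(s1, s2, eps=1e-9):
    """Stationary pairwise interaction of the log-linear model:
    theta = log(p11*p00 / (p10*p01)); theta > 0 means excess synchrony
    beyond what independent firing rates predict. A small pseudo-count
    eps avoids log(0) for patterns that never occur."""
    n = len(s1)
    counts = {(a, b): eps for a in (0, 1) for b in (0, 1)}
    for a, b in zip(s1, s2):
        counts[(a, b)] += 1
    p = {k: c / n for k, c in counts.items()}
    return math.log(p[(1, 1)] * p[(0, 0)] / (p[(1, 0)] * p[(0, 1)]))

# two binarized spike sequences whose spikes mostly co-occur
a = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0]
b = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0]
theta_sync = pairwise_theta(a, b)
```

The triple-wise term extends this with the log-ratio of all eight joint probabilities, which is what "higher-order" means in the information-geometry framework.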

An analysis of hippocampal spatio-temporal representations using a Bayesian algorithm for neural spike train decoding
IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2005. Co-authors: Riccardo Barbieri, Matthew A. Wilson, Loren M. Frank, Emery N. Brown. Abstract: Neural spike train decoding algorithms are important tools for characterizing how ensembles of neurons represent biological signals. We present a Bayesian neural spike train decoding algorithm based on a point process model of individual neurons, a linear stochastic state-space model of the biological signal, and a temporal latency parameter. The latency parameter represents the temporal lead or lag between the biological signal and the ensemble spiking activity. We use the algorithm to study whether the representation of position by the ensemble spiking activity of pyramidal neurons in the CA1 region of the rat hippocampus is more consistent with prospective coding, i.e., future position, or retrospective coding, i.e., past position. Using 44 simultaneously recorded neurons and an ensemble delay latency of 400 ms, the median decoding error was 5.1 cm during 10 min of foraging in an open circular environment. The true coverage probability for the algorithm's 0.95 confidence regions was 0.71. These results illustrate how the Bayesian neural spike train decoding paradigm may be used to investigate spatio-temporal representations of position by an ensemble of hippocampal neurons.
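The decoding recursion itself is compact. Below is a minimal grid-based sketch of a Bayesian point-process decoder of 1-D position; it is our own simplification (Gaussian place fields, a random-walk prior, per-bin Bernoulli likelihood, and no latency parameter), not the paper's algorithm, and all parameter values are invented.

```python
import math

def decode_position(obs_seq, centers, grid, rate_max=15.0, width=0.1,
                    dt=0.001, drift=0.05):
    """Grid-based recursive Bayesian decoder: Gaussian place-field
    rates, a per-bin spike/no-spike likelihood, and a random-walk
    prior (the paper's latency handling is omitted)."""
    def rate(c, x):
        return rate_max * math.exp(-0.5 * ((x - c) / width) ** 2) + 0.1

    post = [1.0 / len(grid)] * len(grid)
    estimates = []
    for obs in obs_seq:                      # obs: 0/1 per cell, one bin
        new = []
        for x in grid:
            # one-step random-walk prior: smooth the previous posterior
            prior = sum(post[k] * math.exp(-0.5 * ((grid[k] - x) / drift) ** 2)
                        for k in range(len(grid)))
            like = 1.0
            for c, y in zip(centers, obs):
                lam = rate(c, x) * dt        # expected spikes this bin
                like *= lam if y else (1.0 - lam)
            new.append(prior * like)
        z = sum(new)
        post = [w / z for w in new]
        estimates.append(max(zip(post, grid))[1])   # MAP position
    return estimates

grid = [i / 20.0 for i in range(21)]
centers = [0.2, 0.8]                 # place-field centers of two cells
obs_seq = [(1, 0)] * 20              # only the 0.2-field cell fires
est = decode_position(obs_seq, centers, grid)
```

With spikes only from the cell whose field is at 0.2, the MAP estimate converges near 0.2; adding the latency parameter amounts to evaluating this likelihood against a time-shifted trajectory.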

Multiple neural spike train data analysis: state-of-the-art and future challenges
Nature Neuroscience, 2004. Co-authors: Emery N. Brown, Robert E. Kass, Partha P. Mitra. Abstract: Multiple electrodes are now a standard tool in neuroscience research, making it possible to study the simultaneous activity of several neurons in a given brain region or across different regions. The data from multielectrode studies present important analysis challenges that must be resolved for optimal use of these neurophysiological measurements to answer questions about how the brain works. Here we review statistical methods for the analysis of multiple neural spike train data and discuss future challenges for methodology research.
Sonja Grün

Detection and evaluation of spatio-temporal spike patterns in massively parallel spike train data with SPADE
Frontiers in Computational Neuroscience, 2017. Co-authors: Pietro Quaglio, Alper Yegenoglu, Emiliano Torre, Dominik Endres, Sonja Grün. Abstract: Repeated precise sequences of spikes are largely considered a signature of activation of cell assemblies. These repeated sequences are commonly known under the name of spatio-temporal patterns (STPs). STPs are hypothesized to play a role in the communication of information in the computational process operated by the cerebral cortex. A variety of statistical methods for the detection of STPs have been developed and applied to electrophysiological recordings, but such methods scale poorly with the current size of available parallel spike train recordings (more than 100 neurons). In this work, we introduce a novel method capable of overcoming the computational and statistical limits of existing analysis techniques in detecting repeating STPs within massively parallel spike trains (MPST). We employ advanced data mining techniques to efficiently extract repeating sequences of spikes from the data. Then, we introduce and compare two alternative approaches to distinguish statistically surprising patterns from chance sequences. The first approach uses a measure known as conceptual stability, of which we investigate a computationally cheap approximation for applications to such large data sets. The second approach is based on the evaluation of pattern statistical significance. In particular, we provide an extension to STPs of a method we recently introduced for the evaluation of statistical significance of synchronous spike patterns. The performance of the two approaches is evaluated in terms of computational load and statistical power on a variety of artificial data sets that replicate specific features of experimental data. Both methods provide an effective and robust procedure for detection of STPs in MPST data. The method based on significance evaluation shows the best overall performance, although at a higher computational cost. We name the novel procedure the spatio-temporal Spike PAttern Detection and Evaluation (SPADE) analysis.
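The mining step can be illustrated with a toy stand-in: slide a window over binned parallel spike trains and count how often each set of (neuron, lag) spike positions recurs. Real SPADE uses frequent itemset mining (e.g., FP-growth) and surrogate-based significance testing, both of which are omitted here; the data and window size are invented.

```python
from collections import Counter

def count_stps(trains, window=3):
    """Count repeated spatio-temporal patterns: within each sliding
    window, the pattern is the set of (neuron, lag) spike positions.
    A toy stand-in for SPADE's frequent-pattern mining step."""
    T = len(trains[0])
    counts = Counter()
    for t in range(T - window + 1):
        pattern = frozenset((n, lag)
                            for n, tr in enumerate(trains)
                            for lag in range(window) if tr[t + lag])
        if len(pattern) >= 2:          # skip empty/singleton windows
            counts[pattern] += 1
    return counts

# three neurons; "n0 fires, then n1 one bin later" repeats three times
trains = [
    [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
]
c = count_stps(trains)
```

The repeating sequence shows up as the pattern {(0, 0), (1, 1)} with count 3; deciding whether such a count is surprising is exactly the significance-evaluation problem the paper addresses.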

State-space analysis of time-varying higher-order spike correlation for multiple neural spike train data
PLOS Computational Biology, 2012. Co-authors: Hideaki Shimazaki, Emery N. Brown, Sonja Grün, Shun-ichi Amari. Abstract: Precise spike coordination between the spiking activities of multiple neurons is suggested as an indication of coordinated network activity in active cell assemblies. Spike correlation analysis aims to identify such cooperative network activity by detecting excess spike synchrony in simultaneously recorded multiple neural spike sequences. Cooperative activity is expected to organize dynamically during behavior and cognition; therefore, currently available analysis techniques must be extended to enable the estimation of multiple time-varying spike interactions between neurons simultaneously. In particular, new methods must take advantage of the simultaneous observations of multiple neurons by addressing their higher-order dependencies, which cannot be revealed by pairwise analyses alone. In this paper, we develop a method for estimating time-varying spike interactions by means of a state-space analysis. Discretized parallel spike sequences are modeled as multivariate binary processes using a log-linear model that provides a well-defined measure of higher-order spike correlation in an information-geometry framework. We construct a recursive Bayesian filter/smoother for the extraction of spike interaction parameters. This method can simultaneously estimate the dynamic pairwise spike interactions of multiple single neurons, thereby extending the Ising/spin-glass model analysis of multiple neural spike train data to a nonstationary analysis. Furthermore, the method can estimate dynamic higher-order spike interactions. To validate the inclusion of the higher-order terms in the model, we construct an approximation method to assess the goodness-of-fit to spike data. In addition, we formulate a test method for the presence of higher-order spike correlation even in nonstationary spike data, e.g., data from awake behaving animals. The utility of the proposed methods is tested using simulated spike data with known underlying correlation dynamics. Finally, we apply the methods to neural spike data simultaneously recorded from the motor cortex of an awake monkey and demonstrate that the higher-order spike correlation organizes dynamically in relation to a behavioral demand.
Benjamin Lindner

Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks
Frontiers in Computational Neuroscience, 2018. Co-authors: Rodrigo Felipe de Oliveira Pena, Sebastian Vellmer, Davide Bernardi, Antonio C. Roque, Benjamin Lindner. Abstract: Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics that resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations), and it is generally not well understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters, and (ii) the number and strength of the input connections are random (Erdős–Rényi topology) and thus differ among neurons. In all heterogeneous cases, neurons are lumped into different classes, each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters, as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.

Self-consistent determination of the spike train power spectrum in a neural network with sparse connectivity
Frontiers in Computational Neuroscience, 2014. Co-authors: Benjamin Dummer, Stefan Wieland, Benjamin Lindner. Abstract: A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies, as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells is in most cases far from Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study self-consistent statistics of input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same inter-spike interval density as observed in the previous generation, and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random, sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime, close to a state of balanced synaptic input from the network, our iterative schemes provide an excellent approximation to the autocorrelation of spike trains in the recurrent network.
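The generation loop can be illustrated with a drastically simplified scalar caricature: instead of matching the full power spectrum, we tie only the noise intensity of the surrogate input to the previous generation's output firing rate and iterate to a fixed point. The coupling rule and all parameter values here are invented for illustration; only the structure of the iteration mirrors the paper's scheme.

```python
import math
import random

def lif_rate(mu, sigma, dt=0.001, T=100.0, seed=1):
    """Firing rate of a leaky integrate-and-fire neuron (threshold 1,
    reset 0, membrane time constant 1) driven by Gaussian white noise,
    via Euler-Maruyama."""
    rng = random.Random(seed)
    v, spikes = 0.0, 0
    sq = sigma * math.sqrt(dt)
    for _ in range(int(T / dt)):
        v += dt * (mu - v) + sq * rng.gauss(0.0, 1.0)
        if v >= 1.0:
            v = 0.0
            spikes += 1
    return spikes / T

def self_consistent_rate(mu=1.5, coupling=0.2, r0=5.0, gens=6):
    """Scalar caricature of the iterative scheme: the surrogate input's
    noise intensity depends on the previous generation's output rate,
    and the loop is run to a fixed point. (The paper matches the whole
    spike-train power spectrum, not just a rate.)"""
    history = [r0]
    r = r0
    for g in range(gens):
        sigma = math.sqrt(coupling * r)   # invented coupling rule
        r = lif_rate(mu, sigma, seed=g)
        history.append(r)
    return history

rates = self_consistent_rate()
```

Starting from a deliberately wrong initial rate, the iteration settles near a fixed point within a few generations; the full method replaces the scalar sigma with a frequency-resolved surrogate spectrum.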

Superposition of many independent spike trains is generally not a Poisson process
Physical Review E, 2006. Author: Benjamin Lindner. Abstract: We study the sum of many independent spike trains and ask whether the resulting spike train has Poisson statistics or not. It is shown that for non-Poissonian statistics of the single spike trains, the resulting sum of spikes has an exponential inter-spike interval (ISI) distribution and ISI correlations that vanish at any finite lag, yet exhibits exactly the same power spectrum as the original spike train does. This apparent paradox is resolved by considering what happens to the ISI correlations in the limit of an infinite number of superposed trains. Implications of our findings for stochastic models in the neurosciences are briefly discussed.
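The core observation is easy to check numerically: pooling many near-regular renewal trains yields ISIs that look exponential (coefficient of variation close to 1) even though each component train is far from Poisson. The non-Poissonian structure survives in the power spectrum and, for finitely many trains, in weak ISI correlations, neither of which is computed in this sketch; the generator and its parameters are our own.

```python
import random

def regular_train(n_spikes, isi_mean=1.0, jitter=0.1, rng=random):
    """Renewal spike train with nearly regular inter-spike intervals
    (single-train CV is roughly jitter/isi_mean, far from Poisson)."""
    t, times = 0.0, []
    for _ in range(n_spikes):
        t += max(1e-6, rng.gauss(isi_mean, jitter))
        times.append(t)
    return times

def isi_cv(times):
    """Coefficient of variation of the inter-spike intervals."""
    isis = [b - a for a, b in zip(times, times[1:])]
    m = sum(isis) / len(isis)
    var = sum((x - m) ** 2 for x in isis) / len(isis)
    return var ** 0.5 / m

rng = random.Random(0)
trains = [regular_train(2000, rng=rng) for _ in range(50)]
pooled = sorted(t for tr in trains for t in tr)
cv_single = isi_cv(trains[0])   # small: a nearly regular train
cv_pooled = isi_cv(pooled)      # near 1: exponential-looking ISIs
```

Judging the pooled train by its ISI histogram alone would wrongly suggest a Poisson process, which is exactly the caution the paper raises for neuroscience models.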

Maximizing spike train coherence or incoherence in the leaky integrate-and-fire model
Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 2002. Co-authors: Benjamin Lindner, Lutz Schimansky-Geier, André Longtin. Abstract: We study noise-induced resonance effects in the leaky integrate-and-fire neuron model with absolute refractory period, driven by Gaussian white noise. It is demonstrated that a finite noise level may either maximize or minimize the regularity of the spike train. We also partition the parameter space into regimes where either or both of these effects occur. It is shown that the coherence minimization at moderate noise results in a flat spectral response with respect to periodic stimulation, in contrast to the sharp resonances that are observed for both small and large noise intensities.
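The regularity measure in question is the coefficient of variation (CV) of the inter-spike intervals as a function of noise intensity. A small simulation sketch of the model is given below; the parameter values are invented, and locating the actual coherence maximum or minimum requires a much finer sweep of sigma than the three values sampled here.

```python
import math
import random

def lif_isi_cv(mu, sigma, refrac=0.1, dt=0.001, T=200.0, seed=42):
    """CV of the inter-spike intervals of a leaky integrate-and-fire
    neuron with absolute refractory period, driven by Gaussian white
    noise (threshold 1, reset 0, membrane time constant 1)."""
    rng = random.Random(seed)
    v, hold, t_last, isis = 0.0, 0.0, None, []
    sq = sigma * math.sqrt(dt)
    t = 0.0
    for _ in range(int(T / dt)):
        t += dt
        if hold > 0.0:                 # absolute refractory period
            hold -= dt
            continue
        v += dt * (mu - v) + sq * rng.gauss(0.0, 1.0)
        if v >= 1.0:
            if t_last is not None:
                isis.append(t - t_last)
            t_last, v, hold = t, 0.0, refrac
    m = sum(isis) / len(isis)
    var = sum((x - m) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / m

# subthreshold drive (mu < 1): firing is purely noise-induced
cvs = {s: lif_isi_cv(mu=0.9, sigma=s) for s in (0.2, 0.5, 1.5)}
```

Sweeping sigma densely and plotting the CV exposes the non-monotonic behavior (coherence resonance or incoherence maximization, depending on mu and the refractory period) that the paper maps out.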
Thomas Kreuz

Which spike train distance is most suitable for distinguishing rate and temporal coding?
Journal of Neuroscience Methods, 2018. Co-authors: Eero Satuvuori, Thomas Kreuz. Abstract: Background: It is commonly assumed in neuronal coding that repeated presentations of a stimulus to a coding neuron elicit similar responses. One common way to assess similarity is via spike train distances. These can be divided into spike-resolved distances, such as the Victor-Purpura and the van Rossum distance, and time-resolved distances, e.g., the ISI-, the SPIKE-, and the RI-SPIKE-distance. New method: We use independent steady-rate Poisson processes as surrogates for spike trains with fixed rate and no timing information to address two basic questions: how does the sensitivity of the different spike train distances to temporal coding depend on the rates of the two processes, and how do the distances deal with very low rates? Results: Spike-resolved distances always contain rate information, even for parameters indicating time coding. This is an issue for reasonably high rates but beneficial for very low rates. In contrast, the operational range for detecting time coding of time-resolved distances is superior at normal rates, but these measures produce artefacts at very low rates. The RI-SPIKE-distance is the only measure that is sensitive to timing information only. Comparison with existing methods: While our results on rate-dependent expectation values for the spike-resolved distances agree with Chicharro et al. (2011), we here go one step further and specifically investigate applicability for very low rates. Conclusions: The most appropriate measure depends on the rates of the data being analysed. Accordingly, we summarize our results in one table that allows an easy selection of the preferred measure for any kind of data.
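Of the spike-resolved distances, the Victor-Purpura metric has a compact edit-distance formulation: cost 1 to insert or delete a spike, cost q|Δt| to shift one. A self-contained dynamic-programming sketch (our own implementation, with toy spike times):

```python
def victor_purpura(t1, t2, q=1.0):
    """Victor-Purpura spike-train metric: the minimal total cost of
    morphing one train into the other, with cost 1 per inserted or
    deleted spike and cost q*|dt| per shifted spike."""
    n, m = len(t1), len(t2)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = float(i)          # delete all remaining spikes
    for j in range(m + 1):
        dp[0][j] = float(j)          # insert all remaining spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(dp[i - 1][j] + 1.0,
                           dp[i][j - 1] + 1.0,
                           dp[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return dp[n][m]

d_same = victor_purpura([0.1, 0.5, 0.9], [0.1, 0.5, 0.9])   # identical
d_shift = victor_purpura([0.1, 0.5, 0.9], [0.2, 0.6, 1.0])  # 3 shifts
d_far = victor_purpura([0.1], [0.9], q=10.0)  # delete+insert beats shift
```

The parameter q sets the timescale: for small q the metric approaches a rate (spike-count) comparison, for large q a coincidence count, which is exactly why such distances always carry rate information.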

Measures of spike train synchrony for data with multiple time scales
arXiv: Data Analysis, Statistics and Probability, 2017. Co-authors: Mario Mulansky, Nebojsa Bozanic, Eero Satuvuori, Irene Malvestio, Fleur Zeldenrust, Kerstin Lenk, Thomas Kreuz. Abstract: Background: Measures of spike train synchrony are widely used in both experimental and computational neuroscience. Timescale-independent and parameter-free measures, such as the ISI-distance, the SPIKE-distance and SPIKE-synchronization, are preferable to timescale-parametric measures, since by adapting to the local firing rate they take into account all the timescales of a given dataset. New method: In data containing multiple timescales (e.g., regular spiking and bursts) one is typically less interested in the smallest timescales, and a more adaptive approach is needed. Here we propose the A-ISI-distance, the A-SPIKE-distance and A-SPIKE-synchronization, which generalize the original measures by considering the local timescales relative to the global ones. For the A-SPIKE-distance we also introduce a rate-independent extension called the RIA-SPIKE-distance, which focuses specifically on spike timing. Results: The adaptive generalizations A-ISI-distance and A-SPIKE-distance make it possible to disregard spike time differences that are not relevant on a more global scale. A-SPIKE-synchronization no longer demands an unreasonably high accuracy for spike doublets and coinciding bursts. Finally, the RIA-SPIKE-distance proves to be independent of rate ratios between spike trains. Comparison with existing methods: We find that, compared to the original versions, the A-ISI-distance and the A-SPIKE-distance yield improvements for spike trains containing different timescales without exhibiting any unwanted side effects in other examples. A-SPIKE-synchronization matches spikes more efficiently than SPIKE-synchronization. Conclusions: With these proposals we have completed the picture, since we now provide adaptive generalized measures that are sensitive to rate only (A-ISI-distance), to timing only (RIA-SPIKE-distance), and to both at the same time (A-SPIKE-distance).

PySpike: a Python library for analyzing spike train synchrony
SoftwareX, 2016. Co-authors: Mario Mulansky, Thomas Kreuz. Abstract: Understanding how the brain functions is one of the biggest challenges of our time. The analysis of experimentally recorded neural firing patterns (spike trains) plays a crucial role in addressing this problem. Here, the PySpike library is introduced, a Python package for spike train analysis providing parameter-free and timescale-independent measures of spike train synchrony. It can compute similarity and dissimilarity profiles, averaged values and distance matrices. Although mainly focusing on neuroscience, PySpike can also be applied in other contexts like climate research or social sciences. The package is available as open source on GitHub and PyPI.

A guide to time-resolved and parameter-free measures of spike train synchrony
International Conference on Event-based Control, Communication, and Signal Processing, 2015. Co-authors: Mario Mulansky, Nebojsa Bozanic, Andreea Ioana Sburlea, Thomas Kreuz. Abstract: Measures of spike train synchrony have proven a valuable tool in both experimental and computational neuroscience. Particularly useful are time-resolved methods such as the ISI- and the SPIKE-distance, which have already been applied in various bivariate and multivariate contexts. Recently, SPIKE-synchronization was proposed as another time-resolved synchronization measure. It is based on event synchronization and has a very intuitive interpretation. Here, we present a detailed analysis of the mathematical properties of these three synchronization measures. For example, we were able to obtain analytic expressions for the expectation values of the ISI-distance and SPIKE-synchronization for Poisson spike trains. These expectation values are crucial for interpreting the synchronization of spike trains measured in experiments or numerical simulations, as they represent the point of reference for fully randomized spike trains. For the SPIKE-distance we present an empirical formula deduced from numerical evaluations.
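Such expectation values can also be checked by simulation. Below is a simplified, sampled version of the ISI-distance (our own sketch, not the paper's exact piecewise integration) applied to independent Poisson trains. For this estimator the empirical mean lands near 1/2, which matches a direct calculation: at a uniformly random time the current ISI of a Poisson train is length-biased (Gamma(2)-distributed), and for two independent Gamma(2) variables the expectation of 1 - min/max is 1/2.

```python
import bisect
import random

def sampled_isi_distance(t1, t2, t_start, t_end, n_samples=2000):
    """Sampled version of the time-resolved ISI-distance: at each
    sample time, 1 - min/max of the two current inter-spike intervals,
    averaged over time (the real measure integrates the dissimilarity
    profile exactly rather than sampling it)."""
    def current_isi(times, t):
        i = bisect.bisect_right(times, t)
        return times[i] - times[i - 1]
    total = 0.0
    for k in range(n_samples):
        t = t_start + (k + 0.5) * (t_end - t_start) / n_samples
        a, b = current_isi(t1, t), current_isi(t2, t)
        total += 1.0 - min(a, b) / max(a, b)
    return total / n_samples

def poisson_train(rate, T, rng):
    t, times = 0.0, [0.0]
    while t < T + 5.0:          # pad so every sample time is covered
        t += rng.expovariate(rate)
        times.append(t)
    return times

rng = random.Random(7)
vals = [sampled_isi_distance(poisson_train(1.0, 100.0, rng),
                             poisson_train(1.0, 100.0, rng),
                             0.0, 100.0)
        for _ in range(40)]
mean_d = sum(vals) / len(vals)   # close to 1/2 for this estimator
```

This kind of Monte Carlo check is exactly how one validates the analytic Poisson reference values before using them to judge experimentally measured synchrony.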

SPIKY: a graphical user interface for monitoring spike train synchrony
arXiv: Data Analysis, Statistics and Probability, 2014. Co-authors: Thomas Kreuz, Mario Mulansky, Nebojsa Bozanic. Abstract: Techniques for recording large-scale neuronal spiking activity are developing very fast. This leads to an increasing demand for algorithms capable of analyzing large amounts of experimental spike train data. One of the most crucial and demanding tasks is the identification of similarity patterns with a very high temporal resolution and across different spatial scales. To address this task, in recent years three time-resolved measures of spike train synchrony have been proposed: the ISI-distance, the SPIKE-distance, and event synchronization. The Matlab source codes for calculating and visualizing these measures have been made publicly available. However, due to the many different possible representations of the results, the use of these codes is rather complicated and their application requires some basic knowledge of Matlab. Thus it became desirable to provide a more user-friendly and interactive interface. Here we address this need and present SPIKY, a graphical user interface that facilitates the application of time-resolved measures of spike train synchrony to both simulated and real data. SPIKY includes implementations of the ISI-distance, the SPIKE-distance and SPIKE-synchronization (an improved and simplified extension of event synchronization) which have been optimized with respect to computation speed and memory demand. It also comprises a spike train generator and an event detector, which make it capable of analyzing continuous data. Finally, the SPIKY package includes additional complementary programs aimed at the analysis of large numbers of datasets and the estimation of significance levels.
Hiroyuki Torikai

2009 Special Issue: An artificial chaotic spiking neuron inspired by spiral ganglion cell: paralleled spike encoding, theoretical analysis and electronic circuit implementation
Neural Networks, 2009. Co-authors: Hiroyuki Torikai, Toru Nishigami. Abstract: A novel chaotic spiking neuron is presented and its nonlinear dynamics and encoding functions are analyzed. A set of N paralleled neurons accepts a common analog input and outputs a set of N chaotic spike trains. Three theorems are derived which guarantee that the neurons can encode the analog input into a summation of the N chaotic spike trains: (1) a spike histogram of the summed spike train can mimic waveforms of various inputs, (2) the spike trains do not synchronize to each other, and thus the summed spike train can have N times higher encoding resolution than each single spike train, and (3) firing rates of the neurons can be adjusted by internal parameters. The theorems are proven by using nonlinear iterative maps and are confirmed by numerical simulations as well. Electronic circuit implementation methods for the paralleled neurons are also presented, and typical paralleled encoding functions are confirmed by both experimental measurements and SPICE simulations.
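A toy analogue of the paralleled encoding idea (not the authors' circuit model; the map, reset rule, and all parameters are invented): each neuron integrates the common analog input and fires at threshold, but its reset level is driven by a logistic map in the chaotic regime, which keeps the N spike trains desynchronized while their pooled histogram follows the input waveform.

```python
import math

def chaotic_encoder(signal, n_neurons=20, dt=0.01):
    """Toy paralleled encoder: each neuron integrates the common analog
    input and fires at threshold 1, but its reset level follows a
    logistic map in the chaotic regime, which keeps the spike trains
    desynchronized. Returns the pooled (summed) spike train."""
    pooled = [0] * len(signal)
    for n in range(n_neurons):
        c = (n + 1) / (n_neurons + 2)    # distinct initial conditions
        v = c
        for t, s in enumerate(signal):
            v += s * dt                   # integrate the analog input
            if v >= 1.0:
                pooled[t] += 1
                c = 3.9 * c * (1.0 - c)   # logistic map, chaotic regime
                v = 0.5 * c               # chaotic partial reset
    return pooled

# slow sinusoidal input; the pooled spike histogram should track it
sig = [2.0 + 1.5 * math.sin(2 * math.pi * t / 400) for t in range(800)]
hist = chaotic_encoder(sig)
bins = [sum(hist[i:i + 100]) for i in range(0, 800, 100)]
```

Coarse-binning the pooled train recovers the sinusoidal modulation: bins around the peaks of the input collect clearly more spikes than bins around the troughs, illustrating theorem (1) qualitatively.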

An artificial chaotic spiking neuron inspired by spiral ganglion cell: paralleled spike encoding, theoretical analysis and electronic circuit implementation
International Joint Conference on Neural Networks, 2009. Co-authors: Hiroyuki Torikai, Toru Nishigami. Abstract: A novel chaotic spiking neuron is presented and its nonlinear dynamics and encoding functions are analyzed. A set of N paralleled neurons accepts a common analog input and outputs a set of N chaotic spike trains. Three theorems are derived which guarantee that the neurons can encode the analog input into a summation of the N chaotic spike trains: (1) a spike histogram of the summed spike train can mimic waveforms of various inputs, (2) the spike trains do not synchronize to each other, and thus the summed spike train can have N times higher encoding resolution than each single spike train, and (3) firing rates of the neurons can be adjusted by internal parameters. The theorems are proven by using nonlinear iterative maps and are confirmed by numerical simulations as well. Electronic circuit implementation methods for the paralleled neurons are also presented, and typical paralleled encoding functions are confirmed by both experimental measurements and SPICE simulations.

Synchronization phenomena in pulse-coupled networks driven by spike train inputs
IEEE Transactions on Neural Networks, 2004. Co-authors: Hiroyuki Torikai, Toshimichi Saito. Abstract: We present a pulse-coupled network (PCN) of spiking oscillators (SOCs) which can be implemented as a simple electrical circuit. The SOC has a periodic reset level that can realize rich dynamics represented by chaotic spike trains. Applying a spike train input, the PCN can exhibit the following interesting phenomena: (1) each SOC synchronizes with a part of the input without overlapping, i.e., the input is decomposed; (2) some SOCs synchronize with a part of the input with overlapping, i.e., the input is decomposed and the SOCs are clustered. The PCN has multiple synchronization phenomena and exhibits one of them depending on the initial state. We clarify the numbers of the synchronization phenomena and the parameter regions in which these phenomena can be observed. The stability of the synchronization phenomena is also clarified. Using a simple test circuit, typical phenomena are confirmed experimentally.