Spike Train

The experts below were selected from a list of 360 experts worldwide, ranked by the ideXlab platform.

Emery N Brown - One of the best experts on this subject based on the ideXlab platform.

  • State-space models for the analysis of neural Spike Train and behavioral data
    Encyclopedia of Computational Neuroscience, 2014
    Co-Authors: Zhe Chen, Emery N Brown
    Abstract:

    The state-space paradigm can be adapted to the analysis of neuroscience data in which the observation model is either a point process or a time series of binary observations and the state model is typically a linear Gaussian process. The paradigm has been applied to a number of problems, including neural Spike Train decoding, analysis of receptive field dynamics, analyses of learning, neural prosthetic control, and control of brain states under anesthesia. The state-space paradigm for analyses of point processes and time series of discrete binary observations has been developed for the analysis of neural Spike Train and behavioral data (Brown et al. 1998; Smith and Brown 2003). The state-space point process (SSPP) paradigm has two standard components: the state equation defines the system dynamics, and the observation equation defines how the system is measured. For the SSPP system, the observations can be point processes or time series of discrete binary responses. Point processes are binary (0–1) events defined in continuous time. They are an ideal framework for analyzing series of neuronal action potentials, where the binary events are defined as 1 if there is an action potential or Spike and 0 otherwise. Data in behavioral studies commonly consist of time series of binary responses, such as in a learning experiment where 1 is given for a correct response and 0 for an incorrect response. In systems neuroscience experiments, the state equation defines the signal, stimulus, or brain state that is being represented by the neural activity or the behavior.
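
To make the filtering idea concrete, here is a minimal Python sketch of a forward filter for a random-walk state observed through Bernoulli (correct/incorrect) responses, as in a learning experiment. The update equations are a simplified Gaussian approximation in the spirit of, but not identical to, the Smith and Brown (2003) algorithm; all parameter values are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_state_space_filter(obs, sigma2_eps=0.05, x0=0.0, v0=1.0):
    """Approximate forward filter: random-walk state, Bernoulli observations.
    A simplified sketch, not the exact Smith & Brown recursion."""
    x_post, v_post = x0, v0
    xs = []
    for n in obs:
        # one-step prediction under the random-walk state model
        x_pred = x_post
        v_pred = v_post + sigma2_eps
        # Gaussian approximation to the Bernoulli likelihood update
        p = sigmoid(x_pred)
        x_post = x_pred + v_pred * (n - p)
        v_post = 1.0 / (1.0 / v_pred + p * (1.0 - p))
        xs.append(x_post)
    return np.array(xs)

# a subject that starts near chance and then answers mostly correctly
obs = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1])
path = binary_state_space_filter(obs)
print(sigmoid(path[-1]))  # estimated final probability of a correct response
```

The estimated probability of a correct response rises as the run of correct answers accumulates, which is the learning-curve behavior the paradigm is designed to track.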

  • nSTAT: open-source neural Spike Train analysis toolbox for Matlab
    PMC, 2012
    Co-Authors: Emery N Brown, Iahn Cajigas, W Q Malik
    Abstract:

    Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point process – generalized linear model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms, together with the problem-specific modifications required for their use, limits wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT – an open source neural Spike Train analysis toolbox for Matlab®. By adopting an object-oriented programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (Spike Trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and for decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems.
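
nSTAT itself is a Matlab toolbox; as a language-neutral sketch of the PP-GLM idea it implements, here is a minimal Poisson GLM fit to binned Spike counts with one stimulus covariate, using plain Newton-Raphson. All parameter values are hypothetical and this is not code from the toolbox.

```python
import numpy as np

rng = np.random.default_rng(0)

# design: constant baseline plus one stimulus covariate, small time bins
T = 5000
stim = rng.standard_normal(T)
X = np.column_stack([np.ones(T), stim])
beta_true = np.array([-3.0, 0.8])          # log baseline rate, stimulus gain
rate = np.exp(X @ beta_true)               # conditional intensity per bin
spikes = rng.poisson(rate)                 # binned Spike counts

def fit_poisson_glm(X, y, n_iter=20):
    """Newton-Raphson fit of a Poisson GLM with log link (PP-GLM sketch)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)              # score
        hess = X.T @ (X * mu[:, None])     # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

beta_hat = fit_poisson_glm(X, spikes)
print(beta_hat)  # close to the generating parameters (-3.0, 0.8)
```

With enough data the fitted coefficients recover the generating baseline and stimulus gain, which is the basic model-building step the toolbox systematizes.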

  • State-space analysis of time-varying higher-order Spike correlation for multiple neural Spike Train data
    PLOS Computational Biology, 2012
    Co-Authors: Hideaki Shimazaki, Emery N Brown, Sonja Grun, Shunichi Amari
    Abstract:

    Precise Spike coordination between the spiking activities of multiple neurons is suggested as an indication of coordinated network activity in active cell assemblies. Spike correlation analysis aims to identify such cooperative network activity by detecting excess Spike synchrony in simultaneously recorded multiple neural Spike sequences. Cooperative activity is expected to organize dynamically during behavior and cognition; therefore, currently available analysis techniques must be extended to enable the estimation of multiple time-varying Spike interactions between neurons simultaneously. In particular, new methods must take advantage of the simultaneous observations of multiple neurons by addressing their higher-order dependencies, which cannot be revealed by pairwise analyses alone. In this paper, we develop a method for estimating time-varying Spike interactions by means of a state-space analysis. Discretized parallel Spike sequences are modeled as multivariate binary processes using a log-linear model that provides a well-defined measure of higher-order Spike correlation in an information geometry framework. We construct a recursive Bayesian filter/smoother for the extraction of Spike interaction parameters. This method can simultaneously estimate the dynamic pairwise Spike interactions of multiple single neurons, thereby extending the Ising/spin-glass model analysis of multiple neural Spike Train data to a nonstationary analysis. Furthermore, the method can estimate dynamic higher-order Spike interactions. To validate the inclusion of the higher-order terms in the model, we construct an approximation method to assess the goodness-of-fit to Spike data. In addition, we formulate a test method for the presence of higher-order Spike correlation even in nonstationary Spike data, e.g., data from awake behaving animals. The utility of the proposed methods is tested using simulated Spike data with known underlying correlation dynamics. Finally, we apply the methods to neural Spike data simultaneously recorded from the motor cortex of an awake monkey and demonstrate that the higher-order Spike correlation organizes dynamically in relation to a behavioral demand.
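
The higher-order measure in question comes from the information-geometric log-linear expansion of the pattern probabilities. For three neurons, the triple-wise interaction has a closed form in the eight pattern probabilities, which the following sketch evaluates; it is exactly zero for independent neurons. (The paper's filtering/smoothing machinery is not reproduced here.)

```python
import numpy as np

def third_order_theta(p):
    """Triple-wise interaction of a 3-neuron log-linear model.
    p maps binary patterns (tuples) to probabilities."""
    num = p[(1,1,1)] * p[(1,0,0)] * p[(0,1,0)] * p[(0,0,1)]
    den = p[(1,1,0)] * p[(1,0,1)] * p[(0,1,1)] * p[(0,0,0)]
    return np.log(num / den)

# independent neurons: theta_123 is exactly zero
rates = [0.2, 0.3, 0.4]                     # per-bin firing probabilities
p_ind = {}
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            probs = [r if s else 1 - r for r, s in zip(rates, (a, b, c))]
            p_ind[(a, b, c)] = probs[0] * probs[1] * probs[2]

print(third_order_theta(p_ind))  # 0.0 up to rounding
```

Any excess probability of the triple-synchrony pattern (1,1,1) beyond what pairwise structure predicts makes this quantity nonzero, which is what the time-varying estimator tracks.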

  • An analysis of hippocampal spatio-temporal representations using a Bayesian algorithm for neural Spike Train decoding
    IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2005
    Co-Authors: Riccardo Barbieri, Matthew A Wilson, Loren M Frank, Emery N Brown
    Abstract:

    Neural Spike Train decoding algorithms are important tools for characterizing how ensembles of neurons represent biological signals. We present a Bayesian neural Spike Train decoding algorithm based on a point process model of individual neurons, a linear stochastic state-space model of the biological signal, and a temporal latency parameter. The latency parameter represents the temporal lead or lag between the biological signal and the ensemble spiking activity. We use the algorithm to study whether the representation of position by the ensemble spiking activity of pyramidal neurons in the CA1 region of the rat hippocampus is more consistent with prospective coding, i.e., future position, or retrospective coding, i.e., past position. Using 44 simultaneously recorded neurons and an ensemble delay latency of 400 ms, the median decoding error was 5.1 cm during 10 min of foraging in an open circular environment. The true coverage probability for the algorithm's 0.95 confidence regions was 0.71. These results illustrate how the Bayesian neural Spike Train decoding paradigm may be used to investigate spatio-temporal representations of position by an ensemble of hippocampal neurons.
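
A minimal sketch of the core decoding computation: Poisson place-cell likelihoods evaluated over a position grid with a flat prior. The state-space smoothing and the latency parameter of the full algorithm are omitted, and all tuning-curve parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# place cells with Gaussian tuning on a 1-D track (hypothetical parameters)
centers = np.linspace(0.0, 100.0, 20)
def rates(x, peak=15.0, width=8.0):
    return peak * np.exp(-0.5 * ((x - centers) / width) ** 2)

dt = 0.25                                   # decoding window (s)
true_x = 42.0
counts = rng.poisson(rates(true_x) * dt)    # observed ensemble Spike counts

# Poisson log likelihood over a position grid, flat prior
grid = np.linspace(0.0, 100.0, 501)
lam = np.array([rates(x) * dt for x in grid])        # (positions, cells)
loglik = (counts * np.log(lam + 1e-12) - lam).sum(axis=1)
x_hat = grid[np.argmax(loglik)]
print(x_hat)  # near the true position of 42.0
```

In the full algorithm this per-window likelihood is combined with the state-space prior in a recursive filter, and the latency parameter shifts the Spike counts relative to the position signal.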

  • Multiple neural Spike Train data analysis: state-of-the-art and future challenges
    Nature Neuroscience, 2004
    Co-Authors: Emery N Brown, Robert E. Kass, Partha P Mitra
    Abstract:

    Multiple electrodes are now a standard tool in neuroscience research, making it possible to study the simultaneous activity of several neurons in a given brain region or across different regions. The data from multi-electrode studies present important analysis challenges that must be resolved for optimal use of these neurophysiological measurements to answer questions about how the brain works. Here we review statistical methods for the analysis of multiple neural Spike-Train data and discuss future challenges for methodology research.

Sonja Grun - One of the best experts on this subject based on the ideXlab platform.

  • Detection and evaluation of spatio-temporal Spike patterns in massively parallel Spike Train data with SPADE
    Frontiers in Computational Neuroscience, 2017
    Co-Authors: Pietro Quaglio, Alper Yegenoglu, Emiliano Torre, Dominik Endres, Sonja Grun
    Abstract:

    Repeated precise sequences of Spikes are largely considered a signature of activation of cell assemblies. These repeated sequences are commonly known as spatio-temporal patterns (STPs). STPs are hypothesized to play a role in the communication of information in the computational process operated by the cerebral cortex. A variety of statistical methods for the detection of STPs have been developed and applied to electrophysiological recordings, but such methods scale poorly with the current size of available parallel Spike Train recordings (more than 100 neurons). In this work, we introduce a novel method capable of overcoming the computational and statistical limits of existing analysis techniques in detecting repeating STPs within massively parallel Spike Trains (MPST). We employ advanced data mining techniques to efficiently extract repeating sequences of Spikes from the data. Then, we introduce and compare two alternative approaches to distinguish statistically surprising patterns from chance sequences. The first approach uses a measure known as conceptual stability, of which we investigate a computationally cheap approximation for applications to such large data sets. The second approach is based on the evaluation of pattern statistical significance. In particular, we provide an extension to STPs of a method we recently introduced for the evaluation of statistical significance of synchronous Spike patterns. The performance of the two approaches is evaluated in terms of computational load and statistical power on a variety of artificial data sets that replicate specific features of experimental data. Both methods provide an effective and robust procedure for the detection of STPs in MPST data. The method based on significance evaluation shows the best overall performance, although at a higher computational cost. We name the novel procedure the spatio-temporal Spike PAttern Detection and Evaluation (SPADE) analysis.
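
SPADE's actual mining step uses frequent-itemset techniques plus a rigorous significance evaluation; the toy sketch below conveys only the first idea, counting exactly repeating spatio-temporal windows in a small hand-built Spike matrix.

```python
import numpy as np
from collections import Counter

# toy binary Spike matrix: 5 neurons x 200 time bins, built by hand
spikes = np.zeros((5, 200), dtype=int)
for n, t in [(1, 5), (3, 40), (2, 95), (4, 150), (0, 180)]:
    spikes[n, t] = 1                          # isolated background Spikes
for t in (20, 70, 120, 170):                  # repeating STP: neuron 0 at t,
    spikes[0, t] = spikes[2, t + 1] = spikes[4, t + 3] = 1  # 2 at t+1, 4 at t+3

def count_windows(spikes, width=4):
    """Count identical spatio-temporal windows (a crude mining step)."""
    counts = Counter()
    for t in range(spikes.shape[1] - width + 1):
        w = spikes[:, t:t + width]
        if w.sum() >= 3:                      # skip near-empty windows
            counts[w.tobytes()] += 1
    return counts

pattern_bytes, n_rep = count_windows(spikes).most_common(1)[0]
print(n_rep)  # the embedded pattern occurs 4 times
```

Real data require tolerance to jitter and background Spikes and, crucially, surrogate-based statistics to decide whether a repetition count is surprising; those are exactly the parts SPADE adds.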

Benjamin Lindner - One of the best experts on this subject based on the ideXlab platform.

  • Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks
    Frontiers in Computational Neuroscience, 2018
    Co-Authors: Rodrigo Felipe De Oliveira Pena, Sebastian Vellmer, Davide Bernardi, Antonio C Roque, Benjamin Lindner
    Abstract:

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and Spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their Spike Trains that can be quantified by the autocorrelation function or the Spike-Train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations), and it is generally not well understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters, and (ii) the number and strength of the input connections are random (Erdős–Rényi topology) and thus differ among neurons. In all heterogeneous cases, neurons are lumped into different classes, each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters, as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
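
The central quantity here, the Spike-Train power spectrum, can be estimated directly from binned Spike Trains. A minimal sketch, using a homogeneous Poisson train for which the spectrum should be flat at the firing rate (normalization follows the delta-function convention; parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

dt, T_bins, rate = 0.001, 2 ** 16, 20.0   # 1 ms bins, 20 Hz Poisson train

def spike_spectrum(x, dt):
    """Power spectrum of one binned Spike Train (mean subtracted)."""
    xf = np.fft.rfft(x - x.mean())
    return np.abs(xf) ** 2 / (len(x) * dt)

spec = np.zeros(T_bins // 2 + 1)
trials = 20
for _ in range(trials):
    x = (rng.random(T_bins) < rate * dt).astype(float)
    spec += spike_spectrum(x, dt) / trials

print(spec[1:].mean())   # flat spectrum close to the firing rate
```

For non-Poissonian Trains (refractoriness, bursting, network-induced slow fluctuations) the same estimator reveals the structured spectra whose self-consistent determination is the subject of this line of work.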

  • Self-consistent determination of the Spike Train power spectrum in a neural network with sparse connectivity
    Frontiers in Computational Neuroscience, 2014
    Co-Authors: Benjamin Dummer, Stefan Wieland, Benjamin Lindner
    Abstract:

    A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian Spike Trains. However, the output statistics of the cells are in most cases far from Poisson. This is inconsistent with the assumption of similar Spike-Train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study the self-consistent statistics of input and output spectra of neural Spike Trains. Instead of actually using a large network, we use an iterative scheme, in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal Spike Trains with the same interSpike interval density as observed in the previous generation, and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedure converge quickly and yield comparable results for the self-consistent Spike-Train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime close to a state of balanced synaptic input from the network, our iterative schemes provide an excellent approximation to the autocorrelation of Spike Trains in the recurrent network.
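
A minimal scalar analogue of the generation-by-generation idea: instead of iterating full power spectra, the sketch below iterates only the firing rate of a simulated leaky integrate-and-fire neuron whose mean input depends on the rate of the previous generation. All parameter values are hypothetical, and the paper's method iterates spectra, not rates.

```python
import numpy as np

rng = np.random.default_rng(4)

def lif_rate(mu, sigma=0.3, tau=0.02, v_th=1.0, dt=1e-4, T=10.0):
    """Firing rate of a leaky integrate-and-fire neuron, Euler scheme."""
    n, v, spikes = int(T / dt), 0.0, 0
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    for i in range(n):
        v += dt * (mu - v / tau) + noise[i]
        if v >= v_th:
            v, spikes = 0.0, spikes + 1
    return spikes / T

# self-consistency loop: the mean input of generation k is set by the
# output rate of generation k-1 (inhibition-dominated feedback, toy values)
r, history = 10.0, []
for _ in range(8):
    r = lif_rate(mu=70.0 - 0.5 * r)
    history.append(r)
print(history)  # damped oscillation toward a self-consistent rate
```

The negative feedback makes successive generations converge toward a fixed point, mirroring (in one scalar) the convergence of the renewal and Gaussian spectral iterations described above.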

  • Superposition of many independent Spike Trains is generally not a Poisson process
    Physical Review E, 2006
    Co-Authors: Benjamin Lindner
    Abstract:

    We study the sum of many independent Spike Trains and ask whether the resulting Spike Train has Poisson statistics or not. It is shown that for non-Poissonian statistics of the single Spike Train, the resulting sum of Spikes has an exponential interSpike interval (ISI) distribution and vanishing ISI correlations at any finite lag, but exhibits exactly the same power spectrum as the original Spike Train. This paradox is resolved by considering what happens to ISI correlations in the limit of an infinite number of superposed Trains. Implications of our findings for stochastic models in the neurosciences are briefly discussed.
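
The pooled-train statistics can be illustrated numerically: superposing many perfectly regular (periodic) Trains with random phases yields a pooled Train whose ISI statistics look Poisson-like, with a coefficient of variation near 1, even though each component Train has CV = 0 (illustrative parameters).

```python
import numpy as np

rng = np.random.default_rng(5)

M, period, T = 50, 1.0, 100.0   # 50 periodic Trains, random phases
trains = [np.arange(rng.uniform(0, period), T, period) for _ in range(M)]

pooled = np.sort(np.concatenate(trains))
isis = np.diff(pooled)
cv = isis.std() / isis.mean()
print(cv)  # close to 1, as for a Poisson process
```

The point of the paper is that this apparent Poissonianity is deceptive: the pooled Train inherits the power spectrum of the regular components, and the regularity hides in ISI correlations that only vanish in the infinite-superposition limit.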

  • Maximizing Spike Train coherence or incoherence in the leaky integrate-and-fire model.
    Physical review. E Statistical nonlinear and soft matter physics, 2002
    Co-Authors: Benjamin Lindner, Lutz Schimansky-geier, André Longtin
    Abstract:

    We study noise-induced resonance effects in the leaky integrate-and-fire neuron model with absolute refractory period, driven by Gaussian white noise. It is demonstrated that a finite noise level may either maximize or minimize the regularity of the Spike Train. We also partition the parameter space into regimes where either or both of these effects occur. It is shown that the coherence minimization at moderate noise results in a flat spectral response with respect to periodic stimulation, in contrast to sharp resonances that are observed for both small and large noise intensities.
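
The regularity being maximized or minimized here is commonly quantified by the coefficient of variation (CV) of the interSpike intervals. The sketch below measures it for a suprathreshold leaky integrate-and-fire neuron at two noise levels; this is a simplified illustration of the quantity itself, not a reproduction of the paper's parameter study, and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

def lif_cv(sigma, mu=60.0, tau=0.02, v_th=1.0, dt=1e-4, T=20.0):
    """CV of interSpike intervals of a noisy leaky integrate-and-fire neuron."""
    n, v, last, isis = int(T / dt), 0.0, None, []
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    for i in range(n):
        v += dt * (mu - v / tau) + noise[i]
        if v >= v_th:
            v = 0.0
            t = i * dt
            if last is not None:
                isis.append(t - last)
            last = t
    isis = np.array(isis)
    return isis.std() / isis.mean()

cv_weak, cv_strong = lif_cv(0.1), lif_cv(1.0)
print(cv_weak, cv_strong)  # weak noise: regular Train (small CV)
```

For this suprathreshold regime the CV grows monotonically with noise; the non-monotonic maximization/minimization effects of the paper arise from the interplay of noise with the refractory period and the subthreshold regime.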

Thomas Kreuz - One of the best experts on this subject based on the ideXlab platform.

  • Which Spike Train distance is most suitable for distinguishing rate and temporal coding?
    Journal of Neuroscience Methods, 2018
    Co-Authors: Eero Satuvuori, Thomas Kreuz
    Abstract:

    Background: It is commonly assumed in neuronal coding that repeated presentations of a stimulus to a coding neuron elicit similar responses. One common way to assess similarity is to use Spike Train distances. These can be divided into Spike-resolved distances, such as the Victor-Purpura and the van Rossum distance, and time-resolved distances, e.g., the ISI-, the Spike- and the RI-Spike-distance. New method: We use independent steady-rate Poisson processes as surrogates for Spike Trains with fixed rate and no timing information to address two basic questions: How does the sensitivity of the different Spike Train distances to temporal coding depend on the rates of the two processes, and how do the distances deal with very low rates? Results: Spike-resolved distances always contain rate information, even for parameters indicating time coding. This is an issue for reasonably high rates but beneficial for very low rates. In contrast, the operational range for detecting time coding of time-resolved distances is superior at normal rates, but these measures produce artefacts at very low rates. The RI-Spike-distance is the only measure that is sensitive to timing information only. Comparison with existing methods: While our results on rate-dependent expectation values for the Spike-resolved distances agree with Chicharro et al. (2011), we here go one step further and specifically investigate applicability for very low rates. Conclusions: The most appropriate measure depends on the rates of the data being analysed. Accordingly, we summarize our results in one table that allows an easy selection of the preferred measure for any kind of data.
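
For reference, the Spike-resolved Victor-Purpura distance mentioned above can be computed with a short dynamic program. This is a standard textbook implementation, not code from the paper.

```python
import numpy as np

def victor_purpura(s1, s2, q):
    """Victor-Purpura distance via dynamic programming. q (1/s) is the
    cost per second of shifting a Spike; deleting or inserting costs 1."""
    n, m = len(s1), len(s2)
    G = np.zeros((n + 1, m + 1))
    G[:, 0] = np.arange(n + 1)               # delete all Spikes of s1
    G[0, :] = np.arange(m + 1)               # insert all Spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            shift = G[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1])
            G[i, j] = min(G[i - 1, j] + 1, G[i, j - 1] + 1, shift)
    return G[n, m]

a = [0.1, 0.5, 0.9]
b = [0.12, 0.5, 0.95]
print(victor_purpura(a, b, q=1.0))      # 0.07: only tiny shifts needed
print(victor_purpura(a, [0.5], q=1.0))  # 2.0: two Spikes must be deleted
```

The rate sensitivity discussed in the abstract is visible in the second call: any difference in Spike count contributes insertion/deletion cost regardless of timing.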

  • Measures of Spike Train synchrony for data with multiple time scales
    arXiv: Data Analysis Statistics and Probability, 2017
    Co-Authors: Mario Mulansky, Nebojsa Bozanic, Eero Satuvuori, Irene Malvestio, Fleur Zeldenrust, Kerstin Lenk, Thomas Kreuz
    Abstract:

    Background: Measures of Spike Train synchrony are widely used in both experimental and computational neuroscience. Time-scale independent and parameter-free measures, such as the ISI-distance, the Spike-distance and Spike-synchronization, are preferable to time-scale parametric measures, since by adapting to the local firing rate they take into account all the time-scales of a given dataset. New method: In data containing multiple time-scales (e.g., regular spiking and bursts) one is typically less interested in the smallest time-scales, and a more adaptive approach is needed. Here we propose the A-ISI-distance, the A-Spike-distance and A-Spike-synchronization, which generalize the original measures by considering the local relative to the global time-scales. For the A-Spike-distance we also introduce a rate-independent extension called the RIA-Spike-distance, which focuses specifically on Spike timing. Results: The adaptive generalizations A-ISI-distance and A-Spike-distance allow one to disregard Spike time differences that are not relevant on a more global scale. A-Spike-synchronization no longer demands an unreasonably high accuracy for Spike doublets and coinciding bursts. Finally, the RIA-Spike-distance proves to be independent of rate ratios between Spike Trains. Comparison with existing methods: We find that, compared to the original versions, the A-ISI-distance and the A-Spike-distance yield improvements for Spike Trains containing different time-scales without exhibiting any unwanted side effects in other examples. A-Spike-synchronization matches Spikes more efficiently than Spike-synchronization. Conclusions: With these proposals we have completed the picture, since we now provide adaptive generalized measures that are sensitive to rate only (A-ISI-distance), to timing only (RIA-Spike-distance), and to both at the same time (A-Spike-distance).

  • PySpike: a Python library for analyzing Spike Train synchrony
    SoftwareX, 2016
    Co-Authors: Mario Mulansky, Thomas Kreuz
    Abstract:

    Understanding how the brain functions is one of the biggest challenges of our time. The analysis of experimentally recorded neural firing patterns (Spike Trains) plays a crucial role in addressing this problem. Here, the PySpike library is introduced, a Python package for Spike Train analysis providing parameter-free and time-scale independent measures of Spike Train synchrony. It allows users to compute similarity and dissimilarity profiles, averaged values and distance matrices. Although mainly focusing on neuroscience, PySpike can also be applied in other contexts like climate research or social sciences. The package is available as open source on GitHub and PyPI.
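
PySpike ships these measures ready-made; purely to illustrate what a time-resolved, parameter-free measure computes, here is a simplified ISI-distance profile written from scratch. Edge corrections and the auxiliary Spikes that the real library adds at the recording boundaries are omitted.

```python
import numpy as np

def isi_profile(st1, st2, t_grid):
    """Simplified ISI-distance profile: at each time t, compare the
    current interSpike intervals of two Spike Trains."""
    def current_isi(st, t):
        i = np.searchsorted(st, t)
        return st[i] - st[i - 1]          # assumes t lies between Spikes
    prof = []
    for t in t_grid:
        x1, x2 = current_isi(st1, t), current_isi(st2, t)
        prof.append(abs(x1 - x2) / max(x1, x2))
    return np.array(prof)

st1 = np.arange(0.05, 1.0, 0.1)    # regular Train, ISI 0.1 s
st2 = np.arange(0.05, 1.0, 0.05)   # twice the rate, ISI 0.05 s
t = np.linspace(0.1, 0.8, 50)
print(isi_profile(st1, st1, t).mean())  # 0.0 for identical Trains
print(isi_profile(st1, st2, t).mean())  # 0.5: pure rate difference
```

Because the profile normalizes by the larger local ISI, it is parameter-free and adapts to the local firing rate, which is the design principle the library generalizes to the Spike-distance and Spike-synchronization.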

  • A guide to time-resolved and parameter-free measures of Spike Train synchrony
    International Conference on Event-based Control Communication and Signal Processing, 2015
    Co-Authors: Mario Mulansky, Nebojsa Bozanic, Andreea Ioana Sburlea, Thomas Kreuz
    Abstract:

    Measures of Spike Train synchrony have proven a valuable tool in both experimental and computational neuroscience. Particularly useful are time-resolved methods such as the ISI- and the Spike-distance, which have already been applied in various bivariate and multivariate contexts. Recently, Spike-Synchronization was proposed as another time-resolved synchronization measure. It is based on Event-Synchronization and has a very intuitive interpretation. Here, we present a detailed analysis of the mathematical properties of these three synchronization measures. For example, we were able to obtain analytic expressions for the expectation values of the ISI-distance and Spike-Synchronization for Poisson Spike Trains. For the Spike-distance we present an empirical formula deduced from numerical evaluations. These expectation values are crucial for interpreting the synchronization of Spike Trains measured in experiments or numerical simulations, as they represent the point of reference for fully randomized Spike Trains.

  • SPIKY: a graphical user interface for monitoring Spike Train synchrony
    arXiv: Data Analysis Statistics and Probability, 2014
    Co-Authors: Thomas Kreuz, Mario Mulansky, Nebojsa Bozanic
    Abstract:

    Techniques for recording large-scale neuronal spiking activity are developing very fast. This leads to an increasing demand for algorithms capable of analyzing large amounts of experimental Spike Train data. One of the most crucial and demanding tasks is the identification of similarity patterns with a very high temporal resolution and across different spatial scales. To address this task, in recent years three time-resolved measures of Spike Train synchrony have been proposed: the ISI-distance, the Spike-distance, and event synchronization. The Matlab source codes for calculating and visualizing these measures have been made publicly available. However, due to the many different possible representations of the results, the use of these codes is rather complicated, and their application requires some basic knowledge of Matlab. Thus it became desirable to provide a more user-friendly and interactive interface. Here we address this need and present SPIKY, a graphical user interface which facilitates the application of time-resolved measures of Spike Train synchrony to both simulated and real data. SPIKY includes implementations of the ISI-distance, the Spike-distance and Spike-synchronization (an improved and simplified extension of event synchronization) which have been optimized with respect to computation speed and memory demand. It also comprises a Spike Train generator and an event detector which makes it capable of analyzing continuous data. Finally, the SPIKY package includes additional complementary programs aimed at the analysis of large numbers of datasets and the estimation of significance levels.

Hiroyuki Torikai - One of the best experts on this subject based on the ideXlab platform.

  • An artificial chaotic spiking neuron inspired by a spiral ganglion cell: paralleled Spike encoding, theoretical analysis and electronic circuit implementation
    Neural Networks, 2009
    Co-Authors: Hiroyuki Torikai, Toru Nishigami
    Abstract:

    A novel chaotic spiking neuron is presented and its nonlinear dynamics and encoding functions are analyzed. A set of paralleled N neurons accepts a common analog input and outputs a set of N chaotic Spike-Trains. Three theorems which guarantee that the neurons can encode the analog input into a summation of the N chaotic Spike-Trains are derived: (1) a Spike histogram of the summed Spike-Train can mimic waveforms of various inputs, (2) the Spike-Trains do not synchronize to each other and thus the summed Spike-Train can have N times higher encoding resolution than each single Spike-Train, and (3) firing rates of the neurons can be adjusted by internal parameters. The theorems are proven by using nonlinear iterative maps and are confirmed by numerical simulations as well. Electronic circuit implementation methods of the paralleled neurons are also presented and typical paralleled encoding functions are confirmed by both experimental measurements and SPICE simulations.
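
The resolution-gain claim (theorem 2) can be illustrated numerically. The sketch below deliberately replaces the chaotic neurons with independent stochastic encoders, which is an assumption, not the paper's model: the Spike histogram summed over N units tracks the analog input far better than any single unit.

```python
import numpy as np

rng = np.random.default_rng(7)

# analog input: one firing probability per time bin
tt = np.linspace(0, 4 * np.pi, 500)
signal = 0.2 + 0.15 * np.sin(tt)

def summed_histogram(n_units):
    """Spike histogram summed over n paralleled stochastic encoders
    (stand-ins for the paper's chaotic neurons)."""
    spikes = rng.random((n_units, signal.size)) < signal
    return spikes.sum(axis=0)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

c1, c64 = corr(summed_histogram(1), signal), corr(summed_histogram(64), signal)
print(c1, c64)  # the summed Spike-Train mimics the input much more closely
```

In the paper the same effect is achieved deterministically: the chaotic dynamics guarantee that the N Spike-Trains do not synchronize, so their sum gains resolution just as these independent units do.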

  • An artificial chaotic spiking neuron inspired by a spiral ganglion cell: paralleled Spike encoding, theoretical analysis and electronic circuit implementation
    International Joint Conference on Neural Networks, 2009
    Co-Authors: Hiroyuki Torikai, Toru Nishigami
    Abstract:

    A novel chaotic spiking neuron is presented and its nonlinear dynamics and encoding functions are analyzed. A set of paralleled N neurons accepts a common analog input and outputs a set of N chaotic Spike-Trains. Three theorems which guarantee that the neurons can encode the analog input into a summation of the N chaotic Spike-Trains are derived: (1) a Spike histogram of the summed Spike-Train can mimic waveforms of various inputs, (2) the Spike-Trains do not synchronize to each other and thus the summed Spike-Train can have N times higher encoding resolution than each single Spike-Train, and (3) firing rates of the neurons can be adjusted by internal parameters. The theorems are proven by using nonlinear iterative maps and are confirmed by numerical simulations as well. Electronic circuit implementation methods of the paralleled neurons are also presented and typical paralleled encoding functions are confirmed by both experimental measurements and SPICE simulations.

  • Synchronization phenomena in pulse-coupled networks driven by Spike Train inputs
    IEEE Transactions on Neural Networks, 2004
    Co-Authors: Hiroyuki Torikai, Toshimichi Saito
    Abstract:

    We present a pulse-coupled network (PCN) of spiking oscillators (SOCs) that can be implemented as a simple electrical circuit. The SOC has a periodic reset level that can realize rich dynamics represented by chaotic Spike-Trains. Applying a Spike-Train input, the PCN can exhibit the following interesting phenomena. 1) Each SOC synchronizes with a part of the input without overlapping, i.e., the input is decomposed. 2) Some SOCs synchronize with a part of the input with overlapping, i.e., the input is decomposed and the SOCs are clustered. The PCN has multiple synchronization phenomena and exhibits one of them depending on the initial state. We clarify the numbers of the synchronization phenomena and the parameter regions in which these phenomena can be observed. The stability of the synchronization phenomena is also clarified. Using a simple test circuit, typical phenomena are confirmed experimentally.
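
A minimal sketch of the synchronization mechanism: a single spiking oscillator whose state is reset by input Spikes locks to a periodic Spike-Train input at a fixed lag. This is a toy reduction of the pulse-coupled network; the circuit details, the periodic reset level, and the clustering phenomena are not modeled, and all parameters are hypothetical.

```python
import numpy as np

# a spiking oscillator: state grows linearly, fires at threshold 1;
# an arriving input Spike resets the state to zero (toy pulse coupling)
def run_soc(input_times, slope, T=10.0, dt=1e-3):
    v, t, out = 0.3, 0.0, []
    inputs = list(input_times)
    while t < T:
        if inputs and abs(t - inputs[0]) < dt / 2:
            v = 0.0                    # reset on input pulse
            inputs.pop(0)
        v += slope * dt                # linear growth of the state
        if v >= 1.0:
            out.append(t)              # oscillator fires
            v = 0.0
        t += dt
    return np.array(out)

inp = np.arange(0.5, 10.0, 0.8)        # periodic input Spike-Train
out = run_soc(inp, slope=2.0)          # natural period 0.5 < input period
lags = (out[5:] - 0.5) % 0.8           # output phase relative to the input
print(lags.std())                      # ~0: the oscillator phase-locks
```

After a short transient every output Spike occurs at the same lag within the input period, the one-oscillator analogue of an SOC synchronizing with "its" part of the input Spike-Train.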