Stationary Input

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 20,529 experts worldwide, ranked by the ideXlab platform

Justin C Sanchez - One of the best experts on this subject based on the ideXlab platform.

  • Using Reinforcement Learning to Provide Stable Brain-Machine Interface Control Despite Neural Input Reorganization
    2014
    Co-Authors: Eric A Pohlmeyer, Babak Mahmoudi, Shijia Geng, Noeline W Prins, Justin C Sanchez
    Abstract:

    Brain-machine interface (BMI) systems give users direct neural control of robotic, communication, or functional electrical stimulation systems. As BMI systems begin transitioning from laboratory settings into activities of daily living, an important goal is to develop neural decoding algorithms that can be calibrated with minimal burden on the user, provide stable control for long periods of time, and respond to fluctuations in the decoder's neural input space (e.g., neurons appearing or being lost among the electrode recordings). These are significant challenges for static neural decoding algorithms that assume stationary input/output relationships. Here we use an actor-critic reinforcement learning architecture to provide an adaptive BMI controller that can adapt to dramatic neural reorganizations, maintain its performance over long time periods, and does not require the user to produce specific kinetic or kinematic activities to calibrate the BMI. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to successfully control a robotic arm during a two-target reaching task. The RLBMI was initialized with random initial conditions and quickly learned to control the robot from brain states using only binary evaluative feedback on whether previously chosen robot actions were good or bad. The RLBMI maintained control over the system throughout sessions spanning multiple weeks. Furthermore, the RLBMI quickly adapted and maintained control of the robot despite dramatic perturbations to the neural inputs, including a series of tests in which the neuron input space was deliberately halved or doubled.
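    The actor-critic scheme the abstract describes can be sketched in miniature. This is a toy simulation with invented dimensions, learning rates, and signal structure, not the authors' implementation: a logistic actor chooses between two targets from a noisy state vector, and a linear critic's prediction error scales the actor's policy-gradient update, so learning proceeds from binary good/bad feedback alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 8                      # simulated neural channels (hypothetical size)
w_actor = np.zeros(n_channels)      # actor: maps neural state to action preference
w_critic = np.zeros(n_channels)     # critic: predicts expected reward for a state
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hits = []
for trial in range(3000):
    target = rng.integers(2)                     # two-target reaching task
    x = rng.normal(0.0, 0.3, n_channels)         # noisy "neural state"
    x[4 * target: 4 * target + 4] += 1.0         # target-dependent modulation
    p = sigmoid(w_actor @ x)                     # actor: P(choose target 1)
    action = int(rng.random() < p)
    reward = 1.0 if action == target else -1.0   # binary evaluative feedback only
    delta = reward - w_critic @ x                # critic's prediction error
    w_critic += lr * delta * x                   # critic learns expected reward
    w_actor += lr * delta * (action - p) * x     # policy-gradient step for the actor
    hits.append(action == target)

success_rate = float(np.mean(hits[-500:]))       # accuracy over the last 500 trials
```

    Note that neither weight vector is told the correct action: the controller starts from random (here zero) initial conditions and improves only through the scalar good/bad signal, mirroring the calibration-free property the abstract emphasizes.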

M V Jankovic - One of the best experts on this subject based on the ideXlab platform.

  • A New Simple ∞OH Neuron Model as a Biologically Plausible Principal Component Analyzer
    2003
    Co-Authors: M V Jankovic
    Abstract:

    A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented, and a simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could more accurately be called self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but to the presynaptic activity and the averaged value of the postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The commonly accepted additional decaying terms for stabilizing the original Hebbian rule are avoided: thanks to the adopted network structure, implementation of the basic Hebbian scheme does not lead to unrealistic growth of the synaptic strengths.
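    The claim that Hebbian learning on a stationary input sequence extracts the principal component can be made concrete: averaged over the input distribution, the Hebbian update is proportional to Cw, the input correlation matrix applied to the weight vector, so repeated updates amount to power iteration on C. In the minimal numerical sketch below, explicit normalization stands in for a stabilizing mechanism; this is an illustration of the general principle, not the paper's specific feedback dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
# stationary input sequence: samples spread mostly along one direction
direction = np.array([0.8, 0.6])
X = rng.normal(size=(5000, 1)) * 2.0 * direction \
    + rng.normal(0.0, 0.3, size=(5000, 2))

C = X.T @ X / len(X)            # input correlation matrix C = <x x^T>
w = np.array([1.0, 0.0])        # initial synaptic weight vector
for _ in range(100):
    w = C @ w                   # averaged Hebbian update: <x y> = C w for y = w.x
    w /= np.linalg.norm(w)      # normalization stands in for the stabilizing structure

# compare with the principal eigenvector computed directly
eigvals, eigvecs = np.linalg.eigh(C)
principal = eigvecs[:, np.argmax(eigvals)]
alignment = float(abs(w @ principal))   # |cosine| between learned w and principal axis
```

    The weight vector converges (up to sign) to the leading eigenvector of C, i.e., the first principal component of the stationary input sequence.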

Stefan Wermter - One of the best experts on this subject based on the ideXlab platform.

  • lifelong learning of human actions with deep neural network self organization
    2017
    Co-Authors: German Ignacio Parisi, Jun Tani, Cornelius Weber, Stefan Wermter
    Abstract:

    Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning; rather, they learn a batch of training data with a predefined number of action classes and samples. There is therefore a need to develop learning systems that can incrementally process available perceptual cues and adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved through prediction-driven neural dynamics in which the growth and adaptation of the recurrent networks are driven by their ability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art batch-learning methods, even when a significant number of sample labels are missing or corrupted during training. Additional experiments show the ability of our model to adapt to non-stationary input while avoiding catastrophic interference.
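    The growth principle behind such self-organizing networks can be illustrated with a toy grow-when-novel sketch. This is a generic simplification, not the paper's recurrent architecture, and the threshold and learning rate are invented values: a new unit is inserted whenever no existing unit matches the input well, so the network expands only where the input distribution requires it.

```python
import numpy as np

rng = np.random.default_rng(2)
# three well-separated clusters standing in for distinct input patterns
centers = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
data = np.vstack([c + rng.normal(0.0, 0.2, size=(200, 2)) for c in centers])
rng.shuffle(data)

units = [data[0].copy()]        # start with a single unit at the first input
novelty_threshold = 1.5         # hypothetical insertion threshold
lr = 0.1                        # hypothetical adaptation rate

for x in data:
    dists = [np.linalg.norm(x - u) for u in units]
    best = int(np.argmin(dists))
    if dists[best] > novelty_threshold:
        units.append(x.copy())                  # grow: insert a unit at the novel input
    else:
        units[best] += lr * (x - units[best])   # adapt the best-matching unit

n_units = len(units)
quant_error = float(np.mean([min(np.linalg.norm(x - u) for u in units)
                             for x in data]))
```

    Because insertion is driven by reconstruction (matching) failure rather than a fixed architecture, the network size tracks the structure of the input, which is the mechanism that lets such models absorb new classes incrementally without retraining from scratch.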

François Grimbert - One of the best experts on this subject based on the ideXlab platform.

  • Persistent Neural States: Stationary Localized Activity Patterns in Nonlinear Continuous n-Population, q-Dimensional Neural Networks
    2009
    Co-Authors: Olivier Faugeras, Romain Veltz, François Grimbert
    Abstract:

    Neural continuum networks are an important aspect of the modeling of macroscopic parts of the cortex. Two classes of such networks are considered: voltage based and activity based. In both cases, our networks contain an arbitrary number, n, of interacting neuron populations. Spatial, nonsymmetric connectivity functions represent cortico-cortical local connections, and external inputs represent nonlocal connections. Sigmoidal nonlinearities model the relationship between (average) membrane potential and activity. Departing from most previous work in this area, we do not assume the nonlinearity to be singular, that is, represented by the discontinuous Heaviside function. Another important difference from previous work is that we relax the assumption that the domain of definition on which we study these networks is infinite, that is, equal to ℝ or ℝ². We explicitly consider the biologically more relevant case of a bounded subset Ω of ℝ^q, a better model of a piece of cortex. The time behavior of these networks is described by systems of integro-differential equations. Using methods of functional analysis, we study the existence and uniqueness of a stationary (i.e., time-independent) solution of these equations in the case of a stationary input. These solutions can be seen as ‘persistent’; they are also sometimes called bumps. We show that under very mild assumptions on the connectivity functions, and because we do not use the Heaviside function for the nonlinearities, such solutions always exist. We also give sufficient conditions on the connectivity functions for the solution to be absolutely stable, that is, independent of the initial state of the network. We then study the sensitivity of the solutions to variations of parameters such as the connectivity functions, the sigmoids, the external inputs, and, last but not least, the shape of the domain of existence Ω of the neural continuum networks.
These theoretical results are illustrated and corroborated by a large number of numerical experiments in most of the cases 2 ⩽ n ⩽ 3, 2 ⩽ q ⩽ 3.
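    The existence argument can be illustrated on a discretized one-dimensional network. This is a toy example with an invented kernel and input, not the paper's analysis: with a smooth (non-Heaviside) sigmoid and connectivity weak enough that the map V ↦ W·S(V) + I_ext is a contraction, simple iteration converges to a unique stationary, localized "bump" solution under a stationary input.

```python
import numpy as np

m = 200
xs = np.linspace(0.0, 1.0, m)       # bounded domain Ω = [0, 1]
dx = xs[1] - xs[0]

def S(v):                           # smooth sigmoid, not a Heaviside step
    return 1.0 / (1.0 + np.exp(-4.0 * v))

# local excitation with broader inhibition (Mexican-hat-style kernel, made up)
diff2 = (xs[:, None] - xs[None, :]) ** 2
W = (1.5 * np.exp(-diff2 / 0.005) - 0.5 * np.exp(-diff2 / 0.05)) * dx
I_ext = 0.3 * np.exp(-((xs - 0.5) ** 2) / 0.01)   # stationary localized input

V = np.zeros(m)
for _ in range(200):
    # fixed-point iteration: contracting here since ||W|| * sup S' < 1
    V = W @ S(V) + I_ext

residual = float(np.max(np.abs(V - (W @ S(V) + I_ext))))  # how stationary V is
bump_is_localized = bool(V[m // 2] > V[0])                # peak sits under the input
```

    The contraction constant here is roughly the L1 mass of the kernel times the sigmoid's maximal slope; keeping it below one is a numerical analogue of the mild connectivity assumptions under which the paper proves existence, uniqueness, and absolute stability of the stationary solution.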
