Neural Computation

The Experts below are selected from a list of 36,996 Experts worldwide, ranked by the ideXlab platform

Juan Carlos Herrero - One of the best experts on this subject based on the ideXlab platform.

  • IWANN (2) - Challenges for a real-world information processing by means of real-time Neural Computation and real-conditions simulation
    Lecture Notes in Computer Science, 1999
    Co-Authors: Juan Carlos Herrero
    Abstract:

    If we consider the dimensions of natural Neural Computation as they are known from scientific research, we realize that a long road lies ahead for those interested in Neural Computation, for the simple reason that we can currently handle only a relatively small number of units and connections. Over the course of this century we have significantly improved our knowledge of natural Neural nets, coming to appreciate that huge number of cells and connections and beginning to understand some of the brain's signal processing and the repetitive structures that support it. However, even in the most developed cases, such as modelling of the auditory pathway, there is no Neural Computational device that can respond in real time while following the facts already known or plausibly postulated about some brain processes (e.g. by McCulloch and Pitts), given the unavoidably large number of processing elements involved; moreover, neither have suitable models of such realistic-looking nets been designed, nor have their corresponding real-conditions simulations been carried out. This means there is a lack of connectionistically computable models, and also of reduction methods by which we can obtain a connectionistic implementation design from a knowledge-level model.

Brian J. Fischer - One of the best experts on this subject based on the ideXlab platform.

  • IJCNN - Neural Computation with non-uniform population codes
    2017 International Joint Conference on Neural Networks (IJCNN), 2017
    Co-Authors: Brian J. Fischer
    Abstract:

    A central goal of neuroscience is to understand how the responses of populations of neurons and the connectivity patterns between groups of neurons allow brains to perform Computations. The Neural Engineering Framework (NEF) is a promising approach to designing Neural models that perform a wide range of Computations. Emerging principles of efficient coding and divisive normalization from neuroscience constrain models of Neural Computation, but their role in the NEF has not been described. Here, we show how efficient coding and divisive normalization are important in this approach to modeling Neural Computation. We show that divisive normalization in networks of neurons that encode the statistics of the environment in their tuning properties allows networks to perform Bayes-optimal Computations across multiple layers in a network. The result is to incorporate several emerging principles of Neural Computation in an already successful modeling framework.
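
    As a point of reference for the divisive-normalization step described above, here is a minimal sketch (in Python/NumPy, not taken from the paper) of normalizing the responses of a population of Gaussian-tuned neurons; the tuning width, stimulus value and normalization constant are illustrative assumptions.

    ```python
    import numpy as np

    def population_response(stimulus, preferred, width=0.5):
        """Gaussian tuning curves: each neuron responds most strongly near its preferred stimulus."""
        return np.exp(-0.5 * ((stimulus - preferred) / width) ** 2)

    def divisive_normalization(responses, sigma=0.1):
        """Divisive normalization: each response is divided by the pooled activity of the population."""
        return responses / (sigma + responses.sum())

    # Illustrative example: 20 neurons with preferred stimuli spread over [-2, 2].
    preferred = np.linspace(-2.0, 2.0, 20)
    raw = population_response(stimulus=0.3, preferred=preferred)
    normalized = divisive_normalization(raw)
    print(normalized.sum())  # close to 1, so the normalized population acts like a probability-like code
    ```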

  • Neural Computation with efficient population codes
    BMC Neuroscience, 2013
    Co-Authors: Brian J. Fischer
    Abstract:

    Brain function depends on populations of neurons that perform Computations on perceptually and behaviorally relevant variables. One of the main goals of neuroscience is to understand how the responses of populations of neurons and the connectivity patterns between groups of neurons allow brains to perform a wide range of Neural Computations. The Neural Engineering Framework (NEF) is a promising approach to designing Neural models that perform many Neural Computations [1,2]. The central thesis behind the NEF is that populations of neurons represent, and perform Computations on, low-dimensional time-dependent variables. By characterizing how neurons in a population encode a variable, and how the variable can be decoded from the distributed representation, functioning Neural circuits are constructed that allow a comparison with experimental data at a range of levels from single neuron responses to connectivity patterns to perceptual and behavioral performance. The particular models that result from this approach depend on how the Neural encoding and decoding processes are characterized. This is where emerging principles of Neural Computation constrain models of Neural encoding and decoding. We describe how efficient codes are used to design Neural circuit models that perform a wide variety of Computations. The fundamental characteristic of the efficient code is that the Neural representation is adapted to the statistics of the environment. Here, we take an efficient code to be one where the preferred stimuli are drawn from the prior distribution and the Neural tuning curves are proportional to the likelihood function [3-6]. We show that in an efficient code, a general center-of-mass decoder can extract Bayesian estimates of encoded variables or functions of encoded variables, which allows for the construction of networks that perform many Computations [3,4]. Networks constructed using the method we describe have several nice functional properties and match many experimental observations. First, Neural tuning properties match the statistics of the variables they process. Second, in this framework, normalization is an essential Computation at each stage of processing. This is consistent with normalization being described as a canonical Neural Computation [7]. Also, Computation in networks of neurons using efficient coding is robust to neuronal loss and uses local connection rules. Finally, the networks we describe are flexible and can incorporate changes to environmental statistics or goals using gain modulation or changes in tuning curve widths. The overall result is to show the importance of several emerging principles of Neural Computation in an already successful modeling framework.
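
    Below is a minimal sketch, under assumed Gaussian prior and likelihood, of the efficient-code construction summarized above: preferred stimuli are sampled from the prior, responses follow the likelihood evaluated at each preferred stimulus, and a center-of-mass readout then approximates the Bayesian posterior mean. All parameter values are illustrative assumptions, not values from the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed setup: Gaussian prior over the stimulus and Gaussian sensory noise (likelihood).
    prior_mean, prior_sd = 0.0, 1.0
    noise_sd = 0.3
    n_neurons = 200

    # Efficient code in the sense used above: preferred stimuli sampled from the prior,
    # responses proportional to the likelihood of the observation at each preferred stimulus.
    preferred = rng.normal(prior_mean, prior_sd, n_neurons)

    def responses(observation):
        return np.exp(-0.5 * ((observation - preferred) / noise_sd) ** 2)

    def center_of_mass_decode(r):
        """Center-of-mass readout over the preferred stimuli."""
        return np.sum(r * preferred) / np.sum(r)

    obs = 1.2
    estimate = center_of_mass_decode(responses(obs))
    # With this coding scheme the readout approximates the Bayesian posterior mean,
    # which for a Gaussian prior and likelihood is a precision-weighted average:
    posterior_mean = (obs / noise_sd**2 + prior_mean / prior_sd**2) / (1 / noise_sd**2 + 1 / prior_sd**2)
    print(estimate, posterior_mean)
    ```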

Fernando J. Corbacho - One of the best experts on this subject based on the ideXlab platform.

  • ICANN - Towards a New Information Processing Measure for Neural Computation
    Artificial Neural Networks — ICANN 2002, 2002
    Co-Authors: Manuel A. Sánchez-montañés, Fernando J. Corbacho
    Abstract:

    The understanding of the relation between structure and function in the brain requires theoretical frameworks capable of dealing with a large variety of complex experimental data. Likewise, Neural Computation strives to design structures from which complex functionality should emerge. The framework of information theory has been partially successful in explaining certain brain structures with respect to sensory transformations under restricted conditions. Yet classical measures of information have not taken explicit account of some of the fundamental concepts in brain theory and Neural Computation: namely, that optimal coding depends on the specific task(s) to be solved by the system, and that autonomy and goal-orientedness also depend on extracting relevant information from the environment and specific knowledge from the receiver, in order to affect it in the desired way. This paper presents a new, general (i.e. implementation-independent) information processing measure that takes these issues into account. It is based on measuring the transformations required to go from the original alphabet in which the sensory messages are represented to the objective alphabet, which depends on the implicit task(s) imposed by the environment-system relation.
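
    The abstract contrasts the proposed measure with classical information measures; purely for reference (the paper's own task-dependent measure is not reproduced here), below is a minimal sketch of the classical quantity in question, the Shannon mutual information between stimulus and response, computed from a toy joint distribution.

    ```python
    import numpy as np

    def mutual_information(joint):
        """Shannon mutual information I(X;Y) in bits from a joint probability table p(x, y)."""
        joint = np.asarray(joint, dtype=float)
        px = joint.sum(axis=1, keepdims=True)
        py = joint.sum(axis=0, keepdims=True)
        nonzero = joint > 0
        return np.sum(joint[nonzero] * np.log2(joint[nonzero] / (px @ py)[nonzero]))

    # Toy sensory channel: two stimuli mapped to two responses with some confusion.
    p_xy = [[0.4, 0.1],
            [0.1, 0.4]]
    print(mutual_information(p_xy))  # about 0.28 bits
    ```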

Terrence J. Sejnowski - One of the best experts on this subject based on the ideXlab platform.

  • Neural Computation theories of learning
    Reference Module in Neuroscience and Biobehavioral Psychology, Learning and Memory: A Comprehensive Reference (Second Edition), 2017
    Co-Authors: Samat Moldakarimov, Terrence J. Sejnowski
    Abstract:

    This chapter gives an overview of basic learning rules and examples of learning algorithms used in Neural network models, and describes specific problems that can be solved by these algorithms. Hebb's hypothesis, which states that connection strengths between neurons are modified based on Neural activity in the presynaptic and postsynaptic cells, has served as the starting point for these models as well as for studies of synaptic plasticity in biological Neural systems. Neural network models assume that the strengths of connections are adjusted according to an algorithm that specifies how, and under what conditions, a learning rule or a combination of several different learning rules should be applied to adjust the network connections. Although early modeling studies focused mainly on traditional mechanisms of synaptic plasticity, such as long-term potentiation and long-term depression, more recently explored forms of Neural plasticity, such as homeostatic plasticity and plasticity of intrinsic Neural excitability, are being incorporated into network models. Models that combine several types of plasticity show promise in overcoming some of the limitations of earlier models.
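
    As a minimal illustration of the Hebbian starting point described above (not code from the chapter), here is a plain Hebbian weight update for a layer of linear neurons; the learning rate and layer sizes are illustrative assumptions, and a practical model would add a stabilizing mechanism (e.g. normalization or homeostatic scaling), since the raw rule is unstable.

    ```python
    import numpy as np

    def hebbian_update(weights, pre, post, lr=0.01):
        """Basic Hebbian rule: dW = lr * post * pre (outer product for a layer of neurons)."""
        return weights + lr * np.outer(post, pre)

    # Illustrative example: 3 presynaptic and 2 postsynaptic linear neurons.
    rng = np.random.default_rng(1)
    w = rng.normal(scale=0.1, size=(2, 3))
    pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity
    post = w @ pre                    # postsynaptic activity (linear neurons)
    w = hebbian_update(w, pre, post)
    print(w)
    ```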

  • Neural codes and distributed representations: foundations of Neural Computation
    1999
    Co-Authors: L. F. Abbott, Terrence J. Sejnowski
    Abstract:

    Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years. The present volume focuses on Neural codes and representations, topics of broad interest to neuroscientists and modelers. The topics addressed are: how neurons encode information through action potential firing patterns, how populations of neurons represent information, and how individual neurons use dendritic processing and biophysical properties of synapses to decode spike trains. The papers encompass a wide range of levels of investigation, from dendrites and neurons to networks and systems.

  • Unsupervised learning : foundations of Neural Computation
    1999
    Co-Authors: Geoffrey E. Hinton, Terrence J. Sejnowski
    Abstract:

    Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years. This volume of Foundations of Neural Computation, on unsupervised learning algorithms, focuses on Neural network learning algorithms that do not require an explicit teacher. The goal of unsupervised learning is to extract an efficient internal representation of the statistical structure implicit in the inputs. These algorithms provide insights into the development of the cerebral cortex and implicit learning in humans. They are also of interest to engineers working in areas such as computer vision and speech recognition who seek efficient representations of raw input data.

Jérémie Cabessa - One of the best experts on this subject based on the ideXlab platform.

  • Turing complete Neural Computation based on synaptic plasticity.
    PloS one, 2019
    Co-Authors: Jérémie Cabessa
    Abstract:

    In Neural Computation, the essential information is generally encoded into the neurons via their spiking configurations, activation values or (attractor) dynamics. The synapses and their associated plasticity mechanisms are, by contrast, mainly used to process this information and implement the crucial learning features. Here, we propose a novel Turing complete paradigm of Neural Computation where the essential information is encoded into discrete synaptic states, and the updating of this information is achieved via synaptic plasticity mechanisms. More specifically, we prove that any 2-counter machine (and hence any Turing machine) can be simulated by a rational-weighted recurrent Neural network employing spike-timing-dependent plasticity (STDP) rules. The Computational states and counter values of the machine are encoded into discrete synaptic strengths. The transitions between those synaptic weights are then achieved via STDP. These considerations show that a Turing complete synaptic-based paradigm of Neural Computation is theoretically possible and potentially exploitable. They support the idea that synapses are not only crucially involved in information processing and learning, but also in the encoding of essential information. This approach represents a paradigm shift in the field of Neural Computation.
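
    The construction itself is not reproduced here, but as an illustration of the STDP ingredient it relies on, below is a minimal sketch of a standard pair-based STDP window; the amplitudes and time constants are illustrative assumptions rather than the paper's values.

    ```python
    import math

    A_PLUS, A_MINUS = 0.05, 0.055      # potentiation / depression amplitudes (assumed)
    TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms (assumed)

    def stdp_delta_w(t_pre, t_post):
        """Weight change for a single pre/post spike pair (spike times in ms)."""
        dt = t_post - t_pre
        if dt >= 0:  # presynaptic spike precedes postsynaptic spike -> potentiation
            return A_PLUS * math.exp(-dt / TAU_PLUS)
        return -A_MINUS * math.exp(dt / TAU_MINUS)  # post before pre -> depression

    print(stdp_delta_w(10.0, 15.0))   # positive: strengthen the synapse
    print(stdp_delta_w(15.0, 10.0))   # negative: weaken the synapse
    ```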

  • ICANN (1) - Neural Computation with Spiking Neural Networks Composed of Synfire Rings
    Artificial Neural Networks and Machine Learning – ICANN 2017, 2017
    Co-Authors: Jérémie Cabessa, Ginette Horcholle-bossavit, Brigitte Quenet
    Abstract:

    We show that any finite state automaton can be simulated by a Neural network of Izhikevich spiking neurons composed of interconnected synfire rings. The construction turns out to be robust to the introduction of two kinds of synaptic noise. These considerations show that a biological paradigm of Neural Computation based on sustained activities of cell assemblies is indeed possible.
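
    The synfire-ring wiring is not reproduced here, but as a sketch of the building block the construction uses, below is a minimal Euler simulation of a single Izhikevich neuron with standard regular-spiking parameters; the input current and simulation settings are illustrative assumptions.

    ```python
    def simulate_izhikevich(I, T=200.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
        """Euler integration of the Izhikevich (2003) neuron model; returns spike times in ms."""
        v, u = c, b * c
        spikes = []
        for step in range(int(T / dt)):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:            # spike: record the time, then reset
                spikes.append(step * dt)
                v, u = c, u + d
        return spikes

    print(simulate_izhikevich(I=10.0))  # a regular train of spike times (ms)
    ```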