Synaptic Weight

The Experts below are selected from a list of 360 Experts worldwide, ranked by the ideXlab platform.

P. J. Edwards - One of the best experts on this subject based on the ideXlab platform.

  • can deterministic penalty terms model the effects of Synaptic Weight noise on network fault tolerance?
    International Journal of Neural Systems, 1995
    Co-Authors: P. J. Edwards, Alan F. Murray
    Abstract:

    This paper investigates fault tolerance in feedforward neural networks, for a realistic fault model based on analog hardware. In our previous work with Synaptic Weight noise [26], we showed significant fault tolerance enhancement over standard training algorithms. We proposed that when introduced into training, Weight noise distributes the network computation more evenly across the Weights and thus enhances fault tolerance. Here we compare those results with an approximation to the mechanisms induced by stochastic Weight noise, incorporated into training deterministically via penalty terms. The penalty terms are an approximation to Weight saliency and therefore, in addition, we assess a number of other Weight saliency measures and perform comparison experiments. The results show that the first-term approximation is an incomplete model of Weight noise in terms of fault tolerance. Also, the error Hessian is shown to be the most accurate measure of Weight saliency.
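
    Purely as an illustration of the kind of deterministic penalty term discussed above (not the authors' exact formulation), the sketch below uses the second-order fact that zero-mean multiplicative Weight noise of variance sigma^2 raises the expected error by roughly 0.5 * sigma^2 * sum_i w_i^2 * H_ii, where H_ii is the diagonal of the error Hessian; the same per-Weight quantity doubles as a Hessian-based saliency measure. The quadratic toy loss, the finite-difference Hessian estimate and every name in the code are assumptions made for the example.

      import numpy as np

      def diag_hessian(loss, w, eps=1e-4):
          """Diagonal of the Hessian of `loss` at `w`, by central finite
          differences (illustrative only; far too slow for real networks)."""
          h = np.zeros_like(w)
          f0 = loss(w)
          for i in range(w.size):
              step = np.zeros_like(w)
              step[i] = eps
              h[i] = (loss(w + step) - 2.0 * f0 + loss(w - step)) / eps**2
          return h

      def noise_penalty(loss, w, sigma=0.1):
          """Deterministic penalty approximating the expected error increase
          under multiplicative Weight noise w_i -> w_i * (1 + N(0, sigma^2))."""
          return 0.5 * sigma**2 * np.sum(w**2 * diag_hessian(loss, w))

      # Toy quadratic "error" over three Weights.
      A = np.diag([1.0, 4.0, 0.25])
      toy_loss = lambda w: 0.5 * w @ A @ w
      w = np.array([1.0, -0.5, 2.0])

      print("Hessian-based saliency:", w**2 * diag_hessian(toy_loss, w))
      print("noise penalty         :", noise_penalty(toy_loss, w))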

  • enhanced mlp performance and fault tolerance resulting from Synaptic Weight noise during training
    IEEE Transactions on Neural Networks, 1994
    Co-Authors: Alan F. Murray, P. J. Edwards
    Abstract:

    We analyze the effects of analog noise on the Synaptic arithmetic during multilayer perceptron training, by expanding the cost function to include noise-mediated terms. Predictions are made in the light of these calculations that suggest that fault tolerance, training quality and training trajectory should be improved by such noise-injection. Extensive simulation experiments on two distinct classification problems substantiate the claims. The results appear to be perfectly general for all training schemes where Weights are adjusted incrementally, and have wide-ranging implications for all applications, particularly those involving "inaccurate" analog neural VLSI.

  • Synaptic Weight noise during multilayer perceptron training: fault tolerance and training improvements
    IEEE transactions on neural networks, 1993
    Co-Authors: Alan F. Murray, P. J. Edwards
    Abstract:

    The authors develop a mathematical model of the effects of Synaptic arithmetic noise in multilayer perceptron training. Predictions are made regarding enhanced fault-tolerance and generalization ability and improved learning trajectory. These predictions are subsequently verified by simulation. The results are perfectly general and have profound implications for the accuracy requirements in multilayer perceptron (MLP) training, particularly in the analog domain.

  • Synaptic Weight noise during mlp learning enhances fault tolerance generalization and learning trajectory
    Neural Information Processing Systems, 1992
    Co-Authors: Alan F. Murray, P. J. Edwards
    Abstract:

    We analyse the effects of analog noise on the Synaptic arithmetic during MultiLayer Perceptron training, by expanding the cost function to include noise-mediated penalty terms. Predictions are made in the light of these calculations which suggest that fault tolerance, generalisation ability and learning trajectory should be improved by such noise-injection. Extensive simulation experiments on two distinct classification problems substantiate the claims. The results appear to be perfectly general for all training schemes where Weights are adjusted incrementally, and have wide-ranging implications for all applications, particularly those involving "inaccurate" analog neural VLSI.
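
    As a concrete (and heavily simplified) sketch of the noise-injection scheme the abstracts above describe, the code below perturbs the Weights of a tiny two-layer perceptron with zero-mean multiplicative Gaussian noise on every training step, so the network cannot rely on any single Weight being exact. The XOR task, layer sizes, noise level sigma and learning rate are arbitrary choices for the illustration, not values taken from the papers.

      import numpy as np

      rng = np.random.default_rng(0)

      def forward(x, W1, W2):
          h = np.tanh(x @ W1)
          h = np.concatenate([h, np.ones((len(h), 1))], axis=1)   # hidden bias unit
          return np.tanh(h @ W2), h

      # XOR with +/-1 targets; a constant 1 column provides the input bias.
      X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
      y = np.array([[-1.], [1.], [1.], [-1.]])

      W1 = rng.normal(scale=0.5, size=(3, 8))
      W2 = rng.normal(scale=0.5, size=(9, 1))
      sigma, lr = 0.1, 0.5

      for epoch in range(5000):
          # Noisy copies of the Weights: w -> w * (1 + N(0, sigma^2))
          W1n = W1 * (1.0 + sigma * rng.standard_normal(W1.shape))
          W2n = W2 * (1.0 + sigma * rng.standard_normal(W2.shape))

          out, h = forward(X, W1n, W2n)            # forward pass with noisy Weights
          err = out - y

          # Backpropagate through the noisy Weights, apply updates to the clean ones.
          d_out = err * (1.0 - out**2)
          d_h = ((d_out @ W2n.T) * (1.0 - h**2))[:, :-1]
          W2 -= lr * h.T @ d_out / len(X)
          W1 -= lr * X.T @ d_h / len(X)

      print("noise-free outputs:", forward(X, W1, W2)[0].ravel().round(2))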

Alan F. Murray - One of the best experts on this subject based on the ideXlab platform.

  • can deterministic penalty terms model the effects of Synaptic Weight noise on network fault tolerance?
    International Journal of Neural Systems, 1995
    Co-Authors: P. J. Edwards, Alan F. Murray
    Abstract:

    This paper investigates fault tolerance in feedforward neural networks, for a realistic fault model based on analog hardware. In our previous work with Synaptic Weight noise [26], we showed significant fault tolerance enhancement over standard training algorithms. We proposed that when introduced into training, Weight noise distributes the network computation more evenly across the Weights and thus enhances fault tolerance. Here we compare those results with an approximation to the mechanisms induced by stochastic Weight noise, incorporated into training deterministically via penalty terms. The penalty terms are an approximation to Weight saliency and therefore, in addition, we assess a number of other Weight saliency measures and perform comparison experiments. The results show that the first-term approximation is an incomplete model of Weight noise in terms of fault tolerance. Also, the error Hessian is shown to be the most accurate measure of Weight saliency.

  • enhanced mlp performance and fault tolerance resulting from Synaptic Weight noise during training
    IEEE Transactions on Neural Networks, 1994
    Co-Authors: Alan F. Murray, P. J. Edwards
    Abstract:

    We analyze the effects of analog noise on the Synaptic arithmetic during multilayer perceptron training, by expanding the cost function to include noise-mediated terms. Predictions are made in the light of these calculations that suggest that fault tolerance, training quality and training trajectory should be improved by such noise-injection. Extensive simulation experiments on two distinct classification problems substantiate the claims. The results appear to be perfectly general for all training schemes where Weights are adjusted incrementally, and have wide-ranging implications for all applications, particularly those involving "inaccurate" analog neural VLSI.

  • Synaptic Weight noise during multilayer perceptron training: fault tolerance and training improvements
    IEEE transactions on neural networks, 1993
    Co-Authors: Alan F. Murray, P. J. Edwards
    Abstract:

    The authors develop a mathematical model of the effects of Synaptic arithmetic noise in multilayer perceptron training. Predictions are made regarding enhanced fault-tolerance and generalization ability and improved learning trajectory. These predictions are subsequently verified by simulation. The results are perfectly general and have profound implications for the accuracy requirements in multilayer perceptron (MLP) training, particularly in the analog domain.

  • Synaptic Weight noise during mlp learning enhances fault tolerance generalization and learning trajectory
    Neural Information Processing Systems, 1992
    Co-Authors: Alan F. Murray, P. J. Edwards
    Abstract:

    We analyse the effects of analog noise on the Synaptic arithmetic during MultiLayer Perceptron training, by expanding the cost function to include noise-mediated penalty terms. Predictions are made in the light of these calculations which suggest that fault tolerance, generalisation ability and learning trajectory should be improved by such noise-injection. Extensive simulation experiments on two distinct classification problems substantiate the claims. The results appear to be perfectly general for all training schemes where Weights are adjusted incrementally, and have wide-ranging implications for all applications, particularly those involving "inaccurate" analog neural VLSI.

K Meier - One of the best experts on this subject based on the ideXlab platform.

  • is a 4 bit Synaptic Weight resolution enough? constraints on enabling spike timing dependent plasticity in neuromorphic hardware
    Frontiers in Neuroscience, 2012
    Co-Authors: Thomas Pfeil, Markus Diesmann, Tobias C Potjans, Sven Schrader, Wiebke Potjans, Johannes Schemmel, K Meier
    Abstract:

    Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing-dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing Synaptic Weight resolution. In this study, we give an estimate for the impact of Synaptic Weight discretization on different levels, ranging from random walks of individual Weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of Synaptic Weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how Weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may raise synergy effects between hardware developers and neuroscientists.
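
    A minimal sketch of the Weight-discretization step itself (assuming a simple uniform grid; this is not the FACETS tool flow): continuous Synaptic Weights are clipped to a configurable range and mapped onto the 16 levels a 4-bit synapse can store, optionally with stochastic rounding so the mean Weight is preserved. All names and values below are illustrative.

      import numpy as np

      def discretize_4bit(w, w_max, stochastic=False, rng=None):
          """Clip to [0, w_max] and map onto 16 levels k * w_max / 15, k = 0..15.
          With stochastic=True, round up or down with probability proportional
          to the remainder, which preserves the mean Weight."""
          levels = 15                                  # 2**4 - 1 intervals
          x = np.clip(w, 0.0, w_max) / w_max * levels
          if stochastic:
              rng = rng or np.random.default_rng()
              lo = np.floor(x)
              x = lo + (rng.random(x.shape) < (x - lo))
          else:
              x = np.round(x)
          return x / levels * w_max

      rng = np.random.default_rng(1)
      w = rng.uniform(0.0, 1.0, size=5)
      print("continuous Weights:", w.round(3))
      print("4-bit Weights     :", discretize_4bit(w, w_max=1.0).round(3))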

Giacomo Indiveri - One of the best experts on this subject based on the ideXlab platform.

  • kernelized Synaptic Weight matrices
    International Conference on Machine Learning, 2018
    Co-Authors: Lorenz K Muller, Julien N P Martel, Giacomo Indiveri
    Abstract:

    In this paper we introduce a novel neural network architecture, in which Weight matrices are re-parametrized in terms of low-dimensional vectors, interacting through kernel functions. A layer of our network can be interpreted as introducing a (potentially infinitely wide) linear layer between input and output. We describe the theory underpinning this model and validate it with concrete examples, exploring how it can be used to impose structure on neural networks in diverse applications ranging from data visualization to recommender systems. We achieve state-of-the-art performance in a collaborative filtering task (MovieLens).
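
    A hedged sketch of the re-parametrization idea: instead of storing an n_in x n_out Weight matrix directly, keep one low-dimensional vector per input unit and per output unit and let the Synaptic Weight between them be a kernel function of the two vectors. The Gaussian kernel, the dimensions and all names below are illustrative choices, not the paper's exact configuration.

      import numpy as np

      def gaussian_kernel(U, V, gamma=1.0):
          # Pairwise squared distances between rows of U (n_in, d) and V (n_out, d).
          sq = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * sq)

      rng = np.random.default_rng(0)
      n_in, n_out, d = 8, 5, 2          # (n_in + n_out) * d parameters instead of n_in * n_out
      U = rng.normal(size=(n_in, d))    # one low-dimensional vector per input unit
      V = rng.normal(size=(n_out, d))   # one per output unit

      W = gaussian_kernel(U, V)         # implied Synaptic Weight matrix, shape (8, 5)
      x = rng.normal(size=(1, n_in))
      print("layer output:", (x @ W).round(3))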

  • an event based neural network architecture with an asynchronous programmable Synaptic memory
    IEEE Transactions on Biomedical Circuits and Systems, 2014
    Co-Authors: Saber Moradi, Giacomo Indiveri
    Abstract:

    We present a hybrid analog/digital very large scale integration (VLSI) implementation of a spiking neural network with programmable Synaptic Weights. The Synaptic Weight values are stored in an asynchronous Static Random Access Memory (SRAM) module, which is interfaced to a fast current-mode event-driven DAC for producing Synaptic currents with the appropriate amplitude values. These currents are further integrated by current-mode integrator synapses to produce biophysically realistic temporal dynamics. The synapse output currents are then integrated by compact and efficient integrate-and-fire silicon neuron circuits with spike-frequency adaptation and adjustable refractory period and spike-reset voltage settings. The fabricated chip comprises a total of 32 × 32 SRAM cells, 4 × 32 synapse circuits and 32 × 1 silicon neurons. It acts as a transceiver, receiving asynchronous events as input, performing neural computation with hybrid analog/digital circuits on the input spikes, and eventually producing digital asynchronous events as output. Input, output, and Synaptic Weight values are transmitted to/from the chip using a common communication protocol based on the Address Event Representation (AER). Using this representation, it is possible to interface the device to a workstation or a micro-controller and explore the effect of different types of Spike-Timing Dependent Plasticity (STDP) learning algorithms for updating the Synaptic Weight values in the SRAM module. We present experimental results demonstrating the correct operation of all the circuits present on the chip.
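
    The toy loop below is a software analogue built on stated assumptions, not the chip's actual circuits: incoming address-events select a row of a 32 x 32 Weight table (playing the role of the SRAM), the stored values scale the synaptic current delivered to 32 integrate-and-fire neurons, and neurons that cross threshold emit output address-events. Every constant is an illustrative guess.

      import numpy as np

      rng = np.random.default_rng(0)
      n_pre, n_post = 32, 32
      W = rng.integers(0, 16, size=(n_pre, n_post)) / 15.0   # 4-bit-style Weight table

      v = np.zeros(n_post)                       # membrane potentials
      v_thresh, leak = 1.0, 0.95
      input_events = [(t, int(rng.integers(n_pre))) for t in range(200)]   # (time, pre address)
      output_events = []

      for t, pre in input_events:
          v = v * leak + 0.2 * W[pre]            # event selects a row of the Weight table
          fired = np.where(v >= v_thresh)[0]
          output_events += [(t, int(post)) for post in fired]   # output address-events
          v[fired] = 0.0                         # reset neurons that spiked

      print(len(output_events), "output events; first few:", output_events[:5])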

  • reliable computation in noisy backgrounds using real time neuromorphic hardware
    Biomedical Circuits and Systems Conference, 2007
    Co-Authors: Hsiping Wang, Giacomo Indiveri, Elisabetta Chicca, Terrence J Sejnowski
    Abstract:

    Spike-time-based coding of neural information, in contrast to rate coding, requires that neurons reliably and precisely fire spikes in response to repeated identical inputs, despite a high degree of noise from stochastic Synaptic firing and extraneous background inputs. We investigated the degree of reliability and precision achievable in various noisy background conditions using real-time neuromorphic VLSI hardware which models integrate-and-fire spiking neurons and dynamic synapses. To do so, we varied two properties of the inputs to a single neuron: Synaptic Weight and synchrony magnitude (number of synchronously firing pre-Synaptic neurons). Thanks to the real-time response properties of the VLSI system, we could carry out extensive exploration of the parameter space, and measure the neuron's firing rate and reliability in real-time. Reliability of output spiking was primarily influenced by the amount of synchronicity of Synaptic input, rather than the Synaptic Weight of those synapses. These results highlight possible regimes in which real-time neuromorphic systems might be better able to reliably compute with spikes despite noisy input.
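
    A rough simulation sketch of the kind of protocol described above, with every constant an assumption: a leaky integrate-and-fire neuron receives noisy background input plus a synchronous volley from n_sync presynaptic neurons, and reliability is taken to be the fraction of repeated trials in which the volley evokes a spike within a short window. Sweeping w_syn instead of n_sync would probe the Synaptic Weight axis in the same way.

      import numpy as np

      rng = np.random.default_rng(0)

      def trial(n_sync, w_syn, n_steps=160, volley_t=120, n_bg=100, bg_rate=0.03, w_bg=0.01):
          """One run of a leaky integrate-and-fire neuron; returns its spike times."""
          v, leak, v_thresh = 0.0, 0.95, 1.0
          spikes = []
          for t in range(n_steps):
              i_in = w_bg * rng.binomial(n_bg, bg_rate)    # noisy background synapses
              if t == volley_t:
                  i_in += w_syn * n_sync                   # synchronous presynaptic volley
              v = v * leak + i_in
              if v >= v_thresh:
                  spikes.append(t)
                  v = 0.0
          return spikes

      def reliability(n_sync, w_syn=0.08, trials=200, volley_t=120, window=3):
          """Fraction of trials in which the volley evokes a spike within `window` steps."""
          hits = sum(any(volley_t <= t <= volley_t + window for t in trial(n_sync, w_syn))
                     for _ in range(trials))
          return hits / trials

      for n_sync in (2, 5, 10):
          print(f"n_sync={n_sync:2d}  reliability={reliability(n_sync):.2f}")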

Wenrui Zhang - One of the best experts on this subject based on the ideXlab platform.

  • skip connected self recurrent spiking neural networks with joint intrinsic parameter and Synaptic Weight training
    Neural Computation, 2021
    Co-Authors: Wenrui Zhang
    Abstract:

    As an important class of spiking neural networks (SNNs), recurrent spiking neural networks (RSNNs) possess great computational power and have been widely used for processing sequential data like audio and text. However, most RSNNs suffer from two problems. First, due to the lack of architectural guidance, random recurrent connectivity is often adopted, which does not guarantee good performance. Second, training of RSNNs is in general challenging, bottlenecking achievable model accuracy. To address these problems, we propose a new type of RSNN, skip-connected self-recurrent SNNs (ScSr-SNNs). Recurrence in ScSr-SNNs is introduced by adding self-recurrent connections to spiking neurons. The SNNs with self-recurrent connections can realize recurrent behaviors similar to those of more complex RSNNs, while the error gradients can be more straightforwardly calculated due to the mostly feedforward nature of the network. The network dynamics is enriched by skip connections between nonadjacent layers. Moreover, we propose a new backpropagation (BP) method, backpropagated intrinsic plasticity (BIP), to boost the performance of ScSr-SNNs further by training intrinsic model parameters. Unlike standard intrinsic plasticity rules that adjust the neuron's intrinsic parameters according to neuronal activity, the proposed BIP method optimizes intrinsic parameters based on the backpropagated error gradient of a well-defined global loss function in addition to Synaptic Weight training. Based on challenging speech, neuromorphic speech, and neuromorphic image data sets, the proposed ScSr-SNNs can boost performance by up to 2.85% compared with other types of RSNNs trained by state-of-the-art BP methods.
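
    A forward-pass-only sketch of the self-recurrent idea (the skip connections, BP training and the proposed BIP rule are omitted, and all sizes and constants are illustrative guesses): each spiking neuron feeds its own previous spike back through a single self-recurrent Weight, so the recurrent Weight matrix is diagonal and the rest of the layer stays feedforward; the per-neuron leak stands in for the intrinsic parameter that BIP would additionally train.

      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_hid, T = 10, 6, 25

      W_in = rng.normal(scale=0.5, size=(n_in, n_hid))   # feedforward Synaptic Weights
      w_self = rng.uniform(0.2, 0.8, size=n_hid)         # self-recurrent Weights (diagonal)
      tau = rng.uniform(0.80, 0.95, size=n_hid)          # per-neuron leak: the intrinsic
                                                         #   parameter BIP would also train
      x = (rng.random((T, n_in)) < 0.3).astype(float)    # random input spike trains
      v = np.zeros(n_hid)                                # membrane potentials
      s = np.zeros(n_hid)                                # spikes from the previous step
      counts = np.zeros(n_hid)

      for t in range(T):
          v = tau * v + x[t] @ W_in + w_self * s         # each neuron sees only its own
          s = (v >= 1.0).astype(float)                   #   previous spike (self-recurrence)
          v = v * (1.0 - s)                              # reset neurons that fired
          counts += s

      print("spikes per neuron over", T, "steps:", counts.astype(int))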

  • skip connected self recurrent spiking neural networks with joint intrinsic parameter and Synaptic Weight training
    arXiv: Neural and Evolutionary Computing, 2020
    Co-Authors: Wenrui Zhang
    Abstract:

    As an important class of spiking neural networks (SNNs), recurrent spiking neural networks (RSNNs) possess great computational power and have been widely used for processing sequential data like audio and text. However, most RSNNs suffer from two problems. 1. Due to a lack of architectural guidance, random recurrent connectivity is often adopted, which does not guarantee good performance. 2. Training of RSNNs is in general challenging, bottlenecking achievable model accuracy. To address these problems, we propose a new type of RSNNs called Skip-Connected Self-Recurrent SNNs (ScSr-SNNs). Recurrence in ScSr-SNNs is introduced in a stereotyped manner by adding self-recurrent connections to spiking neurons, which implements local memory. The network dynamics is enriched by skip connections between nonadjacent layers. Constructed by simplified self-recurrent and skip connections, ScSr-SNNs are able to realize recurrent behaviors similar to those of more complex RSNNs while the error gradients can be more straightforwardly calculated due to the mostly feedforward nature of the network. Moreover, we propose a new backpropagation (BP) method called backpropagated intrinsic plasticity (BIP) to further boost the performance of ScSr-SNNs by training intrinsic model parameters. Unlike standard intrinsic plasticity rules that adjust the neuron's intrinsic parameters according to neuronal activity, the proposed BIP method optimizes intrinsic parameters based on the backpropagated error gradient of a well-defined global loss function in addition to Synaptic Weight training. Based upon challenging speech and neuromorphic speech datasets including TI46-Alpha, TI46-Digits, and N-TIDIGITS, the proposed ScSr-SNNs can boost performance by up to 2.55% compared with other types of RSNNs trained by state-of-the-art BP methods.