Supervised Training

The Experts below are selected from a list of 91,608 Experts worldwide, ranked by the ideXlab platform

Diego Fernandez-Prieto - One of the best experts on this subject based on the ideXlab platform.

Lorenzo Bruzzone - One of the best experts on this subject based on the ideXlab platform.

George B Hanna - One of the best experts on this subject based on the ideXlab platform.

Jürgen Kurths - One of the best experts on this subject based on the ideXlab platform.

  • an efficient Supervised Training algorithm for multilayer spiking neural networks
    PLOS ONE, 2016
    Co-Authors: Xiurui Xie, Guisong Liu, Malu Zhang, Jürgen Kurths
    Abstract:

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and the temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and the temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces Training efficiency significantly. Most existing methods for Training hierarchical SNNs are based on the traditional back-propagation algorithm and inherit its drawbacks of gradient diffusion and sensitivity to parameters. To retain the powerful computational capability of the hierarchical structure and temporal encoding mechanism while overcoming the low efficiency of existing algorithms, a new Training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are obtained by solving a quadratic function in the spike response model, instead of checking the postsynaptic voltage at every time point as traditional algorithms do. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise Training. Furthermore, our algorithm exploits the mathematical relation between the weight variation and the voltage error, which makes normalization of the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper.
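
A minimal sketch of the feedforward step described in the abstract: if the spike response kernel is approximated by a quadratic, the threshold-crossing time has a closed form, so no time-stepping is needed. The kernel coefficients, threshold, and causality handling below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Hypothetical quadratic approximation of the spike response kernel:
# eps(s) ~ A*s**2 + B*s + C for s = t - t_pre. These coefficients are
# assumptions for illustration (this kernel peaks at s = 1 with value 1).
A, B, C = -1.0, 2.0, 0.0
THETA = 1.5  # firing threshold (assumed)

def output_spike_time(pre_times, weights):
    """Solve sum_i w_i * eps(t - t_i) = THETA for the earliest crossing t.

    Collecting powers of t yields a single quadratic a*t^2 + b*t + c = 0,
    so the output spike time comes from the closed-form root instead of
    scanning the postsynaptic voltage at every simulation time point.
    """
    w = np.asarray(weights, dtype=float)
    ti = np.asarray(pre_times, dtype=float)
    # Expand w_i * (A*(t - t_i)**2 + B*(t - t_i) + C) and collect terms.
    a = A * w.sum()                                  # assumes w.sum() != 0
    b = (w * (B - 2.0 * A * ti)).sum()
    c = (w * (A * ti**2 - B * ti + C)).sum() - THETA
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                                  # never reaches threshold
    roots = (-b + np.array([-1.0, 1.0]) * np.sqrt(disc)) / (2.0 * a)
    valid = roots[roots > ti.max()]                  # crude causality check
    return valid.min() if valid.size else None

print(output_spike_time([0.0, 0.2], [1.0, 1.0]))     # ~0.61 with these values
```

The actual algorithm would also need the piecewise (causal) kernel and refractory effects; the point here is only that a quadratic form turns spike-time detection into root finding.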

Hynek Hermansky - One of the best experts on this subject based on the ideXlab platform.

  • deep neural network features and semi-Supervised Training for low-resource speech recognition
    International Conference on Acoustics Speech and Signal Processing, 2013
    Co-Authors: Samuel Thomas, Michael L Seltzer, Kenneth Church, Hynek Hermansky
    Abstract:

    We propose a new technique for Training deep neural networks (DNNs) as data-driven feature front-ends for large vocabulary continuous speech recognition (LVCSR) in low-resource settings. To circumvent the lack of sufficient Training data for acoustic modeling in these scenarios, we use transcribed multilingual data and semi-Supervised Training to build the proposed feature front-ends. In our experiments, the proposed features provide an absolute improvement of 16% in a low-resource LVCSR setting with only one hour of in-domain Training data. While close to three-fourths of these gains come from DNN-based features, the remainder comes from semi-Supervised Training.
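
The semi-Supervised part of such a pipeline is essentially self-training: decode untranscribed data with a seed model and retrain on the confidently auto-labeled portion. Below is a minimal sketch with a toy classifier standing in for the DNN acoustic model; the data, confidence threshold, and number of rounds are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins: two Gaussian classes. In the paper's setting, features
# would come from the DNN front-end, labels from transcripts, and the
# unlabeled pool from untranscribed in-domain audio.
X_lab = rng.normal([[0.0, 0.0]] * 20 + [[3.0, 3.0]] * 20)      # small "transcribed" set
y_lab = np.array([0] * 20 + [1] * 20)
X_unlab = rng.normal([[0.0, 0.0]] * 200 + [[3.0, 3.0]] * 200)  # untranscribed pool

CONFIDENCE = 0.95  # assumed selection threshold

model = LogisticRegression().fit(X_lab, y_lab)          # seed model
for _ in range(3):                                      # self-training rounds
    proba = model.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= CONFIDENCE              # keep confident decodes
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    model = LogisticRegression().fit(X_aug, y_aug)      # retrain on the union
```

In the actual LVCSR pipeline, recognizer confidence scores would drive the selection, and the retrained DNN would serve as the feature front-end rather than the final classifier.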