The experts below are selected from a list of 91,608 experts worldwide ranked by the ideXlab platform.
Diego Fernandez-Prieto - One of the best experts on this subject based on the ideXlab platform.
-
Classification of Remote Sensing Images Using Radial Basis Function Neural Networks: A Supervised Training Technique
Remote Sensing, 1998. Co-Authors: Lorenzo Bruzzone, Diego Fernandez-Prieto. Abstract: A supervised technique for training radial basis function (RBF) neural classifiers is proposed. Unlike traditional techniques, it considers the class memberships of training samples when selecting the centers and widths of the kernel functions associated with the hidden neurons of an RBF network. The proposed method has significant advantages over traditional ones in terms of classification accuracy and network stability. Experimental results, carried out on a multisensor remote-sensing data set, confirm the validity of the proposed technique.
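The core idea of the abstract — choosing kernel centers and widths per class rather than by unsupervised clustering over all samples — can be sketched as below. This is a hedged illustration, not the paper's algorithm: the function name, the per-class k-means, and the width rule (mean member-to-center distance) are assumptions for demonstration.

```python
import numpy as np

def class_aware_rbf_params(X, y, centers_per_class=2, rng=None):
    """Pick RBF centers/widths class by class (illustrative sketch).

    Unlike clustering all samples together, each kernel is fitted to
    samples of a single class, so its center and width reflect that
    class's distribution. Names and width rule are assumptions, not
    taken from Bruzzone & Fernandez-Prieto's paper.
    """
    rng = np.random.default_rng(rng)
    centers, widths = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        # naive k-means restricted to samples of class c
        idx = rng.choice(len(Xc), centers_per_class, replace=False)
        mu = Xc[idx].astype(float)
        for _ in range(20):
            d = np.linalg.norm(Xc[:, None] - mu[None], axis=2)
            lab = d.argmin(1)
            for k in range(centers_per_class):
                if (lab == k).any():
                    mu[k] = Xc[lab == k].mean(0)
        for k in range(centers_per_class):
            members = Xc[lab == k]
            centers.append(mu[k])
            # width = mean distance of the cluster's samples to its center
            w = np.linalg.norm(members - mu[k], axis=1).mean() if len(members) else 1.0
            widths.append(w + 1e-6)
    return np.array(centers), np.array(widths)
```

Because every kernel covers one class only, hidden-unit activations stay discriminative, which is the stability/accuracy advantage the abstract claims over class-agnostic center selection.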
Lorenzo Bruzzone
-
Supervised Training Technique for Radial Basis Function Neural Networks
Electronics Letters, 1998. Co-Authors: Lorenzo Bruzzone, D. Fernandez Prieto. Abstract: A novel supervised technique for training classifiers based on radial basis function (RBF) neural networks is presented. Unlike traditional techniques, it considers the class membership of training samples when selecting the centres and widths of the kernel functions associated with the hidden units of an RBF network. Experiments carried out to solve an industrial visual inspection problem confirmed the effectiveness of the proposed technique.
George B Hanna
-
Clinical and Educational Proficiency Gain of Supervised Laparoscopic Colorectal Surgical Trainees
Surgical Endoscopy and Other Interventional Techniques, 2013. Co-Authors: Hugh Mackenzie, Danilo Miskovic, Mark G Coleman, Amjad Parvaiz, A G Acheson, John T Jenkins, John F Griffith, George B Hanna. Abstract: Background: The self-taught learning curve in laparoscopic colorectal surgery (LCS) is between 100 and 150 cases. Supervised training has been shown to shorten the proficiency-gain curve of senior specialist surgeons, but little is known about the learning curve of LCS trainees undergoing mentored training. The aim of this study was to analyze the proficiency-gain curve and clinical outcomes of English surgical trainees during laparoscopic colorectal surgery fellowships.
-
Development, Validation, and Implementation of a Monitoring Tool for Training in Laparoscopic Colorectal Surgery in the English National Training Program
Surgical Endoscopy and Other Interventional Techniques, 2011. Co-Authors: Danilo Miskovic, Susannah M Wyles, F Carter, Mark G Coleman, George B Hanna. Abstract: Introduction: The National Training Program for laparoscopic colorectal surgery (LCS) provides supervised training to colorectal surgeons in England. The purpose of this study was to create, validate, and implement a method for monitoring training progression in laparoscopic colorectal surgery that met the requirements of a good assessment tool.
Jurgen Kurths
-
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
PLOS ONE, 2016. Co-Authors: Xiurui Xie, Guisong Liu, Malu Zhang, Jurgen Kurths. Abstract: Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike-emitting and information-processing mechanisms found in biological cognitive systems motivate the use of hierarchical structure and temporal encoding in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and the temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. Most existing methods for training hierarchical SNNs are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To retain the powerful computational capability of the hierarchical structure and the temporal encoding mechanism, while overcoming the low efficiency of existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are obtained by solving the quadratic function in the spike response model, instead of checking postsynaptic voltage states at all time points as traditional algorithms do. In the feedback weight modification, the computational error is propagated to previous layers by presynaptic spike jitter instead of the gradient descent rule, which enables layer-wise training. Furthermore, the algorithm exploits the mathematical relation between weight variation and voltage-error change, which makes normalization applicable in the weight modification. With these strategies, the algorithm outperforms traditional multilayer SNN algorithms in learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in the paper.
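The feedforward shortcut in the abstract — computing a spike time analytically instead of scanning the membrane voltage at every simulation step — can be sketched as follows. This assumes a locally quadratic spike-response potential V(t) = a·t² + b·t + c; the coefficients and threshold are illustrative inputs, not the paper's exact parameterization.

```python
import math

def earliest_spike_time(a, b, c, theta):
    """Return the earliest t >= 0 with a*t^2 + b*t + c = theta, or None.

    Illustrates NSEBP's feedforward idea: when the spike-response
    potential is (locally) quadratic, the output spike time is the
    smallest non-negative root of a quadratic equation, replacing a
    step-by-step threshold check. Coefficients are assumed inputs.
    """
    A, B, C = a, b, c - theta
    if A == 0:                       # degenerate linear case
        if B == 0:
            return None
        t = -C / B
        return t if t >= 0 else None
    disc = B * B - 4 * A * C
    if disc < 0:
        return None                  # threshold never reached
    r = math.sqrt(disc)
    roots = sorted(((-B - r) / (2 * A), (-B + r) / (2 * A)))
    for t in roots:                  # earliest non-negative crossing
        if t >= 0:
            return t
    return None
```

Replacing the per-time-step voltage check with one closed-form solve per neuron is where the claimed efficiency gain in the feedforward pass comes from.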
Hynek Hermansky
-
Deep Neural Network Features and Semi-Supervised Training for Low-Resource Speech Recognition
International Conference on Acoustics, Speech and Signal Processing, 2013. Co-Authors: Samuel Thomas, Michael L Seltzer, Kenneth Church, Hynek Hermansky. Abstract: We propose a new technique for training deep neural networks (DNNs) as data-driven feature front-ends for large vocabulary continuous speech recognition (LVCSR) in low-resource settings. To circumvent the lack of sufficient training data for acoustic modeling in these scenarios, we use transcribed multilingual data and semi-supervised training to build the proposed feature front-ends. In our experiments, the proposed features provide an absolute improvement of 16% in a low-resource LVCSR setting with only one hour of in-domain training data. While close to three-fourths of these gains come from DNN-based features, the remainder comes from semi-supervised training.
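The semi-supervised recipe underlying the abstract — fit a seed model on the small transcribed set, pseudo-label untranscribed data, keep only confident labels, and refit — can be sketched in miniature. A real system would use a DNN acoustic front-end; the nearest-centroid classifier, confidence rule, and function name below are toy stand-ins, not the paper's method.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, rounds=3, conf=0.8):
    """Self-training loop with a nearest-centroid stand-in classifier.

    Sketch of the semi-supervised idea: grow the training set with
    confidently pseudo-labeled examples and refit. All modeling
    choices here are illustrative assumptions.
    """
    X, y = X_lab.copy(), y_lab.copy()
    classes = np.unique(y_lab)
    for _ in range(rounds):
        # 1) fit seed/current model: one centroid per class
        centroids = np.stack([X[y == c].mean(0) for c in classes])
        # 2) pseudo-label: softmax confidence over negative distances
        d = np.linalg.norm(X_unlab[:, None] - centroids[None], axis=2)
        e = np.exp(-(d - d.min(1, keepdims=True)))   # numerically stable
        p = e / e.sum(1, keepdims=True)
        keep = p.max(1) >= conf                      # 3) keep confident labels
        if not keep.any():
            break
        # 4) refit on labeled + confidently pseudo-labeled data
        X = np.vstack([X_lab, X_unlab[keep]])
        y = np.concatenate([y_lab, classes[p[keep].argmax(1)]])
    return X, y
```

The confidence threshold is the key knob: too low and label noise from the seed model is amplified on each round, too high and the untranscribed data contributes nothing.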