Probabilistic Neural Network


The Experts below are selected from a list of 13194 Experts worldwide ranked by ideXlab platform

A Zaknich - One of the best experts on this subject based on the ideXlab platform.

  • introduction to the modified Probabilistic Neural Network for general signal processing applications
    IEEE Transactions on Signal Processing, 1998
    Co-Authors: A Zaknich
    Abstract:

This paper introduces a practical and easy-to-understand Network for signal processing called the modified Probabilistic Neural Network (MPNN). It begins with a short introduction to the application of artificial Neural Networks to signal processing, followed by a background and review of MPNN theory. The MPNN is a regression technique similar to Specht's (1991) general regression Neural Network, based on a single radial basis function kernel whose bandwidth is related to the noise statistics. It has advantages in time- and spatial-series signal processing problems because it is constructed directly and simply from the characteristics or features of the training signal waveform. An illustrative example involving noisy Doppler-shifted swept-frequency sonar signal detection compares the effectiveness of first- and second-order Volterra, multilayer perceptron Neural Network, radial basis function Neural Network, general regression Neural Network, and MPNN filters, demonstrating some features of the MPNN relevant to practical design.
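The regression core shared by the general regression Neural Network and the MPNN is a kernel-weighted average of the training targets. A minimal one-dimensional sketch (the function name, data, and fixed bandwidth are illustrative; the MPNN's waveform-based construction and noise-related bandwidth selection are not shown):

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """Kernel-weighted average of training targets (Nadaraya-Watson form)."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Noisy samples of y = x on [0, 1]; the estimate smooths toward the trend.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [0.05, 0.2, 0.55, 0.7, 1.0]
print(grnn_predict(0.5, xs, ys))
```

Each training point acts as one kernel; the bandwidth `sigma` plays the role of the smoothing parameter discussed throughout this literature.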

  • a vector quantisation reduction method for the Probabilistic Neural Network
    Proceedings of International Conference on Neural Networks (ICNN'97), 1997
    Co-Authors: A Zaknich
    Abstract:

This paper introduces a vector quantisation method to reduce the size of the Probabilistic Neural Network classifier. It is derived from the modified Probabilistic Neural Network, which was developed as a general regression technique but can also be used for classification. The method is practical and easy to implement, requiring very little computation. It is described and demonstrated using four different sets of classification data.
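Vector quantisation of the pattern layer amounts to replacing a class's many training centres with a few codebook centres. A Lloyd-style one-dimensional sketch (not the paper's exact algorithm; the initialisation and names are assumptions):

```python
def vq_codebook(samples, k, iters=10):
    """Lloyd-style vector quantisation: collapse many 1-D pattern-layer
    centres into k codebook centres."""
    lo, hi = min(samples), max(samples)
    # Spread the initial codebook over the sample range.
    codes = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in samples:
            nearest = min(range(k), key=lambda i: abs(s - codes[i]))
            clusters[nearest].append(s)
        # Move each code to the mean of its cluster (keep it if empty).
        codes = [sum(c) / len(c) if c else codes[i] for i, c in enumerate(clusters)]
    return codes

# 200 pattern centres around two modes collapse to 2 representatives.
data = [0.1 + 0.001 * i for i in range(100)] + [0.9 + 0.001 * i for i in range(100)]
print(vq_codebook(data, 2))
```

The reduced network then evaluates k kernels per class instead of one per training sample, which is where the computational saving comes from.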

Maciej Kusy - One of the best experts on this subject based on the ideXlab platform.

  • sensitivity analysis for Probabilistic Neural Network structure reduction
    IEEE Transactions on Neural Networks, 2018
    Co-Authors: Piotr A. Kowalski, Maciej Kusy
    Abstract:

In this paper, we propose the use of local sensitivity analysis (LSA) for structure simplification of the Probabilistic Neural Network (PNN). Three algorithms are introduced. The first applies LSA to reduce the PNN input layer by selecting significant features of the input patterns. The second utilizes LSA to remove redundant pattern neurons from the Network. The third combines the two and shows how they can work together. A PNN with a product kernel estimator is used, where each multiplicand computes a one-dimensional Cauchy function; the smoothing parameter is therefore calculated separately for each dimension by means of the plug-in method. The classification quality of the reduced and full-structure PNNs is compared. Furthermore, we evaluate the performance of PNNs to which global sensitivity analysis (GSA) and common reduction methods are applied, in both the input layer and the pattern layer. The models are tested on classification problems from eight repository data sets. A 10-fold cross-validation procedure is used to determine the prediction ability of the Networks. Based on the obtained results, it is shown that LSA can be used as an alternative PNN reduction approach.
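The idea behind sensitivity-based input pruning can be conveyed with a finite-difference estimate of local sensitivity (a toy stand-in for the paper's LSA procedure; the model, threshold, and names are hypothetical):

```python
def feature_sensitivities(f, x, eps=1e-4):
    """Finite-difference estimate of a model's local sensitivity to
    each input feature at the point x."""
    base = f(x)
    sens = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps                        # perturb one feature at a time
        sens.append(abs(f(xp) - base) / eps)
    return sens

# A toy model that ignores its third input entirely.
model = lambda v: 3.0 * v[0] + 0.5 * v[1] + 0.0 * v[2]
sens = feature_sensitivities(model, [1.0, 1.0, 1.0])
keep = [i for i, s in enumerate(sens) if s > 0.1]  # prune low-sensitivity inputs
print(keep)  # → [0, 1]
```

Features (or, analogously, pattern neurons) whose perturbation barely moves the output are candidates for removal.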

  • Weighted Probabilistic Neural Network
    Information Sciences, 2018
    Co-Authors: Maciej Kusy, Piotr A. Kowalski
    Abstract:

In this work, a modification of the Probabilistic Neural Network (PNN) is proposed. The traditional Network is adjusted by introducing weight coefficients between the pattern and summation layers. The weights are derived using a sensitivity analysis (SA) procedure. The performance of the weighted PNN (WPNN) is examined in data classification problems on benchmark data sets. The WPNN's efficiency results are compared with those achieved by a modified PNN model put forward in the literature, the original PNN, and selected state-of-the-art classification algorithms: support vector machine, multilayer perceptron, radial basis function Neural Network, k-nearest neighbor method and gene expression programming. All classifiers are compared by computing the prediction accuracy obtained with a k-fold cross-validation procedure. It is shown that in seven out of ten classification cases, the WPNN outperforms both the weighted PNN classifier introduced in the literature and the original model. Furthermore, according to the ranking statistics, the proposed WPNN takes first place among all tested algorithms.
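The structural change is small: the summation layer forms a weighted rather than plain sum of pattern-neuron activations. A one-dimensional sketch (the weight values below are illustrative placeholders; the paper derives them via sensitivity analysis):

```python
import math

def wpnn_scores(x, patterns, weights, sigma=0.3):
    """Weighted-PNN sketch: each class score is a weighted (not plain)
    sum of Gaussian pattern-neuron activations."""
    return {cls: sum(w * math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
                     for c, w in zip(centres, weights[cls]))
            for cls, centres in patterns.items()}

patterns = {"a": [0.0, 0.2], "b": [1.0, 1.2]}
# Illustrative weights; the paper derives them via sensitivity analysis.
weights = {"a": [1.0, 0.6], "b": [0.9, 0.7]}
scores = wpnn_scores(0.1, patterns, weights)
print(max(scores, key=scores.get))  # → a
```

Setting all weights to 1 recovers the original PNN summation layer, which is why the WPNN is a strict generalization.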

  • application of reinforcement learning algorithms for the adaptive computation of the smoothing parameter for Probabilistic Neural Network
    IEEE Transactions on Neural Networks, 2015
    Co-Authors: Maciej Kusy, Roman Zajdel
    Abstract:

In this paper, we propose new methods for the choice and adaptation of the smoothing parameter of the Probabilistic Neural Network (PNN). These methods are based on three reinforcement learning algorithms: $Q(0)$ -learning, $Q(\lambda )$ -learning, and stateless $Q$ -learning. We consider three types of PNN classifier: a model that uses a single smoothing parameter for the whole Network, a model that uses one smoothing parameter per data attribute, and a model with a matrix of smoothing parameters, one for each data variable and data class. Reinforcement learning is applied to find the value of the smoothing parameter that maximizes prediction ability. PNN models with smoothing parameters computed by the proposed algorithms are tested on eight databases by calculating the test error with a cross validation procedure. The results are compared with state-of-the-art methods for PNN training published in the literature to date and, additionally, with a PNN whose sigma is determined by the conjugate gradient approach. The results demonstrate that the proposed approaches can be used as alternative PNN training procedures.
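The stateless variant can be pictured as a two-armed bandit whose actions scale sigma up or down and whose reward is a validation score. A sketch under those assumptions (the toy reward below stands in for cross-validation accuracy; step sizes and learning rates are illustrative, not the paper's):

```python
import random

def tune_sigma(reward, sigma=1.0, episodes=300, alpha=0.2, eps=0.1, seed=0):
    """Stateless Q-learning sketch: two actions scale the PNN smoothing
    parameter sigma; each action's Q-value tracks the reward it earns."""
    rng = random.Random(seed)
    steps = [0.9, 1.1]                      # multiplicative moves on sigma
    q = [0.0, 0.0]
    best_sigma, best_r = sigma, reward(sigma)
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        a = rng.randrange(2) if rng.random() < eps else max(range(2), key=q.__getitem__)
        sigma *= steps[a]
        r = reward(sigma)
        q[a] += alpha * (r - q[a])          # stateless Q-update
        if r > best_r:
            best_sigma, best_r = sigma, r
    return best_sigma

# Toy validation score peaked at sigma = 0.5 (stand-in for CV accuracy).
score = lambda s: 1.0 / (1.0 + (s - 0.5) ** 2)
print(tune_sigma(score))
```

Returning the best sigma seen, rather than the final one, makes the search robust to the walk drifting past the optimum late in training.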

S Gopal - One of the best experts on this subject based on the ideXlab platform.

  • orthogonal least square center selection technique a robust scheme for multiple source partial discharge pattern recognition using radial basis Probabilistic Neural Network
    Expert Systems With Applications, 2011
    Co-Authors: S Venkatesh, S Gopal
    Abstract:

Partial Discharge (PD) pattern recognition has emerged as a subject of vital interest, to personnel handling power system utilities and researchers alike, for the diagnosis of the complex insulation systems of power equipment, since the phenomenon inherently serves as an excellent non-intrusive testing technique. Recently, the focus of researchers has shifted to the recognition of insulation defects due to multiple PD sources, as these are often encountered during real-time PD measurements. A survey of the research literature indicates clearly that the recognition of fully overlapped PD patterns remains an unresolved issue, and that techniques such as the Mixed Weibull Function, Neural Networks (NNs), Wavelet Transformation, etc. have been attempted with only reasonable success. Since most digital online PD acquisition systems record data for a stipulated and considerable duration, as mandated by international standards, the database is large. This poses substantial complexity in classification during the training phase of the NNs. These difficulties may be attributed to ill-conditioned data, the non-Markovian nature of discharges, the curse of dimensionality, etc. Since training methods based on random selection of centers from a large training set of fixed size are found to be relatively insensitive and in many cases detrimental to classification, a Forward Orthogonal Least Squares (FOLS) algorithm is utilized to reduce the number of hidden layer neurons and obtain a parsimonious yet optimal set of centers. This algorithm also obviates the need for a separate clustering method, making the procedure inherently viable for online PD recognition. This research work proposes a novel approach utilizing the Radial Basis Probabilistic Neural Network (RBPNN) with the FOLS center selection algorithm for classification of multiple PD sources.
Exhaustive analysis is carried out to ascertain the efficacy of the proposed RBPNN-FOLS scheme on large training data sets. A detailed comparison with the standard Probabilistic Neural Network (PNN) and the Heteroscedastic PNN (HRPNN) studied in the authors' previous work shows, first, the effectiveness of the FOLS algorithm in obtaining parsimonious centers; second, the capability of the RBPNN model to integrate the advantages of the Radial Basis Function Neural Network (RBFNN) and the PNN in classifying multiple PD sources; and finally, the exceptional capability of the FOLS-RBPNN in discriminating PD sources even under varying applied voltages.
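The flavour of forward centre selection can be conveyed with a greedy residual-fitting sketch (matching-pursuit style, without the orthogonalisation that full OLS performs; data and parameters are hypothetical):

```python
import math

def greedy_centres(train_x, train_y, k, sigma=0.5):
    """Forward greedy centre selection, in the spirit of OLS: at each
    step pick the candidate centre whose basis function best reduces
    the residual, then subtract its fitted contribution."""
    phi = lambda c, x: math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
    residual = list(train_y)
    chosen = []
    for _ in range(k):
        best = None
        for c in train_x:
            if c in chosen:
                continue
            col = [phi(c, x) for x in train_x]
            # Least-squares weight of this single column against the residual.
            w = sum(p * r for p, r in zip(col, residual)) / sum(p * p for p in col)
            err = sum((r - w * p) ** 2 for p, r in zip(col, residual))
            if best is None or err < best[0]:
                best = (err, c, w, col)
        _, c, w, col = best
        chosen.append(c)
        residual = [r - w * p for p, r in zip(col, residual)]
    return chosen

# Two well-separated bumps need only one centre from each cluster.
xs = [0.0, 0.1, 0.2, 2.0, 2.1, 2.2]
ys = [math.exp(-x ** 2 / 0.08) + math.exp(-(x - 2.1) ** 2 / 0.08) for x in xs]
print(greedy_centres(xs, ys, 2))
```

Full FOLS additionally orthogonalises each new column against those already selected and ranks candidates by their error reduction ratio; the greedy loop above keeps only the selection idea.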

  • conception of complex Probabilistic Neural Network system for classification of partial discharge patterns using multifarious inputs
    Expert Systems With Applications, 2005
    Co-Authors: B Karthikeyan, S Gopal, M Vimala
    Abstract:

Pattern recognition has a long history within electrical engineering but has recently become much more widespread as the automated capture of signals and images has become cheaper. Many applications of Neural Networks involve classification, and so fall within the field of pattern recognition. In this paper, we explore how Probabilistic Neural Networks fit into the earlier framework of pattern recognition of partial discharge (PD) patterns, since PD patterns are an important tool for the diagnosis of HV insulation systems. Skilled humans can identify possible insulation defects in various representations of PD data; one of the most widely used representations is the phase-resolved PD (PRPD) pattern. This paper also describes a method for the automated recognition of PRPD patterns using a novel complex Probabilistic Neural Network system for the actual classification task. The efficacy of the composite Neural Network developed using the Probabilistic Neural Network is examined.

Hojjat Adeli - One of the best experts on this subject based on the ideXlab platform.

  • computer aided diagnosis of parkinson s disease using enhanced Probabilistic Neural Network
    Journal of Medical Systems, 2015
    Co-Authors: Thomas J Hirschauer, Hojjat Adeli, John A Buford
    Abstract:

Early and accurate diagnosis of Parkinson's disease (PD) remains challenging. Neuropathological studies using brain bank specimens have estimated that a large percentage of clinical diagnoses of PD may be incorrect, especially in the early stages. In this paper, a comprehensive computer model is presented for the diagnosis of PD based on motor, non-motor, and neuroimaging features using the recently developed enhanced Probabilistic Neural Network (EPNN). The model is tested for differentiating PD patients from those with scans without evidence of dopaminergic deficit (SWEDDs) using the Parkinson's Progression Markers Initiative (PPMI) database, an observational, multi-center study designed to identify PD biomarkers for diagnosis and disease progression. The results are compared to four other commonly used machine learning algorithms: the Probabilistic Neural Network (PNN), support vector machine (SVM), k-nearest neighbors (k-NN) algorithm, and classification tree (CT). The EPNN had the highest classification accuracy at 92.5%, followed by the PNN (91.6%), k-NN (90.8%) and CT (90.2%). The EPNN exhibited an accuracy of 98.6% when classifying healthy controls (HC) versus PD, higher than in any previous study.

  • enhanced Probabilistic Neural Network with local decision circles a robust classifier
    Computer-Aided Engineering, 2010
    Co-Authors: Mehran Ahmadlou, Hojjat Adeli
    Abstract:

In recent years the Probabilistic Neural Network (PNN) has been used in a large number of applications due to its simplicity and efficiency. The PNN assigns the test data to the class with maximum likelihood compared with the other classes. The likelihood of the test data given each training sample is computed in the pattern layer through kernel density estimation using a simple Bayesian rule. The kernel is usually a standard probability density function such as a Gaussian. A spread parameter is used as a global parameter which determines the width of the kernel. The Bayesian rule in the pattern layer estimates the conditional probability of each class given an input vector without considering any probable local densities or heterogeneity in the training data. In this paper, an enhanced and generalized PNN (EPNN) is presented using local decision circles (LDCs) to overcome this shortcoming and improve robustness to noise in the data. Local decision circles enable the EPNN to incorporate local information and non-homogeneity existing in the training population. The circle has a radius which limits the contribution of the local decision. In the conventional PNN the spread parameter can be optimized for maximum classification accuracy. In the proposed EPNN two parameters, the spread parameter and the radius of the local decision circles, are optimized to maximize the performance of the model. Accuracy and robustness of the EPNN are compared with the PNN using three benchmark classification problems (iris, diabetes, and breast cancer data) and five ratios of training data to testing data: 90:10, 80:20, 70:30, 60:40, and 50:50. The EPNN provided the most accurate results consistently for all ratios. Robustness of the PNN and EPNN is investigated using different values of the signal-to-noise ratio (SNR). The accuracy of the EPNN is consistently higher than that of the PNN at all SNR levels and for all ratios of training data to testing data.
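The mechanics described above, including the local-decision-circle restriction, can be sketched in a few lines (one-dimensional, with an illustrative spread parameter; not the authors' implementation):

```python
import math

def epnn_classify(x, classes, sigma=0.4, radius=None):
    """PNN sketch: each class score is the average Gaussian kernel value
    at x over that class's training points. A finite `radius` (the
    local-decision-circle idea) drops contributions from points
    outside the circle around x."""
    scores = {}
    for label, pts in classes.items():
        ks = [math.exp(-((x - p) ** 2) / (2 * sigma ** 2))
              for p in pts if radius is None or abs(x - p) <= radius]
        scores[label] = sum(ks) / len(pts)
    return max(scores, key=scores.get)

classes = {"low": [0.0, 0.1, 0.2, 0.9], "high": [1.0, 1.1, 1.2]}
print(epnn_classify(1.05, classes))              # global kernels
print(epnn_classify(1.05, classes, radius=0.3))  # only local evidence counts
```

With `radius=None` this is the conventional PNN; the EPNN additionally optimizes the radius alongside the spread parameter so that only locally relevant training points vote.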

  • a Probabilistic Neural Network for earthquake magnitude prediction
    Neural Networks, 2009
    Co-Authors: Hojjat Adeli, Ashif Panakkat
    Abstract:

    A Probabilistic Neural Network (PNN) is presented for predicting the magnitude of the largest earthquake in a pre-defined future time period in a seismic region using eight mathematically computed parameters known as seismicity indicators. The indicators considered are the time elapsed during a particular number (n) of significant seismic events before the month in question, the slope of the Gutenberg-Richter inverse power law curve for the n events, the mean square deviation about the regression line based on the Gutenberg-Richter inverse power law for the n events, the average magnitude of the last n events, the difference between the observed maximum magnitude among the last n events and that expected through the Gutenberg-Richter relationship known as the magnitude deficit, the rate of square root of seismic energy released during the n events, the mean time or period between characteristic events, and the coefficient of variation of the mean time. Prediction accuracies of the model are evaluated using three different statistical measures: the probability of detection, the false alarm ratio, and the true skill score or R score. The PNN model is trained and tested using data for the Southern California region. The model yields good prediction accuracies for earthquakes of magnitude between 4.5 and 6.0. The PNN model presented in this paper complements the recurrent Neural Network model developed by the authors previously, where good results were reported for predicting earthquakes with magnitude greater than 6.0.
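One of the listed indicators, the slope of the Gutenberg-Richter curve (the b-value), is a least-squares fit to the cumulative frequency-magnitude relation. A sketch on a synthetic catalogue constructed so the fit is exact (the bin choice and data are illustrative):

```python
import math

def gr_slope(magnitudes, bins):
    """Least-squares fit of log10 N(>=M) = a - b*M over magnitude bins;
    returns the b-value (the negated slope)."""
    pts = []
    for m in bins:
        n = sum(1 for mag in magnitudes if mag >= m)
        if n > 0:
            pts.append((m, math.log10(n)))
    k = len(pts)
    sx = sum(m for m, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(m * m for m, _ in pts)
    sxy = sum(m * y for m, y in pts)
    return -(k * sxy - sx * sy) / (k * sxx - sx * sx)

# Synthetic catalogue built so that N(>=1)=100, N(>=2)=10, N(>=3)=1,
# i.e. log10 N = 3 - 1.0*M, giving a b-value of exactly 1.0.
mags = [1.0] * 90 + [2.0] * 9 + [3.0]
print(gr_slope(mags, [1.0, 2.0, 3.0]))  # → 1.0
```

The remaining indicators (magnitude deficit, mean inter-event time, etc.) are similarly simple statistics computed over the last n events before the month in question.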

Xinhua Liu - One of the best experts on this subject based on the ideXlab platform.

  • a cutting pattern recognition method for shearers based on improved ensemble empirical mode decomposition and a Probabilistic Neural Network
    Sensors, 2015
    Co-Authors: Zhongbin Wang, Chao Tan, Xinhua Liu
    Abstract:

In order to guarantee the stable operation of shearers and promote the construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and the Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of traditional detectors: large size, contact measurement, and low identification rates. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is performed on the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlations of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and a PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
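The correlation-based IMF screening step can be sketched as follows (the keep-above-average threshold is an assumption, not necessarily the paper's exact rule; the signals are synthetic):

```python
import math

def pearson(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def select_imfs(imfs):
    """Keep the IMFs whose |correlation| with the first IMF exceeds the
    average |correlation| (threshold choice is an assumption)."""
    rs = [abs(pearson(imfs[0], imf)) for imf in imfs[1:]]
    thresh = sum(rs) / len(rs)
    return [i + 1 for i, r in enumerate(rs) if r >= thresh]

# Synthetic IMFs: index 1 is a scaled copy of the reference, index 2 is
# an orthogonal sinusoid, so only index 1 survives the screening.
t = range(64)
imf0 = [math.sin(2 * math.pi * 16 * x / 64) for x in t]
imfs = [imf0,
        [0.8 * v for v in imf0],
        [math.sin(2 * math.pi * 4 * x / 64) for x in t]]
print(select_imfs(imfs))  # → [1]
```

The energy and standard deviation of the surviving IMFs would then form the feature vector fed to the PNN classifier.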