Orthonormal Function

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 99 Experts worldwide, ranked by the ideXlab platform

Junghui Chen - One of the best experts on this subject based on the ideXlab platform.

  • Optimal Batch Trajectory Design Based on an Intelligent Data-Driven Method
    Industrial & Engineering Chemistry Research, 2003
    Co-Authors: Junghui Chen, Rong-guey Sheui
    Abstract:

    The goal of this paper is to extend recently developed work [Chen, J.; Sheui, R.-G. Ind. Eng. Chem. Res. 2002, 41 (9), 2226] to design the optimal trajectory for a batch process on the basis of an intelligent data-driven experimental design scheme. This method integrates Orthonormal Function approximation with soft-computing techniques. The continuous batch trajectories of process measurements represented by a set of Orthonormal Functions are mapped onto a finite number of coefficients in the Function space. These coefficients capture the trajectory behavior of the process measurements. The optimal trajectory can be obtained as long as the locations of the coefficients are properly adjusted. An adjustment algorithm combining a neural network with a genetic algorithm is developed to search for optimal coefficients. The neural network is used to identify the relationship between the coefficients and the objective Function and to predict the quality response. Once a suitable neural-network model is obtained,...
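
As a hedged illustration (not the authors' code), the core step of the paper, mapping a continuous batch trajectory onto a finite coefficient vector in an Orthonormal Function space, can be sketched with an orthonormal Legendre basis. The trajectory shape, basis size, and quadrature scheme below are all illustrative assumptions:

```python
import numpy as np

def legendre_orthonormal_basis(t, n_basis):
    """First n_basis Legendre polynomials, normalized to unit L2 norm on [-1, 1]."""
    basis = np.empty((n_basis, t.size))
    for n in range(n_basis):
        c = np.zeros(n + 1)
        c[n] = 1.0
        basis[n] = np.polynomial.legendre.legval(t, c) * np.sqrt((2 * n + 1) / 2)
    return basis

# Hypothetical batch trajectory, sampled over a batch time rescaled to [-1, 1]
t = np.linspace(-1, 1, 201)
trajectory = np.exp(-2 * t) * np.sin(3 * t)

basis = legendre_orthonormal_basis(t, 8)

# Trapezoidal quadrature weights for the inner products <trajectory, phi_n>
w = np.full(t.size, t[1] - t[0])
w[0] = w[-1] = w[0] / 2
coeffs = basis @ (trajectory * w)      # the finite coefficient vector

reconstruction = coeffs @ basis        # trajectory recovered from the coefficients
print(np.max(np.abs(trajectory - reconstruction)))
```

The coefficient vector `coeffs` is the low-dimensional object a search procedure such as the paper's neural-network/genetic-algorithm combination would adjust; any candidate coefficient vector maps back to a full trajectory via `coeffs @ basis`.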

  • Post analysis on different operating time processes using Orthonormal Function approximation and multiway principal component analysis
    Journal of Process Control, 2000
    Co-Authors: Junghui Chen
    Abstract:

    In most batch process operations, operators often need to adjust the operating time of each batch run to obtain the desired product quality, since the input specifications provided differ. The proposed method combines Orthonormal Function approximation with multiway principal component analysis (MPCA). It is used to analyze and monitor batch processes with different operating times. Like the philosophy of statistical process control in traditional MPCA, this method leads to simple monitoring charts, easy tracking of the progress of each batch run, and monitoring of the occurrence of observable upsets. The only information needed to exploit the procedure is the historical data collected from past successful batches. The methodology has been applied to two examples, a batch chemical reactor and a wafer plasma etching process, to illustrate its general use.
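
A minimal sketch of the idea, under illustrative assumptions (synthetic batch data, a 6-term Legendre basis, a two-component projection): each batch of a different duration is first mapped to a fixed-size Orthonormal Function coefficient vector, after which ordinary PCA applies even though the raw batches have unequal lengths.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_coefficients(y, n_basis):
    """Map a trajectory of arbitrary length to a fixed number of orthonormal
    Legendre coefficients, so batches of different durations become comparable."""
    t = np.linspace(-1, 1, y.size)
    B = np.empty((n_basis, t.size))
    for n in range(n_basis):
        c = np.zeros(n + 1)
        c[n] = 1.0
        B[n] = np.polynomial.legendre.legval(t, c) * np.sqrt((2 * n + 1) / 2)
    w = np.full(t.size, t[1] - t[0])
    w[0] = w[-1] = w[0] / 2
    return B @ (y * w)

# Hypothetical historical batches with different operating times (lengths)
lengths = rng.integers(80, 160, size=20)
batches = [np.sin(np.linspace(0, np.pi, m)) + 0.01 * rng.standard_normal(m)
           for m in lengths]

X = np.vstack([to_coefficients(y, 6) for y in batches])  # 20 x 6 coefficient matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)        # PCA in coefficient space
scores = Xc @ Vt[:2].T                                   # monitoring scores (2 PCs)
print(scores.shape)
```

In a monitoring application the scores (and the residuals left after the retained components) would feed the usual T-squared and squared-prediction-error control charts.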

  • Process Monitoring Using Principal Component Analysis in Different Operating Time Processes
    IFAC Proceedings Volumes, 1999
    Co-Authors: Junghui Chen
    Abstract:

    Based on principal component analysis, a method for multivariate statistical process control is developed to analyze and monitor batch processes with different operating times. In most batch process operations, operators often need to adjust the operating time of each batch run to obtain the desired product quality, since the input specifications provided differ. The proposed method combines Orthonormal Function approximation with multiway principal component analysis (MPCA). The only information needed to exploit the procedure is the historical data collected from past successful batches. Like the philosophy of statistical process control in traditional MPCA, this method leads to simple monitoring charts, easy tracking of the progress of each batch run, and monitoring of the occurrence of observable upsets. Two examples illustrate the effectiveness of the proposed methodology.

B Mulgrew - One of the best experts on this subject based on the ideXlab platform.

  • Orthonormal Function neural network for nonlinear system modeling
    Proceedings of International Conference on Neural Networks (ICNN'96), 1996
    Co-Authors: I Scott, B Mulgrew
    Abstract:

    Nonlinear system identification is often solved by determining a set of coefficients for a finite number of fixed nonlinear basis Functions. However, if the input data are drawn from a high-dimensional space, the number of required basis Functions grows exponentially with dimension, which has led many authors to consider subset model selection techniques. In this paper we describe a one-hidden-layer neural network which employs a set of signal-independent Orthonormal expansions and a scaling derived from an estimate of the vector probability density Function of the input data. The Orthonormality of the basis Functions allows the contribution of each basis Function to the model to be calculated independently, since each contribution is directly related to the magnitude of the corresponding weight in the output layer. The resulting neural network retains the desirable linear-in-the-parameters nature of radial basis Function neural networks.
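
A minimal sketch of the linear-in-the-parameters structure, under stated assumptions: a fixed orthonormal Legendre expansion plays the role of the hidden layer, output weights are fit by least squares, and the pdf-derived scaling stage described in the abstract is omitted for brevity. The toy system and basis size are illustrative, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def hidden_layer(x, n_basis):
    """Fixed hidden layer: orthonormal Legendre functions evaluated at the inputs."""
    H = np.empty((x.size, n_basis))
    for n in range(n_basis):
        c = np.zeros(n + 1)
        c[n] = 1.0
        H[:, n] = np.polynomial.legendre.legval(x, c) * np.sqrt((2 * n + 1) / 2)
    return H

# Toy stand-in for an unknown nonlinear system
x = rng.uniform(-1, 1, 500)
y = np.tanh(2 * x) + 0.01 * rng.standard_normal(x.size)

H = hidden_layer(x, 10)
w, *_ = np.linalg.lstsq(H, y, rcond=None)   # linear-in-the-parameters output weights

# Orthonormality means each |w[n]| directly ranks that basis function's
# contribution, so model order reduction amounts to zeroing the smallest weights.
ranking = np.argsort(np.abs(w))[::-1]
mse = np.mean((H @ w - y) ** 2)
print(ranking, mse)
```

With a non-orthogonal basis the weights would be correlated and could not be pruned independently; orthonormality is what makes `ranking` a valid subset-selection order.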

I Scott - One of the best experts on this subject based on the ideXlab platform.

  • Orthonormal Function neural network for nonlinear system modeling
    Proceedings of International Conference on Neural Networks (ICNN'96), 1996
    Co-Authors: I Scott, B Mulgrew
    Abstract:

    Nonlinear system identification is often solved by determining a set of coefficients for a finite number of fixed nonlinear basis Functions. However, if the input data are drawn from a high-dimensional space, the number of required basis Functions grows exponentially with dimension, which has led many authors to consider subset model selection techniques. In this paper we describe a one-hidden-layer neural network which employs a set of signal-independent Orthonormal expansions and a scaling derived from an estimate of the vector probability density Function of the input data. The Orthonormality of the basis Functions allows the contribution of each basis Function to the model to be calculated independently, since each contribution is directly related to the magnitude of the corresponding weight in the output layer. The resulting neural network retains the desirable linear-in-the-parameters nature of radial basis Function neural networks.

Yinliang Zhao - One of the best experts on this subject based on the ideXlab platform.

  • ISNN (1) - Least squares support vector machine on gaussian wavelet kernel Function set
    Advances in Neural Networks - ISNN 2006, 2006
    Co-Authors: Yinliang Zhao
    Abstract:

    The kernel Function of a support vector machine (SVM) is an important factor in the learning result of the SVM. Based on the wavelet decomposition and the conditions of the support vector kernel Function, a Gaussian wavelet kernel Function set for SVM is proposed. Each of these kernel Functions is a kind of Orthonormal Function, and it can simulate almost any curve in quadratic continuous integral space, thus enhancing the generalization ability of the SVM. According to the wavelet kernel Function and regularization theory, least squares support vector machine on the Gaussian wavelet kernel Function set (LS-GWSVM) is proposed to greatly simplify the solving process of GWSVM. The LS-GWSVM is then applied to regression analysis and classification. Experimental results show that regression precision is improved by LS-GWSVM, compared with LS-SVM whose kernel Function is the Gaussian Function.
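
A hedged sketch of the idea: the paper's exact kernel set is not reproduced here; the code below assumes the second-derivative-of-Gaussian ("Mexican hat") wavelet, one member of the Gaussian-derivative wavelet family, as the mother wavelet, and uses the standard LS-SVM linear system in place of the usual SVM quadratic program. Data and hyperparameters are illustrative.

```python
import numpy as np

def gaussian_wavelet_kernel(X, Z, a=1.0):
    """Product-form wavelet kernel built from the Mexican hat mother wavelet
    psi(u) = (1 - u^2) exp(-u^2 / 2), applied per input dimension."""
    K = np.ones((X.shape[0], Z.shape[0]))
    for d in range(X.shape[1]):
        U = (X[:, d, None] - Z[None, :, d]) / a
        K *= (1 - U**2) * np.exp(-U**2 / 2)
    return K

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, (80, 1))
y = np.sinc(X[:, 0])                     # toy regression target

# LS-SVM: equality constraints turn training into one linear system
gamma = 100.0
n = X.shape[0]
K = gaussian_wavelet_kernel(X, X)
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)),  K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

pred = K @ alpha + b                     # predictions on the training inputs
print(np.mean((pred - y) ** 2))
```

The Mexican hat wavelet has a nonnegative Fourier transform, so the resulting translation-invariant kernel is positive definite and the bordered system above is nonsingular.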

  • Least Squares Support Vector Machine on Morlet Wavelet kernel Function
    2005 International Conference on Neural Networks and Brain
    Co-Authors: Yinliang Zhao
    Abstract:

    Based on the wavelet decomposition and the conditions of the support vector kernel Function, a Morlet wavelet kernel Function for the support vector machine (SVM) is proposed, which is a kind of approximately Orthonormal Function. This kernel Function can simulate almost any curve in quadratic continuous integral space, thus enhancing the generalization ability of the SVM. According to the wavelet kernel Function and regularization theory, least squares support vector machine on the Morlet wavelet kernel Function (LS-MWSVM) is proposed to simplify the solving process of MWSVM. The LS-MWSVM is then applied to regression analysis. Experimental results show that regression precision is improved by LS-MWSVM, compared with LS-SVM whose kernel Function is the Gaussian Function under the same conditions.
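
For illustration, the Morlet wavelet kernel commonly used in this line of work takes the product form below (the modulation constant 1.75 is the conventional choice, assumed here rather than taken from this paper). A quick numerical check confirms the admissibility requirement for an SVM kernel, positive semidefiniteness of the Gram matrix:

```python
import numpy as np

def morlet_kernel(X, Z, a=1.0):
    """Product-form Morlet wavelet kernel: cos(1.75 u) exp(-u^2 / 2) per dimension."""
    K = np.ones((X.shape[0], Z.shape[0]))
    for d in range(X.shape[1]):
        U = (X[:, d, None] - Z[None, :, d]) / a
        K *= np.cos(1.75 * U) * np.exp(-U**2 / 2)
    return K

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, (50, 2))
K = morlet_kernel(X, X)
# An admissible (Mercer) kernel must give a positive semidefinite Gram matrix
print(np.linalg.eigvalsh(K).min())
```

Positive semidefiniteness follows because the Fourier transform of cos(1.75 u) exp(-u^2 / 2) is a sum of two shifted Gaussians and hence nonnegative; this kernel can be dropped into the same LS-SVM linear system used for any other positive definite kernel.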

Bernard Mulgrew - One of the best experts on this subject based on the ideXlab platform.

  • ICASSP (3) - Orthonormal Functions for nonlinear signal processing and adaptive filtering
    Proceedings of ICASSP '94. IEEE International Conference on Acoustics Speech and Signal Processing, 1994
    Co-Authors: Bernard Mulgrew
    Abstract:

    A systematic approach to constructing a nonlinear adaptive filter is presented. The approach is based on a signal-dependent Orthonormal expansion implemented in two stages: (i) a signal-independent standard Orthonormal expansion; (ii) scaling using an estimate of the vector probability density Function (pdf). Further, it is demonstrated that the standard Orthonormal Function set can also provide an estimate of the pdf when used in conjunction with an inverse Fourier transform. The Orthonormality has two implications for adaptive filtering: (i) model order reduction is trivial, because the size of a coefficient in the final linear combiner is directly related to its contribution to the overall mean squared error; (ii) consistent, rapid convergence of stochastic gradient algorithms is assured. A typical nonlinear adaptive algorithm is presented.
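
A minimal sketch of the adaptive-filtering implication, under stated assumptions: stage (i) is approximated by an orthonormal Legendre expansion, the pdf-scaling stage (ii) is omitted, and a plain LMS stochastic-gradient update adapts the linear combiner. Because the expansion is orthonormal and the toy input is uniform on [-1, 1], the input correlation matrix of the combiner is a scaled identity, which is why the rapid, consistent convergence claimed in the abstract holds here.

```python
import numpy as np

rng = np.random.default_rng(4)

def ortho_basis(x, n_basis):
    """Signal-independent stage: orthonormal Legendre expansion of the input."""
    out = np.empty((x.size, n_basis))
    for n in range(n_basis):
        c = np.zeros(n + 1)
        c[n] = 1.0
        out[:, n] = np.polynomial.legendre.legval(x, c) * np.sqrt((2 * n + 1) / 2)
    return out

# Toy nonlinear system to identify
x = rng.uniform(-1, 1, 4000)
d = x - 0.5 * x**3 + 0.005 * rng.standard_normal(x.size)

Phi = ortho_basis(x, 6)
w = np.zeros(6)
mu = 0.05                                # step size chosen for LMS stability
for phi_n, d_n in zip(Phi, d):           # stochastic-gradient linear combiner
    e = d_n - phi_n @ w
    w += mu * e * phi_n

print(np.abs(w))                         # small weights flag prunable basis functions
```

Model order reduction then reduces to inspecting `np.abs(w)`: discarding the terms with near-zero weights removes exactly their (independent) contribution to the mean squared error.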