Approximation of Function
The Experts below are selected from a list of 66 Experts worldwide ranked by ideXlab platform
T Trancong – 1st expert on this subject based on the ideXlab platform

Approximation of function and its derivatives using radial basis function networks
Applied Mathematical Modelling, 2003. Co-authors: N Maiduy, T Trancong.
Abstract: This paper presents a numerical approach, based on radial basis function networks (RBFNs), for the approximation of a function and its derivatives (scattered data interpolation). The approach proposed here, called the indirect radial basis function network (IRBFN) approximation, is compared with the usual direct approach. In the direct method (DRBFN), the closed-form RBFN approximant is first obtained from a set of training points, and the derivative functions are then calculated directly by differentiating that closed form. In the indirect method (IRBFN), the formulation starts with the decomposition of a derivative of the function into RBFs. The derivative expression is then integrated to yield an expression for the original function, which is solved via the general linear least-squares principle, given an appropriate set of discrete data points. The IRBFN method filters noise arising from the interpolation of the original function from a discrete set of data points and produces a greatly improved approximation of its derivatives. In both cases the input data consist of a set of unstructured discrete data points (function values), which eliminates the need to discretise the domain into finite elements. The results are compared with those obtained by the feed-forward neural network approach, where appropriate, and by finite element methods. In all examples considered, the IRBFN approach yields superior accuracy. For example, all partial derivatives up to second order of the function of three variables y = x₁² + x₁x₂ − 2x₂² − x₂x₃ + x₃² are approximated at least an order of magnitude more accurately in the L₂ norm than with the usual DRBFN approach.
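The direct/indirect distinction described in the abstract can be sketched in one dimension. The following is our own minimal illustration, not the paper's exact formulation: the shape parameter, centre placement, and test function sin(x) are all ad hoc choices. DRBFN fits the function values and then differentiates the closed-form multiquadric expansion; IRBFN expands the derivative in multiquadrics, integrates the expansion analytically (picking up an integration constant), fits the function values, and reads the derivative off the original RBF expansion.

```python
import numpy as np

A_SHAPE = 0.8  # multiquadric shape parameter (chosen ad hoc)

def mq(x, c):
    """Multiquadric basis phi(r) = sqrt(r^2 + a^2), r = x - c."""
    return np.sqrt((x - c)**2 + A_SHAPE**2)

def mq_dx(x, c):
    """Derivative of the multiquadric with respect to x."""
    return (x - c) / np.sqrt((x - c)**2 + A_SHAPE**2)

def mq_int(x, c):
    """Antiderivative of the multiquadric with respect to x."""
    r = x - c
    return 0.5 * r * np.sqrt(r**2 + A_SHAPE**2) \
        + 0.5 * A_SHAPE**2 * np.arcsinh(r / A_SHAPE)

# scattered, slightly noisy samples of f(x) = sin(x)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 40)
y = np.sin(x) + 1e-3 * rng.standard_normal(x.size)
X = x[:, None]
C = np.linspace(0.0, 2.0 * np.pi, 15)[None, :]  # RBF centres

# DRBFN: least-squares fit of f, then differentiate the expansion
w_d = np.linalg.lstsq(mq(X, C), y, rcond=None)[0]
df_direct = mq_dx(X, C) @ w_d

# IRBFN: expand f' in RBFs, integrate (extra column carries the
# integration constant), fit f; f' is then the RBF expansion itself
G = np.hstack([mq_int(X, C), np.ones((x.size, 1))])
coef = np.linalg.lstsq(G, y, rcond=None)[0]
df_indirect = mq(X, C) @ coef[:-1]

err_d = np.linalg.norm(df_direct - np.cos(x))
err_i = np.linalg.norm(df_indirect - np.cos(x))
print(err_d, err_i)
```

The integration step is what the abstract credits for noise filtering: differentiation amplifies high-frequency noise in the fitted expansion, while integration damps it. This toy script only prints both L₂ derivative errors; the order-of-magnitude gap reported in the paper refers to its own multivariate test problems.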
N Maiduy – 2nd expert on this subject based on the ideXlab platform

Approximation of function and its derivatives using radial basis function networks
Applied Mathematical Modelling, 2003. Co-authors: N Maiduy, T Trancong. (Abstract identical to the entry above.)
Huang Yaping – 3rd expert on this subject based on the ideXlab platform

Numerical solution of differential equations by radial basis function neural networks
Proceedings of the 2002 International Joint Conference on Neural Networks, IJCNN'02 (Cat. No.02CH37290), 2002. Co-authors: Li Jianyu, Luo Siwei, Qi Yingjian, Huang Yaping.
Abstract: In this paper we present a method for solving linear ordinary differential equations (ODEs) based on multiquadric (MQ) radial basis function networks (RBFNs). Following the idea of approximating a function and/or its derivatives with radial basis function networks, new RBFN approximation procedures are developed for solving ODEs. This technique determines all the parameters at once, without a learning process. Its advantage is that it does not require a large set of training data; it relies only on the domain and its boundary. The resulting solutions are more accurate.
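The "all parameters at once, without a learning process" claim amounts to collocation: substituting the RBF expansion into the ODE at a set of points turns the problem into one linear solve. A minimal sketch, assuming a hypothetical test problem u'(x) + u(x) = 0 on [0, 1] with u(0) = 1 (exact solution e^(-x)); the shape parameter and point count are our own choices, not values from the paper.

```python
import numpy as np

A_SHAPE = 0.1  # multiquadric shape parameter (chosen ad hoc)

def mq(x, c):
    """Multiquadric basis phi(r) = sqrt(r^2 + a^2), r = x - c."""
    return np.sqrt((x - c)**2 + A_SHAPE**2)

def mq_dx(x, c):
    """Derivative of the multiquadric with respect to x."""
    return (x - c) / np.sqrt((x - c)**2 + A_SHAPE**2)

# solve u' + u = 0 on [0, 1] with u(0) = 1; exact solution exp(-x)
n = 25
x = np.linspace(0.0, 1.0, n)  # collocation points
c = x                          # centres coincide with collocation points
X, C = x[:, None], c[None, :]

# each row enforces the ODE at one point: sum_i w_i (phi' + phi) = 0
A = mq_dx(X, C) + mq(X, C)
b = np.zeros(n)
A[0] = mq(x[0], c)             # first row replaced by the BC u(0) = 1
b[0] = 1.0

w = np.linalg.solve(A, b)      # all parameters from one linear solve
u = mq(X, C) @ w
err = np.max(np.abs(u - np.exp(-x)))
print(err)
```

No iterative training occurs: the weights come straight from the collocation system, and only the domain points and the boundary condition enter, which matches the abstract's claim that the method relies only on the domain and its boundary.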