Orthonormal Polynomial

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 231 Experts worldwide ranked by ideXlab platform

Nina B. Bogdanova - One of the best experts on this subject based on the ideXlab platform.

  • Orthonormal Polynomial Expansion of Different Types of Silver Nanoparticles Spectroscopic Data
    Physics of Particles and Nuclei Letters, 2019
    Co-Authors: Nina B. Bogdanova, M. Koleva
    Abstract:

    Our Orthonormal Polynomial Expansion Method (OPEM), in its one-dimensional version, is applied to describe original spectroscopic data from different types of silver nanoparticles. We construct orthogonal (Orthonormal) Polynomials to represent the curves. Error corridors, built from the errors of the given data via the weights, constrain the optimal behavior of the sought curve. The experiments yielded several curves with thousands of points, of which two are most important: those obtained with ten and with five laser pulses. In this way the paper continues and generalizes the investigation of our previous paper. We have chosen subintervals in the two curves and investigated the most important subintervals of the spectral data, in which the minimum (Surface Plasmon Resonance Absorption) is sought. This study describes Ag nanoparticles produced by a laser approach in a ZnO medium, forming an AgNPs/ZnO nanocomposite heterostructure.
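
The weighted expansion described here can be sketched in a few lines: orthonormalize the monomials under a weighted inner product whose weights come from the per-point errors, after which the expansion coefficients are plain inner products. This is a minimal illustrative sketch under that reading of OPEM, not the authors' implementation; the data and error bars are invented.

```python
import numpy as np

def weighted_orthonormal_basis(x, w, degree):
    """Gram-Schmidt orthonormalization of 1, x, x^2, ... under the
    discrete weighted inner product <f, g> = sum_i w_i f(x_i) g(x_i)."""
    basis = []            # each entry: values of one orthonormal polynomial at x
    for k in range(degree + 1):
        p = x ** k
        for q in basis:   # remove components along previous basis polynomials
            p = p - np.sum(w * p * q) * q
        p = p / np.sqrt(np.sum(w * p * p))   # normalize
        basis.append(p)
    return np.array(basis)

def opem_fit(x, y, sigma, degree):
    """Weighted expansion: coefficients are plain inner products because
    the basis is orthonormal -- no normal-equation solve is needed."""
    w = 1.0 / sigma ** 2
    w = w / w.sum()                      # normalized weights from error bars
    basis = weighted_orthonormal_basis(x, w, degree)
    coeffs = basis @ (w * y)             # c_k = <y, p_k>_w
    return coeffs @ basis                # fitted values at x

# toy data with per-point error bars (invented)
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
sigma = np.full_like(x, 0.05)
y = 1.0 + 2.0 * x - 0.5 * x ** 2 + rng.normal(0.0, sigma)
fit = opem_fit(x, y, sigma, degree=2)
print(np.max(np.abs(fit - (1.0 + 2.0 * x - 0.5 * x ** 2))))  # small residual
```

Because the basis is orthonormal under the data-dependent weights, adding a higher-degree term never changes the coefficients already computed, which is the practical appeal of this family of methods.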

  • Orthonormal Polynomial Approximation of Mineral Water Data with Errors in Both Variables
    arXiv: Computational Physics, 2012
    Co-Authors: Nina B. Bogdanova, Stefan Todorov
    Abstract:

    In this paper we introduce data from a mineral water probe with errors in both variables. For this case we apply our Orthonormal Polynomial Expansion Method (OPEM) to describe the data within the new error corridor. The method yields the approximating curves and their derivatives, including the errors, via a weighting approach. The numerical method and approximation results are presented and discussed. Special criteria are derived for the Orthonormal expansion and for the usual expansion evaluated from it. The numerical results are shown in tables and figures.

  • A New Version of Orthonormal Polynomial Expansion Method
    AIP Conference Proceedings, 2007
    Co-Authors: Nina B. Bogdanova
    Abstract:

    The proposed version of our Orthonormal Polynomial Expansion Method combines two new features: including the errors in the independent variable with the help of a new total variance, and using the usual coefficients to calculate the approximating values.
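
The "total variance" idea, folding the error in the independent variable into an effective y-error through the local slope of the curve, can be sketched generically as follows. This is a standard effective-variance illustration with invented data, not the authors' exact formulation, and it uses NumPy's ordinary weighted polynomial fit rather than an orthonormal expansion.

```python
import numpy as np

# Effective ("total") variance for errors in both variables: combine the
# y-error with the x-error propagated through the local slope dy/dx,
# then refit with weights 1/sigma_total.
rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 40)
sx = np.full_like(x, 0.01)                     # x errors (invented)
sy = np.full_like(x, 0.05)                     # y errors (invented)
y = 1.0 + 3.0 * x - x ** 2 + rng.normal(0.0, sy)

coef = np.polyfit(x, y, 2)                     # unweighted first pass
slope = np.polyval(np.polyder(coef), x)        # local dy/dx estimate
s_tot = np.sqrt(sy ** 2 + (slope * sx) ** 2)   # total variance per point
coef_w = np.polyfit(x, y, 2, w=1.0 / s_tot)    # refit with combined weights
print(coef_w)
```

Note that `np.polyfit` expects weights of the form 1/sigma (not 1/sigma squared), so the combined standard deviation is passed directly.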

  • Thermometric characteristics approximation of germanium film temperature microsensors by Orthonormal Polynomials
    Review of Scientific Instruments, 2005
    Co-Authors: Nina B. Bogdanova, B. M. Terziyska, H. Madge
    Abstract:

    Approximations of thermometric characteristics of germanium film temperature microsensors are presented using a mathematical approach based on their expansion with Orthonormal Polynomials. A weighted Orthonormal Polynomial expansion method (OPEM) is applied, involving the experimental errors of calibration test data at every point. The thermometric functions R(T) and T(R) of resistance and temperature are described in the whole temperature range (1.7–300 K) and in three subintervals (1.7–20, 20–150, and 150–300 K). The absolute, relative, and specific sensitivities of the sensor, as well as the main approximation characteristics, are discussed. The OPEM is extended to obtain a mathematical description of R(T) and T(R) functions by usual Polynomial coefficients calculated by Orthonormal ones. A comparison between maximal relative deviations of R(T) and T(R) three-interval approximations, correspondingly, by usual and Orthonormal Polynomials, is presented. Numerical results of the approximation parameters of ...
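
The step of re-expressing an orthogonal-basis fit through usual polynomial coefficients has a convenient analogue in NumPy, sketched below with Chebyshev polynomials standing in for the paper's Orthonormal family; the temperature grid and R(T)-like curve are invented for illustration.

```python
import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

# Fit in a numerically stable orthogonal basis, then convert the result
# to an ordinary power-series representation for compact reporting --
# the same two-step pattern as OPEM's "usual" coefficients.
x = np.linspace(1.7, 300.0, 400)          # temperature grid, K (hypothetical)
y = np.log(1.0 + 1000.0 / x)              # R(T)-like monotone curve (hypothetical)

cheb_fit = Chebyshev.fit(x, y, deg=6)     # stable orthogonal-basis fit
poly = cheb_fit.convert(kind=Polynomial)  # same function, power-series form
print(poly.coef)                          # "usual" coefficients
```

Fitting directly in the monomial basis on a wide grid like this is ill-conditioned; fitting in the orthogonal basis first and converting afterwards sidesteps that, which is the point the abstract makes.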

  • A description of cryogenic temperature sensor characteristics by the weighted Orthonormal Polynomial expansion method: Germanium and platinum thermometer calibration test data approximation
    Review of Scientific Instruments, 1996
    Co-Authors: Nina B. Bogdanova, Bonka M. Terzijska
    Abstract:

    We consider a new approximation of the T=f(R) and R=g(T) functions for germanium (GRT) and platinum (PRT) cryogenic resistance thermometers from 1.2 to 85 K and from 14 to 325 K, respectively, using the weighted Orthonormal Polynomial expansion method (OPEM). The main feature of this description is a newly developed type of weighting function tied to the specific material characteristics of the investigated cryogenic resistance temperature sensors. The OPEM fitting errors for the GRT and PRT characteristics over the whole useful temperature ranges demonstrate the promise of our method in cryogenic thermometry. The proposed OPEM description of the R and T functions of the investigated thermoresistors supports our algorithm with respect to both the calibration of GRT and PRT sensors and the automation of thermal measurements at low temperatures. The demonstrated values of the OPEM fitting errors with the R function approximation as well as the evaluation of the fi...

Tom Oomen - One of the best experts on this subject based on the ideXlab platform.

  • CDC - Numerically Reliable Identification of Fast Sampled Systems: A Novel δ-Domain Data-Dependent Orthonormal Polynomial Approach
    2018 IEEE Conference on Decision and Control (CDC), 2018
    Co-Authors: R.j. Voorhoeve, Tom Oomen
    Abstract:

    The practical utility of system identification algorithms is often limited by the reliability of their implementation in finite precision arithmetic. The aim of this paper is to develop a method for the numerically reliable identification of fast-sampled systems. In this paper, a data-dependent Orthonormal Polynomial approach is developed for systems parametrized in the δ-domain. This effectively addresses both the numerical conditioning issues encountered in frequency-domain system identification and the inherent numerical round-off problems of fast-sampled systems in the common Z-domain description. Superiority of the proposed approach is shown in an example.
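
The conditioning problem targeted here can be seen with a tiny sketch: a raw monomial (Vandermonde) basis on a wide frequency grid gives a numerically hopeless normal-equation matrix, whereas a data-dependent orthonormal basis, obtained below simply by a QR factorization rather than the paper's δ-domain construction, has a Gram matrix equal to the identity. The frequency grid is invented.

```python
import numpy as np

# Monomial vs. data-dependent orthonormal basis on a wide frequency grid:
# the Gram matrix V^H V of the Vandermonde basis is near-singular, while
# the QR-orthonormalized basis has Q^H Q = I, i.e. condition number ~1.
s = 1j * np.linspace(0.1, 1000.0, 200)           # hypothetical frequency grid
degree = 12

V = np.vander(s, degree + 1, increasing=True)    # monomial basis matrix
Q, _ = np.linalg.qr(V)                           # data-dependent orthonormal basis

print("cond(V^H V):", np.linalg.cond(V.conj().T @ V))
print("cond(Q^H Q):", np.linalg.cond(Q.conj().T @ Q))
```

Any least-squares problem posed in the Q basis is then solved with essentially no amplification of rounding error, which is what "numerically reliable" refers to in the title.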

  • Bi-Orthonormal Polynomial basis function framework with applications in system identification
    IEEE Transactions on Automatic Control, 2016
    Co-Authors: Robbert Van Herpen, Okko Bosgra, Tom Oomen
    Abstract:

    Numerical aspects are of central importance in identification and control. Many computations in these fields involve approximations using Polynomial or rational functions that are obtained using orthogonal or oblique projections. The aim of this paper is to develop a new and general theoretical framework to solve a large class of relevant problems. The proposed method is built on the introduction of bi-Orthonormal Polynomials with respect to a data-dependent bi-linear form. This bi-linear form generalises the conventional inner product and allows for asymmetric and indefinite problems. The proposed approach is shown to lead to optimal numerical conditioning (κ = 1) in a recent frequency-domain instrumental variable system identification algorithm. In comparison, it is shown that these recent algorithms exhibit extremely poor numerical properties when solved using traditional approaches.
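
The core object, a pair of bases that are orthonormal with respect to an asymmetric and possibly indefinite bilinear form rather than an inner product, can be illustrated very compactly. The construction below (P = I, Q = M⁻ᵀ) is only the simplest way to satisfy the bi-orthonormality condition, not the paper's data-dependent construction; M is a random stand-in for the form.

```python
import numpy as np

# Bi-orthonormal pair: for a bilinear form <f, g> = f^T M g with M
# asymmetric (so it is not an inner product), find bases P, Q whose
# columns satisfy Q^T M P = I. The resulting "Gram" matrix is then
# the identity, i.e. condition number kappa = 1.
rng = np.random.default_rng(4)
n = 5
M = rng.normal(size=(n, n))        # random stand-in for a data-dependent form
P = np.eye(n)                      # primal basis
Q = np.linalg.inv(M).T             # dual basis chosen to bi-orthonormalize
G = Q.T @ M @ P
print(np.linalg.cond(G))           # ~1
```

In the paper's setting the bases are polynomial and built recursively from data, but the payoff is the same: the system matrix of the projection step becomes the identity, so its conditioning is optimal.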

Shinjini Nandi - One of the best experts on this subject based on the ideXlab platform.

  • LPiTrack: Eye movement pattern recognition algorithm and application to biometric identification
    Machine Learning, 2018
    Co-Authors: Subhadeep Mukhopadhyay, Shinjini Nandi
    Abstract:

    A comprehensive nonparametric statistical learning framework, called LPiTrack, is introduced for large-scale eye-movement pattern discovery. The foundation of our data-compression scheme is a new Karhunen–Loève-type representation of the stochastic process in Hilbert space by specially designed Orthonormal Polynomial expansions. We apply this novel nonlinear transformation-based statistical data-processing algorithm to extract temporal-spatial-static characteristics from eye-movement trajectory data in an automated, robust way for biometric authentication. This is a significant step towards designing a next-generation gaze-based biometric identification system. We elucidate the essential components of our algorithm through data from the second Eye Movements Verification and Identification Competition, organized as a part of the 2014 International Joint Conference on Biometrics.

Herschel Rabitz - One of the best experts on this subject based on the ideXlab platform.

  • Random Sampling-High Dimensional Model Representation (RS-HDMR) with Nonuniformly Distributed Variables: Application to an Integrated Multimedia/Multipathway Exposure and Dose Model for Trichloroethylene
    Journal of Physical Chemistry A, 2003
    Co-Authors: Sheng Wei Wang, Panos G. Georgopoulos, Herschel Rabitz
    Abstract:

    The high dimensional model representation (HDMR) technique is a procedure for representing high dimensional functions efficiently. A practical form of the technique, random sampling-high dimensional model representation (RS-HDMR), is based on randomly sampling the overall function. In reality, the samples are often obtained according to some probability density functions (pdfs). This paper extends our previous RS-HDMR work with uniformly distributed random samples to those with a nonuniform distribution and treats uniform sampling as a special case. Weighted Orthonormal Polynomial expansions are introduced to approximate the RS-HDMR component functions. Different pdfs give special formulas for the weighted Orthonormal Polynomials. However, the structure of the formulas for the RS-HDMR component functions represented by the Monte Carlo integration approximation is the same for all pdfs. The correlation method to reduce the variance of the Monte Carlo integration and the method to represent the high order terms by lower order terms in uniform RS-HDMR can also be used for nonuniform RS-HDMR. The theoretical basis of nonuniform RS-HDMR is provided, and an application is presented to an integrated environmental exposure and dose model for trichloroethylene.
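
The basic RS-HDMR machinery can be sketched for the uniform special case: the first-order component function is expanded in orthonormal shifted Legendre polynomials, and each expansion coefficient is just a Monte Carlo average over random samples of the full model. The three-variable model below is invented for illustration.

```python
import numpy as np

# RS-HDMR sketch (uniform special case): expand the first-order component
# f1(x1) in orthonormal shifted Legendre polynomials; each coefficient
# alpha_r = E[f(X) * phi_r(X1)] is a plain Monte Carlo average.
rng = np.random.default_rng(2)

def f(x):                                 # hypothetical 3-variable model
    return 1.0 + 2.0 * x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.1 * x[:, 2]

# orthonormal shifted Legendre polynomials on [0, 1]
phi = [
    lambda t: np.sqrt(3.0) * (2.0 * t - 1.0),
    lambda t: np.sqrt(5.0) * (6.0 * t ** 2 - 6.0 * t + 1.0),
]

N = 200_000
x = rng.uniform(size=(N, 3))
y = f(x)

f0 = y.mean()                                     # zeroth-order term, E[f]
alpha = [np.mean(y * p(x[:, 0])) for p in phi]    # alpha_r = E[f * phi_r(x1)]

t = np.linspace(0.0, 1.0, 5)
f1_hat = sum(a * p(t) for a, p in zip(alpha, phi))
f1_true = 2.0 * t ** 2 - 2.0 / 3.0                # exact E[f | x1] - E[f]
print(f0, f1_hat)
```

For a nonuniform pdf, the abstract's point is that only the polynomial family changes (it must be orthonormal under the sampling weight); the coefficient formulas stay Monte Carlo averages of exactly this form.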

  • Correlation method for variance reduction of Monte Carlo integration in RS-HDMR
    Journal of computational chemistry, 2003
    Co-Authors: Herschel Rabitz, Sheng Wei Wang, Panos G. Georgopoulos
    Abstract:

    The High Dimensional Model Representation (HDMR) technique is a procedure for efficiently representing high-dimensional functions. A practical form of the technique, RS-HDMR, is based on randomly sampling the overall function and utilizing Orthonormal Polynomial expansions. The determination of expansion coefficients employs Monte Carlo integration, which controls the accuracy of RS-HDMR expansions. In this article, a correlation method is used to reduce the Monte Carlo integration error. The determination of the expansion coefficients becomes an iteration procedure, and the resultant RS-HDMR expansion has much better accuracy than that achieved by direct Monte Carlo integration. For an illustration in four dimensions a few hundred random samples are sufficient to construct an RS-HDMR expansion by the correlation method with an accuracy comparable to that obtained by direct Monte Carlo integration with thousands of samples.
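
The variance-reduction idea, using an approximant whose integral is known analytically as a control variate so that Monte Carlo only has to integrate the small residual, can be sketched generically. The one-dimensional integrand below is invented; in RS-HDMR the current expansion plays the role of the approximant, and the procedure iterates.

```python
import numpy as np

# Control-variate sketch of the correlation method: subtract an
# analytically integrable approximant h of f, integrate the residual
# f - h by Monte Carlo, and add back the exact integral of h.
rng = np.random.default_rng(3)

def f(x):
    return np.exp(x)                    # exact integral on [0, 1]: e - 1

def h(x):
    return 1.0 + x + 0.5 * x ** 2       # truncated series; integral 5/3

N = 2_000
x = rng.uniform(size=N)

plain = np.mean(f(x))                           # direct Monte Carlo
corrected = 5.0 / 3.0 + np.mean(f(x) - h(x))    # exact part + residual MC

exact = np.e - 1.0
print(abs(plain - exact), abs(corrected - exact))
```

Because the residual f − h has far smaller variance than f itself, the corrected estimate reaches a given accuracy with many fewer samples, mirroring the few-hundred-versus-thousands comparison in the abstract.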

Panos G. Georgopoulos - One of the best experts on this subject based on the ideXlab platform.

  • Random Sampling-High Dimensional Model Representation (RS-HDMR) with Nonuniformly Distributed Variables: Application to an Integrated Multimedia/Multipathway Exposure and Dose Model for Trichloroethylene
    Journal of Physical Chemistry A, 2003
    Co-Authors: Sheng Wei Wang, Panos G. Georgopoulos, Herschel Rabitz
    Abstract:

    The high dimensional model representation (HDMR) technique is a procedure for representing high dimensional functions efficiently. A practical form of the technique, random sampling-high dimensional model representation (RS-HDMR), is based on randomly sampling the overall function. In reality, the samples are often obtained according to some probability density functions (pdfs). This paper extends our previous RS-HDMR work with uniformly distributed random samples to those with a nonuniform distribution and treats uniform sampling as a special case. Weighted Orthonormal Polynomial expansions are introduced to approximate the RS-HDMR component functions. Different pdfs give special formulas for the weighted Orthonormal Polynomials. However, the structure of the formulas for the RS-HDMR component functions represented by the Monte Carlo integration approximation is the same for all pdfs. The correlation method to reduce the variance of the Monte Carlo integration and the method to represent the high order terms by lower order terms in uniform RS-HDMR can also be used for nonuniform RS-HDMR. The theoretical basis of nonuniform RS-HDMR is provided, and an application is presented to an integrated environmental exposure and dose model for trichloroethylene.

  • Correlation method for variance reduction of Monte Carlo integration in RS-HDMR
    Journal of computational chemistry, 2003
    Co-Authors: Herschel Rabitz, Sheng Wei Wang, Panos G. Georgopoulos
    Abstract:

    The High Dimensional Model Representation (HDMR) technique is a procedure for efficiently representing high-dimensional functions. A practical form of the technique, RS-HDMR, is based on randomly sampling the overall function and utilizing Orthonormal Polynomial expansions. The determination of expansion coefficients employs Monte Carlo integration, which controls the accuracy of RS-HDMR expansions. In this article, a correlation method is used to reduce the Monte Carlo integration error. The determination of the expansion coefficients becomes an iteration procedure, and the resultant RS-HDMR expansion has much better accuracy than that achieved by direct Monte Carlo integration. For an illustration in four dimensions a few hundred random samples are sufficient to construct an RS-HDMR expansion by the correlation method with an accuracy comparable to that obtained by direct Monte Carlo integration with thousands of samples.