Least Squares

The Experts below are selected from a list of 264 Experts worldwide, ranked by the ideXlab platform

Stephen Boyd - One of the best experts on this subject based on the ideXlab platform.

  • Least Squares auto-tuning
    Engineering Optimization, 2020
    Co-Authors: Shane Barratt, Stephen Boyd
    Abstract:

    Least Squares auto-tuning automatically finds hyper-parameters in Least Squares problems that minimize another (true) objective. The Least Squares tuning optimization problem is non-convex, so it c...

  • Least Squares Auto-Tuning
    arXiv: Optimization and Control, 2019
    Co-Authors: Shane Barratt, Stephen Boyd
    Abstract:

    Least Squares is by far the simplest and most commonly applied computational method in many fields. In almost all applications, the Least Squares objective is rarely the true objective. We account for this discrepancy by parametrizing the Least Squares problem and automatically adjusting these parameters using an optimization algorithm. We apply our method, which we call Least Squares auto-tuning, to data fitting.
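
A minimal sketch of the auto-tuning idea described in the two abstracts above, not the authors' implementation: the parametrized least squares problem here is plain ridge regression with a single regularization weight, and that weight is adjusted by gradient descent so that a held-out ("true") objective decreases. All names (tune_ridge, X_tr, lam, and so on) are illustrative assumptions.

```python
import numpy as np

def tune_ridge(X_tr, y_tr, X_val, y_val, lam=1.0, lr=0.1, steps=200):
    """Adjust the ridge weight lam by gradient descent on the held-out
    ("true") objective L(lam) = 0.5 * ||X_val w(lam) - y_val||^2, where
    w(lam) = argmin_w ||X_tr w - y_tr||^2 + lam * ||w||^2 is the solution
    of the parametrized least squares problem."""
    n = X_tr.shape[1]
    log_lam = np.log(lam)                       # optimize log(lam) to keep lam > 0
    for _ in range(steps):
        lam = np.exp(log_lam)
        A = X_tr.T @ X_tr + lam * np.eye(n)
        w = np.linalg.solve(A, X_tr.T @ y_tr)   # inner least squares solve
        dw_dlam = -np.linalg.solve(A, w)        # implicit differentiation of A w = X^T y
        dL_dlam = (X_val @ w - y_val) @ (X_val @ dw_dlam)
        log_lam -= lr * lam * dL_dlam           # chain rule: dL/dlog(lam) = lam * dL/dlam
    return np.exp(log_lam), w

# Toy usage on synthetic data; the tuning problem is non-convex in general,
# so gradient descent only finds a local minimum.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
y = X @ rng.normal(size=5) + 0.3 * rng.normal(size=80)
lam_opt, w_opt = tune_ridge(X[:60], y[:60], X[60:], y[60:])
```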

M P Wand - One of the best experts on this subject based on the ideXlab platform.

  • multivariate locally weighted Least Squares regression
    Annals of Statistics, 1994
    Co-Authors: David Ruppert, M P Wand
    Abstract:

    Nonparametric regression using locally weighted Least Squares was first discussed by Stone and by Cleveland. Recently, it was shown by Fan and by Fan and Gijbels that the local linear kernel-weighted Least Squares regression estimator has asymptotic properties making it superior, in certain senses, to the Nadaraya-Watson and Gasser-Müller kernel estimators. In this paper we extend their results on asymptotic bias and variance to the case of multivariate predictor variables. We are able to derive the leading bias and variance terms for general multivariate kernel weights using weighted Least Squares matrix theory. This approach is especially convenient when analysing the asymptotic conditional bias and variance of the estimator at points near the boundary of the support of the predictors. We also investigate the asymptotic properties of the multivariate local quadratic Least Squares regression estimator discussed by Cleveland and Devlin and, in the univariate case, higher-order polynomial fits and derivative estimation.
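
A minimal sketch of the local linear estimator analysed in the paper, evaluated at a single query point, under illustrative assumptions (Gaussian kernel, bandwidth matrix H used as the kernel covariance): the estimate of the regression function at x0 is the intercept of a kernel-weighted least squares fit centred at x0.

```python
import numpy as np

def local_linear_fit(X, y, x0, H):
    """Local linear kernel-weighted least squares estimate of m(x0).
    X: (n, d) predictors, y: (n,) responses, x0: (d,) query point,
    H: (d, d) bandwidth matrix used as the covariance of a Gaussian kernel."""
    D = X - x0                                    # predictors centred at the query point
    Hinv = np.linalg.inv(H)
    # Kernel weights K_H(x_i - x0); the normalizing constant cancels in the fit.
    w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', D, Hinv, D))
    Z = np.hstack([np.ones((X.shape[0], 1)), D])  # design: intercept + local linear term
    WZ = w[:, None] * Z                           # apply kernel weights row-wise
    beta = np.linalg.solve(Z.T @ WZ, WZ.T @ y)    # weighted normal equations
    return beta[0]                                # intercept = estimate of m(x0)

# Toy usage: two predictors, one query point.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=200)
m_hat = local_linear_fit(X, y, x0=np.array([0.2, -0.3]), H=0.1 * np.eye(2))
```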

Ivan Markovsky - One of the best experts on this subject based on the ideXlab platform.

  • A Recursive Restricted Total Least-Squares Algorithm
    IEEE Transactions on Signal Processing, 2014
    Co-Authors: Stephan Rhode, Ivan Markovsky, Konstantin Usevich, Frank Gauterin
    Abstract:

    We show that the generalized total Least Squares (GTLS) problem with a singular noise covariance matrix is equivalent to the restricted total Least Squares (RTLS) problem and propose a recursive method for its numerical solution. The method is based on the generalized inverse iteration. The estimation error covariance matrix and the estimated augmented correction are also characterized and computed recursively. The algorithm is cheap to compute and is suitable for online implementation. Simulation results in Least Squares (LS), data Least Squares (DLS), total Least Squares (TLS), and RTLS noise scenarios show fast convergence of the parameter estimates to their optimal values obtained by corresponding batch algorithms.

  • overview of total Least Squares methods
    Signal Processing, 2007
    Co-Authors: Ivan Markovsky, Sabine Van Huffel
    Abstract:

    We review the development and extensions of the classical total Least-Squares method and describe algorithms for its generalization to weighted and structured approximation problems. In the generic case, the classical total Least-Squares problem has a unique solution, which is given in analytic form in terms of the singular value decomposition of the data matrix. The weighted and structured total Least-Squares problems have no such analytic solution and are currently solved numerically by local optimization methods. We explain how special structure of the weight matrix and the data matrix can be exploited for efficient cost function and first derivative computation. This makes it possible to obtain computationally efficient solution methods. The total Least-Squares family of methods has a wide range of applications in system theory, signal processing, and computer algebra. We describe applications to deconvolution, linear prediction, and errors-in-variables system identification.
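
For orientation, the noise scenarios compared in the recursive RTLS paper above and the classical problem described in this overview can be summarized by the following textbook correction problems; the notation A x ≈ b is an assumption of this summary, not taken from either paper.

```latex
% Standard statements of the LS, DLS and TLS correction problems
% (notation is illustrative, not the papers' own):
\begin{align*}
\text{LS:}  \quad & \min_{\Delta b}\ \|\Delta b\|_2
            \quad \text{s.t. } A x = b + \Delta b
            && \text{(errors in $b$ only)}\\
\text{DLS:} \quad & \min_{\Delta A}\ \|\Delta A\|_F
            \quad \text{s.t. } (A + \Delta A)\,x = b
            && \text{(errors in $A$ only)}\\
\text{TLS:} \quad & \min_{\Delta A,\,\Delta b}\ \big\|[\,\Delta A \;\; \Delta b\,]\big\|_F
            \quad \text{s.t. } (A + \Delta A)\,x = b + \Delta b
            && \text{(errors in both)}
\end{align*}
```

In the generic case, the analytic TLS solution mentioned in the overview comes from the right singular vector of the augmented data matrix [A b] associated with its smallest singular value, as in this minimal sketch (unstructured, unweighted, single right-hand side; names are illustrative):

```python
import numpy as np

def tls(A, b):
    """Classical (unstructured, unweighted) total least squares for A x ~ b.
    In the generic case the solution comes from the right singular vector of
    the augmented data matrix [A  b] for its smallest singular value."""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                                # singular vector for the smallest singular value
    if np.isclose(v[-1], 0.0):
        raise ValueError("non-generic case: no TLS solution of this form")
    return -v[:-1] / v[-1]

# Toy usage in an errors-in-variables setting: noise in both A and b.
rng = np.random.default_rng(0)
A0 = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
A = A0 + 0.05 * rng.normal(size=A0.shape)
b = A0 @ x_true + 0.05 * rng.normal(size=100)
x_tls = tls(A, b)
```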

Lindsay M Faunt - One of the best experts on this subject based on the ideXlab platform.

  • parameter estimation by Least Squares methods
    Methods in Enzymology, 1992
    Co-Authors: Michael L. Johnson, Lindsay M Faunt
    Abstract:

    This chapter presents an overview of Least-Squares methods for estimating parameters by fitting experimental data. Least-Squares methods produce the parameter estimates with the highest probability (maximum likelihood) of being correct, provided several critical assumptions are warranted. The chapter discusses a number of Least-Squares parameter estimation procedures, methods for evaluating confidence intervals for the determined parameters, and the practical aspects of applying Least-Squares techniques to experimental data. It outlines Least-Squares methods for evaluating the set of parameters with the highest probability of being correct given a set of experimental data, and it reviews the inherent assumptions of these methods. Nonlinear Least-Squares analysis comprises a group of numerical procedures that can be used to evaluate the optimal values of the parameters in a vector a for the experimental data. The chapter reviews several of the more common algorithms, including the Gauss-Newton method and its derivatives and the Nelder-Mead simplex method. The Gauss-Newton Least-Squares method is formulated as a system of Taylor series expansions of the fitting function. The Marquardt method is the most commonly used procedure for improving the convergence properties of the Gauss-Newton method.
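
A minimal sketch of the Gauss-Newton iteration summarized above (not code from the chapter): the fitting function is linearized by a first-order Taylor expansion around the current parameter vector a, and the resulting linear least squares problem gives the update step. The Marquardt (Levenberg-Marquardt) refinement mentioned at the end would add a damping term to the linearized system to improve convergence. Function names and the toy model are illustrative assumptions.

```python
import numpy as np

def gauss_newton(residuals, jacobian, a0, steps=50, tol=1e-10):
    """Plain Gauss-Newton iteration for nonlinear least squares:
    minimize ||r(a)||^2 by repeatedly linearizing r(a) (Taylor expansion
    of the fitting function) and solving the linearized problem.
    residuals(a) -> (n,) vector, jacobian(a) -> (n, p) matrix."""
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        r = residuals(a)
        J = jacobian(a)
        d, *_ = np.linalg.lstsq(J, -r, rcond=None)  # step: argmin_d ||J d + r||^2
        a = a + d
        if np.linalg.norm(d) < tol:
            break
    return a

# Toy usage: fit y = a[0] * exp(a[1] * t) to noisy data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
y = 2.0 * np.exp(-1.5 * t) + 0.02 * rng.normal(size=40)
res = lambda a: a[0] * np.exp(a[1] * t) - y
jac = lambda a: np.column_stack([np.exp(a[1] * t), a[0] * t * np.exp(a[1] * t)])
a_hat = gauss_newton(res, jac, a0=[1.0, -1.0])
```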