Robust Regression

The Experts below are selected from a list of 82,860 Experts worldwide ranked by the ideXlab platform

Rama Chellappa - One of the best experts on this subject based on the ideXlab platform.

  • Analysis of Sparse Regularization Based Robust Regression Approaches
    IEEE Transactions on Signal Processing, 2013
    Co-Authors: Kaushik Mitra, Ashok Veeraraghavan, Rama Chellappa
    Abstract:

    Regression in the presence of outliers is an inherently combinatorial problem. However, compressive sensing theory suggests that certain combinatorial optimization problems can be exactly solved using polynomial-time algorithms. Motivated by this connection, several research groups have proposed polynomial-time algorithms for Robust Regression. In this paper, we specifically address the traditional Robust Regression problem, where the number of observations is greater than the number of unknown Regression parameters and the structure of the regressor matrix is defined by the training dataset (and hence it may not satisfy properties such as the Restricted Isometry Property or incoherence). We derive the precise conditions under which the sparse regularization (l0- and l1-norm) approaches solve the Robust Regression problem. We show that the smallest principal angle between the regressor subspace and all k-dimensional outlier subspaces is the fundamental quantity that determines the performance of these algorithms. In terms of this angle, we provide an estimate of the number of outliers the sparse regularization-based approaches can handle. We then empirically evaluate the sparse (l1-norm) regularization approach against other traditional Robust Regression algorithms to identify accurate and efficient algorithms for high-dimensional Regression problems.
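The l1-norm regularization approach analysed in this abstract models the response as y = X beta + e + noise, with a sparse outlier vector e. A minimal sketch of one way to solve the penalized problem (not the authors' implementation; the penalty weight, the data, and the alternating scheme below are all assumptions for illustration):

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_robust_regression(X, y, lam, n_iter=200):
    """Fit y = X @ beta + e + noise with a sparse outlier vector e by block
    coordinate descent on 0.5 * ||y - X beta - e||^2 + lam * ||e||_1."""
    e = np.zeros_like(y)
    for _ in range(n_iter):
        # beta-step: ordinary least squares on the outlier-corrected response
        beta, *_ = np.linalg.lstsq(X, y - e, rcond=None)
        # e-step: soft-threshold the residuals; inlier entries shrink to zero
        e = soft_threshold(y - X @ beta, lam)
    return beta, e

# Hypothetical demo: 100 observations, 3 parameters, 10 gross outliers.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=100)
y[:10] += 10.0                       # plant outliers in the first 10 rows
beta_hat, e_hat = l1_robust_regression(X, y, lam=0.5)
```

Soft-thresholding sets e to exactly zero on inlier rows, so the beta-step sees an outlier-corrected response; lam controls how large a residual must be before it is treated as an outlier.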

  • ICASSP - Robust Regression using sparse learning for high dimensional parameter estimation problems
    2010 IEEE International Conference on Acoustics Speech and Signal Processing, 2010
    Co-Authors: Kaushik Mitra, Ashok Veeraraghavan, Rama Chellappa
    Abstract:

    Algorithms such as Least Median of Squares (LMedS) and Random Sample Consensus (RANSAC) have been very successful for low-dimensional Robust Regression problems. However, the combinatorial nature of these algorithms makes them practically unusable for high-dimensional applications. In this paper, we introduce algorithms that have cubic time complexity in the dimension of the problem, making them computationally efficient for high-dimensional problems. We formulate the Robust Regression problem by projecting the dependent variable onto the null space of the independent variables, a projection that receives significant contributions only from the outliers. We then identify the outliers using sparse representation/learning-based algorithms. Under certain conditions that follow from the theory of sparse representation, these polynomial-time algorithms can accurately solve the Robust Regression problem, which is, in general, a combinatorial problem. We present experimental results that demonstrate the efficacy of the proposed algorithms. We also analyze the intrinsic parameter space of Robust Regression and identify an efficient and accurate class of algorithms for different operating conditions. An application to facial age estimation is presented.
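The null-space formulation can be sketched as follows (a hypothetical illustration, not the paper's algorithm: the greedy OMP-style recovery step, the data, and the assumption that the number of outliers k is known in advance are all mine). Projecting y onto the left null space of X annihilates X beta, leaving a projection of the outlier vector plus noise:

```python
import numpy as np

def nullspace_outlier_detection(X, y, k):
    """Project y onto the orthogonal complement of the column space of X,
    which removes X @ beta and keeps (a projection of) the outlier vector,
    then greedily recover the k outlier positions with an OMP-style loop."""
    n = X.shape[0]
    P = np.eye(n) - X @ np.linalg.pinv(X)   # projector onto the left null space
    z = P @ y                               # z = P @ e + P @ noise, beta-free
    support, r = [], z.copy()
    for _ in range(k):
        # pick the column of P most correlated with the current residual
        scores = np.abs(P.T @ r)
        scores[support] = -np.inf           # never pick the same index twice
        support.append(int(np.argmax(scores)))
        # refit the selected outlier magnitudes, then update the residual
        e_s, *_ = np.linalg.lstsq(P[:, support], z, rcond=None)
        r = z - P[:, support] @ e_s
    return sorted(support)

# Hypothetical demo: 10 planted outliers among 100 observations.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=100)
y[:10] += 8.0
found = nullspace_outlier_detection(X, y, k=10)
```

Once the outlier rows are identified, beta can be re-estimated by ordinary least squares on the remaining rows.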

David E. Booth - One of the best experts on this subject based on the ideXlab platform.

  • A Decision Model for the Robot Selection Problem Using Robust Regression
    Decision Sciences, 1991
    Co-Authors: Moutaz Khouja, David E. Booth
    Abstract:

    Industrial robots are increasingly used by many manufacturing firms. The number of robot manufacturers has also increased, with many of these firms now offering a wide range of models. A potential user is thus faced with many options in both performance and cost. This paper proposes a decision model for the robot selection problem. The proposed model uses Robust Regression to identify, based on manufacturers' specifications, the robots that are the better performers for a given cost. Robust Regression is used because it identifies outlying observations and is resistant to their effects, both of which are key components of the proposed model. The robots selected by the model become candidates for testing to verify manufacturers' specifications. The model is tested on a real data set and an example is presented.
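The selection logic can be sketched as follows, on a hypothetical data set of robot costs and performance scores. Huber IRLS here is a generic robust estimator standing in for whichever Robust Regression method the paper uses; the "better performers for a given cost" are then the robots with large positive standardized residuals:

```python
import numpy as np

def huber_irls(A, y, c=1.345, n_iter=50):
    """Robust line fit by iteratively reweighted least squares with Huber
    weights; residuals are scaled by the MAD so outliers are downweighted."""
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    for _ in range(n_iter):
        r = y - A @ beta
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust MAD scale
        w = np.minimum(1.0, c * scale / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    r = y - A @ beta
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))
    return beta, r, scale

# Hypothetical specifications: 12 robots; robot 4 over-performs its cost.
rng = np.random.default_rng(2)
cost = np.linspace(1.0, 10.0, 12)
perf = 2.0 + 3.0 * cost + 0.3 * rng.normal(size=12)
perf[4] += 15.0
A = np.column_stack([np.ones(12), cost])
beta, resid, scale = huber_irls(A, perf)
std_resid = resid / scale            # large positive values: test candidates
```

Robots whose standardized residual exceeds a chosen threshold become the candidates for verification testing.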

Kaushik Mitra - One of the best experts on this subject based on the ideXlab platform.

  • Analysis of Sparse Regularization Based Robust Regression Approaches
    IEEE Transactions on Signal Processing, 2013
    Co-Authors: Kaushik Mitra, Ashok Veeraraghavan, Rama Chellappa
    Abstract:

    Regression in the presence of outliers is an inherently combinatorial problem. However, compressive sensing theory suggests that certain combinatorial optimization problems can be exactly solved using polynomial-time algorithms. Motivated by this connection, several research groups have proposed polynomial-time algorithms for Robust Regression. In this paper, we specifically address the traditional Robust Regression problem, where the number of observations is greater than the number of unknown Regression parameters and the structure of the regressor matrix is defined by the training dataset (and hence it may not satisfy properties such as the Restricted Isometry Property or incoherence). We derive the precise conditions under which the sparse regularization (l0- and l1-norm) approaches solve the Robust Regression problem. We show that the smallest principal angle between the regressor subspace and all k-dimensional outlier subspaces is the fundamental quantity that determines the performance of these algorithms. In terms of this angle, we provide an estimate of the number of outliers the sparse regularization-based approaches can handle. We then empirically evaluate the sparse (l1-norm) regularization approach against other traditional Robust Regression algorithms to identify accurate and efficient algorithms for high-dimensional Regression problems.

  • ICASSP - Robust Regression using sparse learning for high dimensional parameter estimation problems
    2010 IEEE International Conference on Acoustics Speech and Signal Processing, 2010
    Co-Authors: Kaushik Mitra, Ashok Veeraraghavan, Rama Chellappa
    Abstract:

    Algorithms such as Least Median of Squares (LMedS) and Random Sample Consensus (RANSAC) have been very successful for low-dimensional Robust Regression problems. However, the combinatorial nature of these algorithms makes them practically unusable for high-dimensional applications. In this paper, we introduce algorithms that have cubic time complexity in the dimension of the problem, making them computationally efficient for high-dimensional problems. We formulate the Robust Regression problem by projecting the dependent variable onto the null space of the independent variables, a projection that receives significant contributions only from the outliers. We then identify the outliers using sparse representation/learning-based algorithms. Under certain conditions that follow from the theory of sparse representation, these polynomial-time algorithms can accurately solve the Robust Regression problem, which is, in general, a combinatorial problem. We present experimental results that demonstrate the efficacy of the proposed algorithms. We also analyze the intrinsic parameter space of Robust Regression and identify an efficient and accurate class of algorithms for different operating conditions. An application to facial age estimation is presented.

Moutaz Khouja - One of the best experts on this subject based on the ideXlab platform.

  • A Decision Model for the Robot Selection Problem Using Robust Regression
    Decision Sciences, 1991
    Co-Authors: Moutaz Khouja, David E. Booth
    Abstract:

    Industrial robots are increasingly used by many manufacturing firms. The number of robot manufacturers has also increased, with many of these firms now offering a wide range of models. A potential user is thus faced with many options in both performance and cost. This paper proposes a decision model for the robot selection problem. The proposed model uses Robust Regression to identify, based on manufacturers' specifications, the robots that are the better performers for a given cost. Robust Regression is used because it identifies outlying observations and is resistant to their effects, both of which are key components of the proposed model. The robots selected by the model become candidates for testing to verify manufacturers' specifications. The model is tested on a real data set and an example is presented.

Marco Riani - One of the best experts on this subject based on the ideXlab platform.

  • Reliable Robust Regression Diagnostics
    International Statistical Review, 2016
    Co-Authors: Silvia Salini, Fabrizio Laurini, Andrea Cerioli, Marco Riani
    Abstract:

    Motivated by the requirement, arising in several application fields, of controlling the number of false discoveries, we study the behaviour of diagnostic procedures obtained from popular high-breakdown Regression estimators when no outlier is present in the data. We find that the empirical error rates for many of the available techniques are surprisingly far from the prescribed nominal level. Therefore, we propose a simulation-based approach to correct the liberal diagnostics and reach reliable inferences. We provide evidence that our approach performs well in a wide range of settings of practical interest and for a variety of Robust Regression techniques, thus showing general appeal. We also evaluate the loss of power that can be expected from our corrections under different contamination schemes and show that this loss is often not dramatic. Finally, we detail some possible extensions that may further enhance the applicability of the method.
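The simulation-based calibration idea can be illustrated with a stand-in diagnostic (plain OLS studentized residuals rather than the high-breakdown estimators studied in the paper; the design matrix and simulation sizes below are assumptions). Clean, outlier-free data are simulated under the fitted design, and the cutoff is set to an empirical quantile of the null distribution of the test statistic rather than a per-observation nominal level:

```python
import numpy as np

def calibrated_cutoff(X, alpha=0.05, n_sim=500, rng=None):
    """Monte Carlo calibration of an outlier-test cutoff: simulate clean
    data, record the maximum absolute studentized residual, and return its
    (1 - alpha) empirical quantile."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, p = X.shape
    H = X @ np.linalg.pinv(X)            # hat matrix of the design
    h = np.diag(H)
    stats = np.empty(n_sim)
    for s in range(n_sim):
        y = rng.normal(size=n)           # pure-noise response: no outliers
        r = y - H @ y                    # OLS residuals
        sigma = np.sqrt(r @ r / (n - p))
        stats[s] = np.max(np.abs(r / (sigma * np.sqrt(1.0 - h))))
    return float(np.quantile(stats, 1.0 - alpha))

# Hypothetical design: 50 observations, intercept plus two regressors.
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
cutoff = calibrated_cutoff(X, alpha=0.05)
```

Because the statistic is a maximum over n residuals, the calibrated cutoff is well above the pointwise normal quantile; using the pointwise value is exactly the kind of liberal diagnostic the correction addresses.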

  • Benchmark testing of algorithms for very Robust Regression: FS, LMS and LTS
    Computational Statistics & Data Analysis, 2012
    Co-Authors: Francesca Torti, Domenico Perrotta, Anthony C. Atkinson, Marco Riani
    Abstract:

    The methods of very Robust Regression resist up to 50% of outliers. The algorithms for very Robust Regression rely on selecting numerous subsamples of the data. New algorithms for the LMS and LTS estimators are proposed, with increased computational efficiency due to improved combinatorial sampling. These and other publicly available algorithms are compared for outlier detection. Timings and estimator quality are also considered. An algorithm using the forward search (FS) has the best properties for both the size and power of the outlier tests.
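A bare-bones version of the subsampling idea behind LMS, for orientation (a sketch, not one of the benchmarked implementations; the data and the number of subsets are assumptions): fit exactly through p randomly chosen points, score each candidate by the median squared residual over all points, and keep the best.

```python
import numpy as np

def lms_fit(X, y, n_subsets=500, rng=None):
    """Least Median of Squares by elemental-subset sampling: exact fit to a
    random p-subset, scored by the median squared residual over all points."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, p = X.shape
    best_beta, best_crit = None, np.inf
    for _ in range(n_subsets):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])   # exact fit to the subset
        except np.linalg.LinAlgError:
            continue                                 # skip degenerate subsets
        crit = np.median((y - X @ beta) ** 2)
        if crit < best_crit:
            best_beta, best_crit = beta, crit
    return best_beta

# Hypothetical demo: straight line with 40% contamination.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 50)
X = np.column_stack([np.ones(50), x])
y = 1.0 + 2.0 * x + 0.1 * rng.normal(size=50)
y[:20] = -5.0                        # 20 of 50 responses grossly corrupted
lms_beta = lms_fit(X, y)
ols_beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the criterion is a median rather than a sum, a candidate fit through the clean majority wins even with 40% contamination, whereas ordinary least squares is pulled far off the true slope. The improved combinatorial sampling benchmarked in the paper reduces how many such subsets are needed.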