Quadratic Loss

The Experts below are selected from a list of 309 Experts worldwide, ranked by the ideXlab platform.

William E Strawderman - One of the best experts on this subject based on the ideXlab platform.

  • matrix superharmonic priors for bayes estimation under matrix Quadratic Loss
    arXiv: Statistics Theory, 2020
    Co-Authors: Takeru Matsuda, William E Strawderman
    Abstract:

    We investigate Bayes estimation of a normal mean matrix under the matrix Quadratic Loss, which is viewed as a class of Loss functions including the Frobenius Loss and the Quadratic Loss for each column. First, we derive an unbiased estimate of risk and show that the Efron--Morris estimator is minimax. Next, we introduce a notion of matrix superharmonicity for matrix-variate functions and show that it has properties analogous to those of usual superharmonic functions, which may be of independent interest. Then, we show that the generalized Bayes estimator with respect to a matrix superharmonic prior is minimax. We also provide a class of matrix superharmonic priors that includes the previously proposed generalization of Stein's prior. Numerical results demonstrate that matrix superharmonic priors work well for low-rank matrices.
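    A minimal numerical sketch of the Efron--Morris estimator discussed above, assuming its usual form X(I_p − (n − p − 1)(XᵀX)⁻¹) for an n×p observation matrix with n ≥ p + 2, together with the matrix Quadratic Loss; this is an illustration, not code from the paper.

    ```python
    import numpy as np

    def efron_morris(X):
        """Efron--Morris shrinkage estimate of a normal mean matrix.

        Assumes the usual form X (I_p - (n - p - 1) (X'X)^{-1})
        for an n x p matrix X with n >= p + 2 and X'X nonsingular.
        """
        n, p = X.shape
        if n < p + 2:
            raise ValueError("requires n >= p + 2")
        return X @ (np.eye(p) - (n - p - 1) * np.linalg.inv(X.T @ X))

    def matrix_quadratic_loss(delta, M):
        """Matrix Quadratic Loss (delta - M)'(delta - M); its trace is the Frobenius Loss."""
        diff = delta - M
        return diff.T @ diff

    # Toy check: shrinkage tends to help most when the mean matrix is (near) low rank.
    rng = np.random.default_rng(0)
    M = np.outer(rng.normal(size=20), rng.normal(size=5))       # rank-one 20 x 5 mean
    X = M + rng.normal(size=M.shape)                             # X = M + standard normal noise
    print(np.trace(matrix_quadratic_loss(X, M)))                 # Frobenius Loss of X itself
    print(np.trace(matrix_quadratic_loss(efron_morris(X), M)))   # typically smaller
    ```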

  • minimax estimation of location parameters for spherically symmetric unimodal distributions under Quadratic Loss
    Social Science Research Network, 2006
    Co-Authors: Ann Cohen Brandwein, William E Strawderman
    Abstract:

    Families of minimax estimators are found for the location parameter of a p-variate (p ≥ 3) spherically symmetric unimodal (s.s.u.) distribution with respect to general Quadratic Loss. The estimators of James and Stein, Baranchik, Bock and Strawderman are all considered for this general problem. Specifically, when the Loss is the general Quadratic Loss L(δ, θ) = (δ − θ)′D(δ − θ), where D is a known p×p positive definite matrix, one main result, for one observation X on a multivariate s.s.u. distribution about θ, presents a class of minimax estimators whose risks dominate the risk of X, provided p ≥ 3 and trace(D) ≥ 2dL, where dL is the maximum eigenvalue of D. This class is given by δa,r(X) = (1 − a·r(‖X‖²)/‖X‖²)X, where the constant a and the function r satisfy conditions involving a constant c0 (stated separately for p ≥ 4, with c0 = .96 when p = 3).
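    As a concrete reading of the estimator class above, a short sketch of δa,r(X) = (1 − a·r(‖X‖²)/‖X‖²)X and the general Quadratic Loss; the particular choices r(t) = min(t, p − 2) and a = 1 are illustrative assumptions, not the conditions of the paper.

    ```python
    import numpy as np

    def quadratic_loss(delta, theta, D):
        """General Quadratic Loss L(delta, theta) = (delta - theta)' D (delta - theta)."""
        diff = delta - theta
        return diff @ D @ diff

    def shrinkage_estimator(x, a, r):
        """delta_{a,r}(X) = (1 - a * r(||X||^2) / ||X||^2) X."""
        s = x @ x
        return (1.0 - a * r(s) / s) * x

    # Illustrative setup: p = 5, D with trace(D) >= 2 * (max eigenvalue), as in the abstract.
    p = 5
    D = np.diag(np.arange(1.0, p + 1.0))
    x = np.array([0.3, -0.1, 0.4, 0.2, -0.2])        # one observation about theta = 0
    r = lambda t: min(t, p - 2)                      # Baranchik-type: bounded, nondecreasing
    print(quadratic_loss(x, np.zeros(p), D))                                  # Loss of X itself
    print(quadratic_loss(shrinkage_estimator(x, 1.0, r), np.zeros(p), D))     # Loss after shrinkage
    ```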

  • on minimax estimation of a normal mean vector for general Quadratic Loss
    2003
    Co-Authors: William E Strawderman
    Abstract:

    Let X ~ Np(θ, S) (S known) and consider the problem of estimating the mean vector when the Loss is the general Quadratic Loss (δ − θ)′Q(δ − θ). Many results are known for the case S = Q = I. There is also a relatively large literature for the case of general S and Q, but it is relatively less well developed. The purpose of this paper is to unify many of the results in the general case by relating them to the simpler case S = Q = I. We give a reduction of the general case to a canonical form (S = I, Q diagonal) and show that a natural correspondence between priors, marginals, and estimators in the two versions of the problem preserves risk, admissibility, minimaxity and Bayesianity. This allows many results on minimaxity and admissibility in the case S = Q = I to be extended to the general case and allows an expansion of the classes of known minimax estimators in the general case. It also seems to make the general case somewhat more comprehensible.
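    A sketch of one standard way to realize the canonical reduction described above (an illustration of the technique under the stated assumptions, not necessarily the exact transformation used in the paper): diagonalize S^{1/2} Q S^{1/2} and rotate.

    ```latex
    \[
    S^{1/2} Q S^{1/2} = P \Lambda P^{\top}\ (P \text{ orthogonal},\ \Lambda \text{ diagonal}),
    \qquad
    Y = P^{\top} S^{-1/2} X \sim N_p(\eta, I_p), \quad \eta = P^{\top} S^{-1/2}\theta .
    \]
    \[
    (\delta - \theta)^{\top} Q\, (\delta - \theta) = (d - \eta)^{\top} \Lambda\, (d - \eta),
    \qquad d = P^{\top} S^{-1/2}\delta ,
    \]
    % so estimating theta under general (S, Q) is risk-equivalent to estimating eta
    % under the canonical form S = I with a diagonal loss matrix Lambda.
    ```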

Ron C Mittelhammer - One of the best experts on this subject based on the ideXlab platform.

  • combining estimators to improve structural model estimation and inference under Quadratic Loss
    Journal of Econometrics, 2005
    Co-Authors: Ron C Mittelhammer, George G Judge
    Abstract:

    Asymptotically, semiparametric estimators of the parameters in linear structural models have the same sampling properties. In finite samples the sampling properties of these estimators vary, and large biases may result for sample sizes often found in practice. With the goal of improving asymptotic risk performance and finite-sample efficiency, we investigate the idea of combining correlated structural equation estimators with different finite-sample and asymptotic sampling characteristics. Based on a Quadratic Loss measure, we present evidence that the finite-sample performance of the resulting combination estimator can be notably superior to that of a leading traditional moment-based estimator.
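    A rough sketch of the general device of combining two correlated estimators with a data-driven weight under Quadratic Loss; the combination rule below is a generic Stein-like illustration (the weight formula is a hypothetical choice), not the authors' specific estimator.

    ```python
    import numpy as np

    def combine(beta1, beta2, cov1):
        """Stein-like combination (1 - shrink) * beta1 + shrink * beta2.

        beta1: consistent but high-variance estimate (e.g. a moment-based estimator);
        beta2: possibly biased but stable estimate (e.g. least squares);
        cov1:  estimated covariance of beta1.
        Hypothetical rule: shrink toward beta2 by an amount tied to the discrepancy
        between the two estimates, in the spirit of combining under Quadratic Loss.
        """
        gap = beta1 - beta2
        shrink = min(1.0, np.trace(cov1) / max(gap @ gap, 1e-12))
        return (1.0 - shrink) * beta1 + shrink * beta2

    # Toy usage with made-up numbers.
    beta_mom = np.array([1.4, -2.3, 0.9])    # noisy but consistent
    beta_ls = np.array([1.1, -1.8, 0.6])     # stable but possibly biased
    print(combine(beta_mom, beta_ls, cov1=0.05 * np.eye(3)))
    ```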

  • a semiparametric basis for combining estimation problems under Quadratic Loss
    Journal of the American Statistical Association, 2004
    Co-Authors: George G Judge, Ron C Mittelhammer
    Abstract:

    When there is uncertainty concerning the appropriate statistical model-estimator to use in representing the data sampling process, we consider a basis for optimally combining estimation problems. The objective is to produce natural adaptive estimators that are free of subjective choices and tuning parameters. In the context of two competing multivariate linear statistical models-estimators, we demonstrate a semiparametric Stein-like (SPSL) estimator that, under Quadratic Loss, has superior risk performance relative to the conventional least squares estimator. The relationship of the SPSL estimator to the family of Stein estimators is noted, and asymptotic and analytic finite-sample risk properties of the estimator are developed for some special cases. As an application we consider the problem of combining two polar linear models and demonstrate a corresponding SPSL estimator. An extensive sampling experiment is used to investigate the finite-sample performance of the SPSL estimator over a wide range of data sampling designs and symmetric and skewed distributions.

  • a semi parametric basis for combining estimation problems under Quadratic Loss
    Social Science Research Network, 2004
    Co-Authors: George G Judge, Ron C Mittelhammer
    Abstract:

    When there is uncertainty concerning the appropriate statistical model-estimator to use in representing the data sampling process, we consider a basis for optimally combining estimation problems. The objective is to produce natural adaptive estimators that are free of subjective choices and tuning parameters. In the context of two competing multivariate linear statistical models-estimators, we demonstrate a semi-parametric Stein-like (SPSL) estimator, beta overbar (alpha hat), that, under Quadratic Loss, has superior risk performance relative to the conventional least squares estimator. The relationship of the SPSL estimator to the family of Stein estimators is noted, and asymptotic and analytic finite-sample risk properties of the estimator are developed for some special cases. As an application we consider the problem of combining two polar linear models and demonstrate a corresponding SPSL estimator. An extensive sampling experiment is used to investigate the finite-sample performance of the SPSL estimator over a wide range of data sampling designs and symmetric and skewed distributions. Bootstrapping procedures are used to develop confidence sets and a basis for inference.

  • estimating the link function in multinomial response models under endogeneity and Quadratic Loss
    2004
    Co-Authors: George G Judge, Ron C Mittelhammer
    Abstract:

    This paper considers estimation and inference for the multinomial response model in the case where endogenous variables are arguments of the unknown link function. Semiparametric estimators are proposed that avoid the parametric assumptions underlying the likelihood approach as well as the Loss of precision when using nonparametric estimation. A data-based shrinkage estimator that seeks an optimal combination of estimators and results in superior risk performance under Quadratic Loss is also developed.

  • estimating the link function in multinomial response models under endogeneity and Quadratic Loss
    Research Papers in Economics, 2004
    Co-Authors: George G Judge, Ron C Mittelhammer
    Abstract:

    This paper considers estimation and inference for the multinomial response model in the case where endogenous variables are arguments of the unknown link function. Semiparametric estimators are proposed that avoid the parametric assumptions underlying the likelihood approach as well as the Loss of precision when using nonparametric estimation. A data-based shrinkage estimator that seeks an optimal combination of estimators and results in superior risk performance under Quadratic Loss is also developed.

Changha Hwang - One of the best experts on this subject based on the ideXlab platform.

  • support vector quantile regression with weighted Quadratic Loss function
    Communications for Statistical Applications and Methods, 2010
    Co-Authors: Jooyong Shim, Changha Hwang
    Abstract:

    Support vector quantile regression (SVQR) is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In this paper we propose an iteratively reweighted least squares (IRWLS) procedure to solve the problem of SVQR with a weighted Quadratic Loss function. Furthermore, we introduce the generalized approximate cross-validation function to select the hyperparameters which affect the performance of SVQR. Experimental results are then presented which illustrate the performance of the IRWLS procedure for SVQR.
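    A minimal sketch of the IRWLS idea for quantile regression via a weighted Quadratic Loss (a plain linear model with the pinball loss approximated by reweighted least squares; the weighting scheme and the smoothing constant eps are illustrative assumptions, not the paper's exact SVQR formulation).

    ```python
    import numpy as np

    def irwls_quantile(X, y, tau=0.5, n_iter=50, eps=1e-6):
        """Linear quantile regression fit by iteratively reweighted least squares.

        Each step solves a weighted Quadratic Loss problem whose weights make
        w_i * r_i^2 approximate the pinball (check) loss at quantile level tau.
        """
        n, _ = X.shape
        Xd = np.column_stack([np.ones(n), X])          # add intercept
        beta = np.linalg.lstsq(Xd, y, rcond=None)[0]   # ordinary least squares start
        for _ in range(n_iter):
            r = y - Xd @ beta
            w = np.where(r >= 0, tau, 1.0 - tau) / np.maximum(np.abs(r), eps)
            W = np.diag(w)
            beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
        return beta

    # Toy usage: estimate the 0.9 conditional quantile of a heteroscedastic response.
    rng = np.random.default_rng(3)
    x = rng.uniform(0, 10, size=200)
    y = 2.0 + 0.5 * x + (0.5 + 0.2 * x) * rng.standard_normal(200)
    print(irwls_quantile(x.reshape(-1, 1), y, tau=0.9))
    ```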

  • Interval regression analysis using Quadratic Loss support vector machine
    IEEE Transactions on Fuzzy Systems, 2005
    Co-Authors: Dug Hun Hong, Changha Hwang
    Abstract:

    Support vector machines (SVMs) have been very successful in pattern recognition and function estimation problems for crisp data. This paper proposes a new method to evaluate interval linear and nonlinear regression models, combining the possibility and necessity estimation formulation with the principle of the Quadratic Loss SVM. This version of SVM utilizes a Quadratic Loss function, unlike the traditional SVM. For data sets with crisp inputs and interval outputs, the possibility and necessity models have recently been utilized; they are based on a Quadratic programming approach, which gives more diverse spread coefficients than a linear programming one. The Quadratic Loss SVM also uses a Quadratic programming approach, whose further advantage in interval regression analysis is the ability to integrate both the property of central tendency in least squares and the possibilistic property in fuzzy regression, without being computationally expensive. The Quadratic Loss SVM allows us to perform interval nonlinear regression analysis by constructing an interval linear regression function in a high-dimensional feature space. The proposed algorithm is a very attractive approach to modeling nonlinear interval data, and is a model-free method in the sense that we do not have to assume the underlying model function for an interval nonlinear regression model with crisp inputs and interval output. Experimental results are then presented which indicate the performance of this algorithm.
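    For reference, a generic regression-style primal of the Quadratic Loss SVM principle referred to above, in which the slacks enter the objective quadratically rather than linearly; this is a sketch of the generic formulation, not the paper's possibility/necessity model.

    ```latex
    \[
    \min_{w,\,b,\,\xi}\ \ \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{n}\xi_i^{2}
    \quad\text{s.t.}\quad
    y_i - \langle w, \phi(x_i)\rangle - b \le \xi_i,\ \
    \langle w, \phi(x_i)\rangle + b - y_i \le \xi_i .
    \]
    % With f(x) = <w, phi(x)> + b, the optimum has xi_i = |y_i - f(x_i)|, so the penalty
    % is the Quadratic Loss on the residuals; the problem remains a Quadratic program and
    % can be kernelized through phi, which is what makes nonlinear interval regression
    % in a high-dimensional feature space tractable.
    ```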

  • Quadratic Loss support vector interval regression machine for crisp input output data
    Journal of the Korean Data and Information Science Society, 2004
    Co-Authors: Changha Hwang
    Abstract:

    Support vector machines (SVMs) have been very successful in pattern recognition and function estimation problems for crisp data. This paper proposes a new method to evaluate interval regression models for crisp input-output data. The proposed method is based on the Quadratic Loss SVM, which implements a Quadratic programming approach giving more diverse spread coefficients than a linear programming one. The proposed algorithm is a model-free method in the sense that we do not have to assume the underlying model function. Experimental results are then presented which indicate the performance of this algorithm.

George G Judge - One of the best experts on this subject based on the ideXlab platform.

  • combining estimators to improve structural model estimation and inference under Quadratic Loss
    Journal of Econometrics, 2005
    Co-Authors: Ron C Mittelhammer, George G Judge
    Abstract:

    Asymptotically, semiparametric estimators of the parameters in linear structural models have the same sampling properties. In finite samples the sampling properties of these estimators vary, and large biases may result for sample sizes often found in practice. With the goal of improving asymptotic risk performance and finite-sample efficiency, we investigate the idea of combining correlated structural equation estimators with different finite-sample and asymptotic sampling characteristics. Based on a Quadratic Loss measure, we present evidence that the finite-sample performance of the resulting combination estimator can be notably superior to that of a leading traditional moment-based estimator.

  • a semiparametric basis for combining estimation problems under Quadratic Loss
    Journal of the American Statistical Association, 2004
    Co-Authors: George G Judge, Ron C Mittelhammer
    Abstract:

    When there is uncertainty concerning the appropriate statistical model-estimator to use in representing the data sampling process, we consider a basis for optimally combining estimation problems. The objective is to produce natural adaptive estimators that are free of subjective choices and tuning parameters. In the context of two competing multivariate linear statistical models-estimators, we demonstrate a semiparametric Stein-like (SPSL) estimator that, under Quadratic Loss, has superior risk performance relative to the conventional least squares estimator. The relationship of the SPSL estimator to the family of Stein estimators is noted, and asymptotic and analytic finite-sample risk properties of the estimator are developed for some special cases. As an application we consider the problem of combining two polar linear models and demonstrate a corresponding SPSL estimator. An extensive sampling experiment is used to investigate the finite-sample performance of the SPSL estimator over a wide range of data sampling designs and symmetric and skewed distributions.

  • a semi parametric basis for combining estimation problems under Quadratic Loss
    Social Science Research Network, 2004
    Co-Authors: George G Judge, Ron C Mittelhammer
    Abstract:

    When there is uncertainty concerning the appropriate statistical model-estimator to use in representing the data sampling process, we consider a basis for optimally combining estimation problems. The objective is to produce natural adaptive estimators that are free of subjective choices and tuning parameters. In the context of two competing multivariate linear statistical models-estimators, we demonstrate a semi-parametric Stein-like (SPSL) estimator, beta overbar (alpha hat), that, under Quadratic Loss, has superior risk performance relative to the conventional least squares estimator. The relationship of the SPSL estimator to the family of Stein estimators is noted, and asymptotic and analytic finite-sample risk properties of the estimator are developed for some special cases. As an application we consider the problem of combining two polar linear models and demonstrate a corresponding SPSL estimator. An extensive sampling experiment is used to investigate the finite-sample performance of the SPSL estimator over a wide range of data sampling designs and symmetric and skewed distributions. Bootstrapping procedures are used to develop confidence sets and a basis for inference.

  • estimating the link function in multinomial response models under endogeneity and Quadratic Loss
    2004
    Co-Authors: George G Judge, Ron C Mittelhammer
    Abstract:

    This paper considers estimation and inference for the multinomial response model in the case where endogenous variables are arguments of the unknown link function. Semiparametric estimators are proposed that avoid the parametric assumptions underlying the likelihood approach as well as the Loss of precision when using nonparametric estimation. A data-based shrinkage estimator that seeks an optimal combination of estimators and results in superior risk performance under Quadratic Loss is also developed.

  • estimating the link function in multinomial response models under endogeneity and Quadratic Loss
    Research Papers in Economics, 2004
    Co-Authors: George G Judge, Ron C Mittelhammer
    Abstract:

    This paper considers estimation and inference for the multinomial response model in the case where endogenous variables are arguments of the unknown link function. Semiparametric estimators are proposed that avoid the parametric assumptions underlying the likelihood approach as well as the Loss of precision when using nonparametric estimation. A data-based shrinkage estimator that seeks an optimal combination of estimators and results in superior risk performance under Quadratic Loss is also developed.

Alessandro Sperduti - One of the best experts on this subject based on the ideXlab platform.

  • support vector regression with a generalized Quadratic Loss
    Italian Workshop on Neural Nets, 2005
    Co-Authors: Filippo Portera, Alessandro Sperduti
    Abstract:

    The standard SVR formulation for real-valued function approximation on multidimensional spaces is based on the ε-insensitive Loss function, where errors are considered uncorrelated. Because of this, local information in the feature space which could be useful for improving the prediction model is disregarded. In this paper we address this problem by defining a generalized Quadratic Loss where the co-occurrence of errors is weighted according to a kernel similarity measure in the feature space. We show that the resulting dual problem can be expressed as a hard-margin SVR in a different feature space when the co-occurrence error matrix is invertible. We compare our approach against a standard SVR on two regression tasks. Experimental results seem to show an improvement in performance.
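    A sketch of what such a generalized Quadratic Loss can look like (an illustrative form consistent with the description above, with S a kernel-derived error co-occurrence matrix; the authors' exact matrix and constraints may differ).

    ```latex
    \[
    \min_{w,\,b,\,\xi}\ \ \tfrac{1}{2}\lVert w\rVert^{2} + C\,\xi^{\top} S\,\xi ,
    \qquad S_{ij} = k(x_i, x_j),
    \]
    % Correlated errors on similar patterns (large k(x_i, x_j)) are penalized jointly;
    % with S = I this reduces to an ordinary uncorrelated Quadratic Loss on the slacks,
    % and when S is invertible the dual can be rewritten as a hard-margin problem in a
    % transformed feature space, as stated in the abstract.
    ```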

  • a generalized Quadratic Loss for support vector machines
    European Conference on Artificial Intelligence, 2004
    Co-Authors: Filippo Portera, Alessandro Sperduti
    Abstract:

    The standard SVM formulation for binary classification is based on the Hinge Loss function, where errors are considered uncorrelated. Because of this, local information in the feature space which could be useful for improving the prediction model is disregarded. In this paper we address this problem by defining a generalized Quadratic Loss where the co-occurrence of errors is weighted according to a kernel similarity measure in the feature space. In particular, the proposed approach weights pairs of errors according to the distribution of the related patterns in the feature space. The generalized Quadratic Loss also includes target information in order to penalize errors on pairs of patterns that are similar and of the same class. We show that the resulting dual problem can be expressed as a hard-margin SVM in a different feature space when the co-occurrence error matrix is invertible. We compare our approach against a standard SVM on some binary classification tasks. Experimental results obtained for different instances of the co-occurrence error matrix on these problems seem to show an improvement in performance.