Quantile Regression

The Experts below are selected from a list of 30,417 Experts worldwide ranked by the ideXlab platform

Roger Koenker - One of the best experts on this subject based on the ideXlab platform.

  • Quantile Regression: 40 years on
    2017
    Co-Authors: Roger Koenker
    Abstract:

    Since Quetelet's work in the nineteenth century, social science has iconified the average man, that hypothetical man without qualities who is comfortable with his head in the oven and his feet in a bucket of ice. Conventional statistical methods since Quetelet have sought to estimate the effects of policy treatments for this average man. However, such effects are often quite heterogeneous: Medical treatments may improve life expectancy but also impose serious short-term risks; reducing class sizes may improve the performance of good students but not help weaker ones, or vice versa. Quantile Regression methods can help to explore these heterogeneous effects. Some recent developments in Quantile Regression methods are surveyed in this review.
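As a concrete illustration of the heterogeneity point (a minimal sketch, not part of the original review): fitting the same linear specification at several quantile indexes lets a covariate's effect differ across the conditional distribution. The example below uses simulated data and the statsmodels QuantReg estimator; all variable names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

# Simulated heteroscedastic data: the covariate effect grows in the upper tail.
rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0, 1, n)
y = 1.0 + 0.5 * x + (0.2 + 1.5 * x) * rng.standard_normal(n)

X = sm.add_constant(x)                      # design matrix [1, x]
for tau in (0.1, 0.5, 0.9):
    fit = sm.QuantReg(y, X).fit(q=tau)
    print(f"tau={tau}: slope = {fit.params[1]:.2f}")
# The slope estimates differ markedly across tau -- exactly the heterogeneity
# that a single mean regression would average away.
```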

  • Quantile Regression methods for reference growth charts
    Statistics in Medicine, 2006
    Co-Authors: Ying Wei, Anneli Pere, Roger Koenker
    Abstract:

    Estimation of reference growth curves for children's height and weight has traditionally relied on normal theory to construct families of Quantile curves based on samples from the reference population. Age-specific parametric transformation has been used to significantly broaden the applicability of these normal theory methods. Non-parametric Quantile Regression methods offer a complementary strategy for estimating conditional Quantile functions. We compare estimated reference curves for height using the penalized likelihood approach of Cole and Green (Statistics in Medicine 1992; 11:1305–1319) with Quantile Regression curves based on data used for modern Finnish reference charts. An advantage of the Quantile Regression approach is that it is relatively easy to incorporate prior growth and other covariates into the analysis of longitudinal growth data. Quantile-specific autoregressive models for unequally spaced measurements are introduced and their application to diagnostic screening is illustrated.
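A rough sketch of the quantile-curve idea (not the paper's penalized or smoothing-spline implementation): regress the outcome on a flexible spline basis in age at several quantile levels and evaluate the fitted curves on an age grid. The formula interface and the bs() spline term come from statsmodels/patsy; the data and column names below are simulated and purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cross-sectional data: height (cm) versus age (years).
rng = np.random.default_rng(1)
age = rng.uniform(2, 18, 1500)
height = 80 + 6 * age + 4 * np.sqrt(age) * rng.standard_normal(1500)
df = pd.DataFrame({"age": age, "height": height})

# Reference-chart style: one curve per quantile, flexible in age via a B-spline basis.
grid = pd.DataFrame({"age": np.linspace(2, 18, 100)})
curves = {}
for tau in (0.03, 0.5, 0.97):
    fit = smf.quantreg("height ~ bs(age, df=5)", df).fit(q=tau)
    curves[tau] = fit.predict(grid)         # fitted quantile curve on the age grid
```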

  • Inequality constrained Quantile Regression
    Sankhya, 2005
    Co-Authors: Roger Koenker
    Abstract:

    An algorithm for computing parametric linear Quantile Regression estimates subject to linear inequality constraints is described. The algorithm is a variant of the interior point algorithm described in Koenker and Portnoy (1997) for unconstrained Quantile Regression and is consequently quite efficient even for large problems, particularly when the inherent sparsity of the resulting linear algebra is exploited. Applications to qualitatively constrained nonparametric Regression are described in the penultimate section. Implementations of the algorithm are available in MATLAB and R.
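Since the entry notes that constrained quantile regression is a linear program, a minimal dense sketch using scipy.optimize.linprog is given below. It is only an illustration of the formulation: the paper's algorithm is a specialized sparse interior point method, and its MATLAB/R implementations are not reproduced here. The matrix R and vector r encoding the inequality constraints R @ beta >= r are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def constrained_qr(X, y, tau, R, r):
    """Quantile regression subject to R @ beta >= r, posed as a linear program:
        min  tau * 1'u + (1 - tau) * 1'v
        s.t. X @ beta + u - v = y,  u, v >= 0,  R @ beta >= r.
    Dense illustration only; large problems call for sparse interior point solvers."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])          # X beta + u - v = y
    A_ub = np.hstack([-R, np.zeros((R.shape[0], 2 * n))])  # -R beta <= -r
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_ub=A_ub, b_ub=-r, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:p]
```

For example, requiring a nonnegative slope in a two-column design [1, x] corresponds to R = [[0.0, 1.0]] and r = [0.0].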

  • Quantile Regression for longitudinal data
    Journal of Multivariate Analysis, 2004
    Co-Authors: Roger Koenker
    Abstract:

    The penalized least squares interpretation of the classical random effects estimator suggests a possible way forward for Quantile Regression models with a large number of “fixed effects”. The introduction of a large number of individual fixed effects can significantly inflate the variability of estimates of other covariate effects. Regularization, or shrinkage of these individual effects toward a common value, can help to modify this inflation effect. A general approach to estimating Quantile Regression models for longitudinal data is proposed employing ℓ1 regularization methods. Sparse linear algebra and interior point methods for solving large linear programs are essential computational tools.
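For concreteness, the ℓ1-penalized objective described here can be written down directly. The sketch below shows it for a single quantile index with illustrative variable names; the paper treats several quantiles jointly with common individual effects, and in practice the minimization is reformulated as a sparse linear program rather than evaluated like this.

```python
import numpy as np

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - 1{u < 0}), the quantile regression loss
    return u * (tau - (u < 0))

def penalized_objective(beta, alpha, X, y, ids, tau, lam):
    # sum_ij rho_tau(y_ij - x_ij' beta - alpha_i)  +  lam * sum_i |alpha_i|,
    # where ids[j] gives the individual i for observation j and alpha holds
    # the individual "fixed effects" being shrunk toward a common value (zero).
    resid = y - X @ beta - alpha[ids]
    return check_loss(resid, tau).sum() + lam * np.abs(alpha).sum()
```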

  • Inference on the Quantile Regression Process
    2000
    Co-Authors: Roger Koenker
    Abstract:

    Quantile Regression is gradually evolving into a comprehensive approach to the statistical analysis of linear and nonlinear response models for conditional Quantile functions. Just as classical linear Regression methods based on minimizing sums of squared residuals enable one to estimate models for conditional mean functions, Quantile Regression methods based on minimizing asymmetrically weighted absolute residuals offer a mechanism for estimating models for the conditional median function, and the full range of other conditional Quantile functions. Tests based on the Quantile Regression process can be formulated like the classical Kolmogorov-Smirnov and Cramér-von Mises tests of goodness-of-fit, employing the theory of Bessel processes as in Kiefer (1959). However, it is frequently desirable to formulate hypotheses involving unknown nuisance parameters, thereby jeopardizing the distribution-free character of these tests. We characterize this situation as "the Durbin problem", since it was posed in Durbin (1973) for parametric empirical processes. In this paper we consider an approach to the Durbin problem involving a martingale transformation of the parametric empirical process suggested by Khmaladze (1981) and show that it can be adapted to a wide variety of inference problems involving the Quantile Regression process. In particular, we suggest new tests of the location shift and location-scale shift models that underlie much of classical econometric inference. The methods are illustrated in some limited Monte Carlo experiments and with a reanalysis of data on unemployment durations from the Pennsylvania Reemployment Bonus Experiments. The Pennsylvania experiments, conducted in 1988-89, were designed to test the efficacy of cash bonuses paid for early reemployment in shortening the duration of insured unemployment spells.
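To fix ideas only: under the location-shift model the slope coefficient process beta_1(tau) is constant in tau, so a Kolmogorov-Smirnov-type statistic can be formed from the estimated quantile regression process. The naive version below ignores the nuisance-parameter ("Durbin") problem that the paper solves with the Khmaladze martingale transformation, so its null distribution is not the standard one; it sketches the object being tested, not the paper's inference procedure. All data and names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def slope_process(X, y, taus):
    """Estimate the quantile regression slope process beta_1(tau) on a grid."""
    return np.array([sm.QuantReg(y, X).fit(q=t).params[1] for t in taus])

# Naive KS-type statistic for H0: beta_1(tau) constant in tau (illustration only).
rng = np.random.default_rng(2)
x = rng.uniform(size=1000)
y = 1 + 2 * x + (0.5 + x) * rng.standard_normal(1000)   # scale shift, so H0 is false
X = sm.add_constant(x)
taus = np.linspace(0.1, 0.9, 17)
b = slope_process(X, y, taus)
ks = np.sqrt(len(y)) * np.max(np.abs(b - b[len(taus) // 2]))  # deviation from the median-quantile slope
```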

Victor Chernozhukov - One of the best experts on this subject based on the ideXlab platform.

  • Fast algorithms for the Quantile Regression process
    Empirical Economics, 2020
    Co-Authors: Victor Chernozhukov, Ivan Fernandez-val, Blaise Melly
    Abstract:

    The widespread use of Quantile Regression methods depends crucially on the existence of fast algorithms. Despite numerous algorithmic improvements, the computation time is still non-negligible because researchers often estimate many Quantile Regressions and use the bootstrap for inference. We suggest two new fast algorithms for the estimation of a sequence of Quantile Regressions at many Quantile indexes. The first algorithm applies the preprocessing idea of Portnoy and Koenker (Stat Sci 12(4):279–300, 1997) but exploits a previously estimated Quantile Regression to guess the sign of the residuals. This step allows for a reduction in the effective sample size. The second algorithm starts from a previously estimated Quantile Regression at a similar Quantile index and updates it using a single Newton–Raphson iteration. The first algorithm is exact, while the second is only asymptotically equivalent to the traditional Quantile Regression estimator. We also apply the preprocessing idea to the bootstrap by using the sample estimates to guess the sign of the residuals in the bootstrap sample. Simulations show that our new algorithms provide very large improvements in computation time without significant (if any) cost in the quality of the estimates. For instance, they reduce by a factor of 100 the time required to estimate 99 Quantile Regressions with 20 regressors and 50,000 observations.
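A heavily simplified sketch of the preprocessing idea (not the authors' implementation): a fit at a nearby quantile index is used to guess the sign of each residual; observations whose sign is taken as known are collapsed into two aggregated pseudo-observations, and the quantile regression is re-solved on the much smaller remaining sample. The real algorithm chooses the band from a preliminary estimate, verifies the guessed signs after the reduced fit, and repairs any that were wrong; the band argument here is a placeholder.

```python
import numpy as np
import statsmodels.api as sm

def preprocessed_fit(X, y, tau, beta_prev, band):
    """Fit a quantile regression at tau on a reduced sample, using a previous
    estimate beta_prev (e.g., from a nearby quantile index) to guess residual
    signs. Simplified sketch of the preprocessing step only."""
    r = y - X @ beta_prev
    unsure = np.abs(r) <= band                 # keep these observations as-is
    below, above = r < -band, r > band         # residual signs treated as known
    # Collapse the "known sign" observations into two pseudo-observations whose
    # extreme responses pin their residual signs in the reduced problem.
    X_red = np.vstack([X[unsure], X[below].sum(axis=0), X[above].sum(axis=0)])
    y_red = np.concatenate([y[unsure], [-1e10, 1e10]])
    return sm.QuantReg(y_red, X_red).fit(q=tau).params
```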

  • Extremal Quantile Regression: An Overview
    2017
    Co-Authors: Victor Chernozhukov, Ivan Fernandez-val, Tetsuya Kaji
    Abstract:

    Extremal Quantile Regression, i.e. Quantile Regression applied to the tails of the conditional distribution, has a growing number of economic and financial applications, such as value-at-risk, production frontiers, determinants of low infant birth weights, and auction models. This chapter provides an overview of recent developments in the theory and empirics of extremal Quantile Regression. The advances in the theory have relied on the use of extreme value approximations to the law of the Koenker and Bassett (1978) Quantile Regression estimator. Extreme value laws have been shown not only to provide more accurate approximations than Gaussian laws in the tails, but also to serve as the basis for bias-corrected estimators and inference methods using simulation and suitable variations of the bootstrap and subsampling. The applicability of these methods is illustrated with two empirical examples on conditional value-at-risk and financial contagion.

  • Vector Quantile Regression
    2014
    Co-Authors: Guillaume Carlier, Victor Chernozhukov, Alfred Galichon
    Abstract:

    We propose a notion of conditional vector Quantile function and a vector Quantile Regression. A conditional vector Quantile function (CVQF) of a random vector Y taking values in ℝd, given covariates Z=z taking values in ℝk, is a map u↦QY∣Z(u,z) that is monotone in the sense of being the gradient of a convex function, and such that, if the vector U follows a reference non-atomic distribution FU (for instance, the uniform distribution on the unit cube in ℝd), then the random vector QY∣Z(U,z) has the distribution of Y conditional on Z=z. Moreover, we have a strong representation, Y=QY∣Z(U,Z) almost surely, for some version of U. The vector Quantile Regression (VQR) is a linear model for the CVQF of Y given Z. Under correct specification, the notion produces the strong representation Y=β(U)⊤f(Z), for f(Z) denoting a known set of transformations of Z, where u↦β(u)⊤f(Z) is a monotone map, the gradient of a convex function, and the Quantile Regression coefficients u↦β(u) have interpretations analogous to those of standard scalar Quantile Regression. As f(Z) becomes a richer class of transformations of Z, the model becomes nonparametric, as in series modelling. A key property of VQR is that it embeds the classical Monge-Kantorovich optimal transportation problem at its core as a special case. In the classical case, where Y is scalar, VQR reduces to a version of the classical QR, and the CVQF reduces to the scalar conditional Quantile function. Several applications to diverse problems, such as multiple Engel curve estimation and the measurement of financial risk, are considered.
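In the scalar special case mentioned at the end, the strong representation Y = QY∣Z(U, Z) is just the probability integral transform. A short sanity check on a simulated location-scale model, where the conditional quantile function is known in closed form (illustrative only, not the VQR estimator):

```python
import numpy as np
from scipy.stats import norm

# Scalar check of the representation Y = Q_{Y|Z}(U, Z) with U uniform.
rng = np.random.default_rng(0)
z = rng.uniform(1, 2, size=1000)
y = 1 + 2 * z + (0.5 * z) * rng.standard_normal(1000)   # Y | Z=z ~ N(1 + 2z, (0.5z)^2)

u = norm.cdf(y, loc=1 + 2 * z, scale=0.5 * z)            # U = F_{Y|Z}(Y | Z) is uniform
q = norm.ppf(u, loc=1 + 2 * z, scale=0.5 * z)            # Q_{Y|Z}(U, Z)
assert np.allclose(q, y)                                  # the representation holds exactly here
```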

  • Extremal Quantile Regression
    The Annals of Statistics, 2005
    Co-Authors: Victor Chernozhukov
    Abstract:

    Quantile Regression is an important tool for estimation of conditional Quantiles of a response Y given a vector of covariates X. It can be used to measure the effect of covariates not only in the center of a distribution, but also in the upper and lower tails. This paper develops a theory of Quantile Regression in the tails. Specifically, it obtains the large sample properties of extremal (extreme order and intermediate order) Quantile Regression estimators for the linear Quantile Regression model with the tails restricted to the domain of minimum attraction and closed under tail equivalence across regressor values. This modeling setup combines restrictions of extreme value theory with leading homoscedastic and heteroscedastic linear specifications of Regression analysis. In large samples, extreme order Regression Quantiles converge weakly to argmin functionals of stochastic integrals of Poisson processes that depend on regressors, while intermediate Regression Quantiles and their functionals converge to normal vectors with variance matrices dependent on the tail parameters and the regressor design.

Xue Huang - One of the best experts on this subject based on the ideXlab platform.

  • Block average Quantile Regression for massive dataset
    Statistical Papers, 2017
    Co-Authors: Chao Cai, Cuixia Jiang, Fang Sun, Xue Huang
    Abstract:

    Nowadays, researchers are frequently confronted with challenges from large-scale data computing. Quantile Regression on massive datasets is challenging due to the limitations of computer primary memory. Our proposed block average Quantile Regression provides a simple and efficient way to implement Quantile Regression on a massive dataset. The major novelty of this method is splitting the entire dataset into a few blocks, applying conventional Quantile Regression to the data within each block, and deriving final results by aggregating these Quantile Regression results via a simple averaging approach. While our approach can significantly reduce the storage volume needed for estimation, the resulting estimator is theoretically as efficient as the traditional Quantile Regression on the entire dataset. On the statistical side, asymptotic properties of the resulting estimator are investigated. We verify and illustrate our proposed method via extensive Monte Carlo simulation studies as well as a real-world application.
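The aggregation step described here is simple enough to sketch directly (illustrative code, not the authors' implementation): split the data into blocks, run an ordinary quantile regression within each block, and average the coefficient vectors across blocks.

```python
import numpy as np
import statsmodels.api as sm

def block_average_qr(X, y, tau, n_blocks):
    """Block average quantile regression: fit an ordinary quantile regression
    within each block and average the coefficient vectors across blocks."""
    betas = []
    for Xb, yb in zip(np.array_split(X, n_blocks), np.array_split(y, n_blocks)):
        betas.append(sm.QuantReg(yb, Xb).fit(q=tau).params)
    return np.mean(betas, axis=0)
```

If the rows are stored in some systematic order, the data should be shuffled before splitting so that the blocks behave like random subsamples.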

  • Weighted Quantile Regression via support vector machine
    Expert Systems with Applications, 2015
    Co-Authors: Jinxiu Zhang, Cuixia Jiang, Xue Huang
    Abstract:

    We propose a new support vector weighted Quantile Regression approach that is closely built upon the idea of the support vector machine. It extends the methodology of several popular Quantile Regressions to a more general approach, nesting them as special cases. The model can be estimated by solving the Lagrangian dual problem of a quadratic program and is able to implement nonlinear Quantile Regression by introducing a kernel function. Monte Carlo simulation studies show that the proposed approach outperforms some widely used Quantile Regression methods in terms of prediction accuracy. Finally, we demonstrate the efficacy of our proposed method on three benchmark data sets, where it again performs better in terms of prediction accuracy, illustrating the importance of taking into account the heterogeneous nonlinear structure among predictors across Quantiles.
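This is not the authors' exact weighted estimator, but the closely related kernel quantile regression program conveys the idea: a pinball (check) loss plus an RKHS penalty, solved as a convex program with a kernel matrix. The sketch below uses cvxpy for brevity; the kernel matrix K and the constant C are inputs, and all names are illustrative.

```python
import numpy as np
import cvxpy as cp

def kernel_quantile_regression(K, y, tau, C=1.0):
    """Kernel quantile regression sketch: minimize C * pinball loss + RKHS penalty.
    K is an n x n PSD kernel Gram matrix, e.g. K[i, j] = exp(-gamma * ||x_i - x_j||**2).
    Returns the representer coefficients alpha and the intercept b."""
    n = len(y)
    L = np.linalg.cholesky(K + 1e-8 * np.eye(n))          # K = L L', so ||f||^2 = ||L' alpha||^2
    alpha, b = cp.Variable(n), cp.Variable()
    r = y - K @ alpha - b                                  # residuals y_i - f(x_i) - b
    pinball = cp.sum(cp.maximum(tau * r, (tau - 1) * r))   # rho_tau(r) = max(tau r, (tau-1) r)
    objective = C * pinball + 0.5 * cp.sum_squares(L.T @ alpha)
    cp.Problem(cp.Minimize(objective)).solve()
    return alpha.value, b.value
```

The predicted conditional quantile at a new point x is then sum_j alpha_j k(x_j, x) + b.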

Claudia Czado - One of the best experts on this subject based on the ideXlab platform.

  • D-vine copula based Quantile Regression
    Computational Statistics & Data Analysis, 2017
    Co-Authors: Daniel Kraus, Claudia Czado
    Abstract:

    Quantile Regression, that is, the prediction of conditional Quantiles, has steadily gained importance in statistical modeling and financial applications. A new semiparametric Quantile Regression method is introduced. It is based on sequentially fitting a likelihood-optimal D-vine copula to given data, resulting in highly flexible models with easily extractable conditional Quantiles. As a subclass of regular vine copulas, D-vines enable the modeling of multivariate copulas in terms of bivariate building blocks, a so-called pair-copula construction (PCC). The proposed algorithm works quickly and accurately even in high dimensions and incorporates automatic variable selection by maximizing the conditional log-likelihood. Further, typical issues of Quantile Regression such as Quantile crossing or transformations, interactions and collinearity of variables are automatically taken care of. In a simulation study, the improved accuracy and reduced computation time of the approach in comparison with established Quantile Regression methods are highlighted. An extensive financial application to international credit default swap (CDS) data, including stress testing and Value-at-Risk (VaR) prediction, demonstrates the usefulness of the proposed method.
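As a toy, one-covariate version of the copula idea (a single Gaussian pair copula with empirical margins, not the paper's sequentially fitted D-vine): the conditional quantile is obtained by inverting the copula's conditional distribution (an h-function) and mapping the result back through the response's marginal quantile function. All names below are illustrative.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_cond_quantile(x, y, x0, tau):
    """tau-quantile of Y given X = x0 under a Gaussian pair copula with
    empirical margins: a one-covariate toy version of copula-based
    quantile regression."""
    n = len(x)
    u_x = rankdata(x) / (n + 1)                  # pseudo-observations for X
    u_y = rankdata(y) / (n + 1)                  # pseudo-observations for Y
    rho = np.corrcoef(norm.ppf(u_x), norm.ppf(u_y))[0, 1]    # copula parameter
    u_x0 = np.searchsorted(np.sort(x), x0) / (n + 1)          # empirical margin of X at x0
    # Invert the conditional copula C_{V|U}(. | u_x0): for the Gaussian copula this is
    # Phi(rho * Phi^{-1}(u_x0) + sqrt(1 - rho^2) * Phi^{-1}(tau)).
    v = norm.cdf(rho * norm.ppf(u_x0) + np.sqrt(1 - rho**2) * norm.ppf(tau))
    return np.quantile(y, v)                     # back through Y's marginal quantile function
```

By construction the fitted quantiles are monotone in tau, so quantile crossing cannot occur, which is the point the abstract makes for the full D-vine model.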

  • D-vine Quantile Regression with discrete variables
    arXiv: Methodology, 2017
    Co-Authors: Niklas Schallhorn, Daniel Kraus, Thomas Nagler, Claudia Czado
    Abstract:

    Quantile Regression, the prediction of conditional Quantiles, finds applications in various fields. Often, some or all of the variables are discrete. The authors propose two new Quantile Regression approaches to handle such mixed discrete-continuous data. Both of them generalize the continuous D-vine Quantile Regression, where the dependence between the response and the covariates is modeled by a parametric D-vine. D-vine Quantile Regression provides very flexible models that enable accurate and fast predictions. Moreover, it automatically takes care of major issues of classical Quantile Regression, such as Quantile crossing and interactions between the covariates. The first approach keeps the parametric estimation of the D-vines but modifies the formulas to account for the discreteness. The second approach uses continuous convolution to make the discrete variables continuous and then estimates the D-vine nonparametrically. A simulation study examines the scenarios in which the discrete-continuous D-vine Quantile Regression can provide superior prediction ability. Lastly, the functionality of the two introduced methods is demonstrated by a real-world example predicting the number of bike rentals.

César Sánchez-sellero - One of the best experts on this subject based on the ideXlab platform.

  • A plug-in bandwidth selector for nonparametric Quantile Regression
    TEST, 2019
    Co-Authors: Mercedes Conde-amboage, César Sánchez-sellero
    Abstract:

    In the framework of Quantile Regression, local linear smoothing techniques have been studied by several authors, particularly by Yu and Jones (J Am Stat Assoc 93:228–237, 1998). The problem of bandwidth selection was addressed in the literature by the usual approaches, such as cross-validation or plug-in methods. Most of the plug-in methods rely on restrictive assumptions on the Quantile Regression model in relation to the mean Regression, or on parametric assumptions. Here we present a plug-in bandwidth selector for nonparametric Quantile Regression that is built from a completely nonparametric approach. To this end, the curvature of the Quantile Regression function and the integrated squared sparsity (the inverse of the conditional density) are both nonparametrically estimated. The new bandwidth selector is shown to work well in different simulated scenarios, particularly when the conditions commonly assumed in the literature are not satisfied. A real data application is also given.
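For context, the local linear quantile smoother whose bandwidth is being selected can be written compactly: at a point x0, minimize a kernel-weighted check loss over a local intercept and slope. The sketch below exploits the positive homogeneity of the check loss to reuse an ordinary quantile regression routine on reweighted data; the bandwidth h is taken as given, since choosing it is exactly what the paper's plug-in rule addresses. Variable names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def local_linear_qr(x, y, x0, tau, h):
    """Local linear quantile regression estimate of Q_{Y|X}(tau | x0) with a
    Gaussian kernel and bandwidth h. Because rho_tau(w * u) = w * rho_tau(u)
    for w > 0, the kernel-weighted fit equals an unweighted quantile
    regression on rows rescaled by their weights."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)        # Gaussian kernel weights
    keep = w > 1e-8                                # drop numerically null weights
    Z = np.column_stack([w[keep], w[keep] * (x[keep] - x0)])   # rescaled [1, x - x0]
    fit = sm.QuantReg(w[keep] * y[keep], Z).fit(q=tau)
    return fit.params[0]                           # local intercept = quantile estimate at x0
```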