Running Interval Smoother

The Experts below are selected from a list of 33 Experts worldwide ranked by ideXlab platform

Rand R. Wilcox - One of the best experts on this subject based on the ideXlab platform.

  • Comparisons of Two Quantile Regression Smoothers
    Journal of Modern Applied Statistical Methods, 2016
    Co-Authors: Rand R. Wilcox
    Abstract:

    The small-sample properties of two non-parametric quantile regression estimators are compared. The first is based on constrained B-spline smoothing (COBS) and the other is based on a variation and slight extension of a Running Interval Smoother. R functions for applying the methods were used in conjunction with default settings for the various optional arguments. Results indicate that the modified Running Interval Smoother has practical value. Manipulation of the optional arguments might impact the relative merits of the two methods, but the extent to which this is the case remains unknown.
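
The running interval smoother at the heart of this comparison is conceptually simple: for each design point, estimate a quantile (or another robust measure of location) of the y values whose x values fall within a fixed span of that point. The following is a minimal Python sketch of the idea only; the span rule based on the rescaled median absolute deviation is an assumption here, and Wilcox's R functions handle details such as span selection differently.

```python
import numpy as np

def running_interval_smoother(x, y, span=0.8, q=0.5):
    """For each x_i, estimate the q-th quantile of y among the points j
    with |x_j - x_i| <= span * (rescaled MAD of x).  A sketch of the
    basic idea only; the span rule is an illustrative assumption."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Robust scale of x: median absolute deviation, rescaled to be
    # consistent with the standard deviation under normality.
    mad = np.median(np.abs(x - np.median(x))) / 0.6745
    h = span * mad
    fitted = np.empty_like(x)
    for i, xi in enumerate(x):
        near = np.abs(x - xi) <= h      # points inside the running interval
        fitted[i] = np.quantile(y[near], q)
    return fitted

# Usage: median smooth of noisy quadratic data.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 200))
y = x**2 + rng.normal(0, 0.2, 200)
yhat = running_interval_smoother(x, y, span=0.5, q=0.5)
```

Setting q to 0.75 or 0.9 gives the quantile-regression flavor compared against COBS in the abstract above.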

  • Comparisons of two quantile regression Smoothers
    arXiv: Methodology, 2015
    Co-Authors: Rand R. Wilcox
    Abstract:

    The paper compares the small-sample properties of two non-parametric quantile regression estimators. The first is based on constrained B-spline smoothing (COBS) and the other is based on a variation and slight extension of a Running Interval Smoother, which apparently has not been studied via simulations. The motivation for this paper stems from the Well Elderly 2 study, a portion of which was aimed at understanding the association between the cortisol awakening response and two measures of stress. COBS indicated what appeared to be an unusual form of curvature. The modified Running Interval Smoother gave a strikingly different estimate, which raised the issue of how it compares to COBS in terms of mean squared error and bias as well as its ability to avoid a spurious indication of curvature. R functions for applying the methods were used in conjunction with default settings for the various optional arguments. The results indicate that the modified Running Interval Smoother has practical value. Manipulation of the optional arguments might impact the relative merits of the two methods, but the extent to which this is the case remains unknown.

  • Nonparametric Regression When Estimating the Probability of Success: A Comparison of Four Extant Estimators
    Journal of Statistical Theory and Practice, 2012
    Co-Authors: Rand R. Wilcox
    Abstract:

    For the random variables Y, X_1, ..., X_p, where Y is binary, let M(x_1, ..., x_p) = P(Y = 1 | (X_1, ..., X_p) = (x_1, ..., x_p)). The article compares four Smoothers aimed at estimating M(x_1, ..., x_p), three of which can be used when p > 1. Evidently there are no published comparisons of Smoothers when p > 1 and Y is binary. One of the estimators stems from Hosmer and Lemeshow (1989, 85), which is limited to p = 1. A simple modification of this estimator (called method E3 here) is proposed that can be used when p > 1. Roughly, a weighted mean of the Y values is used, where the weights are based on a robust analog of Mahalanobis distance that replaces the usual covariance matrix with the minimum volume estimator. Another estimator stems from Signorini and Jones (1984) and is based in part on an estimate of the probability density function of X_1, ..., X_p. Here, an adaptive kernel density estimator is used. No estimator dominated in terms of mean squared error and bias. And for p = 1, the differences among three of the estimators, in terms of mean squared error and bias, are not particularly striking. But for p > 1, differences among the estimators are magnified, with method E3 performing relatively well. An estimator based on the Running Interval Smoother performs about as well as E3, but for general use, E3 is found to be preferable. The estimator studied by Signorini and Jones (1984) is not recommended, particularly when p > 1.
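
The weighted-mean idea behind method E3 can be sketched as follows. This is a simplified illustration, not the published method: it uses the ordinary sample covariance matrix in the Mahalanobis distance, whereas E3 as described above substitutes a robust minimum volume estimator, and the Gaussian kernel and function name are assumptions.

```python
import numpy as np

def prob_success(x0, X, y, h=1.0):
    """Weighted-mean estimate of P(Y = 1 | X = x0) for binary y:
    weight each y_i by a Gaussian kernel of the Mahalanobis distance
    from x_i to x0.  Simplification: the ordinary covariance matrix is
    used here; method E3 replaces it with a robust (minimum volume)
    estimator."""
    X = np.atleast_2d(np.asarray(X, float))
    y = np.asarray(y, float)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - np.asarray(x0, float)
    d2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)  # squared Mahalanobis distances
    w = np.exp(-0.5 * d2 / h**2)                  # Gaussian kernel weights
    return float(np.sum(w * y) / np.sum(w))

# Usage: p = 2 covariates, logistic-style data; estimate at the origin,
# where the true probability of success is 0.5.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = rng.binomial(1, 1 / (1 + np.exp(-X.sum(axis=1))))
est = prob_success([0.0, 0.0], X, y, h=0.5)
```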

  • Robust ANCOVA using a Smoother with bootstrap bagging.
    The British journal of mathematical and statistical psychology, 2008
    Co-Authors: Rand R. Wilcox
    Abstract:

    Many robust analogs of the classic analysis of covariance (ANCOVA) method have been proposed, some of which are based on some type of regression Smoother. A method that first appeared in this journal, which is relatively simple and performs well in simulations, is based on a Running Interval Smoother combined with comparing medians or 20% trimmed means. It makes no parametric assumption about the regression lines and does not assume that the regression lines are parallel. A possible way of improving the efficiency of the Running Interval Smoother is to use bootstrap bagging and a minor goal here is to report some results supporting this approach. The major goal is to consider how ANCOVA might be performed when bootstrap bagging is used. Simple extensions of extant approaches that use some type of bootstrap method were found to be unsatisfactory. However, a basic percentile bootstrap method was found to perform well in simulations. And a reanalysis of data dealing with teachers' expectations about the cognitive ability of students illustrates that bootstrap bagging can make a practical difference.
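
Bootstrap bagging of a smoother means refitting it on bootstrap resamples of the (x, y) pairs and averaging the resulting estimates at each point of interest. A minimal sketch, pairing a running-interval 20% trimmed mean with plain bagging; function names and the span rule are illustrative assumptions, not Wilcox's implementation.

```python
import numpy as np

def ri_smooth_at(x0, x, y, span=0.8):
    """Running-interval estimate at one point: the 20% trimmed mean of
    the y values whose x values lie within span * (rescaled MAD of x)
    of x0."""
    mad = np.median(np.abs(x - np.median(x))) / 0.6745
    near = np.sort(y[np.abs(x - x0) <= span * mad])
    if near.size == 0:
        return np.nan
    g = int(0.2 * near.size)            # trim 20% from each tail
    return near[g:near.size - g].mean()

def bagged_smoother(x0, x, y, span=0.8, n_boot=100, seed=0):
    """Bootstrap bagging: refit the smoother on bootstrap resamples of
    the (x, y) pairs and average the estimates at x0."""
    rng = np.random.default_rng(seed)
    n = len(x)
    fits = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)     # resample pairs with replacement
        fits.append(ri_smooth_at(x0, x[idx], y[idx], span))
    return np.nanmean(fits)

# Usage: linear data through the origin; the bagged estimate at x = 0
# should be close to 0.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 300)
y = 2 * x + rng.normal(0, 0.3, 300)
est = bagged_smoother(0.0, x, y, span=0.3, n_boot=50)
```

The averaging over resamples is what smooths out the jumpiness of the raw running-interval fit, which is the efficiency gain the abstract refers to; the percentile bootstrap inference built on top of this is not sketched here.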

  • Robust ANCOVA: Some Small-sample Results when there are Multiple Groups and Multiple Covariates
    Journal of Applied Statistics, 2007
    Co-Authors: Rand R. Wilcox
    Abstract:

    Numerous methods have been proposed for dealing with the serious practical problems associated with the conventional analysis of covariance method, with an emphasis on comparing two groups when there is a single covariate. Recently, Wilcox (2005a: section 11.8.2) outlined a method for handling multiple covariates that allows nonlinearity and heteroscedasticity. The method is readily extended to multiple groups, but nothing is known about its small-sample properties. This paper compares three variations of the method, each based on one of three measures of location: means, medians, and 20% trimmed means. The methods based on a 20% trimmed mean or median are found to avoid Type I error probabilities well above the nominal level, but the method based on medians can be too conservative in various situations; using a 20% trimmed mean gave the best results in terms of Type I errors. The methods are based in part on a Running Interval Smoother approximation of the regression surface. Included are comments on required sample sizes that are relevant to the so-called curse of dimensionality.
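
The core of the smoother-based ANCOVA can be sketched as follows: at each chosen covariate design point, collect the y values in each group whose covariate lies in the running interval, then compare 20% trimmed means. This illustration covers only the point estimates; the actual method also requires the inferential machinery (tests at the design points with control over Type I errors), and the names and span rule here are assumptions.

```python
import numpy as np

def trimmed_mean(v, prop=0.2):
    """Mean after removing the proportion `prop` from each tail."""
    v = np.sort(np.asarray(v, float))
    g = int(prop * v.size)
    return v[g:v.size - g].mean()

def ancova_points(x1, y1, x2, y2, points, span=0.8):
    """At each covariate design point, estimate each group's 20% trimmed
    mean of y over the running interval and return the group differences.
    No parametric form and no parallelism of the regression lines is
    assumed."""
    out = []
    for x0 in points:
        locs = []
        for x, y in ((x1, y1), (x2, y2)):
            mad = np.median(np.abs(x - np.median(x))) / 0.6745
            near = y[np.abs(x - x0) <= span * mad]
            locs.append(trimmed_mean(near))
        out.append(locs[0] - locs[1])
    return np.array(out)

# Usage: two groups with parallel lines shifted by 0.5; the estimated
# differences at each design point should be near -0.5.
rng = np.random.default_rng(3)
x1 = rng.uniform(0, 1, 200); y1 = x1 + rng.normal(0, 0.2, 200)
x2 = rng.uniform(0, 1, 200); y2 = x2 + 0.5 + rng.normal(0, 0.2, 200)
d = ancova_points(x1, y1, x2, y2, [0.3, 0.5, 0.7], span=0.5)
```

With multiple covariates, the interval around each design point becomes a multivariate neighborhood (e.g. defined via a robust Mahalanobis distance), which is where the curse of dimensionality and the sample-size comments in the abstract come in.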