The Experts below are selected from a list of 183 Experts worldwide ranked by the ideXlab platform.
William E. Strawderman - One of the best experts on this subject based on the ideXlab platform.
-
On Bayes and unbiased Estimators of loss
Annals of the Institute of Statistical Mathematics, 2003. Co-Authors: Dominique Fourdrinier, William E. Strawderman. Abstract: We consider estimation of loss for generalized Bayes or pseudo-Bayes estimators of a multivariate normal mean vector, θ. In three and higher dimensions, the MLE X is UMVUE and minimax but is inadmissible; it is dominated by the James–Stein estimator and by many others. Johnstone (1988, On inadmissibility of some unbiased estimates of loss, Statistical Decision Theory and Related Topics, IV (eds. S. S. Gupta and J. O. Berger), Vol. 1, 361–379, Springer, New York) considered the estimation of loss for the usual estimator X and the James–Stein estimator, and found improvements over Stein's unbiased estimator of risk. In this paper, for a generalized Bayes point estimator of θ, we compare generalized Bayes estimators to unbiased estimators of loss. We find, somewhat surprisingly, that the unbiased estimator often dominates the corresponding generalized Bayes estimator of loss for priors which give minimax estimators in the original point estimation problem. In particular, we give a class of priors for which the generalized Bayes estimator of θ is admissible and minimax but for which the unbiased estimator of loss dominates the generalized Bayes estimator of loss. We also give a general inadmissibility result for a generalized Bayes estimator of loss.
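The unbiased estimation of loss discussed here can be illustrated numerically. For X ~ N_p(θ, I) and the James–Stein point estimator, Stein's unbiased estimator of the loss ‖δ(X) − θ‖² is p − (p − 2)²/‖X‖². The sketch below is a Monte Carlo check with an arbitrarily chosen θ, not the paper's generalized Bayes setting:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_sims = 10, 20_000
theta = np.ones(p)  # arbitrary true mean, chosen for illustration

X = rng.normal(theta, 1.0, size=(n_sims, p))
norm2 = np.sum(X**2, axis=1)

# James-Stein point estimator of theta
delta = (1.0 - (p - 2) / norm2)[:, None] * X

# Realized loss and Stein's unbiased estimator of that loss
actual_loss = np.sum((delta - theta) ** 2, axis=1)
unbiased_loss = p - (p - 2) ** 2 / norm2

# Unbiasedness: the two averages agree up to Monte Carlo error
print(actual_loss.mean(), unbiased_loss.mean())
```

The two printed averages coincide up to simulation noise, which is the unbiasedness property the paper's comparisons start from.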
-
Estimation of a parameter vector restricted to a cone
Statistics & Probability Letters, 2002. Co-Authors: Idir Ouassou, William E. Strawderman. Abstract: We study estimation of a location vector restricted to a convex cone when the dimension, p, is at least 3. We find estimators which improve on the "usual" estimator (the MLE in the normal case) in the general case of a spherically symmetric distribution with unknown scale. The improved estimators may be viewed as Stein-type shrinkage estimators on the set where the usual unbiased estimator (in the unrestricted case) satisfies the restriction. The improved procedures have the extremely strong property of improving on the "usual" estimator uniformly and simultaneously for all spherically symmetric distributions.
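A rough illustration of the idea of shrinking only where the unrestricted estimator already satisfies the restriction: the sketch below uses the nonnegative orthant as the cone and the classical (p − 2) shrinkage constant, both illustrative assumptions; the paper's estimators, which also handle unknown scale, are more general.

```python
import numpy as np

def cone_shrinkage(x, a=None):
    """Stein-type shrinkage applied only where the unrestricted
    estimator already satisfies the cone restriction (here the
    nonnegative orthant); otherwise project onto the cone.
    A sketch of the idea only, not the paper's estimator."""
    x = np.asarray(x, dtype=float)
    p = x.size
    if a is None:
        a = p - 2  # classical shrinkage constant (an assumption)
    nx2 = np.dot(x, x)
    if np.all(x >= 0) and nx2 > 0:   # x satisfies the restriction
        return (1.0 - a / nx2) * x
    return np.maximum(x, 0.0)        # otherwise project onto the orthant
```

For x = [3, 1, 2], which lies in the cone, the estimator shrinks by the factor 1 − 1/14; for a vector with a negative coordinate it simply projects.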
-
Sequential estimation of the variance of a normal distribution
Sequential Analysis, 1995. Co-Authors: Jayalakshmi Natarajan, Robert Wood Johnson, William E. Strawderman. Abstract: In estimating the variance of a normal distribution when the loss function is essentially squared error, a sequential version of Stein's estimator is used to show the existence of sequential estimators which are better, both in risk (expected loss) and in sample size, than the usual estimator based on a given fixed sample size.
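The nonsequential building block here is Stein's (1964) variance estimator, which dominates the best multiple S/(n + 1) of the centered sum of squares under squared-error loss; a minimal fixed-sample sketch:

```python
import numpy as np

def stein_variance(x):
    """Stein's (1964) estimator of a normal variance with unknown mean.
    It dominates the usual choice s/(n+1) (the best multiple of the
    centered sum of squares s) under squared-error loss.  Fixed-sample
    sketch; the paper studies a sequential version."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = np.sum((x - x.mean()) ** 2)
    return min(s / (n + 1), (s + n * x.mean() ** 2) / (n + 2))
```

The second term inside the min exploits the sample mean: when the data suggest the mean is near zero, the estimator borrows the extra degree of freedom.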
-
Stein estimation for non-normal spherically symmetric location families in three dimensions
Journal of Multivariate Analysis, 1992. Co-Authors: Stefan S. Ralescu, Ann Cohen Brandwein, William E. Strawderman. Abstract: We consider estimation of a location vector in the presence of a known or unknown scale parameter in three dimensions. The technique of proof is Stein's integration by parts, and it is used to cover several cases (e.g., non-unimodal distributions) for which previous results were known only in four and higher dimensions. Additionally, we give a necessary and sufficient condition on the shrinkage constant for improvement on the usual estimator for the spherical uniform distribution.
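The integration-by-parts step referred to here is Stein's identity: for X ~ N(θ, 1) and suitably smooth g, E[(X − θ)g(X)] = E[g′(X)]. A quick Monte Carlo check with g(x) = x², at an arbitrarily chosen θ:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 1.0  # arbitrary location, chosen for illustration
x = rng.normal(theta, 1.0, size=200_000)

# Stein's identity: E[(X - theta) g(X)] = E[g'(X)] for X ~ N(theta, 1),
# checked with g(x) = x^2, so g'(x) = 2x; both sides equal 2 at theta = 1.
lhs = np.mean((x - theta) * x**2)
rhs = np.mean(2.0 * x)
```

This identity is what converts cross terms in the risk of a shrinkage estimator into expectations of derivatives, the key step in such domination proofs.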
-
A James–Stein Type Estimator for Combining Unbiased and Possibly Biased Estimators
Journal of the American Statistical Association, 1991. Co-Authors: Edwin J Green, William E. Strawderman. Abstract: We present a method for combining unbiased sample data with possibly biased auxiliary information. The estimator we derive is similar in spirit to the James–Stein estimator. We prove that the estimator dominates the sample mean under quadratic loss. When the auxiliary information is unbiased, our estimator has risk slightly greater than the usual combined estimator. As the bias increases, however, the risk of the usual estimator is unbounded, while the risk of our estimator is bounded by the risk of the sample mean. We show how our estimator can be considered an approximation to the best linear combination of the sample data and the auxiliary information, indicate how it can be derived as an empirical Bayes estimator, and suggest a method for constructing confidence sets. Finally, the performance of our estimator is compared to that of the sample mean and the usual combined estimator using real forestry data.
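The spirit of the construction, shrinking the sample mean toward the auxiliary value by a James–Stein-type factor, can be sketched as follows; the shrinkage constant and positive-part truncation below are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def js_combine(xbar, aux, sigma2, n):
    """Combine an unbiased sample mean with possibly biased auxiliary
    information by shrinking xbar toward aux, James-Stein style.
    The (p - 2) constant and positive-part truncation are illustrative
    assumptions, not the paper's exact form."""
    xbar = np.asarray(xbar, dtype=float)
    aux = np.asarray(aux, dtype=float)
    p = xbar.size
    d = xbar - aux
    dd = np.dot(d, d)
    if dd == 0.0:
        return xbar
    shrink = 1.0 - (p - 2) * sigma2 / (n * dd)
    return aux + max(shrink, 0.0) * d
```

When the auxiliary value is badly biased, ‖xbar − aux‖² is large, the shrinkage factor approaches 1, and the estimator falls back toward the sample mean, which is the bounded-risk behavior the abstract describes.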
Kazuhiro Ohtani
-
Comparison of the Stein and the Usual Estimators for the regression error variance under the Pitman nearness criterion when variables are omitted
Statistical Papers, 2007. Co-Authors: Kazuhiro Ohtani, Alan T. K. Wan. Abstract: This paper compares the Stein and the usual estimators of the error variance under the Pitman nearness (PN) criterion in a regression model which is mis-specified owing to missing relevant explanatory variables. The exact expression for the PN probability is derived and evaluated numerically. Contrary to the well-known result under mean squared error (MSE), under the PN criterion the Stein variance estimator is uniformly dominated by the usual estimator when no relevant variables are excluded from the model. As the degree of model mis-specification increases, neither estimator strictly dominates the other.
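The PN comparison can be approximated by simulation. The sketch below uses a plain i.i.d. normal sample (standing in for the correctly specified case, not the paper's regression setup) with Stein's variance estimator; pn_usual estimates the probability that the usual estimator's squared error is strictly smaller.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2, n_sims = 10, 1.0, 5_000

# Correctly specified case: i.i.d. normal samples, no omitted variables.
x = rng.normal(0.0, np.sqrt(sigma2), size=(n_sims, n))
xbar = x.mean(axis=1)
s = np.sum((x - xbar[:, None]) ** 2, axis=1)

usual = s / (n + 1)  # best multiple of s under MSE
stein = np.minimum(usual, (s + n * xbar**2) / (n + 2))

# Estimated PN probability that the usual estimator's squared error is
# strictly smaller (ties, where the two estimators coincide, favor neither).
pn_usual = np.mean((usual - sigma2) ** 2 < (stein - sigma2) ** 2)
```

How ties are counted matters here, since the two estimators coincide on part of the sample space; the paper works with the exact PN probability rather than a simulation.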
-
Exact distribution of a pre-test Estimator for regression error variance when there are omitted variables
Statistics & Probability Letters, 2002. Co-Authors: Kazuhiro Ohtani. Abstract: In this paper, we derive the exact distribution of a pre-test estimator for the regression error variance when relevant independent variables are omitted, and show theoretically that the MSE dominance of the Stein variance estimator over the usual estimator is robust to this specification error.
-
The Exact Distribution and Density Functions of the Stein-Type Estimator for Normal Variance
Communications in Statistics - Theory and Methods, 1993. Co-Authors: Kazuhiro Ohtani. Abstract: In this paper, we derive the exact distribution and density functions of the Stein-type estimator for the normal variance. Numerical evaluation shows that the density function of the Stein-type estimator is unimodal and concentrates around the mode more than that of the usual estimator.
M. Z. Anis
-
Estimating the Mean of Normal Distribution with Known Coefficient of Variation
American Journal of Mathematical and Management Sciences, 2008. Co-Authors: M. Z. Anis. Synoptic Abstract: In many situations, the coefficient of variation is known even though the mean and variance may not be. This additional information on the coefficient of variation can be used to improve upon the usual estimator of the unknown mean. Three biased but simple estimators for the mean of the normal distribution when the coefficient of variation is known are proposed and their properties are studied. The performance of these estimators is compared with some other existing methods, and it turns out that the new proposed estimators compete well.
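One simple biased estimator of this kind: when cv = σ/μ is known, the multiple of the sample mean that minimizes squared-error risk is x̄/(1 + cv²/n), since E[x̄²] = μ²(1 + cv²/n). This is an illustrative example, not necessarily one of the paper's three proposals.

```python
import numpy as np

def cv_adjusted_mean(x, cv):
    """Estimate a normal mean when the coefficient of variation
    cv = sigma/mu is known: among multiples a*xbar of the sample mean,
    squared-error risk is minimized by a = 1/(1 + cv^2/n).  An
    illustrative example, not necessarily one of the paper's proposals."""
    x = np.asarray(x, dtype=float)
    return x.mean() / (1.0 + cv**2 / x.size)
```

The estimator is biased downward (in absolute value) but trades that bias for reduced variance, which is exactly how knowledge of the coefficient of variation pays off.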
Edwin J Green
-
A James–Stein Type Estimator for Combining Unbiased and Possibly Biased Estimators
Journal of the American Statistical Association, 1991. Co-Authors: Edwin J Green, William E. Strawderman. Abstract: We present a method for combining unbiased sample data with possibly biased auxiliary information. The estimator we derive is similar in spirit to the James–Stein estimator. We prove that the estimator dominates the sample mean under quadratic loss. When the auxiliary information is unbiased, our estimator has risk slightly greater than the usual combined estimator. As the bias increases, however, the risk of the usual estimator is unbounded, while the risk of our estimator is bounded by the risk of the sample mean. We show how our estimator can be considered an approximation to the best linear combination of the sample data and the auxiliary information, indicate how it can be derived as an empirical Bayes estimator, and suggest a method for constructing confidence sets. Finally, the performance of our estimator is compared to that of the sample mean and the usual combined estimator using real forestry data.
Nels Grevstad
-
Simultaneous estimation of negative binomial dispersion parameters
Statistics & Probability Letters, 2011. Co-Authors: Nels Grevstad. Abstract: Simultaneous estimators of the dispersion parameters in m negative binomial distributions with known success probabilities are shown to dominate the usual estimator under squared-error loss by shrinking it toward either a fixed or a data-driven point.
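A generic Stein-type version of such shrinkage toward a data-driven point, assuming each xᵢ counts failures before the rᵢ-th success so that xᵢpᵢ/(1 − pᵢ) is the usual unbiased estimator of rᵢ; the shrinkage constant used here is an illustrative assumption, not the constant derived in the paper.

```python
import numpy as np

def shrink_dispersions(x, p, c=None):
    """Shrink the usual unbiased estimators r_hat_i = x_i * p_i/(1 - p_i)
    of m negative binomial dispersion parameters toward the data-driven
    point r_hat.mean().  Assumes x_i counts failures before the r_i-th
    success with known success probability p_i.  The shrinkage constant
    c is an illustrative assumption, not the paper's derived constant."""
    x = np.asarray(x, dtype=float)
    p = np.asarray(p, dtype=float)
    r_hat = x * p / (1.0 - p)
    m = r_hat.size
    if c is None:
        c = max(m - 2, 0)
    d = r_hat - r_hat.mean()
    dd = np.dot(d, d)
    if dd == 0.0:
        return r_hat
    return r_hat - c * d / dd
```

As in the normal-mean problem, pooling across the m problems is what makes simultaneous improvement possible; no single-problem shrinkage of this form can dominate.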