Sampling Variance

Cédric J Sallaberry - One of the best experts on this subject based on the ideXlab platform.

  • Use of replicated Latin hypercube sampling to estimate sampling variance in uncertainty and sensitivity analysis results for the geologic disposal of radioactive waste
    Reliability Engineering & System Safety, 2012
    Co-Authors: Clifford W Hansen, Jon C Helton, Cédric J Sallaberry
    Abstract:

    The 2008 performance assessment (PA) for the proposed repository for high-level radioactive waste at Yucca Mountain (YM), Nevada, used a Latin hypercube sample (LHS) of size 300 to propagate the epistemic uncertainty present in 392 analysis input variables. To assess the adequacy of this sample size, the 2008 YM PA was repeated with three independently generated (i.e., replicated) LHSs of size 300 drawn from the indicated 392 input variables and their associated distributions. Comparison of the uncertainty and sensitivity analysis results obtained with the three replicated LHSs showed that the three samples led to similar results and that any one of the three samples would have produced the same assessment of the effects and implications of epistemic uncertainty. Results obtained with the three LHSs were compared by (i) simple visual inspection, (ii) use of the t-distribution to provide a formal representation of sample-to-sample variability in the determination of expected values over epistemic uncertainty and other distributional quantities, and (iii) use of the top-down coefficient of concordance to determine agreement with respect to the importance of individual variables indicated by the sensitivity analyses performed with the replicated samples. The analyses established that an LHS of size 300 was adequate for the propagation and analysis of the effects and implications of epistemic uncertainty in the 2008 YM PA.
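
    The replication scheme itself is easy to reproduce in outline. Below is a minimal sketch, assuming SciPy's qmc.LatinHypercube generator; the model function, the uniform marginals, and the scalar output are placeholder assumptions standing in for the actual performance-assessment code:

```python
# Minimal sketch of replicated Latin hypercube sampling (rLHS).
# Assumes SciPy >= 1.7 for scipy.stats.qmc; `model` is a placeholder,
# not the YM PA model.
import numpy as np
from scipy.stats import qmc

N_VARS = 392   # epistemically uncertain inputs, as in the 2008 YM PA
N_LHS = 300    # LHS size per replicate, as in the 2008 YM PA
N_REPS = 3     # number of independent (replicated) LHSs

def model(x):
    # Placeholder: maps one input vector in [0, 1]^N_VARS to a scalar result.
    return float(np.sum(x ** 2))

replicate_means = []
for r in range(N_REPS):
    sampler = qmc.LatinHypercube(d=N_VARS, seed=r)  # independent LHS per replicate
    sample = sampler.random(n=N_LHS)                # (300, 392) array in [0, 1)
    # In the real analysis each column would be mapped through the inverse CDF
    # of that variable's epistemic distribution; uniform marginals are kept here.
    results = np.array([model(row) for row in sample])
    replicate_means.append(results.mean())

print("replicate means:", replicate_means)
```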

  • Use of replicated Latin hypercube sampling to estimate sampling variance in uncertainty and sensitivity analysis results for the geologic disposal of radioactive waste
    Procedia - Social and Behavioral Sciences, 2010
    Co-Authors: Clifford W Hansen, Jon C Helton, Cédric J Sallaberry
    Abstract:

    Sampling-based methods are commonly used to propagate uncertainty through models for complex systems (Helton and Davis, 2003; Helton et al., 2006). Replicated sampling involves repeating a sampling-based uncertainty propagation for several independent samples of the same size (Iman, 1982). Variance between replicates indicates the numerical uncertainty in analysis results that derives from the sampling-based method. Results from the replicates can be used to estimate confidence intervals for analysis results and to determine whether the sample size in use is sufficient to obtain statistically stable results. Replicated sampling can be used to assess the adequacy of the sample size in situations where more formal statistical procedures are not applicable (Iman, 1982).
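
    Once the replicate estimates are in hand, the confidence-interval step is a few lines. A minimal sketch of the t-based interval, with made-up replicate values:

```python
# Minimal sketch: t-based confidence interval over replicated sampling
# results, as in the procedure attributed to Iman (1982). Values are made up.
import numpy as np
from scipy import stats

estimates = np.array([1.02, 0.97, 1.05])  # hypothetical replicate estimates
R = len(estimates)
mean = estimates.mean()
se = estimates.std(ddof=1) / np.sqrt(R)   # standard error of the replicate mean
t = stats.t.ppf(0.975, df=R - 1)          # two-sided 95% t quantile, R-1 dof
print(f"95% CI: [{mean - t * se:.3f}, {mean + t * se:.3f}]")
```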

Jürgen Branke - One of the best experts on this subject based on the ideXlab platform.

  • Reducing the sampling variance when searching for robust solutions
    Genetic and Evolutionary Computation Conference, 2001
    Co-Authors: Jürgen Branke
    Abstract:

    For real-world problems it is often not sufficient to find solutions of high quality; the solutions should also be robust, meaning that possible deviations from the solution should be tolerated while still yielding good expected performance. One way to reach this goal is to evaluate each individual several times under a number of different scenarios, taking the average performance as fitness. Although this method is effective, it requires significant computational power. In this paper, we continue previous work aimed at minimizing the search effort while still providing the desired robustness. In particular, we examine the effectiveness of de-randomizing the sampling mechanism using variance reduction methods, and the question of whether the same scenarios should be used for all individuals in the population. As will be shown, a significant performance gain can be obtained by taking these ideas into account, without any additional computational cost.
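
    The two ideas examined, de-randomizing the disturbance sample and sharing the same scenarios across the whole population, can be sketched as follows; the quadratic test function, the disturbance scale, and the Latin-hypercube-style stratification are illustrative assumptions, not the paper's exact setup:

```python
# Minimal sketch: robust fitness estimated over common, derandomized scenarios.
import numpy as np

def f(x):
    return -np.sum(x ** 2)  # hypothetical fitness to maximize

def robust_fitness(x, disturbances):
    # Expected performance: average fitness over perturbed copies of x.
    return np.mean([f(x + d) for d in disturbances])

rng = np.random.default_rng(0)
dim, n_scen = 2, 8

# Derandomized disturbances: a Latin-hypercube-style stratification puts one
# point in each of n_scen strata per dimension, covering the disturbance
# space more evenly than i.i.d. sampling (a variance reduction method).
strata = rng.permuted(np.tile(np.arange(n_scen), (dim, 1)), axis=1).T
u = (strata + rng.random((n_scen, dim))) / n_scen
disturbances = 0.2 * (u - 0.5)  # disturbances in [-0.1, 0.1)^dim

# Common scenarios: every individual is evaluated on the SAME disturbance set,
# so within-generation fitness comparisons share the sampling noise.
population = [rng.normal(size=dim) for _ in range(5)]
for ind in population:
    print(ind.round(2), "->", round(robust_fitness(ind, disturbances), 3))
```

    Sharing one scenario set makes the sampling noise common to all individuals, so relative comparisons during selection become more reliable at no extra evaluation cost.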

Peter Dowd - One of the best experts on this subject based on the ideXlab platform.

  • Variance–Covariance Matrix of the Experimental Variogram: Assessing Variogram Uncertainty
    Mathematical Geology, 2001
    Co-Authors: Eulogio Pardo-Igúzquiza, Peter Dowd
    Abstract:

    Assessment of the sampling variance of the experimental variogram is an important topic in geostatistics, as it quantifies the uncertainty of the variogram estimates. This assessment, however, is often overlooked in applications, perhaps mainly because a general approach has not been implemented in the most commonly used software packages for variogram analysis. In this paper the authors propose a solution that can be implemented easily in a computer program and which, subject to certain assumptions, is exact. These assumptions are not very restrictive: second-order stationarity (the process has a finite variance and the variogram has a sill) and, solely for the purpose of evaluating fourth-order moments, a Gaussian distribution for the random function. The approach described here gives the variance–covariance matrix of the experimental variogram, which takes into account not only the correlation among the experimental values but also the multiple use of data in the variogram computation. Among other applications, standard errors may be attached to the variogram estimates, and the variance–covariance matrix may be used for fitting a theoretical model by weighted, or by generalized, least squares. Confidence regions that hold a given confidence level for all the variogram lag estimates simultaneously have been calculated using the Bonferroni method for rectangular intervals, and using the multivariate Gaussian assumption for K-dimensional elliptical intervals (where K is the number of experimental variogram estimates). A general approach for incorporating the uncertainty of the experimental variogram into the uncertainty of the variogram model parameters is also shown. A case study with rainfall data illustrates the proposed approach.
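
    Under these assumptions the whole computation reduces to evaluations of the variogram model. For the classical estimator γ̂(h) = (1/2N_h) Σ [Z(x_i + h) − Z(x_i)]², second-order stationarity gives, for differences D = Z(a) − Z(b) and E = Z(c) − Z(d), Cov(D, E) = γ(a−d) + γ(b−c) − γ(a−c) − γ(b−d), and the Gaussian fourth-moment identity gives Cov(D², E²) = 2·Cov(D, E)². A minimal sketch for samples on a one-dimensional transect, with an assumed exponential variogram model (the layout, lags, and model parameters are illustrative):

```python
# Minimal sketch: variance-covariance matrix of the experimental semivariogram
# under second-order stationarity and a Gaussian random function.
import numpy as np
from itertools import combinations

def gamma(h, sill=1.0, a=10.0):
    # Assumed exponential variogram model, needed to evaluate the covariances.
    return sill * (1.0 - np.exp(-np.abs(h) / a))

x = np.arange(0.0, 30.0, 1.0)  # sample locations on a transect
lags = [1.0, 2.0, 3.0]         # lag classes of the experimental variogram

# Pairs (head, tail) contributing to each lag class.
pairs = {h: [(p, q) for p, q in combinations(x, 2) if np.isclose(q - p, h)]
         for h in lags}

def cov_diff(pq, rs):
    # Cov(Z(a)-Z(b), Z(c)-Z(d)) written entirely in terms of the variogram.
    (a_, b_), (c_, d_) = pq, rs
    return gamma(a_ - d_) + gamma(b_ - c_) - gamma(a_ - c_) - gamma(b_ - d_)

K = len(lags)
V = np.zeros((K, K))  # variance-covariance matrix of the lag estimates
for i, hi in enumerate(lags):
    for j, hj in enumerate(lags):
        s = sum(cov_diff(pq, rs) ** 2 for pq in pairs[hi] for rs in pairs[hj])
        V[i, j] = s / (2.0 * len(pairs[hi]) * len(pairs[hj]))

print(np.round(V, 4))  # standard errors are np.sqrt(np.diag(V))
```

    The diagonal of V gives the standard errors to attach to the lag estimates; the full matrix is what fitting a model by generalized least squares requires.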

Daniel E Ruzzante - One of the best experts on this subject based on the ideXlab platform.

  • A comparison of several measures of genetic distance and population structure with microsatellite data: bias and sampling variance
    Canadian Journal of Fisheries and Aquatic Sciences, 1998
    Co-Authors: Daniel E Ruzzante
    Abstract:

    Because of their rapid mutation rate and resulting large number of alleles, microsatellite DNA loci are well suited to examining the genetic or demographic structure of fish populations. However, the large number of alleles implies that large sample sizes are required to reflect genotypic frequencies accurately. Estimates of genetic distance are often biased at small sample sizes, and biases and sampling variances can be affected by the number of, and distances between, alleles. Using data from a large collection of larval cod (Gadus morhua) from a single area, I examined the effect of sample size on seven genetic distance measures and two population structure measures. Pairs of samples (equal or unequal) of various sizes were drawn at random from a pool of 856 individuals scored for six microsatellite loci. (δμ)², DSW, RST, and FST were the best performers in terms of bias and variance. Sample sizes of 50 ≤ N ≤ 100 individuals were generally necessary for precise estimation of genetic distances; the required size depended on the number of loci, the number of alleles, and the range in allele size. (δμ)² and DSW were biased at small sample sizes.

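    The subsampling design is straightforward to emulate. Below is a minimal sketch in the same spirit, using synthetic single-locus genotypes and Nei's GST as a stand-in for the cod genotypes and the distance measures compared in the paper (everything here except the pool size of 856 is an illustrative assumption):

```python
# Minimal sketch: bias and sampling variance of a genetic-distance measure
# as a function of sample size, estimated by repeated subsampling.
# Synthetic data; Nei's GST stands in for the measures used in the paper.
import numpy as np

rng = np.random.default_rng(1)
N_POOL, N_ALLELES = 856, 12
pool = rng.integers(0, N_ALLELES, size=(N_POOL, 2))  # one diploid locus

def gst(s1, s2):
    # Nei's GST = (HT - HS) / HT from the two samples' allele frequencies.
    def freqs(s):
        return np.bincount(s.ravel(), minlength=N_ALLELES) / (2 * len(s))
    p1, p2 = freqs(s1), freqs(s2)
    hs = 1 - 0.5 * (np.sum(p1 ** 2) + np.sum(p2 ** 2))  # mean within-sample het.
    pm = 0.5 * (p1 + p2)
    ht = 1 - np.sum(pm ** 2)                            # total heterozygosity
    return (ht - hs) / ht

for n in (10, 25, 50, 100):
    reps = []
    for _ in range(200):  # 200 random pairs of non-overlapping samples of size n
        idx = rng.choice(N_POOL, size=2 * n, replace=False)
        reps.append(gst(pool[idx[:n]], pool[idx[n:]]))
    reps = np.asarray(reps)
    # Both samples come from one pool, so the true value is 0: mean == bias.
    print(f"n={n:4d}  bias={reps.mean():+.4f}  sd={reps.std(ddof=1):.4f}")
```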

Clifford W Hansen - One of the best experts on this subject based on the ideXlab platform.

  • Use of replicated Latin hypercube sampling to estimate sampling variance in uncertainty and sensitivity analysis results for the geologic disposal of radioactive waste
    Reliability Engineering & System Safety, 2012
    Co-Authors: Clifford W Hansen, Jon C Helton, Cédric J Sallaberry
    Abstract: identical to the 2012 entry under Cédric J Sallaberry above.

  • Use of replicated Latin hypercube sampling to estimate sampling variance in uncertainty and sensitivity analysis results for the geologic disposal of radioactive waste
    Procedia - Social and Behavioral Sciences, 2010
    Co-Authors: Clifford W Hansen, Jon C Helton, Cédric J Sallaberry
    Abstract: identical to the 2010 entry under Cédric J Sallaberry above.