Statistical Sampling

The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform.

Thomas R Loveland - One of the best experts on this subject based on the ideXlab platform.

  • Land-use pressure and a transition to forest-cover loss in the eastern United States
    BioScience, 2010
    Co-Authors: Mark A Drummond, Thomas R Loveland
    Abstract:

    Contemporary land-use pressures have a significant impact on the extent and condition of forests in the eastern United States, causing a regional-scale decline in forest cover. Earlier in the 20th century, land cover was on a trajectory of forest expansion that followed agricultural abandonment. However, the potential for forest regeneration has slowed, and the extent of regional forest cover has declined by more than 4.0%. Using remote-sensing data, statistical sampling, and change-detection methods, this research shows how land conversion varies spatially and temporally across the East from 1973 to 2000, and how those changes affect regional land-change dynamics. The analysis shows that agricultural land use has continued to decline, and that this enables forest recovery; however, an important land-cover transition has occurred, from a mode of regional forest-cover gain to one of forest-cover loss caused by timber-cutting cycles, urbanization, and other land-use demands.

  • Statistical sampling to characterize recent United States land-cover change
    Remote Sensing of Environment, 2003
    Co-Authors: Stephen V Stehman, Terry L Sohl, Thomas R Loveland
    Abstract:

    The U.S. Geological Survey, in conjunction with the U.S. Environmental Protection Agency, is conducting a study focused on developing methods for estimating changes in land cover and landscape pattern for the conterminous United States from 1973 to 2000. Eleven land-cover and land-use classes are interpreted from Landsat imagery for five sampling dates. Because of the high cost and potential effect of classification error associated with developing change estimates from wall-to-wall land-cover maps, a probability sampling approach is employed. The basic sampling unit is a 20×20 km area, and land cover is obtained for each 60×60 m pixel within the sampling unit. The sampling design is stratified based on ecoregions, and land-cover change estimates are constructed for each stratum. The sampling design and analyses are documented, and estimates of change accompanied by standard errors are presented to demonstrate the methodology. Analyses of the completed strata suggest that the sampling unit should be reduced to a 10×10 km block, and that poststratified estimation and regression estimation are viable options for improving the precision of estimated change.
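    The stratified estimation described in this abstract can be illustrated with a short sketch. The stratum sizes and per-block change fractions below are invented for illustration, not the study's actual data; the estimator itself is the standard stratified mean with a finite-population correction:

    ```python
    import math

    # Hypothetical strata (ecoregions): total blocks in each stratum, plus the
    # change fraction observed in each sampled block. Values are illustrative only.
    strata = {
        "ecoregion_A": {"N_blocks": 500, "samples": [0.02, 0.05, 0.03, 0.04]},
        "ecoregion_B": {"N_blocks": 300, "samples": [0.10, 0.12, 0.08]},
    }

    def stratified_estimate(strata):
        """Stratified estimate of the mean change fraction and its standard error."""
        N_total = sum(s["N_blocks"] for s in strata.values())
        mean = 0.0
        var = 0.0
        for s in strata.values():
            W = s["N_blocks"] / N_total          # stratum weight
            n = len(s["samples"])
            ybar = sum(s["samples"]) / n         # per-stratum sample mean
            s2 = sum((y - ybar) ** 2 for y in s["samples"]) / (n - 1)
            fpc = 1 - n / s["N_blocks"]          # finite population correction
            mean += W * ybar
            var += (W ** 2) * fpc * s2 / n
        return mean, math.sqrt(var)

    mean, se = stratified_estimate(strata)
    print(f"estimated change fraction: {mean:.4f} ± {se:.4f}")
    ```

    The same machinery carries over to poststratified estimation: the strata are then defined after sampling, but the weighted combination of per-stratum means is unchanged.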

Daniel M Zuckerman - One of the best experts on this subject based on the ideXlab platform.

  • Steady-state simulations using weighted ensemble path sampling
    Journal of Chemical Physics, 2010
    Co-Authors: Divesh Bhatt, Bin W Zhang, Daniel M Zuckerman
    Abstract:

    We extend the weighted ensemble (WE) path sampling method to perform rigorous statistical sampling for systems at steady state. A straightforward steady-state implementation of WE is directly practical for simple landscapes, but not when significant metastable intermediate states are present. We therefore develop an enhanced WE scheme, building on existing ideas, which accelerates attainment of steady state in complex systems. We apply both WE approaches to several model systems, confirming their correctness and efficiency by comparison with brute-force results. The enhanced version is significantly faster than both brute-force simulation and straightforward WE for systems with WE bins that accurately reflect the reaction coordinate(s). The new WE methods can also be applied to equilibrium sampling, since equilibrium is a steady state.
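    A minimal sketch of the weighted-ensemble idea, using a per-bin weighted-resampling variant rather than the paper's exact split/merge rules; the bin layout, walker counts, and diffusive dynamics are all illustrative assumptions. The key invariant is that resampling conserves the probability weight in each bin while keeping rarely visited bins populated:

    ```python
    import random

    random.seed(0)

    N_BINS, TARGET_PER_BIN = 10, 4

    def bin_of(x):
        """Assign a position on [0, 1) to one of N_BINS equal bins (the 'reaction coordinate')."""
        return min(int(x * N_BINS), N_BINS - 1)

    def we_resample(walkers):
        """One WE split/merge step, simplified to weighted resampling within each bin.

        Total probability weight in each occupied bin is conserved; each such bin
        ends up with exactly TARGET_PER_BIN equal-weight walkers.
        """
        by_bin = {}
        for x, w in walkers:
            by_bin.setdefault(bin_of(x), []).append((x, w))
        new_walkers = []
        for members in by_bin.values():
            total = sum(w for _, w in members)
            positions = [x for x, _ in members]
            weights = [w for _, w in members]
            # Draw walkers proportionally to weight; share the bin weight equally.
            for x in random.choices(positions, weights=weights, k=TARGET_PER_BIN):
                new_walkers.append((x, total / TARGET_PER_BIN))
        return new_walkers

    def step(walkers):
        """One diffusive dynamics step followed by WE resampling."""
        moved = [(min(max(x + random.gauss(0, 0.05), 0.0), 0.999), w) for x, w in walkers]
        return we_resample(moved)

    walkers = [(0.05, 1.0 / 8)] * 8          # start all probability weight near x = 0
    for _ in range(50):
        walkers = step(walkers)
    print("total weight:", sum(w for _, w in walkers))  # conserved at 1.0
    ```

    For steady-state WE as in the paper, one would additionally recycle walkers that reach the target region back to the source with their weight intact; that bookkeeping is omitted here.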

  • A black-box reweighting analysis can correct flawed simulation data
    Proceedings of the National Academy of Sciences of the United States of America, 2008
    Co-Authors: Marty F Ytreberg, Daniel M Zuckerman
    Abstract:

    There is a great need for improved statistical sampling in a range of physical, chemical, and biological systems. Even simulations based on correct algorithms suffer from statistical error, which can be substantial or even dominant when slow processes are involved. Further, in key biomolecular applications, such as the determination of protein structures from NMR data, non-Boltzmann-distributed ensembles are generated. We therefore have developed the “black-box” strategy for reweighting a set of configurations generated by arbitrary means to produce an ensemble distributed according to any target distribution. In contrast to previous algorithmic efforts, the black-box approach exploits the configuration-space density observed in a simulation, rather than assuming a desired distribution has been generated. Successful implementations of the strategy, which reduce both statistical error and bias, are developed for a one-dimensional system and a 50-atom peptide, for which the correct 250-to-1 population ratio is recovered from a heavily biased ensemble.
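    The reweighting strategy can be sketched in one dimension: estimate the density the simulation actually produced (here with a simple histogram), then weight each configuration by the ratio of the target density to that observed density. The distributions and bin settings below are illustrative assumptions, not the paper's systems:

    ```python
    import math
    import random

    random.seed(1)

    def target_pdf(x):
        """Target (Boltzmann-like) density: standard normal; normalization cancels."""
        return math.exp(-0.5 * x * x)

    # A 'flawed' simulation: samples drawn from a shifted, too-wide normal.
    samples = [random.gauss(0.8, 1.5) for _ in range(50000)]

    # Estimate the *observed* configuration-space density with a histogram,
    # rather than assuming the simulation produced the intended distribution.
    lo, hi, nbins = -6.0, 6.0, 60
    width = (hi - lo) / nbins
    counts = [0] * nbins
    kept = [x for x in samples if lo <= x < hi]
    for x in kept:
        counts[int((x - lo) / width)] += 1

    def observed_pdf(x):
        return counts[int((x - lo) / width)] / (len(kept) * width)

    # Reweight: w_i proportional to p_target(x_i) / p_observed(x_i).
    weights = [target_pdf(x) / observed_pdf(x) for x in kept]
    wsum = sum(weights)

    mean = sum(w * x for w, x in zip(weights, kept)) / wsum
    print(f"biased mean: {sum(kept)/len(kept):+.3f}  reweighted mean: {mean:+.3f}")
    ```

    The reweighted mean lands near the target's mean of zero even though the raw samples are centered near 0.8; the same ratio-of-densities weights correct any observable, which is the sense in which the approach is a "black box."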

Charles E Lawrence - One of the best experts on this subject based on the ideXlab platform.

  • Sfold web server for statistical folding and rational design of nucleic acids
    Nucleic Acids Research, 2004
    Co-Authors: Ye Ding, Chi Yu Chan, Charles E Lawrence
    Abstract:

    The Sfold web server provides user-friendly access to Sfold, a recently developed nucleic acid folding software package, via the World Wide Web (WWW). The software is based on a new statistical sampling paradigm for the prediction of RNA secondary structure. One of the main objectives of this software is to offer computational tools for the rational design of RNA-targeting nucleic acids, which include small interfering RNAs (siRNAs), antisense oligonucleotides and trans-cleaving ribozymes for gene knock-down studies. The methodology for siRNA design is based on a combination of RNA target accessibility prediction, siRNA duplex thermodynamic properties and empirical design rules. Our approach to target accessibility evaluation is an original extension of the underlying RNA folding algorithm to account for the likely existence of a population of structures for the target mRNA. In addition to the application modules Sirna, Soligo and Sribo for siRNAs, antisense oligos and ribozymes, respectively, the module Srna offers comprehensive features for statistical representation of sampled structures. Detailed output in both graphical and text formats is available for all modules. The Sfold server is available at http://sfold.wadsworth.org and http://www.bioinfo.rpi.edu/applications/sfold.

  • A statistical sampling algorithm for RNA secondary structure prediction
    Nucleic Acids Research, 2003
    Co-Authors: Ye Ding, Charles E Lawrence
    Abstract:

    An RNA molecule, particularly a long-chain mRNA, may exist as a population of structures. Furthermore, multiple structures have been demonstrated to play important functional roles. Thus, a representation of the ensemble of probable structures is of interest. We present a statistical algorithm to sample rigorously and exactly from the Boltzmann ensemble of secondary structures. The forward step of the algorithm computes the equilibrium partition functions of RNA secondary structures with recent thermodynamic parameters. Using conditional probabilities computed with the partition functions in a recursive sampling process, the backward step of the algorithm quickly generates a statistically representative sample of structures. With cubic run time for the forward step, quadratic run time in the worst case for the sampling step, and quadratic storage, the algorithm is efficient for broad applicability. We demonstrate that, by classifying sampled structures, the algorithm enables a statistical delineation and representation of the Boltzmann ensemble. Applications of the algorithm show that alternative biological structures are revealed through sampling. Statistical sampling provides a means to estimate the probability of any structural motif, with or without constraints. For example, the algorithm enables probability profiling of single-stranded regions in RNA secondary structure. Probability profiling for specific loop types is also illustrated. By overlaying probability profiles, a mutual accessibility plot can be displayed for predicting RNA:RNA interactions. Boltzmann probability-weighted density of states and free energy distributions of sampled structures can be readily computed. We show that a sample of moderate size from the ensemble of an enormous number of possible structures is sufficient to guarantee statistical reproducibility in the estimates of typical sampling statistics. Our applications suggest that the sampling algorithm may be well suited to prediction of mRNA structure and target accessibility. The algorithm is applicable to the rational design of small interfering RNAs (siRNAs), antisense oligonucleotides, and trans-cleaving ribozymes in gene knock-down studies.
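    The forward/backward scheme can be sketched with a toy energy model: a constant energy per base pair in a Nussinov-style recursion, rather than the full loop-based thermodynamic parameters the paper uses. The forward step fills a partition-function table; the backward step draws structures from the Boltzmann ensemble via conditional probabilities:

    ```python
    import math
    import random

    random.seed(2)

    PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    RT = 0.6          # roughly kcal/mol near 300 K
    E_PAIR = -2.0     # toy energy per base pair (an assumption, not a real parameter set)
    MIN_LOOP = 3      # minimum hairpin loop size

    def partition(seq):
        """Forward step: Z[i][j] = partition function of subsequence i..j."""
        n = len(seq)
        Z = [[1.0] * n for _ in range(n)]
        for span in range(1, n):
            for i in range(n - span):
                j = i + span
                z = Z[i][j - 1]               # case: base j unpaired
                for k in range(i, j - MIN_LOOP):  # case: k pairs with j
                    if (seq[k], seq[j]) in PAIRS:
                        left = Z[i][k - 1] if k > i else 1.0
                        z += left * Z[k + 1][j - 1] * math.exp(-E_PAIR / RT)
                Z[i][j] = z
        return Z

    def sample_structure(seq, Z):
        """Backward step: draw one structure with its Boltzmann probability."""
        pairs, stack = [], [(0, len(seq) - 1)]
        while stack:
            i, j = stack.pop()
            if j - i <= MIN_LOOP:
                continue
            r = random.random() * Z[i][j]
            r -= Z[i][j - 1]                  # case: j unpaired
            if r < 0:
                stack.append((i, j - 1))
                continue
            for k in range(i, j - MIN_LOOP):  # case: k pairs with j
                if (seq[k], seq[j]) in PAIRS:
                    left = Z[i][k - 1] if k > i else 1.0
                    r -= left * Z[k + 1][j - 1] * math.exp(-E_PAIR / RT)
                    if r < 0:
                        pairs.append((k, j))
                        if k > i:
                            stack.append((i, k - 1))
                        stack.append((k + 1, j - 1))
                        break
        return sorted(pairs)

    seq = "GGGAAAUCCC"
    Z = partition(seq)
    sample = [sample_structure(seq, Z) for _ in range(2000)]
    # A 'sampling statistic': the probability that positions 0 and 9 are paired.
    p = sum(1 for s in sample if (0, 9) in s) / len(sample)
    print("P(pair 0-9) ≈", p)
    ```

    Classifying or profiling the sampled structures, as the abstract describes, amounts to computing such frequencies over the sample rather than enumerating the full ensemble.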

Woo Y Yoon - One of the best experts on this subject based on the ideXlab platform.

  • Uncertainty analysis of a nondestructive radioassay system for transuranic waste
    Waste Management, 1996
    Co-Authors: Yale D Harker, Larry G Blackwood, Teresa R Meachum, Woo Y Yoon
    Abstract:

    Radioassay of transuranic waste in 207-liter drums currently stored at the Idaho National Engineering Laboratory is achieved using a Passive Active Neutron (PAN) nondestructive assay system. In order to meet data quality assurance requirements for shipping and eventual permanent storage of these drums at the Waste Isolation Pilot Plant in Carlsbad, New Mexico, the total uncertainty of the PAN system measurements must be assessed. In particular, the uncertainty calculations are required to include the effects of variations in waste-matrix parameters and related variables on the final measurement results. Because of the complexities involved in introducing waste-matrix parameter effects into the uncertainty calculations, standard methods of analysis (e.g., experimentation followed by propagation of errors) could not be implemented. Instead, a modified statistical sampling and verification approach was developed. In this modified approach, the total performance of the PAN system is simulated using computer models of the assay system, and the resultant output is compared with the known input to assess the total uncertainty. This paper describes the simulation process and illustrates its application to waste comprised of weapons-grade plutonium-contaminated graphite molds.
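    The simulate-and-compare approach can be sketched as follows. The measurement model, parameter ranges, and numbers below are invented stand-ins for the PAN system physics; the point is the procedure of sampling matrix parameters, simulating the assay, and comparing output to the known input:

    ```python
    import random
    import statistics

    random.seed(3)

    def simulate_assay(true_mass, matrix_density, moderator_fraction):
        """Toy measurement model: detector response depends on the (unknown)
        waste-matrix properties as well as on the quantity being assayed."""
        efficiency = 0.9 - 0.3 * moderator_fraction - 0.1 * (matrix_density - 1.0)
        noise = random.gauss(1.0, 0.02)               # counting statistics
        return true_mass * efficiency * noise / 0.9   # calibrated for a nominal matrix

    # Statistical sampling and verification: draw matrix parameters from their
    # assumed ranges, simulate the system, and compare output to the known input.
    ratios = []
    for _ in range(5000):
        true_mass = 10.0                              # known input (e.g. grams of Pu)
        density = random.uniform(0.8, 1.2)            # varies drum to drum
        moderator = random.uniform(0.0, 0.2)          # fraction of moderating material
        measured = simulate_assay(true_mass, density, moderator)
        ratios.append(measured / true_mass)

    bias = statistics.mean(ratios) - 1.0
    spread = statistics.stdev(ratios)
    print(f"relative bias: {bias:+.3f}, relative total uncertainty: {spread:.3f}")
    ```

    The spread of measured-to-true ratios is the total uncertainty estimate, folding matrix variability and counting noise together without requiring an analytic error-propagation formula.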

Ye Ding - One of the best experts on this subject based on the ideXlab platform.

  • Sfold web server for statistical folding and rational design of nucleic acids
    Nucleic Acids Research, 2004
    Co-Authors: Ye Ding, Chi Yu Chan, Charles E Lawrence
    Abstract:

    The Sfold web server provides user-friendly access to Sfold, a recently developed nucleic acid folding software package, via the World Wide Web (WWW). The software is based on a new statistical sampling paradigm for the prediction of RNA secondary structure. One of the main objectives of this software is to offer computational tools for the rational design of RNA-targeting nucleic acids, which include small interfering RNAs (siRNAs), antisense oligonucleotides and trans-cleaving ribozymes for gene knock-down studies. The methodology for siRNA design is based on a combination of RNA target accessibility prediction, siRNA duplex thermodynamic properties and empirical design rules. Our approach to target accessibility evaluation is an original extension of the underlying RNA folding algorithm to account for the likely existence of a population of structures for the target mRNA. In addition to the application modules Sirna, Soligo and Sribo for siRNAs, antisense oligos and ribozymes, respectively, the module Srna offers comprehensive features for statistical representation of sampled structures. Detailed output in both graphical and text formats is available for all modules. The Sfold server is available at http://sfold.wadsworth.org and http://www.bioinfo.rpi.edu/applications/sfold.

  • A statistical sampling algorithm for RNA secondary structure prediction
    Nucleic Acids Research, 2003
    Co-Authors: Ye Ding, Charles E Lawrence
    Abstract:

    An RNA molecule, particularly a long-chain mRNA, may exist as a population of structures. Furthermore, multiple structures have been demonstrated to play important functional roles. Thus, a representation of the ensemble of probable structures is of interest. We present a statistical algorithm to sample rigorously and exactly from the Boltzmann ensemble of secondary structures. The forward step of the algorithm computes the equilibrium partition functions of RNA secondary structures with recent thermodynamic parameters. Using conditional probabilities computed with the partition functions in a recursive sampling process, the backward step of the algorithm quickly generates a statistically representative sample of structures. With cubic run time for the forward step, quadratic run time in the worst case for the sampling step, and quadratic storage, the algorithm is efficient for broad applicability. We demonstrate that, by classifying sampled structures, the algorithm enables a statistical delineation and representation of the Boltzmann ensemble. Applications of the algorithm show that alternative biological structures are revealed through sampling. Statistical sampling provides a means to estimate the probability of any structural motif, with or without constraints. For example, the algorithm enables probability profiling of single-stranded regions in RNA secondary structure. Probability profiling for specific loop types is also illustrated. By overlaying probability profiles, a mutual accessibility plot can be displayed for predicting RNA:RNA interactions. Boltzmann probability-weighted density of states and free energy distributions of sampled structures can be readily computed. We show that a sample of moderate size from the ensemble of an enormous number of possible structures is sufficient to guarantee statistical reproducibility in the estimates of typical sampling statistics. Our applications suggest that the sampling algorithm may be well suited to prediction of mRNA structure and target accessibility. The algorithm is applicable to the rational design of small interfering RNAs (siRNAs), antisense oligonucleotides, and trans-cleaving ribozymes in gene knock-down studies.