Importance Sampling

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 149,154 Experts worldwide, ranked by the ideXlab platform

Marek J Druzdzel - One of the best experts on this subject based on the ideXlab platform.

  • An Importance Sampling Algorithm Based on Evidence Pre-propagation
    arXiv: Artificial Intelligence, 2012
    Co-Authors: Changhe Yuan, Marek J Druzdzel
    Abstract:

    Precision achieved by stochastic Sampling algorithms for Bayesian networks typically deteriorates in the face of extremely unlikely evidence. To address this problem, we propose the Evidence Pre-propagation Importance Sampling algorithm (EPIS-BN), an Importance Sampling algorithm that computes an approximate Importance function using two heuristic methods: loopy belief propagation and ε-cutoff. We tested the performance of EPIS-BN on three large real Bayesian networks: ANDES, CPCS, and PATHFINDER. We observed that on each of these networks the EPIS-BN algorithm gives a considerable improvement over the current state-of-the-art algorithm, AIS-BN, while avoiding its costly learning stage.
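
The abstract does not spell out the estimator itself. As a minimal sketch of Importance Sampling in a Bayesian network, the snippet below runs plain likelihood weighting, a simple Importance Sampling scheme that samples unobserved nodes from the prior and weights each sample by the likelihood of the evidence. The two-node network and its probabilities are illustrative, not from the paper.

```python
import random

random.seed(0)

# Hypothetical two-node network: Rain -> WetGrass (not from the paper).
P_RAIN = 0.2
P_WET_GIVEN = {1: 0.9, 0: 0.1}   # P(WetGrass=1 | Rain)

def likelihood_weighting(n_samples, evidence_wet=1):
    """Estimate P(Rain=1 | WetGrass=evidence_wet) by likelihood weighting:
    sample the unobserved node from its prior, weight by the evidence
    likelihood, and form the self-normalized estimate."""
    num = den = 0.0
    for _ in range(n_samples):
        rain = 1 if random.random() < P_RAIN else 0
        p_wet = P_WET_GIVEN[rain]
        w = p_wet if evidence_wet == 1 else 1.0 - p_wet
        num += w * rain
        den += w
    return num / den

est = likelihood_weighting(100_000)
print(f"P(Rain=1 | WetGrass=1) ~ {est:.3f}   (exact: 0.692)")
```

With unlikely evidence the weights become highly uneven, which is exactly the failure mode EPIS-BN's pre-propagated Importance function is designed to avoid.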

  • Theoretical Analysis and Practical Insights on Importance Sampling in Bayesian Networks
    International Journal of Approximate Reasoning, 2007
    Co-Authors: Changhe Yuan, Marek J Druzdzel
    Abstract:

    The AIS-BN algorithm [J. Cheng, M.J. Druzdzel, AIS-BN: An adaptive importance sampling algorithm for evidential reasoning in large Bayesian networks, Journal of Artificial Intelligence Research 13 (2000) 155-188] is a successful Importance Sampling-based algorithm for Bayesian networks that relies on two heuristic methods to obtain an initial Importance function: ε-cutoff, which replaces small probabilities in the conditional probability tables by a larger ε, and setting the probability distributions of the parents of evidence nodes to uniform. However, why these simple heuristics are so effective was not well understood. In this paper, we point out that their success is due to a practical requirement for the Importance function: a good Importance function should possess thicker tails than the actual posterior probability distribution. By studying the basic assumptions behind Importance Sampling and the properties of Importance Sampling in Bayesian networks, we develop several theoretical insights into the desirability of thick tails for Importance functions. These insights not only shed light on the success of the two heuristics of AIS-BN, but also provide a common theoretical basis for several other successful heuristic methods.
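
The thick-tail requirement is easy to see numerically. The toy comparison below (illustrative, not from the paper) computes self-normalized Importance weights for a standard normal target under two Gaussian proposals: one with thinner tails, whose weight function 0.5·exp(1.5x²) is unbounded, and one with thicker tails, whose weights are bounded above by 2.

```python
import math, random

random.seed(1)

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def is_mean_and_max_weight(proposal_sigma, n=50_000):
    """Self-normalized IS estimate of E[X] for X ~ N(0,1), plus the largest
    weight encountered, using a N(0, proposal_sigma^2) proposal."""
    num = den = wmax = 0.0
    for _ in range(n):
        x = random.gauss(0.0, proposal_sigma)
        w = normal_pdf(x) / normal_pdf(x, 0.0, proposal_sigma)
        num += w * x
        den += w
        wmax = max(wmax, w)
    return num / den, wmax

mean_thin, wmax_thin = is_mean_and_max_weight(0.5)    # thinner tails than target
mean_thick, wmax_thick = is_mean_and_max_weight(2.0)  # thicker tails than target
print(f"thin-tailed proposal:  max weight {wmax_thin:9.1f}")
print(f"thick-tailed proposal: max weight {wmax_thick:9.3f}")
```

A handful of extreme weights under the thin-tailed proposal dominate the estimate, while the thick-tailed proposal keeps every weight at most 2, which is the behavior the paper's analysis formalizes.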

  • An Importance Sampling Algorithm Based on Evidence Pre-propagation
    Uncertainty in Artificial Intelligence, 2002
    Co-Authors: Changhe Yuan, Marek J Druzdzel
    Abstract:

    Precision achieved by stochastic Sampling algorithms for Bayesian networks typically deteriorates in the face of extremely unlikely evidence. To address this problem, we propose the Evidence Pre-propagation Importance Sampling algorithm (EPIS-BN), an Importance Sampling algorithm that computes an approximate Importance function using two techniques: loopy belief propagation [19, 25] and the ε-cutoff heuristic [2]. We tested the performance of EPIS-BN on three large real Bayesian networks: ANDES [3], CPCS [21], and PATHFINDER [11]. We observed that on each of these networks the EPIS-BN algorithm outperforms AIS-BN [2], the current state-of-the-art algorithm, while avoiding its costly learning stage.

Changhe Yuan - One of the best experts on this subject based on the ideXlab platform.

  • An Importance Sampling Algorithm Based on Evidence Pre-propagation
    arXiv: Artificial Intelligence, 2012
    Co-Authors: Changhe Yuan, Marek J Druzdzel
    Abstract:

    Precision achieved by stochastic Sampling algorithms for Bayesian networks typically deteriorates in the face of extremely unlikely evidence. To address this problem, we propose the Evidence Pre-propagation Importance Sampling algorithm (EPIS-BN), an Importance Sampling algorithm that computes an approximate Importance function using two heuristic methods: loopy belief propagation and ε-cutoff. We tested the performance of EPIS-BN on three large real Bayesian networks: ANDES, CPCS, and PATHFINDER. We observed that on each of these networks the EPIS-BN algorithm gives a considerable improvement over the current state-of-the-art algorithm, AIS-BN, while avoiding its costly learning stage.

  • Theoretical Analysis and Practical Insights on Importance Sampling in Bayesian Networks
    International Journal of Approximate Reasoning, 2007
    Co-Authors: Changhe Yuan, Marek J Druzdzel
    Abstract:

    The AIS-BN algorithm [J. Cheng, M.J. Druzdzel, AIS-BN: An adaptive importance sampling algorithm for evidential reasoning in large Bayesian networks, Journal of Artificial Intelligence Research 13 (2000) 155-188] is a successful Importance Sampling-based algorithm for Bayesian networks that relies on two heuristic methods to obtain an initial Importance function: ε-cutoff, which replaces small probabilities in the conditional probability tables by a larger ε, and setting the probability distributions of the parents of evidence nodes to uniform. However, why these simple heuristics are so effective was not well understood. In this paper, we point out that their success is due to a practical requirement for the Importance function: a good Importance function should possess thicker tails than the actual posterior probability distribution. By studying the basic assumptions behind Importance Sampling and the properties of Importance Sampling in Bayesian networks, we develop several theoretical insights into the desirability of thick tails for Importance functions. These insights not only shed light on the success of the two heuristics of AIS-BN, but also provide a common theoretical basis for several other successful heuristic methods.

  • An Importance Sampling Algorithm Based on Evidence Pre-propagation
    Uncertainty in Artificial Intelligence, 2002
    Co-Authors: Changhe Yuan, Marek J Druzdzel
    Abstract:

    Precision achieved by stochastic Sampling algorithms for Bayesian networks typically deteriorates in the face of extremely unlikely evidence. To address this problem, we propose the Evidence Pre-propagation Importance Sampling algorithm (EPIS-BN), an Importance Sampling algorithm that computes an approximate Importance function using two techniques: loopy belief propagation [19, 25] and the ε-cutoff heuristic [2]. We tested the performance of EPIS-BN on three large real Bayesian networks: ANDES [3], CPCS [21], and PATHFINDER [11]. We observed that on each of these networks the EPIS-BN algorithm outperforms AIS-BN [2], the current state-of-the-art algorithm, while avoiding its costly learning stage.

Curtis R. Menyuk - One of the best experts on this subject based on the ideXlab platform.

  • Importance Sampling for Polarization-Mode Dispersion
    IEEE Photonics Technology Letters, 2002
    Co-Authors: Gino Biondini, William L Kath, Curtis R. Menyuk
    Abstract:

    We describe the application of Importance Sampling to Monte-Carlo simulations of polarization-mode dispersion (PMD) in optical fibers. The method allows rare differential group delay (DGD) events to be simulated much more efficiently than with standard Monte-Carlo methods and, thus, it can be used to assess PMD-induced system outage probabilities at realistic bit-error rates. We demonstrate the technique by accurately calculating the tails of the DGD probability distribution with a relatively small number of Monte-Carlo trials.
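
The same idea can be sketched outside the fiber-optics setting: to estimate a rare tail probability, shift the Sampling density toward the rare region and correct with the likelihood ratio. The example below (a generic illustration, not the letter's PMD model) estimates P(X > 4) for a standard normal X, an event that plain Monte Carlo almost never sees.

```python
import math, random

random.seed(2)

THRESH = 4.0
N = 100_000

# Plain Monte Carlo: P(X > 4) ~ 3.2e-5, so 1e5 samples see only a few hits.
naive_hits = sum(1 for _ in range(N) if random.gauss(0, 1) > THRESH)

# Importance Sampling: draw from N(THRESH, 1) so exceedances are common,
# and reweight by phi(x)/phi(x - THRESH) = exp(THRESH^2/2 - THRESH*x).
acc = 0.0
for _ in range(N):
    x = random.gauss(THRESH, 1)
    if x > THRESH:
        acc += math.exp(0.5 * THRESH**2 - THRESH * x)
est = acc / N

exact = 0.5 * math.erfc(THRESH / math.sqrt(2))
print(f"IS estimate: {est:.3e}   exact: {exact:.3e}   naive hits: {naive_hits}")
```

The shifted proposal puts roughly half its samples in the rare region, so the IS estimate resolves the tail probability to within about a percent with the same budget that gives naive Monte Carlo only a handful of hits.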

  • Analysis of Polarization-Mode Dispersion Compensators Using Importance Sampling
    Optical Fiber Communication Conference, 2001
    Co-Authors: I T Lima, William L Kath, Gino Biondini, B S Marks, Curtis R. Menyuk
    Abstract:

    We use Importance Sampling to analyze polarization-mode dispersion (PMD) compensators that consist of a single differential-group delay (DGD) element. Using this technique we show that while these compensators improve the average penalty due to PMD, they may degrade the outage probability.

Daniel Straub - One of the best experts on this subject based on the ideXlab platform.

  • Reliability Sensitivity Estimation with Sequential Importance Sampling
    Structural Safety, 2018
    Co-Authors: Iason Papaioannou, Karl Breitung, Daniel Straub
    Abstract:

    In applications of reliability analysis, the sensitivity of the probability of failure to design parameters is often crucial for decision-making. A common sensitivity measure is the partial derivative of the probability of failure with respect to the design parameter. If the design parameter enters the definition of the reliability problem through the limit-state function, i.e. the function defining the failure event, then the partial derivative is given by a surface integral over the limit-state surface. Direct application of standard Monte Carlo methods for estimation of surface integrals is not possible. To circumvent this difficulty, an approximation of the surface integral in terms of a domain integral has been proposed by the authors. In this paper, we propose estimation of the domain integral through application of a method termed sequential Importance Sampling (SIS). The basic idea of SIS is to gradually translate samples from the distribution of the random variables to samples from an approximately optimal Importance Sampling density. The transition of the samples is defined through the construction of a sequence of intermediate distributions, which are sampled through application of a resample-move scheme. We demonstrate effectiveness of the proposed method in estimating reliability sensitivities to both distribution and limit-state parameters with numerical examples.
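
Sensitivities to distribution parameters avoid the surface-integral difficulty entirely, since the parameter sits inside the density. As a simpler single-stage sketch (not the paper's SIS-based estimator), the snippet below uses ordinary Importance Sampling plus the score-function identity ∂Pf/∂μ = E[ 1{g(X) ≤ 0} · ∂log p_μ(X)/∂μ ] on a hypothetical scalar limit state g(x) = B − x with X ~ N(μ, 1), for which the score is simply x − μ.

```python
import math, random

random.seed(3)

B, MU, N = 3.0, 0.0, 200_000   # hypothetical limit state g(x) = B - x

def normal_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

# Importance density centered at the failure threshold B.
pf = dpf = 0.0
for _ in range(N):
    x = random.gauss(B, 1.0)
    if x >= B:                      # failure event g(x) <= 0
        w = normal_pdf(x, MU) / normal_pdf(x, B)
        pf += w
        dpf += w * (x - MU)         # score of N(MU,1): d log p / d mu = x - mu
pf /= N
dpf /= N

exact_pf = 0.5 * math.erfc(B / math.sqrt(2))   # 1 - Phi(B - MU)
exact_dpf = normal_pdf(B, MU)                  # phi(B - MU)
print(f"Pf      ~ {pf:.3e}  (exact {exact_pf:.3e})")
print(f"dPf/dmu ~ {dpf:.3e}  (exact {exact_dpf:.3e})")
```

Both the failure probability and its derivative come from the same weighted samples, which is the basic economy that the paper's sequential scheme then extends to harder problems.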

  • Sequential Importance Sampling for Structural Reliability Analysis
    Structural Safety, 2016
    Co-Authors: Iason Papaioannou, Costas Papadimitriou, Daniel Straub
    Abstract:

    This paper proposes the application of sequential Importance Sampling (SIS) to the estimation of the probability of failure in structural reliability. SIS was developed originally in the statistical community for exploring posterior distributions and estimating normalizing constants in the context of Bayesian analysis. The basic idea of SIS is to gradually translate samples from the prior distribution to samples from the posterior distribution through a sequential reweighting operation. In the context of structural reliability, SIS can be applied to produce samples of an approximately optimal Importance Sampling density, which can then be used for estimating the sought probability. The transition of the samples is defined through the construction of a sequence of intermediate distributions. We present a particular choice of the intermediate distributions and discuss the properties of the derived algorithm. Moreover, we introduce two MCMC algorithms for application within the SIS procedure; one that is applicable to general problems with a small to moderate number of random variables and one that is especially efficient for tackling high-dimensional problems.
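
A bare-bones version of the resample-move scheme can be sketched as follows. This is a simplified illustration, not the paper's algorithm: it uses a fixed smoothing schedule for the intermediate distributions (prior times a smoothed failure indicator Φ(−g/σ)) and a plain random-walk Metropolis move, on a toy scalar limit state g(x) = 3 − x.

```python
import math, random

random.seed(4)

def Phi(x):  # standard normal CDF
    return 0.5 * math.erfc(-x / math.sqrt(2))

def phi(x):  # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def g(x):    # toy limit state: failure when x >= 3, so Pf = 1 - Phi(3)
    return 3.0 - x

N = 2000
SIGMAS = [2.0, 0.5, 0.1]  # fixed smoothing schedule (the paper adapts this)

xs = [random.gauss(0, 1) for _ in range(N)]  # samples from the input density
log_c = 0.0                                  # log-product of stage constants
prev = lambda x: 1.0                         # smoothed indicator at level k

for sigma in SIGMAS:
    cur = lambda x, s=sigma: Phi(-g(x) / s)  # smoothed indicator, level k+1
    ws = [cur(x) / prev(x) for x in xs]      # incremental importance weights
    log_c += math.log(sum(ws) / N)           # ratio of normalizing constants
    xs = random.choices(xs, weights=ws, k=N) # multinomial resampling
    for _ in range(5):                       # random-walk Metropolis moves
        for i in range(N):
            cand = xs[i] + random.gauss(0, 0.5)
            a = (phi(cand) * cur(cand)) / (phi(xs[i]) * cur(xs[i]))
            if random.random() < a:
                xs[i] = cand
    prev = cur

# Correct from the last smoothed level back to the exact indicator.
pf = math.exp(log_c) * sum(
    (g(x) <= 0) / Phi(-g(x) / SIGMAS[-1]) for x in xs) / N

print(f"SIS estimate: {pf:.2e}   exact: {1 - Phi(3.0):.2e}")
```

Each stage reweights toward a sharper indicator, resamples, and moves the particles with MCMC; the failure probability is the product of the stage normalizing ratios times a final de-smoothing correction.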

Richard S Sutton - One of the best experts on this subject based on the ideXlab platform.

  • Multi-step Off-policy Learning without Importance Sampling Ratios
    arXiv: Learning, 2017
    Co-Authors: Ashique Rupam Mahmood, Richard S Sutton
    Abstract:

    To estimate the value functions of policies from exploratory data, most model-free off-policy algorithms rely on Importance Sampling, where the use of Importance Sampling ratios often leads to estimates with severe variance. It is thus desirable to learn off-policy without using the ratios. However, such an algorithm does not exist for multi-step learning with function approximation. In this paper, we introduce the first such algorithm based on temporal-difference (TD) learning updates. We show that an explicit use of Importance Sampling ratios can be eliminated by varying the amount of bootstrapping in TD updates in an action-dependent manner. Our new algorithm achieves stability using a two-timescale gradient-based TD update. A prior algorithm based on lookup table representation called Tree Backup can also be retrieved using action-dependent bootstrapping, becoming a special case of our algorithm. In two challenging off-policy tasks, we demonstrate that our algorithm is stable, effectively avoids the large variance issue, and can perform substantially better than its state-of-the-art counterpart.
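
The variance problem with explicit ratios is easy to reproduce. In the toy sketch below (illustrative, not the paper's algorithm), a target policy is evaluated from uniform behavior data by full-trajectory Importance Sampling: the per-episode weight is a product of per-step ratios, so its range grows exponentially with the horizon.

```python
import random

random.seed(5)

PI_TGT = {0: 0.9, 1: 0.1}   # hypothetical target policy over two actions
PI_BEH = {0: 0.5, 1: 0.5}   # behavior policy: uniform random

def is_return_estimates(horizon, n_episodes=10_000):
    """Trajectory-wise ordinary IS estimates of the target policy's
    (undiscounted) return; reward is 1 whenever action 0 is taken."""
    ests = []
    for _ in range(n_episodes):
        rho, ret = 1.0, 0.0
        for _ in range(horizon):
            a = random.choice([0, 1])
            rho *= PI_TGT[a] / PI_BEH[a]   # per-step importance ratio
            ret += 1.0 if a == 0 else 0.0
        ests.append(rho * ret)
    return ests

for h in (1, 5, 20):
    ests = is_return_estimates(h)
    mean = sum(ests) / len(ests)
    print(f"H={h:2d}: estimate {mean:6.2f}  (true {0.9 * h:5.2f}), "
          f"largest weighted return {max(ests):10.1f}")
```

The estimator is unbiased at every horizon, but at H=20 the largest per-episode weight reaches the thousands, which is the variance explosion that motivates replacing explicit ratios with action-dependent bootstrapping.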

  • Weighted Importance Sampling for Off-policy Learning with Linear Function Approximation
    Neural Information Processing Systems, 2014
    Co-Authors: Rupam A Mahmood, Hado Van Hasselt, Richard S Sutton
    Abstract:

    Importance Sampling is an essential component of off-policy model-free reinforcement learning algorithms. However, its most effective variant, weighted Importance Sampling, does not carry over easily to function approximation and, because of this, it is not utilized in existing off-policy learning algorithms. In this paper, we take two steps toward bridging this gap. First, we show that weighted Importance Sampling can be viewed as a special case of weighting the error of individual training samples, and that this weighting has theoretical and empirical benefits similar to those of weighted Importance Sampling. Second, we show that these benefits extend to a new weighted-Importance-Sampling version of off-policy LSTD(λ). We show empirically that our new WIS-LSTD(λ) algorithm can result in much more rapid and reliable convergence than conventional off-policy LSTD(λ) (Yu 2010, Bertsekas & Yu 2009).
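
The gap between the two estimators shows up even in a two-action bandit (an illustrative setup, not the paper's experiments): ordinary IS averages weighted rewards and can overshoot the reward range, while weighted IS self-normalizes the weights and stays bounded.

```python
import random

random.seed(6)

PI_TGT = {0: 0.95, 1: 0.05}   # hypothetical target policy
PI_BEH = {0: 0.5, 1: 0.5}     # behavior policy: uniform random
REWARD = {0: 1.0, 1: 0.0}
TRUE_VALUE = sum(PI_TGT[a] * REWARD[a] for a in (0, 1))  # 0.95

def one_trial(n=10):
    """Return (ordinary IS, weighted IS) estimates from n behavior samples."""
    ws, wrs = [], []
    for _ in range(n):
        a = random.choice([0, 1])
        w = PI_TGT[a] / PI_BEH[a]
        ws.append(w)
        wrs.append(w * REWARD[a])
    ois = sum(wrs) / n                # can exceed the max reward of 1
    wis = sum(wrs) / sum(ws)          # self-normalized: always in [0, 1] here
    return ois, wis

trials = [one_trial() for _ in range(2000)]
mse_ois = sum((o - TRUE_VALUE) ** 2 for o, _ in trials) / len(trials)
mse_wis = sum((w - TRUE_VALUE) ** 2 for _, w in trials) / len(trials)
print(f"MSE: ordinary IS {mse_ois:.4f}, weighted IS {mse_wis:.4f}")
```

Weighted IS trades a small bias for a much lower mean squared error, and it is this estimator that the paper carries over to linear function approximation as WIS-LSTD(λ).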