Probability Tree

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 47166 Experts worldwide ranked by ideXlab platform

Alexander Strehl - One of the best experts on this subject based on the ideXlab platform.

  • conditional Probability Tree estimation analysis and algorithms
    Uncertainty in Artificial Intelligence, 2009
    Co-Authors: Alina Beygelzimer, John Langford, Yuri Lifshits, Gregory B Sorkin, Alexander Strehl
    Abstract:

    We consider the problem of estimating the conditional Probability of a label in time O(log n), where n is the number of possible labels. We analyze a natural reduction of this problem to a set of binary regression problems organized in a Tree structure, proving a regret bound that scales with the depth of the Tree. Motivated by this analysis, we propose the first online algorithm which provably constructs a logarithmic-depth Tree on the set of labels to solve this problem. We test the algorithm empirically, showing that it works successfully on a dataset with roughly 10^6 labels.
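    The reduction described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' online construction: the `Node` class, the halving split, and the constant regressor in the usage note are all invented for this sketch. Each internal node holds a binary estimator of the probability of descending to the right branch, so a conditional probability is a product over a single root-to-leaf path of length O(log n).

    ```python
    class Node:
        """Internal node: splits the label set and holds a branch-probability
        estimator; a leaf holds a single label."""
        def __init__(self, labels):
            self.labels = labels
            self.predict_right = None   # x -> P(go right | x); None at leaves
            self.left = self.right = None

    def build_tree(labels, make_regressor):
        """Recursively halve the label set, giving a tree of depth O(log n)."""
        node = Node(labels)
        if len(labels) > 1:
            mid = len(labels) // 2
            node.predict_right = make_regressor(labels[:mid], labels[mid:])
            node.left = build_tree(labels[:mid], make_regressor)
            node.right = build_tree(labels[mid:], make_regressor)
        return node

    def conditional_probability(node, x, label):
        """P(label | x) is the product of branch probabilities along the
        root-to-leaf path, so only O(log n) regressors are evaluated."""
        p = 1.0
        while node.left is not None:
            p_right = node.predict_right(x)
            if label in node.right.labels:
                p *= p_right
                node = node.right
            else:
                p *= 1.0 - p_right
                node = node.left
        return p
    ```

    With a placeholder regressor that always returns 0.5 and four labels, each label receives probability 0.25 and the estimates sum to one; in the actual method each `predict_right` would be a learned binary regressor.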

Serafín Moral - One of the best experts on this subject based on the ideXlab platform.

  • approximate inference in bayesian networks using binary Probability Trees
    International Journal of Approximate Reasoning, 2011
    Co-Authors: Andrés Cano, Manuel Gómez-Olmedo, Serafín Moral
    Abstract:

    The present paper introduces a new kind of representation for the potentials in a Bayesian network: binary Probability Trees. They enable the representation of context-specific independences in more detail than standard Probability Trees. This enhanced capability leads to more efficient inference algorithms for some types of Bayesian networks. This paper explains the procedure for building a binary Probability Tree from a given potential, which is similar to the one employed for building standard Probability Trees. It also offers a way of pruning a binary Tree in order to reduce its size. This makes it possible to obtain exact or approximate inference results, depending on an input threshold. This paper also provides detailed algorithms for performing the basic operations on potentials (restriction, combination and marginalization) directly on binary Trees. Finally, some experiments are described where binary Trees are used with the variable elimination algorithm, comparing their performance with that of standard Probability Trees.
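    The pruning idea can be sketched as follows. This is a simplified illustration under an invented criterion: where the paper prunes based on an information measure, the sketch below collapses any subtree whose leaf values lie within a fixed range, replacing them by their mean. The `BTree` class and the range test are assumptions of this sketch, not the paper's definitions.

    ```python
    class BTree:
        """Binary probability tree node: either a leaf holding a potential
        value, or an internal node testing one binary proposition."""
        def __init__(self, value=None, var=None, low=None, high=None):
            self.value = value   # leaf value; None for internal nodes
            self.var = var
            self.low = low
            self.high = high

        def is_leaf(self):
            return self.value is not None

    def leaves(t):
        """Collect all leaf values below t."""
        if t.is_leaf():
            return [t.value]
        return leaves(t.low) + leaves(t.high)

    def prune(t, threshold):
        """Collapse any subtree whose leaf values differ by at most
        `threshold` into a single leaf holding their mean; threshold 0
        keeps the representation exact, larger values trade size for error."""
        if t.is_leaf():
            return t
        low = prune(t.low, threshold)
        high = prune(t.high, threshold)
        vals = leaves(low) + leaves(high)
        if max(vals) - min(vals) <= threshold:
            return BTree(value=sum(vals) / len(vals))
        return BTree(var=t.var, low=low, high=high)
    ```

    With threshold 0 the tree is returned unchanged (exact inference); raising the threshold merges near-equal leaves, which is what lets the same machinery produce either exact or approximate results.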

Andrés Cano - One of the best experts on this subject based on the ideXlab platform.

  • approximate inference in bayesian networks using binary Probability Trees
    International Journal of Approximate Reasoning, 2011
    Co-Authors: Andrés Cano, Manuel Gómez-Olmedo, Serafín Moral
    Abstract:

    The present paper introduces a new kind of representation for the potentials in a Bayesian network: binary Probability Trees. They enable the representation of context-specific independences in more detail than standard Probability Trees. This enhanced capability leads to more efficient inference algorithms for some types of Bayesian networks. This paper explains the procedure for building a binary Probability Tree from a given potential, which is similar to the one employed for building standard Probability Trees. It also offers a way of pruning a binary Tree in order to reduce its size. This makes it possible to obtain exact or approximate inference results, depending on an input threshold. This paper also provides detailed algorithms for performing the basic operations on potentials (restriction, combination and marginalization) directly on binary Trees. Finally, some experiments are described where binary Trees are used with the variable elimination algorithm, comparing their performance with that of standard Probability Trees.

  • novel strategies to approximate Probability Trees in penniless propagation
    International Journal of Intelligent Systems, 2003
    Co-Authors: Andrés Cano, Serafín Moral, Antonio Salmeron
    Abstract:

    In this article we introduce some modifications to the Penniless propagation algorithm. When a message through the join Tree is approximated, the corresponding error is quantified in terms of an improved information measure, which leads to a new way of pruning several values in a Probability Tree (representing a message), replacing them by a single value computed from the values stored in the Tree being pruned, while taking into account the message stored in the opposite direction. We have also considered the possibility of replacing small Probability values by zero. Locally, this is not an optimal approximation strategy, but Penniless propagation carries out many different local approximations in order to estimate the posterior probabilities and, as we show in some experiments, replacing small values by zeros can improve the quality of the final approximations. © 2003 Wiley Periodicals, Inc.
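    The zero-replacement idea can be illustrated on a flat vector of probabilities rather than a Probability Tree; the function name and the renormalization step are assumptions of this sketch, not the paper's procedure. Entries below a threshold become exact zeros, which is attractive in tree representations because whole zero-valued subtrees can then be collapsed.

    ```python
    def prune_small_to_zero(values, epsilon):
        """Replace entries below `epsilon` with exact zeros and renormalize.
        Locally suboptimal, but exact zeros make subsequent tree operations
        cheaper because zero subtrees can be collapsed to a single leaf."""
        zeroed = [v if v >= epsilon else 0.0 for v in values]
        total = sum(zeroed)
        return [v / total for v in zeroed]
    ```

    For example, pruning `[0.5, 0.45, 0.03, 0.02]` with `epsilon = 0.05` zeroes the last two entries and rescales the first two so the result still sums to one.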

Jude Shavlik - One of the best experts on this subject based on the ideXlab platform.

  • Gradient-based boosting for statistical relational learning: The relational dependency network case
    Machine Learning, 2012
    Co-Authors: Sriraam Natarajan, Bernd Gutmann, Tushar Khot, Kristian Kersting, Jude Shavlik
    Abstract:

    Dependency networks approximate a joint Probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single Probability Tree per random variable, we propose to turn the problem into a series of relational function-approximation problems solved with gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and, in turn, quickly estimate a very expressive model. Our experimental results on several data sets show that this boosting method learns RDNs efficiently compared to state-of-the-art statistical relational learning approaches.
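    The functional-gradient idea behind this approach can be sketched on a propositional toy problem rather than relational data: each boosting round fits a regression tree (here, a one-split stump on a single numeric feature, an assumption of this sketch) to the pointwise gradient y − p of the log-likelihood, and the model is a sigmoid applied to the sum of all fitted trees.

    ```python
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def fit_stump(xs, residuals):
        """One-split regression stump on a single numeric feature: returns a
        function x -> mean residual on x's side of the best split."""
        best = None
        for split in xs:
            left = [r for x, r in zip(xs, residuals) if x <= split]
            right = [r for x, r in zip(xs, residuals) if x > split]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = sum((r - (lm if x <= split else rm)) ** 2
                      for x, r in zip(xs, residuals))
            if best is None or err < best[0]:
                best = (err, split, lm, rm)
        _, s, lm, rm = best
        return lambda x: lm if x <= s else rm

    def boost(xs, ys, rounds=20):
        """Functional-gradient boosting: each round fits a stump to the
        gradient y - p, and the log-odds is the sum of all fitted stumps."""
        trees = []
        for _ in range(rounds):
            preds = [sigmoid(sum(t(x) for t in trees)) for x in xs]
            residuals = [y - p for y, p in zip(ys, preds)]
            trees.append(fit_stump(xs, residuals))
        return lambda x: sigmoid(sum(t(x) for t in trees))
    ```

    On a toy threshold problem (labels 0 below x = 3, 1 above), twenty rounds drive the predictions close to 0 and 1 at the extremes; the RDN method replaces these stumps with relational regression trees over first-order features.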

Alina Beygelzimer - One of the best experts on this subject based on the ideXlab platform.

  • conditional Probability Tree estimation analysis and algorithms
    Uncertainty in Artificial Intelligence, 2009
    Co-Authors: Alina Beygelzimer, John Langford, Yuri Lifshits, Gregory B Sorkin, Alexander Strehl
    Abstract:

    We consider the problem of estimating the conditional Probability of a label in time O(log n), where n is the number of possible labels. We analyze a natural reduction of this problem to a set of binary regression problems organized in a Tree structure, proving a regret bound that scales with the depth of the Tree. Motivated by this analysis, we propose the first online algorithm which provably constructs a logarithmic-depth Tree on the set of labels to solve this problem. We test the algorithm empirically, showing that it works successfully on a dataset with roughly 10^6 labels.