Equivalence Classes

The Experts below are selected from a list of 28188 Experts worldwide ranked by ideXlab platform

Michael D Perlman - One of the best experts on this subject based on the ideXlab platform.

  • Characterizing Markov Equivalence Classes for AMP Chain Graph Models
    Annals of Statistics, 2006
    Co-Authors: Steen A Andersson, Michael D Perlman
    Abstract:

    Chain graphs (CG) (= adicyclic graphs) use undirected and directed edges to represent both structural and associative dependences. Like acyclic directed graphs (ADGs), the CG associated with a statistical Markov model may not be unique, so CGs fall into Markov Equivalence Classes, which may be superexponentially large, leading to unidentifiability and computational inefficiency in model search and selection. It is shown here that, under the Andersson-Madigan-Perlman (AMP) interpretation of a CG, each Markov-Equivalence class can be uniquely represented by a single distinguished CG, the AMP essential graph, that is itself simultaneously Markov equivalent to all CGs in the AMP Markov Equivalence class. A complete characterization of AMP essential graphs is obtained. Like the essential graph previously introduced for ADGs, the AMP essential graph will play a fundamental role for inference and model search and selection for AMP CG models.

  • The Size Distribution for Markov Equivalence Classes of Acyclic Digraph Models
    Artificial Intelligence, 2002
    Co-Authors: Steven Gillispie, Michael D Perlman
    Abstract:

    Bayesian networks, equivalently graphical Markov models determined by acyclic digraphs or ADGs (also called directed acyclic graphs or DAGs), have proved to be both effective and efficient for representing complex multivariate dependence structures in terms of local relations. However, model search and selection is potentially complicated by the many-to-one correspondence between ADGs and the statistical models that they represent. If the ADGs/models ratio is large, search procedures based on unique graphical representations of Equivalence Classes of ADGs could provide substantial computational efficiency. Hitherto, the value of the ADGs/models ratio has been calculated only for graphs with n = 5 or fewer vertices. In the present study, a computer program was written to enumerate the Equivalence Classes of ADG models and study the distributions of class sizes and number of edges for graphs up to n = 10 vertices. The ratio of ADGs to numbers of Classes appears to approach an asymptote of about 3.7. Distributions of the Classes according to number of edges and class size were produced which also appear to be approaching asymptotic limits. Imposing a bound on the maximum number of parents to any vertex causes little change if the bound is sufficiently large, with four being a possible minimum. The program also includes a new variation of an orderly algorithm for generating undirected graphs.
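
    The enumeration described above can be reproduced at toy scale. The sketch below is our own brute-force check, not the paper's program: it assumes the standard Verma-Pearl criterion (two ADGs are Markov equivalent iff they share the same skeleton and v-structures) and all helper names are illustrative.

    ```python
    from itertools import combinations, product

    def is_acyclic(edges, n):
        """Kahn-style check: repeatedly peel off vertices with no incoming arc."""
        nodes, remaining = set(range(n)), set(edges)
        while nodes:
            sources = [v for v in nodes if all(b != v for (_, b) in remaining)]
            if not sources:
                return False  # a cycle remains
            nodes -= set(sources)
            remaining = {(a, b) for (a, b) in remaining if a in nodes and b in nodes}
        return True

    def signature(dag):
        """Skeleton plus v-structures: the Markov-Equivalence-class invariant."""
        skel = frozenset(frozenset(arc) for arc in dag)
        vs = set()
        for c in {v for arc in dag for v in arc}:
            parents = [a for (a, b) in dag if b == c]
            for a, b in combinations(parents, 2):
                if frozenset((a, b)) not in skel:
                    vs.add((frozenset((a, b)), c))
        return (skel, frozenset(vs))

    n = 3
    arcs = [(a, b) for a in range(n) for b in range(n) if a != b]
    dags = []
    for bits in product([0, 1], repeat=len(arcs)):
        edges = {arcs[i] for i, bit in enumerate(bits) if bit}
        if is_acyclic(edges, n):
            dags.append(edges)

    classes = {signature(d) for d in dags}
    print(len(dags), len(classes))  # 25 11
    ```

    For n = 3 the 25 labelled ADGs fall into 11 Equivalence Classes, a ratio of 25/11 ≈ 2.27; the abstract above reports this ratio climbing toward roughly 3.7 by n = 10.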

  • A Characterization of Markov Equivalence Classes for Acyclic Digraphs
    Annals of Statistics, 1997
    Co-Authors: Steen A Andersson, David Madigan, Michael D Perlman
    Abstract:

    Undirected graphs and acyclic digraphs (ADGs), as well as their mutual extension to chain graphs, are widely used to describe dependencies among variables in multivariate distributions. In particular, the likelihood functions of ADG models admit convenient recursive factorizations that often allow explicit maximum likelihood estimates and that are well suited to building Bayesian networks for expert systems. Whereas the undirected graph associated with a dependence model is uniquely determined, there may, however, be many ADGs that determine the same dependence (= Markov) model. Thus, the family of all ADGs with a given set of vertices is naturally partitioned into Markov-Equivalence Classes, each class being associated with a unique statistical model. Statistical procedures, such as model selection or model averaging, that fail to take into account these Equivalence Classes, may incur substantial computational or other inefficiencies. Here it is shown that each Markov-Equivalence class is uniquely determined by a single chain graph, the essential graph, that is itself simultaneously Markov equivalent to all ADGs in the Equivalence class. Essential graphs are characterized, a polynomial-time algorithm for their construction is given, and their applications to model selection and other statistical questions are described.
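
    The equivalence notion at the heart of this paper has a compact operational form: two ADGs are Markov equivalent exactly when they share the same skeleton and the same v-structures (immoralities). A minimal sketch, with function names of our own choosing:

    ```python
    from itertools import combinations

    def skeleton(dag):
        """Undirected edge set of a DAG given as a set of (parent, child) arcs."""
        return {frozenset(arc) for arc in dag}

    def v_structures(dag):
        """Immoralities a -> c <- b where a and b are non-adjacent."""
        skel = skeleton(dag)
        vs = set()
        for c in {v for arc in dag for v in arc}:
            parents = [a for (a, b) in dag if b == c]
            for a, b in combinations(parents, 2):
                if frozenset((a, b)) not in skel:
                    vs.add((frozenset((a, b)), c))
        return vs

    def markov_equivalent(g1, g2):
        """Verma-Pearl criterion: equal skeletons and equal v-structures."""
        return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

    chain    = {("x", "y"), ("y", "z")}   # x -> y -> z
    fork     = {("y", "x"), ("y", "z")}   # x <- y -> z
    collider = {("x", "y"), ("z", "y")}   # x -> y <- z
    print(markov_equivalent(chain, fork))      # True: same skeleton, no v-structures
    print(markov_equivalent(chain, collider))  # False: the collider is an immorality
    ```

    The chain and the fork encode the same conditional-independence model, so they sit in one Equivalence class; the collider introduces a v-structure and sits in a class by itself.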

  • Bayesian Model Averaging and Model Selection for Markov Equivalence Classes of Acyclic Digraphs
    Communications in Statistics-theory and Methods, 1996
    Co-Authors: David Madigan, Steen A Andersson, Michael D Perlman, Chris Volinsky
    Abstract:

    Acyclic digraphs (ADGs) are widely used to describe dependences among variables in multivariate distributions. In particular, the likelihood functions of ADG models admit convenient recursive factorizations that often allow explicit maximum likelihood estimates and that are well suited to building Bayesian networks for expert systems. There may, however, be many ADGs that determine the same dependence (= Markov) model. Thus, the family of all ADGs with a given set of vertices is naturally partitioned into Markov-Equivalence Classes, each class being associated with a unique statistical model. Statistical procedures, such as model selection or model averaging, that fail to take into account these Equivalence Classes, may incur substantial computational or other inefficiencies. Recent results have shown that each Markov-Equivalence class is uniquely determined by a single chain graph, the essential graph, that is itself Markov-equivalent simultaneously to all ADGs in the Equivalence class. Here we propose t...

David Maxwell Chickering - One of the best experts on this subject based on the ideXlab platform.

  • Learning Equivalence Classes of Bayesian Network Structures
    Journal of Machine Learning Research, 2002
    Co-Authors: David Maxwell Chickering
    Abstract:

    Two Bayesian-network structures are said to be equivalent if the set of distributions that can be represented with one of those structures is identical to the set of distributions that can be represented with the other. Many scoring criteria that are used to learn Bayesian-network structures from data are score equivalent; that is, these criteria do not distinguish among networks that are equivalent. In this paper, we consider using a score equivalent criterion in conjunction with a heuristic search algorithm to perform model selection or model averaging. We argue that it is often appropriate to search among Equivalence Classes of network structures as opposed to the more common approach of searching among individual Bayesian-network structures. We describe a convenient graphical representation for an Equivalence class of structures, and introduce a set of operators that can be applied to that representation by a search algorithm to move among Equivalence Classes. We show that our Equivalence-class operators can be scored locally, and thus share the computational efficiency of traditional operators defined for individual structures. We show experimentally that a greedy model-selection algorithm using our representation yields slightly higher-scoring structures than the traditional approach without any additional time overhead, and we argue that more sophisticated search algorithms are likely to benefit much more.

  • Learning Equivalence Classes of Bayesian Network Structures
    Uncertainty in Artificial Intelligence, 1996
    Co-Authors: David Maxwell Chickering
    Abstract:

    Approaches to learning Bayesian networks from data typically combine a scoring function with a heuristic search procedure. Given a Bayesian network structure, many of the scoring functions derived in the literature return a score for the entire Equivalence class to which the structure belongs. When using such a scoring function, it is appropriate for the heuristic search algorithm to search over Equivalence Classes of Bayesian networks as opposed to individual structures. We present the general formulation of a search space for which the states of the search correspond to Equivalence Classes of structures. Using this space, any one of a number of heuristic search algorithms can easily be applied. We compare greedy search performance in the proposed search space to greedy search performance in a search space for which the states correspond to individual Bayesian network structures.

Bin Yu - One of the best experts on this subject based on the ideXlab platform.

  • Formulas for Counting the Sizes of Markov Equivalence Classes of Directed Acyclic Graphs
    arXiv: Machine Learning, 2016
    Co-Authors: Yangbo He, Bin Yu
    Abstract:

    The sizes of Markov Equivalence Classes of directed acyclic graphs play important roles in measuring the uncertainty and complexity in causal learning. A Markov Equivalence class can be represented by an essential graph and its undirected subgraphs determine the size of the class. In this paper, we develop a method to derive the formulas for counting the sizes of Markov Equivalence Classes. We first introduce a new concept of core graph. The size of a Markov Equivalence class of interest is a polynomial of the number of vertices given its core graph. Then, we discuss the recursive and explicit formula of the polynomial, and provide an algorithm to derive the size formula via symbolic computation for any given core graph. The proposed size formula derivation sheds light on the relationships between the size of a Markov Equivalence class and its representation graph, and makes size counting efficient, even when the essential graphs contain non-sparse undirected subgraphs.

  • Counting and Exploring Sizes of Markov Equivalence Classes of Directed Acyclic Graphs
    Journal of Machine Learning Research, 2015
    Co-Authors: Yangbo He, Bin Yu
    Abstract:

    When learning a directed acyclic graph (DAG) model via observational data, one generally cannot identify the underlying DAG, but can potentially obtain a Markov Equivalence class. The size (the number of DAGs) of a Markov Equivalence class is crucial to infer causal effects or to learn the exact causal DAG via further interventions. Given a set of Markov Equivalence Classes, the distribution of their sizes is a key consideration in developing learning methods. However, counting the size of an Equivalence class with many vertices is usually computationally infeasible, and the existing literature reports the size distributions only for Equivalence Classes with ten or fewer vertices. In this paper, we develop a method to compute the size of a Markov Equivalence class. We first show that there are five types of Markov Equivalence Classes whose sizes can be formulated as five functions of the number of vertices respectively. Then we introduce a new concept of a rooted sub-class. The graph representations of rooted sub-Classes of a Markov Equivalence class are used to partition this class recursively until the sizes of all rooted sub-Classes can be computed via the five functions. The proposed size counting is efficient for Markov Equivalence Classes of sparse DAGs with hundreds of vertices. Finally, we explore the size and edge distributions of Markov Equivalence Classes and find experimentally that, in general, (1) most Markov Equivalence Classes are half completed and their average sizes are small, and (2) the sizes of sparse Classes grow approximately exponentially with the number of vertices.
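
    At toy scale, the size of an Equivalence class can be checked by exhaustive search, feasible only for very small n, unlike the paper's recursive method. The sketch below is our own and assumes the skeleton-plus-v-structures characterization of Markov equivalence:

    ```python
    from itertools import combinations, product

    def is_acyclic(edges, n):
        """Kahn-style check: repeatedly peel off vertices with no incoming arc."""
        nodes, remaining = set(range(n)), set(edges)
        while nodes:
            sources = [v for v in nodes if all(b != v for (_, b) in remaining)]
            if not sources:
                return False
            nodes -= set(sources)
            remaining = {(a, b) for (a, b) in remaining if a in nodes and b in nodes}
        return True

    def signature(dag):
        """Skeleton plus v-structures: determines the Markov Equivalence class."""
        skel = frozenset(frozenset(arc) for arc in dag)
        vs = set()
        for c in {v for arc in dag for v in arc}:
            parents = [a for (a, b) in dag if b == c]
            for a, b in combinations(parents, 2):
                if frozenset((a, b)) not in skel:
                    vs.add((frozenset((a, b)), c))
        return (skel, frozenset(vs))

    def class_size(dag, n):
        """Number of DAGs on n labelled vertices Markov-equivalent to `dag`."""
        target = signature(dag)
        arcs = [(a, b) for a in range(n) for b in range(n) if a != b]
        count = 0
        for bits in product([0, 1], repeat=len(arcs)):
            edges = {arcs[i] for i, bit in enumerate(bits) if bit}
            if is_acyclic(edges, n) and signature(edges) == target:
                count += 1
        return count

    print(class_size({(0, 1), (1, 2)}, 3))          # 3: the chain 0 -> 1 -> 2
    print(class_size({(0, 1), (2, 1)}, 3))          # 1: the collider 0 -> 1 <- 2
    print(class_size({(0, 1), (0, 2), (1, 2)}, 3))  # 6: complete DAG
    ```

    The collider is its own class of size 1, while the complete DAG's class contains all 3! vertex orderings, illustrating why class size matters when choosing further interventions.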

  • Reversible MCMC on Markov Equivalence Classes of Sparse Directed Acyclic Graphs
    Annals of Statistics, 2013
    Co-Authors: Yangbo He, Jinzhu Jia, Bin Yu
    Abstract:

    Graphical models are popular statistical tools used to represent dependent or causal complex systems. Statistically equivalent causal or directed graphical models are said to belong to the same Markov Equivalence class. It is of great interest to describe and understand the space of such Classes. However, with currently known algorithms, sampling over such Classes is only feasible for graphs with fewer than approximately 20 vertices. In this paper, we design reversible irreducible Markov chains on the space of Markov Equivalence Classes by proposing a perfect set of operators that determine the transitions of the Markov chain. The stationary distribution of a proposed Markov chain has a closed form and can be computed easily. Specifically, we construct a concrete perfect set of operators on sparse Markov Equivalence Classes by introducing appropriate conditions on each possible operator. Algorithms and their accelerated versions are provided to efficiently generate Markov chains and to explore properties of Markov Equivalence Classes of sparse directed acyclic graphs (DAGs) with thousands of vertices. We find experimentally that in most Markov Equivalence Classes of sparse DAGs, (1) most edges are directed, (2) most undirected subgraphs are small and (3) the number of these undirected subgraphs grows approximately linearly with the number of vertices.

Steven Gillispie - One of the best experts on this subject based on the ideXlab platform.

  • Formulas for Counting Acyclic Digraph Markov Equivalence Classes
    Journal of Statistical Planning and Inference, 2006
    Co-Authors: Steven Gillispie
    Abstract:

    Multivariate Markov dependencies between different variables often can be represented graphically using acyclic digraphs (ADGs). In certain cases, though, different ADGs represent the same statistical model, thus leading to a set of Equivalence Classes of ADGs that constitute the true universe of available graphical models. Building upon the previously known formulas for counting the number of acyclic digraphs and the number of Equivalence Classes of size 1, formulas are developed to count ADG Equivalence Classes of arbitrary size, based on the chordal graph configurations that produce a class of that size. Theorems to validate the formulas as well as to aid in determining the appropriate chordal graphs to use for a given class size are included.
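
    The counts that such formulas target can be cross-checked by brute force for tiny graphs. The sketch below is our own check, not the paper's chordal-graph formulas: it tabulates, for n = 3, how many Equivalence Classes exist of each size, again assuming the skeleton-plus-v-structures criterion.

    ```python
    from collections import Counter
    from itertools import combinations, product

    def is_acyclic(edges, n):
        """Kahn-style check: repeatedly peel off vertices with no incoming arc."""
        nodes, remaining = set(range(n)), set(edges)
        while nodes:
            sources = [v for v in nodes if all(b != v for (_, b) in remaining)]
            if not sources:
                return False
            nodes -= set(sources)
            remaining = {(a, b) for (a, b) in remaining if a in nodes and b in nodes}
        return True

    def signature(dag):
        """Skeleton plus v-structures: determines the Markov Equivalence class."""
        skel = frozenset(frozenset(arc) for arc in dag)
        vs = set()
        for c in {v for arc in dag for v in arc}:
            parents = [a for (a, b) in dag if b == c]
            for a, b in combinations(parents, 2):
                if frozenset((a, b)) not in skel:
                    vs.add((frozenset((a, b)), c))
        return (skel, frozenset(vs))

    n = 3
    arcs = [(a, b) for a in range(n) for b in range(n) if a != b]
    sig_counts = Counter()
    for bits in product([0, 1], repeat=len(arcs)):
        edges = {arcs[i] for i, bit in enumerate(bits) if bit}
        if is_acyclic(edges, n):
            sig_counts[signature(edges)] += 1

    # Map each class size to the number of Classes of that size.
    size_distribution = Counter(sig_counts.values())
    print(dict(sorted(size_distribution.items())))  # {1: 4, 2: 3, 3: 3, 6: 1}
    ```

    For n = 3 there are four Classes of size 1 (the empty graph and the three colliders), three of size 2, three of size 3, and one of size 6, which together account for all 25 ADGs.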

  • The Size Distribution for Markov Equivalence Classes of Acyclic Digraph Models
    Artificial Intelligence, 2002
    Co-Authors: Steven Gillispie, Michael D Perlman
    Abstract:

    Bayesian networks, equivalently graphical Markov models determined by acyclic digraphs or ADGs (also called directed acyclic graphs or DAGs), have proved to be both effective and efficient for representing complex multivariate dependence structures in terms of local relations. However, model search and selection is potentially complicated by the many-to-one correspondence between ADGs and the statistical models that they represent. If the ADGs/models ratio is large, search procedures based on unique graphical representations of Equivalence Classes of ADGs could provide substantial computational efficiency. Hitherto, the value of the ADGs/models ratio has been calculated only for graphs with n = 5 or fewer vertices. In the present study, a computer program was written to enumerate the Equivalence Classes of ADG models and study the distributions of class sizes and number of edges for graphs up to n = 10 vertices. The ratio of ADGs to numbers of Classes appears to approach an asymptote of about 3.7. Distributions of the Classes according to number of edges and class size were produced which also appear to be approaching asymptotic limits. Imposing a bound on the maximum number of parents to any vertex causes little change if the bound is sufficiently large, with four being a possible minimum. The program also includes a new variation of an orderly algorithm for generating undirected graphs.

  • Enumerating Markov Equivalence Classes of Acyclic Digraph Models
    Uncertainty in Artificial Intelligence, 2001
    Co-Authors: Steven Gillispie, Christiane Lemieux
    Abstract:

    Graphical Markov models determined by acyclic digraphs (ADGs), also called directed acyclic graphs (DAGs), are widely studied in statistics, computer science (as Bayesian networks), operations research (as influence diagrams), and many related fields. Because different ADGs may determine the same Markov Equivalence class, it long has been of interest to determine the efficiency gained in model specification and search by working directly with Markov Equivalence Classes of ADGs rather than with ADGs themselves. A computer program was written to enumerate the Equivalence Classes of ADG models as specified by Pearl & Verma's Equivalence criterion. The program counted Equivalence Classes for models up to and including 10 vertices. The ratio of numbers of Classes to ADGs appears to approach an asymptote of about 0.267. Classes were analyzed according to number of edges and class size. By edges, the distribution of number of Classes approaches a Gaussian shape. By class size, Classes of size 1 are most common, with the proportions for larger sizes initially decreasing but then following a more irregular pattern. The maximum number of Classes generated by any undirected graph was found to increase approximately factorially. The program also includes a new variation of an orderly algorithm for generating undirected graphs.

Yangbo He - One of the best experts on this subject based on the ideXlab platform.

  • Formulas for Counting the Sizes of Markov Equivalence Classes of Directed Acyclic Graphs
    arXiv: Machine Learning, 2016
    Co-Authors: Yangbo He, Bin Yu
    Abstract:

    The sizes of Markov Equivalence Classes of directed acyclic graphs play important roles in measuring the uncertainty and complexity in causal learning. A Markov Equivalence class can be represented by an essential graph and its undirected subgraphs determine the size of the class. In this paper, we develop a method to derive the formulas for counting the sizes of Markov Equivalence Classes. We first introduce a new concept of core graph. The size of a Markov Equivalence class of interest is a polynomial of the number of vertices given its core graph. Then, we discuss the recursive and explicit formula of the polynomial, and provide an algorithm to derive the size formula via symbolic computation for any given core graph. The proposed size formula derivation sheds light on the relationships between the size of a Markov Equivalence class and its representation graph, and makes size counting efficient, even when the essential graphs contain non-sparse undirected subgraphs.

  • Counting and Exploring Sizes of Markov Equivalence Classes of Directed Acyclic Graphs
    Journal of Machine Learning Research, 2015
    Co-Authors: Yangbo He, Bin Yu
    Abstract:

    When learning a directed acyclic graph (DAG) model via observational data, one generally cannot identify the underlying DAG, but can potentially obtain a Markov Equivalence class. The size (the number of DAGs) of a Markov Equivalence class is crucial to infer causal effects or to learn the exact causal DAG via further interventions. Given a set of Markov Equivalence Classes, the distribution of their sizes is a key consideration in developing learning methods. However, counting the size of an Equivalence class with many vertices is usually computationally infeasible, and the existing literature reports the size distributions only for Equivalence Classes with ten or fewer vertices. In this paper, we develop a method to compute the size of a Markov Equivalence class. We first show that there are five types of Markov Equivalence Classes whose sizes can be formulated as five functions of the number of vertices respectively. Then we introduce a new concept of a rooted sub-class. The graph representations of rooted sub-Classes of a Markov Equivalence class are used to partition this class recursively until the sizes of all rooted sub-Classes can be computed via the five functions. The proposed size counting is efficient for Markov Equivalence Classes of sparse DAGs with hundreds of vertices. Finally, we explore the size and edge distributions of Markov Equivalence Classes and find experimentally that, in general, (1) most Markov Equivalence Classes are half completed and their average sizes are small, and (2) the sizes of sparse Classes grow approximately exponentially with the number of vertices.

  • Reversible MCMC on Markov Equivalence Classes of Sparse Directed Acyclic Graphs
    Annals of Statistics, 2013
    Co-Authors: Yangbo He, Jinzhu Jia, Bin Yu
    Abstract:

    Graphical models are popular statistical tools used to represent dependent or causal complex systems. Statistically equivalent causal or directed graphical models are said to belong to the same Markov Equivalence class. It is of great interest to describe and understand the space of such Classes. However, with currently known algorithms, sampling over such Classes is only feasible for graphs with fewer than approximately 20 vertices. In this paper, we design reversible irreducible Markov chains on the space of Markov Equivalence Classes by proposing a perfect set of operators that determine the transitions of the Markov chain. The stationary distribution of a proposed Markov chain has a closed form and can be computed easily. Specifically, we construct a concrete perfect set of operators on sparse Markov Equivalence Classes by introducing appropriate conditions on each possible operator. Algorithms and their accelerated versions are provided to efficiently generate Markov chains and to explore properties of Markov Equivalence Classes of sparse directed acyclic graphs (DAGs) with thousands of vertices. We find experimentally that in most Markov Equivalence Classes of sparse DAGs, (1) most edges are directed, (2) most undirected subgraphs are small and (3) the number of these undirected subgraphs grows approximately linearly with the number of vertices.