Variable Elimination

The Experts below are selected from a list of 3,363 Experts worldwide, ranked by the ideXlab platform.

Mohsen Kompany-Zareh - One of the best experts on this subject based on the ideXlab platform.

  • Uninformative Variable Elimination assisted by Gram–Schmidt orthogonalization/successive projection algorithm for descriptor selection in QSAR
    Chemometrics and Intelligent Laboratory Systems, 2013
    Co-Authors: Nematollah Omidikia, Mohsen Kompany-Zareh
    Abstract:

    Employment of Uninformative Variable Elimination (UVE) as a robust Variable selection method is reported in this study. Each regression coefficient represents the contribution of the corresponding Variable to the established model, but in the presence of uninformative Variables and collinearity, the reliability of the regression coefficients' magnitudes is questionable. The Successive Projection Algorithm (SPA) and Gram–Schmidt Orthogonalization (GSO) were implemented as pre-selection techniques for removing collinearity and redundancy among the Variables in the model. Uninformative Variable Elimination-partial least squares (UVE-PLS) was then performed on the pre-selected data set, and a c value was calculated for each descriptor. The c values of UVE assisted by SPA or GSO could thus be used to rank the Variables according to their importance. Leave-many-out cross-validation (LMO-CV) was applied to the ordered descriptors to select the optimal number of descriptors. The Selwood data set, comprising 31 molecules and 53 descriptors, and an anti-HIV data set, comprising 107 molecules and 160 descriptors, were used in this study. With GSO as the pre-selection method for the Selwood data and SPA for the anti-HIV data, the results were satisfactory both in the prediction ability of the constructed models and in the number of selected informative descriptors. Applying GSO-UVE-PLS to the Selwood data under optimized conditions selected seven descriptors out of 53, with q² = 0.769 and R² = 0.915; applying SPA-UVE-PLS to the anti-HIV data selected nine descriptors out of 160, with q² = 0.81, R² = 0.84 and Q²F3 = 0.8.
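
    The classic UVE-PLS computation behind the c values mentioned above can be sketched briefly. The sketch below is a minimal illustration assuming scikit-learn; the function name, the noise amplitude, and the leave-one-out resampling details are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal UVE-PLS sketch: append low-amplitude noise variables, refit PLS
    # under leave-one-out resampling, and score each variable by the stability
    # of its regression coefficient, c = mean(b) / std(b).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def uve_c_values(X, y, n_components=2, noise_scale=1e-10, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        noise = noise_scale * rng.normal(size=(n, p))  # uninformative columns
        Xa = np.hstack([X, noise])                     # augmented descriptor matrix
        coefs = np.empty((n, 2 * p))
        for i in range(n):                             # leave-one-out loop
            mask = np.arange(n) != i
            pls = PLSRegression(n_components=n_components)
            pls.fit(Xa[mask], y[mask])
            coefs[i] = pls.coef_.ravel()
        return coefs.mean(axis=0) / coefs.std(axis=0, ddof=1)

    # Real variables whose |c| does not exceed the largest |c| among the
    # noise columns (indices p and above) are treated as uninformative:
    # c = uve_c_values(X, y); keep = np.abs(c[:p]) > np.abs(c[p:]).max()
    ```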

Hendrik Blockeel - One of the best experts on this subject based on the ideXlab platform.

  • AAAI Workshop: Statistical Relational Artificial Intelligence - On the completeness of lifted Variable Elimination
    2020
    Co-Authors: Nima Taghipour, Daan Fierens, Guy Van Den Broeck, Jesse Davis, Hendrik Blockeel
    Abstract:

    Lifting aims at improving the efficiency of probabilistic inference by exploiting symmetries in the model. Various methods for lifted probabilistic inference have been proposed, but our understanding of these methods and the relationships between them is still limited, compared to their propositional counterparts. The only existing theoretical characterization of lifting is a completeness result for weighted first-order model counting. This paper addresses the question whether the same completeness result holds for other lifted inference algorithms. We answer this question positively for lifted Variable Elimination (LVE). Our proof relies on introducing a novel inference operator for LVE.

  • ILP - Generalized Counting for Lifted Variable Elimination
    Inductive Logic Programming, 2014
    Co-Authors: Nima Taghipour, Jesse Davis, Hendrik Blockeel
    Abstract:

    Lifted probabilistic inference methods exploit symmetries in the structure of probabilistic models to perform inference more efficiently. In lifted Variable Elimination, the symmetry among a group of interchangeable random Variables is captured by counting formulas, and exploited by operations that handle such formulas. In this paper, we generalize the structure of counting formulas and present a set of inference operators that introduce and eliminate these formulas from the model. This generalization expands the range of problems that can be solved in a lifted way. Our work is closely related to the recently introduced method of joint conversion. Due to its more fine grained formulation, however, our approach can provide more efficient solutions than joint conversion.
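
    To see why counting formulas pay off, consider a factor that is symmetric in a group of n interchangeable Boolean random Variables: its value depends only on the histogram of truth values, so a sum over 2^n ground assignments collapses to n + 1 histogram terms weighted by binomial multiplicities. The toy example below is my own illustration of this collapse, not the paper's operators; the potential phi is an arbitrary symmetric choice.

    ```python
    # Counting-formula intuition: for a symmetric potential over n
    # interchangeable Boolean variables, the partition function needs only
    # n + 1 histogram terms instead of 2**n ground assignments.
    from itertools import product
    from math import comb

    def phi(k, n):
        # Arbitrary symmetric potential depending only on #true = k (assumption).
        return 2.0 ** k + 2.0 ** (n - k)

    def z_ground(n):
        # Propositional sum over all 2**n joint assignments.
        return sum(phi(sum(x), n) for x in product([0, 1], repeat=n))

    def z_lifted(n):
        # Lifted sum over histograms (k true, n - k false), with multiplicity.
        return sum(comb(n, k) * phi(k, n) for k in range(n + 1))

    assert z_ground(12) == z_lifted(12)   # both equal 2 * 3**12 = 1062882
    print(z_lifted(200))                  # 201 terms; 2**200 is infeasible
    ```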

  • Completeness results for lifted Variable Elimination
    International Conference on Artificial Intelligence and Statistics, 2013
    Co-Authors: Nima Taghipour, Daan Fierens, Guy Van Den Broeck, Jesse Davis, Hendrik Blockeel
    Abstract:

    Lifting aims at improving the efficiency of probabilistic inference by exploiting symmetries in the model. Various methods for lifted probabilistic inference have been proposed, but our understanding of these methods and the relationships between them is still limited, compared to their propositional counterparts. The only existing theoretical characterization of lifting is a completeness result for weighted first-order model counting. This paper addresses the question whether the same completeness result holds for other lifted inference algorithms. We answer this question positively for lifted Variable Elimination (LVE). Our proof relies on introducing a novel inference operator for LVE.

Pawan Lingras - One of the best experts on this subject based on the ideXlab platform.

  • FLAIRS Conference - A Comparative Study of Variable Elimination and Arc Reversal in Bayesian Network Inference
    2020
    Co-Authors: Cory J. Butz, J. Chen, K. Konkel, Pawan Lingras
    Abstract:

    We compare two approaches to Bayesian network inference, called Variable Elimination (VE) and arc reversal (AR). It is established that VE never requires more space than AR, and never requires more computation (multiplications and additions) than AR.
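
    For readers unfamiliar with the first method, Variable Elimination sums Variables out of a product of factors one at a time, multiplying only the factors that mention the Variable being eliminated. The following is a minimal sketch over binary Variables, written for illustration; it is not the implementation used in the papers' comparison.

    ```python
    # Minimal Variable Elimination over discrete factors. A factor is a pair
    # (vars, table) where table maps tuples of 0/1 values to probabilities.
    from itertools import product

    def multiply(f, g):
        (fv, ft), (gv, gt) = f, g
        vs = fv + [v for v in gv if v not in fv]
        table = {}
        for vals in product([0, 1], repeat=len(vs)):   # binary variables only
            a = dict(zip(vs, vals))
            table[vals] = (ft[tuple(a[v] for v in fv)]
                           * gt[tuple(a[v] for v in gv)])
        return (vs, table)

    def sum_out(f, var):
        fv, ft = f
        i = fv.index(var)
        vs = fv[:i] + fv[i + 1:]
        table = {}
        for vals, p in ft.items():
            key = vals[:i] + vals[i + 1:]
            table[key] = table.get(key, 0.0) + p
        return (vs, table)

    def variable_elimination(factors, order):
        for var in order:
            touched = [f for f in factors if var in f[0]]
            prod = touched[0]
            for f in touched[1:]:
                prod = multiply(prod, f)
            factors = [f for f in factors if var not in f[0]] + [sum_out(prod, var)]
        result = factors[0]
        for f in factors[1:]:
            result = multiply(result, f)
        return result

    # Chain A -> B: eliminating A from P(A) * P(B|A) leaves P(B).
    p_a = (['A'], {(0,): 0.6, (1,): 0.4})
    p_b_given_a = (['A', 'B'], {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
    print(variable_elimination([p_a, p_b_given_a], ['A']))
    # (['B'], {(0,): 0.62, (1,): 0.38}) up to floating-point rounding
    ```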

  • Advances in Intelligent Information Systems - The CPT Structure of Variable Elimination in Discrete Bayesian Networks
    Advances in Intelligent Information Systems, 2020
    Co-Authors: Cory J. Butz, Pawan Lingras
    Abstract:

    We show that a conditional probability table (CPT) is obtained after every multiplication and every marginalization step when eliminating Variables from a discrete Bayesian network. The main advantage of our work is an improvement in presentation. The probability distributions constructed during Variable Elimination in Bayesian networks have always been denoted as potentials. Since CPTs are a special case of potential, our description is more precise and readable.
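
    A small numeric instance of that observation (the paper gives the general argument): multiplying the CPTs P(A) and P(B|A) yields the joint P(A, B), itself a CPT conditioned on nothing, and summing A out yields the CPT P(B). The arrays and numbers below are illustrative.

    ```python
    # Each intermediate table during elimination is again a CPT: entries are
    # nonnegative and the appropriate slices sum to one.
    import numpy as np

    p_a = np.array([0.6, 0.4])             # P(A),   shape (A,)
    p_b_given_a = np.array([[0.9, 0.1],    # P(B|A), shape (A, B)
                            [0.2, 0.8]])

    p_ab = p_a[:, None] * p_b_given_a      # multiplication step -> P(A, B)
    assert np.isclose(p_ab.sum(), 1.0)     # a CPT over {A, B} given nothing

    p_b = p_ab.sum(axis=0)                 # marginalization step -> P(B)
    assert np.isclose(p_b.sum(), 1.0)      # again a CPT
    print(p_b)                             # [0.62 0.38]
    ```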

  • Join tree propagation utilizing both arc reversal and Variable Elimination
    International Journal of Approximate Reasoning, 2011
    Co-Authors: Cory J. Butz, K. Konkel, Pawan Lingras
    Abstract:

    In this paper, we put forth the first join tree propagation algorithm that selectively applies either arc reversal (AR) or Variable Elimination (VE) to build the propagated messages. Our approach utilizes a recent method for identifying the propagated join tree messages a priori. When it is determined that a join tree node will construct a single distribution to be sent to a neighbouring node, VE is utilized as it builds a single distribution in the most direct fashion; otherwise, AR is applied as it maintains a factorization of distributions allowing for barren Variables to be exploited during propagation later on in the join tree. Experimental results, involving evidence processing in four benchmark Bayesian networks, empirically demonstrate that selectively applying VE and AR is faster than applying one of these methods exclusively on the entire network.
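
    The selection rule itself is simple to state in code. The sketch below is schematic: the helper callables and the message bookkeeping are hypothetical stand-ins, since the paper's a priori message identification is not reproduced here.

    ```python
    # Schematic dispatch between VE and AR when a join tree node prepares its
    # messages for one neighbour (helper callables are hypothetical).
    def build_messages(node, neighbour, needed, ve_build, ar_build):
        msgs = needed[(node, neighbour)]
        if len(msgs) == 1:
            # A single outgoing distribution: VE builds it most directly.
            return [ve_build(node, msgs[0])]
        # Several distributions: AR keeps a factorization, so barren
        # Variables can still be exploited later in the join tree.
        return [ar_build(node, m) for m in msgs]
    ```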

  • A formal comparison of Variable Elimination and arc reversal in Bayesian network inference
    Intelligent Decision Technologies, 2009
    Co-Authors: Cory J. Butz, J. Chen, K. Konkel, Pawan Lingras
    Abstract:

    We present a comparative study of two approaches to Bayesian network inference, called Variable Elimination (VE) and arc reversal (AR). It is established that VE never requires more space than AR, and never requires more computation (multiplications and additions) than AR. These two characteristics are supported by experimental results on six large BNs, which indicate that VE is never slower than AR and can perform inference significantly faster than AR.

  • Join tree propagation utilizing both arc reversal and Variable Elimination
    The Florida AI Research Society, 2009
    Co-Authors: Cory J. Butz, K. Konkel, Pawan Lingras
    Abstract:

    In this paper, we put forth the first join tree propagation algorithm that selectively applies either arc reversal (AR) or Variable Elimination (VE) to build the propagated messages. Our approach utilizes a recent method for identifying the propagated join tree messages a priori. When it is determined that precisely one message is to be constructed at a join tree node, VE is utilized to build this distribution; otherwise, AR is applied as it is better suited to construct multiple distributions passed between neighboring join tree nodes. Experimental results, involving evidence processing in seven real-world and one benchmark Bayesian network, empirically demonstrate that selectively applying VE and AR is faster than applying one of these methods exclusively on the entire network.

M Daszykowski - One of the best experts on this subject based on the ideXlab platform.

  • Retention prediction of peptides based on uninformative Variable Elimination by partial least squares
    Journal of Proteome Research, 2006
    Co-Authors: M Daszykowski, Tomasz Baczek, Yvan Vander Heyden
    Abstract:

    A quantitative structure-retention relationship analysis was performed on the chromatographic retention data of 90 peptides, measured by gradient elution reversed-phase liquid chromatography, and a large set of molecular descriptors computed for each peptide. Such an approach may be useful in proteomics research to improve the correct identification of peptides. A principal component analysis on the set of 1726 molecular descriptors reveals a high information overlap in the descriptor space. Since Variable selection is therefore advisable, the retention of the peptides is modeled with uninformative Variable Elimination partial least squares, in addition to classic partial least squares regression. The Kennard and Stone algorithm was used to select a calibration set (63 peptides) from the available samples. This set was used to build the quantitative structure-retention relationship models. The remaining 27 peptides were used as an independent external test set to evaluate the predictive power of the constructed models...
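
    The Kennard and Stone split mentioned above is a generic maximin procedure: start from the two most distant samples, then repeatedly add the sample farthest from its nearest already-selected neighbour. The sketch below is a standard formulation of that algorithm, not the paper's code; the shapes in the usage comment follow the 90-peptide setup.

    ```python
    # Kennard-Stone sample selection on Euclidean distances.
    import numpy as np

    def kennard_stone(X, n_select):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        # Seed with the two mutually most distant samples.
        selected = list(np.unravel_index(d.argmax(), d.shape))
        while len(selected) < n_select:
            remaining = [i for i in range(len(X)) if i not in selected]
            # Pick the sample whose nearest selected sample is farthest away.
            nearest = d[np.ix_(remaining, selected)].min(axis=1)
            selected.append(remaining[int(nearest.argmax())])
        return selected

    # Usage for the split described above (X: 90 x n_descriptors matrix):
    # calib = kennard_stone(X, 63); test = the remaining 27 row indices
    ```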

  • Improving QSAR models for the biological activity of HIV reverse transcriptase inhibitors: aspects of outlier detection and uninformative Variable Elimination
    Talanta, 2005
    Co-Authors: M Daszykowski, I Stanimirova, B Walczak, Frits Daeyaert, M R De Jonge, Jan Heeres, Luc Koymans, Paulus Joannes Lewi
    Abstract:

    The goal of this study is to derive a methodology for modeling the biological activity of non-nucleoside HIV Reverse Transcriptase (RT) inhibitors. The difficulties that were encountered during the modeling attempts are discussed, together with their origin and solutions. With the selected multivariate techniques (robust principal component analysis, partial least squares, robust partial least squares and uninformative Variable Elimination partial least squares), it is possible to explore and to model the contaminated data satisfactorily. It is shown that these techniques are versatile and valuable tools in modeling and exploring biochemical data.

Cory J. Butz - One of the best experts on this subject based on the ideXlab platform.

  • FLAIRS Conference - A Comparative Study of Variable Elimination and Arc Reversal in Bayesian Network Inference
    2020
    Co-Authors: Cory J. Butz, J. Chen, K. Konkel, Pawan Lingras
    Abstract:

    We compare two approaches to Bayesian network inference, called Variable Elimination (VE) and arc reversal (AR). It is established that VE never requires more space than AR, and never requires more computation (multiplications and additions) than AR.

  • Advances in Intelligent Information Systems - The CPT Structure of Variable Elimination in Discrete Bayesian Networks
    Advances in Intelligent Information Systems, 2020
    Co-Authors: Cory J. Butz, Pawan Lingras
    Abstract:

    We show that a conditional probability table (CPT) is obtained after every multiplication and every marginalization step when eliminating Variables from a discrete Bayesian network. The main advantage of our work is an improvement in presentation. The probability distributions constructed during Variable Elimination in Bayesian networks have always been denoted as potentials. Since CPTs are a special case of potential, our description is more precise and readable.

  • Join tree propagation utilizing both arc reversal and Variable Elimination
    International Journal of Approximate Reasoning, 2011
    Co-Authors: Cory J. Butz, K. Konkel, Pawan Lingras
    Abstract:

    In this paper, we put forth the first join tree propagation algorithm that selectively applies either arc reversal (AR) or Variable Elimination (VE) to build the propagated messages. Our approach utilizes a recent method for identifying the propagated join tree messages a priori. When it is determined that a join tree node will construct a single distribution to be sent to a neighbouring node, VE is utilized as it builds a single distribution in the most direct fashion; otherwise, AR is applied as it maintains a factorization of distributions allowing for barren Variables to be exploited during propagation later on in the join tree. Experimental results, involving evidence processing in four benchmark Bayesian networks, empirically demonstrate that selectively applying VE and AR is faster than applying one of these methods exclusively on the entire network.

  • A formal comparison of Variable Elimination and arc reversal in Bayesian network inference
    Intelligent Decision Technologies, 2009
    Co-Authors: Cory J. Butz, J. Chen, K. Konkel, Pawan Lingras
    Abstract:

    We present a comparative study of two approaches to Bayesian network inference, called Variable Elimination (VE) and arc reversal (AR). It is established that VE never requires more space than AR, and never requires more computation (multiplications and additions) than AR. These two characteristics are supported by experimental results on six large BNs, which indicate that VE is never slower than AR and can perform inference significantly faster than AR.

  • Join tree propagation utilizing both arc reversal and Variable Elimination
    The Florida AI Research Society, 2009
    Co-Authors: Cory J. Butz, K. Konkel, Pawan Lingras
    Abstract:

    In this paper, we put forth the first join tree propagation algorithm that selectively applies either arc reversal (AR) or Variable Elimination (VE) to build the propagated messages. Our approach utilizes a recent method for identifying the propagated join tree messages a priori. When it is determined that precisely one message is to be constructed at a join tree node, VE is utilized to build this distribution; otherwise, AR is applied as it is better suited to construct multiple distributions passed between neighboring join tree nodes. Experimental results, involving evidence processing in seven real-world and one benchmark Bayesian network, empirically demonstrate that selectively applying VE and AR is faster than applying one of these methods exclusively on the entire network.