Probabilistic Logic

The experts below are selected from a list of 20,358 experts worldwide, ranked by the ideXlab platform.

Luc De Raedt - One of the best experts on this subject based on the ideXlab platform.

  • Semantic and geometric reasoning for robotic grasping: a Probabilistic Logic approach
    Autonomous Robots, 2019
    Co-Authors: Laura Antanas, Plinio Moreno, Marion Neumann, Kristian Kersting, Rui Pimentel Figueiredo, José Santos-victor, Luc De Raedt
    Abstract:

    While any grasp must satisfy the grasping stability criteria, good grasps depend on the specific manipulation scenario: the object, its properties and functionalities, as well as the task and grasp constraints. We propose a Probabilistic Logic approach for robot grasping, which improves grasping capabilities by leveraging semantic object parts. It provides the robot with semantic reasoning skills about the most likely object part to be grasped, given the task constraints and object properties, while also dealing with the uncertainty of visual perception and grasp planning. The Probabilistic Logic framework is task-dependent: it semantically reasons about pre-grasp configurations with respect to the intended task and employs object-task affordances and object/task ontologies to encode rules that generalize over similar object parts and object/task categories. The use of Probabilistic Logic for task-dependent grasping contrasts with current approaches that usually learn direct mappings from visual perceptions to task-dependent grasping points. The Logic-based module receives data from a low-level module that extracts semantic object parts, and sends information to the low-level grasp planner. These three modules define our Probabilistic Logic framework, which is able to perform robotic grasping in realistic kitchen-related scenarios.
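
    As a flavor of the rules such a framework can encode, here is a minimal ProbLog-style sketch; the predicates, objects, and probabilities below are invented for illustration and are not taken from the paper:

        % Hypothetical sketch of task-dependent grasping rules.
        % Uncertain part detections arrive from the vision module
        % as probabilistic facts.
        0.8::part(cup1, handle).
        0.6::part(cup1, body).

        % Object/task affordances: which part suits which task.
        affords(handle, pour).
        affords(body, handover).

        % Grasp a detected part that affords the intended task.
        grasp_part(Obj, Part) :- part(Obj, Part), task(T), affords(Part, T).

        task(pour).
        query(grasp_part(cup1, P)).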

  • Neural Probabilistic Logic Programming in DeepProbLog
    arXiv: Artificial Intelligence, 2019
    Co-Authors: Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, Luc De Raedt
    Abstract:

    We introduce DeepProbLog, a neural Probabilistic Logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques of the underlying Probabilistic Logic programming language ProbLog can be adapted for the new language. We theoretically and experimentally demonstrate that DeepProbLog supports (i) both symbolic and subsymbolic representations and inference, (ii) program induction, (iii) Probabilistic (Logic) programming, and (iv) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive Probabilistic-Logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.
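
    The paper's running example is MNIST digit addition; the condensed sketch below shows the neural-predicate syntax (the name mnist_net follows the paper's example, but the program here is abridged):

        % A neural predicate: the network mnist_net classifies image X
        % as a digit 0..9, and its softmax output supplies the
        % probability of each ground atom digit(X, Y).
        nn(mnist_net, [X], Y, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) :: digit(X, Y).

        % The purely logical part: add two classified digits.
        addition(X, Y, Z) :- digit(X, N1), digit(Y, N2), Z is N1 + N2.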

  • NeurIPS - DeepProbLog: Neural Probabilistic Logic Programming
    2018
    Co-Authors: Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, Luc De Raedt
    Abstract:

    We introduce DeepProbLog, a Probabilistic Logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques can be adapted for the new language. Our experiments demonstrate that DeepProbLog supports (i) both symbolic and subsymbolic representations and inference, (ii) program induction, (iii) Probabilistic (Logic) programming, and (iv) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive Probabilistic-Logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.

  • T_P-Compilation for inference in Probabilistic Logic programs
    International Journal of Approximate Reasoning, 2016
    Co-Authors: Jonas Vlasselaer, Guy Van Den Broeck, Angelika Kimmig, Wannes Meert, Luc De Raedt
    Abstract:

    We propose T_P-compilation, a new inference technique for Probabilistic Logic programs that is based on forward reasoning. T_P-compilation proceeds incrementally in that it interleaves the knowledge compilation step for weighted model counting with forward reasoning on the Logic program. This leads to a novel anytime algorithm that provides hard bounds on the inferred probabilities. The main difference with existing inference techniques for Probabilistic Logic programs is that these are a sequence of isolated transformations. Typically, these transformations include conversion of the ground program into an equivalent propositional formula and compilation of this formula into a more tractable target representation for weighted model counting. An empirical evaluation shows that T_P-compilation effectively handles larger instances of complex or cyclic real-world problems than current sequential approaches, both for exact and anytime approximate inference. Furthermore, we show that T_P-compilation is conducive to inference in dynamic domains as it supports efficient updates to the compiled model. Highlights: a new inference technique for Probabilistic Logic programs; knowledge compilation interleaved with forward reasoning; exact as well as anytime approximate inference; an approach conducive to inference in dynamic models; empirical evaluation on various domains.
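
    Forward reasoning is particularly attractive for cyclic programs, which are awkward for proof-based pipelines. A small hypothetical example of the kind of program meant:

        % Invented smokers-style program with a cycle. T_P-compilation
        % iterates forward over the rules, maintaining for each derived
        % atom a compiled formula over the probabilistic facts.
        0.2::influences(a, b).
        0.3::influences(b, a).
        0.4::stress(a).

        smokes(X) :- stress(X).
        smokes(X) :- influences(Y, X), smokes(Y).

        query(smokes(b)).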

Thomas Lukasiewicz - One of the best experts on this subject based on the ideXlab platform.

  • Probabilistic Logic Programming under Inheritance with Overriding
    arXiv: Artificial Intelligence, 2013
    Co-Authors: Thomas Lukasiewicz
    Abstract:

    We present Probabilistic Logic programming under inheritance with overriding. This approach is based on new notions of entailment for reasoning with conditional constraints, which are obtained from the classical notion of Logical entailment by adding the principle of inheritance with overriding. This is done by using recent approaches to Probabilistic default reasoning with conditional constraints. We analyze the semantic properties of the new entailment relations. We also present algorithms for Probabilistic Logic programming under inheritance with overriding, and program transformations for an increased efficiency.
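
    In this line of work, knowledge is expressed as conditional constraints of the form (psi|phi)[l,u], read "the probability of psi given phi lies in [l,u]". A textbook-style illustration of inheritance with overriding (our example, not the paper's):

        (fly(X) | bird(X))[0.9, 1]       birds normally fly
        (fly(X) | penguin(X))[0, 0.05]   penguins normally do not
        (bird(X) | penguin(X))[1, 1]     all penguins are birds

    Under the new entailment relations, a penguin inherits the properties of birds except where a more specific constraint, here the one about flying, overrides them.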

  • Combining Probabilistic Logic programming with the power of maximum entropy
    Artificial Intelligence, 2004
    Co-Authors: Gabriele Kern-isberner, Thomas Lukasiewicz
    Abstract:

    This paper is on the combination of two powerful approaches to uncertain reasoning: Logic programming in a Probabilistic setting, on the one hand, and the information-theoretical principle of maximum entropy, on the other hand. More precisely, we present two approaches to Probabilistic Logic programming under maximum entropy. The first one is based on the usual notion of entailment under maximum entropy, and is defined for the very general case of Probabilistic Logic programs over Boolean events. The second one is based on a new notion of entailment under maximum entropy, where the principle of maximum entropy is coupled with the closed world assumption (CWA) from classical Logic programming. It is only defined for the more restricted case of Probabilistic Logic programs over conjunctive events. We then analyze the nonmonotonic behavior of both approaches along benchmark examples and along general properties for default reasoning from conditional knowledge bases. It turns out that both approaches have very nice nonmonotonic features. In particular, they realize some inheritance of Probabilistic knowledge along subclass relationships, without suffering from the problem of inheritance blocking and from the drowning problem. They both also satisfy the property of rational monotonicity and several irrelevance properties. We finally present algorithms for both approaches, which are based on generalizations of recent techniques for Probabilistic Logic programming under Logical entailment. The algorithm for the first approach still produces quite large weighted entropy maximization problems, while the one for the second approach generates optimization problems of the same size as the ones produced in Probabilistic Logic programming under Logical entailment.
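
    For reference, the selection principle behind both approaches: among all distributions Pr that satisfy the program, reasoning proceeds with the unique one of maximum entropy,

        Pr* = argmax_{Pr |= KB} H(Pr),   where  H(Pr) = - sum_w Pr(w) log Pr(w)

    so that, roughly, a conditional constraint is entailed under maximum entropy exactly when it holds in Pr*.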

  • Probabilistic Logic under coherence, model-theoretic Probabilistic Logic, and default reasoning in System P
    Journal of Applied Non-Classical Logics, 2002
    Co-Authors: Veronica Biazzo, Thomas Lukasiewicz, Angelo Gilio, Giuseppe Sanfilippo
    Abstract:

    We study Probabilistic Logic under the viewpoint of the coherence principle of de Finetti. In detail, we explore how Probabilistic reasoning under coherence is related to model-theoretic Probabilistic Logic and to default reasoning in System P.

  • ECSQARU - Probabilistic Logic under Coherence, Model-Theoretic Probabilistic Logic, and Default Reasoning
    Lecture Notes in Computer Science, 2001
    Co-Authors: Veronica Biazzo, Thomas Lukasiewicz, Angelo Gilio, Giuseppe Sanfilippo
    Abstract:

    We study Probabilistic Logic under the viewpoint of the coherence principle of de Finetti. In detail, we explore the relationship between coherence-based and model-theoretic Probabilistic Logic. Interestingly, we show that the notions of g-coherence and of g-coherent entailment can be expressed by combining notions in model-theoretic Probabilistic Logic with concepts from default reasoning. Crucially, we even show that Probabilistic reasoning under coherence is a Probabilistic generalization of default reasoning in system P. That is, we provide a new Probabilistic semantics for system P, which is neither based on infinitesimal probabilities nor on atomic-bound (or also big-stepped) probabilities. These results also give new insight into default reasoning with conditional objects.
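
    For orientation, system P is the standard set of postulates for a default consequence relation |~ (due to Kraus, Lehmann, and Magidor); the paper supplies a new Probabilistic semantics for exactly these rules:

        Reflexivity:            a |~ a
        Left Logical Equiv.:    if |= a <-> b and a |~ c, then b |~ c
        Right Weakening:        if |= a -> b and c |~ a, then c |~ b
        Cut:                    if a & b |~ c and a |~ b, then a |~ c
        Cautious Monotonicity:  if a |~ b and a |~ c, then a & b |~ c
        Or:                     if a |~ c and b |~ c, then a v b |~ c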

  • UAI - Probabilistic Logic Programming under Inheritance with Overriding
    2001
    Co-Authors: Thomas Lukasiewicz
    Abstract:

    We present Probabilistic Logic programming under inheritance with overriding. This approach is based on new notions of entailment for reasoning with conditional constraints, which are obtained from the classical notion of Logical entailment by adding inheritance with overriding. This is done by using recent approaches to Probabilistic default reasoning with conditional constraints. We analyze the semantic properties of the new entailment relations. We also present algorithms for Probabilistic Logic programming under inheritance with overriding, and we analyze its complexity in the propositional case.

Fabrizio Riguzzi - One of the best experts on this subject based on the ideXlab platform.

  • AI*IA - Expectation Maximization in Deep Probabilistic Logic Programming
    AI*IA 2018 – Advances in Artificial Intelligence, 2018
    Co-Authors: Arnaud Nguembang Fadja, Fabrizio Riguzzi, Evelina Lamma
    Abstract:

    Probabilistic Logic Programming (PLP) combines Logic and probability for representing and reasoning over domains with uncertainty. Hierarchical Probabilistic Logic Programming (HPLP) is a recent PLP language whose clauses are hierarchically organized, forming a deep neural network or arithmetic circuit. Inference in HPLP is done by circuit evaluation, and learning is therefore cheaper than in generic PLP languages. We present in this paper an Expectation Maximization algorithm, called Expectation Maximization Parameter learning for HIerarchical Probabilistic Logic programs (EMPHIL), for learning HPLP parameters. The algorithm converts an arithmetic circuit into a Bayesian network and performs the belief propagation algorithm over the corresponding factor graph.
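
    A minimal, hypothetical sketch of the hierarchical shape of an HPLP (the predicates and probabilities are invented, and the exact surface syntax varies across the HPLP papers):

        % Target predicate defined in terms of hidden predicates; the
        % layered clause structure mirrors an arithmetic circuit whose
        % parameters (0.3, 0.2, 0.1) are what EMPHIL learns.
        advised_by(A, B) : 0.3 :- student(A), professor(B), hidden(A, B).
        hidden(A, B) : 0.2 :- ta(Course, A), taught_by(Course, B).
        hidden(A, B) : 0.1 :- publication(P, A), publication(P, B).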

  • Lifted discriminative learning of Probabilistic Logic programs
    Machine Learning, 2018
    Co-Authors: Arnaud Nguembang Fadja, Fabrizio Riguzzi
    Abstract:

    Probabilistic Logic programming (PLP) provides a powerful tool for reasoning with uncertain relational models. However, learning Probabilistic Logic programs is expensive due to the high cost of inference. Among the proposals to overcome this problem, one of the most promising is lifted inference. In this paper we consider PLP models that are amenable to lifted inference and present an algorithm for performing parameter and structure learning of these models from positive and negative examples. We discuss parameter learning with EM and LBFGS and structure learning with LIFTCOVER, an algorithm similar to SLIPCOVER. The results of the comparison of LIFTCOVER with SLIPCOVER on 12 datasets show that it can achieve solutions of similar or better quality in a fraction of the time.

  • Probabilistic Logic Programming in Action
    Towards Integrative Machine Learning and Knowledge Extraction, 2017
    Co-Authors: Arnaud Nguembang Fadja, Fabrizio Riguzzi
    Abstract:

    Probabilistic Programming (PP) has recently emerged as an effective approach for building complex Probabilistic models. Until recently PP was mostly focused on functional programming while now Probabilistic Logic Programming (PLP) forms a significant subfield. In this paper we aim at presenting a quick overview of the features of current languages and systems for PLP. We first present the basic semantics for Probabilistic Logic programs and then consider extensions for dealing with infinite structures and continuous random variables. To show the modeling features of PLP in action, we present several examples: a simple generator of random 2D tile maps, an encoding of Markov Logic Networks, the truel game, the coupon collector problem, the one-dimensional random walk, latent Dirichlet allocation and the Indian GPA problem. These examples show the maturity of PLP.
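
    One of the listed examples, the random 2D tile map generator, fits in a few lines of LPAD-style code; the sketch below is in the spirit of the paper's example, not its exact program:

        % Each cell gets exactly one terrain type, with the given
        % probabilities (an annotated disjunction).
        tile(X, Y, grass):0.6 ; tile(X, Y, water):0.3 ; tile(X, Y, rock):0.1 :-
            pos(X, Y).

        pos(X, Y) :- between(1, 10, X), between(1, 10, Y).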

  • Probabilistic Logic programming on the web
    Software: Practice and Experience, 2015
    Co-Authors: Fabrizio Riguzzi, Riccardo Zese, Elena Bellodi, Evelina Lamma, Giuseppe Cota
    Abstract:

    We present the web application 'cplint on SWISH' (SWISH stands for SWI-Prolog for SHaring), which allows the user to write Probabilistic Logic Programs and submit the computation of the probability of queries with a web browser. The application is based on SWISH, a web framework for Logic Programming. SWISH is based on various features and packages of SWI-Prolog, in particular its web server and its Pengines library, which allow the creation of remote Prolog engines and the posing of queries to them. In order to develop the web application, we started from the PITA system, which is included in cplint, a suite of programs for reasoning over Logic Programs with Annotated Disjunctions, by porting PITA to SWI-Prolog. Moreover, we modified the PITA library so that it can be executed in a multi-threading environment. Developing 'cplint on SWISH' also required modification of the JavaScript SWISH code that creates and queries Pengines. 'cplint on SWISH' includes a number of examples that cover a wide range of domains and provide interesting applications of Probabilistic Logic Programming. By providing a web interface to cplint, we allow users to experiment with Probabilistic Logic Programming without the need to install a system, a procedure that is often complex, error-prone, and limited mainly to the Linux platform. In this way, we aim to reach out to a wider audience and popularize Probabilistic Logic Programming.
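
    A minimal program as one would submit it on 'cplint on SWISH', following the pattern in the cplint documentation (the biased-coin model itself is a generic illustration):

        :- use_module(library(pita)).
        :- pita.
        :- begin_lpad.

        % Annotated disjunction: a biased coin.
        heads(C):0.6 ; tails(C):0.4 :- toss(C).
        toss(coin).

        :- end_lpad.

        % In the SWISH query pane:
        % ?- prob(heads(coin), P).
        % P = 0.6.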

  • Bandit-based Monte-Carlo structure learning of Probabilistic Logic programs
    Machine Learning, 2015
    Co-Authors: Nicola Di Mauro, Elena Bellodi, Fabrizio Riguzzi
    Abstract:

    Probabilistic Logic programming can be used to model domains with complex and uncertain relationships among entities. While the problem of learning the parameters of such programs has been considered by various authors, the problem of learning the structure is yet to be explored in depth. In this work we present an approximate search method based on a one-player game approach, called LEMUR. It sees the problem of learning the structure of a Probabilistic Logic program as a multi-armed bandit problem, relying on the Monte-Carlo tree search UCT algorithm that combines the precision of tree search with the generality of random sampling. LEMUR works by modifying the UCT algorithm in a fashion similar to FUSE, which considers a finite unknown horizon and deals with the problem of having a huge branching factor. The proposed system has been tested on various real-world datasets and has shown good performance with respect to other state-of-the-art statistical relational learning approaches in terms of classification abilities.
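
    At the core of this approach sits the standard UCT selection rule: from a node visited n times, descend to the child j that maximizes

        avg_reward(j) + C * sqrt( ln(n) / n_j )

    where n_j is the child's visit count and C trades off exploration against exploitation. In LEMUR's setting, each tree node is a candidate program structure (a clause refinement) and the reward reflects how well the corresponding theory fits the data.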

Angelika Kimmig - One of the best experts on this subject based on the ideXlab platform.

  • Beyond the Grounding Bottleneck: Datalog Techniques for Inference in Probabilistic Logic Programs
    Proceedings of the AAAI Conference on Artificial Intelligence, 2020
    Co-Authors: Efthymia Tsamoura, Victor Gutierrez-basulto, Angelika Kimmig
    Abstract:

    State-of-the-art inference approaches in Probabilistic Logic programming typically start by computing the relevant ground program with respect to the queries of interest, and then use this program for Probabilistic inference using knowledge compilation and weighted model counting. We propose an alternative approach that uses efficient Datalog techniques to integrate knowledge compilation with forward reasoning with a non-ground program. This effectively eliminates the grounding bottleneck that so far has prohibited the application of Probabilistic Logic programming in query answering scenarios over knowledge graphs, while also providing fast approximations on classical benchmarks in the field.
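
    To see the bottleneck, consider probabilistic reachability over a knowledge graph in a standard ProbLog-style formulation (illustrative):

        % Probabilistic edges of a (potentially huge) knowledge graph.
        0.9::edge(a, b).
        0.8::edge(b, c).

        % Non-ground transitive closure: grounding-based pipelines must
        % instantiate these rules for all relevant constant pairs before
        % compiling, which explodes on large graphs; the Datalog-based
        % approach instead compiles while reasoning forward on the
        % non-ground program.
        path(X, Y) :- edge(X, Y).
        path(X, Y) :- edge(X, Z), path(Z, Y).

        query(path(a, c)).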

  • Neural Probabilistic Logic Programming in DeepProbLog
    arXiv: Artificial Intelligence, 2019
    Co-Authors: Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, Luc De Raedt
    Abstract:

    We introduce DeepProbLog, a neural Probabilistic Logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques of the underlying Probabilistic Logic programming language ProbLog can be adapted for the new language. We theoretically and experimentally demonstrate that DeepProbLog supports (i) both symbolic and subsymbolic representations and inference, (ii) program induction, (iii) Probabilistic (Logic) programming, and (iv) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive Probabilistic-Logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.

  • NeurIPS - DeepProbLog: Neural Probabilistic Logic Programming
    2018
    Co-Authors: Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, Luc De Raedt
    Abstract:

    We introduce DeepProbLog, a Probabilistic Logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques can be adapted for the new language. Our experiments demonstrate that DeepProbLog supports (i) both symbolic and subsymbolic representations and inference, (ii) program induction, (iii) Probabilistic (Logic) programming, and (iv) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive Probabilistic-Logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.

  • T_P-Compilation for inference in Probabilistic Logic programs
    International Journal of Approximate Reasoning, 2016
    Co-Authors: Jonas Vlasselaer, Guy Van Den Broeck, Angelika Kimmig, Wannes Meert, Luc De Raedt
    Abstract:

    We propose T_P-compilation, a new inference technique for Probabilistic Logic programs that is based on forward reasoning. T_P-compilation proceeds incrementally in that it interleaves the knowledge compilation step for weighted model counting with forward reasoning on the Logic program. This leads to a novel anytime algorithm that provides hard bounds on the inferred probabilities. The main difference with existing inference techniques for Probabilistic Logic programs is that these are a sequence of isolated transformations. Typically, these transformations include conversion of the ground program into an equivalent propositional formula and compilation of this formula into a more tractable target representation for weighted model counting. An empirical evaluation shows that T_P-compilation effectively handles larger instances of complex or cyclic real-world problems than current sequential approaches, both for exact and anytime approximate inference. Furthermore, we show that T_P-compilation is conducive to inference in dynamic domains as it supports efficient updates to the compiled model. Highlights: a new inference technique for Probabilistic Logic programs; knowledge compilation interleaved with forward reasoning; exact as well as anytime approximate inference; an approach conducive to inference in dynamic models; empirical evaluation on various domains.

Daan Fierens - One of the best experts on this subject based on the ideXlab platform.

  • Inference and Learning in Probabilistic Logic Programs Using Weighted Boolean Formulas
    arXiv: Artificial Intelligence, 2013
    Co-Authors: Daan Fierens, Guy Van Den Broeck, Ingo Thon, Bernd Gutmann, Joris Renkens, Dimitar Shterionov, Gerda Janssens, Luc De Raedt
    Abstract:

    Probabilistic Logic programs are Logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for Probabilistic Logic programs. Several such tasks, such as computing the marginals given evidence and learning from (partial) interpretations, have not really been addressed for Probabilistic Logic programs before. The first contribution of this paper is a suite of efficient algorithms for various inference tasks. It is based on a conversion of the program and the queries and evidence to a weighted Boolean formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting, which can be solved using state-of-the-art methods known from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning from interpretations setting. The algorithm employs Expectation Maximization, and is built on top of the developed inference algorithms. The proposed approach is experimentally evaluated. The results show that the inference algorithms improve upon the state-of-the-art in Probabilistic Logic programming and that it is indeed possible to learn the parameters of a Probabilistic Logic program from interpretations.
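
    On a toy program, the conversion at the core of the approach looks as follows (a hand-constructed illustration; for an acyclic program like this one the propositional formula is Clark's completion):

        % Program:
        %   0.1::burglary.  0.2::earthquake.
        %   alarm :- burglary.
        %   alarm :- earthquake.
        %
        % Weighted Boolean formula (completion plus fact weights):
        %   alarm <-> (burglary v earthquake)
        %   w(burglary) = 0.1,   w(~burglary) = 0.9
        %   w(earthquake) = 0.2, w(~earthquake) = 0.8
        %
        % P(alarm) = weighted count of models containing alarm
        %          = 1 - 0.9 * 0.8 = 0.28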

  • Inference in Probabilistic Logic Programs using Weighted CNF's
    arXiv: Artificial Intelligence, 2012
    Co-Authors: Daan Fierens, Guy Van Den Broeck, Ingo Thon, Bernd Gutmann, Luc De Raedt
    Abstract:

    Probabilistic Logic programs are Logic programs in which some of the facts are annotated with probabilities. Several classical Probabilistic inference tasks (such as MAP and computing marginals) have not yet received a lot of attention for this formalism. The contribution of this paper is that we develop efficient inference algorithms for these tasks. This is based on a conversion of the Probabilistic Logic program and the query and evidence to a weighted CNF formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting. To solve such tasks, we employ state-of-the-art methods. We consider multiple methods for the conversion of the programs as well as for inference on the weighted CNF. The resulting approach is evaluated experimentally and shown to improve upon the state-of-the-art in Probabilistic Logic programming.

  • UAI - Inference in Probabilistic Logic programs using weighted CNF's
    2011
    Co-Authors: Daan Fierens, Guy Van Den Broeck, Ingo Thon, Bernd Gutmann, Luc De Raedt
    Abstract:

    Probabilistic Logic programs are Logic programs in which some of the facts are annotated with probabilities. Several classical Probabilistic inference tasks (such as MAP and computing marginals) have not yet received a lot of attention for this formalism. The contribution of this paper is that we develop efficient inference algorithms for these tasks. This is based on a conversion of the Probabilistic Logic program and the query and evidence to a weighted CNF formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting. To solve such tasks, we employ state-of-the-art methods. We consider multiple methods for the conversion of the programs as well as for inference on the weighted CNF. The resulting approach is evaluated experimentally and shown to improve upon the state-of-the-art in Probabilistic Logic programming.