Logic Programming

14,000,000 Leading Edge Experts on the ideXlab platform


The Experts below are selected from a list of 91,287 Experts worldwide, ranked by the ideXlab platform

Luc De Raedt - One of the best experts on this subject based on the ideXlab platform.

  • DeepProbLog: Neural Probabilistic Logic Programming
    Neural Information Processing Systems, 2018
    Co-Authors: Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, Luc De Raedt
    Abstract:

    We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques can be adapted for the new language. Our experiments demonstrate that DeepProbLog supports (i) both symbolic and subsymbolic representations and inference, (ii) program induction, (iii) probabilistic (logic) programming, and (iv) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive probabilistic-logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.
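The neural-predicate idea can be illustrated with a toy sketch (this is not the actual DeepProbLog API; the digit distributions and the `p_addition` helper are invented for the example). A query's probability is the sum of the probabilities of its proofs, and a neural predicate supplies its probabilities from a network's output:

```python
# Toy sketch only, not the DeepProbLog API. A "neural predicate" attaches a
# probability distribution produced by a network to a predicate; here the
# network outputs are faked as fixed softmax vectors over digits 0-2.
def p_addition(p_a, p_b, target):
    """P(addition(A, B, target)): sum the probabilities of all proofs,
    i.e. all digit pairs (i, j) with i + j == target."""
    return sum(p_a[i] * p_b[j]
               for i in range(len(p_a))
               for j in range(len(p_b))
               if i + j == target)

p_digit_a = [0.1, 0.6, 0.3]  # pretend network output for the first image
p_digit_b = [0.2, 0.2, 0.6]  # pretend network output for the second image
print(p_addition(p_digit_a, p_digit_b, 2))  # 0.06 + 0.12 + 0.06, i.e. about 0.24
```

Because `p_addition` is differentiable in the network outputs, gradients of a query's probability can flow back into the networks, which is what end-to-end training over such programs exploits.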

  • Probabilistic Inductive Logic Programming
    Probabilistic Inductive Logic Programming, 2008
    Co-Authors: Luc De Raedt, Kristian Kersting
    Abstract:

    Probabilistic inductive logic programming, also known as statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with machine learning and first-order and relational logic representations. A rich variety of formalisms and learning techniques have been developed; a unifying characterization of the underlying learning settings, however, has so far been missing. In this chapter, we start from inductive logic programming and sketch how its formalisms, settings and techniques can be extended to the statistical case. More precisely, we outline three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and show how they can be adapted to cover state-of-the-art statistical relational learning approaches.
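The coverage tests behind two of these settings can be sketched for ground Horn rules (a hypothetical Python illustration; the predicate names and facts are invented, and real ILP systems work with non-ground clauses):

```python
# Invented ground example: rules are (body, head) pairs of atom strings.
def entails(rules, facts, goal):
    """Learning from entailment: does background + hypothesis derive the
    example?  Checked here by ground forward chaining to a fixpoint."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= known and head not in known:
                known.add(head)
                changed = True
    return goal in known

def is_model(rules, interpretation):
    """Learning from interpretations: is the example interpretation a model
    of the hypothesis, i.e. every rule with a satisfied body has its head?"""
    return all(head in interpretation
               for body, head in rules if set(body) <= interpretation)

rules = [(("parent(ann,bob)", "parent(bob,carl)"), "grandparent(ann,carl)")]
facts = ["parent(ann,bob)", "parent(bob,carl)"]
print(entails(rules, facts, "grandparent(ann,carl)"))  # True
print(is_model(rules, set(facts)))  # False: the rule head is missing
```

The statistical extensions discussed in the chapter replace these boolean coverage tests with probabilistic ones, scoring how likely an example is under the hypothesis rather than whether it is covered at all.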

  • Probabilistic Inductive Logic Programming
    Algorithmic Learning Theory, 2004
    Co-Authors: Luc De Raedt, Kristian Kersting
    Abstract:

    Probabilistic inductive logic programming, sometimes also called statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with first-order logic representations and machine learning. A rich variety of different formalisms and learning techniques have been developed. In the present paper, we start from inductive logic programming and sketch how it can be extended with probabilistic methods.

  • Probabilistic Inductive Logic Programming
    Lecture Notes in Computer Science, 2004
    Co-Authors: Luc De Raedt, Kristian Kersting
    Abstract:

    Probabilistic inductive logic programming, sometimes also called statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with first-order logic representations and machine learning. A rich variety of different formalisms and learning techniques have been developed. In the present paper, we start from inductive logic programming and sketch how it can be extended with probabilistic methods. More precisely, we outline three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and show how they can be used to learn different types of probabilistic representations.

Kristian Kersting - One of the best experts on this subject based on the ideXlab platform.

  • Probabilistic Inductive Logic Programming
    Probabilistic Inductive Logic Programming, 2008
    Co-Authors: Luc De Raedt, Kristian Kersting
    Abstract:

    Probabilistic inductive logic programming, also known as statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with machine learning and first-order and relational logic representations. A rich variety of formalisms and learning techniques have been developed; a unifying characterization of the underlying learning settings, however, has so far been missing. In this chapter, we start from inductive logic programming and sketch how its formalisms, settings and techniques can be extended to the statistical case. More precisely, we outline three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and show how they can be adapted to cover state-of-the-art statistical relational learning approaches.

  • Probabilistic Inductive Logic Programming
    Algorithmic Learning Theory, 2004
    Co-Authors: Luc De Raedt, Kristian Kersting
    Abstract:

    Probabilistic inductive logic programming, sometimes also called statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with first-order logic representations and machine learning. A rich variety of different formalisms and learning techniques have been developed. In the present paper, we start from inductive logic programming and sketch how it can be extended with probabilistic methods.

  • Probabilistic Inductive Logic Programming
    Lecture Notes in Computer Science, 2004
    Co-Authors: Luc De Raedt, Kristian Kersting
    Abstract:

    Probabilistic inductive logic programming, sometimes also called statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with first-order logic representations and machine learning. A rich variety of different formalisms and learning techniques have been developed. In the present paper, we start from inductive logic programming and sketch how it can be extended with probabilistic methods. More precisely, we outline three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and show how they can be used to learn different types of probabilistic representations.

David S. Warren - One of the best experts on this subject based on the ideXlab platform.

  • A System for Tabled Constraint Logic Programming
    Lecture Notes in Computer Science, 2000
    Co-Authors: David S. Warren
    Abstract:

    As extensions to traditional logic programming, both tabling and constraint logic programming (CLP) have proven to be powerful tools in many areas: they make logic programming more efficient and more declarative. Combining tabling with constraint solving, however, is still a relatively new research area. In this paper, we show how to build a tabled constraint logic programming (TCLP) system based on XSB, a tabled logic programming system. We first discuss how to extend XSB with the fundamental mechanism of constraint solving, essentially the introduction of attributed variables into XSB, and then present a general framework for building a TCLP system. An interface among the XSB tabling engine, the corresponding constraint solver, and the user's program is designed to fully exploit the power of tabling in TCLP programs.
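The core idea of tabling, recording the answers of each subgoal so that cyclic programs terminate, can be sketched outside of XSB (an illustrative Python toy with an invented graph, not the SLG resolution engine the paper builds on):

```python
# Sketch of tabling's key idea: answers to a subgoal are stored in a table
# and each answer is produced exactly once, so a cyclic graph (which loops
# forever under plain SLD resolution of a left-recursive path/2 program)
# still terminates.
def reachable(edges, start):
    table = {start}      # the "answer table" for this subgoal
    agenda = [start]
    while agenda:
        node = agenda.pop()
        for a, b in edges:
            if a == node and b not in table:
                table.add(b)       # a new answer: record it once
                agenda.append(b)
    return table

edges = [("a", "b"), ("b", "a"), ("b", "c")]   # note the a <-> b cycle
print(sorted(reachable(edges, "a")))  # ['a', 'b', 'c']
```

A TCLP system must additionally decide when a tabled answer carrying a constraint store subsumes another, which is where the interface between the tabling engine and the constraint solver comes in.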

  • A Logic Programming View of CLP
    1993
    Co-Authors: David S. Warren
    Abstract:

    We address the problem of lifting definitions, results, and even proofs from the theory of logic programming so that they apply to constraint logic programming (CLP). We attempt to systematize this lifting where it is possible, and delineate where it is not possible. We show that the Independence of Negated Constraints property of constraint domains is fundamental to several different aspects of constraint logic programming. This is a principal cause of the inability to lift some traditional logic programming results to constraint logic programming.

Chiaki Sakama - One of the best experts on this subject based on the ideXlab platform.

  • Towards the Integration of Inductive and Nonmonotonic Logic Programming
    Discovery Science, 2002
    Co-Authors: Chiaki Sakama
    Abstract:

    Commonsense reasoning and machine learning are two important topics in AI. In logic programming, these techniques are realized as nonmonotonic logic programming (NMLP) and inductive logic programming (ILP), respectively. NMLP and ILP have seemingly different motivations and goals, but they have much in common in the problems they address. This article overviews the author's recent results on realizing induction from nonmonotonic logic programs.

  • Nonmonotonic Inductive Logic Programming
    International Conference on Logic Programming, 2001
    Co-Authors: Chiaki Sakama
    Abstract:

    Nonmonotonic logic programming (NMLP) and inductive logic programming (ILP) are two important extensions of logic programming. The former aims at representing incomplete knowledge and commonsense reasoning, while the latter targets the inductive construction of a general theory from examples and background knowledge. NMLP and ILP thus have seemingly different motivations and goals, but they have much in common in the problems they address, and techniques developed in each field are related to one another. This paper presents techniques for combining these two fields of logic programming in the context of nonmonotonic inductive logic programming (NMILP). We review recent results and the problems that remain in realizing NMILP.

  • Abductive Logic Programming and Disjunctive Logic Programming: Their Relationship and Transferability
    The Journal of Logic Programming, 2000
    Co-Authors: Chiaki Sakama, Katsumi Inoue
    Abstract:

    Abductive logic programming (ALP) and disjunctive logic programming (DLP) are two different extensions of logic programming. This paper investigates the relationship between ALP and DLP from the program-transformation viewpoint. It is shown that the belief set semantics of an abductive program can be expressed by the answer set semantics and the possible model semantics of a disjunctive program. Conversely, the possible model semantics of a disjunctive program can be equivalently expressed by the belief set semantics of an abductive program, while such a transformation is generally impossible for the answer set semantics. Moreover, it is shown that abductive disjunctive programs are always reducible to disjunctive programs, both under the answer set semantics and under the possible model semantics. These transformations are also analyzed from the complexity viewpoint. The results show that ALP and DLP are just different ways of looking at the same problem, provided an appropriate semantics is chosen.

Andrei Voronkov - One of the best experts on this subject based on the ideXlab platform.

  • Complexity and Expressive Power of Logic Programming
    ACM Computing Surveys, 2001
    Co-Authors: Evgeny Dantsin, Thomas Eiter, Georg Gottlob, Andrei Voronkov
    Abstract:

    This article surveys various complexity and expressiveness results on different forms of logic programming. The main focus is on decidable forms of logic programming, in particular propositional logic programming and datalog, but general logic programming with function symbols is also discussed. In addition to classical results on plain logic programming (pure Horn clause programs), more recent results on various important extensions of logic programming are surveyed. These include logic programming with different forms of negation, disjunctive logic programming, logic programming with equality, and constraint logic programming.
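Naive bottom-up evaluation, the textbook fixpoint procedure behind many of the datalog results such surveys cover, can be sketched as a toy (illustrative Python with invented relation names; real datalog engines use semi-naive evaluation and indexing):

```python
# Naive datalog evaluation: apply every rule to the current database until
# no new facts appear.  Facts are (pred, args) tuples; a rule is
# (head, body) where terms starting with an uppercase letter are variables.
from itertools import product

def naive_eval(facts, rules):
    def match(atom, fact, subst):
        pred, terms = atom
        if pred != fact[0] or len(terms) != len(fact[1]):
            return None
        s = dict(subst)
        for t, v in zip(terms, fact[1]):
            if t[0].isupper():               # variable: bind or check
                if s.setdefault(t, v) != v:
                    return None
            elif t != v:                     # constant: must agree
                return None
        return s

    db = set(facts)
    while True:
        new = set()
        for head, body in rules:
            # try every way of matching the body atoms against the database
            for combo in product(db, repeat=len(body)):
                s = {}
                for atom, fact in zip(body, combo):
                    s = match(atom, fact, s)
                    if s is None:
                        break
                if s is not None:
                    new.add((head[0], tuple(s.get(t, t) for t in head[1])))
        if new <= db:
            return db                        # fixpoint reached
        db |= new

facts = {("edge", ("a", "b")), ("edge", ("b", "c"))}
rules = [(("path", ("X", "Y")), [("edge", ("X", "Y"))]),
         (("path", ("X", "Z")), [("path", ("X", "Y")), ("edge", ("Y", "Z"))])]
db = naive_eval(facts, rules)
print(sorted(db))  # edges plus path(a,b), path(b,c), path(a,c)
```

The fixpoint is reached after polynomially many rounds in the size of the database, which is the intuition behind datalog's polynomial data complexity.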

  • Complexity and Expressive Power of Logic Programming
    Proceedings of Computational Complexity. Twelfth Annual IEEE Conference, 1997
    Co-Authors: Evgeny Dantsin, Thomas Eiter, Georg Gottlob, Andrei Voronkov
    Abstract:

    This paper surveys various complexity results on different forms of logic programming. The main focus is on decidable forms of logic programming, in particular propositional logic programming and datalog, but general logic programming with function symbols is also discussed. In addition to classical results on plain logic programming (pure Horn clause programs), more recent results on various important extensions of logic programming are surveyed. These include logic programming with different forms of negation, disjunctive logic programming, logic programming with equality, and constraint logic programming. The complexity of the unification problem is also addressed.
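Syntactic first-order unification, whose complexity the survey also addresses, can be sketched as follows (an illustrative Python toy with a naive occurs check; the known linear-time algorithms are considerably more involved). Terms are strings (uppercase = variable, lowercase = constant) or `(functor, arg, ...)` tuples:

```python
# Naive first-order unification with occurs check; returns a most general
# unifier as a dict, or None if the terms do not unify.
def walk(term, subst):
    while isinstance(term, str) and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    term = walk(term, subst)
    if term == var:
        return True
    return isinstance(term, tuple) and any(occurs(var, a, subst) for a in term[1:])

def bind(var, term, subst):
    if occurs(var, term, subst):
        return None            # occurs check: X vs f(X) must fail
    return {**subst, var: term}

def unify(s, t, subst=None):
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str) and s[0].isupper():
        return bind(s, t, subst)
    if isinstance(t, str) and t[0].isupper():
        return bind(t, s, subst)
    if isinstance(s, tuple) and isinstance(t, tuple) and \
       s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                # constant or functor clash

print(unify(("f", "X", "b"), ("f", "a", "Y")))  # {'X': 'a', 'Y': 'b'}
print(unify("X", ("f", "X")))                   # None (occurs check)
```

The naive occurs check makes this worst-case exponential on shared subterms, which is exactly why the complexity of unification merits the separate treatment the paper gives it.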