Default Reasoning

The experts below are selected from a list of 6,990 experts worldwide, ranked by the ideXlab platform.

Dieuwke Hupkes - One of the best experts on this subject based on the ideXlab platform.

  • Analysing neural language models: contextual decomposition reveals Default Reasoning in number and gender assignment
    Conference on Computational Natural Language Learning, 2019
    Co-Authors: Jaap Jumelet, Willem Zuidema, Dieuwke Hupkes
    Abstract:

    Extensive research has recently shown that recurrent neural language models are able to process a wide range of grammatical phenomena. How these models are able to perform these remarkable feats so well, however, is still an open question. To gain more insight into what information LSTMs base their decisions on, we propose a generalisation of Contextual Decomposition (GCD). In particular, this setup enables us to accurately distil which part of a prediction stems from semantic heuristics, which part truly emanates from syntactic cues, and which part arises from the model's own biases. We investigate this technique on tasks pertaining to syntactic agreement and co-reference resolution, and discover that the model strongly relies on a Default Reasoning effect to perform these tasks.
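
The entry above turns on attributing parts of a prediction to different sources. As a rough, hypothetical illustration of that idea (not the paper's GCD, which handles the gated interactions inside an LSTM), the sketch below uses a plain linear bag-of-embeddings scorer, for which the logit splits exactly into a contribution from the tokens in focus, a contribution from the remaining context, and the model's own bias term; all names and numbers are invented.

```python
import numpy as np

# Toy linear scorer: logit = w . sum(embeddings) + b. For such a model the split
# into (tokens in focus) + (other tokens) + (bias) is exact; the paper's GCD is
# what makes an analogous split possible inside an LSTM's gated interactions.
rng = np.random.default_rng(0)
vocab = {"the": 0, "boys": 1, "near": 2, "car": 3}
emb = rng.normal(size=(len(vocab), 8))   # invented embeddings
w = rng.normal(size=8)                   # weights of one output logit (e.g. a plural verb)
b = 0.3                                  # the model's own bias for that logit

def decompose(token_ids, focus):
    """Split the logit into focus / context / bias contributions."""
    focus_part = sum(emb[t] @ w for i, t in enumerate(token_ids) if i in focus)
    context_part = sum(emb[t] @ w for i, t in enumerate(token_ids) if i not in focus)
    return focus_part, context_part, b

sentence = [vocab[t] for t in ["the", "boys", "near", "the", "car"]]
f_, c_, b_ = decompose(sentence, focus={1})   # attribute the logit to the subject "boys"
print(f"subject {f_:+.3f}  context {c_:+.3f}  bias {b_:+.3f}  total {f_ + c_ + b_:+.3f}")
```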

Angelo Gilio - One of the best experts on this subject based on the ideXlab platform.

  • Generalizing inference rules in a coherence-based probabilistic Default Reasoning
    International Journal of Approximate Reasoning, 2012
    Co-Authors: Angelo Gilio
    Abstract:

    In this paper we first recall some notions and results on the coherence-based probabilistic treatment of uncertainty. Then, we deepen some probabilistic aspects of nonmonotonic Reasoning by generalizing the OR, CM, and Cut rules. We also illustrate the degradation of these inference rules when the number of premises increases. Finally, we show that the lower bounds obtained when applying the OR and Quasi-Conjunction inference rules coincide, respectively, with the Hamacher and Lukasiewicz t-norms; the upper bounds in both rules coincide with the Hamacher t-conorm.
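
To make the final claim concrete, the sketch below implements the three operators it names, under their standard textbook definitions rather than the paper's derivation: the Hamacher t-norm (the lower bound attached to the generalized OR rule), the Lukasiewicz t-norm (the lower bound attached to quasi conjunction), and the Hamacher t-conorm (the common upper bound). The example premise probabilities are invented.

```python
def hamacher_tnorm(x: float, y: float) -> float:
    """Hamacher product: T_H(x, y) = xy / (x + y - xy), with T_H(0, 0) = 0."""
    return 0.0 if x == y == 0.0 else (x * y) / (x + y - x * y)

def lukasiewicz_tnorm(x: float, y: float) -> float:
    """Lukasiewicz t-norm: T_L(x, y) = max(0, x + y - 1)."""
    return max(0.0, x + y - 1.0)

def hamacher_tconorm(x: float, y: float) -> float:
    """Hamacher sum: S_H(x, y) = (x + y - 2xy) / (1 - xy), with S_H(1, 1) = 1."""
    return 1.0 if x == y == 1.0 else (x + y - 2 * x * y) / (1.0 - x * y)

# With premise probabilities 0.9 and 0.8, the conclusion's probability is only
# guaranteed to lie between the relevant t-norm and the Hamacher t-conorm.
x, y = 0.9, 0.8
print(hamacher_tnorm(x, y))     # ~0.735  (lower bound, OR rule)
print(lukasiewicz_tnorm(x, y))  # 0.700   (lower bound, quasi conjunction)
print(hamacher_tconorm(x, y))   # ~0.929  (upper bound, both rules)
```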

  • Quasi conjunction and inclusion relation in probabilistic Default Reasoning
    European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty, 2011
    Co-Authors: Angelo Gilio, Giuseppe Sanfilippo
    Abstract:

    We study in the setting of probabilistic Default Reasoning under coherence the quasi conjunction, which is a basic notion for defining consistency of conditional knowledge bases, and the Goodman & Nguyen inclusion relation for conditional events. We deepen two results given in a previous paper: the first result concerns p-entailment from a finite family F of conditional events to the quasi conjunction C(S), for each nonempty subset S of F; the second result analyzes the equivalence between p-entailment from F and p-entailment from C(S), where S is some nonempty subset of F. We also characterize p-entailment by some alternative theorems. Finally, we deepen the connections between p-entailment and inclusion relation, by introducing for a pair (F,E|H) the class of the subsets S of F such that C(S) implies E|H. This class is additive and has a greatest element which can be determined by applying a suitable algorithm.
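
A minimal sketch of the three-valued (de Finetti-style) semantics behind the quasi conjunction C(S) discussed above, under our own naming: a conditional event E|H is true when E and H both hold, false when H holds but E fails, and void when H fails; the quasi conjunction of a family is true when at least one member is non-void and every member is true or void, and void only when every member is void.

```python
from typing import Callable, Optional

World = dict  # a truth assignment to atomic propositions, e.g. {"H1": True, "E1": False}

def conditional(event: Callable[[World], bool], cond: Callable[[World], bool]):
    """Three-valued conditional event E|H: True/False when H holds, None (void) otherwise."""
    return lambda w: (event(w) if cond(w) else None)

def quasi_conjunction(*members):
    """C(S): void iff every member is void; otherwise true iff every member is true or void."""
    def value(w: World) -> Optional[bool]:
        vals = [m(w) for m in members]
        if all(v is None for v in vals):
            return None
        return all(v in (True, None) for v in vals)
    return value

# Two conditional events E1|H1 and E2|H2 over four atoms (invented example).
e1 = conditional(lambda w: w["E1"], lambda w: w["H1"])
e2 = conditional(lambda w: w["E2"], lambda w: w["H2"])
qc = quasi_conjunction(e1, e2)
print(qc({"H1": True, "E1": True, "H2": False, "E2": False}))    # True: e1 true, e2 void
print(qc({"H1": True, "E1": False, "H2": False, "E2": False}))   # False: e1 false
print(qc({"H1": False, "E1": False, "H2": False, "E2": False}))  # None: both void
```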

  • Probabilistic logic under coherence, model-theoretic probabilistic logic, and Default Reasoning
    European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty, 2001
    Co-Authors: Veronica Biazzo, Angelo Gilio, Thomas Lukasiewicz, Giuseppe Sanfilippo
    Abstract:

    We study probabilistic logic under the viewpoint of the coherence principle of de Finetti. In detail, we explore the relationship between coherence-based and model-theoretic probabilistic logic. Interestingly, we show that the notions of g-coherence and of g-coherent entailment can be expressed by combining notions in model-theoretic probabilistic logic with concepts from Default Reasoning. Crucially, we even show that probabilistic Reasoning under coherence is a probabilistic generalization of Default Reasoning in system P. That is, we provide a new probabilistic semantics for system P, which is neither based on infinitesimal probabilities nor on atomic-bound (or also big-stepped) probabilities. These results also give new insight into Default Reasoning with conditional objects.
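
As a simplified illustration of the coherence-based setting (only the first linear system, not the full g-coherence procedure, which iterates over sub-families whose conditioning events may receive probability zero), the sketch below checks whether an invented precise assessment on conditional events is reproducible by some probability distribution over the atoms; SciPy's linear-programming routine is assumed to be available.

```python
from itertools import product
import numpy as np
from scipy.optimize import linprog

# First-level check: the assessment p_i on E_i|H_i is representable at this level
# iff some distribution x over the atoms satisfies P(E_i H_i) - p_i * P(H_i) = 0.
atoms = list(product([0, 1], repeat=2))          # worlds over two propositions A, B

def row(indicator):
    """Indicator vector of an event over the atoms."""
    return np.array([float(indicator(a, b)) for a, b in atoms])

# Invented assessment: p(B|A) = 0.9 and p(A|Omega) = 0.5.
assessments = [
    (lambda a, b: a and b, lambda a, b: a,    0.9),  # B|A
    (lambda a, b: a,       lambda a, b: True, 0.5),  # A|Omega (unconditional)
]
A_eq = [row(eh) - p * row(h) for eh, h, p in assessments]
A_eq.append(np.ones(len(atoms)))                 # total probability is 1
b_eq = [0.0] * len(assessments) + [1.0]

res = linprog(c=np.zeros(len(atoms)), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, 1)] * len(atoms), method="highs")
print("first-level system solvable:", res.success)
```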

Jaap Jumelet - One of the best experts on this subject based on the ideXlab platform.

  • Analysing neural language models: contextual decomposition reveals Default Reasoning in number and gender assignment
    Conference on Computational Natural Language Learning, 2019
    Co-Authors: Jaap Jumelet, Willem Zuidema, Dieuwke Hupkes
    Abstract:

    Extensive research has recently shown that recurrent neural language models are able to process a wide range of grammatical phenomena. How these models are able to perform these remarkable feats so well, however, is still an open question. To gain more insight into what information LSTMs base their decisions on, we propose a generalisation of Contextual Decomposition (GCD). In particular, this setup enables us to accurately distil which part of a prediction stems from semantic heuristics, which part truly emanates from syntactic cues, and which part arises from the model's own biases. We investigate this technique on tasks pertaining to syntactic agreement and co-reference resolution, and discover that the model strongly relies on a Default Reasoning effect to perform these tasks.

Craig Boutilier - One of the best experts on this subject based on the ideXlab platform.

  • Unifying Default Reasoning and belief revision in a modal framework
    Artificial Intelligence, 1994
    Co-Authors: Craig Boutilier
    Abstract:

    We present a logic for Reasoning about belief revision in which the process of revising a knowledge base by some sentence is represented with a conditional connective. The conditional is not primitive, however; it is defined in terms of two unary modal operators. We show that our notion of revision is equivalent to that determined by the classic AGM postulates. Furthermore, unlike current models of revision, our approach does not require the Limit Assumption. We also present a model for subjunctive query answering that allows the expression of subjunctive or factual premises, integrity constraints, and notions of entrenchment and plausibility. The modal framework we adopt is sufficiently general to allow the expression of other forms of defeasible Reasoning, and facilitates the demonstration of some interesting connections between revision, Default Reasoning and autoepistemic logic. In particular, we show that the normative conditional for Default Reasoning (developed in a companion paper) and our subjunctive conditional are identical. Default Reasoning can thus be viewed as the revision of a theory of expectations, in a manner that naturally relates priorities of Default rules to the entrenchment of expectations.
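
The following sketch illustrates the semantic picture behind the last two sentences, under toy assumptions of our own rather than Boutilier's modal encoding: worlds are ranked by the entrenchment-weighted expectations they violate, and revising by a sentence means believing exactly what holds in all most-plausible worlds satisfying it. With the invented weights below, revising the expectation theory by "penguin" yields the Default conclusion that the penguin is a bird that does not fly.

```python
from itertools import product

atoms = ["bird", "penguin", "flies"]
worlds = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=3)]

expectations = [  # (entrenchment weight, constraint); more specific rules weigh more
    (1, lambda w: (not w["bird"]) or w["flies"]),         # birds normally fly
    (2, lambda w: (not w["penguin"]) or not w["flies"]),  # penguins normally do not fly
    (2, lambda w: (not w["penguin"]) or w["bird"]),       # penguins are birds
]
# Plausibility rank of a world = total weight of the expectations it violates.
rank = [sum(wt for wt, ok in expectations if not ok(w)) for w in worlds]

def revise(phi):
    """AGM-style revision: believe what holds in every most-plausible phi-world."""
    phi_idx = [i for i, w in enumerate(worlds) if phi(w)]
    best = min(rank[i] for i in phi_idx)
    chosen = [worlds[i] for i in phi_idx if rank[i] == best]
    return {a: chosen[0][a] for a in atoms if len({w[a] for w in chosen}) == 1}

print(revise(lambda w: w["penguin"]))   # {'bird': True, 'penguin': True, 'flies': False}
```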

  • Conditional logics for Default Reasoning and belief revision
    1992
    Co-Authors: Craig Boutilier
    Abstract:

    Much of what passes for knowledge about the world is defeasible, or can be mistaken. Our perceptions and premises can never be certain, we are forced to jump to conclusions in the presence of incomplete information, and we have to cut our deliberations short when our environment closes in. For this reason, any theory of artificial intelligence requires at its heart a theory of Default Reasoning, the process of reaching plausible, but uncertain, conclusions; and a theory of belief revision, the process of retracting and adding certain beliefs as information becomes available. In this thesis, we will address both of these problems from a logical point of view. We will provide a semantic account of these processes and develop conditional logics to represent and reason with Default or normative statements, about normal or typical states of affairs, and statements of belief revision. The conditional logics will be based on standard modal systems, and the possible worlds approach will provide a uniform framework for the development of a number of such systems. Within this framework, we will compare the two types of Reasoning, determining that they are remarkably similar processes at a formal level of analysis. We will also show how a number of disparate types of Reasoning may be analyzed within these modal systems, and to a large extent unified. These include normative Default Reasoning, probabilistic Default Reasoning, autoepistemic Reasoning, belief revision, subjunctive, hypothetical or counterfactual Reasoning, and abduction.

Judea Pearl - One of the best experts on this subject based on the ideXlab platform.

  • Qualitative probabilities for Default Reasoning, belief revision, and causal modeling
    Artificial Intelligence, 1996
    Co-Authors: Moises Goldszmidt, Judea Pearl
    Abstract:

    This paper presents a formalism that combines useful properties of both logic and probabilities. Like logic, the formalism admits qualitative sentences and provides symbolic machinery for deriving deductively closed beliefs and, like probability, it permits us to express if-then rules with different levels of firmness and to retract beliefs in response to changing observations. Rules are interpreted as order-of-magnitude approximations of conditional probabilities which impose constraints over the rankings of worlds. Inferences are supported by a unique priority ordering on rules which is syntactically derived from the knowledge base. This ordering accounts for rule interactions, respects specificity considerations and facilitates the construction of coherent states of beliefs. Practical algorithms are developed and analyzed for testing consistency, computing rule ordering, and answering queries. Imprecise observations are incorporated using qualitative versions of Jeffrey's rule and Bayesian updating, with the result that coherent belief revision is embodied naturally and tractably. Finally, causal rules are interpreted as imposing Markovian conditions that further constrain world rankings to reflect the modularity of causal organizations. These constraints are shown to facilitate Reasoning about causal projections, explanations, actions and change.
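
As a hedged illustration of "a unique priority ordering on rules which is syntactically derived from the knowledge base", the sketch below computes the plain System Z partition via the standard tolerance test. The paper's formalism (variable-strength rules, Jeffrey-style updating of imprecise observations, causal constraints on rankings) goes well beyond this, so the code should be read as background rather than as the paper's algorithm; the rule base is the usual penguin example.

```python
from itertools import product

atoms = ["bird", "penguin", "flies"]
rules = [  # default rules (antecedent, consequent), read as "if a, normally b"
    (lambda w: w["bird"],    lambda w: w["flies"]),
    (lambda w: w["penguin"], lambda w: w["bird"]),
    (lambda w: w["penguin"], lambda w: not w["flies"]),
]
worlds = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=3)]

def tolerated(rule, ruleset):
    """(a, b) is tolerated by ruleset iff some world verifies a and b and falsifies no rule in it."""
    a, b = rule
    return any(a(w) and b(w) and all((not ai(w)) or bi(w) for ai, bi in ruleset)
               for w in worlds)

def z_partition(rule_list):
    """Z-priority of each rule: 0 for the most normal layer, higher for more specific layers."""
    remaining, priority, level = list(rule_list), {}, 0
    while remaining:
        layer = [r for r in remaining if tolerated(r, remaining)]
        if not layer:
            raise ValueError("rule base is inconsistent in the System Z sense")
        for r in layer:
            priority[rule_list.index(r)] = level
        remaining = [r for r in remaining if r not in layer]
        level += 1
    return priority

print(z_partition(rules))   # {0: 0, 1: 1, 2: 1}: the penguin rules outrank "birds fly"
```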

  • Specificity and inheritance in Default Reasoning
    International Joint Conference on Artificial Intelligence, 1995
    Co-Authors: Sekwah Tan, Judea Pearl
    Abstract:

    When specificity considerations are incorporated in Default Reasoning systems, it is hard to ensure that exceptional subclasses inherit all legitimate features of their parent classes. To reconcile these two requirements, specificity and inheritance, this paper proposes the addition of a new rule, called the coherence rule, to the desiderata for Default inference. The coherence rule captures the intuition that formulae which are more compatible with the Defaults in the database are more believable. We offer a formal definition of this extended desiderata and analyze the behavior of its associated closure relation, which we call coherence closure. We provide a concrete embodiment of a system satisfying the extended desiderata by taking the coherence closure of system Z. A procedure for computing the (unique) most compact belief ranking in the coherence closure of system Z is also described.

  • Default Reasoning: causal and conditional theories
    1992
    Co-Authors: Hector Geffner, Judea Pearl
    Abstract:

    A system of defeasible inference based on probabilities; high probabilities and preferential structures; irrelevance and prioritized preferential structures; the causal dimension (evidence vs. explanation); proofs.

  • Conditional entailment: bridging two approaches to Default Reasoning
    Artificial Intelligence, 1992
    Co-Authors: Hector Geffner, Judea Pearl
    Abstract:

    In recent years, two conceptually different interpretations of Default expressions have been advanced: extensional interpretations, in which Defaults are regarded as prescriptions for extending one's set of beliefs, and conditional interpretations, in which Defaults are regarded as beliefs whose validity is bound to a particular context. The two interpretations possess virtues and limitations that are practically orthogonal to each other. The conditional interpretations successfully resolve arguments of different “specificity” (e.g., “penguins don't fly in spite of being birds”) but fail to capture arguments of “irrelevance” (e.g., concluding “red birds fly” from “birds fly”). The opposite is true for the extensional interpretations. This paper develops a new account of Defaults, called conditional entailment, which combines the benefits of the two interpretations. Like prioritized circumscriptions, conditional entailment resolves arguments by enforcing priorities among Defaults. However, instead of having to be specified by the user, these priorities are extracted automatically from the knowledge base. Similarly, conditional entailment possesses a sound and complete proof theory, based on interacting arguments and amenable to implementation in conventional ATMSs.