Plausibility

The Experts below are selected from a list of 501,708 Experts worldwide, ranked by the ideXlab platform

Joseph Y. Halpern - One of the best experts on this subject based on the ideXlab platform.

  • Plausibility Measures: A User's Guide
    arXiv: Artificial Intelligence, 2013
    Co-Authors: Nir Friedman, Joseph Y. Halpern
    Abstract:

    We examine a new approach to modeling uncertainty based on Plausibility measures, where a Plausibility measure just associates with an event its Plausibility, an element of some partially ordered set. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. The lack of structure in a Plausibility measure makes it easy for us to add structure on an "as needed" basis, letting us examine what is required to ensure that a Plausibility measure has certain properties of interest. This gives us insight into the essential features of the properties in question, while allowing us to prove general results that apply to many approaches to reasoning about uncertainty. Plausibility measures have already proved useful in analyzing default reasoning. In this paper, we examine their "algebraic properties," analogues to the use of + and * in probability theory. An understanding of such properties will be essential if Plausibility measures are to be used in practice as a representation tool.
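    A plausibility measure in this sense can be sketched in a few lines. The code below is a toy illustration, not from the paper: it uses event cardinality as one trivial, totally ordered plausibility assignment and checks the defining conditions, namely that Pl of the empty set is minimal, Pl of the whole world set is maximal, and Pl is monotone with respect to set inclusion.

```python
from itertools import combinations

# Toy sketch (not the paper's formalism): a plausibility measure Pl maps each
# event (subset of a world set W) to a value in a partially ordered set, with
# Pl(empty) minimal, Pl(W) maximal, and A subset-of B implying Pl(A) <= Pl(B).

W = frozenset({"w1", "w2", "w3"})

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Cardinality is one trivial plausibility assignment; any partially ordered
# set of values would do, since the check below relies only on the order.
Pl = {A: len(A) for A in powerset(W)}

def is_plausibility_measure(Pl, W):
    events = powerset(W)
    bottom, top = Pl[frozenset()], Pl[W]
    if any(not (bottom <= Pl[A] <= top) for A in events):
        return False
    # monotonicity: A subset of B implies Pl(A) <= Pl(B)
    return all(Pl[A] <= Pl[B] for A in events for B in events if A <= B)

print(is_plausibility_measure(Pl, W))  # True for the cardinality example
```

    Replacing the integer values with elements of any other partial order leaves the check unchanged, which is exactly the "lack of structure" the abstract emphasizes.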

  • Conditional Plausibility Measures and Bayesian Networks
    Journal of Artificial Intelligence Research, 2001
    Co-Authors: Joseph Y. Halpern
    Abstract:

    A general notion of algebraic conditional Plausibility measures is defined. Probability measures, ranking functions, possibility measures, and (under the appropriate definitions) sets of probability measures can all be viewed as defining algebraic conditional Plausibility measures. It is shown that the technology of Bayesian networks can be applied to algebraic conditional Plausibility measures.
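    As a hedged illustration of the algebraic reading, ordinary probability is one instance of an algebraic conditional plausibility measure: the abstract "addition" and "multiplication" operations are + and *, and a Bayesian-network factorization carries over unchanged. The conditional tables below are invented for the sketch.

```python
# Sketch with made-up numbers: a two-node Bayesian network X -> Y factorizes
# as Pl(x, y) = Pl(x) (x) Pl(y | x); in the probabilistic instance this is
# P(x) * P(y | x), and marginalization uses the "additive" operation +.

P_x = {0: 0.7, 1: 0.3}
P_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}

def joint(x, y):
    # the "multiplicative" combination step of the algebraic structure
    return P_x[x] * P_y_given_x[x][y]

# marginal P(Y = 1) via the "additive" operation
P_y1 = sum(joint(x, 1) for x in P_x)
print(round(P_y1, 3))  # 0.7*0.1 + 0.3*0.8 = 0.31
```

    Substituting other (⊕, ⊗) pairs, such as min/+ for ranking functions, yields the other instances the abstract lists.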

  • Plausibility measures and default reasoning
    arXiv: Artificial Intelligence, 1998
    Co-Authors: Nir Friedman, Joseph Y. Halpern
    Abstract:

    We introduce a new approach to modeling uncertainty based on Plausibility measures. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. We focus on one application of Plausibility measures in this paper: default reasoning. In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, ε-semantics, possibilistic structures, and κ-rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties. While this was viewed as a surprise, we show here that it is almost inevitable. In the framework of Plausibility measures, we can give a necessary condition for the KLM axioms to be sound, and an additional condition necessary and sufficient to ensure that the KLM axioms are complete. This additional condition is so weak that it is almost always met whenever the axioms are sound. In particular, it is easily seen to hold for all the proposals made in the literature.

  • Plausibility measures and default reasoning
    National Conference on Artificial Intelligence, 1996
    Co-Authors: Nir Friedman, Joseph Y. Halpern
    Abstract:

    In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, ε-semantics, possibilistic structures, and κ-rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties (for Kraus, Lehmann, and Magidor). While this was viewed as a surprise, we show here that it is almost inevitable. We do this by giving yet another semantics for defaults that uses Plausibility measures, a new approach to modeling uncertainty that generalizes other approaches, such as probability measures, belief functions, and possibility measures. We show that all the earlier approaches to default reasoning can be embedded in the framework of Plausibility. We then provide a necessary and sufficient condition on plausibilities for the KLM properties to be sound, and an additional condition necessary and sufficient for the KLM properties to be complete. These conditions are easily seen to hold for all the earlier approaches, thus explaining why they are characterized by the KLM properties.
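    The κ-ranking semantics mentioned above admits a compact sketch. The worlds and ranks below are hypothetical, not from the paper; the code implements the standard condition that a default "bird implies fly" holds when the lowest-ranked (most normal) bird-worlds are fly-worlds.

```python
# Hypothetical example of kappa-ranking default entailment: the default
# antecedent |~ consequent holds when
#   min kappa(antecedent & consequent) < min kappa(antecedent & ~consequent),
# i.e. the most normal antecedent-worlds satisfy the consequent.

worlds = [
    {"bird": True,  "fly": True,  "kappa": 0},  # a normal bird
    {"bird": True,  "fly": False, "kappa": 1},  # an exceptional bird
    {"bird": False, "fly": False, "kappa": 0},
]

def entails(worlds, antecedent, consequent):
    sat = [w for w in worlds if w[antecedent]]
    if not sat:
        return True  # vacuously true: no antecedent-worlds
    pos = [w["kappa"] for w in sat if w[consequent]]
    neg = [w["kappa"] for w in sat if not w[consequent]]
    if not neg:
        return True
    return bool(pos) and min(pos) < min(neg)

print(entails(worlds, "bird", "fly"))  # True: the most normal birds fly
```

    The same shape of condition, with ranks replaced by plausibility values, is what the paper's soundness and completeness results are about.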

  • UAI - Plausibility measures: a user's guide
    1995
    Co-Authors: Nir Friedman, Joseph Y. Halpern
    Abstract:

    We examine a new approach to modeling uncertainty based on Plausibility measures, where a Plausibility measure just associates with an event its Plausibility, an element of some partially ordered set. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. The lack of structure in a Plausibility measure makes it easy for us to add structure on an "as needed" basis, letting us examine what is required to ensure that a Plausibility measure has certain properties of interest. This gives us insight into the essential features of the properties in question, while allowing us to prove general results that apply to many approaches to reasoning about uncertainty. Plausibility measures have already proved useful in analyzing default reasoning. In this paper, we examine their "algebraic properties", analogues to the use of + and × in probability theory. An understanding of such properties will be essential if Plausibility measures are to be used in practice as a representation tool.

Nir Friedman - One of the best experts on this subject based on the ideXlab platform.

  • Plausibility Measures: A User's Guide
    arXiv: Artificial Intelligence, 2013
    Co-Authors: Nir Friedman, Joseph Y. Halpern
    Abstract:

    We examine a new approach to modeling uncertainty based on Plausibility measures, where a Plausibility measure just associates with an event its Plausibility, an element of some partially ordered set. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. The lack of structure in a Plausibility measure makes it easy for us to add structure on an "as needed" basis, letting us examine what is required to ensure that a Plausibility measure has certain properties of interest. This gives us insight into the essential features of the properties in question, while allowing us to prove general results that apply to many approaches to reasoning about uncertainty. Plausibility measures have already proved useful in analyzing default reasoning. In this paper, we examine their "algebraic properties," analogues to the use of + and * in probability theory. An understanding of such properties will be essential if Plausibility measures are to be used in practice as a representation tool.

  • Plausibility measures and default reasoning
    arXiv: Artificial Intelligence, 1998
    Co-Authors: Nir Friedman, Joseph Y. Halpern
    Abstract:

    We introduce a new approach to modeling uncertainty based on Plausibility measures. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. We focus on one application of Plausibility measures in this paper: default reasoning. In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, ε-semantics, possibilistic structures, and κ-rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties. While this was viewed as a surprise, we show here that it is almost inevitable. In the framework of Plausibility measures, we can give a necessary condition for the KLM axioms to be sound, and an additional condition necessary and sufficient to ensure that the KLM axioms are complete. This additional condition is so weak that it is almost always met whenever the axioms are sound. In particular, it is easily seen to hold for all the proposals made in the literature.

  • Plausibility measures and default reasoning
    National Conference on Artificial Intelligence, 1996
    Co-Authors: Nir Friedman, Joseph Y. Halpern
    Abstract:

    In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, ε-semantics, possibilistic structures, and κ-rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties (for Kraus, Lehmann, and Magidor). While this was viewed as a surprise, we show here that it is almost inevitable. We do this by giving yet another semantics for defaults that uses Plausibility measures, a new approach to modeling uncertainty that generalizes other approaches, such as probability measures, belief functions, and possibility measures. We show that all the earlier approaches to default reasoning can be embedded in the framework of Plausibility. We then provide a necessary and sufficient condition on plausibilities for the KLM properties to be sound, and an additional condition necessary and sufficient for the KLM properties to be complete. These conditions are easily seen to hold for all the earlier approaches, thus explaining why they are characterized by the KLM properties.

  • UAI - Plausibility measures: a user's guide
    1995
    Co-Authors: Nir Friedman, Joseph Y. Halpern
    Abstract:

    We examine a new approach to modeling uncertainty based on Plausibility measures, where a Plausibility measure just associates with an event its Plausibility, an element of some partially ordered set. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. The lack of structure in a Plausibility measure makes it easy for us to add structure on an "as needed" basis, letting us examine what is required to ensure that a Plausibility measure has certain properties of interest. This gives us insight into the essential features of the properties in question, while allowing us to prove general results that apply to many approaches to reasoning about uncertainty. Plausibility measures have already proved useful in analyzing default reasoning. In this paper, we examine their "algebraic properties", analogues to the use of + and × in probability theory. An understanding of such properties will be essential if Plausibility measures are to be used in practice as a representation tool.

Wan Ahmad Tajuddin Wan Abdullah - One of the best experts on this subject based on the ideXlab platform.

  • Looking for Plausibility
    arXiv: Artificial Intelligence, 2010
    Co-Authors: Wan Ahmad Tajuddin Wan Abdullah
    Abstract:

    In the interpretation of experimental data, one is actually looking for plausible explanations. We look for a measure of Plausibility, with which we can compare different possible explanations, and which can be combined when there are different sets of data. This is contrasted to the conventional measure for probabilities as well as to the proposed measure of possibilities. We define what characteristics this measure of Plausibility should have. In getting to the conception of this measure, we explore the relation of Plausibility to abductive reasoning, and to Bayesian probabilities. We also compare with the Dempster-Shafer theory of evidence, which also has its own definition for Plausibility. Abduction can be associated with biconditionality in inference rules, and this provides a platform to relate to the Collins-Michalski theory of Plausibility. Finally, using a formalism for wiring logic onto Hopfield neural networks, we ask if this is relevant in obtaining this measure.

Mark T. Keane - One of the best experts on this subject based on the ideXlab platform.

  • A model of Plausibility
    Cognitive science, 2006
    Co-Authors: Louise Connell, Mark T. Keane
    Abstract:

    Plausibility has been implicated as playing a critical role in many cognitive phenomena from comprehension to problem solving. Yet, across cognitive science, Plausibility is usually treated as an operationalized variable or metric rather than being explained or studied in itself. This article describes a new cognitive model of Plausibility, the Plausibility Analysis Model (PAM), which is aimed at modeling human Plausibility judgment. This model uses commonsense knowledge of concept-coherence to determine the degree of Plausibility of a target scenario. In essence, a highly plausible scenario is one that fits prior knowledge well: with many different sources of corroboration, without complexity of explanation, and with minimal conjecture. A detailed simulation of empirical Plausibility findings is reported, which shows a close correspondence between the model and human judgments. In addition, a sensitivity analysis demonstrates that PAM is robust in its operations.

  • Broadening Plausibility: A Sensitivity Analysis of PAM
    2005
    Co-Authors: Louise Connell, Mark T. Keane
    Abstract:

    Broadening Plausibility: A Sensitivity Analysis of PAM
    Louise Connell (louise.connell@northumbria.ac.uk), Division of Psychology, Northumbria University, Newcastle upon Tyne, NE1 8ST, UK
    Mark T. Keane (mark.keane@ucd.ie), Department of Computer Science, University College Dublin, Belfield, Dublin 4, Ireland

    Abstract. The judgement of Plausibility is severely under-specified in cognitive science despite its diverse uses in many cognitive tasks. Recently, a model of human Plausibility judgement, called the Plausibility Analysis Model (PAM), has been proposed and has been shown to closely model human Plausibility ratings of event scenarios. In the present study, we present a sensitivity analysis to explore PAM's robustness with a view to assessing its broader implications in cognitive science and cognitive modelling. Overall, this analysis shows that PAM is consistent with its underlying theory and is robust in a wide range of operational contexts, thus indicating that the model is well grounded in its characterisation of Plausibility effects.

    Introduction. People make consistent and constant use of Plausibility judgements in everyday life for a variety of reasons, from assessing the quality of a movie plot, to determining guilt in a tabloid murder trial, to considering a child's excuse for a broken dish. Yet, Plausibility remains poorly understood or explored in cognitive science. Recently, Connell and Keane (2003, in prep.) have advanced the Plausibility Analysis Model (PAM) as the first cognitive model of human Plausibility judgements. In this paper, we consider the implications of this model in a broader context and illustrate the robustness of its performance with sensitivity analyses. We know of very few cognitive models that make explicit use of Plausibility to guide, for example, decision-making, problem solving or natural language understanding. Yet, people constantly seem to use Plausibility judgements to guide diverse cognitive tasks. For example, people often use Plausibility judgements in place of costly retrieval from long-term memory, especially when verbatim memory has faded (Lemaire & Fayol, 1995; Reder, Wible & Martin, 1986). Plausibility is also used as a kind of cognitive shortcut in reading, to speed parsing and resolve ambiguities (Pickering & Traxler, 1998; Speer & Clifton, 1998). In everyday thinking, plausible reasoning that uses prior knowledge appears to be commonplace (Collins & Michalski, 1989), and can even aid people in making inductive inferences about familiar topics (Smith, Shafir & Osherson, 1993). It has also been argued that Plausibility plays a fundamental role in understanding novel word combinations by helping to constrain the interpretations produced (Costello & Keane, 2000; Lynott, Tagalakis & Keane, 2004). Many of these tasks have broad implications for models of cognition and underscore the centrality of Plausibility. In this paper, we explore the computational aspects of our research program on Plausibility. Specifically, we outline a computational model of Plausibility and demonstrate its robustness as a model with an extensive sensitivity analysis.

    Plausibility and the Knowledge-Fitting Theory. In the Knowledge-Fitting Theory of Plausibility, Connell and Keane (2003, in prep.) define Plausibility judgements as being about assessing how well a scenario fits with prior knowledge. They show that the Plausibility rating of a scenario depends upon its concept-coherence (i.e., the inference and prior knowledge used to connect the scenario's events). In addition, Connell and Keane (2004) have shown that the type of connection between a scenario's events influences its Plausibility (see Table 1 for examples). People consider events linked by causal connections (e.g., event Y was caused by event X) to be the most plausible, followed by events linked by the assertion of a previous entity's attribute (e.g., proposition Y adds an attribute to entity X), followed by events linked by temporal connections (e.g., event Y follows event X in time). Lastly, and perhaps more obviously, people consider scenarios containing unrelated events to be the least plausible of all. In the Knowledge-Fitting Theory, Plausibility judgement spans two stages: comprehension (where a representation of the scenario is formed) and assessment (where this representation is analysed to ascertain its concept-coherence). The Knowledge-Fitting Theory holds that three key aspects of the representation interact to determine a scenario's concept-coherence: complexity, corroboration and conjecture. Briefly stated, as complexity increases, Plausibility decreases. This, however, is tempered by the corroboration of the scenario, as even a very complex scenario will be plausible if it is corroborated by prior knowledge. In addition, the interaction of complexity and corroboration is affected by conjecture, as conjecture will make even the simplest, best-supported scenario seem less plausible. In essence, the most plausible scenarios are those with high concept-coherence.

  • What plausibly affects Plausibility? Concept coherence and distributional word coherence as factors influencing Plausibility judgments.
    Memory & cognition, 2004
    Co-Authors: Louise Connell, Mark T. Keane
    Abstract:

    Our goal was to investigate the basis of human Plausibility judgements. Previous research had suggested that Plausibility is affected by two factors: concept coherence (the inferences made between parts of a discourse) and word coherence (the distributional properties of the words used). In two experiments, participants were asked to rate the Plausibility of sentence pairs describing events. In the first, we manipulated concept coherence by using different inference types to link the sentences in a pair (e.g., causal or temporal). In the second, we manipulated word coherence by using latent semantic analysis, so two sentence pairs describing the same event had different distributional properties. The results showed that inference type affects Plausibility; sentence pairs linked by causal inferences were rated highest, followed by attributal, temporal, and unrelated inferences. The distributional manipulations had no reliable effect on Plausibility ratings. We conclude that the processes involved in rating Plausibility are based on evaluating concept coherence, not word coherence.

  • PAM: A Cognitive Model of Plausibility
    2003
    Co-Authors: Louise Connell, Mark T. Keane
    Abstract:

    Plausibility has been implicated as playing a critical role in many cognitive phenomena from comprehension to problem solving. Yet, Plausibility is usually treated as an operationalised variable (i.e., a Plausibility rating) rather than being explained or studied in itself. This paper reports on a new model of Plausibility that is aimed at modeling several direct studies of Plausibility. This model, the Plausibility Analysis Model (PAM), used distributional knowledge about word co-occurrence (word-coherence) and commonsense knowledge of conceptual structure and relatedness (concept-coherence) to determine the degree of Plausibility of some target description. A detailed simulation of several Plausibility findings is reported, which shows a close correspondence between the model and human judgments.

  • The Roots of Plausibility: The Role of Coherence and Distributional Knowledge in Plausibility Judgements
    2002
    Co-Authors: Louise Connell, Mark T. Keane
    Abstract:

    The Roots of Plausibility: The Role of Coherence and Distributional Knowledge in Plausibility Judgements
    Louise Connell (louise.connell@ucd.ie), Mark T. Keane (mark.keane@ucd.ie), Department of Computer Science, University College Dublin, Belfield, Dublin 4, Ireland

    Introduction. Plausibility plays a central role in human cognition, whether one is considering the alibi of a murder suspect in a crime novel, or assessing the answers of a candidate in a job interview. Other studies have mentioned Plausibility judgements in the service of other phenomena (e.g. Reder, 1982), but often without investigating them in their own right. This paper presents evidence that Plausibility judgements depend on inferential coherence and distributional information. In the first experiment, we show that the type of inference being made affects the Plausibility of a sentence pair. The second experiment demonstrates that the distributional properties of the words in a sentence pair directly influence Plausibility.

    Experiments. Two experiments advance a novel paradigm in which people make Plausibility judgements about sentence pairs. These sentence pairs are manipulated to invite different bridging inferences and to control their distributional scores (as determined by the Latent Semantic Analysis model, LSA; Landauer & Dumais, 1997). In Experiment 1, 40 participants were asked to judge, on a scale from 0 to 10, the Plausibility of sentence pairs that had been manipulated to support causal, attributal or temporal inferences, or not to invite any obvious inferences at all (i.e. unrelated pairs). The distributional information of each pair (the LSA score of the first sentence against the second) was controlled across inference types. In Experiment 2, we manipulated distributional information across the causal and attributal sentences to look at the action of both factors together. 24 participants saw two versions of each sentence pair per page (see Table 1), one of which had a relatively high LSA score between the sentences (a strong distributional link) and the other of which had a relatively low score (a weak distributional link). Participants were asked to judge the Plausibility of each pair as before, but to make certain that any perceived difference in Plausibility between the two versions of each sentence pair was reflected in the scores.

    Results & Discussion. Experiment 1's results demonstrate that different inference types differentially affect the perceived Plausibility of a discourse. The causal pairs were rated the highest in Plausibility (M=7.8), followed as predicted by attributal (M=5.5), temporal (M=4.2) and unrelated (M=2.0). An analysis of variance yielded a significant effect of inference type on Plausibility scores, F(3, 472) = 93.683, p < 0.0001.

    Table 1: Sample Experiment 2 sentence pair variants.
        Sentence 1: The pack saw the fox.
        Sentence 2                 Inference    Distribution
        The hounds growled.        Causal       Strong
        The hounds snarled.        Causal       Weak
        The hounds were fierce.    Attributal   Strong
        The hounds were vicious.   Attributal   Weak

    Experiment 2's results show that the distributional information of a sentence pair affects how plausible it is perceived to be. We examined the proportion of times a participant judged either the strong or weak version of a sentence pair to be more plausible. This analysis shows that in both the causal pairs [M=59.4%, t(10)=4.893, p

Paul-andré Monney - One of the best experts on this subject based on the ideXlab platform.

  • From Likelihood to Plausibility
    arXiv: Artificial Intelligence, 2013
    Co-Authors: Paul-andré Monney
    Abstract:

    Several authors have explained that the likelihood ratio measures the strength of the evidence represented by observations in statistical problems. This idea works fine when the goal is to evaluate the strength of the available evidence for a simple hypothesis versus another simple hypothesis. However, the applicability of this idea is limited to simple hypotheses because the likelihood function is primarily defined on points (simple hypotheses) of the parameter space. In this paper we define a general weight of evidence that is applicable to both simple and composite hypotheses. It is based on the Dempster-Shafer concept of Plausibility and is shown to be a generalization of the likelihood ratio. Functional models are of fundamental importance for the general weight of evidence proposed in this paper. The relevant concepts and ideas are explained by means of a familiar urn problem and the general analysis of a real-world medical problem is presented.
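    The Dempster-Shafer notion of Plausibility used here is easy to illustrate with a toy mass function (the values below are invented, not the paper's urn or medical examples): the Plausibility of a hypothesis H is the total mass of the focal sets consistent with H, and a ratio of plausibilities then plays, for composite hypotheses, the role the likelihood ratio plays for simple ones.

```python
# Toy mass function over a frame {a, b}; the mass on {a, b} models evidence
# that does not discriminate between the two hypotheses.
mass = {
    frozenset({"a"}): 0.5,
    frozenset({"b"}): 0.25,
    frozenset({"a", "b"}): 0.25,  # mass on a composite (non-singleton) set
}

def plausibility(H, mass):
    # Pl(H) = sum of masses of all focal sets that intersect H
    return sum(m for focal, m in mass.items() if focal & H)

H1, H2 = frozenset({"a"}), frozenset({"b"})
print(plausibility(H1, mass), plausibility(H2, mass))  # 0.75 0.5
```

    With all mass on singletons, plausibility reduces to an (unnormalized) likelihood, which is the sense in which the paper's weight of evidence generalizes the likelihood ratio.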

  • UAI - From likelihood to Plausibility
    1998
    Co-Authors: Paul-andré Monney
    Abstract:

    Several authors have explained that the likelihood ratio measures the strength of the evidence represented by observations in statistical problems. This idea works fine when the goal is to evaluate the strength of the available evidence for a simple hypothesis versus another simple hypothesis. However, the applicability of this idea is limited to simple hypotheses because the likelihood function is primarily defined on points - simple hypotheses - of the parameter space. In this paper we define a general weight of evidence that is applicable to both simple and composite hypotheses. It is based on the Dempster-Shafer concept of Plausibility and is shown to be a generalization of the likelihood ratio. Functional models are of fundamental importance for the general weight of evidence proposed in this paper. The relevant concepts and ideas are explained by means of a familiar urn problem and the general analysis of a real-world medical problem is presented.