Natural Language Semantics

The experts below are selected from a list of 25,296 experts worldwide, ranked by the ideXlab platform.

Raymond J Mooney - One of the best experts on this subject based on the ideXlab platform.

  • UTexas: Natural Language Semantics using distributional Semantics and probabilistic logic
    International Conference on Computational Linguistics, 2014
    Co-Authors: Islam Beltagy, Katrin Erk, Stephen Roller, Gemma Boleda, Raymond J Mooney
    Abstract:

    We represent Natural Language Semantics by combining logical and distributional information in probabilistic logic. We use Markov Logic Networks (MLN) for the RTE task, and Probabilistic Soft Logic (PSL) for the STS task. The system is evaluated on the SICK dataset. Our best system achieves 73% accuracy on the RTE task, and a Pearson’s correlation of 0.71 on the STS task.
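
    A minimal illustration of the kind of soft logic involved: Probabilistic Soft Logic interprets the connectives with Łukasiewicz operators over truth values in [0, 1], which is what lets a graded, distributionally informed degree of fit stand in for a hard Boolean judgment on a similarity task such as STS. The sketch below shows only those operators, with invented truth values; it is not the authors' system.

```python
# Lukasiewicz operators over soft truth values in [0, 1], as used by
# Probabilistic Soft Logic; the example truth values are invented.

def l_and(a, b):
    """Lukasiewicz conjunction."""
    return max(0.0, a + b - 1.0)

def l_or(a, b):
    """Lukasiewicz disjunction."""
    return min(1.0, a + b)

def l_implies(a, b):
    """Lukasiewicz implication: 1.0 whenever the consequent is at least as true as the antecedent."""
    return min(1.0, 1.0 - a + b)

# Graded degree to which two soft atoms jointly hold, and to which one supports the other.
print(l_and(0.9, 0.8))      # ~0.7
print(l_implies(0.9, 0.7))  # ~0.8
```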

  • efficient markov logic inference for Natural Language Semantics
    National Conference on Artificial Intelligence, 2014
    Co-Authors: Islam Beltagy, Raymond J Mooney
    Abstract:

    Using Markov logic to integrate logical and distributional information in Natural-Language Semantics results in complex inference problems involving long, complicated formulae. Current inference methods for Markov logic are ineffective on such problems. To address this problem, we propose a new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms. We also introduce a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible. Our approach is evaluated on the recognizing textual entailment task, and experiments demonstrate its dramatic impact on the efficiency of inference.
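
    As a rough illustration of why such a restriction matters (a sketch over assumed toy predicates and constants, not the paper's algorithm): dropping ground atoms over constants that never occur in the evidence shrinks the ground network that inference has to handle.

```python
# Toy illustration of pruning a ground network with a closed-world-style
# restriction; predicates, constants, and evidence are invented.
from itertools import product

constants = ["A", "B", "C", "D"]
predicates = ["man", "person", "walk"]

# Full grounding: every predicate applied to every constant.
full_ground = [f"{p}({c})" for p, c in product(predicates, constants)]

# The evidence only mentions constant A; atoms over unmentioned constants
# are assumed false and dropped before building the network.
evidence_constants = {"A"}
reduced_ground = [f"{p}({c})" for p, c in product(predicates, constants)
                  if c in evidence_constants]

print(len(full_ground), "ground atoms before pruning,", len(reduced_ground), "after")
```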

  • a formal approach to linking logical form and vector space lexical Semantics
    2014
    Co-Authors: Dan Garrette, Katrin Erk, Raymond J Mooney
    Abstract:

    First-order logic provides a powerful and flexible mechanism for representing Natural Language Semantics. However, it is an open question of how best to integrate it with uncertain, weighted knowledge, for example regarding word meaning. This paper describes a mapping between predicates of logical form and points in a vector space. This mapping is then used to project distributional inferences to inference rules in logical form. We then describe first steps of an approach that uses this mapping to recast first-order Semantics into the probabilistic models that are part of Statistical Relational AI. Specifically, we show how Discourse Representation Structures can be combined with distributional models for word meaning inside a Markov Logic Network and used to successfully perform inferences that take advantage of logical concepts such as negation and factivity as well as weighted information on word meaning in context.
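
    The predicate-to-vector mapping can be pictured as in the sketch below; the toy vectors, the similarity threshold, and the rule syntax are illustrative assumptions rather than the paper's actual resources.

```python
# Sketch: map logical-form predicates to points in a vector space and project
# distributional similarity back into weighted inference rules (toy data).
import numpy as np

predicate_vectors = {                      # assumed predicate -> vector mapping
    "sprint": np.array([0.7, 0.6, 0.1]),
    "run":    np.array([0.8, 0.5, 0.2]),
    "sleep":  np.array([0.1, 0.1, 0.9]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def project_rules(source, threshold=0.8):
    """Emit weighted rules source(x) -> target(x) for distributionally close predicates."""
    src = predicate_vectors[source]
    return [(cosine(src, vec), f"forall x. {source}(x) -> {target}(x)")
            for target, vec in predicate_vectors.items()
            if target != source and cosine(src, vec) >= threshold]

print(project_rules("sprint"))   # a weighted rule linking "sprint" to "run", but not to "sleep"
```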

  • integrating logical representations with probabilistic information using markov logic
    IWCS '11 Proceedings of the Ninth International Conference on Computational Semantics, 2011
    Co-Authors: Dan Garrette, Katrin Erk, Raymond J Mooney
    Abstract:

    First-order logic provides a powerful and flexible mechanism for representing Natural Language Semantics. However, it is an open question of how best to integrate it with uncertain, probabilistic knowledge, for example regarding word meaning. This paper describes the first steps of an approach to recasting first-order Semantics into the probabilistic models that are part of Statistical Relational AI. Specifically, we show how Discourse Representation Structures can be combined with distributional models for word meaning inside a Markov Logic Network and used to successfully perform inferences that take advantage of logical concepts such as factivity as well as probabilistic information on word meaning in context.

Islam Beltagy - One of the best experts on this subject based on the ideXlab platform.

  • Natural Language Semantics using Probabilistic Logic
    2014
    Co-Authors: Islam Beltagy
    Abstract:

    With better Natural Language semantic representations, computers can support more applications more efficiently, as a result of a better understanding of Natural text. However, no single semantic representation at this time fulfills all requirements needed for a satisfactory representation. Logic-based representations like first-order logic capture many of the linguistic phenomena using logical constructs, and they come with standardized inference mechanisms, but standard first-order logic fails to capture the graded aspect of meaning in Languages. Distributional models use contextual similarity to predict the graded semantic similarity of words and phrases, but they do not adequately capture logical structure. In addition, there are a few recent attempts to combine both representations, either on the logic side (still not a graded representation) or on the distributional side (not full logic). We propose using probabilistic logic to represent Natural Language Semantics, combining the expressivity and automated inference of logic with the gradedness of distributional representations. We evaluate this semantic representation on two tasks, Recognizing Textual Entailment (RTE) and Semantic Textual Similarity (STS); doing better on RTE and STS is an indication of better semantic understanding. Our system has three main components: 1. Parsing and Task Representation, 2. Knowledge Base Construction, and 3. Inference. The input Natural sentences of the RTE/STS task are mapped to logical form using Boxer, a rule-based system built on top of a CCG parser, and are then used to formulate the RTE/STS problem in probabilistic logic. A knowledge base is then represented as weighted inference rules collected from different sources, such as WordNet and on-the-fly lexical rules from distributional Semantics.
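
    Schematically, the pipeline can be pictured as below. All three functions are placeholders with hard-coded toy outputs; the real components (Boxer on top of a CCG parser, WordNet- and distribution-derived rules, and a probabilistic-logic inference engine) are not invoked here.

```python
# Placeholder sketch of the three-component pipeline: task representation,
# knowledge base construction, and inference. Everything below is a stand-in.

def parse_to_logic(sentence):
    """Stand-in for Boxer: map a sentence to a logical form (hard-coded toy output)."""
    toy = {"A man is walking": "exists x. man(x) & walk(x)",
           "A person is moving": "exists x. person(x) & move(x)"}
    return toy.get(sentence, "")

def build_knowledge_base(premise_lf, hypothesis_lf):
    """Stand-in for KB construction: weighted rules from WordNet / distributional similarity."""
    return [(0.9, "forall x. man(x) -> person(x)"),
            (0.7, "forall x. walk(x) -> move(x)")]

def infer(premise_lf, hypothesis_lf, kb):
    """Stand-in for probabilistic-logic inference: return a toy entailment score."""
    return min(w for w, _ in kb) if kb else 0.0

premise, hypothesis = "A man is walking", "A person is moving"
kb = build_knowledge_base(parse_to_logic(premise), parse_to_logic(hypothesis))
print("toy entailment score:", infer(parse_to_logic(premise), parse_to_logic(hypothesis), kb))
```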

  • UTexas: Natural Language Semantics using distributional Semantics and probabilistic logic
    International Conference on Computational Linguistics, 2014
    Co-Authors: Islam Beltagy, Katrin Erk, Stephen Roller, Gemma Boleda, Raymond J Mooney
    Abstract:

    We represent Natural Language Semantics by combining logical and distributional information in probabilistic logic. We use Markov Logic Networks (MLN) for the RTE task, and Probabilistic Soft Logic (PSL) for the STS task. The system is evaluated on the SICK dataset. Our best system achieves 73% accuracy on the RTE task, and a Pearson’s correlation of 0.71 on the STS task.

  • efficient markov logic inference for Natural Language Semantics
    National Conference on Artificial Intelligence, 2014
    Co-Authors: Islam Beltagy, Raymond J Mooney
    Abstract:

    Using Markov logic to integrate logical and distributional information in Natural-Language Semantics results in complex inference problems involving long, complicated formulae. Current inference methods for Markov logic are ineffective on such problems. To address this problem, we propose a new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms. We also introduce a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible. Our approach is evaluated on the recognizing textual entailment task, and experiments demonstrate its dramatic impact on the efficiency of inference.

Katrin Erk - One of the best experts on this subject based on the ideXlab platform.

  • UTexas: Natural Language Semantics using distributional Semantics and probabilistic logic
    International Conference on Computational Linguistics, 2014
    Co-Authors: Islam Beltagy, Katrin Erk, Stephen Roller, Gemma Boleda, Raymond J Mooney
    Abstract:

    We represent Natural Language Semantics by combining logical and distributional information in probabilistic logic. We use Markov Logic Networks (MLN) for the RTE task, and Probabilistic Soft Logic (PSL) for the STS task. The system is evaluated on the SICK dataset. Our best system achieves 73% accuracy on the RTE task, and a Pearson’s correlation of 0.71 on the STS task.

  • a formal approach to linking logical form and vector space lexical Semantics
    2014
    Co-Authors: Dan Garrette, Katrin Erk, Raymond J Mooney
    Abstract:

    First-order logic provides a powerful and flexible mechanism for representing Natural Language Semantics. However, it is an open question of how best to integrate it with uncertain, weighted knowledge, for example regarding word meaning. This paper describes a mapping between predicates of logical form and points in a vector space. This mapping is then used to project distributional inferences to inference rules in logical form. We then describe first steps of an approach that uses this mapping to recast first-order Semantics into the probabilistic models that are part of Statistical Relational AI. Specifically, we show how Discourse Representation Structures can be combined with distributional models for word meaning inside a Markov Logic Network and used to successfully perform inferences that take advantage of logical concepts such as negation and factivity as well as weighted information on word meaning in context.

  • vector space models of word meaning and phrase meaning: a survey
    Language and Linguistics Compass, 2012
    Co-Authors: Katrin Erk
    Abstract:

    Distributional models represent a word through the contexts in which it has been observed. They can be used to predict similarity in meaning, based on the distributional hypothesis, which states that two words that occur in similar contexts tend to have similar meanings. Distributional approaches are often implemented in vector space models. They represent a word as a point in high-dimensional space, where each dimension stands for a context item, and a word's coordinates represent its context counts. Occurrence in similar contexts then means proximity in space. In this survey we look at the use of vector space models to describe the meaning of words and phrases: the phenomena that vector space models address, and the techniques that they use to do so. Many word meaning phenomena can be described in terms of semantic similarity: synonymy, priming, categorization, and the typicality of a predicate's arguments. But vector space models can do more than just predict semantic similarity. They are a very flexible tool, because they can make use of all of linear algebra, with all its data structures and operations. The dimensions of a vector space can stand for many things: context words, or non-linguistic context like images, or properties of a concept. And vector space models can use matrices or higher-order arrays instead of vectors for representing more complex relationships. Polysemy is a tough problem for distributional approaches, as a representation that is learned from all of a word's contexts will conflate the different senses of the word. It can be addressed, using either clustering or vector combination techniques. Finally, we look at vector space models for phrases, which are usually constructed by combining word vectors. Vector space models for phrases can predict phrase similarity, and some argue that they can form the basis for a general-purpose representation framework for Natural Language Semantics.
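
    The count-based construction described here is simple enough to show end to end; the sketch below builds context-count vectors over a tiny invented corpus (the +/-2-word window and cosine similarity are assumed choices, not the survey's prescription).

```python
# Count-based vector space model: each word is a vector of co-occurrence counts
# over context words, and similarity in meaning is modeled as proximity (cosine).
from collections import Counter, defaultdict
import math

corpus = ["the cat chased the mouse",
          "the dog chased the cat",
          "the dog ate the bone"]

context_counts = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        for j in range(max(0, i - 2), min(len(tokens), i + 3)):
            if i != j:
                context_counts[word][tokens[j]] += 1   # count context items in a +/-2 window

def cosine(u, v):
    num = sum(u[k] * v[k] for k in set(u) & set(v))
    den = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return num / den if den else 0.0

print(cosine(context_counts["cat"], context_counts["dog"]))   # proximity reflects shared contexts
```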

  • integrating logical representations with probabilistic information using markov logic
    IWCS '11 Proceedings of the Ninth International Conference on Computational Semantics, 2011
    Co-Authors: Dan Garrette, Katrin Erk, Raymond J Mooney
    Abstract:

    First-order logic provides a powerful and flexible mechanism for representing Natural Language Semantics. However, it is an open question of how best to integrate it with uncertain, probabilistic knowledge, for example regarding word meaning. This paper describes the first steps of an approach to recasting first-order Semantics into the probabilistic models that are part of Statistical Relational AI. Specifically, we show how Discourse Representation Structures can be combined with distributional models for word meaning inside a Markov Logic Network and used to successfully perform inferences that take advantage of logical concepts such as factivity as well as probabilistic information on word meaning in context.

Tim Fernando - One of the best experts on this subject based on the ideXlab platform.

  • finite state temporal projection
    Lecture Notes in Computer Science, 2006
    Co-Authors: Tim Fernando
    Abstract:

    Finite-state methods are applied to determine the consequences of events, represented as strings of sets of fluents. Developed to flesh out events used in Natural Language Semantics, the approach supports reasoning about action in AI, including the frame problem and inertia. Representational and inferential aspects of the approach are explored, centering on conciseness of Language, context update and constraint application with bias.
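
    One way to picture this (an assumed toy encoding with "-" marking a negated fluent, not the paper's notation): an event is a string of sets of fluents, and an inertia step copies a fluent into later moments until some later set contradicts it.

```python
# Toy inertia over an event represented as a string (list) of sets of fluents.

def project_inertia(event):
    """Carry each positive fluent forward until a later set contains its explicit negation."""
    projected, carried = [], set()
    for snapshot in event:
        carried = {f for f in carried if ("-" + f) not in snapshot}
        carried |= {f for f in snapshot if not f.startswith("-")}
        projected.append(frozenset(snapshot | carried))
    return projected

# "The door is opened; two moments later it is closed."
event = [{"open(door)"}, set(), {"-open(door)", "closed(door)"}]
print(project_inertia(event))   # open(door) persists through the middle moment only
```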

  • a finite state approach to events in Natural Language Semantics
    Journal of Logic and Computation, 2004
    Co-Authors: Tim Fernando
    Abstract:

    Events in Natural Language Semantics are characterized in terms of regular Languages, each string in which can be regarded as a temporal sequence of observations. The usual regular constructs (concatenation, etc.) are supplemented with superposition, inducing a useful notion of entailment, distinct from that given by models of predicate logic.
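
    A minimal sketch of superposition on strings of equal length (componentwise union), with an informal subsumption check standing in for the induced entailment relation; the fluents are invented.

```python
# Superposition of two equal-length strings of fluent sets, plus a simple
# subsumption test; fluent names are toy examples.

def superpose(s1, s2):
    """Componentwise union of two equal-length strings of fluent sets."""
    assert len(s1) == len(s2)
    return [a | b for a, b in zip(s1, s2)]

def subsumes(s1, s2):
    """True if every snapshot of s1 contains the corresponding snapshot of s2."""
    return len(s1) == len(s2) and all(b <= a for a, b in zip(s1, s2))

rain = [{"rain"}, {"rain"}]
cold = [{"cold"}, set()]
both = superpose(rain, cold)          # [{'rain', 'cold'}, {'rain'}]
print(both, subsumes(both, rain))     # the superposed string subsumes each component
```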

Noah D. Goodman - One of the best experts on this subject based on the ideXlab platform.

  • probabilistic Semantics and pragmatics: uncertainty in Language and thought
    The Handbook of Contemporary Semantic Theory, 2015
    Co-Authors: Noah D. Goodman, Daniel Lassiter
    Abstract:

    Language is used to communicate ideas. Ideas are mental tools for coping with a complex and uncertain world. Thus human conceptual structures should be key to Language meaning, and probability—the mathematics of uncertainty— should be indispensable for describing both Language and thought. Indeed, probabilistic models are enormously useful in modeling human cognition (Tenenbaum et al., 2011) and aspects of Natural Language (Bod et al., 2003; Chater et al., 2006). With a few early exceptions (e.g. Adams, 1975; Cohen, 1999b), probabilistic tools have only recently been used in Natural Language Semantics and pragmatics. In this chapter we synthesize several of these modeling advances, exploring a formal model of interpretation grounded, via lexical Semantics and pragmatic inference, in conceptual structure. Flexible human cognition is derived in large part from our ability to imagine possibilities (or possible worlds). A rich set of concepts, intuitive theories, and other mental representations support imagining and reasoning about possible worlds—together we will call these the conceptual lexicon. We posit that this collection of concepts also forms the set of primitive elements available for lexical Semantics: word meanings can be built from the pieces of conceptual structure. Larger semantic structures are then built from word meanings by composition, ultimately resulting in a sentence meaning which is a phrase in the “Language of thought” provided by the conceptual lexicon. This expression is truth-functional in that it takes on a Boolean value for each imagined world, and it can thus be used as the basis for belief updating. However, the connection between cognition, Semantics, and belief is not direct: because Language must flexibly adapt to the context of communication, the connection between lexical representation and interpreted meaning is mediated by pragmatic inference.
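
    The truth-functional core of this picture can be sketched directly; the worlds, the uniform prior, and the single hard-coded lexical meaning below are toy assumptions, and the pragmatic layer is omitted.

```python
# Sentence meaning as a truth function over imagined possible worlds, used to
# update a probabilistic belief state by conditioning (toy example).

# Possible worlds: whether it is raining and whether the speaker has an umbrella.
worlds = [{"rain": r, "umbrella": u} for r in (True, False) for u in (True, False)]
prior = {i: 1.0 / len(worlds) for i in range(len(worlds))}   # uniform prior over worlds

def meaning_it_is_raining(world):
    """Truth-functional meaning: True in exactly the worlds where it rains."""
    return world["rain"]

def update(belief, meaning):
    """Bayesian belief update: condition on the sentence being true."""
    kept = {i: p for i, p in belief.items() if meaning(worlds[i])}
    total = sum(kept.values())
    return {i: p / total for i, p in kept.items()}

posterior = update(prior, meaning_it_is_raining)
print(posterior)   # probability mass is now concentrated on the rain-worlds
```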

  • how many kinds of reasoning? inference, probability, and Natural Language Semantics
    Cognition, 2015
    Co-Authors: Daniel Lassiter, Noah D. Goodman
    Abstract:

    The “new paradigm” unifying deductive and inductive reasoning in a Bayesian framework (Oaksford & Chater, 2007; Over, 2009) has been claimed to be falsified by results which show sharp differences between reasoning about necessity vs. plausibility (Heit & Rotello, 2010; Rips, 2001; Rotello & Heit, 2009). We provide a probabilistic model of reasoning with modal expressions such as “necessary” and “plausible” informed by recent work in formal Semantics of Natural Language, and show that it predicts the possibility of non-linear response patterns which have been claimed to be problematic. Our model also makes a strong monotonicity prediction, while two-dimensional theories predict the possibility of reversals in argument strength depending on the modal word chosen. Predictions were tested using a novel experimental paradigm that replicates the previously-reported response patterns with a minimal manipulation, changing only one word of the stimulus between conditions. We found a spectrum of reasoning “modes” corresponding to different modal words, and strong support for our model’s monotonicity prediction. This indicates that probabilistic approaches to reasoning can account in a clear and parsimonious way for data previously argued to falsify them, as well as new, more fine-grained, data. It also illustrates the importance of careful attention to the Semantics of Language employed in reasoning experiments.
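
    As a toy rendering of the single-scale idea (the threshold values below are invented, not the paper's fitted parameters): each modal word sets a different threshold on the same underlying conditional probability, so endorsement patterns differ sharply across modal frames while remaining monotone in argument strength.

```python
# Modal words as thresholds on a single scale of argument strength
# (conditional probability of the conclusion given the premises); toy values.

thresholds = {"necessary": 0.99, "certain": 0.95, "probable": 0.6, "plausible": 0.4}

def endorse(strength, modal):
    """Endorse the conclusion under a modal frame when strength clears its threshold."""
    return strength >= thresholds[modal]

arguments = {"valid argument": 0.995,
             "strong invalid argument": 0.7,
             "weak invalid argument": 0.3}

for name, strength in arguments.items():
    print(name, {modal: endorse(strength, modal) for modal in thresholds})
```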

  • how many kinds of reasoning? inference, probability, and Natural Language Semantics
    Cognitive Science, 2012
    Co-Authors: Daniel Lassiter, Noah D. Goodman
    Abstract:

    Previous research (Heit & Rotello, 2010; Rips, 2001; Rotello & Heit, 2009) has suggested that differences between inductive and deductive reasoning cannot be explained by probabilistic theories, and instead support two-process accounts of reasoning. We provide a probabilistic model that predicts the observed non-linearities and makes quantitative predictions about responses as a function of argument strength. Predictions were tested using a novel experimental paradigm that elicits the previously-reported response patterns with a minimal manipulation, changing only one word between conditions. We also found a good fit with quantitative model predictions, indicating that a probabilistic theory of reasoning can account in a clear and parsimonious way for qualitative and quantitative data previously argued to falsify them. We also relate our model to recent work in linguistics, arguing that careful attention to the Semantics of Language used to pose reasoning problems will sharpen the questions asked in the psychology of reasoning. Keywords: Reasoning, induction, deduction, probabilistic model, formal Semantics.