Entailment

The Experts below are selected from a list of 397884 Experts worldwide ranked by ideXlab platform

Ido Dagan - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Global Learning of Entailment Graphs
    Computational Linguistics, 2015
    Co-Authors: Jonathan Berant, Ido Dagan, Noga Alon, Jacob Goldberger
    Abstract:

    Entailment rules between predicates are fundamental to many semantic-inference applications. Consequently, learning such rules has been an active field of research in recent years. Methods for learning entailment rules between predicates that take into account dependencies between different rules (e.g., entailment is a transitive relation) have been shown to improve rule quality, but suffer from scalability issues: the number of predicates handled is often quite small. In this article, we present methods for learning transitive graphs that contain tens of thousands of nodes, where nodes represent predicates and edges correspond to entailment rules, termed entailment graphs. Our methods scale to a large number of predicates by exploiting structural properties of entailment graphs, such as the fact that they exhibit a "tree-like" property. We apply our methods to two data sets and demonstrate that they find high-quality solutions faster than previously proposed methods and, for the first time, scale to large graphs containing 20,000 nodes and more than 100,000 edges.
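
    To make the graph structure concrete, here is a minimal sketch in Python of an entailment graph whose nodes are predicate templates and whose directed edges are entailment rules kept transitively closed, as the abstract describes. This is not the authors' learning algorithm; the class, method names, and example predicates are illustrative assumptions.

from collections import defaultdict

# Minimal entailment-graph sketch: nodes are predicate templates, an edge
# u -> v means "u entails v", and the edge set is kept transitively closed.
class EntailmentGraph:
    def __init__(self):
        self.edges = defaultdict(set)          # predicate -> entailed predicates

    def add_rule(self, lhs, rhs):
        """Add the rule lhs -> rhs and restore transitivity (naive closure)."""
        self.edges[lhs].add(rhs)
        changed = True
        while changed:
            changed = False
            for u in list(self.edges):
                for v in list(self.edges[u]):
                    missing = self.edges[v] - self.edges[u]
                    if missing:
                        self.edges[u] |= missing
                        changed = True

    def entails(self, lhs, rhs):
        return rhs in self.edges[lhs]

g = EntailmentGraph()
g.add_rule("X acquire Y", "X own Y")
g.add_rule("X own Y", "X have Y")
print(g.entails("X acquire Y", "X have Y"))    # True, by transitivity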

  • Focused Entailment Graphs for Open IE Propositions
    Conference on Computational Natural Language Learning, 2014
    Co-Authors: Omer Levy, Ido Dagan, Jacob Goldberger
    Abstract:

    Open IE methods extract structured propositions from text. However, these propositions are neither consolidated nor generalized, and querying them may lead to insufficient or redundant information. This work suggests an approach to organize Open IE propositions using entailment graphs. The entailment relation unifies equivalent propositions and induces a specific-to-general structure. We create a large dataset of gold-standard proposition entailment graphs, and provide a novel algorithm for automatically constructing them. Our analysis shows that predicate entailment is extremely context-sensitive, and that current lexical-semantic resources do not capture many of the lexical inferences induced by proposition entailment.
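
    The following minimal sketch illustrates the two structural ideas above: propositions whose predicates entail each other in both directions are merged into a single node, and the remaining one-directional rules induce specific-to-general edges. It is not the paper's construction algorithm, and the predicate rules and propositions are illustrative assumptions.

# Illustrative predicate-entailment rules (assumed, not learned here):
# mutual rules mark equivalent predicates, one-way rules are specific -> general.
rules = {("buy", "purchase"), ("purchase", "buy"),
         ("buy", "acquire"), ("purchase", "acquire")}

def equivalence_class(pred):
    """All predicates mutually entailing `pred`; used as one merged graph node."""
    return tuple(sorted({pred} | {q for p, q in rules
                                  if p == pred and (q, pred) in rules}))

propositions = [("Google", "buy", "YouTube"),
                ("Google", "purchase", "YouTube")]

# Equivalent propositions collapse onto a single node.
nodes = {(subj, equivalence_class(pred), obj) for subj, pred, obj in propositions}
print(nodes)    # {('Google', ('buy', 'purchase'), 'YouTube')}

# One-way rules become specific-to-general edges between merged nodes.
edges = {(equivalence_class(p), equivalence_class(q))
         for p, q in rules if (q, p) not in rules}
print(edges)    # {(('buy', 'purchase'), ('acquire',))}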

  • Recognizing Partial Textual Entailment
    Meeting of the Association for Computational Linguistics, 2013
    Co-Authors: Omer Levy, Ido Dagan, Torsten Zesch, Iryna Gurevych
    Abstract:

    Textual entailment is an asymmetric relation between two text fragments that describes whether one fragment can be inferred from the other. It thus cannot capture the notion that the target fragment is “almost entailed” by the given text. The recently suggested idea of partial textual entailment may remedy this problem. We investigate partial entailment under the faceted entailment model and the possibility of adapting existing textual entailment methods to this setting. Indeed, our results show that these methods are useful for recognizing partial entailment. We also provide a preliminary assessment of how partial entailment may be used for recognizing (complete) textual entailment.
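
    As a rough illustration of the faceted view of partial entailment, a hypothesis can be decomposed into facets and each facet judged separately, so a text may entail only a fraction of them. The sketch below is a toy stand-in, not one of the adapted systems from the paper; the facet extraction and the facet-level decision rule are deliberately simplistic assumptions.

def facets(hypothesis):
    """Toy facet extraction: adjacent content-word pairs (a stand-in for the
    syntactically derived facets used in the faceted entailment model)."""
    words = [w for w in hypothesis.lower().split() if w not in {"the", "a", "an"}]
    return list(zip(words, words[1:]))

def facet_entailed(text, facet):
    """Toy facet-level decision: both facet words occur in the text.
    A real system would plug an RTE classifier in here instead."""
    tokens = text.lower().split()
    return all(w in tokens for w in facet)

text = "The company announced that it acquired the startup"
hypothesis = "The large company acquired a startup"

judgments = {f: facet_entailed(text, f) for f in facets(hypothesis)}
coverage = sum(judgments.values()) / len(judgments)
print(judgments)                                                   # per-facet decisions
print(f"partially entailed: {coverage:.0%} of facets covered")     # 67%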

  • Recognizing Textual Entailment: Models and Applications
    2013
    Co-Authors: Ido Dagan, Dan Roth, Mark Sammons, Fabio Massimo Zanzotto
    Abstract:

    Contents: List of Figures; List of Tables; Preface; Acknowledgments; Textual Entailment; Architectures and Approaches; Alignment, Classification, and Learning; Case Studies; Knowledge Acquisition for Textual Entailment; Research Directions in RTE; Bibliography; Authors' Biographies.

Omer Levy - One of the best experts on this subject based on the ideXlab platform.

  • Focused Entailment Graphs for Open IE Propositions
    Conference on Computational Natural Language Learning, 2014
    Co-Authors: Omer Levy, Ido Dagan, Jacob Goldberger
    Abstract:

    Open IE methods extract structured propositions from text. However, these propositions are neither consolidated nor generalized, and querying them may lead to insufficient or redundant information. This work suggests an approach to organize Open IE propositions using entailment graphs. The entailment relation unifies equivalent propositions and induces a specific-to-general structure. We create a large dataset of gold-standard proposition entailment graphs, and provide a novel algorithm for automatically constructing them. Our analysis shows that predicate entailment is extremely context-sensitive, and that current lexical-semantic resources do not capture many of the lexical inferences induced by proposition entailment.

  • Recognizing Partial Textual Entailment
    Meeting of the Association for Computational Linguistics, 2013
    Co-Authors: Omer Levy, Ido Dagan, Torsten Zesch, Iryna Gurevych
    Abstract:

    Textual entailment is an asymmetric relation between two text fragments that describes whether one fragment can be inferred from the other. It thus cannot capture the notion that the target fragment is “almost entailed” by the given text. The recently suggested idea of partial textual entailment may remedy this problem. We investigate partial entailment under the faceted entailment model and the possibility of adapting existing textual entailment methods to this setting. Indeed, our results show that these methods are useful for recognizing partial entailment. We also provide a preliminary assessment of how partial entailment may be used for recognizing (complete) textual entailment.

Iryna Gurevych - One of the best experts on this subject based on the ideXlab platform.

  • Recognizing Partial Textual Entailment
    Meeting of the Association for Computational Linguistics, 2013
    Co-Authors: Omer Levy, Ido Dagan, Torsten Zesch, Iryna Gurevych
    Abstract:

    Textual entailment is an asymmetric relation between two text fragments that describes whether one fragment can be inferred from the other. It thus cannot capture the notion that the target fragment is “almost entailed” by the given text. The recently suggested idea of partial textual entailment may remedy this problem. We investigate partial entailment under the faceted entailment model and the possibility of adapting existing textual entailment methods to this setting. Indeed, our results show that these methods are useful for recognizing partial entailment. We also provide a preliminary assessment of how partial entailment may be used for recognizing (complete) textual entailment.

Bill Dolan - One of the best experts on this subject based on the ideXlab platform.

  • The Third PASCAL Recognizing Textual Entailment Challenge
    Meeting of the Association for Computational Linguistics, 2007
    Co-Authors: Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, Bill Dolan
    Abstract:

    This paper presents the Third PASCAL Recognising Textual Entailment Challenge (RTE-3), providing an overview of the dataset creation methodology and the submitted systems. In creating this year's dataset, a number of longer texts were introduced to make the challenge more oriented toward realistic scenarios. Additionally, a pool of resources was offered so that participants could share common tools. A pilot task was also set up, aimed at differentiating unknown entailments from identified contradictions and at providing justifications for overall system decisions. Twenty-six participants submitted 44 runs; they explored a range of approaches, generally presented new entailment models, and achieved higher scores than in the previous challenges.
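
    For readers who want to try the three-way decision from the pilot task (entailment vs. contradiction vs. unknown), the sketch below uses an off-the-shelf NLI model rather than any RTE-3 system. It assumes the Hugging Face transformers and torch packages and the public roberta-large-mnli checkpoint; the example pair is invented.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Off-the-shelf NLI model (assumption: roberta-large-mnli can be downloaded);
# its NEUTRAL class plays the role of the pilot task's "unknown" label.
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

text = "The acquisition of YouTube by Google was completed in November 2006."
hypothesis = "Google owns YouTube."

inputs = tokenizer(text, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

label = model.config.id2label[int(logits.argmax(dim=-1))]
print(label)    # ENTAILMENT, CONTRADICTION, or NEUTRAL ("unknown")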

Danilo Giampiccolo - One of the best experts on this subject based on the ideXlab platform.

  • SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge
    Joint Conference on Lexical and Computational Semantics, 2013
    Co-Authors: Myroslava O Dzikovska, Danilo Giampiccolo, Ido Dagan, Rodney D Nielsen, Chris Brew, Claudia Leacock, Luisa Bentivogli, Peter Clark, Hoa Trang Dang
    Abstract:

    We present the results of the Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge, which aims to bring together researchers in educational NLP technology and textual entailment. The task of giving feedback on student answers requires semantic inference and is therefore related to recognizing textual entailment. Thus, we offered the community a 5-way student response labeling task, as well as 3-way and 2-way RTE-style tasks on educational data. In addition, a partial entailment task was piloted. We present and compare results from nine participating teams, and discuss future directions.
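
    The following minimal sketch shows one way a 5-way student-response labeling can be collapsed into the 3-way and 2-way RTE-style tasks mentioned above. The label names and the exact mapping are illustrative assumptions, not the official task definition.

# Illustrative 5-way label set; the actual SemEval-2013 Task 7 names and
# mapping may differ from this sketch.
FIVE_WAY = ["correct", "partially_correct_incomplete",
            "contradictory", "irrelevant", "non_domain"]

def to_three_way(label):
    """Collapse a 5-way label into an RTE-style 3-way decision."""
    if label == "correct":
        return "correct"
    if label == "contradictory":
        return "contradictory"
    return "incorrect"                 # everything else counts as incorrect

def to_two_way(label):
    """Collapse further into the 2-way (correct vs. incorrect) task."""
    return "correct" if label == "correct" else "incorrect"

for label in FIVE_WAY:
    print(f"{label:30s} -> {to_three_way(label):13s} -> {to_two_way(label)}")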

  • The Third PASCAL Recognizing Textual Entailment Challenge
    Meeting of the Association for Computational Linguistics, 2007
    Co-Authors: Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, Bill Dolan
    Abstract:

    This paper presents the Third PASCAL Recognising Textual Entailment Challenge (RTE-3), providing an overview of the dataset creation methodology and the submitted systems. In creating this year's dataset, a number of longer texts were introduced to make the challenge more oriented toward realistic scenarios. Additionally, a pool of resources was offered so that participants could share common tools. A pilot task was also set up, aimed at differentiating unknown entailments from identified contradictions and at providing justifications for overall system decisions. Twenty-six participants submitted 44 runs; they explored a range of approaches, generally presented new entailment models, and achieved higher scores than in the previous challenges.