Anaphora - Explore the Science & Experts | ideXlab

Anaphora

The Experts below are selected from a list of 8,379 Experts worldwide, ranked by the ideXlab platform

Ryu Iida – 1st expert on this subject based on the ideXlab platform

  • intra-sentential zero Anaphora resolution using subject sharing recognition
    Empirical Methods in Natural Language Processing, 2015
    Co-Authors: Ryu Iida, Kentaro Torisawa, Chikara Hashimoto, Jonghoon Oh, Julien Kloetzer

    Abstract:

    In this work, we improve the performance of intra-sentential zero Anaphora resolution in Japanese using a novel method of recognizing subject sharing relations. In Japanese, a large portion of intra-sentential zero Anaphora can be regarded as subject sharing relations between predicates, that is, the subject of some predicate is also the unrealized subject of other predicates. We develop an accurate recognizer of subject sharing relations for pairs of predicates in a single sentence, and then construct a subject shared predicate network, which is a set of predicates that are linked by the subject sharing relations recognized by our recognizer. We finally combine our zero Anaphora resolution method exploiting the subject shared predicate network and a state-of-the-art ILP-based zero Anaphora resolution method. Our combined method achieved a significant improvement over the ILP-based method alone on intra-sentential zero Anaphora resolution in Japanese. To the best of our knowledge, this is the first work to explicitly use an independent subject sharing recognizer in zero Anaphora resolution.
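
A minimal, hypothetical sketch of the "subject shared predicate network" idea described in this abstract: predicates in one sentence are nodes, edges are added whenever a (separately trained) subject-sharing recognizer says two predicates share a subject, and an overtly realized subject is then propagated to predicates whose subject is unrealized. The function names and the toy data are illustrative only, not the authors' implementation.

```python
from collections import defaultdict, deque

def build_network(predicates, sharing_pairs):
    """Adjacency list over predicate indices linked by subject sharing."""
    graph = defaultdict(set)
    for i, j in sharing_pairs:
        graph[i].add(j)
        graph[j].add(i)
    return graph

def propagate_subjects(predicates, graph):
    """Copy an overtly realized subject to connected predicates lacking one."""
    resolved = dict(predicates)  # index -> subject string or None (zero subject)
    for start, subject in predicates.items():
        if subject is None:
            continue
        queue, seen = deque([start]), {start}
        while queue:
            node = queue.popleft()
            for neighbour in graph[node]:
                if neighbour in seen:
                    continue
                seen.add(neighbour)
                if resolved[neighbour] is None:
                    resolved[neighbour] = subject  # fill the zero subject
                queue.append(neighbour)
    return resolved

# Toy example: predicate 0 has the overt subject "Taro"; predicates 1 and 2
# have zero subjects but are linked to 0 by the (hypothetical) recognizer.
predicates = {0: "Taro", 1: None, 2: None}
sharing_pairs = [(0, 1), (1, 2)]
print(propagate_subjects(predicates, build_network(predicates, sharing_pairs)))
# -> {0: 'Taro', 1: 'Taro', 2: 'Taro'}
```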

  • a cross-lingual ILP solution to zero Anaphora resolution
    Meeting of the Association for Computational Linguistics, 2011
    Co-Authors: Ryu Iida, Massimo Poesio

    Abstract:

    We present an ILP-based model of zero Anaphora detection and resolution that builds on the joint determination of anaphoricity and coreference model proposed by Denis and Baldridge (2007), but revises it and extends it into a three-way ILP problem also incorporating subject detection. We show that this new model outperforms several baselines and competing models, as well as a direct translation of the Denis/Baldridge model, for both Italian and Japanese zero Anaphora. We incorporate our model in complete anaphoric resolvers for both Italian and Japanese, showing that our approach leads to improved performance also when not used in isolation, provided that separate classifiers are used for zeros and for explicitly realized anaphors.
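
A simplified, hypothetical ILP in the spirit of the joint model described above (after Denis and Baldridge 2007): binary variables for coreference links and for anaphoricity, tied together by constraints so that the two decisions are made jointly. The scores are made up, and the subject-detection variables of the full three-way model are omitted for brevity; PuLP is used only to make the joint-inference idea concrete (requires `pip install pulp`).

```python
import pulp

candidates = ["m1", "m2"]            # candidate antecedents for one zero pronoun
link_score = {"m1": 0.8, "m2": 0.3}  # hypothetical classifier scores
anaph_score = 0.7                    # score that the zero pronoun is anaphoric

prob = pulp.LpProblem("zero_anaphora_ilp", pulp.LpMaximize)
x = {c: pulp.LpVariable(f"link_{c}", cat="Binary") for c in candidates}
y = pulp.LpVariable("anaphoric", cat="Binary")

# Objective: reward confident links and the anaphoricity decision.
prob += pulp.lpSum(link_score[c] * x[c] for c in candidates) + anaph_score * y

# Consistency: a link may be chosen only if the pronoun is anaphoric,
# and an anaphoric pronoun must receive at least one antecedent.
for c in candidates:
    prob += x[c] <= y
prob += pulp.lpSum(x.values()) >= y
prob += pulp.lpSum(x.values()) <= 1   # at most one antecedent

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({c: int(x[c].value()) for c in candidates}, "anaphoric:", int(y.value()))
```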

  • zero Anaphora resolution by learning rich syntactic pattern features
    ACM Transactions on Asian Language Information Processing, 2007
    Co-Authors: Ryu Iida, Kentaro Inui, Yuji Matsumoto

    Abstract:

    We approach the zero-Anaphora resolution problem by decomposing it into intra-sentential and inter-sentential zero-Anaphora resolution tasks. For the former task, syntactic patterns of zero-pronouns and their antecedents are useful clues. Taking Japanese as a target language, we empirically demonstrate that incorporating rich syntactic pattern features in a state-of-the-art learning-based Anaphora resolution model dramatically improves the accuracy of intra-sentential zero-Anaphora resolution, which consequently improves the overall performance of zero-Anaphora resolution.
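
A hypothetical illustration of turning the syntactic configuration between a zero pronoun's predicate and a candidate antecedent into a pattern feature, roughly in the spirit of the "rich syntactic pattern features" described above. The tiny dependency tree, its labels, and the path encoding are invented for the example, not taken from the paper.

```python
def path_feature(heads, labels, src, dst):
    """Return the label path from src up to a common ancestor and down to dst."""
    def to_root(node):
        chain = [node]
        while heads[node] is not None:
            node = heads[node]
            chain.append(node)
        return chain

    up, down = to_root(src), to_root(dst)
    common = next(n for n in up if n in set(down))
    up_part = [labels[n] for n in up[:up.index(common)]]
    down_part = [labels[n] for n in down[:down.index(common)]]
    return "/".join(up_part) + ">" + "<".join(reversed(down_part))

# Toy tree: node 0 is the root predicate, 1 its subject, 2 an embedded predicate.
heads = {0: None, 1: 0, 2: 0}
labels = {0: "pred", 1: "subj", 2: "pred"}
print(path_feature(heads, labels, 2, 1))   # -> "pred>subj"
```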

Graeme Hirst – 2nd expert on this subject based on the ideXlab platform

  • resolving this-issue Anaphora
    Empirical Methods in Natural Language Processing, 2012
    Co-Authors: Varada Kolhatkar, Graeme Hirst

    Abstract:

    We annotate and resolve a particular case of abstract Anaphora, namely, this-issue Anaphora. We propose a candidate ranking model for this-issue Anaphora resolution that explores different issue-specific and general abstract-Anaphora features. The model is not restricted to nominal or verbal antecedents; rather, it is able to identify antecedents that are arbitrary spans of text. Our results show that (a) the model outperforms the strong adjacent-sentence baseline; (b) general abstract-Anaphora features, as distinguished from issue-specific features, play a crucial role in this-issue Anaphora resolution, suggesting that our approach can be generalized for other NPs such as this problem and this debate; and (c) it is possible to reduce the search space in order to improve performance.
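
A bare-bones, hypothetical sketch of candidate ranking for this-issue Anaphora as described above: candidate antecedents are arbitrary spans of preceding text, each is scored by a combination of features (hand-weighted here, learned in the paper), and the top-scoring span is chosen. The features, weights, and sentences below are invented for illustration.

```python
def features(span, distance):
    return {
        "is_adjacent_sentence": 1.0 if distance == 1 else 0.0,
        "length": len(span.split()),
        "has_issue_cue": 1.0 if any(w in span.lower()
                                    for w in ("problem", "whether", "debate")) else 0.0,
    }

WEIGHTS = {"is_adjacent_sentence": 1.0, "length": -0.05, "has_issue_cue": 2.0}

def rank(candidates):
    """candidates: list of (span_text, sentence_distance_to_anaphor)."""
    def score(item):
        span, dist = item
        return sum(WEIGHTS[k] * v for k, v in features(span, dist).items())
    return sorted(candidates, key=score, reverse=True)

candidates = [
    ("The committee could not agree on whether to extend funding.", 1),
    ("The meeting started late on Monday.", 2),
]
best, _ = rank(candidates)[0]
print("Predicted antecedent:", best)
```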

Malvina Nissim – 3rd expert on this subject based on the ideXlab platform

  • comparing knowledge sources for nominal Anaphora resolution
    Computational Linguistics, 2005
    Co-Authors: Katja Markert, Malvina Nissim

    Abstract:

    We compare two ways of obtaining lexical knowledge for antecedent selection in other-Anaphora and definite noun phrase coreference. Specifically, we compare an algorithm that relies on links encoded in the manually created lexical hierarchy WordNet and an algorithm that mines corpora by means of shallow lexico-semantic patterns. As corpora we use the British National Corpus (BNC), as well as the Web, which has not been previously used for this task. Our results show that (a) the knowledge encoded in WordNet is often insufficient, especially for anaphor–antecedent relations that exploit subjective or context-dependent knowledge; (b) for other-Anaphora, the Web-based method outperforms the WordNet-based method; (c) for definite NP coreference, the Web-based method yields results comparable to those obtained using WordNet over the whole data set and outperforms the WordNet-based method on subsets of the data set; (d) in both case studies, the BNC-based method is worse than the other methods because of data sparseness. Thus, in our studies, the Web-based method alleviated the lexical knowledge gap often encountered in Anaphora resolution and handled examples with context-dependent relations between anaphor and antecedent. Because it is inexpensive and needs no hand-modeling of lexical knowledge, it is a promising knowledge source to integrate into Anaphora resolution systems.
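
A small, hypothetical illustration of the shallow lexico-semantic patterns discussed above: to test whether a candidate such as "spaniel" can serve as the antecedent of "other dogs", count corpus hits for a pattern like "<candidate> and other <anchor>". The paper queried the BNC and the Web; here a toy in-memory corpus stands in, and the pattern is deliberately simplified.

```python
import re

TOY_CORPUS = (
    "She owns a spaniel and other dogs. "
    "He bought a table and other furniture for the flat. "
    "A spaniel and other dogs were barking outside."
)

def pattern_count(candidate, anchor, corpus):
    """Count occurrences of '<candidate> and other <anchor>' in the corpus."""
    pattern = rf"\b{re.escape(candidate)} and other {re.escape(anchor)}\b"
    return len(re.findall(pattern, corpus, flags=re.IGNORECASE))

for cand in ("spaniel", "table"):
    print(cand, "->", pattern_count(cand, "dogs", TOY_CORPUS))
# spaniel -> 2, table -> 0: the pattern evidence prefers "spaniel".
```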

  • using the web in machine learning for other-Anaphora resolution
    Empirical Methods in Natural Language Processing, 2003
    Co-Authors: Natalia N Modjeska, Katja Markert, Malvina Nissim

    Abstract:

    We present a machine learning framework for resolving other-Anaphora. Besides morpho-syntactic, recency, and semantic features based on existing lexical knowledge resources, our algorithm obtains additional semantic knowledge from the Web. We search the Web via lexico-syntactic patterns that are specific to other-anaphors. Incorporating this innovative feature leads to an 11.4 percentage point improvement in the classifier’s F-measure (25% improvement relative to results without this feature).
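
A hypothetical sketch of how such a Web-derived pattern count could be added as one feature among the morpho-syntactic, recency, and lexical-resource features mentioned above, feeding a standard classifier that decides whether a candidate is the antecedent of an other-anaphor. The feature set, values, and training rows are invented; the real system's features are considerably richer (requires `pip install scikit-learn`).

```python
from sklearn.linear_model import LogisticRegression

def feature_vector(candidate):
    return [
        candidate["same_number"],        # morpho-syntactic agreement (0/1)
        candidate["sentence_distance"],  # recency
        candidate["wordnet_related"],    # existing lexical resource (0/1)
        candidate["web_pattern_count"],  # the additional Web-based feature
    ]

# Tiny invented training set: label 1 = candidate is the antecedent.
train = [
    ({"same_number": 1, "sentence_distance": 1, "wordnet_related": 0, "web_pattern_count": 12}, 1),
    ({"same_number": 1, "sentence_distance": 3, "wordnet_related": 1, "web_pattern_count": 0}, 0),
    ({"same_number": 0, "sentence_distance": 2, "wordnet_related": 0, "web_pattern_count": 1}, 0),
    ({"same_number": 1, "sentence_distance": 1, "wordnet_related": 1, "web_pattern_count": 20}, 1),
]

X = [feature_vector(c) for c, _ in train]
y = [label for _, label in train]
clf = LogisticRegression().fit(X, y)

test = {"same_number": 1, "sentence_distance": 1, "wordnet_related": 0, "web_pattern_count": 9}
print("antecedent?", bool(clf.predict([feature_vector(test)])[0]))
```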