Anaphora

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 8379 Experts worldwide ranked by ideXlab platform

Ryu Iida - One of the best experts on this subject based on the ideXlab platform.

  • Intra-sentential zero anaphora resolution using subject sharing recognition
    Empirical Methods in Natural Language Processing, 2015
    Co-Authors: Ryu Iida, Kentaro Torisawa, Chikara Hashimoto, Jonghoon Oh, Julien Kloetzer
    Abstract:

    In this work, we improve the performance of intra-sentential zero anaphora resolution in Japanese using a novel method of recognizing subject sharing relations. In Japanese, a large portion of intra-sentential zero anaphora can be regarded as subject sharing relations between predicates; that is, the subject of some predicate is also the unrealized subject of other predicates. We develop an accurate recognizer of subject sharing relations for pairs of predicates in a single sentence, and then construct a subject shared predicate network, which is a set of predicates that are linked by the subject sharing relations recognized by our recognizer. We finally combine our zero anaphora resolution method, which exploits the subject shared predicate network, with a state-of-the-art ILP-based zero anaphora resolution method. Our combined method achieved a significant improvement over the ILP-based method alone on intra-sentential zero anaphora resolution in Japanese. To the best of our knowledge, this is the first work to explicitly use an independent subject sharing recognizer in zero anaphora resolution.
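The core idea, linking predicates by recognized subject-sharing relations and then propagating each overt subject through the resulting network to the unrealized (zero) subjects, can be sketched in a few lines. This is an illustrative toy, not the authors' system: the trained subject-sharing recognizer is replaced by a precomputed list of predicate-index pairs.

```python
def resolve_zero_subjects(predicates, shared_pairs):
    """Propagate overt subjects through a subject shared predicate network.

    predicates: list of {"form": str, "subject": str or None} (None = zero).
    shared_pairs: (i, j) index pairs judged to share a subject -- assumed
    given here, standing in for the paper's trained recognizer.
    """
    parent = list(range(len(predicates)))

    def find(i):                        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in shared_pairs:           # link predicates into components
        parent[find(i)] = find(j)

    overt = {}                          # one overt subject per component
    for i, p in enumerate(predicates):
        if p["subject"] is not None:
            overt[find(i)] = p["subject"]

    return [p["subject"] if p["subject"] is not None else overt.get(find(i))
            for i, p in enumerate(predicates)]
```

For example, in a sentence like "Ken bought a book and (phi) read it", once the pair (bought, read) is linked, the zero subject of "read" inherits "Ken".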

  • A cross-lingual ILP solution to zero anaphora resolution
    Meeting of the Association for Computational Linguistics, 2011
    Co-Authors: Ryu Iida, Massimo Poesio
    Abstract:

    We present an ILP-based model of zero anaphora detection and resolution that builds on the joint determination of anaphoricity and coreference model proposed by Denis and Baldridge (2007), but revises and extends it into a three-way ILP problem that also incorporates subject detection. We show that this new model outperforms several baselines and competing models, as well as a direct translation of the Denis/Baldridge model, for both Italian and Japanese zero anaphora. We incorporate our model into complete anaphoric resolvers for both Italian and Japanese, showing that our approach also improves performance when it is not used in isolation, provided that separate classifiers are used for zeros and for explicitly realized anaphors.
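The joint determination can be illustrated with a simplified stand-in for the ILP: each zero pronoun either stays non-anaphoric (score 0) or takes its anaphoricity score plus the best antecedent-link score, so the consistency constraint (a link exists iff the zero is anaphoric) holds by construction. The scoring functions are hypothetical placeholders, and the subject-detection component is omitted.

```python
def joint_resolve(zeros, candidates, ana_score, link_score):
    """Jointly decide anaphoricity and antecedent for each zero pronoun.

    ana_score(z): score for z being anaphoric (hypothetical classifier).
    link_score(z, c): score for c being z's antecedent (hypothetical).
    Returns {zero: antecedent or None}; None means non-anaphoric.
    A real system would solve this with an ILP solver; here the tiny
    search space is enumerated directly.
    """
    result = {}
    for z in zeros:
        options = [(0.0, None)]  # remain non-anaphoric
        options += [(ana_score(z) + link_score(z, c), c) for c in candidates]
        best_score, best_ant = max(options, key=lambda o: o[0])
        result[z] = best_ant
    return result
```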

  • Zero anaphora resolution by learning rich syntactic pattern features
    ACM Transactions on Asian Language Information Processing, 2007
    Co-Authors: Ryu Iida, Kentaro Inui, Yuji Matsumoto
    Abstract:

    We approach the zero-anaphora resolution problem by decomposing it into intra-sentential and inter-sentential zero-anaphora resolution tasks. For the former task, syntactic patterns of zero-pronouns and their antecedents are useful clues. Taking Japanese as a target language, we empirically demonstrate that incorporating rich syntactic pattern features in a state-of-the-art learning-based anaphora resolution model dramatically improves the accuracy of intra-sentential zero-anaphora resolution, which consequently improves the overall performance of zero-anaphora resolution.
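One common way to realize syntactic pattern features of this kind is to encode the path between the zero-pronoun's predicate and a candidate antecedent in the parse tree as a feature string. The sketch below does this over a toy head-dependency map; the paper's actual features are richer tree patterns, so this is only a simplified illustration.

```python
def path_to_root(heads, node):
    """heads: {word: head word or None}; returns the chain up to the root."""
    path = [node]
    while heads.get(node) is not None:
        node = heads[node]
        path.append(node)
    return path

def syntactic_path_feature(heads, anaphor_pred, antecedent):
    """Feature string: the antecedent's ascent to the lowest common
    ancestor ('^'), then the descent to the anaphor's predicate ('v')."""
    up = path_to_root(heads, antecedent)
    down = path_to_root(heads, anaphor_pred)
    common = next(n for n in up if n in down)
    ascent = up[:up.index(common) + 1]
    descent = list(reversed(down[:down.index(common)]))
    return "^".join(ascent) + ("v" + "v".join(descent) if descent else "")
```

Such strings can then be fed to a learner as sparse categorical features, one per (anaphor, candidate) pair.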

  • Exploiting syntactic patterns as clues in zero anaphora resolution
    Meeting of the Association for Computational Linguistics, 2006
    Co-Authors: Ryu Iida, Kentaro Inui, Yuji Matsumoto
    Abstract:

    We approach the zero-anaphora resolution problem by decomposing it into intra-sentential and inter-sentential zero-anaphora resolution. For the former problem, syntactic patterns of the appearance of zero-pronouns and their antecedents are useful clues. Taking Japanese as a target language, we empirically demonstrate that incorporating rich syntactic pattern features in a state-of-the-art learning-based anaphora resolution model dramatically improves the accuracy of intra-sentential zero-anaphora resolution, which consequently improves the overall performance of zero-anaphora resolution.

Graeme Hirst - One of the best experts on this subject based on the ideXlab platform.

  • Resolving this-issue anaphora
    Empirical Methods in Natural Language Processing, 2012
    Co-Authors: Varada Kolhatkar, Graeme Hirst
    Abstract:

    We annotate and resolve a particular case of abstract anaphora, namely, this-issue anaphora. We propose a candidate ranking model for this-issue anaphora resolution that explores different issue-specific and general abstract-anaphora features. The model is not restricted to nominal or verbal antecedents; rather, it is able to identify antecedents that are arbitrary spans of text. Our results show that (a) the model outperforms the strong adjacent-sentence baseline; (b) general abstract-anaphora features, as distinguished from issue-specific features, play a crucial role in this-issue anaphora resolution, suggesting that our approach can be generalized for other NPs such as this problem and this debate; and (c) it is possible to reduce the search space in order to improve performance.
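A candidate ranker of this kind can be sketched as scoring arbitrary candidate spans and returning the best one. The features and weights below (sentence distance to the anaphor, presence of an issue-like cue word) are hypothetical stand-ins for the paper's learned issue-specific and general abstract-anaphora features.

```python
ISSUE_CUES = {"whether", "should", "problem", "dispute", "fails"}

def rank_candidates(anaphor_sent, candidates):
    """Return the best-scoring candidate span for a 'this issue' anaphor.

    candidates: list of {"sent": int, "text": str} spans preceding the
    anaphor. Toy scoring: closer spans score higher, and spans containing
    an issue-like cue word get a bonus. Weights are illustrative only.
    """
    def score(c):
        distance = anaphor_sent - c["sent"]
        cue_bonus = 1.0 if ISSUE_CUES & set(c["text"].lower().split()) else 0.0
        return -0.5 * distance + cue_bonus
    return max(candidates, key=score)
```

Because candidates are plain spans, nothing here assumes a nominal or verbal antecedent, mirroring the model's flexibility about antecedent type.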

Malvina Nissim - One of the best experts on this subject based on the ideXlab platform.

  • Comparing knowledge sources for nominal anaphora resolution
    Computational Linguistics, 2005
    Co-Authors: Katja Markert, Malvina Nissim
    Abstract:

    We compare two ways of obtaining lexical knowledge for antecedent selection in other-anaphora and definite noun phrase coreference. Specifically, we compare an algorithm that relies on links encoded in the manually created lexical hierarchy WordNet and an algorithm that mines corpora by means of shallow lexico-semantic patterns. As corpora we use the British National Corpus (BNC), as well as the Web, which has not been previously used for this task. Our results show that (a) the knowledge encoded in WordNet is often insufficient, especially for anaphor–antecedent relations that exploit subjective or context-dependent knowledge; (b) for other-anaphora, the Web-based method outperforms the WordNet-based method; (c) for definite NP coreference, the Web-based method yields results comparable to those obtained using WordNet over the whole data set and outperforms the WordNet-based method on subsets of the data set; (d) in both case studies, the BNC-based method is worse than the other methods because of data sparseness. Thus, in our studies, the Web-based method alleviated the lexical knowledge gap often encountered in anaphora resolution and handled examples with context-dependent relations between anaphor and antecedent. Because it is inexpensive and needs no hand-modeling of lexical knowledge, it is a promising knowledge source to integrate into anaphora resolution systems.
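The shallow lexico-semantic patterns in question are of the "X and other Ys" variety, which signals that X is a kind of Y. The sketch below runs such a pattern over a small in-memory corpus, standing in for BNC searches or Web hit counts; the pattern template is illustrative rather than the papers' full inventory.

```python
import re

def hyponym_pattern_counts(corpus, anaphor_head, candidates):
    """Count 'CANDIDATE(s) and other HEAD(s)' matches per candidate.

    corpus: iterable of document strings (stand-in for BNC/Web queries).
    A higher count is weak evidence that the candidate is a plausible
    antecedent for an anaphor headed by anaphor_head.
    """
    counts = {}
    for cand in candidates:
        pattern = re.compile(
            rf"\b{re.escape(cand)}s? and other {re.escape(anaphor_head)}s?\b",
            re.IGNORECASE)
        counts[cand] = sum(len(pattern.findall(doc)) for doc in corpus)
    return counts
```

In a Web-based setting the same query strings would be sent to a search engine and the hit counts used as the feature values.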

  • Using the Web in machine learning for other-anaphora resolution
    Empirical Methods in Natural Language Processing, 2003
    Co-Authors: Natalia N Modjeska, Katja Markert, Malvina Nissim
    Abstract:

    We present a machine learning framework for resolving other-anaphora. Besides morpho-syntactic, recency, and semantic features based on existing lexical knowledge resources, our algorithm obtains additional semantic knowledge from the Web. We search the Web via lexico-syntactic patterns that are specific to other-anaphors. Incorporating this innovative feature leads to an 11.4 percentage point improvement in the classifier's F-measure (25% improvement relative to results without this feature).
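As a quick consistency check on the reported numbers (our arithmetic, not the paper's): an 11.4-percentage-point gain that amounts to a 25% relative improvement implies a baseline F-measure of about 45.6, i.e. roughly 57.0 with the Web feature.

```python
gain_pp = 11.4                     # absolute F-measure gain, percentage points
relative = 0.25                    # the same gain expressed relative to baseline
baseline_f = gain_pp / relative    # implied baseline F-measure
f_with_web = baseline_f + gain_pp  # implied F-measure with the Web feature
```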

Amit Almor - One of the best experts on this subject based on the ideXlab platform.

  • The Repeated Name Penalty, the Overt Pronoun Penalty, and Topic in Japanese
    Journal of Psycholinguistic Research, 2017
    Co-Authors: Shinichi Shoji, Stanley Dubinsky, Amit Almor
    Abstract:

    When reading sentences with an anaphoric reference to a subject antecedent, repeated-name anaphors result in slower reading times relative to pronouns (the Repeated Name Penalty: RNP), and overt pronouns are read slower than null pronouns (the Overt Pronoun Penalty: OPP). Because in most languages previously tested the grammatical subject is typically also the discourse topic, it remains unclear whether these effects reflect anaphors' subject-hood or their topic-hood. To address this question, we conducted a self-paced reading experiment in Japanese, a language which morphologically marks both subjects and topics overtly. Our results show that both repeated-name topic-subject anaphors and repeated-name non-topic-subject anaphors exhibit the RNP, and that both overt-pronoun topic-subject and overt-pronoun non-topic-subject anaphors show the OPP. However, a detailed examination of performance revealed an interaction between the anaphor topic marking, reference form, and the antecedent's grammatical status, indicating that the effect of the antecedent's grammatical status is strongest for null pronoun and repeated name subject anaphors, and that the overt form most similar to null pronouns is the repeated name topic anaphor. We discuss the implications of these findings for theories of anaphor processing.

  • Noun-phrase anaphora and focus: the informational load hypothesis
    Psychological Review, 1999
    Co-Authors: Amit Almor
    Abstract:

    The processing of noun-phrase (NP) anaphors in discourse is argued to reflect constraints on the activation and processing of semantic information in working memory. The proposed theory views NP anaphor processing as an optimization process that is based on the principle that processing cost, defined in terms of activating semantic information, should serve some discourse function: identifying the antecedent, adding new information, or both. In a series of 5 self-paced reading experiments, anaphors' functionality was manipulated by changing the discourse focus, and their cost was manipulated by changing the semantic relation between the anaphors and their antecedents. The results show that reading times of NP anaphors reflect their functional justification: Anaphors were read faster when their cost had a better functional justification. These results are incompatible with any theory that treats NP anaphors as one homogeneous class regardless of discourse function and processing cost.

Varada Kolhatkar - One of the best experts on this subject based on the ideXlab platform.

  • Resolving this-issue anaphora
    Empirical Methods in Natural Language Processing, 2012
    Co-Authors: Varada Kolhatkar, Graeme Hirst
    Abstract:

    We annotate and resolve a particular case of abstract anaphora, namely, this-issue anaphora. We propose a candidate ranking model for this-issue anaphora resolution that explores different issue-specific and general abstract-anaphora features. The model is not restricted to nominal or verbal antecedents; rather, it is able to identify antecedents that are arbitrary spans of text. Our results show that (a) the model outperforms the strong adjacent-sentence baseline; (b) general abstract-anaphora features, as distinguished from issue-specific features, play a crucial role in this-issue anaphora resolution, suggesting that our approach can be generalized for other NPs such as this problem and this debate; and (c) it is possible to reduce the search space in order to improve performance.
