Natural Language Generation

The experts below are selected from a list of 20,499 experts worldwide, ranked by the ideXlab platform.

Robert Malouf - One of the best experts on this subject based on the ideXlab platform.

Verena Rieser - One of the best experts on this subject based on the ideXlab platform.

  • Semantic Noise Matters for Neural Natural Language Generation
    arXiv: Computation and Language, 2019
    Co-Authors: Ondřej Dušek, David M. Howcroft, Verena Rieser
    Abstract:

    Neural Natural Language Generation (NNLG) systems are known for their pathological outputs, i.e. generating text which is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models which implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97%, while maintaining fluency. We also find that the most common error is omitting information, rather than hallucination.
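
    A minimal sketch of the kind of slot-level check that separates these two error types, omission versus hallucination, against a meaning representation (MR). The slot names, value vocabulary, and substring-matching rule below are illustrative assumptions, not the paper's evaluation code:

    # Minimal sketch of a slot-level semantic error check for data-to-text NLG.
    # The matching rule and the closed vocabulary of values are assumptions.

    def semantic_errors(mr: dict[str, str], output: str) -> dict[str, list[str]]:
        """Count omitted and hallucinated slot values against an MR.

        mr:     slot -> value pairs the output should realise (E2E-style MR)
        output: the generated utterance
        """
        text = output.lower()
        # A slot is omitted if its value never surfaces in the text.
        omitted = [slot for slot, value in mr.items() if value.lower() not in text]
        # A value is hallucinated if it surfaces in the text but was not requested.
        # Here we check against a small closed set of possible slot values.
        known_values = {"riverside", "city centre", "cheap", "expensive", "italian"}
        requested = {v.lower() for v in mr.values()}
        hallucinated = [v for v in known_values - requested if v in text]
        return {"omitted": omitted, "hallucinated": hallucinated}

    if __name__ == "__main__":
        mr = {"name": "The Punter", "area": "riverside", "priceRange": "cheap"}
        out = "The Punter is a cheap Italian restaurant."
        print(semantic_errors(mr, out))
        # -> {'omitted': ['riverside'], 'hallucinated': ['italian']}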

  • Referenceless Quality Estimation for Natural Language Generation
    Proceedings of the ICML’17 LGML Workshop, 2017
    Co-Authors: Ondřej Dušek, Jekaterina Novikova, Verena Rieser
    Abstract:

    Traditional automatic evaluation measures for Natural Language Generation (NLG) use costly human-authored references to estimate the quality of a system output. In this paper, we propose a referenceless quality estimation (QE) approach based on recurrent neural networks, which predicts a quality score for an NLG system output by comparing it to the source meaning representation only. Our method outperforms traditional metrics and a constant baseline in most respects; we also show that synthetic data helps to increase correlation results by 21% compared to the base system. Our results are comparable to results obtained in similar QE tasks despite the more challenging setting.
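
    A minimal PyTorch sketch of the referenceless QE idea: encode the source meaning representation and the system output with recurrent networks, then regress a quality score from the pair. The layer sizes, tokenisation, and training details are assumptions, not the authors' implementation:

    import torch
    import torch.nn as nn

    class ReferencelessQE(nn.Module):
        def __init__(self, vocab_size: int, emb_dim: int = 64, hidden: int = 128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.mr_enc = nn.GRU(emb_dim, hidden, batch_first=True)
            self.out_enc = nn.GRU(emb_dim, hidden, batch_first=True)
            # Quality score regressed from both final encoder states.
            self.scorer = nn.Linear(2 * hidden, 1)

        def forward(self, mr_ids: torch.Tensor, out_ids: torch.Tensor) -> torch.Tensor:
            _, h_mr = self.mr_enc(self.embed(mr_ids))     # final MR hidden state
            _, h_out = self.out_enc(self.embed(out_ids))  # final output hidden state
            joint = torch.cat([h_mr[-1], h_out[-1]], dim=-1)
            return self.scorer(joint).squeeze(-1)          # predicted quality score

    # Training would regress predictions against human quality ratings, e.g.:
    model = ReferencelessQE(vocab_size=5000)
    mr_ids = torch.randint(0, 5000, (8, 12))   # batch of tokenised MRs
    out_ids = torch.randint(0, 5000, (8, 20))  # batch of tokenised outputs
    loss = nn.MSELoss()(model(mr_ids, out_ids), torch.rand(8))
    loss.backward()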

  • Adaptive Natural Language Generation
    Reinforcement Learning for Adaptive Dialogue Systems, 2011
    Co-Authors: Verena Rieser, Oliver Lemon
    Abstract:

    This chapter shows how the framework developed throughout the book can be applied to a related set of problems in Natural Language Generation (NLG) for interactive systems. It thereby provides some evidence for the generality of our approach, as well as drawing out some new insights regarding its application.

  • Empirical Methods in Natural Language Generation - Natural Language Generation as Planning Under Uncertainty for Spoken Dialogue Systems
    2009
    Co-Authors: Verena Rieser, Oliver Lemon
    Abstract:

    We present and evaluate a new model for Natural Language Generation (NLG) in Spoken Dialogue Systems, based on statistical planning, given noisy feedback from the current generation context (e.g. a user and a surface realiser). We study its use in a standard NLG problem: how to present information (in this case a set of search results) to users, given the complex trade-offs between utterance length, amount of information conveyed, and cognitive load. We set these trade-offs by analysing existing MATCH data. We then train an NLG policy using Reinforcement Learning (RL), which adapts its behaviour to noisy feedback from the current generation context. This policy is compared to several baselines derived from previous work in this area. The learned policy significantly outperforms all the prior approaches.
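
    A toy Q-learning sketch of the planning idea: pick an information-presentation strategy given a noisy generation context. The states, actions, and reward table are illustrative assumptions loosely modelled on the setting described above, not the paper's learned policy (which plans over longer horizons):

    import random

    ACTIONS = ["summarise", "compare", "recommend"]
    # State: bucketed number of search results to present.
    STATES = ["few", "some", "many"]

    def reward(state: str, action: str) -> float:
        """Assumed trade-off: presenting many results at length costs cognitive
        load, while recommending from few results conveys enough information."""
        table = {
            ("few", "recommend"): 1.0, ("few", "compare"): 0.5, ("few", "summarise"): 0.2,
            ("some", "compare"): 1.0, ("some", "recommend"): 0.5, ("some", "summarise"): 0.4,
            ("many", "summarise"): 1.0, ("many", "compare"): 0.3, ("many", "recommend"): 0.4,
        }
        # Noisy feedback from the generation context (user, surface realiser).
        return table[(state, action)] + random.gauss(0, 0.1)

    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    alpha, epsilon = 0.1, 0.2
    for _ in range(5000):
        s = random.choice(STATES)
        # Epsilon-greedy action selection over the current Q estimates.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        # One-step (bandit-style) update toward the noisy reward.
        Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

    for s in STATES:
        print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))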

Marie Meteer - One of the best experts on this subject based on the ideXlab platform.

Emiel Krahmer - One of the best experts on this subject based on the ideXlab platform.

Ondřej Dušek - One of the best experts on this subject based on the ideXlab platform.

  • Semantic Noise Matters for Neural Natural Language Generation
    arXiv: Computation and Language, 2019
    Co-Authors: Ondřej Dušek, David M. Howcroft, Verena Rieser
    Abstract:

    Neural Natural Language Generation (NNLG) systems are known for their pathological outputs, i.e. generating text which is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models which implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97%, while maintaining fluency. We also find that the most common error is omitting information, rather than hallucination.

  • Referenceless Quality Estimation for Natural Language Generation
    Proceedings of the ICML’17 LGML Workshop, 2017
    Co-Authors: Ondřej Dušek, Jekaterina Novikova, Verena Rieser
    Abstract:

    Traditional automatic evaluation measures for Natural Language Generation (NLG) use costly human-authored references to estimate the quality of a system output. In this paper, we propose a referenceless quality estimation (QE) approach based on recurrent neural networks, which predicts a quality score for an NLG system output by comparing it to the source meaning representation only. Our method outperforms traditional metrics and a constant baseline in most respects; we also show that synthetic data helps to increase correlation results by 21% compared to the base system. Our results are comparable to results obtained in similar QE tasks despite the more challenging setting.