The Experts below are selected from a list of 20,499 Experts worldwide, ranked by the ideXlab platform.
Robert Malouf - One of the best experts on this subject based on the ideXlab platform.
-
The order of prenominal adjectives in Natural Language Generation
Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL '00), 2000
Co-Authors: Robert Malouf
Abstract: The order of prenominal adjectival modifiers in English is governed by complex, difficult-to-describe constraints that straddle the boundary between competence and performance. This paper describes and compares a number of statistical and machine learning techniques for ordering sequences of adjectives in the context of a Natural Language Generation system.
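As a toy illustration of the kind of statistical technique the abstract describes, one can learn pairwise precedence counts for adjectives from observed sequences and then pick the best-supported order for a new adjective set. This sketch is not Malouf's method; the corpus, counts, and function names are invented for illustration.

```python
# Sketch of one statistical approach to prenominal adjective ordering:
# learn pairwise precedence counts from observed adjective sequences,
# then choose the permutation those counts best support. The toy corpus
# here is illustrative, not data from the paper.
from collections import defaultdict
from itertools import combinations, permutations

# Toy training data: adjective sequences as they appeared before nouns.
corpus = [
    ["big", "old", "red"],
    ["small", "old", "wooden"],
    ["big", "red"],
    ["lovely", "old", "red"],
]

# before[a][b] = how often a preceded b in a training sequence.
before = defaultdict(lambda: defaultdict(int))
for seq in corpus:
    for a, b in combinations(seq, 2):  # combinations preserves order
        before[a][b] += 1

def order_score(seq):
    """Total pairwise count evidence supporting this left-to-right order."""
    return sum(before[a][b] for a, b in combinations(seq, 2))

def best_order(adjectives):
    """Pick the permutation best supported by the pairwise counts."""
    return max(permutations(adjectives), key=order_score)

print(best_order(["red", "old", "big"]))  # -> ('big', 'old', 'red')
```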
Verena Rieser - One of the best experts on this subject based on the ideXlab platform.
-
Semantic Noise Matters for Neural Natural Language Generation
arXiv: Computation and Language, 2019
Co-Authors: Ondřej Dušek, David M. Howcroft, Verena Rieser
Abstract: Neural Natural Language Generation (NNLG) systems are known for their pathological outputs, i.e., generating text that is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models that implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97% while maintaining fluency. We also find that the most common error is omitting information rather than hallucination.
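The paper's distinction between omitted and hallucinated information can be illustrated with a minimal slot-matching check against an E2E-style meaning representation (MR). The MR format and the plain string matching below are simplified assumptions, not the authors' evaluation code.

```python
# Minimal sketch of counting omissions and hallucinations in NNLG output
# against an E2E-style meaning representation (MR). Real evaluation uses
# more careful matching; the MR format and value set here are
# simplified, hypothetical assumptions.

def parse_mr(mr: str) -> dict:
    """Parse 'name[Aromi], food[Italian]' into {'name': 'Aromi', ...}."""
    slots = {}
    for part in mr.split("],"):
        key, _, value = part.strip().rstrip("]").partition("[")
        slots[key.strip()] = value.strip()
    return slots

def semantic_errors(mr: str, text: str, known_values: set) -> dict:
    """Count slot values the text omits, and known values it adds."""
    slots = parse_mr(mr)
    lowered = text.lower()
    omitted = [v for v in slots.values() if v.lower() not in lowered]
    hallucinated = [v for v in known_values - set(slots.values())
                    if v.lower() in lowered]
    return {"omitted": omitted, "hallucinated": hallucinated}

mr = "name[Aromi], food[Italian], area[city centre]"
out = "Aromi serves French food."
print(semantic_errors(mr, out, {"Aromi", "Italian", "French", "city centre"}))
# -> {'omitted': ['Italian', 'city centre'], 'hallucinated': ['French']}
```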
-
Referenceless Quality Estimation for Natural Language Generation
Proceedings of the ICML’17 LGML Workshop, 2017
Co-Authors: Ondřej Dušek, Jekaterina Novikova, Verena Rieser
Abstract: Traditional automatic evaluation measures for Natural Language Generation (NLG) use costly human-authored references to estimate the quality of a system output. In this paper, we propose a referenceless quality estimation (QE) approach based on recurrent neural networks, which predicts a quality score for an NLG system output by comparing it to the source meaning representation only. Our method outperforms traditional metrics and a constant baseline in most respects; we also show that synthetic data helps to increase correlation results by 21% compared to the base system. Our results are comparable to results obtained in similar QE tasks despite the more challenging setting.
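A minimal sketch of the core idea, encoding the source meaning representation and the system output with recurrent networks and regressing a quality score from the two encodings, might look as follows in PyTorch. Layer sizes, vocabulary handling, and the exact architecture are placeholder assumptions, not the paper's model.

```python
# Sketch of a referenceless QE model: two GRU encoders (one for the MR,
# one for the system output) feed a linear regressor that emits one
# quality score per MR/output pair. Hyperparameters are placeholders.
import torch
import torch.nn as nn

class ReferencelessQE(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64, hid: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.mr_rnn = nn.GRU(emb_dim, hid, batch_first=True)
        self.out_rnn = nn.GRU(emb_dim, hid, batch_first=True)
        self.scorer = nn.Linear(2 * hid, 1)  # score from both encodings

    def forward(self, mr_ids, out_ids):
        _, h_mr = self.mr_rnn(self.embed(mr_ids))    # (1, batch, hid)
        _, h_out = self.out_rnn(self.embed(out_ids))
        joint = torch.cat([h_mr[-1], h_out[-1]], dim=-1)
        return self.scorer(joint).squeeze(-1)        # one score per pair

model = ReferencelessQE(vocab_size=1000)
mr = torch.randint(0, 1000, (2, 12))    # batch of 2 token-id sequences
out = torch.randint(0, 1000, (2, 20))
print(model(mr, out).shape)  # torch.Size([2])
```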
-
Adaptive Natural Language Generation
Reinforcement Learning for Adaptive Dialogue Systems, 2011
Co-Authors: Verena Rieser, Oliver Lemon
Abstract: This chapter shows how the framework developed throughout the book can be applied to a related set of problems in Natural Language Generation (NLG) for interactive systems. It therefore provides some evidence for the generality of our approach, as well as drawing out some new insights regarding its application.
-
Empirical Methods in Natural Language Generation - Natural Language Generation as Planning Under Uncertainty for Spoken Dialogue Systems
2009
Co-Authors: Verena Rieser, Oliver Lemon
Abstract: We present and evaluate a new model for Natural Language Generation (NLG) in Spoken Dialogue Systems, based on statistical planning given noisy feedback from the current generation context (e.g. a user and a surface realiser). We study its use in a standard NLG problem: how to present information (in this case, a set of search results) to users, given the complex trade-offs between utterance length, amount of information conveyed, and cognitive load. We set these trade-offs by analysing existing MATCH data. We then train an NLG policy using Reinforcement Learning (RL), which adapts its behaviour to noisy feedback from the current generation context. This policy is compared to several baselines derived from previous work in this area; the learned policy significantly outperforms all of the prior approaches.
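The information-presentation trade-off described here can be caricatured as a small reinforcement learning problem: an agent chooses how many search results to present and receives a noisy reward balancing information conveyed against utterance length. The single-state bandit setup, actions, and reward function below are invented for illustration and are far simpler than the authors' corpus-derived, context-sensitive policy learning.

```python
# Toy sketch of the statistical-planning view of NLG: a bandit-style
# Q-learner picks how many results to present, with a noisy reward that
# trades off information conveyed against length / cognitive load.
# All numbers here are hypothetical, not from Rieser & Lemon.
import random

actions = [1, 3, 5, 8]          # number of results to present
q = {a: 0.0 for a in actions}   # single-state Q-table for simplicity
alpha, epsilon = 0.1, 0.2

def reward(n_results: int) -> float:
    info = 1.0 - 0.5 ** n_results        # diminishing information gain
    cost = 0.08 * n_results              # length / cognitive-load penalty
    return info - cost + random.gauss(0, 0.05)  # noisy feedback

for _ in range(5000):
    a = random.choice(actions) if random.random() < epsilon \
        else max(q, key=q.get)           # epsilon-greedy action choice
    q[a] += alpha * (reward(a) - q[a])   # incremental value update

print(max(q, key=q.get), {a: round(v, 3) for a, v in q.items()})
# Learns to present ~3 results: more adds length faster than information.
```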
Marie Meteer - One of the best experts on this subject based on the ideXlab platform.
-
The Seventh International Workshop on Natural Language Generation
AI Magazine, 1995
Co-Authors: Koenraad De Smedt, Eduard Hovy, David D. McDonald, Marie Meteer
Abstract: The Seventh International Workshop on Natural Language Generation was held from 21 to 24 June 1994 in Kennebunkport, Maine. Sixty-seven people from 13 countries attended this four-day meeting on the study of Natural Language Generation in computational linguistics and AI. The goal of the workshop was to introduce new, cutting-edge work to the community and provide an atmosphere in which discussion and exchange would flourish.
-
Proceedings of the Seventh International Workshop on Natural Language Generation
1994
Co-Authors: David D. McDonald, Marie Meteer
Abstract: The Seventh International Workshop on Natural Language Generation was organized by a committee assembled at the end of the previous International Generation Workshop, held at Castel Ivano in Trento, Italy, in April 1992.
Emiel Krahmer - One of the best experts on this subject based on the ideXlab platform.
-
Survey of the state of the art in Natural Language Generation
Journal of Artificial Intelligence Research, 2018
Co-Authors: Albert Gatt, Emiel Krahmer
Abstract: This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view o...
Ondřej Dušek - One of the best experts on this subject based on the ideXlab platform.
-
Semantic Noise Matters for Neural Natural Language Generation
arXiv: Computation and Language, 2019
Co-Authors: Ondřej Dušek, David M. Howcroft, Verena Rieser
Abstract: Neural Natural Language Generation (NNLG) systems are known for their pathological outputs, i.e., generating text that is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models that implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97% while maintaining fluency. We also find that the most common error is omitting information rather than hallucination.
-
Referenceless Quality Estimation for Natural Language Generation
Proceedings of the ICML’17 LGML Workshop, 2017
Co-Authors: Ondřej Dušek, Jekaterina Novikova, Verena Rieser
Abstract: Traditional automatic evaluation measures for Natural Language Generation (NLG) use costly human-authored references to estimate the quality of a system output. In this paper, we propose a referenceless quality estimation (QE) approach based on recurrent neural networks, which predicts a quality score for an NLG system output by comparing it to the source meaning representation only. Our method outperforms traditional metrics and a constant baseline in most respects; we also show that synthetic data helps to increase correlation results by 21% compared to the base system. Our results are comparable to results obtained in similar QE tasks despite the more challenging setting.