Dialog System

The Experts below are selected from a list of 16,371 Experts worldwide, ranked by the ideXlab platform

Gary Geunbae Lee - One of the best experts on this subject based on the ideXlab platform.

  • Natural Language Dialog Systems and Intelligent Assistants - Micro-Counseling Dialog System Based on Semantic Content
    Natural Language Dialog Systems and Intelligent Assistants, 2015
    Co-Authors: Sangdo Han, Yonghee Kim, Gary Geunbae Lee
    Abstract:

    This paper introduces a text Dialog System that can provide counseling Dialog based on the semantic content of user utterances. We extract emotion-, problem-, and reason-oriented semantic contents from user utterances to generate micro-counseling System responses. Our counseling strategy follows micro-counseling techniques to build a working relationship with a client and to discover the client’s concerns and problems. Extracting semantic contents allows the System to generate appropriate counseling responses for various user utterances. Experiments show that our System works well as a virtual counselor.
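
    The paper does not reproduce its extraction models, so the following minimal Python sketch only illustrates the idea: hypothetical keyword patterns stand in for the emotion-, problem-, and reason-oriented extractors, and hand-written templates stand in for the micro-counseling response strategy.

      # Minimal sketch of semantic-content-driven counseling responses.
      # The patterns and templates below are hypothetical stand-ins for the
      # paper's trained extractors and counseling strategy.
      import re

      EMOTIONS = {"sad": "sad", "anxious": "anxious", "angry": "angry"}
      PROBLEM_PAT = re.compile(r"(?:problem with|trouble with|struggling with) (.+?)(?:\.|$)")
      REASON_PAT = re.compile(r"because (.+?)(?:\.|$)")

      def extract_contents(utterance: str) -> dict:
          """Extract emotion-, problem-, and reason-oriented contents."""
          text = utterance.lower()
          emotion = next((adj for kw, adj in EMOTIONS.items() if kw in text), None)
          problem = (m.group(1) if (m := PROBLEM_PAT.search(text)) else None)
          reason = (m.group(1) if (m := REASON_PAT.search(text)) else None)
          return {"emotion": emotion, "problem": problem, "reason": reason}

      def respond(c: dict) -> str:
          """Pick a reflective or probing micro-counseling move.
          (A real system would also swap pronouns in the echoed content.)"""
          if c["emotion"] and c["reason"]:
              return f"It sounds like you feel {c['emotion']} because {c['reason']}."
          if c["problem"]:
              return f"Can you tell me more about {c['problem']}?"
          if c["emotion"]:
              return f"I hear that you are feeling {c['emotion']}. What brought this on?"
          return "I see. Please go on."

      print(respond(extract_contents("I feel anxious because my exams are coming up.")))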

  • Natural Language Dialog Systems and Intelligent Assistants - DietTalk: Diet and Health Assistant Based on Spoken Dialog System
    Natural Language Dialog Systems and Intelligent Assistants, 2015
    Co-Authors: Sohyeon Jung, Seonghan Ryu, Sangdo Han, Gary Geunbae Lee
    Abstract:

    This paper presents DietTalk, a diet and health assistant based on a spoken Dialog System. DietTalk helps people control their weight by letting them consult it in natural language: it stores personal status, provides food and exercise information, and recommends appropriate food and exercise. To evaluate its effectiveness, we performed experiments with human users. DietTalk achieved good accuracy and satisfied its users, indicating that it is effective in helping users control their weight.
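
    The abstract does not detail the recommendation logic, so the sketch below assumes a simple daily calorie-balance rule over a small hypothetical food and exercise table; none of these numbers come from the paper.

      # Hypothetical sketch of a DietTalk-style recommendation step: a
      # calorie-balance rule over illustrative tables (the paper does not
      # specify its actual method).
      FOODS = {"salad": 150, "chicken breast": 280, "pizza slice": 400}  # kcal
      EXERCISES = {"walking": 240, "cycling": 480}         # kcal burned per hour

      def recommend(consumed_kcal: int, target_kcal: int = 2000) -> str:
          remaining = target_kcal - consumed_kcal
          if remaining > 0:   # under budget: suggest a food that fits
              food = min(FOODS, key=lambda f: abs(FOODS[f] - remaining))
              return f"You have {remaining} kcal left today; {food} would fit."
          deficit = -remaining   # over budget: suggest exercise to offset it
          ex = min(EXERCISES, key=lambda e: abs(EXERCISES[e] - deficit))
          return (f"You are {deficit} kcal over; about "
                  f"{deficit / EXERCISES[ex]:.1f} h of {ex} would offset it.")

      print(recommend(2300))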

  • Natural Language Dialog Systems and Intelligent Assistants - Detecting Multiple Domains from User’s Utterance in Spoken Dialog System
    Natural Language Dialog Systems and Intelligent Assistants, 2015
    Co-Authors: Seonghan Ryu, Jaiyoun Song, Sangjun Koo, Soonchoul Kwon, Gary Geunbae Lee
    Abstract:

    A multi-domain spoken Dialog System should be able to detect more than one domain in a user's utterance. However, it is difficult to train an accurate binary classifier for a domain from only positive and unlabeled examples. This paper improves a hierarchical clustering algorithm to automatically identify reliable negative examples among the unlabeled ones, and verifies three linkage criteria that measure the distance between two clusters. In our experiments, the proposed method achieved the highest gain in F1 score compared to existing methods.
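
    The abstract names the ingredients (positive and unlabeled examples, hierarchical clustering, linkage criteria) without giving the full algorithm; the Python sketch below shows one plausible reading, using scikit-learn's agglomerative clustering with average linkage over TF-IDF features. All example utterances are invented.

      # Sketch of reliable-negative mining for one domain (music), assuming
      # TF-IDF features; the paper compares several linkage criteria, of
      # which "average" is used here.
      import numpy as np
      from sklearn.cluster import AgglomerativeClustering
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      positives = ["play some jazz music", "turn the music volume up"]
      unlabeled = ["what is the weather tomorrow", "skip this song",
                   "set an alarm for 7 am", "pause the music"]

      vec = TfidfVectorizer().fit(positives + unlabeled)
      P = vec.transform(positives).toarray()
      U = vec.transform(unlabeled).toarray()

      # Cluster the unlabeled pool; the cluster farthest from the positive
      # centroid is taken to hold the reliable negatives.
      labels = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(U)
      p_centroid = P.mean(axis=0)
      dists = [np.linalg.norm(U[labels == c].mean(axis=0) - p_centroid) for c in (0, 1)]
      reliable_neg = U[labels == int(np.argmax(dists))]

      # Train the per-domain binary detector on positives vs. mined negatives.
      X = np.vstack([P, reliable_neg])
      y = [1] * len(P) + [0] * len(reliable_neg)
      detector = LogisticRegression().fit(X, y)
      print(detector.predict(vec.transform(["play the next track"]).toarray()))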

  • Unsupervised spoken language understanding for a multi-domain Dialog System
    IEEE Transactions on Audio Speech and Language Processing, 2013
    Co-Authors: Donghyeon Lee, Seonghan Ryu, Minwoo Jeong, Kyungduk Kim, Gary Geunbae Lee
    Abstract:

    This paper proposes an unsupervised spoken language understanding (SLU) framework for a multi-domain Dialog System. Our unsupervised SLU framework applies a non-parametric Bayesian approach to Dialog acts, intents and slot entities, which are the components of a semantic frame. The proposed approach reduces the human effort necessary to obtain a semantically annotated corpus for Dialog System development. In this study, we analyze clustering results using various evaluation metrics for four Dialog corpora. We also introduce a multi-domain Dialog System that uses the unsupervised SLU framework. We argue that our unsupervised approach can help overcome the annotation acquisition bottleneck in developing Dialog Systems. To verify this claim, we report a Dialog System evaluation, in which our method achieves competitive results in comparison with a System that uses a manually annotated corpus. In addition, we conducted several experiments to explore the effect of our approach on reducing development costs. The results show that our approach can be helpful for the rapid development of a prototype System and for reducing overall development costs.
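
    The abstract does not reproduce the exact non-parametric Bayesian model; the sketch below only illustrates the general idea of inducing intent clusters without fixing their number, using scikit-learn's truncated Dirichlet-process mixture as a stand-in. The utterances are invented.

      # Sketch of unsupervised intent induction with a Dirichlet-process
      # mixture (truncated variational approximation), a stand-in for the
      # paper's non-parametric Bayesian model.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.mixture import BayesianGaussianMixture

      utterances = ["book a flight to paris", "reserve a flight to rome",
                    "what is the weather in seoul", "weather forecast for tokyo",
                    "play some jazz", "play rock music"]

      X = TfidfVectorizer().fit_transform(utterances).toarray()
      dpmm = BayesianGaussianMixture(
          n_components=5,            # truncation level, not the true cluster count
          weight_concentration_prior_type="dirichlet_process",
          covariance_type="diag",
          random_state=0,
      ).fit(X)

      for utt, cluster in zip(utterances, dpmm.predict(X)):
          print(cluster, utt)        # induced, unlabeled "intent" ids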

  • Example-based Dialog modeling for practical multi-domain Dialog System
    Speech Communication, 2009
    Co-Authors: Cheongjae Lee, Seokhwan Kim, Sangkeun Jung, Gary Geunbae Lee
    Abstract:

    This paper proposes a generic Dialog modeling framework for a multi-domain Dialog System to simultaneously manage goal-oriented and chat Dialogs for both information access and entertainment. We developed a Dialog modeling technique using an example-based approach to implement multiple applications such as car navigation, weather information, TV program guidance, and a chatbot. Example-based Dialog modeling (EBDM) is a simple and effective method for prototyping and deploying various Dialog Systems. This paper also introduces the System architecture of multi-domain Dialog Systems using the EBDM framework and the domain spotting technique. In our experiments, we evaluated our System using both simulated and real users. We expect that our approach can support flexible management of multi-domain Dialogs on the same framework.
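
    In its simplest form, EBDM indexes (user utterance, System response) examples and replays the response of the best-matching example. The sketch below uses plain string similarity over an invented example base; the actual framework matches on richer Dialog-state keys and uses domain spotting as a pre-filter.

      # Minimal EBDM-style retrieval: reuse the response of the closest
      # indexed example. Examples and domains here are hypothetical.
      from difflib import SequenceMatcher
      from typing import Optional

      EXAMPLES = [
          ("weather", "what is the weather today", "It is sunny in your area today."),
          ("navigation", "find a route to the airport", "Starting guidance to the airport."),
          ("tv", "what is on channel seven tonight", "Tonight on channel 7: the evening news."),
          ("chat", "hello how are you", "I'm doing well, thanks for asking!"),
      ]

      def respond(user_utterance: str, domain: Optional[str] = None) -> str:
          """Domain spotting would narrow the pool before matching."""
          pool = [e for e in EXAMPLES if domain is None or e[0] == domain]
          best = max(pool, key=lambda e: SequenceMatcher(
              None, user_utterance.lower(), e[1]).ratio())
          return best[2]

      print(respond("how's the weather looking today?"))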

María Inés Torres - One of the best experts on this subject based on the ideXlab platform.

  • A Multi-lingual Evaluation of the vAssist Spoken Dialog System. Comparing Disco and RavenClaw
    Dialogues with Social Robots, 2017
    Co-Authors: Javier Mikel Olaso, Julia Himmelsbach, Pierrick Milhorat, Stephan Schlögl, Gérard Chollet, Jerome Boudy, María Inés Torres
    Abstract:

    vAssist (Voice Controlled Assistive Care and Communication Services for the Home) is a European project for which several research institutes and companies have been working on the development of adapted spoken interfaces to support home care and communication services. This paper describes the spoken Dialog System that has been built. Its natural language understanding module includes a novel reference resolver and it introduces a new hierarchical paradigm to model Dialog tasks. The user-centered approach applied to the whole development process led to the setup of several experiment sessions with real users. Multilingual experiments carried out in Austria, France and Spain are described along with their analyses and results in terms of both System performance and user experience. An additional experimental comparison of the RavenClaw and Disco-LFF Dialog managers built into the vAssist spoken Dialog System highlighted similar performance and user acceptance.

  • A multi-lingual evaluation of the vAssist spoken Dialog System : comparing Disco and RavenClaw
    2016
    Co-Authors: Javier Mikel Olaso, Julia Himmelsbach, Pierrick Milhorat, Stephan Schlögl, Gérard Chollet, Jerome Boudy, María Inés Torres
    Abstract:

    vAssist (Voice Controlled Assistive Care and Communication Services for the Home) is a European project for which several research institutes and companies have been working on the development of adapted spoken interfaces to support home care and communication services. This paper describes the spoken Dialog System that has been built. Its natural language understanding module includes a novel reference resolver and it introduces a new hierarchical paradigm to model Dialog tasks. The user-centered approach applied to the whole development process led to the setup of several experiment sessions with real users. Multilingual experiments carried out in Austria, France and Spain are described along with the analyses and results in terms of both System performance and user experience. An additional experimental comparison of the RavenClaw and Disco-LFF Dialog managers built into the vAssist spoken Dialog System highlighted similar performance and user acceptance.
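
    The abstracts above only name the hierarchical task paradigm; the Python sketch below illustrates the general shape of hierarchical Dialog-task modeling (a task tree whose leaves collect slot values), not the actual vAssist or Disco-LFF representation. The reminder task and its slots are invented.

      # Illustrative hierarchical task tree: tasks decompose into subtasks,
      # and leaves prompt for a slot value. A scripted user keeps the demo
      # deterministic; a real System would read from the speech front end.
      from dataclasses import dataclass, field
      from typing import List, Optional, Union

      @dataclass
      class Slot:
          prompt: str
          value: Optional[str] = None

      @dataclass
      class Task:
          name: str
          children: List[Union["Task", "Slot"]] = field(default_factory=list)

          def run(self, answers):
              for child in self.children:
                  if isinstance(child, Task):
                      child.run(answers)           # recurse into the subtask
                  else:
                      child.value = next(answers)  # leaf: ask and fill
                      print(f"{child.prompt} -> {child.value}")

      reminder = Task("set_medication_reminder", [
          Task("identify_medication", [Slot("Which medication?")]),
          Task("schedule", [Slot("At what time each day?")]),
      ])
      reminder.run(iter(["aspirin", "9 am"]))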

Dan Bohus - One of the best experts on this subject based on the ideXlab platform.

  • Learning to Predict Engagement with a Spoken Dialog System in Open-World Settings
    Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2009
    Co-Authors: Dan Bohus, Eric Horvitz
    Abstract:

    We describe a machine learning approach that allows an open-world spoken Dialog System to learn to predict engagement intentions in situ, from interaction. The proposed approach does not require any developer supervision, and leverages spatiotemporal and attentional features automatically extracted from a visual analysis of people coming into the proximity of the System to produce models that are attuned to the characteristics of the environment the System is placed in. Experimental results indicate that a System using the proposed approach can learn to recognize engagement intentions at low false positive rates (e.g., 2–4%) up to 3–4 seconds prior to the actual moment of engagement.
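
    The following Python sketch mirrors the training setup described above, on invented numbers: spatiotemporal and attentional features are labeled automatically by whether the person later engaged, and the decision threshold is set to keep false positives low. The features, distributions, and threshold choice are all assumptions.

      # Self-supervised engagement prediction sketch with synthetic features:
      # [distance_m, approach_speed_mps, facing_system]. Labels come "for
      # free" from whether the person actually engaged a few seconds later.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 500
      engaged = np.column_stack([rng.normal(1.5, 0.5, n),
                                 rng.normal(0.8, 0.3, n),
                                 rng.binomial(1, 0.9, n)])
      passed_by = np.column_stack([rng.normal(3.0, 1.0, n),
                                   rng.normal(0.1, 0.3, n),
                                   rng.binomial(1, 0.2, n)])
      X = np.vstack([engaged, passed_by])
      y = np.array([1] * n + [0] * n)      # label = did they engage later?

      clf = LogisticRegression().fit(X, y)
      # Operate at a conservative threshold, in the spirit of the low
      # (2-4%) false-positive rates reported in the paper.
      scores = clf.predict_proba(X)[:, 1]
      threshold = np.quantile(scores[y == 0], 0.97)   # ~3% FP on negatives
      print(clf.predict_proba([[1.2, 0.9, 1]])[0, 1] > threshold)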

  • SIGDIAL Conference - Learning to Predict Engagement with a Spoken Dialog System in Open-World Settings
    Proceedings of the SIGDIAL 2009 Conference on The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue - SIGDIAL '09, 2009
    Co-Authors: Dan Bohus, Eric Horvitz
    Abstract:

    We describe a machine learning approach that allows an open-world spoken Dialog System to learn to predict engagement intentions in situ, from interaction. The proposed approach does not require any developer supervision, and leverages spatiotemporal and attentional features automatically extracted from a visual analysis of people coming into the proximity of the System to produce models that are attuned to the characteristics of the environment the System is placed in. Experimental results indicate that a System using the proposed approach can learn to recognize engagement intentions at low false positive rates (e.g., 2–4%) up to 3–4 seconds prior to the actual moment of engagement.

  • ConQuest: An open-source Dialog System for conferences
    Proceedings of NAACL-Short '07, Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, 2007
    Co-Authors: Dan Bohus, S Puerto
    Abstract:

    We describe ConQuest, an open-source, reusable spoken Dialog System that provides technical program information during conferences. The System uses a transparent, modular and open infrastructure, and aims to enable applied research in spoken language interfaces. The conference domain is a good platform for applied research since it permits periodical redeployments and evaluations with a real user base. In this paper, we describe the System's functionality and overall architecture, and we discuss two initial deployments.

  • Let's Go Public! Taking a Spoken Dialog System to the Real World
    in Proc. of Interspeech …, 2005
    Co-Authors: Antoine Raux, Brian Langner, Dan Bohus
    Abstract:

    In this paper, we describe how a research spoken Dialog System was made available to the general public. The Let's Go Public spoken Dialog System provides bus schedule information to the Pittsburgh population during off-peak times. This paper describes the changes necessary to make the System usable by the general public and presents an analysis of the calls and of the strategies we used to ensure high performance.

  • Integrating Multiple Knowledge Sources for Utterance-Level Confidence Annotation in the CMU Communicator Spoken Dialog System
    2002
    Co-Authors: Dan Bohus, Alexander I. Rudnicky
    Abstract:

    In recent years, automated speech recognition has been the main drive behind the advent of spoken language interfaces, but at the same time a severe limiting factor in the development of these Systems. We believe that increased robustness in the face of recognition errors can be achieved by making the Systems aware of their own misunderstandings, and employing appropriate recovery techniques when breakdowns in interaction occur. In this paper we address the first problem: the development of an utterance-level confidence annotator for a spoken Dialog System. After a brief introduction to the CMU Communicator spoken Dialog System (which provided the target platform for the developed annotator), we cast the confidence annotation problem as a machine learning classification task, and focus on selecting relevant features and on empirically identifying the best classification techniques for this task. The results indicate that significant reductions in classification error rate can be obtained using several different classifiers. Furthermore, we propose a data-driven approach to assessing the impact of the errors committed by the confidence annotator on Dialog performance, with a view to optimally fine-tuning the annotator. Several models were constructed, and the resulting error costs were in accordance with our intuition. We found, surprisingly, that, at least for a mixed-initiative spoken Dialog System such as the CMU Communicator, these errors trade off equally over a wide operating characteristic range.
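
    As a toy illustration of casting confidence annotation as classification, the sketch below trains a classifier on a handful of invented feature vectors; the paper's actual feature set spans decoder, parse, and Dialog-level knowledge sources, and none of its data is reproduced here.

      # Utterance-level confidence annotation as binary classification.
      # Features and labels are invented:
      # [mean_asr_word_confidence, parse_coverage, n_words, turn_number]
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      X = np.array([
          [0.92, 1.00,  5,  2], [0.35, 0.40,  9,  7], [0.80, 0.90,  3,  1],
          [0.20, 0.30, 12, 10], [0.88, 0.75,  6,  4], [0.40, 0.55,  8,  6],
      ])
      y = np.array([1, 0, 1, 0, 1, 0])    # 1 = utterance correctly understood

      clf = LogisticRegression().fit(X, y)
      # The accept/confirm threshold is where the error-cost trade-off the
      # paper analyzes would be tuned; 0.5 is just a placeholder.
      p_ok = clf.predict_proba([[0.6, 0.7, 7, 3]])[0, 1]
      print(f"P(understood) = {p_ok:.2f} ->", "accept" if p_ok > 0.5 else "confirm")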

Adam Emfield - One of the best experts on this subject based on the ideXlab platform.

  • SLT - An end-to-end Dialog System for TV program discovery
    2014 IEEE Spoken Language Technology Workshop (SLT), 2014
    Co-Authors: Deepak Ramachandran, Benjamin Douglas, Ronald Provine, Adwait Ratnaparkhi, Jeremy Mendel, William Jarrold, Adam Emfield
    Abstract:

    In this paper, we present an end-to-end Dialog System for TV program discovery that uniquely combines several technologies, such as trainable relation extraction, belief tracking over relational structures, mixed-initiative Dialog management, and inference over large-scale knowledge graphs. We present an evaluation of our end-to-end System with real users and find that it performed well along several dimensions, such as usability and task success rate. These results demonstrate the effectiveness of our System in the target domain.
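
    As a rough illustration of belief tracking over relational structures, the sketch below narrows a candidate set of programs as relational constraints accumulate across turns; the program table stands in for the paper's large-scale knowledge graph, and all entries are invented.

      # Toy relational belief tracking for TV program discovery: each user
      # turn contributes constraints that narrow the candidate programs.
      PROGRAMS = [
          {"title": "Cosmos", "genre": "documentary", "channel": "NatGeo", "hour": 20},
          {"title": "Sherlock", "genre": "crime", "channel": "BBC", "hour": 21},
          {"title": "Planet Earth", "genre": "documentary", "channel": "BBC", "hour": 19},
      ]

      def update_belief(candidates, **constraints):
          """Keep only programs consistent with this turn's relations."""
          return [p for p in candidates
                  if all(p.get(k) == v for k, v in constraints.items())]

      belief = PROGRAMS
      belief = update_belief(belief, genre="documentary")   # "show me documentaries"
      belief = update_belief(belief, channel="BBC")         # "something on BBC"
      print([p["title"] for p in belief])                   # -> ['Planet Earth']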

  • An end-to-end Dialog System for TV program discovery
    2014 IEEE Workshop on Spoken Language Technology SLT 2014 - Proceedings, 2014
    Co-Authors: Deepak Ramachandran, Benjamin Douglas, Ronald Provine, Adwait Ratnaparkhi, Jeremy Mendel, Peter Z. Yeh, William Jarrold, Adam Emfield
    Abstract:

    In this paper, we present an end-to-end Dialog System for TV program discovery that uniquely combines several technologies, such as trainable relation extraction, belief tracking over relational structures, mixed-initiative Dialog management, and inference over large-scale knowledge graphs. We present an evaluation of our end-to-end System with real users and find that it performed well along several dimensions, such as usability and task success rate. These results demonstrate the effectiveness of our System in the target domain.

Javier Mikel Olaso - One of the best experts on this subject based on the ideXlab platform.

  • A Multi-lingual Evaluation of the vAssist Spoken Dialog System. Comparing Disco and RavenClaw
    Dialogues with Social Robots, 2017
    Co-Authors: Javier Mikel Olaso, Julia Himmelsbach, Pierrick Milhorat, Stephan Schlögl, Gérard Chollet, Jerome Boudy, María Inés Torres
    Abstract:

    vAssist (Voice Controlled Assistive Care and Communication Services for the Home) is a European project for which several research institutes and companies have been working on the development of adapted spoken interfaces to support home care and communication services. This paper describes the spoken Dialog System that has been built. Its natural language understanding module includes a novel reference resolver and it introduces a new hierarchical paradigm to model Dialog tasks. The user-centered approach applied to the whole development process led to the setup of several experiment sessions with real users. Multilingual experiments carried out in Austria, France and Spain are described along with their analyses and results in terms of both System performance and user experience. An additional experimental comparison of the RavenClaw and Disco-LFF Dialog managers built into the vAssist spoken Dialog System highlighted similar performance and user acceptance.

  • A multi-lingual evaluation of the vAssist spoken Dialog System : comparing Disco and RavenClaw
    2016
    Co-Authors: Javier Mikel Olaso, Julia Himmelsbach, Pierrick Milhorat, Stephan Schlögl, Gérard Chollet, Jerome Boudy, María Inés Torres
    Abstract:

    vAssist (Voice Controlled Assistive Care and Communication Services for the Home) is a European project for which several research institutes and companies have been working on the development of adapted spoken interfaces to support home care and communication services. This paper describes the spoken Dialog System that has been built. Its natural language understanding module includes a novel reference resolver and it introduces a new hierarchical paradigm to model Dialog tasks. The user-centered approach applied to the whole development process led to the setup of several experiment sessions with real users. Multilingual experiments carried out in Austria, France and Spain are described along with the analyses and results in terms of both System performance and user experience. An additional experimental comparison of the RavenClaw and Disco-LFF Dialog managers built into the vAssist spoken Dialog System highlighted similar performance and user acceptance.