Face-To-Face Interaction


The experts below are selected from a list of 312 experts worldwide, ranked by the ideXlab platform.

Samer Al Moubayed - One of the best experts on this subject based on the ideXlab platform.

  • ICMI - IrisTK: a statechart-based toolkit for multi-party Face-To-Face Interaction
    Proceedings of the 14th ACM international conference on Multimodal interaction - ICMI '12, 2012
    Co-Authors: Gabriel Skantze, Samer Al Moubayed
    Abstract:

    In this paper, we present IrisTK, a toolkit for rapid development of real-time systems for multi-party Face-To-Face Interaction. The toolkit consists of a message-passing system, a set of modules for multi-modal input and output, and a dialog authoring language based on the notion of statecharts. The toolkit has been applied in a large-scale study in a public museum setting, where the back-projected robot head Furhat interacted with visitors in multi-party dialog.
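
    The statechart idea is worth making concrete. Below is a minimal sketch of a statechart-style dialog flow in Python; the class and event names are invented for illustration and are not the actual IrisTK API.

```python
# Minimal sketch of a statechart-style dialog flow (hypothetical names,
# not the actual IrisTK API). Each state reacts to events and may
# transition to another state, mirroring the statechart authoring idea.

class State:
    def on_event(self, event, payload):
        return self  # default: stay in the current state

class Idle(State):
    def on_event(self, event, payload):
        if event == "user.enter":         # a visitor approaches the robot
            print("Greeting:", payload["user"])
            return Listening()
        return self

class Listening(State):
    def on_event(self, event, payload):
        if event == "speech.recognized":  # a multimodal input module fired
            print("Heard:", payload["text"])
            return Idle()
        return self

# Event-driven loop: input modules publish events; the flow consumes them.
state = Idle()
for event, payload in [("user.enter", {"user": "visitor-1"}),
                       ("speech.recognized", {"text": "hello"})]:
    state = state.on_event(event, payload)
```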

  • Perception of gaze direction for situated Interaction
    Workshop on Eye Gaze in Intelligent Human Machine Interaction - Gaze-In '12, 2012
    Co-Authors: Samer Al Moubayed, Gabriel Skantze
    Abstract:

    Accurate human perception of robots' gaze direction is crucial for the design of natural and fluent situated multimodal Face-To-Face Interaction between humans and machines. In this paper, we present an experiment, with 18 test subjects, targeted at quantifying the effects of different gaze cues synthesized using the Furhat back-projected robot head on the accuracy of the spatial direction of gaze as perceived by humans. The study first quantifies the accuracy of the perceived gaze direction in a human-human setup, and compares that to the use of synthesized gaze movements in different conditions: viewing the robot's eyes frontally or from a 45-degree side view. We also study the effect of 3D gaze by controlling both eyes to indicate the depth of the focal point (vergence), the use of gaze or head pose, and the use of static or dynamic eyelids. The findings of the study are highly relevant to the design and control of robots and animated agents in situated Face-To-Face Interaction.
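
    The vergence cue mentioned above has simple geometry: each eye rotates inward so that the two lines of sight meet at the focal point. A small illustrative calculation follows; the numbers are made up, not taken from the study.

```python
import math

# Sketch of the vergence geometry referred to in the abstract: each eye
# rotates inward so its line of sight meets at the focal point.
# Values below are illustrative, not from the experiment.
interocular = 0.065   # distance between the eyes in metres
depth = 0.8           # distance to the focal point in metres

# Angle each eye turns inward from straight ahead (symmetric fixation).
per_eye = math.degrees(math.atan((interocular / 2) / depth))
print(f"each eye rotates ~{per_eye:.1f} degrees inward "
      f"({2 * per_eye:.1f} degrees total vergence)")
```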

Gabriel Skantze - One of the best experts on this subject based on the ideXlab platform.

  • INTERSPEECH - The MonAMI Reminder: a spoken dialogue system for Face-To-Face Interaction
    2020
    Co-Authors: Jonas Beskow, Gabriel Skantze, Jens Edlund, Björn Granström, Joakim Gustafson, Helena Tobiasson
    Abstract:

    We describe the MonAMI Reminder, a multimodal spoken dialogue system that can assist elderly and disabled people in organising and initiating their daily activities. Based on in-depth interviews with potential users, we have designed a calendar and reminder application that uses an innovative mix of an embodied conversational agent, digital pen and paper, and the web to meet the needs of those users as well as the current constraints of speech technology. We also explore the use of head pose tracking for Interaction and attention control in human-computer Face-To-Face Interaction.
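
    One plausible reading of head pose tracking for attention control is gating speech input on whether the user is facing the agent. The sketch below illustrates that idea only; the threshold and function names are invented and are not the MonAMI implementation.

```python
# Illustrative attention gate (not the MonAMI implementation): speech is
# only accepted as addressed to the agent while the tracked head pose is
# roughly frontal. The threshold and field names are invented.
FACING_THRESHOLD_DEG = 15.0

def is_attending(head_yaw_deg, head_pitch_deg):
    """True when the user is looking approximately at the agent."""
    return (abs(head_yaw_deg) < FACING_THRESHOLD_DEG and
            abs(head_pitch_deg) < FACING_THRESHOLD_DEG)

def handle_utterance(text, head_yaw_deg, head_pitch_deg):
    if is_attending(head_yaw_deg, head_pitch_deg):
        print("Addressed to agent:", text)
    else:
        print("Ignored (user looking away):", text)

handle_utterance("what do I have today?", head_yaw_deg=4.0, head_pitch_deg=-2.0)
handle_utterance("pass the salt", head_yaw_deg=55.0, head_pitch_deg=0.0)
```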

  • ICMI - IrisTK: a statechart-based toolkit for multi-party Face-To-Face Interaction
    Proceedings of the 14th ACM international conference on Multimodal interaction - ICMI '12, 2012
    Co-Authors: Gabriel Skantze, Samer Al Moubayed
    Abstract:

    In this paper, we present IrisTK, a toolkit for rapid development of real-time systems for multi-party Face-To-Face Interaction. The toolkit consists of a message-passing system, a set of modules for multi-modal input and output, and a dialog authoring language based on the notion of statecharts. The toolkit has been applied in a large-scale study in a public museum setting, where the back-projected robot head Furhat interacted with visitors in multi-party dialog.

  • Perception of gaze direction for situated Interaction
    Workshop on Eye Gaze in Intelligent Human Machine Interaction - Gaze-In '12, 2012
    Co-Authors: Samer Al Moubayed, Gabriel Skantze
    Abstract:

    Accurate human perception of robots' gaze direction is crucial for the design of natural and fluent situated multimodal Face-To-Face Interaction between humans and machines. In this paper, we present an experiment, with 18 test subjects, targeted at quantifying the effects of different gaze cues synthesized using the Furhat back-projected robot head on the accuracy of the spatial direction of gaze as perceived by humans. The study first quantifies the accuracy of the perceived gaze direction in a human-human setup, and compares that to the use of synthesized gaze movements in different conditions: viewing the robot's eyes frontally or from a 45-degree side view. We also study the effect of 3D gaze by controlling both eyes to indicate the depth of the focal point (vergence), the use of gaze or head pose, and the use of static or dynamic eyelids. The findings of the study are highly relevant to the design and control of robots and animated agents in situated Face-To-Face Interaction.

Frédéric Elisei - One of the best experts on this subject based on the ideXlab platform.

  • Perception-Action Loops and Face-To-Face Interaction
    Revue Francaise De Linguistique Appliquee, 2020
    Co-Authors: Gérard Bailly, Frédéric Elisei, Stephan Raidt
    Abstract:

    This article investigates a blossoming research field: Face-To-Face communication. The performance and robustness of the technological components necessary for implementing Face-To-Face Interaction systems between a human being and a conversational agent – vocal technologies, computer vision, image synthesis, dialogue comprehension and generation, etc. – have now reached maturity. We sketch a research program centred on modelling the many perception-action loops involved in processing the Interaction, and on the dynamic tuning of these loops by the various levels of comprehension of the scene in which human beings, robots and animated conversational agents will inevitably be immersed.
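
    As a schematic of what a perception-action loop looks like in code (a toy skeleton only, not the authors' architecture; all names are invented):

```python
# Toy skeleton of a perception-action loop (schematic, not the authors'
# system): perceive the scene, update an interpretation at some
# comprehension level, and let that interpretation modulate the next action.

def perceive(scene):
    return {"user_gaze": scene.get("user_gaze", "away")}

def comprehend(percept, context):
    context["engaged"] = percept["user_gaze"] == "agent"
    return context

def act(context):
    return "make eye contact" if context["engaged"] else "scan for users"

context = {}
for scene in [{"user_gaze": "away"}, {"user_gaze": "agent"}]:
    context = comprehend(perceive(scene), context)
    print(act(context))
```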

  • Face-To-Face Interaction with a conversational agent: eye-gaze and deixis
    2020
    Co-Authors: Stephan Raidt, Frédéric Elisei, Gérard Bailly
    Abstract:

    We present a series of experiments involving Face-To-Face Interaction between an embodied conversational agent (ECA) and a human interlocutor. The main challenge is to provide the interlocutor with implicit and explicit signs of mutual interest and attention, and of awareness of the environmental conditions in which the Interaction takes place. A video-realistic talking head with independent head and eye movements was used as a talking agent interacting with a user during a simple card game offering different levels of help and guidance. We analysed user performance and how users perceived the quality of assistance given by the embodied conversational agent. The experiment showed that users benefit from the agent's presence and its facial deictic cues.
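
    A facial deictic cue of the kind described amounts to aiming the agent's gaze at a target on the screen. The sketch below shows the underlying geometry; the coordinates and function names are invented for illustration, not taken from the authors' system.

```python
import math

# Rough sketch of a facial deictic cue (invented coordinates): aim the
# agent's gaze at a card position by converting the target's offset from
# the head into yaw/pitch angles.

def gaze_angles(head_xyz, target_xyz):
    dx = target_xyz[0] - head_xyz[0]   # rightward offset in metres
    dy = target_xyz[1] - head_xyz[1]   # upward offset in metres
    dz = target_xyz[2] - head_xyz[2]   # depth along the viewing axis
    yaw = math.degrees(math.atan2(dx, dz))
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return yaw, pitch

# Look at a card 20 cm right of and 10 cm below the head, 50 cm away.
print(gaze_angles((0.0, 0.0, 0.0), (0.20, -0.10, 0.50)))
```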

  • Learning joint multimodal behaviors for Face-To-Face Interaction: performance & properties of statistical models
    2015
    Co-Authors: Gérard Bailly, Alaeddine Mihoub, Christian Wolf, Frédéric Elisei
    Abstract:

    We evaluate here the ability of statistical models, namely Hidden Markov Models (HMMs) and Dynamic Bayesian Networks (DBNs), to capture the interplay and coordination between the multimodal behaviors of two individuals involved in a Face-To-Face Interaction. We structure the intricate sensory-motor coupling of the joint multimodal scores by segmenting the whole Interaction into so-called Interaction units (IUs). We show that the proposed statistical models are able to capture the natural dynamics of the Interaction, and that DBNs are particularly suitable for reproducing the original distributions of so-called coordination histograms.
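
    As a rough sketch of this kind of modelling, the snippet below fits a Gaussian HMM (via the third-party hmmlearn library) to a joint multimodal feature stream and reads the hidden states as Interaction units. The features and dimensions are invented, and the paper's DBN variant is not reproduced here.

```python
import numpy as np
from hmmlearn import hmm  # third-party library: pip install hmmlearn

# Sketch of HMM-based segmentation into interaction units (IUs): each
# frame is a joint multimodal feature vector (e.g. gaze and speech
# activity of both partners); hidden states stand in for IUs.
# Random data here, purely for illustration.
rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 4))         # 500 frames x 4 invented features

model = hmm.GaussianHMM(n_components=5,    # assumed number of IUs
                        covariance_type="diag", n_iter=50, random_state=0)
model.fit(frames)                          # unsupervised fit on the score
iu_sequence = model.predict(frames)        # most likely IU per frame
print(iu_sequence[:20])
```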

  • Web Intelligence/IAT Workshops - Gaze Patterns during Face-To-Face Interaction
    2007 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology - Workshops, 2007
    Co-Authors: Stephan Raidt, Gérard Bailly, Frédéric Elisei
    Abstract:

    We present here the analysis of multimodal data gathered during realistic Face-To-Face Interaction of a target speaker with a number of interlocutors. Videos and gaze of both interlocutors were monitored with an experimental setup using coupled cameras and screens equipped with eye trackers. With the aim of understanding the functions of gaze in social Interaction and developing a gaze control model for our talking heads, we investigate the influence of cognitive state and social role on the observed gaze behaviour.
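
    An analysis of this kind typically conditions fixation statistics on annotations such as cognitive state or social role. A minimal sketch with made-up annotations (not the authors' corpus):

```python
from collections import Counter, defaultdict

# Minimal sketch of conditioning gaze statistics on annotated state
# (invented data): count where the target speaker looks, split by an
# annotated cognitive state for each frame.
frames = [("speaking", "partner_face"), ("speaking", "partner_face"),
          ("thinking", "away"), ("thinking", "away"),
          ("listening", "partner_face"), ("thinking", "partner_face")]

by_state = defaultdict(Counter)
for state, gaze_target in frames:
    by_state[state][gaze_target] += 1

for state, counts in by_state.items():
    total = sum(counts.values())
    shares = {t: round(n / total, 2) for t, n in counts.items()}
    print(state, shares)
```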

  • AVSP - Analyzing and modeling gaze during Face-To-Face Interaction
    2007
    Co-Authors: Stephan Raidt, Gérard Bailly, Frédéric Elisei
    Abstract:

    We present here the analysis of multimodal data gathered during realistic Face-To-Face Interaction of a target speaker with a number of interlocutors. Videos and gaze were monitored with an experimental setup using coupled cameras and screens with integrated eye trackers. With the aim of understanding the functions of gaze in social Interaction and developing a coherent gaze control model for our talking heads, we investigate the influence of cognitive state and social role on the observed gaze behavior.

Stephan Raidt - One of the best experts on this subject based on the ideXlab platform.

  • Face-To-Face Interaction with a conversational agent: eye-gaze and deixis
    2020
    Co-Authors: Stephan Raidt, Frédéric Elisei, Gérard Bailly
    Abstract:

    We present a series of experiments involving Face-To-Face Interaction between an embodied conversational agent (ECA) and a human interlocutor. The main challenge is to provide the interlocutor with implicit and explicit signs of mutual interest and attention, and of awareness of the environmental conditions in which the Interaction takes place. A video-realistic talking head with independent head and eye movements was used as a talking agent interacting with a user during a simple card game offering different levels of help and guidance. We analysed user performance and how users perceived the quality of assistance given by the embodied conversational agent. The experiment showed that users benefit from the agent's presence and its facial deictic cues.

  • Perception-Action Loops and Face-To-Face Interaction
    Revue Francaise De Linguistique Appliquee, 2020
    Co-Authors: Gérard Bailly, Frédéric Elisei, Stephan Raidt
    Abstract:

    This article investigates a blossoming research field: Face-To-Face communication. The performance and robustness of the technological components necessary for implementing Face-To-Face Interaction systems between a human being and a conversational agent – vocal technologies, computer vision, image synthesis, dialogue comprehension and generation, etc. – have now reached maturity. We sketch a research program centred on modelling the many perception-action loops involved in processing the Interaction, and on the dynamic tuning of these loops by the various levels of comprehension of the scene in which human beings, robots and animated conversational agents will inevitably be immersed.

  • Web Intelligence/IAT Workshops - Gaze Patterns during Face-To-Face Interaction
    2007 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology - Workshops, 2007
    Co-Authors: Stephan Raidt, Gérard Bailly, Frédéric Elisei
    Abstract:

    We present here the analysis of multimodal data gathered during realistic Face-To-Face Interaction of a target speaker with a number of interlocutors. Videos and gaze of both interlocutors were monitored with an experimental setup using coupled cameras and screens equipped with eye trackers. With the aim of understanding the functions of gaze in social Interaction and developing a gaze control model for our talking heads, we investigate the influence of cognitive state and social role on the observed gaze behaviour.

  • AVSP - Analyzing and modeling gaze during Face-To-Face Interaction
    2007
    Co-Authors: Stephan Raidt, Gérard Bailly, Frédéric Elisei
    Abstract:

    We present here the analysis of multimodal data gathered during realistic Face-To-Face Interaction of a target speaker with a number of interlocutors. Videos and gaze were monitored with an experimental setup using coupled cameras and screens with integrated eye trackers. With the aim of understanding the functions of gaze in social Interaction and developing a coherent gaze control model for our talking heads, we investigate the influence of cognitive state and social role on the observed gaze behavior.

  • IVA - Analyzing Gaze During Face-To-Face Interaction
    Intelligent Virtual Agents, 2007
    Co-Authors: Stephan Raidt, Gérard Bailly, Frédéric Elisei
    Abstract:

    We present here the analysis of multimodal data gathered during realistic Face-To-Face Interaction of a target speaker with a number of interlocutors. Videos and gaze were monitored with an experimental setup using coupled cameras and screens with integrated eye trackers. With the aim of understanding the functions of gaze in social Interaction and developing a coherent gaze control model for our talking heads, we investigate the influence of cognitive state and social role on the observed gaze behavior.

Serena Coppolino Perfumi - One of the best experts on this subject based on the ideXlab platform.