Public Speaking

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 59,736 Experts worldwide, ranked by the ideXlab platform

Stefan Scherer - One of the best experts on this subject based on the ideXlab platform.

  • Training Public Speaking with virtual social interactions: effectiveness of real-time feedback and delayed feedback
    Journal on Multimodal User Interfaces, 2021
    Co-Authors: Mathieu Chollet, Stacy Marsella, Stefan Scherer
    Abstract:

    Social signal processing and virtual social interaction technologies have enabled the creation of social skills training applications, and initial studies have shown that such solutions can lead to positive training outcomes and could complement traditional teaching methods by providing cheap, accessible, and safe tools for training social skills. However, these studies evaluated social skills training systems as a whole, leaving it unclear which components contributed to the positive outcomes, and to what extent. In this paper, we describe an experimental study comparing the relative efficacy of real-time interactive feedback and after-action feedback in the context of a Public Speaking training application. We observed that both components benefit the overall training: the real-time interactive feedback made the experience more immersive and improved participants’ motivation to use the system, while the after-action feedback led to positive training outcomes when it contained personalized feedback elements. Taken together, these results confirm that social signal processing technologies and virtual social interactions both contribute to the efficacy of social skills training systems. Additionally, we observed that several individual factors, here the subjects’ initial level of Public Speaking anxiety, personality, and tendency toward immersion, significantly influenced the training experience. This finding suggests that social skills training systems could benefit from being tailored to participants’ individual circumstances.

  • Public Speaking training with a multimodal interactive virtual audience framework
    International Conference on Multimodal Interfaces, 2015
    Co-Authors: Mathieu Chollet, Kalin Stefanov, Helmut Prendinger, Stefan Scherer
    Abstract:

    We have developed an interactive virtual audience platform for Public Speaking training. Users' Public Speaking behavior is automatically analyzed using multimodal sensors, and multimodal feedback is produced by virtual characters and generic visual widgets depending on the user's behavior. The flexibility of our system allows comparing different interaction media (e.g. virtual reality vs. normal interaction), social situations (e.g. one-on-one meetings vs. large audiences), and trained behaviors (e.g. general Public Speaking performance vs. specific behaviors).

  • Automatic assessment and analysis of Public Speaking anxiety: a virtual audience case study
    Affective Computing and Intelligent Interaction, 2015
    Co-Authors: Torsten Wortwein, Louis-philippe Morency, Stefan Scherer
    Abstract:

    Public Speaking has become an integral part of many professions and is central to career-building opportunities. Yet Public Speaking anxiety is often referred to as the most common fear in everyday life and can severely hinder one's ability to speak in Public. While virtual and real audiences have been successfully utilized to treat Public Speaking anxiety in the past, little work has been done on identifying behavioral characteristics of speakers suffering from anxiety. In this work, we focus on characterizing behavioral indicators of Public Speaking anxiety and assessing it automatically. We identify several indicators of Public Speaking anxiety, among them reduced eye contact with the audience, reduced variability in the voice, and more frequent pauses. We automatically assess Public Speaking anxiety as reported by the speakers through a self-assessment questionnaire, using a speaker-independent paradigm. Our approach using ensemble trees achieves a high correlation between ground truth and our estimation (r = 0.825). Complementary to automatic measures of anxiety, we are also interested in speakers' perceptual differences when interacting with a virtual audience, based on their level of anxiety, in order to improve and further the development of virtual audiences for Public Speaking training and anxiety reduction.
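    The indicators listed above (eye contact with the audience, vocal variability, pausing) can be computed from simple frame-level annotations. The sketch below is an illustrative assumption about the annotation format and thresholds, not the paper's actual feature pipeline; `eye_contact_ratio`, `pitch_variability`, and `pause_count` are hypothetical helpers.

```python
import statistics

def eye_contact_ratio(gaze_labels):
    """Fraction of frames where the speaker looks at the audience."""
    return sum(1 for g in gaze_labels if g == "audience") / len(gaze_labels)

def pitch_variability(f0_values):
    """Population std. dev. of voiced F0 samples (Hz); lower = more monotone."""
    voiced = [f for f in f0_values if f > 0]  # 0 marks an unvoiced frame
    return statistics.pstdev(voiced)

def pause_count(f0_values, min_pause_frames=3):
    """Number of unvoiced runs lasting at least `min_pause_frames` frames."""
    count, run = 0, 0
    for f in f0_values:
        if f == 0:
            run += 1
        else:
            if run >= min_pause_frames:
                count += 1
            run = 0
    if run >= min_pause_frames:
        count += 1
    return count

# Toy annotations: per-frame gaze labels and an F0 track with unvoiced gaps.
gaze = ["audience", "notes", "audience", "audience", "notes", "audience"]
f0 = [210.0, 205.0, 0, 0, 0, 198.0, 0, 0, 0, 0, 215.0, 0]
print(round(eye_contact_ratio(gaze), 3))   # → 0.667
print(round(pitch_variability(f0), 1))     # → 6.3
print(pause_count(f0))                     # → 2 (two gaps of >= 3 frames)
```

    An anxious speaker, per the abstract's findings, would show a lower eye-contact ratio, lower pitch variability, and a higher pause count than a confident one.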

  • Exploring feedback strategies to improve Public Speaking: an interactive virtual audience framework
    Ubiquitous Computing, 2015
    Co-Authors: Mathieu Chollet, Louis-philippe Morency, Ari Shapiro, Torsten Wortwein, Stefan Scherer
    Abstract:

    Good Public Speaking skills are the foundation of strong and effective communication, which is critical in many professions and in everyday life. The ability to speak Publicly requires extensive training and practice. Recent technological developments enable new approaches to Public Speaking training that allow users to practice in a safe and engaging environment. We explore feedback strategies for Public Speaking training based on an interactive virtual audience paradigm. We investigate three study conditions: (1) a non-interactive virtual audience (control condition), (2) direct visual feedback, and (3) nonverbal feedback from an interactive virtual audience. We perform a threefold evaluation based on self-assessment questionnaires, expert assessments, and two objectively annotated measures: eye contact and avoidance of pause fillers. Our experiments show that the interactive virtual audience brings together the best of both worlds: increased engagement and challenge as well as improved Public Speaking skills as judged by experts.

  • An interactive virtual audience platform for Public Speaking training
    Adaptive Agents and Multi-Agents Systems, 2014
    Co-Authors: Mathieu Chollet, Louis-philippe Morency, Ari Shapiro, Giota Stratou, Stefan Scherer
    Abstract:

    We have developed an interactive virtual audience platform for Public Speaking training. Users' Public Speaking behavior is automatically analyzed using audiovisual sensors. The virtual characters display indirect feedback depending on the user's behavior descriptors correlated with Public Speaking performance. We used the system to collect a dataset of Public Speaking performances in different training conditions.

Ralph R Behnke - One of the best experts on this subject based on the ideXlab platform.

  • Anticipatory speech anxiety as a function of Public Speaking assignment type (an earlier version of this paper was presented at the 2005 convention of the National Communication Association in Boston)
    Communication Education, 2006
    Co-Authors: Paul L Witt, Ralph R Behnke
    Abstract:

    This investigation included two studies relating anticipatory Public Speaking anxiety to the nature of the speech assignment. Based on uncertainty reduction theory, which suggests that communicators are less comfortable in unfamiliar or unpredictable contexts, two hypotheses were advanced on the presumption that various types of assignments in a speech performance course do not produce the same levels of anticipatory anxiety. The hypotheses were supported in both trait and state anxiety studies, where certain differences in narrowband anticipatory speech anxiety were detected among different types of informative speeches: impromptu, extemporaneous, and manuscript reading. These findings extend the tenets of uncertainty reduction theory to the Public Speaking context and suggest implications for both therapeutic intervention and pedagogical application.

  • Anticipatory Public Speaking state anxiety as a function of body sensations and state of mind
    Communication Quarterly, 2006
    Co-Authors: Shannon C Mccullough, Shelly G Russell, Ralph R Behnke, Chris R Sawyer, Paul L Witt
    Abstract:

    This study examined the relationships among a Public speaker's body sensations, state of mind, and anticipatory Public Speaking state anxiety. A negative relationship was found to exist between speaker state of mind and anticipatory Public Speaking anxiety, and a positive relationship was found between speaker body sensations and anticipatory Public Speaking anxiety. Moreover, speaker state of mind and body sensations combined to predict anticipatory Public Speaking anxiety.

  • Trait anticipatory Public Speaking anxiety as a function of self-efficacy expectations and self-handicapping strategies
    Communication Research Reports, 2003
    Co-Authors: Anne E Lucchetti, Gina L Phipps, Ralph R Behnke
    Abstract:

    The purpose of this study was to determine if the independent variables of self‐efficacy expectations and self‐handicapping strategies would predict trait anticipatory Public Speaking anxiety. A model was proposed and tested in which self‐efficacy expectations were found to be significant independent predictors of trait anticipatory Public Speaking anxiety. Self‐handicapping was not a significant predictor. Implications for future research are discussed.

  • Milestones of anticipatory Public Speaking anxiety
    Communication Education, 1999
    Co-Authors: Ralph R Behnke, Chris R Sawyer
    Abstract:

    The purpose of the present study was to investigate the levels of anticipatory Public Speaking state and trait anxiety at three pre‐performance milestones or significant events: (1) the moment when the Public speech was assigned in class, (2) the mid‐point of a laboratory session during which the speeches were being prepared, and (3) the moment immediately preceding formal presentation of the speech to the class. The results indicate that both state and trait anxiety levels during these events were ordered in a quadratic, v‐shaped episodic pattern as follows: the highest level of anticipatory anxiety occurred just before Speaking, the second highest level occurred at the time the assignment was announced and explained, and the lowest level was measured during the time students were preparing their speeches.

Mathieu Chollet - One of the best experts on this subject based on the ideXlab platform.

  • Training Public Speaking with virtual social interactions: effectiveness of real-time feedback and delayed feedback
    Journal on Multimodal User Interfaces, 2021
    Co-Authors: Mathieu Chollet, Stacy Marsella, Stefan Scherer
    Abstract:

    Social signal processing and virtual social interaction technologies have enabled the creation of social skills training applications, and initial studies have shown that such solutions can lead to positive training outcomes and could complement traditional teaching methods by providing cheap, accessible, and safe tools for training social skills. However, these studies evaluated social skills training systems as a whole, leaving it unclear which components contributed to the positive outcomes, and to what extent. In this paper, we describe an experimental study comparing the relative efficacy of real-time interactive feedback and after-action feedback in the context of a Public Speaking training application. We observed that both components benefit the overall training: the real-time interactive feedback made the experience more immersive and improved participants’ motivation to use the system, while the after-action feedback led to positive training outcomes when it contained personalized feedback elements. Taken together, these results confirm that social signal processing technologies and virtual social interactions both contribute to the efficacy of social skills training systems. Additionally, we observed that several individual factors, here the subjects’ initial level of Public Speaking anxiety, personality, and tendency toward immersion, significantly influenced the training experience. This finding suggests that social skills training systems could benefit from being tailored to participants’ individual circumstances.

  • Public Speaking training with a multimodal interactive virtual audience framework
    International Conference on Multimodal Interfaces, 2015
    Co-Authors: Mathieu Chollet, Kalin Stefanov, Helmut Prendinger, Stefan Scherer
    Abstract:

    We have developed an interactive virtual audience platform for Public Speaking training. Users' Public Speaking behavior is automatically analyzed using multimodal sensors, and multimodal feedback is produced by virtual characters and generic visual widgets depending on the user's behavior. The flexibility of our system allows comparing different interaction media (e.g. virtual reality vs. normal interaction), social situations (e.g. one-on-one meetings vs. large audiences), and trained behaviors (e.g. general Public Speaking performance vs. specific behaviors).

  • Exploring feedback strategies to improve Public Speaking: an interactive virtual audience framework
    Ubiquitous Computing, 2015
    Co-Authors: Mathieu Chollet, Louis-philippe Morency, Ari Shapiro, Torsten Wortwein, Stefan Scherer
    Abstract:

    Good Public Speaking skills are the foundation of strong and effective communication, which is critical in many professions and in everyday life. The ability to speak Publicly requires extensive training and practice. Recent technological developments enable new approaches to Public Speaking training that allow users to practice in a safe and engaging environment. We explore feedback strategies for Public Speaking training based on an interactive virtual audience paradigm. We investigate three study conditions: (1) a non-interactive virtual audience (control condition), (2) direct visual feedback, and (3) nonverbal feedback from an interactive virtual audience. We perform a threefold evaluation based on self-assessment questionnaires, expert assessments, and two objectively annotated measures: eye contact and avoidance of pause fillers. Our experiments show that the interactive virtual audience brings together the best of both worlds: increased engagement and challenge as well as improved Public Speaking skills as judged by experts.

  • An interactive virtual audience platform for Public Speaking training
    Adaptive Agents and Multi-Agents Systems, 2014
    Co-Authors: Mathieu Chollet, Louis-philippe Morency, Ari Shapiro, Giota Stratou, Stefan Scherer
    Abstract:

    We have developed an interactive virtual audience platform for Public Speaking training. Users' Public Speaking behavior is automatically analyzed using audiovisual sensors. The virtual characters display indirect feedback depending on the user's behavior descriptors correlated with Public Speaking performance. We used the system to collect a dataset of Public Speaking performances in different training conditions.

  • An interactive virtual audience platform for Public Speaking training (demonstration)
    2014
    Co-Authors: Mathieu Chollet, Louis-philippe Morency, Ari Shapiro, Giota Stratou, Stefan Scherer
    Abstract:

    We have developed an interactive virtual audience platform for Public Speaking training. Users’ Public Speaking behavior is automatically analyzed using audiovisual sensors. The virtual characters display indirect feedback depending on the user’s behavior descriptors correlated with Public Speaking performance. We used the system to collect a dataset of Public Speaking performances in different training conditions.

Louis-philippe Morency - One of the best experts on this subject based on the ideXlab platform.

  • Automatic assessment and analysis of Public Speaking anxiety: a virtual audience case study
    Affective Computing and Intelligent Interaction, 2015
    Co-Authors: Torsten Wortwein, Louis-philippe Morency, Stefan Scherer
    Abstract:

    Public Speaking has become an integral part of many professions and is central to career-building opportunities. Yet Public Speaking anxiety is often referred to as the most common fear in everyday life and can severely hinder one's ability to speak in Public. While virtual and real audiences have been successfully utilized to treat Public Speaking anxiety in the past, little work has been done on identifying behavioral characteristics of speakers suffering from anxiety. In this work, we focus on characterizing behavioral indicators of Public Speaking anxiety and assessing it automatically. We identify several indicators of Public Speaking anxiety, among them reduced eye contact with the audience, reduced variability in the voice, and more frequent pauses. We automatically assess Public Speaking anxiety as reported by the speakers through a self-assessment questionnaire, using a speaker-independent paradigm. Our approach using ensemble trees achieves a high correlation between ground truth and our estimation (r = 0.825). Complementary to automatic measures of anxiety, we are also interested in speakers' perceptual differences when interacting with a virtual audience, based on their level of anxiety, in order to improve and further the development of virtual audiences for Public Speaking training and anxiety reduction.

  • Exploring feedback strategies to improve Public Speaking: an interactive virtual audience framework
    Ubiquitous Computing, 2015
    Co-Authors: Mathieu Chollet, Louis-philippe Morency, Ari Shapiro, Torsten Wortwein, Stefan Scherer
    Abstract:

    Good Public Speaking skills are the foundation of strong and effective communication, which is critical in many professions and in everyday life. The ability to speak Publicly requires extensive training and practice. Recent technological developments enable new approaches to Public Speaking training that allow users to practice in a safe and engaging environment. We explore feedback strategies for Public Speaking training based on an interactive virtual audience paradigm. We investigate three study conditions: (1) a non-interactive virtual audience (control condition), (2) direct visual feedback, and (3) nonverbal feedback from an interactive virtual audience. We perform a threefold evaluation based on self-assessment questionnaires, expert assessments, and two objectively annotated measures: eye contact and avoidance of pause fillers. Our experiments show that the interactive virtual audience brings together the best of both worlds: increased engagement and challenge as well as improved Public Speaking skills as judged by experts.

  • An interactive virtual audience platform for Public Speaking training
    Adaptive Agents and Multi-Agents Systems, 2014
    Co-Authors: Mathieu Chollet, Louis-philippe Morency, Ari Shapiro, Giota Stratou, Stefan Scherer
    Abstract:

    We have developed an interactive virtual audience platform for Public Speaking training. Users' Public Speaking behavior is automatically analyzed using audiovisual sensors. The virtual characters display indirect feedback depending on the user's behavior descriptors correlated with Public Speaking performance. We used the system to collect a dataset of Public Speaking performances in different training conditions.

  • An interactive virtual audience platform for Public Speaking training (demonstration)
    2014
    Co-Authors: Mathieu Chollet, Louis-philippe Morency, Ari Shapiro, Giota Stratou, Stefan Scherer
    Abstract:

    We have developed an interactive virtual audience platform for Public Speaking training. Users’ Public Speaking behavior is automatically analyzed using audiovisual sensors. The virtual characters display indirect feedback depending on the user’s behavior descriptors correlated with Public Speaking performance. We used the system to collect a dataset of Public Speaking performances in different training conditions.

  • Cicero: towards a multimodal virtual audience platform for Public Speaking training
    Intelligent Virtual Agents, 2013
    Co-Authors: Ligia Batrinca, Giota Stratou, Louis-philippe Morency, Ari Shapiro, Stefan Scherer
    Abstract:

    Public Speaking performances are characterized not only by the presentation of the content, but also by the presenter’s nonverbal behavior, such as gestures, tone of voice, vocal variety, and facial expressions. Within this work, we seek to identify automatic nonverbal behavior descriptors that correlate with expert assessments of behaviors characteristic of good and bad Public Speaking performances. We present a novel multimodal corpus recorded with a virtual audience Public Speaking training platform. Lastly, we utilize the behavior descriptors to automatically approximate the overall assessment of the performance using support vector regression in a speaker-independent experiment, yielding promising results that approach human performance.
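    The speaker-independent regression setup described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the data is synthetic, scikit-learn's `SVR` stands in for the trained model, and the descriptor dimensions and hyperparameters are assumptions. The key idea shown is that all clips from one speaker are held out together, so the model is never tested on a speaker it was trained on.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(42)
n_speakers, clips_per_speaker, n_features = 15, 3, 4
n = n_speakers * clips_per_speaker

# Synthetic "behavior descriptors" (stand-ins for gesture, vocal variety,
# etc.) and expert scores linearly related to them, plus noise.
X = rng.normal(size=(n, n_features))
y = X @ np.array([0.8, -0.5, 0.3, 0.0]) + rng.normal(scale=0.2, size=n)
groups = np.repeat(np.arange(n_speakers), clips_per_speaker)  # speaker id per clip

# Speaker-independent evaluation: hold out one speaker's clips per fold.
preds = np.empty(n)
for train, test in LeaveOneGroupOut().split(X, y, groups):
    model = SVR(kernel="rbf", C=1.0).fit(X[train], y[train])
    preds[test] = model.predict(X[test])

# Pearson correlation between expert scores and predictions, the kind of
# metric the abstract reports.
r = np.corrcoef(y, preds)[0, 1]
print(f"speaker-independent r = {r:.3f}")
```

    Grouping the folds by speaker is what makes the reported correlation a speaker-independent result rather than an optimistic within-speaker fit.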

Philip Lindner - One of the best experts on this subject based on the ideXlab platform.