Vocal Expression

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 252 Experts worldwide ranked by the ideXlab platform

Petri Laukka - One of the best experts on this subject based on the ideXlab platform.

  • Cross-cultural emotion recognition and in-group advantage in Vocal Expression: a meta-analysis
    Emotion Review, 2021
    Co-Authors: Petri Laukka, Hillary Anger Elfenbein
    Abstract:

    Most research on cross-cultural emotion recognition has focused on facial Expressions. To integrate the body of evidence on Vocal Expression, we present a meta-analysis of 37 cross-cultural studies...

  • The Mirror to Our Soul? Comparisons of Spontaneous and Posed Vocal Expression of Emotion
    Journal of Nonverbal Behavior, 2018
    Co-Authors: Patrik N. Juslin, Petri Laukka, Tanja Banziger
    Abstract:

    It has been the subject of much debate in the study of Vocal Expression of emotions whether posed Expressions (e.g., actor portrayals) are different from spontaneous Expressions. In the present investigation, we assembled a new database consisting of 1877 voice clips from 23 datasets, and used it to systematically compare spontaneous and posed Expressions across 3 experiments. Results showed that (a) spontaneous Expressions were generally rated as more genuinely emotional than were posed Expressions, even when controlling for differences in emotion intensity, (b) there were differences between the two stimulus types with regard to their acoustic characteristics, and (c) spontaneous Expressions with a high emotion intensity conveyed discrete emotions to listeners to a similar degree as has previously been found for posed Expressions, supporting a dose–response relationship between intensity of Expression and discreteness in perceived emotions. Our conclusion is that there are reliable differences between spontaneous and posed Expressions, though not necessarily in the ways commonly assumed. Implications for emotion theories and the use of emotion portrayals in studies of Vocal Expression are discussed.

  • Expression of emotion in music and Vocal communication: introduction to the research topic
    Frontiers in Psychology, 2014
    Co-Authors: Petri Laukka, Anjali Bhatara, Daniel J Levitin
    Abstract:

    In social interactions, we must gauge the emotional state of others in order to behave appropriately. We rely heavily on auditory cues, specifically speech prosody, to do this. Music is also a complex auditory signal with the capacity to communicate emotion rapidly and effectively and often occurs in social situations or ceremonies as an emotional unifier. In sum, the main contribution of this Research Topic, along with highlighting the variety of research being done already, is to show the places of contact between the domains of music and Vocal Expression that occur at the level of emotional communication. In addition, we hope it will encourage future dialog among researchers interested in emotion in fields as diverse as computer science, linguistics, musicology, neuroscience, psychology, speech and hearing sciences, and sociology, who can each contribute knowledge necessary for studying this complex topic.

  • Graded structure in Vocal Expression of emotion: what is meant by prototypical Expressions?
    International Workshop on Paralinguistic Speech - between models and data ParaLing'07, 2007
    Co-Authors: Petri Laukka, Nicolas Audibert, Veronique Auberge
    Abstract:

    This study examined what determines typicality (graded structure) in Vocal Expressions of emotion. Separate groups of judges rated expressive speech stimuli (both acted and spontaneous Expressions) with regard to typicality, ideality, and frequency of instantiation (FI). Also, a measure of similarity to central tendency (CT) was obtained from listener judgments. Partial correlations and multiple regression analysis revealed that similarity to ideal, and not FI or CT, explained most variance in judged typicality. It is argued that these results may indicate that prototypical Vocal Expressions are best characterized as goal-derived categories, rather than common taxonomic categories. This could explain how prototypical Expressions can be acoustically distinct and highly recognizable, while at the same time occurring relatively rarely in everyday speech.
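
    The regression logic in the abstract above can be sketched with simulated data; the ratings, effect sizes, and sample size below are invented for illustration and are not taken from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 60  # hypothetical number of rated speech stimuli

    # Simulated listener ratings (illustrative values only)
    ideality = rng.uniform(1, 7, n)   # judged similarity to an ideal Expression
    fi = rng.uniform(1, 7, n)         # frequency of instantiation (FI)
    ct = rng.uniform(1, 7, n)         # similarity to central tendency (CT)
    # Typicality constructed to depend mostly on ideality, mirroring the finding
    typicality = 0.8 * ideality + 0.1 * fi + 0.05 * ct + rng.normal(0, 0.3, n)

    # Multiple regression (OLS with intercept) of typicality on the predictors
    X = np.column_stack([np.ones(n), ideality, fi, ct])
    beta, *_ = np.linalg.lstsq(X, typicality, rcond=None)
    pred = X @ beta
    r2 = 1 - ((typicality - pred) ** 2).sum() / ((typicality - typicality.mean()) ** 2).sum()
    ```

    In this toy setup the coefficient on ideality dominates the regression, which is the pattern the abstract reports for judged typicality.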

  • Categorical perception of Vocal emotion Expressions
    Emotion, 2005
    Co-Authors: Petri Laukka
    Abstract:

    Continua of Vocal emotion Expressions, ranging from one Expression to another, were created using speech synthesis. Each emotion continuum consisted of Expressions differing by equal physical amounts. In 2 experiments, subjects identified the emotion of each Expression and discriminated between pairs of Expressions. Identification results show that the continua were perceived as 2 distinct sections separated by a sudden category boundary. Also, discrimination accuracy was generally higher for pairs of stimuli falling across category boundaries than for pairs belonging to the same category. These results suggest that Vocal Expressions are perceived categorically. Results are interpreted from an evolutionary perspective on the function of Vocal Expression.
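
    The identification side of such an experiment can be sketched as follows; the seven-step continuum and response proportions are hypothetical, and the logistic fit uses a simple grid search rather than the paper's method.

    ```python
    import numpy as np

    # Hypothetical identification data along a 7-step synthesized continuum
    # (proportion of "anger" responses at each equal physical step).
    steps = np.arange(1, 8)
    p_anger = np.array([0.02, 0.05, 0.10, 0.55, 0.90, 0.96, 0.98])

    # Fit a logistic curve p = 1 / (1 + exp(-k(x - x0))) by grid search
    # (a minimal sketch; real work would use maximum likelihood).
    best = (np.inf, None, None)
    for x0_try in np.linspace(1, 7, 241):
        for k_try in np.linspace(0.5, 6, 112):
            p = 1.0 / (1.0 + np.exp(-k_try * (steps - x0_try)))
            sse_try = np.sum((p - p_anger) ** 2)
            if sse_try < best[0]:
                best = (sse_try, x0_try, k_try)
    sse, x0, k = best
    # x0 estimates the category boundary: identification flips abruptly around it
    ```

    A steep fitted slope with a well-localized midpoint (here near step 4) is the identification signature of a category boundary; discrimination accuracy would then be tested for pairs straddling x0 versus pairs on the same side.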

Klaus R. R. Scherer - One of the best experts on this subject based on the ideXlab platform.

  • Automated recognition of emotion appraisals
    2015
    Co-Authors: Marcello Mortillaro, Ben Meuleman, Klaus R. R. Scherer
    Abstract:

    Most computer models for the automatic recognition of emotion from nonverbal signals (e.g., facial or Vocal Expression) have adopted a discrete emotion perspective, i.e., they output a categorical emotion from a limited pool of candidate labels. The discrete perspective suffers from practical and theoretical drawbacks that limit the generalizability of such systems. The authors of this chapter propose instead to adopt an appraisal perspective in modeling emotion recognition, i.e., to infer the subjective cognitive evaluations that underlie both the nonverbal cues and the overall emotion states. In a first step, expressive features would be used to infer appraisals; in a second step, the inferred appraisals would be used to predict an emotion label. The first step is practically unexplored in the emotion literature. Such a system would make it possible to (a) link models of emotion recognition and production, (b) add contextual information to the inference algorithm, and (c) detect subtle emotion states.
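
    The proposed two-step inference can be sketched as follows; the feature-to-appraisal mapping, the appraisal dimensions, and the emotion prototypes below are all invented placeholders, since the chapter notes that the first step is largely unexplored.

    ```python
    import numpy as np

    def infer_appraisals(features, W):
        """Step 1: linear map from an expressive-feature vector to appraisal
        dimensions (here, hypothetically: novelty, pleasantness, control)."""
        return W @ features

    def label_from_appraisals(appraisals, prototypes):
        """Step 2: nearest prototype in appraisal space gives the emotion label."""
        dists = {name: np.linalg.norm(appraisals - proto)
                 for name, proto in prototypes.items()}
        return min(dists, key=dists.get)

    # Illustrative mapping: rows are appraisal dimensions, columns are features
    W = np.array([[0.9, 0.0, 0.1],    # novelty from, say, F0 change
                  [0.0, 1.0, 0.0],    # pleasantness from spectral balance
                  [0.1, 0.0, 0.8]])   # control from energy
    prototypes = {"joy":   np.array([0.3,  0.9,  0.6]),
                  "fear":  np.array([0.9, -0.8, -0.7]),
                  "anger": np.array([0.5, -0.7,  0.8])}

    features = np.array([0.8, -0.9, -0.6])   # a hypothetical fearful voice clip
    label = label_from_appraisals(infer_appraisals(features, W), prototypes)
    ```

    Because the label is derived from intermediate appraisals rather than directly from features, contextual information could be injected at the appraisal stage, which is one of the advantages the chapter argues for.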

  • The role of perceived voice and speech characteristics in Vocal emotion communication
    Journal of Nonverbal Behavior, 2014
    Co-Authors: Tanja Banziger, Sona Patel, Klaus R. R. Scherer
    Abstract:

    Aiming at a more comprehensive assessment of nonverbal Vocal emotion communication, this article presents the development and validation of a new rating instrument for the assessment of perceived voice and speech features. In two studies, using two different sets of emotion portrayals by German and French actors, ratings of perceived voice and speech characteristics (loudness, pitch, intonation, sharpness, articulation, roughness, instability, and speech rate) were obtained from non-expert (untrained) listeners. In addition, standard acoustic parameters were extracted from the voice samples. Overall, highly similar patterns of results were found in both studies. Rater agreement (reliability) reached highly satisfactory levels for most features. Multiple discriminant analysis results reveal that both perceived Vocal features and acoustic parameters allow a high degree of differentiation of the actor-portrayed emotions. Positive emotions can be classified with a higher hit rate on the basis of perceived Vocal features, confirming suggestions in the literature that it is difficult to find acoustic valence indicators. The results show that the suggested scales (Geneva Voice Perception Scales) can be reliably measured and make a substantial contribution to a more comprehensive assessment of the process of emotion inferences from Vocal Expression.
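
    Rater agreement of the kind reported above is commonly quantified with Cronbach's alpha computed across listeners; the sketch below uses simulated loudness ratings (the Geneva Voice Perception Scales data themselves are not reproduced here).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_stim, n_raters = 40, 10

    # Simulated ratings: a shared per-stimulus signal plus independent rater noise
    true_loudness = rng.normal(4, 1, n_stim)
    ratings = true_loudness[:, None] + rng.normal(0, 0.5, (n_stim, n_raters))

    def cronbach_alpha(x):
        """Internal consistency across the columns (raters) of a
        stimuli x raters rating matrix."""
        k = x.shape[1]
        item_vars = x.var(axis=0, ddof=1).sum()
        total_var = x.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    alpha = cronbach_alpha(ratings)
    ```

    With moderate rater noise and ten raters, alpha comes out very high, illustrating how a perceptual scale can reach the "highly satisfactory" reliability levels the abstract describes.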

  • Mapping emotions into acoustic space: the role of voice production
    Biological Psychology, 2011
    Co-Authors: Sona Patel, Klaus R. R. Scherer, Eva Bjorkner, Johan Sundberg
    Abstract:

    Research on the Vocal Expression of emotion has long used a “fishing expedition” approach to find acoustic markers for emotion categories and dimensions. Although partially successful, the underlying mechanisms have not yet been elucidated. To illustrate that this research can profit from considering the underlying voice production mechanism, we specifically analyzed short affect bursts (sustained /a/ vowels produced by 10 professional actors for five emotions) according to physiological variations in phonation (using acoustic parameters derived from the acoustic signal and the inverse-filter-estimated voice source waveform). Results show significant emotion main effects for 11 of 12 parameters. Subsequent principal components analysis revealed three components that explain acoustic variations due to emotion: “tension,” “perturbation,” and “voicing frequency.” These results suggest that future work may benefit from theory-guided development of parameters to assess differences in physiological voice production mechanisms in the Vocal Expression of different emotions.
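
    The analysis pipeline (acoustic parameters, then principal components) can be sketched with toy data; the six parameters and the three underlying factors below are simulated stand-ins loosely echoing the “tension,” “perturbation,” and “voicing frequency” components, not the study's measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50  # hypothetical number of voice samples

    # Three simulated latent factors; the six "acoustic parameters" are
    # noisy copies of them, two per factor.
    tension = rng.normal(size=n)
    perturbation = rng.normal(size=n)
    voicing_f0 = rng.normal(size=n)
    data = np.column_stack([
        tension + 0.1 * rng.normal(size=n),       # e.g. spectral slope
        tension + 0.1 * rng.normal(size=n),       # e.g. high-frequency energy
        perturbation + 0.1 * rng.normal(size=n),  # e.g. jitter
        perturbation + 0.1 * rng.normal(size=n),  # e.g. shimmer
        voicing_f0 + 0.1 * rng.normal(size=n),    # e.g. mean F0
        voicing_f0 + 0.1 * rng.normal(size=n),    # e.g. F0 range
    ])

    # Principal components from the correlation matrix
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)[::-1]      # descending eigenvalues
    explained = eigvals / eigvals.sum()
    n_components = int((eigvals > 1).sum())       # Kaiser criterion
    ```

    With three latent factors driving six parameters, exactly three eigenvalues exceed 1 and absorb nearly all the variance, which is the kind of component structure the abstract reports.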

  • Beyond arousal: valence and potency/control cues in the Vocal Expression of emotion
    Journal of the Acoustical Society of America, 2010
    Co-Authors: Martijn Goudbeek, Klaus R. R. Scherer
    Abstract:

    The important role of arousal in determining Vocal parameters in the Expression of emotion is well established. There is less evidence for the contribution of emotion dimensions such as valence and potency/control to Vocal emotion Expression. Here, an acoustic analysis of the newly developed Geneva Multimodal Emotional Portrayals corpus is presented to examine the role of dimensions other than arousal. This corpus contains twelve emotions that systematically vary with respect to valence, arousal, and potency/control. The emotions were portrayed by professional actors coached by a stage director. The extracted acoustic parameters were first compared with those obtained from a similar corpus [Banse and Scherer (1996). J. Pers. Soc. Psychol. 70, 614–636] and shown to largely replicate the earlier findings. Based on a principal component analysis, seven composite scores were calculated and were used to determine the relative contribution of the respective Vocal parameters to the emotional dimensions arousal,...

  • The New Handbook of Methods in Nonverbal Behavior Research
    2008
    Co-Authors: Jinni A Harrigan, Robert Rosenthal, Klaus R. R. Scherer
    Abstract:

    Introduction
    BASIC RESEARCH METHODS AND PROCEDURES
    2. Measuring facial action
    3. Vocal Expression of affect
    4. Proxemics, kinesics, and gaze
    5. Conducting judgment studies: some methodological issues
    RESEARCH APPLICATIONS IN NONVERBAL BEHAVIOR
    6. Nonverbal behavior and interpersonal sensitivity
    7. Nonverbal behavior in education
    8. Nonverbal behavior and psychopathology
    9. Research methods in detecting deception research
    10. Nonverbal communication coding systems of committed couples
    11. Macrovariables in affective Expression in women with breast cancer participating in support groups
    SUPPLEMENTARY MATERIALS
    12. Technical issues in recording nonverbal behavior
    Appendix: Methodological issues in studying nonverbal behavior

Elodie F Briefer - One of the best experts on this subject based on the ideXlab platform.

  • Coding for dynamic information: Vocal Expression of emotional arousal and valence in non-human animals
    2020
    Co-Authors: Elodie F Briefer
    Abstract:

    Emotions guide behavioural decisions in response to events or stimuli of importance for the organism, and thus, are an important component of an animal’s life. Communicating emotions to conspecifics allows, in turn, the regulation of social interactions (e.g. approach and avoidance). The existence of common rules governing Vocal Expression of affective states across species has been proposed as a function of the motivational state (i.e. intention of behaviour) of the emitter (‘motivation-structural rules’) and as a function of the two main dimensions of emotions, valence (positive versus negative) and arousal (bodily activation). In this chapter, I review the potential for Vocalisations to serve as universal non-invasive indicators of animal emotions, by considering the latest evidence for common rules existing across species according to the two dimensions of emotions (‘emotional-dimension rules’). Vocal indicators of emotional arousal have been relatively well studied. Cross-species comparison shows that, when arousal increases, Vocalisations tend to be louder and are produced at faster rates, with higher frequencies (both source- and filter-related) and a more variable fundamental frequency (F0). In contrast, indicators of valence have only been investigated in a few species. The evidence so far indicates that, compared with negative Vocalisations, positive Vocalisations tend to be shorter, with a lower and less variable F0. Yet, comparison of Vocal indicators of valence between closely related species suggests that these indicators are more species specific than indicators of arousal, which have clearly been conserved throughout evolution. To conclude, I further suggest a new set of rules that could explain the acoustic structure of Vocalisations across species, which combine features predicted by the motivation-structural rules, the emotional-dimension rules, and characteristics of the social relationship involving the emitter and receiver.
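
    The cross-species arousal indicators reviewed above (louder, faster call rate, higher and more variable F0) can be expressed as a toy cue-counting rule; the thresholds, baseline values, and equal weighting below are illustrative assumptions, not results from the chapter.

    ```python
    # A minimal sketch: count how many arousal cues exceed an individual's
    # baseline. All numbers are invented for illustration.

    def arousal_score(amplitude_db, call_rate_hz, mean_f0_hz, f0_sd_hz, baseline):
        """Higher score = higher inferred arousal, relative to a per-individual
        baseline dict with the same keys (0-4 cues elevated)."""
        score = 0
        score += amplitude_db > baseline["amplitude_db"]   # louder
        score += call_rate_hz > baseline["call_rate_hz"]   # faster call rate
        score += mean_f0_hz > baseline["mean_f0_hz"]       # higher F0
        score += f0_sd_hz > baseline["f0_sd_hz"]           # more variable F0
        return score

    baseline = {"amplitude_db": 60, "call_rate_hz": 2.0,
                "mean_f0_hz": 300, "f0_sd_hz": 20}
    high = arousal_score(68, 3.5, 420, 35, baseline)   # all four cues elevated
    low = arousal_score(55, 1.2, 250, 12, baseline)    # all four below baseline
    ```

    Referencing cues to a per-individual baseline matters because absolute values of these parameters differ enormously between species and individuals, while the direction of change with arousal is what the reviewed evidence suggests is shared.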

  • Vocal Expression of emotional valence in pigs across multiple call types and contexts
    2nd Workshop on Vocal interactivity in-and-between Humans Animals and Robots, 2019
    Co-Authors: Elodie F Briefer, Pavel Linhart, Richard Policht, Marek Spinka, Lisette M C Leliveld, Sandra Dupjan, Birger Puppe, Monica Padilla De La Torre, Andrew M Janczak
    Abstract:

    Vocal Expression of emotional valence in pigs across multiple call types and contexts. 2nd Workshop on Vocal interactivity in-and-between Humans, Animals and Robots

  • Vocal Expression of emotional valence in Przewalski's horses (Equus przewalskii).
    Scientific Reports, 2017
    Co-Authors: Anne-laure Maigrot, Edna Hillmann, Callista Anne, Elodie F Briefer
    Abstract:

    Vocal Expression of emotions has been suggested to be conserved throughout evolution. However, since Vocal indicators of emotions have never been compared between closely related species using similar methods, it remains unclear whether this is the case. Here, we investigated Vocal indicators of emotional valence (negative versus positive) in Przewalski’s horses, in order to find out if Expression of valence is similar between species and notably among Equidae through a comparison with previous results obtained in domestic horse whinnies. We observed Przewalski’s horses in naturally occurring contexts characterised by positive or negative valence. As emotional arousal (bodily activation) can act as a confounding factor in the search for indicators of valence, we controlled for its effect on Vocal parameters using a behavioural indicator (movement). We found that positive and negative situations were associated with specific types of calls. Additionally, the acoustic structure of calls differed according to the valence. There were some similarities but also striking differences in Expression of valence between Przewalski’s and domestic horses, suggesting that Vocal Expression of emotional valence, unlike emotional arousal, could be species specific rather than conserved throughout evolution.
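
    The control for arousal described above can be sketched as residualization: regress the Vocal parameter on the movement covariate, then compare valence groups on the residuals. All numbers below are simulated; only the logic follows the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 80

    # Simulated calls: valence label plus a movement covariate standing in
    # for arousal (illustrative values, not the horse data).
    valence = np.repeat([0, 1], n // 2)      # 0 = negative, 1 = positive
    movement = rng.uniform(0, 1, n)
    # F0 rises with arousal (movement) and is lower for positive calls
    f0 = 400 + 80 * movement - 30 * valence + rng.normal(0, 5, n)

    # Residualize F0 on movement before comparing valence groups,
    # so the valence contrast is not confounded by arousal.
    X = np.column_stack([np.ones(n), movement])
    beta, *_ = np.linalg.lstsq(X, f0, rcond=None)
    resid = f0 - X @ beta
    diff = resid[valence == 1].mean() - resid[valence == 0].mean()
    ```

    In this toy setup the residualized group difference recovers the built-in valence effect (lower F0 for positive calls) even though raw F0 also varies strongly with movement.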

  • Vocal Expression of emotions in mammals: mechanisms of production and evidence
    Journal of Zoology, 2012
    Co-Authors: Elodie F Briefer
    Abstract:

    Emotions play a crucial role in an animal's life because they facilitate responses to external or internal events of significance for the organism. In social species, one of the main functions of emotional Expression is to regulate social interactions. There has recently been a surge of interest in animal emotions in several disciplines, ranging from neuroscience to evolutionary zoology. Because measurements of subjective emotional experiences are not possible in animals, researchers use neurophysiological, behavioural and cognitive indicators. However, good indicators, particularly of positive emotions, are still lacking. Vocalizations are linked to the inner state of the caller. The emotional state of the caller causes changes in the muscular tension and action of its Vocal apparatus, which in turn, impacts on Vocal parameters of Vocalizations. By considering the mode of production of Vocalizations, we can understand and predict how Vocal parameters should change according to the arousal (intensity) or valence (positive/negative) of emotional states. In this paper, I review the existing literature on Vocal correlates of emotions in mammals. Non-human mammals could serve as ideal models to study Vocal Expression of emotions, because, contrary to human speech, animal Vocalizations are assumed to be largely free of control and therefore direct Expressions of underlying emotions. Furthermore, a comparative approach between humans and other animals would give us a better understanding of how emotion Expression evolved. Additionally, these non-invasive indicators could serve various disciplines that require animal emotions to be clearly identified, including psychopharmacology and animal welfare science.

Hillary Anger Elfenbein - One of the best experts on this subject based on the ideXlab platform.

Klaus R Scherer - One of the best experts on this subject based on the ideXlab platform.

  • Comment: Advances in studying the Vocal Expression of emotion: current contributions and further options
    Emotion Review, 2021
    Co-Authors: Klaus R Scherer
    Abstract:

    I consider the five contributions in this special section as evidence that the research area dealing with the Vocal Expression of emotion is advancing rapidly, both in terms of the number of pertin...

  • Beyond arousal: valence and potency/control cues in the Vocal Expression of emotion
    Journal of the Acoustical Society of America, 2010
    Co-Authors: Martijn Goudbeek, Klaus R Scherer
    Abstract:

    The important role of arousal in determining Vocal parameters in the Expression of emotion is well established. There is less evidence for the contribution of emotion dimensions such as valence and potency/control to Vocal emotion Expression. Here, an acoustic analysis of the newly developed Geneva Multimodal Emotional Portrayals corpus is presented to examine the role of dimensions other than arousal. This corpus contains twelve emotions that systematically vary with respect to valence, arousal, and potency/control. The emotions were portrayed by professional actors coached by a stage director. The extracted acoustic parameters were first compared with those obtained from a similar corpus [Banse and Scherer (1996). J. Pers. Soc. Psychol. 70, 614–636] and shown to largely replicate the earlier findings. Based on a principal component analysis, seven composite scores were calculated and were used to determine the relative contribution of the respective Vocal parameters to the emotional dimensions arousal, valence, and potency/control. The results show that although arousal dominates for many Vocal parameters, it is possible to identify parameters, in particular spectral balance and spectral noise, that are specifically related to valence and potency/control.

  • A cross-cultural investigation of emotion inferences from voice and speech: implications for speech technology
    Conference of the International Speech Communication Association, 2000
    Co-Authors: Klaus R Scherer
    Abstract:

    Recent years have seen increasing efforts to improve speech technology tools such as speaker verification, speech recognition, and speech synthesis by taking voice and speech variations due to speaker emotion or attitudes into account. Given the global marketing of speech technology products, it is of vital importance to establish whether the Vocal changes produced by emotional and attitudinal factors are universal or vary over cultures and/or languages. The answer to this question determines whether similar algorithms can be used to factor out (or produce) emotional variations across all languages and cultures. This contribution describes the first large-scale effort to obtain empirical data on this issue by studying emotion recognition from voice in nine countries on three different continents.

    1. OVERVIEW OF THE ISSUE

    The important role of emotion in shaping human voice production has been known since antiquity (see [1] for a review of early research). Yet, due to the relative neglect of the issue of emotion-induced voice and speech changes in speech research, little progress has been made in the past 25 years. It has been only rather recently that the important role of emotion- and attitude-dependent voice and speech variations has found the interest of phoneticians and engineers working on speech synthesis, speech recognition, and speaker verification [2,3]. Currently, a number of efforts are under way to understand the effects of emotion on voice and speech and to examine possibilities to adapt speech technology algorithms accordingly [4-6].

    Figure 1: Beta weights and multiple R for selected acoustic parameters predicting emotion judgments in multiple regressions (adapted from [7]).

    However, one central issue remains largely unexplored: the question of the universality vs. the cultural and/or linguistic relativity of the emotion effects on Vocal production.
    This is obviously of major importance for the development and marketing of speech technology products that have been enhanced to allow the factoring out of emotional or attitudinal variability in recognition and verification, or to produce appropriate acoustic features in synthesis. Obviously, little customization of the respective algorithms would be necessary if emotion effects on the voice were universal, whereas culturally or linguistically relative effects would require special adaptations for specific languages or countries. It is of major practical interest, then, to determine whether there are systematic and differentiated effects of different emotions on voice and speech characteristics, whether these are universal or culturally/linguistically relative, and whether intercultural recognition of emotion cues in the voice is possible. The apparent predominance of the activation dimension in Vocal emotion Expression has often led critics to suggest that judges’ ability to infer emotion from the voice might be limited to basing inference systematically on perceived activation. In consequence, one might expect that if this single, overriding dimension is controlled for, there should be little evidence for judges’ ability to recognize emotion from purely Vocal, nonverbal cues. However, evidence from a recent study by Banse & Scherer [7] that used a larger than customary set of acoustic variables, as well as more controlled elicitation techniques, points to the existence of both activation and valence cues to emotion in the voice. Twelve professional actors were asked to portray fourteen different emotions using two standard, meaningless sentences. Their portrayals were based on scenarios provided for each emotion. The actors were asked to use Stanislavski techniques, i.e. to attempt to immerse themselves in the scenario and feel the emotion in the process of portraying it.
    The resulting speech samples were presented to expert judges to eliminate unsuccessful or unnatural Expressions. [Figure 1 plots, for each of the fourteen emotions (Hot Anger, Cold Anger, Panic Fear, Anxiety, Desperation, Sadness, Elation, Happiness, Interest, Boredom, Shame, Pride, Disgust, Contempt), the parameters Mean F0, SD F0, Mean Energy, Duration of Voiced Segments, Hammarberg Index, % Energy > 1 kHz, Spectral Slope > 1 kHz, and Multiple R.] In the next step, naive judges were used to assess the accuracy with which the different speech samples could be recognized with respect to the expressed emotion. 224 portrayals that were recognized with a sufficient level of accuracy were chosen for acoustic analysis. The results of these analyses suggest the existence of emotion-specific Vocal profiles that differentiate the different emotions not only on a basic arousal or activation dimension but also with respect to qualitative differences.

    Figure 2: Accuracy percentages for emotion classification attained by human judges, jackknifing, and discriminant analysis (adapted from [7]).

    These results explain why studies investigating the ability of listeners to recognize different emotions from a wide variety of standardized Vocal stimuli achieve a level of accuracy that largely exceeds what would be expected by chance [8]. The assumption is, of course, that the judges' inference is in fact based on the acoustic cues that determine the Vocal profiles for the different emotions. Figure 1, which plots the beta weights from multiple regressions of Vocal features on emotion judgment, shows that this is indeed the case: a sizeable proportion of the variance in the judgments can be explained by the acoustic cues that are prominently involved in differentiating Vocal emotion profiles.
    This interpretation is bolstered by a comparison of the confusion matrices produced by human judges with those found for statistical techniques of classification on the basis of predictor variables, namely jackknifing and discriminant analysis. Figure 2 shows the respective accuracy percentages for these three types of classification. Not only is the pattern of accuracy coefficients across emotions (the diagonals in the confusion matrices as shown in Figure 2) highly comparable (with a few interesting exceptions), but the off-diagonal errors are also very similar across judges and classification algorithms (see [7] for details). This provides further evidence for the assumption that judges base their inference of emotion in the voice on acoustic profiles that are characteristic for specific emotions. Furthermore, many aspects of the profiles identified by Banse & Scherer were predicted by Scherer's Component Process Model of emotion [9], which assumes a universal, psychobiological mechanism (push effects) for the Vocal Expression of emotion, even though it also allows for sociocultural variations (pull effects). This suggests that many of the emotion effects on the voice should be universal. Unfortunately, there is no study to date that has examined these effects across speakers from different languages and cultures. In the meantime, it may be useful to examine whether judges from different countries can identify Vocal Expressions from another language/culture. Whereas the perception of emotion from facial Expression has been extensively studied cross-culturally, little is known about the ability of judges from different cultures, speaking different languages, to infer emotion from voice and speech encoded in another language by members of other cultures.
    This contribution summarizes the results of a series of recognition studies conducted by Scherer, Banse, and Wallbott [10] in nine different countries in Europe, the United States, and Asia, using Vocal emotion portrayals of content-free sentences produced by professional German actors.
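
    The confusion-matrix comparison described above can be sketched as follows; the two matrices are invented toy values, and only the comparison logic (diagonal accuracies and the similarity of their pattern across emotions) mirrors the analysis.

    ```python
    import numpy as np

    # Toy confusion matrices (rows = true emotion, columns = judged emotion)
    # for human judges and a statistical classifier; values are illustrative.
    emotions = ["anger", "fear", "sadness", "joy"]
    human = np.array([[70, 10, 10, 10],
                      [15, 65, 10, 10],
                      [ 5, 10, 75, 10],
                      [10, 15, 10, 65]]) / 100
    classifier = np.array([[68, 12, 10, 10],
                           [14, 66, 12,  8],
                           [ 6, 12, 72, 10],
                           [12, 14, 10, 64]]) / 100

    # Per-emotion accuracy = diagonal of each confusion matrix
    acc_human = np.diag(human)
    acc_model = np.diag(classifier)

    # Similarity of the two accuracy patterns across emotions
    pattern_r = np.corrcoef(acc_human, acc_model)[0, 1]
    chance = 1 / len(emotions)
    ```

    Accuracies well above chance for both classifiers, plus a high correlation between the two diagonal patterns (and, in the full analysis, similar off-diagonal errors), is the kind of evidence the paper takes to show that human judges rely on the same emotion-specific acoustic profiles.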