Emotion Recognition

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 33,948 Experts worldwide, ranked by the ideXlab platform

Gang Yang - One of the best experts on this subject based on the ideXlab platform.

  • ICMR - Source Separation Improves Music Emotion Recognition
    Proceedings of International Conference on Multimedia Retrieval - ICMR '14, 2014
    Co-Authors: Jieping Xu, Xirong Li, Yun Hao, Gang Yang
    Abstract:

    Despite the impressive progress in music emotion recognition, it remains unclear which aspect of a song, the singing voice or the accompanying music, carries more emotional information. As an initial attempt to answer this question, we introduce source separation into a standard music emotion recognition system. This allows us to compare systems with and without source separation, and consequently to reveal the influence of the singing voice and the accompanying music on emotion recognition. Classification experiments on a set of 267 songs with last.fm annotations verify the new finding that source separation improves music emotion recognition.
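
    The with/without-separation comparison protocol described above can be sketched in a few lines of Python. Everything here is illustrative: the synthetic "songs", the two-number feature vector, and the nearest-centroid classifier are stand-ins for the paper's actual separation algorithm, audio features, and classifier; only the evaluation structure (same classifier, mixture features vs. separated-stem features) is the point.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def make_song(label, n=200):
        # toy "song": a voice stem carrying the class signal plus a noisy
        # accompaniment stem with a random per-clip offset
        voice = (1.0 if label else -1.0) + 0.1 * rng.standard_normal(n)
        accomp = 2.0 * rng.standard_normal() + rng.standard_normal(n)
        return voice, accomp

    def feat(x):
        # stand-in for real audio features such as MFCCs
        return np.array([x.mean(), x.std()])

    def dataset(k):
        labels = rng.integers(0, 2, size=k)
        return [make_song(l) for l in labels], labels

    def nearest_centroid(tr_X, tr_y, te_X):
        cents = np.stack([tr_X[tr_y == c].mean(axis=0) for c in (0, 1)])
        d = ((te_X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    def accuracy(featurise):
        tr_songs, tr_y = dataset(200)
        te_songs, te_y = dataset(100)
        tr_X = np.array([featurise(s) for s in tr_songs])
        te_X = np.array([featurise(s) for s in te_songs])
        return float((nearest_centroid(tr_X, tr_y, te_X) == te_y).mean())

    # system 1: features of the mixture; system 2: features of each stem
    mixed = accuracy(lambda s: feat(s[0] + s[1]))
    separated = accuracy(lambda s: np.concatenate([feat(s[0]), feat(s[1])]))
    print(f"mixture features: {mixed:.2f}  separated features: {separated:.2f}")
    ```

    On this toy data the separated-stem system scores higher because the accompaniment's per-clip offset corrupts the mixture features, which mirrors the paper's qualitative finding without reproducing its numbers.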

Abhinav Dhall - One of the best experts on this subject based on the ideXlab platform.

  • A Survey on Automatic Multimodal Emotion Recognition in the Wild
    2021
    Co-Authors: Garima Sharma, Abhinav Dhall
    Abstract:

    Affective computing has been an active area of research for the past two decades. One of the major components of affective computing is automatic emotion recognition. This chapter gives a detailed overview of different emotion recognition techniques and the predominantly used signal modalities. The discussion starts with the different emotion representations and their limitations. Given that affective computing is a data-driven research area, a thorough comparison of standard emotion-labelled databases is presented. Feature extraction and analysis techniques for emotion recognition are then presented according to the source of the data. Further, applications of automatic emotion recognition are discussed along with current and important issues such as privacy and fairness.
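
    One common way the modalities covered by such surveys are combined is decision-level (late) fusion: each per-modality classifier emits a probability distribution over emotions, and the distributions are averaged with modality weights. The sketch below is hypothetical; the scores, weights, and emotion labels are made up for illustration, and only numpy is assumed.

    ```python
    import numpy as np

    EMOTIONS = ["anger", "happy", "neutral", "sad"]

    def fuse(scores, weights):
        """scores: {modality: probability vector over EMOTIONS};
        weights: {modality: weight}, assumed to sum to 1."""
        fused = sum(w * np.asarray(scores[m]) for m, w in weights.items())
        return EMOTIONS[int(np.argmax(fused))], fused

    # made-up per-modality outputs for one video clip
    scores = {
        "face":  [0.10, 0.60, 0.20, 0.10],  # video model: fairly sure of "happy"
        "audio": [0.30, 0.35, 0.15, 0.20],  # audio model: less certain
    }
    label, fused = fuse(scores, {"face": 0.6, "audio": 0.4})
    print(label)  # → happy
    ```

    Weighting the more reliable modality higher is the usual design choice; learned fusion (e.g. training a classifier on the concatenated scores) is a common alternative.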

  • Emotion Recognition in the Wild Challenge 2016
    International Conference on Multimodal Interfaces, 2016
    Co-Authors: Abhinav Dhall, Jyoti Joshi, Roland Goecke, Tom Gedeon
    Abstract:

    The fourth Emotion Recognition in the Wild (EmotiW) challenge is a grand challenge at the ACM International Conference on Multimodal Interaction 2016, Tokyo. EmotiW is a series of benchmarking and competition efforts for researchers working in the area of automatic emotion recognition in the wild. The fourth EmotiW has two sub-challenges: video-based emotion recognition (VReco) and group-level emotion recognition (GReco). The VReco sub-challenge is being run for the fourth time, and GReco is a new sub-challenge this year.

  • Emotion Recognition in the Wild Challenge 2013
    International Conference on Multimodal Interfaces, 2013
    Co-Authors: Abhinav Dhall, Jyoti Joshi, Roland Goecke, Michael Wagner, Tom Gedeon
    Abstract:

    Emotion recognition is a very active field of research. The Emotion Recognition in the Wild Challenge and Workshop (EmotiW) 2013 Grand Challenge consists of an audio-video-based emotion classification challenge that mimics real-world conditions. Traditionally, emotion recognition has been performed on laboratory-controlled data. While undoubtedly worthwhile at the time, such laboratory-controlled data poorly represents the environment and conditions faced in real-world situations. The goal of this Grand Challenge is to define a common platform for the evaluation of emotion recognition methods in real-world conditions. The database in the 2013 challenge is Acted Facial Expressions in the Wild (AFEW), which has been collected from movies showing close-to-real-world conditions.

Tom Gedeon - One of the best experts on this subject based on the ideXlab platform.

  • Emotion Recognition in the Wild Challenge 2016
    International Conference on Multimodal Interfaces, 2016
    Co-Authors: Abhinav Dhall, Jyoti Joshi, Roland Goecke, Tom Gedeon
    Abstract:

    The fourth Emotion Recognition in the Wild (EmotiW) challenge is a grand challenge at the ACM International Conference on Multimodal Interaction 2016, Tokyo. EmotiW is a series of benchmarking and competition efforts for researchers working in the area of automatic emotion recognition in the wild. The fourth EmotiW has two sub-challenges: video-based emotion recognition (VReco) and group-level emotion recognition (GReco). The VReco sub-challenge is being run for the fourth time, and GReco is a new sub-challenge this year.

  • Emotion Recognition in the Wild Challenge 2013
    International Conference on Multimodal Interfaces, 2013
    Co-Authors: Abhinav Dhall, Jyoti Joshi, Roland Goecke, Michael Wagner, Tom Gedeon
    Abstract:

    Emotion recognition is a very active field of research. The Emotion Recognition in the Wild Challenge and Workshop (EmotiW) 2013 Grand Challenge consists of an audio-video-based emotion classification challenge that mimics real-world conditions. Traditionally, emotion recognition has been performed on laboratory-controlled data. While undoubtedly worthwhile at the time, such laboratory-controlled data poorly represents the environment and conditions faced in real-world situations. The goal of this Grand Challenge is to define a common platform for the evaluation of emotion recognition methods in real-world conditions. The database in the 2013 challenge is Acted Facial Expressions in the Wild (AFEW), which has been collected from movies showing close-to-real-world conditions.

Jieping Xu - One of the best experts on this subject based on the ideXlab platform.

  • ICMR - Source Separation Improves Music Emotion Recognition
    Proceedings of International Conference on Multimedia Retrieval - ICMR '14, 2014
    Co-Authors: Jieping Xu, Xirong Li, Yun Hao, Gang Yang
    Abstract:

    Despite the impressive progress in music emotion recognition, it remains unclear which aspect of a song, the singing voice or the accompanying music, carries more emotional information. As an initial attempt to answer this question, we introduce source separation into a standard music emotion recognition system. This allows us to compare systems with and without source separation, and consequently to reveal the influence of the singing voice and the accompanying music on emotion recognition. Classification experiments on a set of 267 songs with last.fm annotations verify the new finding that source separation improves music emotion recognition.

Shashidhar G. Koolagudi - One of the best experts on this subject based on the ideXlab platform.

  • Speech Emotion Recognition: A Review
    SpringerBriefs in Electrical and Computer Engineering, 2012
    Co-Authors: Sreenivasa Rao Krothapalli, Shashidhar G. Koolagudi
    Abstract:

    This chapter presents the literature related to the databases, features, and pattern classifiers used for emotion recognition from speech. Different types of emotional databases, such as simulated, elicited, and natural, are critically reviewed from the research point of view. A review of existing emotion recognition systems developed using excitation source, vocal tract system, and prosodic features is briefly presented. Basic pattern classification models used for discriminating emotions are discussed in brief. Finally, the chapter concludes with the motivation and scope of the work presented in this book.

  • Emotion Recognition from Speech: A Review
    International Journal of Speech Technology, 2012
    Co-Authors: Shashidhar G. Koolagudi
    Abstract:

    Emotion recognition from speech has emerged as an important research area in the recent past. In this regard, a review of existing work on emotional speech processing is useful for carrying out further research. In this paper, the recent literature on speech emotion recognition is presented, considering the issues related to emotional speech corpora, the different types of speech features, and the models used for recognition of emotions from speech. Thirty-two representative speech databases are reviewed from the point of view of their language, number of speakers, number of emotions, and purpose of collection. The issues related to emotional speech databases used in emotional speech recognition are also briefly discussed. Literature on the different features used in the task of emotion recognition from speech is presented. The importance of choosing different classification models is discussed along with the review. Important issues to be considered for further emotion recognition research, in general and specific to the Indian context, are highlighted wherever necessary.

  • IC3 - Text Independent Emotion Recognition Using Spectral Features
    Communications in Computer and Information Science, 2011
    Co-Authors: Rahul Chauhan, Jainath Yadav, Shashidhar G. Koolagudi, K. Sreenivasa Rao
    Abstract:

    This paper presents text-independent emotion recognition from speech using mel-frequency cepstral coefficients (MFCCs) along with their velocity and acceleration coefficients. In this work, the simulated Hindi emotional speech corpus IITKGP-SEHSC is used for conducting the emotion recognition studies. The emotions considered are anger, disgust, fear, happy, neutral, sad, sarcastic, and surprise. Gaussian mixture models are used for developing the emotion recognition models. Emotion recognition performance for the text-independent and text-dependent cases is compared. Emotion recognition rates of around 72% and 82% are observed for the text-independent and text-dependent cases, respectively.
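
    A minimal numpy sketch of the classification scheme described above: one generative model per emotion, with a test utterance assigned to the model giving the highest total log-likelihood over its frames. For brevity each "GMM" here is a single diagonal-covariance Gaussian, and the 13-dimensional feature frames are synthetic stand-ins for real MFCC (plus velocity and acceleration) vectors, so this is an illustration of the idea rather than the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def fit(frames):
        # per-dimension mean and variance of the training frames
        return frames.mean(axis=0), frames.var(axis=0) + 1e-6

    def loglik(frames, mean, var):
        # sum of per-frame diagonal-Gaussian log densities
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (frames - mean) ** 2 / var)

    # two toy "emotions", each with characteristic 13-dim feature statistics
    means = {"anger": np.full(13, 1.0), "sad": np.full(13, -1.0)}
    models = {e: fit(m + 0.5 * rng.standard_normal((300, 13)))
              for e, m in means.items()}

    def classify(frames):
        # pick the emotion whose model best explains the utterance
        return max(models, key=lambda e: loglik(frames, *models[e]))

    test_utt = means["anger"] + 0.5 * rng.standard_normal((80, 13))
    print(classify(test_utt))  # → anger
    ```

    A real system would replace `fit` with multi-component GMM training (e.g. expectation-maximisation) and feed in MFCC frames extracted from speech, but the decision rule, maximum log-likelihood across per-emotion models, is the same.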