Facial Expression

The Experts below are selected from a list of 89,091 Experts worldwide, ranked by the ideXlab platform.

Maja Pantic - One of the best experts on this subject based on the ideXlab platform.

  • Automated Facial Expression Analysis
    2020
    Co-Authors: Maja Pantic, Léon J. M. Rothkrantz
    Abstract:

    The Human Emotion Recognition Clips Utilised Expert System (HERCULES) was designed to recognise and interpret the Facial Expressions of an observed person automatically [1]. However, input to HERCULES had to be supplied manually. By integrating HERCULES into the Integrated System for Facial Expression Recognition (ISFER), a completely automatic process of Facial Expression analysis has been achieved [2]. ISFER forms part of the Automated System for Non-verbal Communication project [3], ongoing at the Knowledge Based Systems department of Delft University of Technology. The theoretical formulation of the implemented Facial Expression recognition is drawn from FACS [4]. The interpretation of the recognised Facial Expression is currently based [1][7] on the recognition of the six basic emotions defined by Ekman [5][6]: happiness, sadness, fear, surprise, disgust and anger. Validating the implemented knowledge and performing Facial Expression analysis in a completely automated way are the main topics of this paper.
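
    Since the interpretation stage maps recognised FACS action units (AUs) to Ekman's six basic emotions, a toy rule-based mapper can illustrate the idea. In this minimal sketch, the AU combinations are commonly cited emotion prototypes from the FACS literature, not the actual HERCULES/ISFER rule base, and all names are illustrative.

        # Toy FACS-style interpreter: maps a set of detected action units (AUs)
        # to one of Ekman's six basic emotions. The prototype AU combinations
        # are commonly cited in the FACS literature and are an assumption,
        # not the rule base used by HERCULES/ISFER.

        EMOTION_PROTOTYPES = {
            "happiness": {6, 12},          # cheek raiser + lip corner puller
            "sadness":   {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
            "surprise":  {1, 2, 5, 26},    # brow raisers + upper lid raiser + jaw drop
            "fear":      {1, 2, 4, 5, 20}, # brow movement + upper lid raiser + lip stretcher
            "anger":     {4, 5, 7, 23},    # brow lowerer + lid tightener + lip tightener
            "disgust":   {9, 15, 16},      # nose wrinkler + lip corner depressor + lower lip depressor
        }

        def interpret_aus(detected_aus: set[int]) -> str:
            """Return the emotion whose AU prototype best overlaps the detected AUs."""
            def score(prototype: set[int]) -> float:
                return len(prototype & detected_aus) / len(prototype)
            best = max(EMOTION_PROTOTYPES, key=lambda e: score(EMOTION_PROTOTYPES[e]))
            return best if score(EMOTION_PROTOTYPES[best]) > 0 else "neutral"

        print(interpret_aus({6, 12}))        # happiness
        print(interpret_aus({1, 2, 5, 26}))  # surprise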

  • Meta-analysis of the first Facial Expression recognition challenge
    IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2012
    Co-Authors: Michel Valstar, Bihan Jiang, Marc Mehu, Maja Pantic, Klaus R. Scherer
    Abstract:

    Automatic Facial Expression recognition has been an active topic in computer science for over two decades, in particular Facial Action Coding System (FACS) action unit (AU) detection and classification of a number of discrete emotion states from Facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used Facial Expression databases. However, the lack of a commonly accepted evaluation protocol and, typically, the lack of sufficient details needed to reproduce reported results make it difficult to compare systems, which in turn hinders the progress of the field. A periodical challenge in Facial Expression recognition would allow such comparisons on a level playing field. It would provide insight into how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of Facial Expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, the evaluation protocol, and the results attained in two subchallenges: AU detection and classification of Facial Expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of Facial Expression recognition in general and on possible future challenges in particular.

  • Regression-based multi-view Facial Expression recognition
    Proceedings - International Conference on Pattern Recognition, 2010
    Co-Authors: Ognjen Rudovic, Ioannis Patras, Maja Pantic
    Abstract:

    We present a regression-based scheme for multi-view Facial Expression recognition based on 2D geometric features. We address the problem by mapping Facial points (e.g. mouth corners) from non-frontal views to the frontal view, where recognition of the Expressions can be performed using a state-of-the-art Facial Expression recognition method. To learn the mapping functions we investigate four regression models: Linear Regression (LR), Support Vector Regression (SVR), Relevance Vector Regression (RVR) and Gaussian Process Regression (GPR). Our extensive experiments on the CMU Multi-PIE Facial Expression database show that the proposed scheme outperforms view-specific classifiers while using considerably less training data.
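
    At the heart of the scheme is a learned mapping from non-frontal to frontal landmark coordinates. Below is a minimal sketch of that idea using scikit-learn regressors on synthetic landmark data; the synthetic distortion and the model settings are assumptions for illustration, not the paper's experimental setup.

        # Minimal sketch: learn a per-view mapping from non-frontal 2D landmarks
        # to frontal 2D landmarks, in whose space expression classification would run.
        # Synthetic data and model choices are illustrative assumptions only.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.multioutput import MultiOutputRegressor
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        n_samples, n_points = 200, 20          # 20 facial points per face
        frontal = rng.normal(size=(n_samples, 2 * n_points))   # flattened (x, y) pairs

        # Simulate a non-frontal view as an unknown linear distortion plus noise.
        A = rng.normal(size=(2 * n_points, 2 * n_points)) * 0.1 + np.eye(2 * n_points)
        nonfrontal = frontal @ A + rng.normal(scale=0.01, size=frontal.shape)

        # Two of the four regression models studied in the paper (LR and SVR).
        lr = LinearRegression().fit(nonfrontal, frontal)
        svr = MultiOutputRegressor(SVR(kernel="rbf")).fit(nonfrontal, frontal)

        test = nonfrontal[:5]
        print("LR  error:", np.abs(lr.predict(test) - frontal[:5]).mean())
        print("SVR error:", np.abs(svr.predict(test) - frontal[:5]).mean())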

  • Web-based database for Facial Expression analysis
    IEEE International Conference on Multimedia and Expo (ICME), 2005
    Co-Authors: Maja Pantic, Ron Rademaker, Michel Valstar, Leendert Maat
    Abstract:

    In the last decade, automatic analysis of Facial Expressions has become a central topic in machine vision research. Nonetheless, there is a glaring lack of a comprehensive, readily accessible reference set of face images that could serve as a basis for benchmarking efforts in the field. The lack of an easily accessible, suitable, common testing resource is the major impediment to comparing and extending work on automatic Facial Expression analysis. In this paper, we discuss a number of issues that make creating a benchmark Facial Expression database difficult. We then present the MMI Facial Expression database, which includes more than 1500 samples of both static images and image sequences of faces in frontal and in profile view, displaying various Expressions of emotion and single and multiple Facial muscle activations. It has been built as a Web-based direct-manipulation application, allowing easy access to and easy search of the available images. This database represents the most comprehensive reference set of images for studies on Facial Expression analysis to date.

Balakrishnan Prabhakaran - One of the best experts on this subject based on the ideXlab platform.

  • Real-time Facial Expression recognition on smartphones
    Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision (WACV), 2015
    Co-Authors: Myunghoon Suk, Balakrishnan Prabhakaran
    Abstract:

    Temporal segmentation of real-time video is an important part of an automatic Facial Expression recognition system. Many studies of Facial Expression recognition have been carried out in restricted experimental settings, such as on pre-segmented video sets. In this paper, we present a real-time temporal video segmentation approach for automatic Facial Expression recognition on a smartphone. The proposed system uses a Finite State Machine (FSM) to segment real-time video into temporal phases running from the neutral Expression to the peak of an Expression. The FSM uses scores based on Lucas-Kanade optical flow vectors for state transitions, adapting to the varying speeds of Facial Expressions. Whereas HMM-based and hybrid HMM approaches to time-series data require sampling times, the proposed system runs without any sampling-time delay. The system performs Facial Expression recognition with Support Vector Machines (SVMs) at every apex state after automatic temporal segmentation. A mobile app implementing our approach runs on a Samsung Galaxy S3 at 3.7 fps, and the accuracy of real-time mobile emotion recognition is about 70.6% for the 6 basic emotions, measured on 5 subjects who are not professional actors.
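
    The segmentation idea can be sketched compactly: a per-frame motion score drives FSM transitions from neutral through onset to apex, where classification would run. The sketch below uses OpenCV's dense Farneback flow for the score; the state set, the thresholds, and the substitution of Farneback for the paper's Lucas-Kanade-based scores are illustrative assumptions.

        # Sketch of FSM-based temporal segmentation driven by an optical-flow
        # motion score. States, thresholds, and the Farneback flow (instead of
        # the paper's Lucas-Kanade-based scores) are illustrative assumptions.
        import cv2
        import numpy as np

        NEUTRAL, ONSET, APEX = "neutral", "onset", "apex"
        RISE_T, FALL_T = 1.0, 0.3   # assumed motion-score thresholds

        def motion_score(prev_gray, gray):
            """Mean optical-flow magnitude between two consecutive grayscale frames."""
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            return np.linalg.norm(flow, axis=2).mean()

        def segment(frames):
            """Yield (frame_index, state); 'apex' is where SVM classification would run."""
            state, prev = NEUTRAL, None
            for i, frame in enumerate(frames):
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                if prev is not None:
                    s = motion_score(prev, gray)
                    if state == NEUTRAL and s > RISE_T:
                        state = ONSET                 # expression starts building
                    elif state == ONSET and s < FALL_T:
                        state = APEX                  # motion settles: expression peak
                    elif state == APEX and s > RISE_T:
                        state = NEUTRAL               # face relaxes back toward neutral
                prev = gray
                yield i, state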

Ran He - One of the best experts on this subject based on the ideXlab platform.

  • Geometry-guided adversarial Facial Expression synthesis
    ACM Multimedia, 2018
    Co-Authors: Lingxiao Song, Zhihe Lu, Ran He
    Abstract:

    Facial Expression synthesis has drawn much attention in computer graphics and pattern recognition, and has been widely used in face animation and recognition. However, it remains challenging due to large and non-linear face geometry variations with high-level semantic meaning. This paper proposes a Geometry-Guided Generative Adversarial Network (G2-GAN) for continuously-adjustable and identity-preserving Facial Expression synthesis. We employ Facial geometry (fiducial points) as a controllable condition to guide Facial texture synthesis for a specific Expression. A pair of generative adversarial subnetworks is jointly trained on opposite tasks: Expression removal and Expression synthesis. The paired networks form a mapping cycle between the neutral Expression and arbitrary Expressions, which allows the proposed approach to be trained on unpaired data. The paired networks also facilitate other applications such as face transfer, Expression interpolation and Expression-invariant face recognition. Experimental results on several Facial Expression databases show that our method generates compelling perceptual results on different Expression-editing tasks.
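
    The removal/synthesis pair and its mapping cycle amount to a geometry-conditioned cycle-consistency constraint. A minimal PyTorch sketch of that training signal follows; the toy generator, tensor shapes, and omission of the adversarial terms are assumptions for illustration, not the paper's implementation.

        # Sketch of the geometry-conditioned mapping cycle: an expression-removal
        # generator G_rm and an expression-synthesis generator G_syn, tied by a
        # cycle-consistency loss. Architectures and shapes are assumptions.
        import torch
        import torch.nn as nn

        class CondGenerator(nn.Module):
            """Toy generator: image channels concatenated with a fiducial-point heatmap."""
            def __init__(self, img_ch=3, geo_ch=1):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(img_ch + geo_ch, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, img_ch, 3, padding=1), nn.Tanh(),
                )

            def forward(self, img, geometry):
                return self.net(torch.cat([img, geometry], dim=1))

        G_syn = CondGenerator()   # neutral face + target geometry -> expressive face
        G_rm = CondGenerator()    # expressive face + neutral geometry -> neutral face

        neutral = torch.randn(4, 3, 64, 64)       # toy batch of neutral faces
        geo_expr = torch.randn(4, 1, 64, 64)      # target-expression fiducial heatmaps
        geo_neutral = torch.randn(4, 1, 64, 64)   # neutral fiducial heatmaps

        expressive = G_syn(neutral, geo_expr)
        recovered = G_rm(expressive, geo_neutral)
        cycle_loss = nn.functional.l1_loss(recovered, neutral)  # neutral -> expr -> neutral
        cycle_loss.backward()  # adversarial terms from the paired discriminators omitted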

  • Geometry-guided adversarial Facial Expression synthesis
    arXiv: Computer Vision and Pattern Recognition, 2017
    Co-Authors: Lingxiao Song, Zhihe Lu, Ran He
    Abstract:

    Facial Expression synthesis has drawn much attention in computer graphics and pattern recognition, and has been widely used in face animation and recognition. However, it remains challenging due to large and non-linear face geometry variations with high-level semantic meaning. This paper proposes a Geometry-Guided Generative Adversarial Network (G2-GAN) for photo-realistic and identity-preserving Facial Expression synthesis. We employ Facial geometry (fiducial points) as a controllable condition to guide Facial texture synthesis for a specific Expression. A pair of generative adversarial subnetworks is jointly trained on opposite tasks: Expression removal and Expression synthesis. The paired networks form a mapping cycle between the neutral Expression and arbitrary Expressions, which also facilitates other applications such as face transfer and Expression-invariant face recognition. Experimental results show that our method can generate compelling perceptual results on various Facial Expression databases. An Expression-invariant face recognition experiment further demonstrates the advantages of the proposed method.

Mengchu Zhou - One of the best experts on this subject based on the ideXlab platform.

  • A Facial Expression emotion recognition based human-robot interaction system
    IEEE/CAA Journal of Automatica Sinica, 2017
    Co-Authors: Z. Liu, J.-y. Xu, W. Cao, Ri Zhang, Luefeng Chen, Jianping Xu, Mengtian Zhou, Liqun Chen, Rui Zhang, M Wu, Mengchu Zhou, J. Mao
    Abstract:

    A Facial Expression emotion recognition based human-robot interaction (FEER-HRI) system is proposed, for which a four-layer system framework is designed. The FEER-HRI system enables robots not only to recognize human emotions, but also to generate Facial Expressions to adapt to human emotions. A Facial emotion recognition method based on 2D-Gabor features, the uniform local binary pattern (LBP) operator, and a multiclass extreme learning machine (ELM) classifier is presented and applied to real-time Facial Expression recognition on robots. The robots' Facial Expressions are represented by simple cartoon symbols and displayed on an LED screen mounted on the robots, which can be easily understood by humans. Four scenarios, i.e., guiding, entertainment, home service, and scene simulation, are performed in the human-robot interaction experiment, in which smooth communication is realized by Facial Expression recognition of humans and Facial Expression generation by robots within 2 seconds. As prospective applications, the FEER-HRI system can be applied to home service, smart homes, safe driving, and so on.
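
    The recognition pipeline (2D-Gabor filtering, uniform LBP, multiclass ELM) can be sketched as follows. The snippet uses scikit-image for the uniform LBP and a hand-rolled single-hidden-layer ELM, and it skips the Gabor pre-filtering step; all parameter values are assumptions rather than the paper's settings.

        # Sketch of the described pipeline: uniform LBP features plus a minimal
        # extreme learning machine (ELM). The omitted Gabor pre-filtering and all
        # parameter values are illustrative assumptions, not the paper's settings.
        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_histogram(gray_img, points=8, radius=1):
            """Uniform LBP histogram over a grayscale face image."""
            lbp = local_binary_pattern(gray_img, points, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
            return hist

        class ELM:
            """Single-hidden-layer ELM: random input weights, least-squares output weights."""
            def __init__(self, n_hidden=100, seed=0):
                self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

            def fit(self, X, y):
                n_classes = int(y.max()) + 1
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = np.tanh(X @ self.W + self.b)                  # random hidden layer
                T = np.eye(n_classes)[y]                          # one-hot targets
                self.beta = np.linalg.pinv(H) @ T                 # closed-form solution
                return self

            def predict(self, X):
                return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

        # Toy usage on random "faces" with 6 emotion classes.
        rng = np.random.default_rng(1)
        faces = rng.integers(0, 256, size=(60, 48, 48)).astype(np.uint8)
        labels = rng.integers(0, 6, size=60)
        X = np.stack([lbp_histogram(f) for f in faces])
        clf = ELM().fit(X, labels)
        print(clf.predict(X[:5]), labels[:5])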

  • Image ratio features for Facial Expression recognition application
    IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2010
    Co-Authors: Mingli Song, Zicheng Liu, Dacheng Tao, Xuelong Li, Mengchu Zhou
    Abstract:

    Video-based Facial Expression recognition is a challenging problem in computer vision and human-computer interaction. To address this problem, texture features have been extracted and widely used, because they can capture the image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high-gradient-component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve Facial Expression recognition accuracy based on image ratio features, we combine them with Facial animation parameters (FAPs), which describe the geometric motions of Facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and that the combination of image ratio features and FAPs outperforms either feature alone. In addition, we study asymmetric Facial Expressions using our own Facial Expression database and demonstrate the superior performance of our combined Expression recognition system.
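
    The intuition behind intensity-ratio features is that, under a Lambertian image model I = albedo × shading, dividing an expressive frame by an aligned neutral frame cancels the albedo term. The sketch below demonstrates this with NumPy; the Lambertian reading, the epsilon guard, and the assumption of perfectly aligned frames are illustrative simplifications, not the paper's exact feature definition.

        # Sketch of a per-pixel intensity-ratio image between an expressive frame
        # and an aligned neutral frame. Under a Lambertian model I = albedo * shading,
        # the ratio cancels albedo, which is why ratio features resist albedo changes.
        # Perfect alignment and the epsilon guard are illustrative assumptions.
        import numpy as np

        def ratio_image(expressive, neutral, eps=1e-3):
            """Per-pixel ratio of two aligned grayscale frames in [0, 1]."""
            return (expressive + eps) / (neutral + eps)

        rng = np.random.default_rng(0)
        albedo = rng.uniform(0.2, 0.9, size=(48, 48))      # surface reflectance
        shading_neutral = np.full((48, 48), 0.8)
        shading_expressive = shading_neutral * rng.uniform(0.7, 1.3, size=(48, 48))

        neutral = albedo * shading_neutral                 # synthetic neutral frame
        expressive = albedo * shading_expressive           # synthetic expressive frame

        r = ratio_image(expressive, neutral)
        # The ratio tracks the shading change and is (nearly) independent of albedo.
        print(np.abs(r - shading_expressive / shading_neutral).max())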