Facial Feature

The experts below are selected from a list of 22,734 experts worldwide, ranked by the ideXlab platform.

Qiang Ji - One of the best experts on this subject based on the ideXlab platform.

  • Simultaneous facial feature tracking and facial expression recognition
    IEEE Transactions on Image Processing, 2013
    Co-Authors: Yongqiang Li, Shangfei Wang, Yongping Zhao, Qiang Ji
    Abstract:

    The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. Facial activities are characterized at three levels. First, at the bottom level, facial feature points around each facial component (e.g., eyebrow, mouth) capture detailed face-shape information. Second, at the middle level, facial action units, defined in the Facial Action Coding System, represent the contraction of a specific set of facial muscles (e.g., lid tightener, eyebrow raiser). Finally, at the top level, six prototypical facial expressions represent global facial muscle movement and are commonly used to describe human emotional states. In contrast to mainstream approaches, which usually focus on only one or two levels of facial activity and track (or recognize) them separately, this paper introduces a unified probabilistic framework based on a dynamic Bayesian network to simultaneously and coherently represent the evolution of facial activity at all levels, the interactions between levels, and their observations. Advanced machine learning methods are introduced to learn the model from both training data and subjective prior knowledge. Given the model and measurements of facial motion, all three levels of facial activity are recognized simultaneously through probabilistic inference. Extensive experiments illustrate the feasibility and effectiveness of the proposed model on all three levels of facial activity.
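
The three-level structure lends itself to a toy illustration. The sketch below is not the authors' dynamic Bayesian network: it is a minimal static Bayesian network with one expression node, two hypothetical action-unit (AU) nodes, and a noisy AU detector, all with made-up probability tables, just to show how evidence at the bottom level propagates to an expression posterior at the top.

```python
# Toy three-level inference: expression -> action units -> noisy detections.
# All probability tables are illustrative assumptions, not values from the paper.
import numpy as np

EXPRESSIONS = ["happy", "sad", "surprise"]          # top level (toy subset of six)
p_expr = np.array([1 / 3, 1 / 3, 1 / 3])            # prior P(E)

# Middle level: P(AU_k = 1 | E) for two hypothetical AUs
# (rows: expressions; cols: "lip corner puller", "brow lowerer").
p_au_given_expr = np.array([
    [0.9, 0.1],   # happy
    [0.1, 0.8],   # sad
    [0.3, 0.2],   # surprise
])

# Bottom level: measurement model P(detector fires | AU state).
p_obs_given_au = {1: 0.85, 0: 0.15}

def posterior_expression(observations):
    """P(E | AU observations), marginalizing each AU exactly."""
    post = np.zeros(len(EXPRESSIONS))
    for e, prior in enumerate(p_expr):
        like = 1.0
        for k, obs in enumerate(observations):
            p_on = p_au_given_expr[e, k]
            p_fire_on = p_obs_given_au[1] if obs else 1 - p_obs_given_au[1]
            p_fire_off = p_obs_given_au[0] if obs else 1 - p_obs_given_au[0]
            like *= p_on * p_fire_on + (1 - p_on) * p_fire_off
        post[e] = prior * like
    return post / post.sum()

# Detector fired for AU 1 but not AU 2: the posterior favors "happy".
print(dict(zip(EXPRESSIONS, posterior_expression([1, 0]).round(3))))
```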

  • Robust facial feature tracking under varying face pose and facial expression
    Pattern Recognition, 2007
    Co-Authors: Yan Tong, Yang Wang, Qiang Ji
    Abstract:

    This paper presents a hierarchical multi-state pose-dependent approach for facial feature detection and tracking under varying facial expression and face pose. For effective and efficient representation of feature points, a hybrid representation that integrates Gabor wavelets and gray-level profiles is proposed. To model the spatial relations among feature points, a hierarchical statistical face shape model is proposed to characterize both the global shape of the human face and the local structural details of each facial component. Furthermore, multi-state local shape models are introduced to deal with shape variations of some facial components under different facial expressions. During detection and tracking, both facial component states and feature point positions, constrained by the hierarchical face shape model, are dynamically estimated using a switching hypothesized measurements (SHM) model. Experimental results demonstrate that the proposed method accurately and robustly tracks facial features in real time under different facial expressions and face poses.
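
As a rough illustration of the hybrid feature-point representation (Gabor wavelets plus gray-level profiles), the sketch below concatenates multi-orientation Gabor magnitudes at a point with a horizontal gray-level profile around it, using OpenCV. The kernel parameters, profile length, and the file name face.png are illustrative assumptions, not values from the paper.

```python
# Sketch of a hybrid descriptor: Gabor responses + gray-level profile.
import cv2
import numpy as np

def hybrid_descriptor(gray, x, y, profile_len=11):
    """Concatenate 4-orientation Gabor magnitudes at (x, y) with a
    horizontal gray-level profile centered on the same point."""
    responses = []
    for theta in np.linspace(0, np.pi, 4, endpoint=False):
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5)
        filtered = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
        responses.append(abs(filtered[y, x]))       # magnitude at the point
    half = profile_len // 2
    profile = gray[y, x - half:x + half + 1].astype(np.float32)
    return np.concatenate([responses, profile])

gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
if gray is not None:
    print(hybrid_descriptor(gray, x=120, y=80))
```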

Richard Bowden - One of the best experts on this subject based on the ideXlab platform.

  • Robust facial feature tracking using shape-constrained multiresolution selected linear predictors
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011
    Co-Authors: Eng-Jon Ong, Richard Bowden
    Abstract:

    This paper proposes a learned, data-driven approach for accurate, real-time tracking of facial features using only intensity information. The task of automatic facial feature tracking is nontrivial, since the face is a highly deformable object with large textural variations and motion in certain regions. Existing works attempt to address these problems either by limiting themselves to tracking feature points with strong and unique visual cues (e.g., mouth and eye corners) or by incorporating a priori information that needs to be manually designed (e.g., selecting points for a shape model). The framework proposed here largely avoids the need for such restrictions by automatically identifying the optimal visual support required for tracking a single facial feature point. This automatic identification of the visual context required for tracking allows the proposed method to potentially track any point on the face. Tracking is achieved via linear predictors, which provide a fast and effective method for mapping pixel intensities into tracked feature position displacements. Building upon the simplicity and strengths of linear predictors, a more robust biased linear predictor is introduced. Multiple linear predictors are then grouped into a rigid flock to further increase robustness. To improve tracking accuracy, a novel probabilistic selection method is used to identify relevant visual areas for tracking a feature point. These selected flocks are then combined into a hierarchical multiresolution linear predictor (LP) model. Finally, a simple shape constraint is exploited to correct the occasional tracking failure of a minority of feature points. Experimental results show that this method performs more robustly and accurately than active appearance models (AAMs), with minimal training examples, on sequences ranging from SD quality to YouTube quality. An analysis of the visual support consistency across different subjects is also provided.
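
The core linear-predictor machinery is simple enough to sketch. Below is a minimal, self-contained version on synthetic data: a linear map from support-pixel intensity differences to 2-D displacements, fit by least squares. The biased predictor, flocking, probabilistic selection, and multiresolution hierarchy from the paper are all omitted.

```python
# Minimal linear predictor: displacements ~ H @ intensity_differences.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: each row of S holds intensity differences at 20
# support pixels; each row of D holds the 2-D displacement that caused them.
true_H = rng.normal(size=(2, 20))            # unknown map we try to recover
S = rng.normal(size=(500, 20))
D = S @ true_H.T + 0.01 * rng.normal(size=(500, 2))   # noisy displacements

# Least-squares fit of H such that D ~= S @ H.T
H, *_ = np.linalg.lstsq(S, D, rcond=None)
H = H.T

def predict_displacement(intensity_diff):
    """At test time, one matrix-vector product gives the point's motion."""
    return H @ intensity_diff

print(predict_displacement(S[0]), "vs true", D[0])
```

This is why linear predictors are fast: tracking a point each frame costs one small matrix-vector product, with all the work done offline in the least-squares fit.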

  • Robust facial feature tracking using selected multi-resolution linear predictors
    International Conference on Computer Vision, 2009
    Co-Authors: Eng-Jon Ong, Yuxuan Lan, Barry-John Theobald, Richard Harvey, Richard Bowden
    Abstract:

    This paper proposes a learnt, data-driven approach for accurate, real-time tracking of facial features using only intensity information. Constraints such as a priori shape models or temporal models of dynamics are neither required nor used. Tracking facial features simply becomes the independent tracking of a set of points on the face, which allows the method to cope with facial configurations not present in the training data. Tracking is achieved via linear predictors, which provide a fast and effective method for mapping pixel-level information to tracked feature position displacements. To improve on this, a novel and robust biased linear predictor is proposed. Multiple linear predictors are grouped into a rigid flock to increase robustness. To further improve tracking accuracy, a novel probabilistic selection method is used to identify relevant visual areas for tracking a feature point. These selected flocks are then combined into a hierarchical multi-resolution LP model. Experimental results show that this method performs more robustly and accurately than AAMs, without any a priori shape information and with minimal training examples.
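
To illustrate the flock idea, the sketch below lets several linear predictors, each with its own support pixels, vote on the displacement of one feature point. The trimmed-mean aggregation is my own stand-in for robustness, not necessarily the rule used in the paper.

```python
# Toy "rigid flock": aggregate several linear predictors for one point.
import numpy as np

def flock_predict(predictors, supports, trim=0.2):
    """predictors: list of (2, k) arrays; supports: matching length-k
    intensity vectors. Returns a trimmed-mean displacement estimate."""
    preds = np.array([H @ s for H, s in zip(predictors, supports)])
    dist = np.linalg.norm(preds - np.median(preds, axis=0), axis=1)
    keep = dist <= np.quantile(dist, 1.0 - trim)   # drop the worst outliers
    return preds[keep].mean(axis=0)

rng = np.random.default_rng(2)
predictors = [rng.normal(size=(2, 16)) for _ in range(5)]
supports = [rng.normal(size=16) for _ in range(5)]
print("flock displacement estimate:", flock_predict(predictors, supports))
```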

Rana El Kaliouby - One of the best experts on this subject based on the ideXlab platform.

  • Real-time facial expression recognition in video using support vector machines
    ICMI'03: Fifth International Conference on Multimodal Interfaces, 2003
    Co-Authors: P. Michel, Rana El Kaliouby
    Abstract:

    Enabling computer systems to recognize facial expressions and infer emotions from them in real time presents a challenging research topic. In this paper, we present a real-time approach to emotion recognition through facial expression in live video. We employ an automatic facial feature tracker to perform face localization and feature extraction. The facial feature displacements in the video stream are used as input to a support vector machine classifier. We evaluate our method in terms of recognition accuracy for a variety of interaction and classification scenarios. Our person-dependent and person-independent experiments demonstrate the effectiveness of a support vector machine and feature-tracking approach to fully automatic, unobtrusive expression recognition in live video. We conclude by discussing the relevance of our work to affective and intelligent man-machine interfaces and exploring further improvements.
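
This pipeline maps naturally onto a few lines of scikit-learn. The sketch below trains an SVM on synthetic feature-displacement vectors; the number of tracked points, the class count, and the RBF-kernel settings are illustrative assumptions, and a real system would obtain the displacements from a feature tracker (e.g., neutral frame vs. peak frame).

```python
# Sketch: classify expressions from facial-feature displacement vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_points = 22                                   # assumed number of tracked points
X_train = rng.normal(size=(120, n_points * 2))  # (dx, dy) per point, flattened
y_train = rng.integers(0, 6, size=120)          # six basic expression classes

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # standard RBF-kernel SVM
clf.fit(X_train, y_train)

x_new = rng.normal(size=(1, n_points * 2))      # displacements from a new frame
print("predicted expression class:", clf.predict(x_new)[0])
```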

Ioannis Pitas - One of the best experts on this subject based on the ideXlab platform.

  • Facial feature extraction in frontal views using biometric analogies
    European Signal Processing Conference, 1998
    Co-Authors: Sofia Tsekeridou, Ioannis Pitas
    Abstract:

    Face detection and facial feature extraction are considered key requirements in many applications, such as access control systems, model-based video coding, and content-based video browsing and retrieval. Accurate face localization and facial feature extraction are therefore highly desirable. This paper describes an algorithm for face detection and facial feature extraction in frontal views. The algorithm is based on principles described in [1] but extends that work by considering: (a) the mirror symmetry of the face about the vertical axis and (b) facial biometric analogies that depend on the size of the face estimated by the face localization method. Further improvements have been added to the face localization method to enhance its performance. The proposed algorithm has been applied to frontal views extracted from the European ACTS M2VTS database with very good results.
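
The vertical mirror-symmetry cue in (a) is easy to sketch: compare the left half of a candidate face region against the mirrored right half. The scoring function below is an illustrative stand-in, not the paper's exact measure.

```python
# Sketch of a vertical mirror-symmetry score for a candidate face region.
import numpy as np

def symmetry_score(face_gray):
    """Mean absolute difference between the left half and the mirrored
    right half of a grayscale region (lower means more symmetric)."""
    h, w = face_gray.shape
    half = w // 2
    left = face_gray[:, :half].astype(np.float32)
    right = face_gray[:, w - half:].astype(np.float32)
    return float(np.mean(np.abs(left - right[:, ::-1])))

# A perfectly symmetric synthetic patch scores 0.
patch = np.tile(np.array([1, 2, 3, 2, 1], dtype=np.uint8), (4, 1))
print(symmetry_score(patch))
```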

  • A novel method for automatic face segmentation, facial feature extraction and tracking
    Signal Processing: Image Communication, 1998
    Co-Authors: K Sobottka, Ioannis Pitas
    Abstract:

    This paper describes a novel method for the segmentation of faces, extraction of facial features, and tracking of the face contour and features over time. Robust segmentation of faces from complex scenes is performed based on color and shape information. Additionally, face candidates are verified by searching for facial features in the interior of the face. The facial features employed are eyebrows, eyes, nostrils, mouth, and chin; incomplete feature constellations are considered as well. Once a face and its features have been detected reliably, the face contour and the features are tracked over time. Face contour tracking is done using deformable models such as snakes, while facial feature tracking is performed by block matching. The success of the approach was verified by evaluating 38 different color image sequences containing features such as beards and glasses, as well as changing facial expressions.
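
The block-matching step used here for facial feature tracking can be sketched directly: exhaustively search a small window in the next frame for the patch that best matches (minimum sum of squared differences) the feature's patch in the current frame. The block and search sizes below are illustrative, and the point is assumed to lie far enough from the image border that all slices are full-sized.

```python
# Bare-bones block matching for tracking one feature point between frames.
import numpy as np

def block_match(prev, curr, x, y, block=8, search=6):
    """Return the (dx, dy) displacement of the block centered at (x, y)."""
    b = block // 2
    template = prev[y - b:y + b, x - b:x + b].astype(np.float32)
    best, best_dxy = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy - b:y + dy + b,
                        x + dx - b:x + dx + b].astype(np.float32)
            ssd = float(np.sum((template - cand) ** 2))  # matching cost
            if ssd < best:
                best, best_dxy = ssd, (dx, dy)
    return best_dxy

rng = np.random.default_rng(3)
prev = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
curr = np.roll(prev, shift=(2, -1), axis=(0, 1))   # shift by (dy=2, dx=-1)
print(block_match(prev, curr, x=32, y=32))          # expects (-1, 2)
```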

  • Face localization and facial feature extraction based on shape and color information
    International Conference on Image Processing, 1996
    Co-Authors: K Sobottka, Ioannis Pitas
    Abstract:

    Recognition of human faces from still images or image sequences is a research field of rapidly increasing interest. First, facial regions and facial features such as the eyes and mouth have to be extracted. In this paper we propose an approach that addresses these first two steps. We perform face localization based on the observation that human faces are characterized by their oval shape and skin color, even under varying lighting conditions. To that end, we segment faces by evaluating shape and color (HSV) information. Face hypotheses are then verified by searching for facial features inside the face-like regions, which is done by applying morphological operations and minima localization to the intensity images.
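
The color-based segmentation step can be sketched with OpenCV: threshold in HSV, then clean the mask with morphological opening and closing. The HSV bounds below are common rule-of-thumb skin values rather than the paper's, and frame.png is a hypothetical input.

```python
# Sketch of HSV skin-color segmentation with morphological cleanup.
import cv2
import numpy as np

def skin_mask(bgr):
    """Binary mask of skin-colored pixels, cleaned with open/close."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))    # rough skin range
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask

img = cv2.imread("frame.png")        # hypothetical input frame
if img is not None:
    cv2.imwrite("skin.png", skin_mask(img))
```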
