The experts below are selected from a list of 37,893 experts worldwide, ranked by the ideXlab platform.
Yueting Zhuang - One of the best experts on this subject based on the ideXlab platform.
-
ACIVS - Video-based facial expression hallucination: a two-level hierarchical fusion approach
Advanced Concepts for Intelligent Vision Systems, 2006. Co-Authors: Jian Zhang, Yueting Zhuang. Abstract: Facial expression hallucination is an important approach to facial expression synthesis. Existing works have mainly focused on synthesizing a static facial expression image from one face image with a neutral expression. In this paper, we propose a novel two-level hierarchical fusion approach that hallucinates dynamic expression video sequences given only one neutral-expression face image. By fusing local linear and global nonlinear subspace learning, the two-level approach provides a sound solution to organizing the complex video sample space. Experiments show that our approach generates reasonable facial expression sequences in both the temporal and spatial domains, with fewer artifacts than existing works.
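The fusion idea in the abstract can be illustrated with a minimal numpy sketch: a local linear estimate (PCA coefficients of the neutral face regressed onto training sequences) is blended with a global nonlinear estimate (kernel-weighted neighbors in the sample space). All data shapes, names, and the blending rule here are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: N neutral face vectors paired with
# T-frame expression sequences (shapes are assumptions for illustration).
N, D, T = 50, 64, 8
neutral_train = rng.normal(size=(N, D))
seq_train = rng.normal(size=(N, T, D))

def pca_basis(X, k):
    """Top-k principal directions of the rows of X (the local linear level)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return X.mean(axis=0), Vt[:k]

def hallucinate(neutral_new, k=10, alpha=0.5):
    """Fuse a local linear (PCA-regression) estimate with a global
    nonlinear (Gaussian-kernel neighbor) estimate of the sequence."""
    mean, V = pca_basis(neutral_train, k)
    codes = (neutral_train - mean) @ V.T          # training PCA codes
    coeff = (neutral_new - mean) @ V.T            # query PCA code
    # Linear level: regress flattened sequences on PCA codes.
    W, *_ = np.linalg.lstsq(codes, seq_train.reshape(N, -1), rcond=None)
    linear_est = (coeff @ W).reshape(T, D)
    # Nonlinear level: kernel weights over training neutral faces.
    d2 = ((neutral_train - neutral_new) ** 2).sum(axis=1)
    w = np.exp(-d2 / d2.mean())
    nonlinear_est = np.tensordot(w / w.sum(), seq_train, axes=1)
    # Two-level fusion: blend the two estimates.
    return alpha * linear_est + (1 - alpha) * nonlinear_est

seq = hallucinate(rng.normal(size=D))
print(seq.shape)  # one hallucinated D-dim frame per time step: (8, 64)
```

The blend weight `alpha` stands in for whatever hierarchy the paper actually uses; the point is only that a linear code and a nonlinear neighborhood estimate are combined into one sequence.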
William C. Woods - One of the best experts on this subject based on the ideXlab platform.
-
In the Eye of the Beholder: A Comprehensive Analysis of Stimulus Type, Perceiver, and Target in Physical Attractiveness Perceptions
Journal of Nonverbal Behavior, 2021. Co-Authors: Molly A. Bowdring, Michael A. Sayette, Jeffrey M. Girard, William C. Woods. Abstract: Physical attractiveness plays a central role in psychosocial experiences. One of the top research priorities has been to identify factors affecting perceptions of physical attractiveness (PPA). Recent work suggests PPA derives from different sources (e.g., target, perceiver, stimulus type). Although smiles in particular are believed to enhance PPA, supporting evidence has been surprisingly limited. This study comprehensively examines the effect of smiles on PPA and, more broadly, evaluates the roles of target, perceiver, and stimulus type in PPA variation. Perceivers (n = 181) rated both static images and 5-s videos of targets displaying smiling and neutral expressions. Smiling images were rated as more attractive than neutral-expression images (regardless of stimulus motion format). Interestingly, perceptions of physical attractiveness were based more on the perceiver than on either the target or the format in which the target was presented. Results clarify the effect of smiles and highlight the significant role of the perceiver in PPA.
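The claim that PPA depends more on the perceiver than on the target is a variance-decomposition statement; a toy simulation makes it concrete. The rating matrix, effect sizes, and decomposition below are illustrative assumptions, not the study's data or its statistical model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rating matrix: rows are perceivers, columns are targets.
# Effect magnitudes are assumptions chosen to mimic a perceiver-dominant
# pattern, not estimates from the paper.
P, T = 181, 40
perceiver_eff = rng.normal(scale=1.0, size=(P, 1))   # rater leniency/strictness
target_eff = rng.normal(scale=0.4, size=(1, T))      # target attractiveness
ratings = 5 + perceiver_eff + target_eff + rng.normal(scale=0.5, size=(P, T))

# Crude variance components: spread of row means vs. column means.
perceiver_var = ratings.mean(axis=1).var()   # between-perceiver variance
target_var = ratings.mean(axis=0).var()      # between-target variance
print(perceiver_var > target_var)
```

With these simulated effects the between-perceiver variance dominates, which is the shape of the result the abstract reports.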
Louise S. Delicato - One of the best experts on this subject based on the ideXlab platform.
-
A robust method for measuring an individual's sensitivity to facial expressions
Attention, Perception, & Psychophysics, 2020. Co-Authors: Louise S. Delicato. Abstract: This paper describes a method for measuring an individual's sensitivity to different facial expressions. It shows that individual participants are more sensitive to happy than to fearful expressions, and that the differences are statistically significant under a model-comparison approach. Sensitivity is measured by asking participants to discriminate between an emotional facial expression and a neutral expression of the same face. The expression was diluted to different degrees by blending it in different proportions with the neutral expression using morphing software. Sensitivity is defined as the proportion of neutral expression in a stimulus at which participants discriminate the emotional expression on 75% of presentations. Individuals could reliably discriminate happy expressions diluted with a greater proportion of the neutral expression than was required for discrimination of fearful expressions, indicating that individual participants are more sensitive to happy than to fearful expressions. Sensitivity is equivalent across two testing sessions, and the greater sensitivity to happy expressions is maintained with short stimulus durations and with stimuli generated using different morphing software. The increased sensitivity to happy relative to fearful expressions was affected at smaller image sizes for some participants. Applications of the approach to clinical populations, and to understanding the relative contributions of perceptual and affective processing in facial expression recognition, are discussed.
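The 75%-discrimination threshold described above is a standard psychometric-function fit; a minimal sketch follows. The response proportions, the logistic form with a 0.5 chance floor, and the grid-search fit are all illustrative assumptions, not Delicato's stimuli or fitting procedure.

```python
import numpy as np

# Hypothetical discrimination data: proportion of correct discriminations
# at each morph level (here, the fraction of the emotional expression in
# the blend). Values are made up for illustration.
levels = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
p_correct = np.array([0.50, 0.55, 0.66, 0.80, 0.92, 0.98])

def logistic(x, x0, k):
    # Psychometric function rising from 0.5 (chance) to 1.0.
    return 0.5 + 0.5 / (1 + np.exp(-k * (x - x0)))

# Coarse grid search for midpoint x0 and slope k (a stand-in for a
# proper maximum-likelihood fit).
x0s = np.linspace(0.1, 0.6, 101)
ks = np.linspace(1, 40, 80)
_, x0, k = min((((logistic(levels, x0, k) - p_correct) ** 2).sum(), x0, k)
               for x0 in x0s for k in ks)

# For this parameterization the 75% point is exactly the midpoint:
# 0.5 + 0.5 / (1 + exp(0)) = 0.75 at x = x0.
threshold_75 = x0
print(round(threshold_75, 2))
```

A more sensitive observer needs a smaller fraction of the emotional expression (equivalently, tolerates more neutral dilution), so comparing fitted thresholds across expressions yields the happy-versus-fearful comparison the paper reports.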
Jian Zhang - One of the best experts on this subject based on the ideXlab platform.
-
ACIVS - Video-based facial expression hallucination: a two-level hierarchical fusion approach
Advanced Concepts for Intelligent Vision Systems, 2006. Co-Authors: Jian Zhang, Yueting Zhuang. Abstract: Facial expression hallucination is an important approach to facial expression synthesis. Existing works have mainly focused on synthesizing a static facial expression image from one face image with a neutral expression. In this paper, we propose a novel two-level hierarchical fusion approach that hallucinates dynamic expression video sequences given only one neutral-expression face image. By fusing local linear and global nonlinear subspace learning, the two-level approach provides a sound solution to organizing the complex video sample space. Experiments show that our approach generates reasonable facial expression sequences in both the temporal and spatial domains, with fewer artifacts than existing works.
Paul Debevec - One of the best experts on this subject based on the ideXlab platform.
-
Post-production facial performance relighting using reflectance transfer
International Conference on Computer Graphics and Interactive Techniques, 2007. Co-Authors: Pieter Peers, Naoki Tamura, Wojciech Matusik, Paul Debevec. Abstract: We propose a novel post-production facial performance relighting system for human actors. Our system uses just a dataset of view-dependent facial appearances with a neutral expression, captured for a static subject using a Light Stage apparatus. The actual performance, however, is captured from a potentially different actor under known but static illumination. During post-production, the reflectance field of the reference-dataset actor is transferred onto the dynamic performance, enabling image-based relighting of the entire sequence. Our approach makes post-production relighting more practical and could easily be incorporated into a traditional production pipeline, since it requires no additional hardware during principal photography. Additionally, we show that our system is suitable for real-time post-production illumination editing.
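The image-based relighting step rests on the linearity of light transport: an image under any novel illumination is a weighted sum of the Light Stage basis images. A minimal sketch, with array shapes and names as assumptions (the paper's reflectance-transfer step onto a different actor is not modeled here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reflectance field: one image per Light Stage basis
# direction (L directions, H x W pixels). Shapes are illustrative.
L, H, W = 32, 16, 16
reflectance_field = rng.random(size=(L, H, W))

def relight(field, light_weights):
    """Image-based relighting: sum the basis images weighted by the
    novel lighting's intensity in each basis direction."""
    return np.tensordot(light_weights, field, axes=1)

novel_lighting = rng.random(size=L)   # intensity per basis direction
relit = relight(reflectance_field, novel_lighting)
print(relit.shape)  # one relit H x W image: (16, 16)
```

Because `relight` is linear in the lighting weights, any environment-map illumination can be applied after the fact by projecting it onto the basis directions.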