Automated Action

The experts below are selected from a list of 198 experts worldwide, ranked by the ideXlab platform.

Terrance E Boult - One of the best experts on this subject based on the ideXlab platform.

  • Automated Action Units vs. Expert Raters: Face Off
    IEEE Winter Conference on Applications of Computer Vision (WACV), 2018
    Co-Authors: Svati Dhamija, Terrance E Boult
    Abstract:

    User engagement is an essential component of any application design. Finding reliable methods to forecast continuous engagement can aid in creating adaptive applications such as web-based interventions, intelligent student tutoring, and socially intelligent robots. In this paper, we compare observational estimates from expert raters to vision-based learning for estimating user engagement. The vision-based approach uses automated computation of Action Units (AUs) combined with an RNN. Several data collection techniques that capture different modalities of engagement have been explored in the past, from self-reports to external observations gathered via crowd-sourcing or even trained expert raters. Traditional machine learning approaches discard annotations from inconsistent raters, use rater averages, or apply rater-specific weighting schemes; such approaches often end up throwing away expensive annotations. We introduce a novel approach that exploits the inherent confusion and disagreement in raters' annotations to build a scalable engagement-estimation model that learns to appropriately weigh subjective behavioral cues. We show that actively modeling the uncertainty, either explicitly from expert raters or from automated estimation with AUs, significantly improves prediction over using just the average engagement ratings. Our approach performs significantly better than or on par with experts in predicting engagement for a trauma-recovery application.
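
    The sketch below is a hedged illustration of the vision-based approach this abstract describes: per-frame Action Unit intensities feed an RNN that predicts both an engagement score and its uncertainty, trained against the raters' mean and spread rather than a single averaged label. The layer sizes, the 17-AU input, and the Gaussian negative-log-likelihood loss are assumptions for illustration, not the authors' exact architecture.

    # Hypothetical sketch: an RNN over automatically extracted AU intensities
    # that models rater uncertainty instead of averaging it away.
    import torch
    import torch.nn as nn

    class EngagementRNN(nn.Module):
        def __init__(self, num_aus=17, hidden=64):
            super().__init__()
            self.gru = nn.GRU(num_aus, hidden, batch_first=True)
            self.mean_head = nn.Linear(hidden, 1)    # predicted engagement
            self.logvar_head = nn.Linear(hidden, 1)  # predicted uncertainty

        def forward(self, au_seq):                   # (batch, time, num_aus)
            out, _ = self.gru(au_seq)
            h = out[:, -1]                           # last time step
            return self.mean_head(h), self.logvar_head(h)

    def rater_aware_loss(mean, logvar, rater_mean, rater_var):
        # Gaussian NLL against the raters' mean, with the predicted variance
        # nudged toward the observed rater disagreement -- a simple proxy for
        # "actively modeling the uncertainty" in the abstract.
        nll = 0.5 * (logvar + (rater_mean - mean) ** 2 / logvar.exp())
        return (nll + 0.5 * (logvar.exp() - rater_var) ** 2).mean()

    model = EngagementRNN()
    aus = torch.rand(8, 120, 17)   # 8 clips, 120 frames of 17 AU intensities
    mu, logvar = model(aus)
    loss = rater_aware_loss(mu, logvar, torch.rand(8, 1), torch.rand(8, 1))
    loss.backward()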

Mohammad H Mahoor - One of the best experts on this subject based on the ideXlab platform.

  • Temporal Facial Expression Modeling for Automated Action Unit Intensity Measurement
    International Conference on Pattern Recognition (ICPR), 2014
    Co-Authors: Mohammad S Mavadati, Mohammad H Mahoor
    Abstract:

    Spontaneous facial expression recognition using temporal patterns is a relatively unexplored area in facial image analysis. Several factors, such as head orientation, the co-occurrence and presence of subtle facial action units (AUs), and the time variability of AUs, make the problem challenging. This paper presents a methodology to model and automatically recognize the intensity of spontaneous AUs in videos. Our method exploits localized Gabor features and Hidden Markov Models (HMMs) to represent and model the dependencies of AU dynamics in both subject-dependent (SD) and subject-independent (SI) settings. Our experimental results show that temporal information can improve the recognition of AUs and their intensity levels compared to static methods.
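
    A minimal sketch of the modeling idea above, assuming the per-frame observations are localized Gabor responses and the AU intensity levels (0-5 in FACS coding) are the hidden states of a Gaussian HMM. The patch size, filter-bank parameters, and six-state mapping are illustrative assumptions, and hmmlearn stands in for whatever HMM implementation the authors used.

    # Hypothetical sketch: Gabor features per frame, decoded by a Gaussian HMM
    # whose hidden states stand in for AU intensity levels.
    import cv2
    import numpy as np
    from hmmlearn import hmm

    def gabor_features(patch, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
        """Mean Gabor magnitude at several orientations for one image patch."""
        feats = []
        for theta in thetas:
            kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
            feats.append(np.abs(cv2.filter2D(patch, cv2.CV_32F, kernel)).mean())
        return np.array(feats)

    # One feature vector per frame for each training video (placeholder data).
    videos = [np.random.rand(100, 64, 64).astype(np.float32) for _ in range(5)]
    X = np.vstack([[gabor_features(f) for f in v] for v in videos])
    lengths = [len(v) for v in videos]

    # Six hidden states, one per intensity level; decoding a new sequence
    # yields a frame-by-frame intensity estimate.
    model = hmm.GaussianHMM(n_components=6, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    states = model.predict(np.array([gabor_features(f) for f in videos[0]]))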

  • DISFA: A Spontaneous Facial Action Intensity Database
    IEEE Transactions on Affective Computing, 2013
    Co-Authors: Mohammad S Mavadati, Mohammad H Mahoor, Kevin Bartlett, Philip Trinh, Jeffrey F. Cohn
    Abstract:

    Access to well-labeled recordings of facial expression is critical to progress in automated facial expression recognition. With few exceptions, publicly available databases are limited to posed facial behavior that can differ markedly in conformation, intensity, and timing from what occurs spontaneously. To meet the need for publicly available corpora of well-labeled video, we collected, ground-truthed, and prepared for distribution the Denver Intensity of Spontaneous Facial Action (DISFA) database. Twenty-seven young adults were video recorded by a stereo camera while they viewed video clips intended to elicit spontaneous emotion expression. Each video frame was manually coded for the presence, absence, and intensity of facial action units according to the Facial Action Coding System (FACS). Action units are the smallest visibly discriminable changes in facial action; they may occur individually and in combinations to comprise more molar facial expressions. To provide a baseline for use in future research, protocols and benchmarks for automated action unit intensity measurement are reported. Details are given for accessing the database for research in computer vision, machine learning, and affective and behavioral science.
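
    For readers planning to use the database, the snippet below sketches parsing frame-level AU intensity labels of the kind DISFA provides (a frame index and a 0-5 intensity per line, one label file per subject and AU). The file naming and comma delimiter here are assumptions; consult the documentation distributed with the database after requesting access.

    # Hedged sketch of parsing per-frame AU intensity labels.
    import csv

    def load_au_intensities(path):
        """Return {frame: intensity} parsed from a 'frame,intensity' text file."""
        intensities = {}
        with open(path, newline="") as f:
            for frame, level in csv.reader(f):
                intensities[int(frame)] = int(level)
        return intensities

    def occurrence_rate(intensities, threshold=1):
        """Fraction of frames in which the AU is present at >= threshold."""
        present = sum(1 for v in intensities.values() if v >= threshold)
        return present / max(len(intensities), 1)

    # e.g. labels = load_au_intensities("SN001/SN001_au12.txt")  # hypothetical path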

Mohammad S Mavadati - One of the best experts on this subject based on the ideXlab platform.

  • Temporal Facial Expression Modeling for Automated Action Unit Intensity Measurement
    International Conference on Pattern Recognition (ICPR), 2014
    Co-Authors: Mohammad S Mavadati, Mohammad H Mahoor
    Abstract:

    Spontaneous facial expression recognition using temporal patterns is a relatively unexplored area in facial image analysis. Several factors, such as head orientation, the co-occurrence and presence of subtle facial action units (AUs), and the time variability of AUs, make the problem challenging. This paper presents a methodology to model and automatically recognize the intensity of spontaneous AUs in videos. Our method exploits localized Gabor features and Hidden Markov Models (HMMs) to represent and model the dependencies of AU dynamics in both subject-dependent (SD) and subject-independent (SI) settings. Our experimental results show that temporal information can improve the recognition of AUs and their intensity levels compared to static methods.

  • DISFA: A Spontaneous Facial Action Intensity Database
    IEEE Transactions on Affective Computing, 2013
    Co-Authors: Mohammad S Mavadati, Mohammad H Mahoor, Kevin Bartlett, Philip Trinh, Jeffrey F. Cohn
    Abstract:

    Access to well-labeled recordings of facial expression is critical to progress in automated facial expression recognition. With few exceptions, publicly available databases are limited to posed facial behavior that can differ markedly in conformation, intensity, and timing from what occurs spontaneously. To meet the need for publicly available corpora of well-labeled video, we collected, ground-truthed, and prepared for distribution the Denver Intensity of Spontaneous Facial Action (DISFA) database. Twenty-seven young adults were video recorded by a stereo camera while they viewed video clips intended to elicit spontaneous emotion expression. Each video frame was manually coded for the presence, absence, and intensity of facial action units according to the Facial Action Coding System (FACS). Action units are the smallest visibly discriminable changes in facial action; they may occur individually and in combinations to comprise more molar facial expressions. To provide a baseline for use in future research, protocols and benchmarks for automated action unit intensity measurement are reported. Details are given for accessing the database for research in computer vision, machine learning, and affective and behavioral science.

Svati Dhamija - One of the best experts on this subject based on the ideXlab platform.

  • Automated Action Units vs. Expert Raters: Face Off
    IEEE Winter Conference on Applications of Computer Vision (WACV), 2018
    Co-Authors: Svati Dhamija, Terrance E Boult
    Abstract:

    User engagement is an essential component of any application design. Finding reliable methods to forecast continuous engagement can aid in creating adaptive applications such as web-based interventions, intelligent student tutoring, and socially intelligent robots. In this paper, we compare observational estimates from expert raters to vision-based learning for estimating user engagement. The vision-based approach uses automated computation of Action Units (AUs) combined with an RNN. Several data collection techniques that capture different modalities of engagement have been explored in the past, from self-reports to external observations gathered via crowd-sourcing or even trained expert raters. Traditional machine learning approaches discard annotations from inconsistent raters, use rater averages, or apply rater-specific weighting schemes; such approaches often end up throwing away expensive annotations. We introduce a novel approach that exploits the inherent confusion and disagreement in raters' annotations to build a scalable engagement-estimation model that learns to appropriately weigh subjective behavioral cues. We show that actively modeling the uncertainty, either explicitly from expert raters or from automated estimation with AUs, significantly improves prediction over using just the average engagement ratings. Our approach performs significantly better than or on par with experts in predicting engagement for a trauma-recovery application.

Jeffrey F. Cohn - One of the best experts on this subject based on the ideXlab platform.

  • DISFA: A Spontaneous Facial Action Intensity Database
    IEEE Transactions on Affective Computing, 2013
    Co-Authors: Mohammad S Mavadati, Mohammad H Mahoor, Kevin Bartlett, Philip Trinh, Jeffrey F. Cohn
    Abstract:

    Access to well-labeled recordings of facial expression is critical to progress in automated facial expression recognition. With few exceptions, publicly available databases are limited to posed facial behavior that can differ markedly in conformation, intensity, and timing from what occurs spontaneously. To meet the need for publicly available corpora of well-labeled video, we collected, ground-truthed, and prepared for distribution the Denver Intensity of Spontaneous Facial Action (DISFA) database. Twenty-seven young adults were video recorded by a stereo camera while they viewed video clips intended to elicit spontaneous emotion expression. Each video frame was manually coded for the presence, absence, and intensity of facial action units according to the Facial Action Coding System (FACS). Action units are the smallest visibly discriminable changes in facial action; they may occur individually and in combinations to comprise more molar facial expressions. To provide a baseline for use in future research, protocols and benchmarks for automated action unit intensity measurement are reported. Details are given for accessing the database for research in computer vision, machine learning, and affective and behavioral science.

  • Automatic recognition of eye blinking in spontaneously occurring behavior
    Behavior Research Methods, Instruments, & Computers, 2003
    Co-Authors: Jeffrey F. Cohn, Jing Xiao, Tsuyoshi Moriyama, Zara Ambadar, Takeo Kanade
    Abstract:

    Previous research in automatic facial expression recognition has been limited to the recognition of gross expression categories (e.g., joy or anger) in posed facial behavior under well-controlled conditions (e.g., frontal pose and minimal out-of-plane head motion). We have developed a system that detects a discrete and important facial action (e.g., eye blinking) in spontaneously occurring facial behavior measured with nonfrontal pose, moderate out-of-plane head motion, and occlusion. The system recovers three-dimensional motion parameters, stabilizes facial regions, extracts motion and appearance information, and recognizes discrete facial actions in spontaneous facial behavior. We tested the system on video data from a two-person interview. The 10 subjects were ethnically diverse, action units occurred during speech, and out-of-plane motion and occlusion from head motion and glasses were common. The video data were originally collected to answer substantive questions in psychology and represent a substantial challenge to automated action unit recognition. In an analysis of blinks, the system achieved 98% accuracy.
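
    The full system above (3D head tracking, region stabilization, motion and appearance features) is beyond a short sketch; the fragment below illustrates only the final detection step, and substitutes a much simpler technique: flagging frames whose stabilized eye-region appearance changes abruptly between frames. The z-score threshold and the assumption of pre-stabilized crops are illustrative, not the authors' method.

    # Simplified stand-in for blink detection over stabilized eye-region crops.
    import numpy as np

    def blink_frames(eye_crops, z_thresh=2.5):
        """Flag frames whose eye-region appearance changes abruptly relative
        to the sequence's typical frame-to-frame variation."""
        diffs = np.array([
            np.abs(eye_crops[t].astype(np.float32)
                   - eye_crops[t - 1].astype(np.float32)).mean()
            for t in range(1, len(eye_crops))
        ])
        z = (diffs - diffs.mean()) / (diffs.std() + 1e-8)
        return [t + 1 for t in np.nonzero(z > z_thresh)[0]]

    # e.g. crops = [frame[y0:y1, x0:x1] for frame in stabilized_frames]  # hypothetical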