Facial Recognition Software

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The experts below were selected from a list of 267 experts worldwide, ranked by the ideXlab platform.

Rafael A. Calvo - One of the best experts on this subject based on the ideXlab platform.

  • A Multi-Componential Analysis of Emotions during Complex Learning with an Intelligent Multi-Agent System
    2016
    Co-Authors: Jason Matthew Harley, François Bouchet, Roger Azevedo, M Sazzad Hussain, Rafael A. Calvo
    Abstract:

    In this paper we discuss the methodology and results of aligning three different emotional measurement methods (automatic facial expression recognition, self-report, electrodermal activation) and their agreement regarding learners' emotions. Data were collected from 67 undergraduate students at a North American university who interacted with MetaTutor, an intelligent, multi-agent hypermedia environment for learning about the human circulatory system, during a 1-hour learning session (Azevedo et al., 2013; Harley, Bouchet, & Azevedo, 2013). A webcam was used to capture videos of learners' facial expressions, which were analyzed using automatic facial recognition software (FaceReader 5.0). Learners' physiological arousal was measured using Affectiva's Q-Sensor 2.0 electrodermal activation bracelet. Learners self-reported their experience of 19 different emotional states (including basic, learner-centered, and academic achievement emotions) using the Emotion-Value questionnaire (Harley et al., 2013). They did so on five occasions during the learning session, which were used as markers to align data from FaceReader and Q-Sensor. We found high agreement between the facial and self-report data (75.6%) when similar emotions were grouped together along theoretical dimensions and definitions (e.g., anger and frustration) (Harley et al., 2013). However, our new results examining the agreement between the Q-Sensor and these two methods suggest that electrodermal (EDA/physiological) indices of emotions do not have a tightly coupled (Gross, Sheppes, & Urry, 2011) relationship with them. Explanations for this finding are discussed.
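    The grouping-based agreement computation described above can be sketched as follows. This is a minimal illustration, not the paper's actual coding scheme: the emotion-to-dimension mapping, labels, and data format are hypothetical. Each method's discrete emotion label is first mapped to a broader theoretical group (e.g., anger and frustration collapsed into one negative-activating category), and agreement is the fraction of measurement occasions on which both methods fall in the same group.

    ```python
    # Hypothetical mapping of discrete emotions to theoretical dimensions
    # (for illustration only; the paper's actual groupings may differ).
    EMOTION_GROUPS = {
        "anger": "negative-activating",
        "frustration": "negative-activating",
        "anxiety": "negative-activating",
        "boredom": "negative-deactivating",
        "happiness": "positive-activating",
        "curiosity": "positive-activating",
        "neutral": "neutral",
    }

    def agreement(facial_labels, self_report_labels):
        """Proportion of occasions where both methods agree at the group level."""
        assert len(facial_labels) == len(self_report_labels)
        matches = sum(
            EMOTION_GROUPS[f] == EMOTION_GROUPS[s]
            for f, s in zip(facial_labels, self_report_labels)
        )
        return matches / len(facial_labels)

    # Example: four of five hypothetical occasions agree at the group level.
    facial = ["anger", "happiness", "boredom", "neutral", "curiosity"]
    reported = ["frustration", "curiosity", "boredom", "neutral", "anxiety"]
    print(agreement(facial, reported))  # 0.8
    ```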

  • A Multi-Componential Analysis of Emotions during Complex Learning with an Intelligent Multi-agent System
    Computers in Human Behavior, 2015
    Co-Authors: Jason Matthew Harley, François Bouchet, Roger Azevedo, M Sazzad Hussain, Rafael A. Calvo
    Abstract:

    This paper presents the evaluation of the synchronization of three emotional measurement methods (automatic facial expression recognition, self-report, electrodermal activity) and their agreement regarding learners' emotions. Data were collected from 67 undergraduates enrolled at a North American university who learned about a complex science topic while interacting with MetaTutor, a multi-agent computerized learning environment. Videos of learners' facial expressions captured with a webcam were analyzed using automatic facial recognition software (FaceReader 5.0). Learners' physiological arousal was recorded using Affectiva's Q-Sensor 2.0 electrodermal activity measurement bracelet. Learners self-reported their experience of 19 different emotional states on five occasions during the learning session, which were used as markers to synchronize data from FaceReader and Q-Sensor. We found high agreement between the facial and self-report data (75.6%), but low levels of agreement between them and the Q-Sensor data, suggesting that a tightly coupled relationship does not always exist between emotional response components.
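    The marker-based synchronization described above can be sketched as follows. All names, timestamps, and sample formats here are hypothetical illustrations under the assumption that each stream is a time-sorted list of (timestamp, value) pairs: for each self-report occasion, the sample closest in time is taken from each continuous stream, yielding one aligned record per occasion.

    ```python
    from bisect import bisect_left

    def nearest(samples, t):
        """Return the (time, value) sample whose timestamp is closest to t.

        Assumes `samples` is sorted by timestamp.
        """
        times = [s[0] for s in samples]
        i = bisect_left(times, t)
        candidates = samples[max(i - 1, 0): i + 1]
        return min(candidates, key=lambda s: abs(s[0] - t))

    def align(markers, facial_stream, eda_stream):
        """One (marker_time, facial_value, eda_value) record per self-report."""
        return [
            (t, nearest(facial_stream, t)[1], nearest(eda_stream, t)[1])
            for t in markers
        ]

    # Hypothetical data: five self-report marker times (seconds into the
    # session), a facial stream sampled every 30 s, an EDA stream every 5 s.
    markers = [600, 1200, 1800, 2400, 3000]
    facial = [(t, f"expr@{t}") for t in range(0, 3601, 30)]
    eda = [(t, t / 10) for t in range(0, 3601, 5)]
    print(align(markers, facial, eda)[0])  # (600, 'expr@600', 60.0)
    ```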


Jason Matthew Harley - One of the best experts on this subject based on the ideXlab platform.

François Bouchet - One of the best experts on this subject based on the ideXlab platform.

Roger Azevedo - One of the best experts on this subject based on the ideXlab platform.

M Sazzad Hussain - One of the best experts on this subject based on the ideXlab platform.
