Observable Variable

The experts below are selected from a list of 114 experts worldwide, ranked by the ideXlab platform.

Julie Shah - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks
    2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2015
    Co-Authors: Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, Julie Shah
    Abstract:

    We present a framework for automatically learning human user models from joint-action demonstrations, enabling a robot to compute a robust policy for a collaborative task with a human. First, the demonstrated action sequences are clustered into different human types using an unsupervised learning algorithm. A reward function is then learned for each type using an inverse reinforcement learning algorithm. The learned model is then incorporated into a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer online the human type of a new user who was not included in the training set, and can compute a robot policy aligned with that user's preferences. In a human subject experiment (n=30), participants agreed more strongly that the robot anticipated their actions when working with a robot incorporating the proposed framework than when manually annotating the robot's actions (p<0.01). In trials where participants had difficulty annotating the robot actions needed to complete the task, the proposed framework significantly improved team efficiency (p<0.01). The robot incorporating the framework was also more responsive to human actions than one using a policy computed from a reward function hand-coded by a domain expert (p<0.01). These results indicate that learning human user models from joint-action demonstrations and encoding them in a MOMDP formalism can support effective teaming in human-robot collaborative tasks.
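
    To make the MOMDP idea concrete, here is a minimal sketch (not the authors' code) of the Bayesian belief update over the partially observable human type: each candidate type is assumed to come with a likelihood model of human actions, and observing an action shifts the belief toward the types that predict it. The toy likelihoods below are illustrative stand-ins for the learned models.

```python
import numpy as np

def update_type_belief(belief, state, human_action, action_likelihoods):
    """Bayes update of P(type) after observing one human action.

    action_likelihoods[t][state][action] = P(action | state, type=t).
    """
    posterior = np.array([
        belief[t] * action_likelihoods[t][state][human_action]
        for t in range(len(belief))
    ])
    total = posterior.sum()
    if total == 0.0:
        # The observed action had zero likelihood under every type;
        # fall back to a uniform belief rather than dividing by zero.
        return np.full(len(belief), 1.0 / len(belief))
    return posterior / total

# Toy example: two human types, one state, two actions.
likelihoods = [
    {0: {0: 0.9, 1: 0.1}},  # type 0 strongly prefers action 0
    {0: {0: 0.2, 1: 0.8}},  # type 1 strongly prefers action 1
]
belief = np.array([0.5, 0.5])
belief = update_type_belief(belief, state=0, human_action=0,
                            action_likelihoods=likelihoods)
print(belief)  # belief shifts toward type 0: [0.818..., 0.181...]
```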

  • Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks
    2015
    Co-Authors: Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, Julie Shah
    Abstract:

    We present a framework for learning human user models from joint-action demonstrations that enables the robot to compute a robust policy for a collaborative task with a human. The learning takes place fully automatically, without any human intervention. First, we describe the clustering of demonstrated action sequences into different human types using an unsupervised learning algorithm. These demonstrated sequences are also used by the robot to learn a reward function representative of each type, using an inverse reinforcement learning algorithm. The learned model is then used as part of a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer, either offline or online, the human type of a new user who was not included in the training set, and can compute a robot policy that is aligned with the preferences of this new user and robust to deviations of the human's actions from the prior demonstrations. Finally, we validate the approach using data collected in human subject experiments, and conduct proof-of-concept demonstrations in which a person performs a collaborative task with a small industrial robot.
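
    The abstract does not name the unsupervised algorithm, so the sketch below illustrates one plausible instantiation, not the paper's method: represent each demonstrated action sequence by its action-transition counts and cluster the demonstrations with k-means. The features, sequences, and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumes scikit-learn is available

def transition_features(sequence, n_actions):
    """Flattened action-to-action transition counts of one demonstration."""
    counts = np.zeros((n_actions, n_actions))
    for prev, curr in zip(sequence[:-1], sequence[1:]):
        counts[prev, curr] += 1
    return counts.flatten()

# Toy demonstrations: each list is one user's action sequence.
demos = [
    [0, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [2, 2, 1, 2, 2, 1],
    [2, 1, 2, 2, 1, 2],
]
X = np.array([transition_features(seq, n_actions=3) for seq in demos])
human_types = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(human_types)  # one cluster label ("human type") per demonstration
```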

Stefanos Nikolaidis - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks
    2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2015
    Co-Authors: Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, Julie A Shah
    Abstract:

    We present a framework for automatically learning human user models from joint-action demonstrations, enabling a robot to compute a robust policy for a collaborative task with a human. First, the demonstrated action sequences are clustered into different human types using an unsupervised learning algorithm. A reward function is then learned for each type using an inverse reinforcement learning algorithm. The learned model is then incorporated into a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer online the human type of a new user who was not included in the training set, and can compute a robot policy aligned with that user's preferences. In a human subject experiment (n=30), participants agreed more strongly that the robot anticipated their actions when working with a robot incorporating the proposed framework than when manually annotating the robot's actions (p<0.01). In trials where participants had difficulty annotating the robot actions needed to complete the task, the proposed framework significantly improved team efficiency (p<0.01). The robot incorporating the framework was also more responsive to human actions than one using a policy computed from a reward function hand-coded by a domain expert (p<0.01). These results indicate that learning human user models from joint-action demonstrations and encoding them in a MOMDP formalism can support effective teaming in human-robot collaborative tasks.
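
    The abstract says only that "an inverse reinforcement learning algorithm" is used per type. The toy sketch below shows the linear-reward, feature-expectation-matching idea that underlies many IRL methods; the features, trajectories, and learning rate are illustrative assumptions, and the policy's feature expectations are a fixed stand-in for what a full method would recompute by re-solving for the optimal policy each iteration.

```python
import numpy as np

def feature_expectations(trajectories, phi, gamma=0.95):
    """Discounted feature counts, averaged over a set of state trajectories."""
    mu = np.zeros(phi.shape[1])
    for traj in trajectories:
        for t, s in enumerate(traj):
            mu += (gamma ** t) * phi[s]
    return mu / len(trajectories)

# Toy setup: three states described by two features; R(s) = w . phi(s).
phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.5, 0.5]])
demo_mu = feature_expectations([[0, 2, 0], [0, 0, 2]], phi)  # one human type
policy_mu = feature_expectations([[1, 2, 1]], phi)           # fixed stand-in

w = np.zeros(2)
for _ in range(20):
    # Gradient step pushing the reward toward the demonstrated behavior; a
    # full IRL loop would re-derive policy_mu under the current w each time.
    w += 0.05 * (demo_mu - policy_mu)
print(w)  # reward weights favoring the states the demonstrations visit
```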

  • Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks
    2015
    Co-Authors: Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, Julie Shah
    Abstract:

    We present a framework for learning human user models from joint-action demonstrations that enables the robot to compute a robust policy for a collaborative task with a human. The learning takes place fully automatically, without any human intervention. First, we describe the clustering of demonstrated action sequences into different human types using an unsupervised learning algorithm. These demonstrated sequences are also used by the robot to learn a reward function representative of each type, using an inverse reinforcement learning algorithm. The learned model is then used as part of a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer, either offline or online, the human type of a new user who was not included in the training set, and can compute a robot policy that is aligned with the preferences of this new user and robust to deviations of the human's actions from the prior demonstrations. Finally, we validate the approach using data collected in human subject experiments, and conduct proof-of-concept demonstrations in which a person performs a collaborative task with a small industrial robot.
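
    One simple way to turn a belief over human types into robot behavior is a QMDP-style approximation: keep a Q-table per candidate type and act greedily with respect to the belief-weighted average. This is a sketch under that assumption, not necessarily the solver used in the paper; the Q-values are toy stand-ins.

```python
import numpy as np

def belief_weighted_action(belief, q_tables, state):
    """Pick the robot action maximizing the belief-weighted Q-value.

    q_tables[t][state][action] = value of the action if the human has type t.
    """
    expected_q = sum(b * q[state] for b, q in zip(belief, q_tables))
    return int(np.argmax(expected_q))

q_tables = [
    np.array([[1.0, 0.0]]),  # under type 0, action 0 is best
    np.array([[0.0, 1.0]]),  # under type 1, action 1 is best
]
print(belief_weighted_action(np.array([0.8, 0.2]), q_tables, state=0))  # -> 0
print(belief_weighted_action(np.array([0.1, 0.9]), q_tables, state=0))  # -> 1
```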

Keren Gu - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks
    2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2015
    Co-Authors: Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, Julie A Shah
    Abstract:

    We present a framework for automatically learning human user models from joint-action demonstrations, enabling a robot to compute a robust policy for a collaborative task with a human. First, the demonstrated action sequences are clustered into different human types using an unsupervised learning algorithm. A reward function is then learned for each type using an inverse reinforcement learning algorithm. The learned model is then incorporated into a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer online the human type of a new user who was not included in the training set, and can compute a robot policy aligned with that user's preferences. In a human subject experiment (n=30), participants agreed more strongly that the robot anticipated their actions when working with a robot incorporating the proposed framework than when manually annotating the robot's actions (p<0.01). In trials where participants had difficulty annotating the robot actions needed to complete the task, the proposed framework significantly improved team efficiency (p<0.01). The robot incorporating the framework was also more responsive to human actions than one using a policy computed from a reward function hand-coded by a domain expert (p<0.01). These results indicate that learning human user models from joint-action demonstrations and encoding them in a MOMDP formalism can support effective teaming in human-robot collaborative tasks.
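
    The sketch below runs a toy interaction loop combining online type inference with belief-weighted action selection: the robot maintains a belief over the hidden human type, updates it from each observed human action, and acts greedily on the belief-weighted Q-values. All models here are illustrative stand-ins for the learned ones.

```python
import numpy as np

n_types, n_actions, state = 2, 2, 0
# Toy stand-ins for the learned models (single state for brevity):
# P(human action | state, type) and per-type robot Q-values.
human_model = np.array([[[0.9, 0.1]],   # type 0 prefers action 0
                        [[0.2, 0.8]]])  # type 1 prefers action 1
q = np.array([[[1.0, 0.0]],
              [[0.0, 1.0]]])

belief = np.full(n_types, 1.0 / n_types)
rng = np.random.default_rng(0)
true_type = 1  # hidden from the robot

for step in range(5):
    human_action = rng.choice(n_actions, p=human_model[true_type, state])
    belief *= human_model[:, state, human_action]  # Bayes update over types
    belief /= belief.sum()
    robot_action = int(np.argmax(belief @ q[:, state, :]))
    print(f"step {step}: belief={belief.round(2)}, robot action={robot_action}")
```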

  • Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks
    2015
    Co-Authors: Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, Julie Shah
    Abstract:

    We present a framework for learning human user models from joint-action demonstrations that enables the robot to compute a robust policy for a collaborative task with a human. The learning takes place fully automatically, without any human intervention. First, we describe the clustering of demonstrated action sequences into different human types using an unsupervised learning algorithm. These demonstrated sequences are also used by the robot to learn a reward function representative of each type, using an inverse reinforcement learning algorithm. The learned model is then used as part of a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer, either offline or online, the human type of a new user who was not included in the training set, and can compute a robot policy that is aligned with the preferences of this new user and robust to deviations of the human's actions from the prior demonstrations. Finally, we validate the approach using data collected in human subject experiments, and conduct proof-of-concept demonstrations in which a person performs a collaborative task with a small industrial robot.

Ramya Ramakrishnan - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks
    2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2015
    Co-Authors: Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, Julie A Shah
    Abstract:

    We present a framework for automatically learning human user models from joint-action demonstrations, enabling a robot to compute a robust policy for a collaborative task with a human. First, the demonstrated action sequences are clustered into different human types using an unsupervised learning algorithm. A reward function is then learned for each type using an inverse reinforcement learning algorithm. The learned model is then incorporated into a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer online the human type of a new user who was not included in the training set, and can compute a robot policy aligned with that user's preferences. In a human subject experiment (n=30), participants agreed more strongly that the robot anticipated their actions when working with a robot incorporating the proposed framework than when manually annotating the robot's actions (p<0.01). In trials where participants had difficulty annotating the robot actions needed to complete the task, the proposed framework significantly improved team efficiency (p<0.01). The robot incorporating the framework was also more responsive to human actions than one using a policy computed from a reward function hand-coded by a domain expert (p<0.01). These results indicate that learning human user models from joint-action demonstrations and encoding them in a MOMDP formalism can support effective teaming in human-robot collaborative tasks.

  • Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks
    2015
    Co-Authors: Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, Julie Shah
    Abstract:

    We present a framework for learning human user models from joint-action demonstrations that enables the robot to compute a robust policy for a collaborative task with a human. The learning takes place fully automatically, without any human intervention. First, we describe the clustering of demonstrated action sequences into different human types using an unsupervised learning algorithm. These demonstrated sequences are also used by the robot to learn a reward function representative of each type, using an inverse reinforcement learning algorithm. The learned model is then used as part of a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer, either offline or online, the human type of a new user who was not included in the training set, and can compute a robot policy that is aligned with the preferences of this new user and robust to deviations of the human's actions from the prior demonstrations. Finally, we validate the approach using data collected in human subject experiments, and conduct proof-of-concept demonstrations in which a person performs a collaborative task with a small industrial robot.

Manfred Schmitt - One of the best experts on this subject based on the ideXlab platform.

  • Latent state–trait theory and research in personality and individual differences
    European Journal of Personality, 1999
    Co-Authors: Rolf Steyer, Manfred Schmitt
    Abstract:

    Latent state–trait (LST) theory is a generalization of classical test theory designed to take account of the fact that psychological measurement does not take place in a situational vacuum. The basic concepts of LST theory are introduced. The core of the theory consists of two decompositions: (a) the decomposition of any observed score into latent state and measurement error, and (b) the decomposition of any latent state into latent trait and a latent state residual representing situational and/or interaction effects. Latent states and latent traits are defined as special conditional expectations. A score on a latent state variable is defined as the expectation of an observable variable Y_ik given a person in a situation, whereas a score on a latent trait variable is the expectation of Y_ik given a person. The theory also comprises definitions of consistency, occasion specificity, reliability, and stability coefficients. An overview of different models of LST theory is given. It is shown how different research questions in personality psychology can be and have been analysed within the LST framework, and why research in personality and individual differences can profit from LST theory and methodology. Copyright © 1999 John Wiley & Sons, Ltd.
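
    In symbols, the two decompositions and the variance coefficients named in the abstract read as follows. This is a standard rendering of the LST definitions, with Y_ik the observable variable, tau the latent state, xi the latent trait, zeta the latent state residual, and epsilon the measurement error; the exact notation in the paper may differ.

```latex
% The two LST decompositions, with states and traits as conditional
% expectations:
\begin{align}
  Y_{ik}    &= \tau_{ik} + \varepsilon_{ik}, &
  \tau_{ik} &= \xi_{ik} + \zeta_{ik},\\
  \tau_{ik} &= E\!\left(Y_{ik} \mid \text{person},\ \text{situation}\right), &
  \xi_{ik}  &= E\!\left(Y_{ik} \mid \text{person}\right).
\end{align}
% The derived coefficients: consistency, occasion specificity, and
% reliability (their sum), as proportions of observed-score variance.
\begin{align}
  \mathrm{Con}(Y_{ik}) &= \frac{\mathrm{Var}(\xi_{ik})}{\mathrm{Var}(Y_{ik})}, &
  \mathrm{Spe}(Y_{ik}) &= \frac{\mathrm{Var}(\zeta_{ik})}{\mathrm{Var}(Y_{ik})}, &
  \mathrm{Rel}(Y_{ik}) &= \mathrm{Con}(Y_{ik}) + \mathrm{Spe}(Y_{ik}).
\end{align}
```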