Adaptive Agent

The Experts below are selected from a list of 30,828 Experts worldwide, ranked by the ideXlab platform

Daniel A Braun - One of the best experts on this subject based on the ideXlab platform.

  • a minimum relative entropy principle for learning and acting
    Journal of Artificial Intelligence Research, 2010
    Co-Authors: Pedro A Ortega, Daniel A Braun
    Abstract:

    This paper proposes a method to construct an Adaptive Agent that is universal with respect to a given class of experts, where each expert is designed specifically for a particular environment. This Adaptive control problem is formalized as the problem of minimizing the relative entropy of the Adaptive Agent from the expert that is most suitable for the unknown environment. If the Agent is a passive observer, then the optimal solution is the well-known Bayesian predictor. However, if the Agent is active, then its past actions need to be treated as causal interventions on the I/O stream rather than normal probability conditioning. Here it is shown that the solution to this new variational problem is given by a stochastic controller called the Bayesian control rule, which implements Adaptive behavior as a mixture of experts. Furthermore, it is shown that under mild assumptions, the Bayesian control rule converges to the control law of the most suitable expert.

  • a minimum relative entropy principle for learning and acting
    arXiv: Artificial Intelligence, 2008
    Co-Authors: Pedro A Ortega, Daniel A Braun
    Abstract:

    This paper proposes a method to construct an Adaptive Agent that is universal with respect to a given class of experts, where each expert is an Agent that has been designed specifically for a particular environment. This Adaptive control problem is formalized as the problem of minimizing the relative entropy of the Adaptive Agent from the expert that is most suitable for the unknown environment. If the Agent is a passive observer, then the optimal solution is the well-known Bayesian predictor. However, if the Agent is active, then its past actions need to be treated as causal interventions on the I/O stream rather than normal probability conditioning. Here it is shown that the solution to this new variational problem is given by a stochastic controller called the Bayesian control rule, which implements Adaptive behavior as a mixture of experts. Furthermore, it is shown that under mild assumptions, the Bayesian control rule converges to the control law of the most suitable expert.
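
The two entries above are versions of the same paper. As a rough, non-authoritative illustration of the Bayesian control rule they describe, the sketch below samples an expert from the current posterior at every step (so the agent's own actions enter the I/O stream as interventions rather than as evidence) and updates the posterior only through the observation likelihoods. The `Expert` interface and the `environment.step` call are assumptions made for this example, not the authors' code.

```python
import numpy as np

class Expert:
    """Hypothetical expert interface: a policy and a predictive model for one environment."""
    def __init__(self, policy, predictor):
        self.policy = policy        # policy(history) -> an action
        self.predictor = predictor  # predictor(history, action, obs) -> likelihood of obs

def bayesian_control_rule(experts, prior, environment, horizon):
    """Act for `horizon` steps; return the final posterior over experts."""
    posterior = np.array(prior, dtype=float)
    posterior /= posterior.sum()
    history = []
    for _ in range(horizon):
        # Sample an expert from the posterior and act with its policy.
        # Sampling, rather than conditioning on the agent's own actions,
        # is what treats past actions as causal interventions.
        k = np.random.choice(len(experts), p=posterior)
        action = experts[k].policy(history)
        obs = environment.step(action)   # assumed environment interface
        # Update the posterior from observation likelihoods only; under mild
        # assumptions it concentrates on the most suitable expert, so behavior
        # converges to that expert's control law.
        likelihoods = np.array([e.predictor(history, action, obs) for e in experts])
        posterior = posterior * likelihoods
        posterior /= posterior.sum()
        history.append((action, obs))
    return posterior
```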

Toyoaki Nishida - One of the best experts on this subject based on the ideXlab platform.

  • formation conditions of mutual adaptation in human Agent collaborative interaction
    Applied Intelligence, 2012
    Co-Authors: Yoshimasa Ohmoto, Kazuhiro Ueda, Takanori Komatsu, Takeshi Okadome, Koji Kamei, Shogo Okada, Yasuyuki Sumi, Toyoaki Nishida
    Abstract:

    When an Adaptive Agent works with a human user on a collaborative task, a mutual adaptation phenomenon is believed to enable the Agent to handle flexible mapping relations between the human user's instructions and the Agent's actions, so that instructions can be issued flexibly by ordinary people. To elucidate the conditions required to induce the mutual adaptation phenomenon, we designed an appropriate experimental environment called "WAITER" (Waiter Agent Interactive Training Experimental Restaurant) and conducted two experiments in this environment. The experimental results suggest that the proposed conditions can induce the mutual adaptation phenomenon.

  • Establishing adaptation loop in interaction between human user and Adaptive Agent
    2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, 2009
    Co-Authors: Yoshimasa Ohmoto, Kazuhiro Ueda, Takanori Komatsu, Takeshi Okadome, Koji Kamei, Shogo Okada, Yasuyuki Sumi, Toyoaki Nishida
    Abstract:

    In order to develop an Adaptive Agent that a human user finds easy to adapt to, it is useful to establish an adaptation loop between the user and the Agent. A mutual adaptation phenomenon often occurs during the establishment of such an adaptation loop. Aiming to disclose the essence of mutual adaptation, we designed a waiter Agent task and conducted an experiment on human-Agent mutual adaptation. Results of the experiment imply that not only the response behavior of the Agent but also the type of human user affects the establishment of the adaptation loop.

  • AMT - Actively Adaptive Agent for Human-Agent Collaborative Task
    Active Media Technology, 2009
    Co-Authors: Yoshimasa Ohmoto, Kazuhiro Ueda, Takanori Komatsu, Takeshi Okadome, Koji Kamei, Shogo Okada, Yasuyuki Sumi, Toyoaki Nishida
    Abstract:

    An active interface is one of the critical characteristics of Agents that have to interact with human users to achieve human-Agent collaboration. This characteristic is especially important in the beginning phase of human-Agent interaction, when an ordinary human user starts to interact with an Adaptive autonomous Agent. In order to investigate the principal characteristics of an active interface, we developed a human-Agent collaborative experimental environment named WAITER. Two types of experiment were conducted: a WOZ Agent experiment and an autonomous Agent experiment. The objective of the experiments was to observe how human users change their instructions when interacting with Adaptive Agents with different degrees of freedom. The experimental results indicate that participants can recognize changes in the Agent's actions and change their instruction methods accordingly. This implies that changes of instruction method depend not only on the waiter Agent's reactions, but also on the human manager's cognitive model of the Agent.

Pedro A Ortega - One of the best experts on this subject based on the ideXlab platform.

  • a minimum relative entropy principle for learning and acting
    Journal of Artificial Intelligence Research, 2010
    Co-Authors: Pedro A Ortega, Daniel A Braun
    Abstract:

    This paper proposes a method to construct an Adaptive Agent that is universal with respect to a given class of experts, where each expert is designed specifically for a particular environment. This Adaptive control problem is formalized as the problem of minimizing the relative entropy of the Adaptive Agent from the expert that is most suitable for the unknown environment. If the Agent is a passive observer, then the optimal solution is the well-known Bayesian predictor. However, if the Agent is active, then its past actions need to be treated as causal interventions on the I/O stream rather than normal probability conditioning. Here it is shown that the solution to this new variational problem is given by a stochastic controller called the Bayesian control rule, which implements Adaptive behavior as a mixture of experts. Furthermore, it is shown that under mild assumptions, the Bayesian control rule converges to the control law of the most suitable expert.

  • a minimum relative entropy principle for learning and acting
    arXiv: Artificial Intelligence, 2008
    Co-Authors: Pedro A Ortega, Daniel A Braun
    Abstract:

    This paper proposes a method to construct an Adaptive Agent that is universal with respect to a given class of experts, where each expert is an Agent that has been designed specifically for a particular environment. This Adaptive control problem is formalized as the problem of minimizing the relative entropy of the Adaptive Agent from the expert that is most suitable for the unknown environment. If the Agent is a passive observer, then the optimal solution is the well-known Bayesian predictor. However, if the Agent is active, then its past actions need to be treated as causal interventions on the I/O stream rather than normal probability conditioning. Here it is shown that the solution to this new variational problem is given by a stochastic controller called the Bayesian control rule, which implements Adaptive behavior as a mixture of experts. Furthermore, it is shown that under mild assumptions, the Bayesian control rule converges to the control law of the most suitable expert.

Jan Treur - One of the best experts on this subject based on the ideXlab platform.

  • Healing the next generation: an Adaptive Agent model for the effects of parental narcissism
    Brain Informatics, 2021
    Co-Authors: Fakhra Jabeen, Charlotte Gerritsen, Jan Treur
    Abstract:

    Parents play an important role in the mental development of a child. In our previous work, we addressed how a narcissistic parent influences a child (online/offline) when (s)he is happy and admires the child. Now, we address the influence of a parent who is not so pleased, and may curse the child for being the reason for his or her unhappiness. An abusive relationship with a parent can also cause trauma and poor mental health in the child. We also address how certain coping behaviors can help the child deal with such a situation. The aim of the study is therefore threefold: we present an Adaptive Agent model of a child that incorporates the concept of mirroring through social contagion, the child's avoidance behaviors, and the effects of regulation strategies to cope with stressful situations.
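
As a loose illustration of the kind of dynamics such an Adaptive Agent model can capture (not the authors' actual model; every weight and speed factor below is an assumption made for the example), the following sketch lets a child's stress state absorb a parent's negative affect through contagion while a simple regulation term counteracts it.

```python
def simulate(steps=200, dt=0.1,
             w_parent_to_child=0.8,   # contagion strength (assumed)
             eta_child=0.5,           # child's speed of adjustment (assumed)
             regulation=0.3):         # strength of the coping/regulation term (assumed)
    parent_negative = 0.9             # parent's sustained negative affect
    child_stress = 0.1                # child's initial stress level
    trace = []
    for _ in range(steps):
        impact = w_parent_to_child * parent_negative
        # Contagion: the child's stress moves toward the received impact...
        delta = eta_child * (impact - child_stress) * dt
        # ...while the regulation strategy pushes it back down.
        delta -= regulation * child_stress * dt
        child_stress = min(1.0, max(0.0, child_stress + delta))
        trace.append(child_stress)
    return trace

if __name__ == "__main__":
    print(simulate()[-1])  # equilibrium stress level under these assumed parameters
```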

  • PRIMA - Modeling Higher-Order Adaptive Evolutionary Processes by Multilevel Adaptive Agent Models
    PRIMA 2019: Principles and Practice of Multi-Agent Systems, 2019
    Co-Authors: Jan Treur
    Abstract:

    In this paper a fourth-order Adaptive Agent model based on a multilevel reified network model is introduced to describe different orders of adaptivity of the Agent's biological embodiment, as found in a case study on evolutionary processes. The Adaptive Agent model describes how the causal pathways for newly developed features affect the causal pathways of already existing features. This makes these new features one order of adaptivity higher than the existing ones. A network reification approach is shown to be an adequate means to model this.
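
The reification idea can be hinted at with a much smaller example than the paper's fourth-order model. In the hedged sketch below (all parameter names and values are assumptions), the connection weight W is itself a state that adapts Hebbian-style, and the speed H of that adaptation is in turn another state, giving two orders of adaptivity instead of four.

```python
# Order 1: the connection weight W is a reified state that adapts over time.
# Order 2: the speed H of that adaptation is itself a state that changes.

def step(x_src, x_dst, W, H, dt=0.1,
         eta_x=0.5,      # speed factor of the base state (assumed)
         eta_H=0.05,     # speed factor of the second-order state (assumed)
         H_target=0.2):  # value toward which the adaptation speed drifts (assumed)
    x_dst = x_dst + eta_x * (W * x_src - x_dst) * dt   # base-level dynamics
    W = W + H * x_src * x_dst * (1.0 - W) * dt         # first-order adaptation of W
    H = H + eta_H * (H_target - H) * dt                # second-order adaptation of H
    return x_dst, W, H

# Example: repeated stimulation strengthens W while H settles toward its target.
x_dst, W, H = 0.0, 0.1, 0.05
for _ in range(100):
    x_dst, W, H = step(x_src=1.0, x_dst=x_dst, W=W, H=H)
```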

  • An Adaptive Agent model for affective social decision making
    Biologically Inspired Cognitive Architectures, 2013
    Co-Authors: Alexei Sharpanskykh, Jan Treur
    Abstract:

    Decision making under stressful circumstances may involve strong emotions and requires adequate prediction and valuation capabilities. In a social context contagion from others plays an important role as well. Moreover, Agents adapt their decision making based on their experiences over time. Knowledge of principles from neuroscience provides an important source of inspiration to model such processes. In this paper an Adaptive Agent-based computational model is proposed to address the above-mentioned aspects in an integrative manner. As an application Adaptive decision making of an Agent in an emergency evacuation scenario is explored. By means of formal analysis and simulation, the model has been explored and evaluated.
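
An illustrative sketch of the ingredients the abstract lists (parameter names and values are assumptions, not the paper's model): an agent values each option by blending its own learned estimate with the valuations of others (contagion), picks the best option, and adapts its estimate from the experienced outcome.

```python
def choose_and_learn(own_value, neighbour_values, outcome_fn,
                     contagion=0.4, learning_rate=0.2):
    options = list(own_value)
    # Effective valuation: own estimate mixed with the social signal.
    effective = {
        o: (1 - contagion) * own_value[o]
           + contagion * sum(v[o] for v in neighbour_values) / len(neighbour_values)
        for o in options
    }
    choice = max(options, key=lambda o: effective[o])                   # decision
    outcome = outcome_fn(choice)                                        # experienced result
    own_value[choice] += learning_rate * (outcome - own_value[choice])  # adaptation
    return choice, own_value

# Example: choosing between two evacuation exits under social influence.
mine = {"exit_A": 0.6, "exit_B": 0.5}
others = [{"exit_A": 0.2, "exit_B": 0.9}, {"exit_A": 0.3, "exit_B": 0.8}]
print(choose_and_learn(mine, others, outcome_fn=lambda exit_name: 0.7))
```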

  • A Generic Adaptive Agent Architecture Integrating Cognitive and Affective States and their Interaction
    Proceedings of the 3rd Conference on Artificial General Intelligence (AGI-10), 2010
    Co-Authors: Zulfiqar A. Memon, Jan Treur
    Abstract:

    In this paper a generic Adaptive Agent architecture is presented that integrates the interaction between cognitive and affective aspects of mental functioning, based on variants of notions adopted from neurological literature. It is discussed how it addresses a number of issues that have recurred in the recent literature on Cognitive Science and

  • PRIMA - An Adaptive Agent Model for Emotion Reading by Mirroring Body States and Hebbian Learning
    Principles of Practice in Multi-Agent Systems, 2009
    Co-Authors: Tibor Bosse, Zulfiqar A. Memon, Jan Treur
    Abstract:

    In recent years, the topic of emotion reading has increasingly received attention from researchers in Cognitive Science and Artificial Intelligence. To study this phenomenon, in this paper an Adaptive Agent model is presented with capabilities to interpret another Agent's emotions. The presented Agent model is based on recent advances in the neurological literature. First a non-Adaptive Agent model for emotion reading is described involving (preparatory) mirroring of the body states of the other Agent. Here emotion reading is modelled taking into account the Simulation Theory perspective as known from the literature, involving one's own body states and emotions in reading somebody else's emotions. This models an Agent that first develops the same feeling, and after feeling the emotion imputes it to the other Agent. Next the Agent model is extended to an Adaptive model based on a Hebbian learning principle to develop a direct connection between a sensed stimulus concerning another Agent's body state (e.g., a facial expression) and the emotion recognition state. In this Adaptive Agent model the emotion is imputed to the other Agent before it is actually felt. The Agent model has been designed based on principles of neural modelling, and as such has a close relation to a neurological realisation.
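
A minimal sketch of the Hebbian adaptation idea described above (the rate and persistence parameters are assumptions, not the paper's values): the connection from a sensed stimulus, such as an observed facial expression, to the emotion-recognition state is strengthened whenever both are active together, so that over time recognition no longer has to pass through the mirrored body state first.

```python
def hebbian_update(w, stimulus_activation, recognition_activation,
                   rate=0.1, persistence=0.95, dt=1.0):
    # Strengthen the weight when pre- and post-synaptic states are active
    # together, bounded so it stays in [0, 1]; let it decay slowly otherwise.
    growth = rate * stimulus_activation * recognition_activation * (1.0 - w)
    decay = (1.0 - persistence) * w
    return w + (growth - decay) * dt

# Example: repeated co-activation gradually builds a direct connection.
w = 0.0
for _ in range(50):
    w = hebbian_update(w, stimulus_activation=0.9, recognition_activation=0.8)
print(round(w, 3))
```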

Marco Aiello - One of the best experts on this subject based on the ideXlab platform.

  • An Adaptive Agent-based system for deregulated smart grids
    Service Oriented Computing and Applications, 2016
    Co-Authors: Nicola Capodieci, Giacomo Cabri, Giuliano Andrea Pagani, Marco Aiello
    Abstract:

    The power grid is undergoing a major change due mainly to the increased penetration of renewables and of novel digital instruments in the hands of the end users that help them monitor and shift their loads. Such a transformation is only possible by coupling an information and communication technology infrastructure to the existing power distribution grid. Given the scale and the interoperability requirements of such a future system, service-oriented architectures (SOAs) are seen as one of the reference models and are already considered in many of the proposed standards for the smart grid (e.g., IEC-62325 and OASIS eMIX). Beyond the technical issues of what the service-oriented architectures of the smart grid will look like, there is a pressing question about what the added value for the end user could be. Clearly, the operators need to guarantee availability and security of supply, but why should the end users care? In this paper, we explore a scenario in which the end users can both consume and produce small quantities of energy and can trade these quantities in an open and deregulated market. For the trading, they delegate software Agents that can fully interoperate and interact with one another, thus taking advantage of the SOA. In particular, the Agents have strategies, inspired by game theory, to take advantage of a service-oriented smart grid market and earn a profit for their delegators, while implicitly helping to balance the power grid. The proposal is implemented with simulated Agents that interact with existing Web services. To show the advantage of the Agents with strategies, we compare our approach with the “base” Agent approach by means of simulations, highlighting the advantages of the proposal.
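
A hedged sketch of the kind of prosumer trading the abstract describes (all class and parameter names are assumptions, not the paper's implementation): each end user delegates an agent with a simple threshold strategy that offers its surplus when the market price exceeds its reservation price and buys its deficit when the price falls below it.

```python
from dataclasses import dataclass

@dataclass
class ProsumerAgent:
    production_kwh: float      # local generation in the trading interval
    consumption_kwh: float     # local load in the trading interval
    reservation_price: float   # price per kWh above/below which it sells/buys

    def bid(self, market_price: float) -> float:
        """Positive = energy offered to the market, negative = energy requested."""
        surplus = self.production_kwh - self.consumption_kwh
        if surplus > 0 and market_price >= self.reservation_price:
            return surplus      # sell the surplus
        if surplus < 0 and market_price <= self.reservation_price:
            return surplus      # buy the deficit (negative bid)
        return 0.0              # hold and self-balance

# Example round: aggregate bids at a given clearing price.
agents = [ProsumerAgent(3.0, 2.0, 0.10), ProsumerAgent(1.0, 2.5, 0.12)]
net = sum(a.bid(market_price=0.11) for a in agents)
print(f"net energy offered to the grid this round: {net:.1f} kWh")
```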