Imitative Learning

The Experts below are selected from a list of 2,385 Experts worldwide, ranked by the ideXlab platform.

Thomas R. Zentall - One of the best experts on this subject based on the ideXlab platform.

  • Imitation and emulation by dogs using a bidirectional control procedure
    Behavioural Processes, 2009
    Co-Authors: Holly C. Miller, Rebecca Rayburn-reeves, Thomas R. Zentall
    Abstract:

    A successful procedure for studying Imitative behavior in non-humans is the bidirectional control procedure, in which observers are exposed to a demonstrator that responds by moving a manipulandum in one of two different directions (e.g., left vs. right). Imitative Learning is demonstrated when observers make the response in the direction that they observed it being made. This procedure controls for socially mediated effects (the mere presence of a demonstrator), stimulus enhancement (attention drawn to a manipulandum by its movement), and, if an appropriate control is included, emulation (Learning how the environment works). Recent research has suggested that dogs may not demonstrate Imitative Learning when the demonstrator is a human. In the present research, we found that when odors were controlled for, dogs imitated the direction of a screen-push demonstrated by another dog more than in a control condition in which they observed the screen move independently while another dog was present. Furthermore, we found that dogs would match the direction of a screen-push demonstrated by a human, but they were equally likely to match the direction in which the screen moved independently while a human was present.
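
    A minimal sketch of how matching in the bidirectional procedure might be quantified: each observer either matches the demonstrated direction or not, and the proportion of matches is compared against the 50% chance level with an exact binomial test. The counts below are invented for illustration and are not the paper's data.

    ```python
    from math import comb

    def binomial_p_two_sided(k: int, n: int, p: float = 0.5) -> float:
        """Two-sided exact binomial test p-value for k successes in n trials."""
        pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
        observed = pmf[k]
        # Sum the probabilities of all outcomes at least as extreme as k.
        return sum(q for q in pmf if q <= observed + 1e-12)

    # Hypothetical counts: 14 of 16 observer dogs pushed the screen in the
    # demonstrated direction (not the paper's actual data).
    matches, trials = 14, 16
    p_value = binomial_p_two_sided(matches, trials)
    print(f"{matches}/{trials} matches, two-sided p = {p_value:.4f}")
    ```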

  • Imitative Learning in Japanese quail (Coturnix japonica) using the bidirectional control procedure
    Animal Learning & Behavior, 2002
    Co-Authors: Chana K. Akins, Emily D. Klein, Thomas R. Zentall
    Abstract:

    In the bidirectional control procedure, observers are exposed to a conspecific demonstrator responding to a manipulandum in one of two directions (e.g., left vs. right). This procedure controls for socially mediated effects (the mere presence of a conspecific) and stimulus enhancement (attention drawn to a manipulandum by its movement), and it has the added advantage of being symmetrical (the two different responses are similar in topography). Imitative Learning is demonstrated when the observers make the response in the direction that they observed it being made. Recently, however, it has been suggested that when such evidence is found with a predominantly olfactory animal, such as the rat, it may result artifactually from odor cues left on one side of the manipulandum by the demonstrator. In the present experiment, we found that Japanese quail, for which odor cues are not likely to play a role, also showed significant correspondence between the direction in which the demonstrator and the observer pushed a screen to gain access to reward. Furthermore, control quail that observed the screen move, when its movement was not produced by a demonstrator, did not show a similar correspondence between the direction of screen movement observed and the direction they performed. Thus, with the appropriate control, the bidirectional procedure appears to be useful for studying imitation in avian species.

  • Imitative Learning in Japanese quail (Coturnix japonica) depends on the motivational state of the observer quail at the time of observation.
    Journal of Comparative Psychology, 2001
    Co-Authors: Brigette R. Dorrance, Thomas R. Zentall
    Abstract:

    The 2-action method was used to examine whether Imitative Learning in Japanese quail (Coturnix japonica) depends on the motivational state of the observer quail at the time of observation of the demonstrated behavior. Two groups of observers were fed before observation (satiated groups), whereas 2 other groups of observers were deprived of food before observation (hungry groups). Quail were tested either immediately following observation or after a 30-min delay. Results indicated that quail in the hungry groups imitated, whereas those in the satiated groups did not, regardless of whether their test was immediate or delayed. The results suggest that observer quail may not learn (through observation) behavior that leads to a reinforcer for which they are unmotivated at the time of observation. In addition, the results show that quail are able to delay the performance of a response acquired through observation (i.e., they show deferred imitation).

  • Imitation in Japanese quail: The role of reinforcement of demonstrator responding
    Psychonomic Bulletin & Review, 1998
    Co-Authors: Chana K. Akins, Thomas R. Zentall
    Abstract:

    Imitative Learning has been difficult to demonstrate in animals, partly because techniques have not adequately ruled out alternative accounts based on motivational and perceptual mechanisms (Zentall, 1996). Recently, it has been proposed that differences in the effects of observation of two very different response topographies would rule out such artifactual, alternative accounts (Akins & Zentall, 1996). In the present research, we confirmed that strong evidence for imitation can be found in Japanese quail, and that such imitation requires the imitator’s observation of reinforced responding by the demonstrator. Thus, under the present conditions, it appears that an observer will imitate a demonstrated response only if it also observes the appetitive consequences of that response.

  • Imitative Learning in male Japanese quail (Coturnix japonica) using the two-action method.
    Journal of Comparative Psychology, 1996
    Co-Authors: Chana K. Akins, Thomas R. Zentall
    Abstract:

    The study of Imitative Learning in animals has suffered from the presence of a number of confounding motivational and attentional factors (e.g., social facilitation and stimulus enhancement). The two-action method avoids these problems by exposing observers to demonstrators performing a response (e.g., operating a treadle) using 1 of 2 distinctive topographies (e.g., by pecking or by stepping). Japanese quail (Coturnix japonica) observers exposed to conspecific demonstrators showed a high correlation between the topography of the response they observed and the response they performed. These data provide strong evidence for the existence of true Imitative Learning in an active, precocial bird under conditions that control for alternative accounts.

Damien Ernst - One of the best experts on this subject based on the ideXlab platform.

  • DARE - Imitative Learning for Online Planning in Microgrids
    Data Analytics for Renewable Energy Integration, 2015
    Co-Authors: Samy Aittahar, Damien Ernst, Stefan Lodeweyckx, Vincent François-Lavet, Raphaël Fonteneau
    Abstract:

    This paper aims to design an algorithm dedicated to operational planning for microgrids in the challenging case where the scenarios of production and consumption are not known in advance. Using expert knowledge obtained from solving a family of linear programs, we build a Learning set for training a decision-making agent. The empirical performance of the obtained agent, in terms of Levelized Energy Cost (LEC), is compared to the expert performance obtained in the case where the scenarios are known in advance. Preliminary results are promising.
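
    A minimal sketch of the imitative-learning setup this abstract describes, under stated assumptions: an expert obtained by solving linear programs labels microgrid states with actions, and a supervised model is fit to those labels so it can act online without knowing future scenarios. The feature layout, the lp_expert_action helper, and the choice of extremely randomized trees are illustrative assumptions, not the paper's exact pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    rng = np.random.default_rng(0)

    def lp_expert_action(state: np.ndarray) -> float:
        """Hypothetical stand-in for the LP-based expert: given a microgrid
        state, return a storage charge/discharge decision. In the paper this
        would come from solving a linear program with full knowledge of the
        production/consumption scenario."""
        battery_level, net_demand = state
        return float(np.clip(-net_demand, -1.0, 1.0) * (1.0 - battery_level))

    # Build a learning set from expert demonstrations on sampled states:
    # state = (battery level in [0, 1], net demand = consumption - production).
    states = rng.uniform([0.0, -1.0], [1.0, 1.0], size=(5000, 2))
    actions = np.array([lp_expert_action(s) for s in states])

    # Fit a supervised model that imitates the expert's decisions.
    agent = ExtraTreesRegressor(n_estimators=100, random_state=0)
    agent.fit(states, actions)

    # Online, the agent maps the current (unseen) state to an action directly,
    # with no access to the future scenario the LP expert relied on.
    print(agent.predict([[0.4, 0.3]]))
    ```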

  • Imitative Learning for online planning in microgrids
    European Conference on Principles of Data Mining and Knowledge Discovery, 2015
    Co-Authors: Samy Aittahar, Damien Ernst, Vincent François-Lavet, Stefan Lodeweyckx, Raphaël Fonteneau
    Abstract:

    This paper aims to design an algorithm dedicated to operational planning for microgrids in the challenging case where the scenarios of production and consumption are not known in advance. Using expert knowledge obtained from solving a family of linear programs, we build a Learning set for training a decision-making agent. The empirical performance of the obtained agent, in terms of Levelized Energy Cost (LEC), is compared to the expert performance obtained in the case where the scenarios are known in advance. Preliminary results are promising.

  • Imitative Learning for real-time strategy games
    2012 IEEE Conference on Computational Intelligence and Games CIG 2012, 2012
    Co-Authors: Quentin Gemine, Firas Safadi, Raphaël Fonteneau, Damien Ernst
    Abstract:

    Over the past decades, video games have become increasingly popular and complex. Virtual worlds have come a long way since the first arcades, and so have the artificial intelligence (AI) techniques used to control agents in these growing environments. Tasks such as world exploration, constrained pathfinding, and team tactics and coordination, to name a few, are now default requirements for contemporary video games. However, despite its recent advances, video game AI still lacks the ability to learn. In this paper, we attempt to break the barrier between video game AI and machine Learning and propose a generic method allowing real-time strategy (RTS) agents to learn production strategies from a set of recorded games using supervised Learning. We test this Imitative Learning approach on the popular RTS title StarCraft II® and successfully teach new production strategies to a Terran agent facing a Protoss opponent.
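
    A minimal sketch of the approach as described, under assumptions: each recorded game is reduced to (game-state features, next production decision) pairs, and a classifier learns to predict the player's next production choice. The feature set, the load_replay_examples helper, and the classifier choice are hypothetical stand-ins for the paper's actual pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical replay loader: each example is a vector of game-state
    # features (e.g., counts of each unit/building type, resources, supply)
    # paired with the production action the human player issued next.
    def load_replay_examples(n: int = 2000, n_features: int = 20, n_actions: int = 8):
        rng = np.random.default_rng(0)
        X = rng.integers(0, 30, size=(n, n_features)).astype(float)
        # Stand-in labels; real labels would come from parsed replays.
        y = rng.integers(0, n_actions, size=n)
        return X, y

    X, y = load_replay_examples()

    # Supervised imitative learning: predict the next production decision
    # from the current game state, imitating the recorded players.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)

    # In play, the agent queries the model with the live game state.
    current_state = X[0:1]
    print("suggested production action id:", model.predict(current_state)[0])
    ```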

  • CIG - Imitative Learning for real-time strategy games
    2012 IEEE Conference on Computational Intelligence and Games (CIG), 2012
    Co-Authors: Quentin Gemine, Firas Safadi, Raphaël Fonteneau, Damien Ernst
    Abstract:

    Over the past decades, video games have become increasingly popular and complex. Virtual worlds have come a long way since the first arcades, and so have the artificial intelligence (AI) techniques used to control agents in these growing environments. Tasks such as world exploration, constrained pathfinding, and team tactics and coordination, to name a few, are now default requirements for contemporary video games. However, despite its recent advances, video game AI still lacks the ability to learn. In this paper, we attempt to break the barrier between video game AI and machine Learning and propose a generic method allowing real-time strategy (RTS) agents to learn production strategies from a set of recorded games using supervised Learning. We test this Imitative Learning approach on the popular RTS title StarCraft II® and successfully teach new production strategies to a Terran agent facing a Protoss opponent.

Raphaël Fonteneau - One of the best experts on this subject based on the ideXlab platform.

  • DARE - Imitative Learning for Online Planning in Microgrids
    Data Analytics for Renewable Energy Integration, 2015
    Co-Authors: Samy Aittahar, Damien Ernst, Stefan Lodeweyckx, Vincent François-Lavet, Raphaël Fonteneau
    Abstract:

    This paper aims to design an algorithm dedicated to operational planning for microgrids in the challenging case where the scenarios of production and consumption are not known in advance. Using expert knowledge obtained from solving a family of linear programs, we build a Learning set for training a decision-making agent. The empirical performance of the obtained agent, in terms of Levelized Energy Cost (LEC), is compared to the expert performance obtained in the case where the scenarios are known in advance. Preliminary results are promising.

  • Imitative Learning for online planning in microgrids
    European Conference on Principles of Data Mining and Knowledge Discovery, 2015
    Co-Authors: Samy Aittahar, Damien Ernst, Vincent François-Lavet, Stefan Lodeweyckx, Raphaël Fonteneau
    Abstract:

    This paper aims to design an algorithm dedicated to operational planning for microgrids in the challenging case where the scenarios of production and consumption are not known in advance. Using expert knowledge obtained from solving a family of linear programs, we build a Learning set for training a decision-making agent. The empirical performance of the obtained agent, in terms of Levelized Energy Cost (LEC), is compared to the expert performance obtained in the case where the scenarios are known in advance. Preliminary results are promising.

  • Imitative Learning for real-time strategy games
    2012 IEEE Conference on Computational Intelligence and Games CIG 2012, 2012
    Co-Authors: Quentin Gemine, Firas Safadi, Raphaël Fonteneau, Damien Ernst
    Abstract:

    Over the past decades, video games have become increasingly popular and complex. Virtual worlds have come a long way since the first arcades, and so have the artificial intelligence (AI) techniques used to control agents in these growing environments. Tasks such as world exploration, constrained pathfinding, and team tactics and coordination, to name a few, are now default requirements for contemporary video games. However, despite its recent advances, video game AI still lacks the ability to learn. In this paper, we attempt to break the barrier between video game AI and machine Learning and propose a generic method allowing real-time strategy (RTS) agents to learn production strategies from a set of recorded games using supervised Learning. We test this Imitative Learning approach on the popular RTS title StarCraft II® and successfully teach new production strategies to a Terran agent facing a Protoss opponent.

  • CIG - Imitative Learning for real-time strategy games
    2012 IEEE Conference on Computational Intelligence and Games (CIG), 2012
    Co-Authors: Quentin Gemine, Firas Safadi, Raphaël Fonteneau, Damien Ernst
    Abstract:

    Over the past decades, video games have become increasingly popular and complex. Virtual worlds have come a long way since the first arcades, and so have the artificial intelligence (AI) techniques used to control agents in these growing environments. Tasks such as world exploration, constrained pathfinding, and team tactics and coordination, to name a few, are now default requirements for contemporary video games. However, despite its recent advances, video game AI still lacks the ability to learn. In this paper, we attempt to break the barrier between video game AI and machine Learning and propose a generic method allowing real-time strategy (RTS) agents to learn production strategies from a set of recorded games using supervised Learning. We test this Imitative Learning approach on the popular RTS title StarCraft II® and successfully teach new production strategies to a Terran agent facing a Protoss opponent.

Naoyuki Kubota - One of the best experts on this subject based on the ideXlab platform.

  • Associative Learning for Cognitive Development of Partner Robot through Interaction with People
    2009
    Co-Authors: Naoyuki Kubota
    Abstract:

    This paper discusses associative Learning of a partner robot through interaction with people. Human interaction based on gestures is very important for realizing natural communication, and the meaning of a gesture can be understood through actual interaction with, and imitation of, a human. We therefore propose a method for associative Learning based on imitation and conversation. Basically, Imitative Learning is composed of model observation and model reproduction; in addition, model Learning is required to memorize and generalize motion patterns as gestures, model clustering is required to distinguish a specific gesture from others, and model selection is performed during the human interaction. Imitative Learning thus requires the capabilities of model observation, model clustering, model selection, model reproduction, and model Learning simultaneously. We previously proposed a method for the Imitative Learning of partner robots based on visual perception (11,12). First, the robot detects the human face and objects by image processing with a steady-state genetic algorithm (SSGA) (13). Next, a series of movements of the human hand is extracted by SSGA, used for model observation, and the spatio-temporal hand motion pattern is memorized by a spiking neural network (SNN). Furthermore, SSGA is used for generating a trajectory similar to the human hand motion pattern as model reproduction (14). Beyond Imitative Learning, the robot must extract the necessary perceptual information in finite time for natural communication with a human, and associative memory plays a central role in this perception. We therefore propose a method for the simultaneous associative Learning of various types of perceptual information, such as colors, shapes, and gestures, related to the symbolic information used for conversation with a human. Symbolic information used in utterances is helpful for associative Learning because human language has been improved and refined over a long time; the meaning of symbols is neither exact nor precise among people, but linguistic information helps robots share the meanings of patterns in visual images with people. We apply an SNN for the associative Learning of perceptual information. Finally, we conduct several experiments with the partner robot on interaction with people based on imitation and conversation. The experimental results show that the proposed method can refine the relationships among perceptual information and can reflect the updated relationships in natural communication with a human.
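
    A minimal sketch of the spiking-neuron dynamics such an SNN builds on, assuming a basic leaky integrate-and-fire unit (the paper's actual network, encoding, and parameters are not given here): the membrane potential integrates weighted input spikes, decays between time steps, and emits a spike when it crosses a threshold, which is what lets downstream units respond to specific spatio-temporal input patterns.

    ```python
    import numpy as np

    def lif_neuron(spike_inputs: np.ndarray, weights: np.ndarray,
                   decay: float = 0.9, threshold: float = 1.0) -> list[int]:
        """Leaky integrate-and-fire neuron.

        spike_inputs: (T, n_inputs) binary spike trains over T time steps.
        weights:      (n_inputs,) synaptic weights.
        Returns the time steps at which the neuron fired.
        """
        potential, fired = 0.0, []
        for t, spikes in enumerate(spike_inputs):
            potential = decay * potential + float(weights @ spikes)
            if potential >= threshold:
                fired.append(t)
                potential = 0.0  # reset after a spike
        return fired

    rng = np.random.default_rng(0)
    # Two hypothetical input channels, e.g., "hand moved left" / "hand moved
    # right" detectors emitting spikes over 50 time steps.
    inputs = (rng.random((50, 2)) < 0.2).astype(float)
    weights = np.array([0.6, 0.3])
    print("spike times:", lif_neuron(inputs, weights))
    ```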

  • modular fuzzy neural networks for Imitative Learning of a partner robot
    International Joint Conference on Neural Network, 2006
    Co-Authors: Naoyuki Kubota, T Shimizu
    Abstract:

    Imitation is a powerful tool for behavior Learning and human communication. Basically, Imitative Learning is composed of model observation and model reproduction. This paper applies a spiking neural network and a self-organizing map for model observation, and modular fuzzy neural networks and a steady-state genetic algorithm for model reproduction. The proposed method is applied to a partner robot interacting with a human. Experimental results show that the proposed method enables the robot to learn behaviors through imitation and to interact with a human efficiently.
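
    A minimal sketch of a steady-state genetic algorithm of the kind used here for model reproduction, under assumptions: rather than replacing the whole population each generation, each iteration replaces the worst individual with a mutated recombination of two selected parents. The trajectory-fitness function below is a made-up placeholder, not the paper's objective.

    ```python
    import random

    def steady_state_ga(fitness, dim=8, pop_size=20, iters=500, lo=-1.0, hi=1.0):
        """Steady-state GA: each iteration replaces the worst individual
        with a mutated crossover of two randomly chosen parents."""
        pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(iters):
            p1, p2 = random.sample(pop, 2)
            # Uniform crossover followed by small Gaussian mutation.
            child = [random.choice(pair) + random.gauss(0.0, 0.05)
                     for pair in zip(p1, p2)]
            child = [min(hi, max(lo, g)) for g in child]
            worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
            if fitness(child) > fitness(pop[worst]):
                pop[worst] = child
        return max(pop, key=fitness)

    # Placeholder fitness: how closely a candidate trajectory (a vector of
    # joint offsets) matches an observed hand-motion pattern (all zeros here).
    target = [0.0] * 8
    fitness = lambda c: -sum((a - b) ** 2 for a, b in zip(c, target))
    print(steady_state_ga(fitness))
    ```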

  • Visual perception and reproduction for Imitative Learning of a partner robot
    2006
    Co-Authors: Naoyuki Kubota
    Abstract:

    This paper proposes visual perception and model reproduction based on imitation for a partner robot interacting with a human. First, we discuss the role of imitation and propose a method for Imitative behavior generation. After the robot searches for a human by using a CCD camera, human hand positions are extracted from a series of images taken from the camera. Next, the position sequence of the extracted human hand is used as input to a fuzzy spiking neural network, which recognizes the position sequence as a motion pattern. The trajectory for the robot behavior is generated and updated by a steady-state genetic algorithm based on the human motion pattern. Furthermore, a self-organizing map is used for clustering human hand motion patterns. Finally, we show experimental results of Imitative behavior generation through interaction with a human.
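
    A minimal sketch of how a self-organizing map might cluster hand-motion patterns as described, assuming fixed-length motion vectors and a 1-D map; the map size, learning-rate schedule, and the synthetic "gestures" are illustrative assumptions.

    ```python
    import numpy as np

    def train_som(patterns: np.ndarray, n_units: int = 5,
                  epochs: int = 50, lr: float = 0.5) -> np.ndarray:
        """1-D self-organizing map: each unit's weight vector is pulled
        toward patterns for which it (or a map neighbor) is the best match."""
        rng = np.random.default_rng(0)
        d = patterns.shape[1]
        units = rng.uniform(patterns.min(), patterns.max(), size=(n_units, d))
        for epoch in range(epochs):
            alpha = lr * (1.0 - epoch / epochs)       # decaying learning rate
            sigma = max(1.0, n_units / 2 * (1.0 - epoch / epochs))
            for x in patterns[rng.permutation(len(patterns))]:
                bmu = int(np.argmin(np.linalg.norm(units - x, axis=1)))
                for j in range(n_units):
                    h = np.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
                    units[j] += alpha * h * (x - units[j])
        return units

    # Hypothetical motion patterns: flattened hand positions over time.
    rng = np.random.default_rng(1)
    left  = rng.normal(-1.0, 0.1, size=(20, 10))   # "move left" gestures
    right = rng.normal(+1.0, 0.1, size=(20, 10))   # "move right" gestures
    som = train_som(np.vstack([left, right]))
    print("unit means:", som.mean(axis=1).round(2))
    ```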

  • IJCNN - Modular Fuzzy Neural Networks for Imitative Learning of A Partner Robot
    The 2006 IEEE International Joint Conference on Neural Network Proceedings, 2006
    Co-Authors: Naoyuki Kubota, T Shimizu
    Abstract:

    Imitation is a powerful tool for behavior Learning and human communication. Basically, Imitative Learning is composed of model observation and model reproduction. This paper applies a spiking neural network and a self-organizing map for model observation, and modular fuzzy neural networks and a steady-state genetic algorithm for model reproduction. The proposed method is applied to a partner robot interacting with a human. Experimental results show that the proposed method enables the robot to learn behaviors through imitation and to interact with a human efficiently.

  • ICRA - Fuzzy Computing for Communication of A Partner Robot Based on Imitation
    Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005
    Co-Authors: Naoyuki Kubota, K. Nishida
    Abstract:

    This paper discusses communication between a partner robot and a human based on visual tracking, and Imitative Learning for the partner robot. We propose methods for Imitative Learning and communication with a human based on a spiking neural network, a self-organizing map, and a steady-state genetic algorithm. Furthermore, we show experimental results of the partner robot interacting with a human based on imitation.

Quentin Gemine - One of the best experts on this subject based on the ideXlab platform.

  • Imitative Learning for real-time strategy games
    2012 IEEE Conference on Computational Intelligence and Games CIG 2012, 2012
    Co-Authors: Quentin Gemine, Firas Safadi, Raphaël Fonteneau, Damien Ernst
    Abstract:

    Over the past decades, video games have become increasingly popular and complex. Virtual worlds have come a long way since the first arcades, and so have the artificial intelligence (AI) techniques used to control agents in these growing environments. Tasks such as world exploration, constrained pathfinding, and team tactics and coordination, to name a few, are now default requirements for contemporary video games. However, despite its recent advances, video game AI still lacks the ability to learn. In this paper, we attempt to break the barrier between video game AI and machine Learning and propose a generic method allowing real-time strategy (RTS) agents to learn production strategies from a set of recorded games using supervised Learning. We test this Imitative Learning approach on the popular RTS title StarCraft II® and successfully teach new production strategies to a Terran agent facing a Protoss opponent.

  • CIG - Imitative Learning for real-time strategy games
    2012 IEEE Conference on Computational Intelligence and Games (CIG), 2012
    Co-Authors: Quentin Gemine, Firas Safadi, Raphaël Fonteneau, Damien Ernst
    Abstract:

    Over the past decades, video games have become increasingly popular and complex. Virtual worlds have come a long way since the first arcades, and so have the artificial intelligence (AI) techniques used to control agents in these growing environments. Tasks such as world exploration, constrained pathfinding, and team tactics and coordination, to name a few, are now default requirements for contemporary video games. However, despite its recent advances, video game AI still lacks the ability to learn. In this paper, we attempt to break the barrier between video game AI and machine Learning and propose a generic method allowing real-time strategy (RTS) agents to learn production strategies from a set of recorded games using supervised Learning. We test this Imitative Learning approach on the popular RTS title StarCraft II® and successfully teach new production strategies to a Terran agent facing a Protoss opponent.