Point Attractor

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 7653 experts worldwide, ranked by the ideXlab platform

Tetsuya Ogata - One of the best experts on this subject based on the ideXlab platform.

  • Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction
    Frontiers in Neurorobotics, 2016
    Co-Authors: Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata
    Abstract:

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize that mapping in the appropriate contexts of interactive tasks online. To this end, we propose a novel learning method that links language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. In this way, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where it is in the interaction context as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in the appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network indeed formed an attractor structure representing both the language-behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, the language-behavior mapping was achieved by a branching structure; the repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure; and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human on the given task by autonomously switching phases.
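
The "waiting" mode described above rests on a standard dynamical-systems fact: a contractive recurrent map drives every starting state to the same fixed point, where it then stays until new input perturbs it. The sketch below is a toy illustration of that fact only, not the authors' trained network; the dimension, weight scale, and bias are arbitrary choices.

```python
import numpy as np

# Toy recurrent map x_{t+1} = tanh(W x + b). Rescaling W to spectral
# norm 0.5 makes the map a contraction (|tanh'| <= 1), so all initial
# states converge to one fixed point -- a "waiting" state.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.5 / np.linalg.norm(W, 2)  # spectral norm 0.5 => contraction
b = rng.standard_normal(4)

def settle(x, steps=200):
    for _ in range(steps):
        x = np.tanh(W @ x + b)
    return x

x_a = settle(rng.standard_normal(4))
x_b = settle(rng.standard_normal(4))

# Two random starts land on the same point, and that point maps to itself.
assert np.allclose(x_a, x_b, atol=1e-6)
assert np.allclose(x_a, np.tanh(W @ x_a + b), atol=1e-6)
```

In the paper's setting the attractor landscape is shaped by training rather than by hand-picked weights, but the convergence mechanism is the same.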

  • Attractor Representations of Language-Behavior Structure in a Recurrent Neural Network for Human-Robot Interaction
    Intelligent Robots and Systems, 2015
    Co-Authors: Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata
    Abstract:

    In recent years there has been increased interest in studies that explore the integrative learning of language and other modalities using neural network models. For practical application to human-robot interaction, however, the acquired semantic structure linking language and meaning has to be available immediately and repeatably whenever necessary, just as in everyday communication. As a solution to this problem, this study proposes a method in which a recurrent neural network self-organizes cyclic attractors that reflect semantic structure and represent interaction flows in its internal dynamics. To evaluate this method we design a simple task in which a human verbally directs a robot, which responds appropriately. When the network is trained on data representing the interaction series, cyclic attractors reflecting the semantic structure self-organize. The network first receives a verbal direction, and its internal state moves along the first half of the cyclic attractor, whose branch structure corresponds to the semantics. The internal state then reaches a region from which the appropriate behavior can be generated. Finally, the internal state traverses the second half of the cycle and converges on its initial point while generating that behavior. By self-organizing such an internal structure in its forward dynamics, the model achieves immediate and repeatable responses to linguistic directions. Furthermore, the network self-organizes a fixed-point attractor and is thus able to wait for directions. It can therefore repeat the interaction flexibly without explicit turn-taking signs.
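
The cyclic-attractor behavior described above can be illustrated with a minimal planar map (a generic toy system, not the paper's RNN; the gain `k` and angular step `omega` are arbitrary): the radius is pulled toward a closed loop while the angle advances, so trajectories starting inside or outside the loop converge onto the same cycle and then traverse it repeatedly.

```python
import numpy as np

def step(z, k=0.05, omega=0.3):
    # Pull the radius toward 1 while advancing the angle: the unit
    # circle is a stable cyclic attractor of this map.
    r, th = abs(z), np.angle(z)
    r = r + k * r * (1.0 - r**2)
    return r * np.exp(1j * (th + omega))

def settle(z, steps=500):
    for _ in range(steps):
        z = step(z)
    return z

z_in = settle(0.1 + 0.0j)   # start inside the loop
z_out = settle(2.0 + 0.0j)  # start outside the loop

# Both trajectories end up on the unit circle (the cyclic attractor).
assert abs(abs(z_in) - 1.0) < 1e-6
assert abs(abs(z_out) - 1.0) < 1e-6
```

The repeatability claimed in the abstract follows from this convergence: once on the cycle, the state returns to its starting point each period, so the interaction pattern can be replayed indefinitely.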

Damien Coyle - One of the best experts on this subject based on the ideXlab platform.

  • Model-Based Bifurcation and Power Spectral Analyses of Thalamocortical Alpha Rhythm Slowing in Alzheimer's Disease
    Neurocomputing, 2013
    Co-Authors: Basabdatta Sen Bhattacharya, Liam Maguire, Yuksel Cakir, Neslihan Serapsengor, Damien Coyle
    Abstract:

    The focus of this paper is to correlate the bifurcation behaviour of a thalamocortical neural mass model with power spectral alpha-band (8–13 Hz) oscillatory activity in electroencephalography (EEG). The aim is to understand the neural correlates of alpha rhythm slowing (a decrease in the mean frequency of oscillation), a hallmark in the EEG of Alzheimer's Disease (AD) patients. The neural mass model used, referred to herein as the modARm, is a modified version of Lopes da Silva's alpha rhythm model (ARm). Previously, the power spectral behaviour of the modARm was analysed in the context of AD. In this work, we revisit the modARm to jointly study the dynamical behaviour of the model and its power spectral behaviour within the alpha band while simulating 'synaptic depletion', a hallmark neuropathological condition in AD. The results show that the modARm exhibits two operating modes in the time domain, a point-attractor mode and a limit-cycle mode; the alpha rhythmic content in the model output is maximal in the vicinity of the bifurcation point. Furthermore, the inhibitory synaptic connectivity from the cells of the Thalamic Reticular Nucleus to the Thalamo-Cortical Relay cells significantly influences the bifurcation behaviour: while a decrease in inhibition can induce limit-cycle behaviour corresponding to abnormal brain states such as seizures, an increase in inhibition in the awake state, corresponding to a point-attractor mode, may result in the slowing of the alpha rhythms observed in AD. These observations emphasise the importance of bifurcation analysis of model behaviour in inferring the biological relevance of results obtained from power-spectral analysis of neural models in the context of understanding neurodegeneration.
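
The two operating modes and the transition between them can be sketched with the radial part of a Hopf bifurcation normal form, dr/dt = r(a - r²). This is a generic textbook example, not the modARm itself, and the parameter `a` only loosely stands in for the inhibition parameter discussed above: for a < 0 the origin is a point attractor, while for a > 0 a limit cycle of radius √a appears.

```python
def long_run_amplitude(a, r0=0.1, dt=0.01, steps=20000):
    """Euler-integrate the Hopf normal-form radius dr/dt = r*(a - r**2)
    and return the long-run oscillation amplitude."""
    r = r0
    for _ in range(steps):
        r += dt * r * (a - r**2)
    return r

amp_fixed = long_run_amplitude(a=-0.5)  # a < 0: point-attractor mode
amp_cycle = long_run_amplitude(a=0.25)  # a > 0: limit-cycle mode

# In the point-attractor regime the amplitude decays to zero; past the
# bifurcation it settles at the cycle radius sqrt(a) = 0.5.
assert amp_fixed < 1e-3
assert abs(amp_cycle - 0.5) < 1e-3
```

Sweeping `a` through zero and recording the long-run amplitude is the simplest version of the bifurcation analysis the paper applies to its far richer neural mass model.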