Language Behavior

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 213 Experts worldwide, ranked by the ideXlab platform

Tetsuya Ogata - One of the best experts on this subject based on the ideXlab platform.

  • Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human–Robot Interaction
    Frontiers in Neurorobotics, 2016
    Co-Authors: Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata
    Abstract:

    To work cooperatively with humans by using Language, robots must not only acquire a mapping between Language and their Behavior but also autonomously utilize that mapping in the appropriate contexts of interactive tasks online. To this end, we propose a novel learning method that links Language to robot Behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of Language and Behavior but as sequential data constructed from the actual temporal flow of the task. In this way, the internal dynamics of the network model both Language–Behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing, as its current state, where it is in the interaction context, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in the appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate Behavior in response to a human's linguistic instructions. After learning, the network indeed formed an attractor structure representing both the Language–Behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, the Language–Behavior mapping was achieved by a branching structure, the repetition of the human's instructions and the robot's Behavioral responses was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact with a human online on the given task by autonomously switching phases.
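
The recurrent architecture the abstract describes can be illustrated with a minimal Elman-style step in which a context layer carries the hidden state across time. This is a rough sketch under assumed dimensions and plain-list weights, not the authors' actual trained model:

```python
import math

def rnn_step(x, context, w_in, w_rec, w_out):
    """One step of an Elman-style recurrent network.

    The context layer holds the previous hidden state, so the network's
    current internal state encodes where it is in the interaction flow.
    Weights are plain nested lists; sizes are inferred from them.
    """
    n_hidden = len(w_rec)
    hidden = [
        math.tanh(
            sum(w_in[i][j] * x[j] for j in range(len(x)))
            + sum(w_rec[i][k] * context[k] for k in range(n_hidden))
        )
        for i in range(n_hidden)
    ]
    output = [
        sum(w_out[o][i] * hidden[i] for i in range(n_hidden))
        for o in range(len(w_out))
    ]
    return output, hidden  # the new hidden state becomes the next context

def run_sequence(inputs, w_in, w_rec, w_out):
    """Feed a whole sequence (e.g. interleaved language and behavior
    frames); the context is carried across steps rather than reset."""
    context = [0.0] * len(w_rec)
    outputs = []
    for x in inputs:
        y, context = rnn_step(x, context, w_in, w_rec, w_out)
        outputs.append(y)
    return outputs, context
```

Because the same context vector is threaded through both recognition steps (language input) and generation steps (behavior output), switching between phases can emerge from the state itself rather than from an external signal, which is the property the abstract emphasizes.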

  • Attractor Representations of Language–Behavior Structure in a Recurrent Neural Network for Human–Robot Interaction
    Intelligent Robots and Systems, 2015
    Co-Authors: Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata
    Abstract:

    In recent years there has been increased interest in studies that explore integrative learning of Language and other modalities using neural network models. For practical application to human-robot interaction, however, the acquired semantic structure linking Language and meaning has to be available immediately and repeatably whenever necessary, just as in everyday communication. As a solution to this problem, this study proposes a method in which a recurrent neural network self-organizes cyclic attractors that reflect semantic structure and represent interaction flows in its internal dynamics. To evaluate this method we designed a simple task in which a human verbally directs a robot, which responds appropriately. By training the network with data that represent the interaction series, cyclic attractors that reflect the semantic structure are self-organized. The network first receives a verbal direction, and its internal state moves along the first half of a cyclic attractor, whose branch structure corresponds to the semantics. The internal state then reaches a point from which it can generate the appropriate Behavior. Finally, the internal state moves along the second half of the cycle and converges on the cycle's initial point while generating the appropriate Behavior. By self-organizing such an internal structure in its forward dynamics, the model achieves immediate and repeatable responses to linguistic directions. Furthermore, the network self-organizes a fixed-point attractor and is thus able to wait for directions. It can therefore repeat the interaction flexibly without explicit turn-taking signs.
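
The fixed-point attractor that lets the network wait for directions can be illustrated with a toy autonomous context update: with no input, repeatedly applying a contractive recurrent map drives the internal state to a single resting point regardless of where it starts. The map and weights below are made-up illustrative assumptions, not the trained network from the paper:

```python
import math

def context_map(h, w_rec, bias):
    """Autonomous update of the context layer with no external input."""
    n = len(h)
    return [
        math.tanh(sum(w_rec[i][j] * h[j] for j in range(n)) + bias[i])
        for i in range(n)
    ]

def find_fixed_point(h0, w_rec, bias, tol=1e-10, max_iter=10000):
    """Iterate the map until the state stops moving. For a contractive
    map (small recurrent weights) this converges to the unique fixed
    point, i.e. the network's resting 'waiting' state."""
    h = h0
    for _ in range(max_iter):
        h_next = context_map(h, w_rec, bias)
        if max(abs(a - b) for a, b in zip(h_next, h)) < tol:
            return h_next
        h = h_next
    raise RuntimeError("did not converge")
```

Starting the iteration from two different initial states converges on the same point, so after finishing a behavioral response the state can settle there and stay until a new verbal direction perturbs it onto a cyclic trajectory.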

Tatsuro Yamada - One of the best experts on this subject based on the ideXlab platform.

  • Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human–Robot Interaction
    Frontiers in Neurorobotics, 2016
    Co-Authors: Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata
    Abstract:

    To work cooperatively with humans by using Language, robots must not only acquire a mapping between Language and their Behavior but also autonomously utilize that mapping in the appropriate contexts of interactive tasks online. To this end, we propose a novel learning method that links Language to robot Behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of Language and Behavior but as sequential data constructed from the actual temporal flow of the task. In this way, the internal dynamics of the network model both Language–Behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing, as its current state, where it is in the interaction context, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in the appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate Behavior in response to a human's linguistic instructions. After learning, the network indeed formed an attractor structure representing both the Language–Behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, the Language–Behavior mapping was achieved by a branching structure, the repetition of the human's instructions and the robot's Behavioral responses was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact with a human online on the given task by autonomously switching phases.

  • Attractor Representations of Language–Behavior Structure in a Recurrent Neural Network for Human–Robot Interaction
    Intelligent Robots and Systems, 2015
    Co-Authors: Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata
    Abstract:

    In recent years there has been increased interest in studies that explore integrative learning of Language and other modalities using neural network models. For practical application to human-robot interaction, however, the acquired semantic structure linking Language and meaning has to be available immediately and repeatably whenever necessary, just as in everyday communication. As a solution to this problem, this study proposes a method in which a recurrent neural network self-organizes cyclic attractors that reflect semantic structure and represent interaction flows in its internal dynamics. To evaluate this method we designed a simple task in which a human verbally directs a robot, which responds appropriately. By training the network with data that represent the interaction series, cyclic attractors that reflect the semantic structure are self-organized. The network first receives a verbal direction, and its internal state moves along the first half of a cyclic attractor, whose branch structure corresponds to the semantics. The internal state then reaches a point from which it can generate the appropriate Behavior. Finally, the internal state moves along the second half of the cycle and converges on the cycle's initial point while generating the appropriate Behavior. By self-organizing such an internal structure in its forward dynamics, the model achieves immediate and repeatable responses to linguistic directions. Furthermore, the network self-organizes a fixed-point attractor and is thus able to wait for directions. It can therefore repeat the interaction flexibly without explicit turn-taking signs.

Shingo Murata - One of the best experts on this subject based on the ideXlab platform.

  • Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human–Robot Interaction
    Frontiers in Neurorobotics, 2016
    Co-Authors: Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata
    Abstract:

    To work cooperatively with humans by using Language, robots must not only acquire a mapping between Language and their Behavior but also autonomously utilize that mapping in the appropriate contexts of interactive tasks online. To this end, we propose a novel learning method that links Language to robot Behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of Language and Behavior but as sequential data constructed from the actual temporal flow of the task. In this way, the internal dynamics of the network model both Language–Behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing, as its current state, where it is in the interaction context, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in the appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate Behavior in response to a human's linguistic instructions. After learning, the network indeed formed an attractor structure representing both the Language–Behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, the Language–Behavior mapping was achieved by a branching structure, the repetition of the human's instructions and the robot's Behavioral responses was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact with a human online on the given task by autonomously switching phases.

  • Attractor Representations of Language–Behavior Structure in a Recurrent Neural Network for Human–Robot Interaction
    Intelligent Robots and Systems, 2015
    Co-Authors: Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata
    Abstract:

    In recent years there has been increased interest in studies that explore integrative learning of Language and other modalities using neural network models. For practical application to human-robot interaction, however, the acquired semantic structure linking Language and meaning has to be available immediately and repeatably whenever necessary, just as in everyday communication. As a solution to this problem, this study proposes a method in which a recurrent neural network self-organizes cyclic attractors that reflect semantic structure and represent interaction flows in its internal dynamics. To evaluate this method we designed a simple task in which a human verbally directs a robot, which responds appropriately. By training the network with data that represent the interaction series, cyclic attractors that reflect the semantic structure are self-organized. The network first receives a verbal direction, and its internal state moves along the first half of a cyclic attractor, whose branch structure corresponds to the semantics. The internal state then reaches a point from which it can generate the appropriate Behavior. Finally, the internal state moves along the second half of the cycle and converges on the cycle's initial point while generating the appropriate Behavior. By self-organizing such an internal structure in its forward dynamics, the model achieves immediate and repeatable responses to linguistic directions. Furthermore, the network self-organizes a fixed-point attractor and is thus able to wait for directions. It can therefore repeat the interaction flexibly without explicit turn-taking signs.

Hiroaki Arie - One of the best experts on this subject based on the ideXlab platform.

  • Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human–Robot Interaction
    Frontiers in Neurorobotics, 2016
    Co-Authors: Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata
    Abstract:

    To work cooperatively with humans by using Language, robots must not only acquire a mapping between Language and their Behavior but also autonomously utilize that mapping in the appropriate contexts of interactive tasks online. To this end, we propose a novel learning method that links Language to robot Behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of Language and Behavior but as sequential data constructed from the actual temporal flow of the task. In this way, the internal dynamics of the network model both Language–Behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing, as its current state, where it is in the interaction context, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in the appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate Behavior in response to a human's linguistic instructions. After learning, the network indeed formed an attractor structure representing both the Language–Behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, the Language–Behavior mapping was achieved by a branching structure, the repetition of the human's instructions and the robot's Behavioral responses was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact with a human online on the given task by autonomously switching phases.

  • Attractor Representations of Language–Behavior Structure in a Recurrent Neural Network for Human–Robot Interaction
    Intelligent Robots and Systems, 2015
    Co-Authors: Tatsuro Yamada, Shingo Murata, Hiroaki Arie, Tetsuya Ogata
    Abstract:

    In recent years there has been increased interest in studies that explore integrative learning of Language and other modalities using neural network models. For practical application to human-robot interaction, however, the acquired semantic structure linking Language and meaning has to be available immediately and repeatably whenever necessary, just as in everyday communication. As a solution to this problem, this study proposes a method in which a recurrent neural network self-organizes cyclic attractors that reflect semantic structure and represent interaction flows in its internal dynamics. To evaluate this method we designed a simple task in which a human verbally directs a robot, which responds appropriately. By training the network with data that represent the interaction series, cyclic attractors that reflect the semantic structure are self-organized. The network first receives a verbal direction, and its internal state moves along the first half of a cyclic attractor, whose branch structure corresponds to the semantics. The internal state then reaches a point from which it can generate the appropriate Behavior. Finally, the internal state moves along the second half of the cycle and converges on the cycle's initial point while generating the appropriate Behavior. By self-organizing such an internal structure in its forward dynamics, the model achieves immediate and repeatable responses to linguistic directions. Furthermore, the network self-organizes a fixed-point attractor and is thus able to wait for directions. It can therefore repeat the interaction flexibly without explicit turn-taking signs.

Eulalia Soares - One of the best experts on this subject based on the ideXlab platform.