Imitation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 318 Experts worldwide, ranked by the ideXlab platform

Zhiyao Duan - One of the best experts on this subject based on the ideXlab platform.

  • Siamese Style Convolutional Neural Networks for Sound Search by Vocal Imitation
    IEEE ACM Transactions on Audio Speech and Language Processing, 2019
    Co-Authors: Yichi Zhang, Bryan Pardo, Zhiyao Duan
    Abstract:

    Conventional methods for finding audio in databases typically search text labels, rather than the audio itself. This can be problematic as labels may be missing, irrelevant to the audio content, or not known by users. Query by vocal Imitation lets users query using vocal Imitations instead. To do so, appropriate audio feature representations and effective similarity measures of Imitations and original sounds must be developed. In this paper, we build upon our preliminary work to propose Siamese style convolutional neural networks to learn feature representations and similarity measures in a unified end-to-end training framework. Our Siamese architecture uses two convolutional neural networks to extract features, one from vocal Imitations and the other from original sounds. The encoded features are then concatenated and fed into a fully connected network to estimate their similarity. We propose two versions of the system: IMINET is symmetric, where the two encoders have an identical structure and are trained from scratch, while TL-IMINET is asymmetric and adopts the transfer learning idea by pretraining the two encoders on other relevant tasks: spoken language recognition for the Imitation encoder and environmental sound classification for the original sound encoder. Experimental results show that both versions of the proposed system outperform a state-of-the-art system for sound search by vocal Imitation, and the performance can be further improved when they are fused with the state-of-the-art system. Results also show that transfer learning significantly improves the retrieval performance. This paper also provides insights into the proposed networks by visualizing and sonifying input patterns that maximize the activation of certain neurons in different layers.
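The Siamese architecture the abstract describes can be sketched in a few lines: two separate encoders map an imitation and an original sound into embeddings, which are concatenated and scored by a small fully connected "metric" network with a sigmoid output. The toy encoders, weights, and 3-dimensional "features" below are illustrative assumptions, not the paper's actual CNN layers or trained parameters.

```python
import math

def encode(x, w):
    # A one-layer stand-in for a CNN encoder: weighted sums followed by tanh.
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w]

def similarity(imitation, sound, w_imit, w_sound, w_fc):
    # Two towers with separate weights (the asymmetric TL-IMINET case);
    # their embeddings are concatenated and scored by a linear layer + sigmoid.
    z = encode(imitation, w_imit) + encode(sound, w_sound)
    logit = sum(wi * zi for wi, zi in zip(w_fc, z))
    return 1.0 / (1.0 + math.exp(-logit))  # similarity in (0, 1)

# Made-up weights and feature vectors, purely for illustration.
w_imit = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.4]]
w_sound = [[0.1, 0.4, 0.2], [-0.6, 0.2, 0.7]]
w_fc = [1.0, -0.5, 0.8, 0.3]

score = similarity([0.2, 0.9, -0.1], [0.3, 0.8, 0.0], w_imit, w_sound, w_fc)
print(round(score, 3))
```

In the real system the similarity would be trained end-to-end on positive (imitation, target) and negative pairs; here the score is only a function of arbitrary weights.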

  • Visualization and Interpretation of Siamese Style Convolutional Neural Networks for Sound Search by Vocal Imitation
    2018 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2018
    Co-Authors: Yichi Zhang, Zhiyao Duan
    Abstract:

    Designing systems that allow users to search sounds through vocal Imitation augments current text-based search engines and advances human-computer interaction. Previously we proposed a Siamese style convolutional network called IMINET for sound search by vocal Imitation, which jointly addresses feature extraction by a Convolutional Neural Network (CNN) and similarity calculation by a Fully Connected Network (FCN), and is currently the state of the art. However, how such an architecture works is still a mystery. In this paper, we try to answer this question. First, we visualize the input patterns that maximize the activation of different neurons in each CNN tower; this helps us understand what features are extracted from vocal Imitations and sound candidates. Second, we visualize the Imitation-sound input pairs that maximize the activation of different neurons in the FCN layers; this helps us understand what kinds of input pattern pairs are recognized during the similarity calculation. Interesting patterns are found that reveal the local-to-global and simple-to-conceptual learning mechanism of TL-IMINET. Experiments also show, from the visualization perspective, how transfer learning helps to improve TL-IMINET performance.
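The visualization technique the abstract relies on, activation maximization, starts from a random input and ascends the gradient of one neuron's activation until the input becomes the pattern that neuron responds to most strongly. The sketch below illustrates the idea on a made-up smooth "neuron" with a finite-difference gradient; a real system would instead backpropagate through the trained CNN.

```python
import random

def neuron(x):
    # Hypothetical neuron whose activation peaks at the pattern (1, -1).
    return -((x[0] - 1.0) ** 2 + (x[1] + 1.0) ** 2)

def maximize_activation(f, x, steps=200, lr=0.1, eps=1e-5):
    # Gradient ascent on the input, with a forward-difference gradient estimate.
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += eps
            grad.append((f(xp) - f(x)) / eps)
        x = [xi + lr * gi for xi, gi in zip(x, grad)]
    return x

random.seed(0)
x0 = [random.uniform(-2, 2), random.uniform(-2, 2)]
x_star = maximize_activation(neuron, x0)
print([round(v, 2) for v in x_star])  # converges near the preferred pattern (1, -1)
```

The recovered input is what the paper plots (and, for audio, sonifies) to interpret what each neuron has learned to detect.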

Yichi Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Siamese Style Convolutional Neural Networks for Sound Search by Vocal Imitation
    IEEE ACM Transactions on Audio Speech and Language Processing, 2019
    Co-Authors: Yichi Zhang, Bryan Pardo, Zhiyao Duan
    Abstract:

    Conventional methods for finding audio in databases typically search text labels, rather than the audio itself. This can be problematic as labels may be missing, irrelevant to the audio content, or not known by users. Query by vocal Imitation lets users query using vocal Imitations instead. To do so, appropriate audio feature representations and effective similarity measures of Imitations and original sounds must be developed. In this paper, we build upon our preliminary work to propose Siamese style convolutional neural networks to learn feature representations and similarity measures in a unified end-to-end training framework. Our Siamese architecture uses two convolutional neural networks to extract features, one from vocal Imitations and the other from original sounds. The encoded features are then concatenated and fed into a fully connected network to estimate their similarity. We propose two versions of the system: IMINET is symmetric, where the two encoders have an identical structure and are trained from scratch, while TL-IMINET is asymmetric and adopts the transfer learning idea by pretraining the two encoders on other relevant tasks: spoken language recognition for the Imitation encoder and environmental sound classification for the original sound encoder. Experimental results show that both versions of the proposed system outperform a state-of-the-art system for sound search by vocal Imitation, and the performance can be further improved when they are fused with the state-of-the-art system. Results also show that transfer learning significantly improves the retrieval performance. This paper also provides insights into the proposed networks by visualizing and sonifying input patterns that maximize the activation of certain neurons in different layers.

  • Visualization and Interpretation of Siamese Style Convolutional Neural Networks for Sound Search by Vocal Imitation
    2018 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2018
    Co-Authors: Yichi Zhang, Zhiyao Duan
    Abstract:

    Designing systems that allow users to search sounds through vocal Imitation augments current text-based search engines and advances human-computer interaction. Previously we proposed a Siamese style convolutional network called IMINET for sound search by vocal Imitation, which jointly addresses feature extraction by a Convolutional Neural Network (CNN) and similarity calculation by a Fully Connected Network (FCN), and is currently the state of the art. However, how such an architecture works is still a mystery. In this paper, we try to answer this question. First, we visualize the input patterns that maximize the activation of different neurons in each CNN tower; this helps us understand what features are extracted from vocal Imitations and sound candidates. Second, we visualize the Imitation-sound input pairs that maximize the activation of different neurons in the FCN layers; this helps us understand what kinds of input pattern pairs are recognized during the similarity calculation. Interesting patterns are found that reveal the local-to-global and simple-to-conceptual learning mechanism of TL-IMINET. Experiments also show, from the visualization perspective, how transfer learning helps to improve TL-IMINET performance.

Marco Iacoboni - One of the best experts on this subject based on the ideXlab platform.

  • Neurobiology of Imitation
    Current Opinion in Neurobiology, 2009
    Co-Authors: Marco Iacoboni
    Abstract:

    Recent research on the neurobiology of Imitation has gone beyond the study of its ‘core’ mechanisms, the focus of investigation in past years. The current trends can be grouped into four main categories: (1) non-‘core’ neural mechanisms that are also important for Imitation; (2) mechanisms of control, in both imitative learning and the inhibition of Imitation; (3) the developmental trajectory of the neural mechanisms of Imitation and their relation to the development of social cognition; (4) neurobiological mechanisms of Imitation in non-primates, in particular vocal learning in songbirds, and their relation to similar mechanisms of vocal learning in humans. The existing data suggest that both perceptual and motor aspects of Imitation follow organizing principles that originally belonged to the motor system.

  • Imitation, Empathy, and Mirror Neurons
    Annual Review of Psychology, 2009
    Co-Authors: Marco Iacoboni
    Abstract:

    There is a convergence between cognitive models of Imitation, constructs derived from social psychology studies on mimicry and empathy, and recent empirical findings from the neurosciences. The ideomotor framework of human actions assumes a common representational format for action and perception that facilitates Imitation. Furthermore, the associative sequence learning model of Imitation proposes that experience-based Hebbian learning forms links between sensory processing of the actions of others and motor plans. Social psychology studies have demonstrated that Imitation and mimicry are pervasive, automatic, and facilitate empathy. Neuroscience investigations have demonstrated physiological mechanisms of mirroring at single-cell and neural-system levels that support the cognitive and social psychology constructs. Why were these neural mechanisms selected, and what is their adaptive advantage? Neural mirroring solves the "problem of other minds" (how we can access and understand the minds of others) and makes intersubjectivity possible, thus facilitating social behavior.

  • neural mechanisms of Imitation
    Current Opinion in Neurobiology, 2005
    Co-Authors: Marco Iacoboni
    Abstract:

    Recent advances in our knowledge of the neural mechanisms of Imitation suggest that there is a core circuitry of Imitation comprising the superior temporal sulcus and the ‘mirror neuron system’, which consists of the posterior inferior frontal gyrus and adjacent ventral premotor cortex, as well as the rostral inferior parietal lobule. This core circuitry communicates with other neural systems according to the type of Imitation performed. Imitative learning is supported by interaction of the core circuitry of Imitation with the dorsolateral prefrontal cortex and perhaps motor preparation areas — namely, the mesial frontal, dorsal premotor and superior parietal areas. By contrast, Imitation as a form of social mirroring is supported by interaction of the core circuitry of Imitation with the limbic system.

  • cortical mechanisms of human Imitation
    Science, 1999
    Co-Authors: Marco Iacoboni, Roger P Woods, Marcel Brass, Harold Bekkering, John C Mazziotta, Giacomo Rizzolatti
    Abstract:

    How does Imitation occur? How can the motor plans necessary for imitating an action derive from the observation of that action? Imitation may be based on a mechanism directly matching the observed action onto an internal motor representation of that action (“direct matching hypothesis”). To test this hypothesis, normal human participants were asked to observe and imitate a finger movement and to perform the same movement after spatial or symbolic cues. Brain activity was measured with functional magnetic resonance imaging. If the direct matching hypothesis is correct, there should be areas that become active during finger movement, regardless of how it is evoked, and their activation should increase when the same movement is elicited by the observation of an identical movement made by another individual. Two areas with these properties were found in the left inferior frontal cortex (opercular region) and the rostral-most region of the right superior parietal lobule. Imitation has a central role in human development and learning of motor, communicative, and social skills (1, 2). However, the neural basis of Imitation and its functional mechanisms are poorly understood. Data from patients with brain lesions suggest that frontal and parietal regions may be critical for human Imitation (3) but do not provide insights on the mechanisms underlying it. Models of Imitation based on instrumental

J Oechssler - One of the best experts on this subject based on the ideXlab platform.

  • Imitation - theory and experimental evidence
    J ECON THEORY, 2007
    Co-Authors: J Oechssler
    Abstract:

    We introduce a generalized theoretical approach to study Imitation and subject it to rigorous experimental testing. In our theoretical analysis we find that the different predictions of previous Imitation models are mainly explained by different informational assumptions, and to a lesser extent by different behavioral rules. In a laboratory experiment we test the different theories by systematically varying information conditions. We find significant effects of seemingly innocent changes in information. Moreover, the generalized Imitation model predicts the differences between treatments well. The data provide support for Imitation on the individual level, both in terms of choice and in terms of perception. Furthermore, individuals' propensity to imitate more successful actions is increasing in payoff differences.
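The abstract's final finding, that the propensity to imitate grows with the payoff difference, corresponds to a "proportional imitation" behavioral rule. The sketch below illustrates one such rule; the linear switching probability and the `max_gap` normalization are illustrative assumptions, not the paper's exact model.

```python
import random

def imitate(own_payoff, observed_payoff, max_gap=10.0, rng=random):
    """Return True if the agent switches to the observed action."""
    gap = observed_payoff - own_payoff
    if gap <= 0:
        return False               # never imitate less successful play
    p = min(gap / max_gap, 1.0)    # switching probability rises with the gap
    return rng.random() < p

# With a payoff gap of 6 (p = 0.6), imitation happens most of the time.
random.seed(1)
switches = sum(imitate(2.0, 8.0) for _ in range(1000))
print(switches)
```

Under such a rule, more successful actions spread through a population at a rate increasing in their payoff advantage, which is consistent with the treatment differences the experiment reports.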

Kuniaki Kawabata - One of the best experts on this subject based on the ideXlab platform.

  • A study on Imitation motion based on imitated person's view — Finding out the differences between Imitation and non-Imitation
    2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), 2015
    Co-Authors: Sho Yokota, Taeko Tanaka, Hiroshi Hashimoto, Daisuke Chugo, Kuniaki Kawabata
    Abstract:

    This paper considers the selection of feature points of Imitation motion, i.e., the cues that lead a subject to feel imitated by another person during real-time Imitation of body motion. Most existing research on Imitation motion considers it from the viewpoint of the person imitating another person's motion. Imitation motion, however, is a cooperative motion between two people, the imitator and the person being imitated, so it is also worth considering from the viewpoint of the person being imitated. This paper therefore examines Imitation motion from the viewpoint of the person being imitated, under the conditions that this person can recognize their own posture through somatic sensation and the other person's motion through vision. In particular, the paper tries to select the feature points of Imitation motion based on human motion and sensation. The experiments showed that subjects tended to feel that another person's motion was not Imitation when the delay of that motion increased, whereas the amplitude of the motion did not affect the impression of the person being imitated. The paper therefore concludes that the key feature of Imitation motion is the phase difference (delay) between the motions of the other person and the subject.
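The phase difference (delay) that the paper identifies as the decisive cue can be estimated from two sampled motion signals by picking the lag that maximizes their cross-correlation. The sketch below (not the authors' code) does this for a sinusoidal "motion" imitated with a known 5-sample delay; the sampling rate and signals are made up for illustration.

```python
import math

def best_lag(a, b, max_lag):
    """Lag (in samples) at which b, shifted forward, best aligns with a."""
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a) - lag))
    return max(range(max_lag + 1), key=corr)

# A 1 Hz motion sampled at 20 samples/s, imitated with a 5-sample delay.
fs, delay = 20, 5
motion = [math.sin(2 * math.pi * t / fs) for t in range(100)]
imitation = [0.0] * delay + motion[:-delay]

print(best_lag(motion, imitation, max_lag=10))  # → 5
```

A system judging whether motion "feels like" imitation could then threshold this estimated delay, since the experiments found delay, not amplitude, drives the impression.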
