Shared Representation

The experts below are selected from a list of 41,058 experts worldwide, ranked by the ideXlab platform.

Judith Holler - One of the best experts on this subject based on the ideXlab platform.

  • Do You See What I'm Singing? Visuospatial Movement Biases Pitch Perception
    Brain and Cognition, 2013
    Co-Authors: Louise Connell, Zhenguang G Cai, Judith Holler
    Abstract:

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as "high" or "low", it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.

  • Do You See What I'm Singing? Visuospatial Movement Biases Pitch Perception
    Cognitive Science, 2012
    Co-Authors: Louise Connell, Zhenguang G Cai, Judith Holler
    Abstract:

    Do You See What I'm Singing? Visuospatial Movement Biases Pitch Perception
    Louise Connell (louise.connell@manchester.ac.uk) and Zhenguang G. Cai (zhenguang.cai@manchester.ac.uk), School of Psychological Sciences, University of Manchester, Oxford Road, Manchester M13 9PL, UK
    Judith Holler (judith.holler@mpi.nl), Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD, Nijmegen, Netherlands, and School of Psychological Sciences, University of Manchester, Oxford Road, Manchester M13 9PL, UK

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as "high" or "low", it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.

    Keywords: mental representation; pitch perception; music; gesture; spatial representation; metaphor

    Introduction: Musical and spatial processing are interlinked, but the exact nature and extent of the connection is controversial. People with amusia (i.e., an impaired ability to discriminate pitch) have corresponding spatial deficits in some reports (Douglas & Bilkey, 2007), but others have failed to replicate the association (Tillman et al., 2010; Williamson, Cocchini, & Stewart, 2011). People have been found to map musical pitch to vertical spatial locations (Pratt, 1930; Rusconi, Kwan, Giordano, Umilta, & Butterworth, 2006), but they are also willing to map it to psychophysical luminosity and loudness (Hubbard, 1996; McDermott, Lehr, & Oxenham, 2008), and to words denoting emotion, size, sweetness, texture and temperature (Eitan & Timmers, 2010; Walker & Smith, 1984). Thus, while pitch may be described in spatial terms such as "high" or "low", it remains unclear whether pitch and space are merely two amongst many associated dimensions or whether the representation of pitch is fundamentally spatial.

    Pitch is a psychoacoustic property that corresponds to waveform frequency; its representation involves the primary auditory cortex but the full neural specification of pitch processing is still not well understood (e.g., Bendor, 2011). Space is a physical property of the three-dimensional body we occupy and the world through which we move, and is represented in a multimodal or supramodal system that takes input from vision, touch, and other perceptual modalities in order to create a common spatial code (Bryant, 1992; Giudice, Betty, & Loomis, 2011; Lacey, Campbell, & Sathian, 2007). Numerous studies have shown that activating pitch also activates space along the vertical axis. A high-pitch prime leads people to explicitly relate it to a high spatial location (Pratt, 1930), and to implicitly attend to a visual target (Walker et al., 2010) or make a manual response (Rusconi et al., 2006) in a high spatial location. However, the above findings cannot distinguish between an associative mapping explanation, where representations of pitch and space are separate but linked, and a shared representation explanation, where pitch and space share common representational and processing resources.

    According to an associative mapping explanation, the representation of musical pitch is purely auditory in nature. An individual's perception of a note's pitch would essentially comprise a modality-specific auditory representation of its sound frequency, and one would recall its pitch as a simulation (i.e., a partial replay of the neural activation that arose during experience: Barsalou, 1999) of that frequency. Perceiving a high pitch note rapidly activates a high spatial location because the two representational dimensions are directly associated, as are the dimensions of pitch and loudness (McDermott et al., 2008) or pitch and happiness (Eitan & Timmers, 2010). Notwithstanding these associations, pitch perception and discrimination itself remains an exclusively auditory matter.

    Conversely, a shared representation explanation for pitch/space effects would hold that the representation of musical pitch is audiospatial in nature. Here, an individual's perception of a note's pitch would comprise an audiospatial representation of both its sound frequency and its height on the vertical axis. One would then recall its pitch as an auditory and spatial simulation of that frequency and height. People may therefore be willing to map musical pitch to other dimensions because they all share a common spatial grounding (i.e., are mediated by space): for example, both loudness (Eitan, Schupak, & Marks, 2010) and emotional valence (Meier & Robinson, 2004) show similar effects to pitch in vertical space. Pitch perception and discrimination, therefore, is obligatorily audiospatial.

    In the present studies, we aimed to distinguish between these two explanations by using a basic psychophysical task of pitch discrimination, where participants must judge whether a target vocal note is the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information.
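
The discrimination paradigm described in the entries above lends itself to a compact toy simulation. The sketch below is purely illustrative and assumes a simple signal-plus-bias model of the task: the bias size, perceptual noise, and decision threshold are invented parameters, not values reported by Connell, Cai, and Holler. It only shows how a gesture-driven shift in perceived pitch would surface as changed same/different judgments.

```python
# Hypothetical simulation of the cue/target pitch-discrimination paradigm.
# All parameters (bias size, noise, threshold) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_TRIALS = 10_000
GESTURE_BIAS = 0.3      # assumed shift (in semitones) added to perceived pitch per gesture
NOISE_SD = 0.5          # assumed perceptual noise (semitones)
SAME_THRESHOLD = 0.5    # respond "same" if perceived |target - cue| falls below this

gestures = rng.choice([-1, 0, +1], size=N_TRIALS)          # downward / none / upward
pitch_diff = rng.choice([-1.0, 0.0, +1.0], size=N_TRIALS)  # target minus cue, in semitones

# Shared-representation account: concurrent spatial movement feeds directly
# into the perceived pitch of the target note.
perceived_diff = pitch_diff + GESTURE_BIAS * gestures + rng.normal(0, NOISE_SD, N_TRIALS)
responded_same = np.abs(perceived_diff) < SAME_THRESHOLD

for g, label in [(-1, "downward"), (0, "no gesture"), (+1, "upward")]:
    mask = (gestures == g) & (pitch_diff == 0.0)  # physically identical cue and target
    print(f"{label:11s}: P('same' | identical notes) = {responded_same[mask].mean():.2f}")
```

Under these assumptions, both upward and downward gestures pull perceived pitch away from the cue on identical-note trials, so "same" responses drop relative to the no-gesture baseline, which is the kind of bias the discrimination measure is designed to pick up.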

Louise Connell - One of the best experts on this subject based on the ideXlab platform.

  • Do You See What I'm Singing? Visuospatial Movement Biases Pitch Perception
    Brain and Cognition, 2013
    Co-Authors: Louise Connell, Zhenguang G Cai, Judith Holler
    Abstract:

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as "high" or "low", it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.

  • Do You See What I'm Singing? Visuospatial Movement Biases Pitch Perception
    Cognitive Science, 2012
    Co-Authors: Louise Connell, Zhenguang G Cai, Judith Holler
    Abstract:

    Do You See What I'm Singing? Visuospatial Movement Biases Pitch Perception
    Louise Connell (louise.connell@manchester.ac.uk) and Zhenguang G. Cai (zhenguang.cai@manchester.ac.uk), School of Psychological Sciences, University of Manchester, Oxford Road, Manchester M13 9PL, UK
    Judith Holler (judith.holler@mpi.nl), Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD, Nijmegen, Netherlands, and School of Psychological Sciences, University of Manchester, Oxford Road, Manchester M13 9PL, UK

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as "high" or "low", it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.

    Keywords: mental representation; pitch perception; music; gesture; spatial representation; metaphor

    Introduction: Musical and spatial processing are interlinked, but the exact nature and extent of the connection is controversial. People with amusia (i.e., an impaired ability to discriminate pitch) have corresponding spatial deficits in some reports (Douglas & Bilkey, 2007), but others have failed to replicate the association (Tillman et al., 2010; Williamson, Cocchini, & Stewart, 2011). People have been found to map musical pitch to vertical spatial locations (Pratt, 1930; Rusconi, Kwan, Giordano, Umilta, & Butterworth, 2006), but they are also willing to map it to psychophysical luminosity and loudness (Hubbard, 1996; McDermott, Lehr, & Oxenham, 2008), and to words denoting emotion, size, sweetness, texture and temperature (Eitan & Timmers, 2010; Walker & Smith, 1984). Thus, while pitch may be described in spatial terms such as "high" or "low", it remains unclear whether pitch and space are merely two amongst many associated dimensions or whether the representation of pitch is fundamentally spatial.

    Pitch is a psychoacoustic property that corresponds to waveform frequency; its representation involves the primary auditory cortex but the full neural specification of pitch processing is still not well understood (e.g., Bendor, 2011). Space is a physical property of the three-dimensional body we occupy and the world through which we move, and is represented in a multimodal or supramodal system that takes input from vision, touch, and other perceptual modalities in order to create a common spatial code (Bryant, 1992; Giudice, Betty, & Loomis, 2011; Lacey, Campbell, & Sathian, 2007). Numerous studies have shown that activating pitch also activates space along the vertical axis. A high-pitch prime leads people to explicitly relate it to a high spatial location (Pratt, 1930), and to implicitly attend to a visual target (Walker et al., 2010) or make a manual response (Rusconi et al., 2006) in a high spatial location. However, the above findings cannot distinguish between an associative mapping explanation, where representations of pitch and space are separate but linked, and a shared representation explanation, where pitch and space share common representational and processing resources.

    According to an associative mapping explanation, the representation of musical pitch is purely auditory in nature. An individual's perception of a note's pitch would essentially comprise a modality-specific auditory representation of its sound frequency, and one would recall its pitch as a simulation (i.e., a partial replay of the neural activation that arose during experience: Barsalou, 1999) of that frequency. Perceiving a high pitch note rapidly activates a high spatial location because the two representational dimensions are directly associated, as are the dimensions of pitch and loudness (McDermott et al., 2008) or pitch and happiness (Eitan & Timmers, 2010). Notwithstanding these associations, pitch perception and discrimination itself remains an exclusively auditory matter.

    Conversely, a shared representation explanation for pitch/space effects would hold that the representation of musical pitch is audiospatial in nature. Here, an individual's perception of a note's pitch would comprise an audiospatial representation of both its sound frequency and its height on the vertical axis. One would then recall its pitch as an auditory and spatial simulation of that frequency and height. People may therefore be willing to map musical pitch to other dimensions because they all share a common spatial grounding (i.e., are mediated by space): for example, both loudness (Eitan, Schupak, & Marks, 2010) and emotional valence (Meier & Robinson, 2004) show similar effects to pitch in vertical space. Pitch perception and discrimination, therefore, is obligatorily audiospatial.

    In the present studies, we aimed to distinguish between these two explanations by using a basic psychophysical task of pitch discrimination, where participants must judge whether a target vocal note is the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information.

Zhenguang G Cai - One of the best experts on this subject based on the ideXlab platform.

  • Do You See What I'm Singing? Visuospatial Movement Biases Pitch Perception
    Brain and Cognition, 2013
    Co-Authors: Louise Connell, Zhenguang G Cai, Judith Holler
    Abstract:

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as "high" or "low", it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.

  • Do You See What I'm Singing? Visuospatial Movement Biases Pitch Perception
    Cognitive Science, 2012
    Co-Authors: Louise Connell, Zhenguang G Cai, Judith Holler
    Abstract:

    Do You See What I'm Singing? Visuospatial Movement Biases Pitch Perception
    Louise Connell (louise.connell@manchester.ac.uk) and Zhenguang G. Cai (zhenguang.cai@manchester.ac.uk), School of Psychological Sciences, University of Manchester, Oxford Road, Manchester M13 9PL, UK
    Judith Holler (judith.holler@mpi.nl), Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD, Nijmegen, Netherlands, and School of Psychological Sciences, University of Manchester, Oxford Road, Manchester M13 9PL, UK

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as "high" or "low", it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.

    Keywords: mental representation; pitch perception; music; gesture; spatial representation; metaphor

    Introduction: Musical and spatial processing are interlinked, but the exact nature and extent of the connection is controversial. People with amusia (i.e., an impaired ability to discriminate pitch) have corresponding spatial deficits in some reports (Douglas & Bilkey, 2007), but others have failed to replicate the association (Tillman et al., 2010; Williamson, Cocchini, & Stewart, 2011). People have been found to map musical pitch to vertical spatial locations (Pratt, 1930; Rusconi, Kwan, Giordano, Umilta, & Butterworth, 2006), but they are also willing to map it to psychophysical luminosity and loudness (Hubbard, 1996; McDermott, Lehr, & Oxenham, 2008), and to words denoting emotion, size, sweetness, texture and temperature (Eitan & Timmers, 2010; Walker & Smith, 1984). Thus, while pitch may be described in spatial terms such as "high" or "low", it remains unclear whether pitch and space are merely two amongst many associated dimensions or whether the representation of pitch is fundamentally spatial.

    Pitch is a psychoacoustic property that corresponds to waveform frequency; its representation involves the primary auditory cortex but the full neural specification of pitch processing is still not well understood (e.g., Bendor, 2011). Space is a physical property of the three-dimensional body we occupy and the world through which we move, and is represented in a multimodal or supramodal system that takes input from vision, touch, and other perceptual modalities in order to create a common spatial code (Bryant, 1992; Giudice, Betty, & Loomis, 2011; Lacey, Campbell, & Sathian, 2007). Numerous studies have shown that activating pitch also activates space along the vertical axis. A high-pitch prime leads people to explicitly relate it to a high spatial location (Pratt, 1930), and to implicitly attend to a visual target (Walker et al., 2010) or make a manual response (Rusconi et al., 2006) in a high spatial location. However, the above findings cannot distinguish between an associative mapping explanation, where representations of pitch and space are separate but linked, and a shared representation explanation, where pitch and space share common representational and processing resources.

    According to an associative mapping explanation, the representation of musical pitch is purely auditory in nature. An individual's perception of a note's pitch would essentially comprise a modality-specific auditory representation of its sound frequency, and one would recall its pitch as a simulation (i.e., a partial replay of the neural activation that arose during experience: Barsalou, 1999) of that frequency. Perceiving a high pitch note rapidly activates a high spatial location because the two representational dimensions are directly associated, as are the dimensions of pitch and loudness (McDermott et al., 2008) or pitch and happiness (Eitan & Timmers, 2010). Notwithstanding these associations, pitch perception and discrimination itself remains an exclusively auditory matter.

    Conversely, a shared representation explanation for pitch/space effects would hold that the representation of musical pitch is audiospatial in nature. Here, an individual's perception of a note's pitch would comprise an audiospatial representation of both its sound frequency and its height on the vertical axis. One would then recall its pitch as an auditory and spatial simulation of that frequency and height. People may therefore be willing to map musical pitch to other dimensions because they all share a common spatial grounding (i.e., are mediated by space): for example, both loudness (Eitan, Schupak, & Marks, 2010) and emotional valence (Meier & Robinson, 2004) show similar effects to pitch in vertical space. Pitch perception and discrimination, therefore, is obligatorily audiospatial.

    In the present studies, we aimed to distinguish between these two explanations by using a basic psychophysical task of pitch discrimination, where participants must judge whether a target vocal note is the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information.
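
The memory-load result reported in these abstracts (the gesture bias disappears under spatial load but survives verbal load) can be folded into the same toy model. This is again only a hedged sketch: the assumption that a concurrent spatial load blocks the spatial contribution to the pitch code is a stand-in for the shared-representation account, not a mechanism specified by the authors, and all numbers are invented.

```python
# Hypothetical extension of the earlier toy model with a memory-load factor.
# Assumption: spatial load occupies the spatial resources that gesture would
# otherwise bias, so the gesture-driven shift vanishes; verbal load does not.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
NOISE_SD = 0.5

def up_vs_down_shift(load, gesture_bias=0.3):
    gestures = rng.choice([-1, +1], size=N)        # downward / upward gesture
    effective_bias = 0.0 if load == "spatial" else gesture_bias
    perceived_shift = effective_bias * gestures + rng.normal(0, NOISE_SD, N)
    # Difference in mean perceived pitch shift between up- and down-gesture trials.
    return (perceived_shift[gestures == +1].mean()
            - perceived_shift[gestures == -1].mean())

for load in ["none", "verbal", "spatial"]:
    print(f"{load:8s} load: up-vs-down perceived-pitch difference = {up_vs_down_shift(load):+.2f}")
```

The printed difference is sizeable for the no-load and verbal-load conditions but near zero under spatial load, mirroring the qualitative pattern the abstracts describe.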

Andrew Y Ng - One of the best experts on this subject based on the ideXlab platform.

  • Multimodal Deep Learning
    Proceedings of the 28th International Conference on Machine Learning (ICML), 2011
    Co-Authors: Jiquan Ngiam, Juhan Nam, Honglak Lee, Mingyu Kim, Aditya Khosla, Andrew Y Ng
    Abstract:

    Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.
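
To make the shared-representation idea in this abstract concrete, here is a minimal bimodal autoencoder sketch in PyTorch. It is a hedged illustration, not the paper's model: Ngiam et al. used RBM-based pretraining on audio spectrogram and mouth-region video features, whereas the layer sizes, random stand-in data, and modality-dropout scheme below are assumptions chosen only to show how one shared code can be trained from two modalities and then used with either modality alone.

```python
# Sketch of a bimodal shared-representation autoencoder in the spirit of
# Ngiam et al. (2011). Architecture and training details are illustrative.
import torch
import torch.nn as nn

class BimodalAutoencoder(nn.Module):
    def __init__(self, audio_dim=100, video_dim=300, shared_dim=64):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 128), nn.ReLU())
        self.video_enc = nn.Sequential(nn.Linear(video_dim, 128), nn.ReLU())
        # The shared layer fuses both modality-specific codes into one representation.
        self.shared = nn.Sequential(nn.Linear(128 + 128, shared_dim), nn.ReLU())
        # Both modalities are reconstructed from the shared code alone.
        self.audio_dec = nn.Linear(shared_dim, audio_dim)
        self.video_dec = nn.Linear(shared_dim, video_dim)

    def encode(self, audio, video):
        h = torch.cat([self.audio_enc(audio), self.video_enc(video)], dim=-1)
        return self.shared(h)

    def forward(self, audio, video):
        z = self.encode(audio, video)
        return self.audio_dec(z), self.video_dec(z)

model = BimodalAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

audio = torch.randn(32, 100)   # stand-in for audio features (e.g., spectrogram frames)
video = torch.randn(32, 300)   # stand-in for visual features (e.g., mouth-region pixels)

for step in range(200):
    # Randomly zero out one modality so the shared code learns to cover for it;
    # this is what makes audio-only vs. video-only use of the code possible.
    drop_audio = torch.rand(audio.size(0), 1) < 0.3
    drop_video = torch.rand(video.size(0), 1) < 0.3
    a_in = torch.where(drop_audio, torch.zeros_like(audio), audio)
    v_in = torch.where(drop_video, torch.zeros_like(video), video)
    a_rec, v_rec = model(a_in, v_in)
    loss = mse(a_rec, audio) + mse(v_rec, video)   # always reconstruct both modalities
    opt.zero_grad()
    loss.backward()
    opt.step()

# Cross-modal use of the shared code: encode with audio only (video zeroed) for
# "training" a downstream classifier, then encode with video only at "test" time.
z_audio_only = model.encode(audio, torch.zeros_like(video))
z_video_only = model.encode(torch.zeros_like(audio), video)
print(z_audio_only.shape, z_video_only.shape)
```

The modality dropout during training is the design choice doing the work here: it forces the shared layer to encode information that is recoverable from either input stream, which loosely mirrors the paper's strategy of training the bimodal network with one modality absent.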

Johan A. K. Suykens - One of the best experts on this subject based on the ideXlab platform.

  • Generative Restricted Kernel Machines: A Framework for Multi-View Generation and Disentangled Feature Learning
    Neural Networks, 2021
    Co-Authors: Arun Pandey, Joachim Schreurs, Johan A. K. Suykens
    Abstract:

    This paper introduces a novel framework for generative models based on Restricted Kernel Machines (RKMs) with joint multi-view generation and uncorrelated feature learning, called Gen-RKM. To enable joint multi-view generation, this mechanism uses a shared representation of data from various views. Furthermore, the model has a primal and dual formulation to incorporate both kernel-based and (deep convolutional) neural network based models within the same setting. When using neural networks as explicit feature maps, a novel training procedure is proposed, which jointly learns the features and the shared subspace representation. The latent variables are given by the eigen-decomposition of the kernel matrix, where the mutual orthogonality of eigenvectors represents the learned uncorrelated features. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of generated samples on various standard datasets.
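
As a rough illustration of the "shared subspace from a kernel eigendecomposition" idea described above, the NumPy sketch below builds one kernel per view, sums them, and takes the leading eigenvectors of the centred kernel as latent codes. The data, the RBF kernel choice, and the simple sum of per-view kernels are assumptions for illustration; the actual Gen-RKM training objective, its primal/dual formulations, and its generation step are not reproduced here.

```python
# Illustrative multi-view shared-subspace sketch: latent codes from the
# eigendecomposition of a combined kernel matrix. Not the Gen-RKM algorithm.
import numpy as np

def rbf_kernel(X, gamma=0.1):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
n, h = 200, 5                       # samples, latent dimensions

# Two views of the same underlying factors (e.g., an image and its attributes).
latent = rng.normal(size=(n, 3))
view1 = latent @ rng.normal(size=(3, 20)) + 0.1 * rng.normal(size=(n, 20))
view2 = latent @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(n, 10))

# Build one kernel per view and combine them so the latent code is shared.
K = rbf_kernel(view1) + rbf_kernel(view2)

# Centre the combined kernel, then take its leading eigenvectors as latent
# variables; eigenvectors of a symmetric matrix are mutually orthogonal, which
# is what gives the "uncorrelated features" property mentioned above.
C = np.eye(n) - np.ones((n, n)) / n
Kc = C @ K @ C
eigvals, eigvecs = np.linalg.eigh(Kc)                        # ascending order
Z = eigvecs[:, -h:] * np.sqrt(np.clip(eigvals[-h:], 0, None))  # n x h latent codes

print("latent codes:", Z.shape)
print("max |off-diagonal correlation|:",
      np.max(np.abs(np.corrcoef(Z.T) - np.eye(h))))
```

Because the eigenvectors of the centred kernel with nonzero eigenvalues are mutually orthogonal and zero-mean, the printed off-diagonal correlations come out essentially zero, which is the sense in which such latent features are uncorrelated.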