Spatial Discrimination

The Experts below are selected from a list of 30,537 Experts worldwide, ranked by the ideXlab platform

Ross K. Maddox - One of the best experts on this subject based on the ideXlab platform.

  • Effects of auditory reliability and ambiguous visual stimuli on auditory Spatial Discrimination
    2020
    Co-Authors: Madeline S. Cappelloni, Sabyasachi Shivkumar, Ralf M. Haefner, Ross K. Maddox
    Abstract:

    The brain combines information from multiple sensory modalities to interpret the environment. Multisensory integration is often modeled by ideal Bayesian causal inference, a model proposing that perceptual decisions arise from a statistical weighting of information from each sensory modality based on its reliability and relevance to the observer's task. However, ideal Bayesian causal inference fails to describe human behavior in a simultaneous auditory Spatial Discrimination task in which Spatially aligned visual stimuli improve performance despite providing no information about the correct response. This work tests the hypothesis that humans weight auditory and visual information in this task based on their relative reliabilities, even though the visual stimuli are task-uninformative, carrying no information about the correct response, and should be given zero weight. Listeners perform an auditory Spatial Discrimination task with relative reliabilities modulated by the stimulus durations. By comparing conditions in which task-uninformative visual stimuli are Spatially aligned with auditory stimuli or centrally located (control condition), listeners are shown to have a larger multisensory effect when their auditory thresholds are worse. Even in cases in which visual stimuli are not task-informative, the brain combines sensory information that is scene-relevant, especially when the task is difficult due to unreliable auditory information.

  • Task-uninformative visual stimuli improve auditory Spatial Discrimination in humans but not the ideal observer
    PLOS ONE, 2019
    Co-Authors: Madeline S. Cappelloni, Sabyasachi Shivkumar, Ralf M. Haefner, Ross K. Maddox
    Abstract:

    In order to survive and function in the world, we must understand the content of our environment. This requires us to gather and parse complex, sometimes conflicting, information. Yet, the brain is capable of translating sensory stimuli from disparate modalities into a cohesive and accurate percept with little conscious effort. Previous studies of multisensory integration have suggested that the brain's integration of cues is well-approximated by an ideal observer implementing Bayesian causal inference. However, behavioral data from tasks that include only one stimulus in each modality fail to capture what is in nature a complex process. Here we employed an auditory Spatial Discrimination task in which listeners were asked to determine on which side they heard one of two concurrently presented sounds. We compared two visual conditions in which task-uninformative shapes were presented either in the center of the screen or Spatially aligned with the auditory stimuli. We found that performance on the auditory task improved when the visual stimuli were Spatially aligned with the auditory stimuli, even though the shapes provided no information about which side the auditory target was on. We also demonstrate that a model of a Bayesian ideal observer performing causal inference cannot explain this improvement, showing that humans deviate systematically from the ideal observer model.
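
The two abstracts above lean on two standard computations from the multisensory-integration literature: reliability (inverse-variance) weighting of cues and a Bayesian causal-inference posterior over whether two measurements share a common source. The Python sketch below is a minimal, generic illustration of those textbook quantities, not the authors' model or code; the parameter values, variable names, grid-based marginalization, and prior settings are all assumptions chosen for clarity.

    # Minimal sketch under assumed parameters (not from the papers above).
    import numpy as np
    from scipy.stats import norm

    def fuse_cues(mu_a, sigma_a, mu_v, sigma_v):
        # Reliability-weighted (inverse-variance) fusion of an auditory and a
        # visual location estimate; each cue's weight is 1 / sigma^2.
        w_a, w_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2
        mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
        sigma = (w_a + w_v) ** -0.5
        return mu, sigma

    def p_common_cause(x_a, x_v, sigma_a, sigma_v, sigma_prior=15.0, p_c=0.5):
        # Posterior probability that auditory and visual measurements x_a, x_v
        # (in degrees) came from a single source, marginalizing the source
        # location over a grid, as in standard causal-inference models.
        s = np.linspace(-90.0, 90.0, 4001)
        ds = s[1] - s[0]
        prior_s = norm.pdf(s, 0.0, sigma_prior)
        like_one = np.sum(norm.pdf(x_a, s, sigma_a) * norm.pdf(x_v, s, sigma_v) * prior_s) * ds
        like_two = (np.sum(norm.pdf(x_a, s, sigma_a) * prior_s) * ds) * \
                   (np.sum(norm.pdf(x_v, s, sigma_v) * prior_s) * ds)
        return like_one * p_c / (like_one * p_c + like_two * (1.0 - p_c))

    # Illustrative numbers only: a noisy auditory cue (sigma = 8 deg) paired
    # with a sharper visual cue (sigma = 2 deg).
    print(fuse_cues(mu_a=10.0, sigma_a=8.0, mu_v=0.0, sigma_v=2.0))   # fused estimate pulled toward the visual location
    print(p_common_cause(3.0, 1.0, sigma_a=8.0, sigma_v=2.0))         # nearby measurements favor a single source
    print(p_common_cause(25.0, -25.0, sigma_a=8.0, sigma_v=2.0))      # widely separated measurements favor two sources

The grid marginalization is used only for transparency; both Gaussian integrals also have closed forms, and a full causal-inference observer would additionally map the fused or segregated estimates onto the left/right judgment.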

Sandra Peña De Ortiz - One of the best experts on this subject based on the ideXlab platform.

  • Knockdown of nurr1 in the rat hippocampus: implications to Spatial Discrimination learning and memory
    Learning & Memory, 2006
    Co-Authors: Wanda I. Colón-Cesario, Jahaira Félix, Michelle M. Martínez-Montemayor, Sohaira Morales, Juan Cruz, Monique Adorno, Lixmar Pereira, Nydia Colón, Carmen S. Maldonado-Vlaar, Sandra Peña De Ortiz
    Abstract:

    Nurr1 expression is up-regulated in the brain following associative learning experiences, but its relevance to cognitive processes remains unclear. In these studies, rats initially received bilateral hippocampal infusions of control or antisense oligodeoxynucleotides (ODNs) 1 h prior to training in a holeboard Spatial Discrimination task. Such pre-training infusions of nurr1 antisense ODNs moderately impaired learning of the task and also impaired LTM tested 7 d later. In a second experiment, ODN infusions were given immediately after the animals had received two sessions of training, during which all animals showed normal learning. Although antisense-treated rats were significantly impaired during the post-infusion stages of acquisition of the task, no group differences were observed during the LTM test given 7 d later. These animals were subjected 3 d later to reversal training in the same maze in the absence of any additional treatments. Remarkably, rats previously treated with antisense ODNs displayed perseveration: the animals remained fixated on the previously learned pattern of baited holes, causing them to be significantly impaired in the extinction of acquired Spatial preferences and future learning. We postulate that Nurr1 function in the hippocampus is important for normal cognitive processes.

  • Hippocampal gene expression profiling in Spatial Discrimination learning.
    Neurobiology of Learning and Memory, 2003
    Co-Authors: Yolanda Robles, Pablo E. Vivas-Mejía, Humberto Ortiz-Zuazaga, Jahaira Félix, Xiomara Ramos, Sandra Peña De Ortiz
    Abstract:

    Learning and long-term memory are thought to involve temporally defined changes in gene expression that lead to the strengthening of synaptic connections in selected brain regions. We used cDNA microarrays to study hippocampal gene expression in animals trained in a Spatial Discrimination-learning paradigm. Our analysis identified 19 genes that showed statistically significant changes in expression when comparing Naive versus Trained animals. We confirmed the changes in expression for the genes encoding the nuclear protein prothymosin α and the δ-1 opioid receptor (DOR1) by Northern blotting or in situ hybridization. In additional studies, laser-capture microdissection (LCM) allowed us to obtain enriched neuronal populations from the dentate gyrus, CA1, and CA3 subregions of the hippocampus from Naive, Pseudotrained, and Spatially Trained animals. Real-time PCR was used to examine the Spatial learning specificity of hippocampal modulation of the genes encoding protein kinase B (PKB, also known as Akt), protein kinase Cδ (PKCδ), cell adhesion kinase β (CAKβ, also known as Pyk2), and receptor protein tyrosine phosphatase ζ/β (RPTPζ/β). These studies showed subregion specificity of Spatial learning-induced changes in gene expression within the hippocampus, a feature that was particular to each gene studied. We suggest that statistically valid gene expression profiles generated with cDNA microarrays may provide important insights as to the cellular and molecular events subserving learning and memory processes in the brain.

  • Different hippocampal activity profiles for PKA and PKC in Spatial Discrimination learning.
    Behavioral Neuroscience, 2000
    Co-Authors: Sandra I. Vázquez, Adrinel Vázquez, Sandra Peña De Ortiz
    Abstract:

    Protein kinases are considered essential for the processing and storage of information in the brain. However, the dynamics of protein kinase activation in the hippocampus during Spatial learning are poorly understood. In this study, rats were trained to learn a holeboard Spatial Discrimination task and the activity profiles for cyclic adenosine monophosphate (cAMP)-dependent protein kinase (PKA) and Ca2+/phospholipid-dependent protein kinase C (PKC) in the hippocampus were examined. Hippocampal PKA activity increased rapidly on Day 1 of Spatial learning and remained moderately high at later stages of acquisition. In contrast, PKC activity increased in particulate fractions compared with cytosolic fractions after habituation training and was maximal at Day 3 of Spatial acquisition. The results establish a temporal dissociation between PKA and PKC during acquisition of Spatial Discrimination learning.

Richard J Brown - One of the best experts on this subject based on the ideXlab platform.

  • Investigating the time course of tactile reflexive attention using a non-Spatial Discrimination task
    Acta Psychologica, 2008
    Co-Authors: Eleanor Miles, Ellen Poliakoff, Richard J Brown
    Abstract:

    Peripheral cues are thought to facilitate responses to stimuli presented at the same location because they lead to exogenous attention shifts. Facilitation has been observed in numerous studies of visual and auditory attention, but there have been only four demonstrations of tactile facilitation, all in studies with potential confounds. Three studies used a Spatial (finger versus thumb) Discrimination task, where the cue could have provided a Spatial framework that might have assisted the Discrimination of subsequent targets presented on the same side as the cue. The final study circumvented this problem by using a non-Spatial Discrimination; however, the cues were informative and interspersed with visual cues, which may have affected the attentional effects observed. In the current study, therefore, we used a non-Spatial tactile frequency Discrimination task following a non-informative tactile white noise cue. When the target was presented 150 ms after the cue, we observed faster Discrimination responses to targets presented on the same side as the cue than to targets on the opposite side; by 1000 ms, responses were significantly faster to targets presented on the opposite side to the cue. Thus, we demonstrated that tactile attentional facilitation can be observed in a non-Spatial Discrimination task, under unimodal conditions and with entirely non-predictive cues. Furthermore, we provide the first demonstration of significant tactile facilitation and tactile inhibition of return within a single experiment.

Vahid Vahdat Zad - One of the best experts on this subject based on the ideXlab platform.

  • Spatial Discrimination in Tehran's Modern Urban Planning, 1906–1979
    Journal of Planning History, 2013
    Co-Authors: Vahid Vahdat Zad
    Abstract:

    This article studies the relationship between social culture and Spatial Discrimination in the modern urban planning of Tehran. It examines how planning attempts during the Qajar era began to enrich the ideas of citizenship and public space, which in effect transformed racial and religious Discrimination into new forms of segregation based on economic class. It shows how multiple planning practices during the Reza Shah era favored a uniform modernist style to create an imaginary national identity. The divergence of socioeconomic classes under the Shah's planning practices is also analyzed to show the Spatial Discrimination that the new urban poor had to bear.

Madeline S. Cappelloni - One of the best experts on this subject based on the ideXlab platform.

  • Effects of auditory reliability and ambiguous visual stimuli on auditory Spatial Discrimination
    2020
    Co-Authors: Madeline S. Cappelloni, Sabyasachi Shivkumar, Ralf M. Haefner, Ross K. Maddox
    Abstract:

    The brain combines information from multiple sensory modalities to interpret the environment. Multisensory integration is often modeled by ideal Bayesian causal inference, a model proposing that perceptual decisions arise from a statistical weighting of information from each sensory modality based on its reliability and relevance to the observer's task. However, ideal Bayesian causal inference fails to describe human behavior in a simultaneous auditory Spatial Discrimination task in which Spatially aligned visual stimuli improve performance despite providing no information about the correct response. This work tests the hypothesis that humans weight auditory and visual information in this task based on their relative reliabilities, even though the visual stimuli are task-uninformative, carrying no information about the correct response, and should be given zero weight. Listeners perform an auditory Spatial Discrimination task with relative reliabilities modulated by the stimulus durations. By comparing conditions in which task-uninformative visual stimuli are Spatially aligned with auditory stimuli or centrally located (control condition), listeners are shown to have a larger multisensory effect when their auditory thresholds are worse. Even in cases in which visual stimuli are not task-informative, the brain combines sensory information that is scene-relevant, especially when the task is difficult due to unreliable auditory information.

  • Task-uninformative visual stimuli improve auditory Spatial Discrimination in humans but not the ideal observer
    PLOS ONE, 2019
    Co-Authors: Madeline S. Cappelloni, Sabyasachi Shivkumar, Ralf M. Haefner, Ross K. Maddox
    Abstract:

    In order to survive and function in the world, we must understand the content of our environment. This requires us to gather and parse complex, sometimes conflicting, information. Yet, the brain is capable of translating sensory stimuli from disparate modalities into a cohesive and accurate percept with little conscious effort. Previous studies of multisensory integration have suggested that the brain's integration of cues is well-approximated by an ideal observer implementing Bayesian causal inference. However, behavioral data from tasks that include only one stimulus in each modality fail to capture what is in nature a complex process. Here we employed an auditory Spatial Discrimination task in which listeners were asked to determine on which side they heard one of two concurrently presented sounds. We compared two visual conditions in which task-uninformative shapes were presented either in the center of the screen or Spatially aligned with the auditory stimuli. We found that performance on the auditory task improved when the visual stimuli were Spatially aligned with the auditory stimuli, even though the shapes provided no information about which side the auditory target was on. We also demonstrate that a model of a Bayesian ideal observer performing causal inference cannot explain this improvement, showing that humans deviate systematically from the ideal observer model.