Linguistic Input

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 23,916 Experts worldwide ranked by the ideXlab platform

Evelina Fedorenko - One of the best experts on this subject based on the ideXlab platform.

  • composition is the core driver of the language selective network
    Neurobiology of Language, 2020
    Co-Authors: Francis Mollica, Evelina Fedorenko, Matthew Siegelman, Evgeniia Diachek, Steven Piantadosi, Zachary Mineroff, Richard Futrell, Hope Kean, Peng Qian
    Abstract:

    The frontotemporal language network responds robustly and selectively to sentences. But the features of Linguistic Input that drive this response and the computations that these language areas supp...

  • composition is the core driver of the language selective network
    bioRxiv, 2019
    Co-Authors: Francis Mollica, Matthew Siegelman, Evgeniia Diachek, Steven Piantadosi, Zachary Mineroff, Richard Futrell, Hope Kean, Peng Qian, Evelina Fedorenko
    Abstract:

    The fronto-temporal language network responds robustly and selectively to sentences. But the features of Linguistic Input that drive this response and the computations these language areas support remain debated. Two key features of sentences are typically confounded in natural Linguistic Input: words in sentences a) are semantically and syntactically combinable into phrase- and clause-level meanings, and b) occur in an order licensed by the language's grammar. Inspired by recent psycholinguistic work establishing that language processing is robust to word order violations, we hypothesized that the core Linguistic computation is composition, and, thus, can take place even when the word order violates the grammatical constraints of the language. This hypothesis predicts that a Linguistic string should elicit a sentence-level response in the language network as long as the words in that string can enter into dependency relationships as in typical sentences. We tested this prediction across two fMRI experiments (total N=47) by introducing a varying number of local word swaps into naturalistic sentences, leading to progressively less syntactically well-formed strings. Critically, local dependency relationships were preserved because combinable words remained close to each other. As predicted, word order degradation did not decrease the magnitude of the BOLD response in the language network, except when combinable words were so far apart that composition among nearby words was highly unlikely. This finding demonstrates that composition is robust to word order violations, and that the language regions respond as strongly as they do to naturalistic Linguistic Input as long as composition can take place.
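
The local word-swap manipulation described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' stimulus-generation script: each swap exchanges two adjacent words, so words that combine with their neighbors tend to stay close together even as grammaticality degrades.

```python
import random

def local_swap(words, n_swaps, rng):
    """Degrade word order with n_swaps adjacent-word swaps.
    Each swap exchanges two neighboring words, so local dependency
    relationships are largely preserved for small n_swaps."""
    words = list(words)
    for _ in range(n_swaps):
        i = rng.randrange(len(words) - 1)  # pick a random adjacent pair
        words[i], words[i + 1] = words[i + 1], words[i]
    return words

rng = random.Random(0)
sentence = "the quick brown fox jumps over the lazy dog".split()
scrambled = local_swap(sentence, 3, rng)
print(" ".join(scrambled))
```

Increasing `n_swaps` yields the progressively less well-formed strings the study contrasts; the hypothesis predicts the language network's response stays high until combinable words drift too far apart.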

  • domain general brain regions do not track Linguistic Input as closely as language selective regions
    The Journal of Neuroscience, 2017
    Co-Authors: Idan Blank, Evelina Fedorenko
    Abstract:

    Language comprehension engages a cortical network of left frontal and temporal regions. Activity in this network is language-selective, showing virtually no modulation by non-Linguistic tasks. In addition, language comprehension engages a second network consisting of bilateral frontal, parietal, cingulate, and insular regions. Activity in this “Multiple Demand (MD)” network scales with comprehension difficulty, but also with cognitive effort across a wide range of non-Linguistic tasks in a domain-general fashion. Given the functional dissociation between the language and MD networks, their respective contributions to comprehension are likely distinct, yet such differences remain elusive. Prior neuroimaging studies have suggested that activity in each network co-varies with some Linguistic features that, behaviorally, influence on-line processing and comprehension. This sensitivity of the language and MD networks to local Input characteristics has often been interpreted — implicitly or explicitly — as evidence that both networks track Linguistic Input closely, and in a manner consistent across individuals. Here, we used fMRI to directly test this assumption by comparing the BOLD signal time-courses in each network across different people (n = 45, men and women) listening to the same story. Language network activity showed fewer individual differences, indicative of closer Input tracking, whereas MD network activity was more idiosyncratic and, moreover, showed lower reliability within an individual across repetitions of a story. These findings constrain cognitive models of language comprehension by suggesting a novel distinction between the processes implemented in the language and MD networks. SIGNIFICANCE STATEMENT Language comprehension recruits both language-specific mechanisms and domain-general mechanisms that are engaged in many cognitive processes. In the human cortex, language-selective mechanisms are implemented in the left-lateralized “core language network”, whereas domain-general mechanisms are implemented in the bilateral “Multiple Demand (MD)” network. Here, we report the first direct comparison of the respective contributions of these networks to naturalistic story comprehension. Using a novel combination of neuroimaging approaches we find that MD regions track stories less closely than language regions. This finding constrains the possible contributions of the MD network to comprehension, contrasts with accounts positing that this network has continuous access to Linguistic Input, and suggests a new typology of comprehension processes based on their extent of Input tracking.

  • domain general brain regions do not track Linguistic Input as closely as language selective regions
    bioRxiv, 2017
    Co-Authors: Idan Blank, Evelina Fedorenko
    Abstract:

    Language comprehension engages a cortical network of left frontal and temporal regions. Activity in this network is language-selective, showing virtually no modulation by non-Linguistic tasks. In addition, language comprehension engages a second network consisting of bilateral frontal, parietal, cingulate, and insular regions. Activity in this "Multiple Demand (MD)" network scales with comprehension difficulty, but also with cognitive effort across a wide range of non-Linguistic tasks in a domain-general fashion. Given the functional dissociation between the language and MD networks, their respective contributions to comprehension are likely distinct, yet such differences remain elusive. Critically, given that each network is sensitive to some Linguistic features, prior research has assumed, implicitly or explicitly, that both networks track Linguistic Input closely, and in a manner consistent across individuals. Here, we used fMRI to directly test this assumption by comparing the BOLD signal time-courses in each network across different people listening to the same story. Language network activity showed fewer individual differences, indicative of closer Input tracking, whereas MD network activity was more idiosyncratic and, moreover, showed lower reliability within an individual across repetitions of a story. These findings constrain cognitive models of language comprehension by suggesting a novel distinction between the processes implemented in the language and MD networks.

  • language selective brain regions track Linguistic Input more closely than domain general regions
    bioRxiv, 2016
    Co-Authors: Idan Blank, Evelina Fedorenko
    Abstract:

    Language comprehension engages a cortical network of left frontal and temporal regions [1-6]. Activity in this network is sensitive to Linguistic features such as lexical information, syntax and compositional semantics [7-10]. However, this network shows virtually no engagement in non-Linguistic tasks [11-14] and is therefore language-selective. In addition, language comprehension engages a second network consisting of frontal, parietal, cingulate, and insular regions [15-18]. Activity in this "Multiple Demand (MD)" network [19] is sensitive to comprehension difficulty, increasing in the presence of e.g. ambiguity [20-26], infrequent words [27-33] and non-local syntactic dependencies [34-40]. However, this network similarly scales its activity with cognitive effort across a wide range of non-Linguistic tasks [19, 41] and is therefore domain-general. Given the functional dissociation between the language and MD networks [42, 43], their respective contributions to comprehension are likely distinct, yet such differences remain elusive. Critically, given that each network is sensitive to some Linguistic features, prior research has presupposed that both networks track Linguistic Input closely, and in a manner consistent across individuals. Here, we used fMRI to test this assumption by comparing the BOLD signal time-courses in each network across different individuals listening to the same story [44-46]. Language network activity showed fewer individual differences, indicative of closer Input tracking, whereas MD network activity was more idiosyncratic and, moreover, showed lower reliability within an individual across repetitions of a story. These findings constrain cognitive models of language comprehension by suggesting a novel distinction between the processes implemented in the language and MD networks.
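
The comparison of BOLD time-courses across listeners described above is commonly operationalized as inter-subject correlation (ISC): a region that tracks the input closely should produce similar time-courses in everyone hearing the same story. The sketch below is an illustration of that logic with synthetic data, not the authors' analysis pipeline.

```python
import numpy as np

def isc_leave_one_out(timecourses):
    """Leave-one-out inter-subject correlation: correlate each subject's
    regional time-course with the average of all other subjects'.
    Higher mean ISC indicates closer, more stereotyped input tracking."""
    timecourses = np.asarray(timecourses, dtype=float)  # (n_subjects, n_timepoints)
    n = timecourses.shape[0]
    corrs = []
    for i in range(n):
        others = np.delete(timecourses, i, axis=0).mean(axis=0)
        corrs.append(np.corrcoef(timecourses[i], others)[0, 1])
    return float(np.mean(corrs))

# Toy data: "language-like" subjects share a strong stimulus-driven signal,
# "MD-like" subjects are dominated by idiosyncratic fluctuations.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)                       # stimulus-locked signal
lang = shared + 0.3 * rng.standard_normal((10, 200))    # mostly shared
md = 0.3 * shared + rng.standard_normal((10, 200))      # mostly idiosyncratic

isc_lang = isc_leave_one_out(lang)
isc_md = isc_leave_one_out(md)
print(isc_lang > isc_md)  # True
```

Under this measure, the finding that language network activity shows fewer individual differences than MD network activity corresponds to a higher ISC for the language regions.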

Aaron Courville - One of the best experts on this subject based on the ideXlab platform.

  • modulating early visual processing by language
    Neural Information Processing Systems, 2017
    Co-Authors: Harm De Vries, Florian Strub, Jeremie Mary, Hugo Larochelle, Olivier Pietquin, Aaron Courville
    Abstract:

    It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected. This view dominates the current literature in computational models for language-vision tasks, where visual and Linguistic Inputs are mostly processed independently before being fused into a single representation. In this paper, we deviate from this classic pipeline and propose to modulate the entire visual processing by a Linguistic Input. Specifically, we introduce Conditional Batch Normalization (CBN) as an efficient mechanism to modulate convolutional feature maps by a Linguistic embedding. We apply CBN to a pre-trained Residual Network (ResNet), leading to the MODulatEd ResNet (MRN) architecture, and show that this significantly improves strong baselines on two visual question answering tasks. Our ablation study confirms that modulating from the early stages of the visual processing is beneficial.
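
The core of Conditional Batch Normalization is simple: normalize each channel as usual, but predict the scale and shift from the language embedding instead of learning fixed parameters. The sketch below shows the mechanism in plain NumPy; the function and weight names are illustrative, not the authors' code.

```python
import numpy as np

def conditional_batch_norm(x, embed, W_gamma, W_beta, eps=1e-5):
    """Minimal Conditional Batch Normalization (CBN) sketch.

    x:       (B, C, H, W) convolutional feature maps
    embed:   (B, E) conditioning (language) embedding
    W_gamma, W_beta: (E, C) linear maps predicting per-channel modulation
    """
    # Standard batch-norm statistics: per channel, over batch and space.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # Predict modulation from the embedding. With zero weights, gamma=1 and
    # beta=0, so a pretrained network's behavior is unchanged at init.
    gamma = 1.0 + embed @ W_gamma        # (B, C)
    beta = embed @ W_beta                # (B, C)
    # Broadcast the per-sample, per-channel affine over spatial dims.
    return gamma[:, :, None, None] * x_hat + beta[:, :, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4, 3, 3))   # image features
q = rng.standard_normal((2, 8))         # language embedding
out = conditional_batch_norm(x, q, 0.1 * rng.standard_normal((8, 4)),
                             0.1 * rng.standard_normal((8, 4)))
print(out.shape)  # (2, 4, 3, 3)
```

Because only the small prediction weights are trained, CBN can modulate every normalization layer of a frozen ResNet at little extra cost, which is what lets the language input influence even the earliest visual stages.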

  • modulating early visual processing by language
    arXiv: Computer Vision and Pattern Recognition, 2017
    Co-Authors: Harm De Vries, Florian Strub, Jeremie Mary, Hugo Larochelle, Olivier Pietquin, Aaron Courville
    Abstract:

    It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected. This view dominates the current literature in computational models for language-vision tasks, where visual and Linguistic Inputs are mostly processed independently before being fused into a single representation. In this paper, we deviate from this classic pipeline and propose to modulate the entire visual processing by Linguistic Input. Specifically, we condition the batch normalization parameters of a pretrained residual network (ResNet) on a language embedding. This approach, which we call MOdulated RESnet (MRN), significantly improves strong baselines on two visual question answering tasks. Our ablation study shows that modulating from the early stages of the visual processing is beneficial.

Marie Coppola - One of the best experts on this subject based on the ideXlab platform.

  • visible social interactions do not support the development of false belief understanding in the absence of Linguistic Input: evidence from deaf adult homesigners
    Frontiers in Psychology, 2017
    Co-Authors: Deanna L Gagne, Marie Coppola
    Abstract:

    Congenitally deaf individuals exhibit enhanced visuospatial abilities relative to normally hearing individuals. An early example is the increased sensitivity of deaf signers to stimuli in the visual periphery (Neville & Lawson, 1987a). While these enhancements are robust and extend across a number of visual and spatial skills, they seem not to extend to other domains which could potentially build on these enhancements. For example, congenitally deaf children, in the absence of adequate language exposure and acquisition, do not develop typical social cognition skills as measured by traditional Theory of Mind tasks. These delays/deficits occur despite their presumed lifetime use of visuoperceptual abilities to infer the intentions and behaviors of others (e.g., O’Reilly, Peterson, & Wellman, 2014; Pyers & Senghas, 2009). In a series of studies, we explore the limits on the plasticity of visually-based socio-cognitive abilities, from perspective taking to Theory of Mind/False Belief, in rarely studied individuals: deaf adults who have not acquired a conventional language (Homesigners). We compared Homesigners’ performance to that of two other understudied groups in the same culture: Deaf signers of an emerging language (Cohort 1 of Nicaraguan Sign Language), and hearing speakers of Spanish with minimal schooling. We found that Homesigners performed equivalently to both comparison groups with respect to several visual socio-cognitive abilities: Perspective Taking (Level 1 and Level 2), adapted from Masangkay et al. (1974), and the False Photograph task, adapted from Leslie & Thaiss (1992). However, a lifetime of visuo-perceptual experiences (observing the behavior and interactions of others) did not support success on False Belief tasks, even when Linguistic demands were minimized. Participants in the comparison groups outperformed the Homesigners, but did not universally pass the False Belief tasks. 
Our results suggest that while some of the social development achievements of young typically developing children may be dissociable from their Linguistic experiences, language and/or educational experiences clearly scaffolds the transition into False Belief understanding. The lack of experience using a shared language cannot be overcome, even with the benefit of many years of observing others’ behaviors and the potential neural reorganization and visuospatial enhancements resulting from deafness.

Mirella Lapata - One of the best experts on this subject based on the ideXlab platform.

  • visual information in semantic representation
    North American Chapter of the Association for Computational Linguistics, 2010
    Co-Authors: Yansong Feng, Mirella Lapata
    Abstract:

    The question of how meaning might be acquired by young children and represented by adult speakers of a language is one of the most debated topics in cognitive science. Existing semantic representation models are primarily amodal, based on information provided by the Linguistic Input, despite ample evidence indicating that the cognitive system is also sensitive to perceptual information. In this work we exploit the vast resource of images and associated documents available on the web and develop a model of multimodal meaning representation which is based on the Linguistic and visual context. Experimental results show that a closer correspondence to human data can be obtained by taking the visual modality into account.

Harm De Vries - One of the best experts on this subject based on the ideXlab platform.

  • modulating early visual processing by language
    Neural Information Processing Systems, 2017
    Co-Authors: Harm De Vries, Florian Strub, Jeremie Mary, Hugo Larochelle, Olivier Pietquin, Aaron Courville
    Abstract:

    It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected. This view dominates the current literature in computational models for language-vision tasks, where visual and Linguistic Inputs are mostly processed independently before being fused into a single representation. In this paper, we deviate from this classic pipeline and propose to modulate the entire visual processing by a Linguistic Input. Specifically, we introduce Conditional Batch Normalization (CBN) as an efficient mechanism to modulate convolutional feature maps by a Linguistic embedding. We apply CBN to a pre-trained Residual Network (ResNet), leading to the MODulatEd ResNet (MRN) architecture, and show that this significantly improves strong baselines on two visual question answering tasks. Our ablation study confirms that modulating from the early stages of the visual processing is beneficial.

  • modulating early visual processing by language
    arXiv: Computer Vision and Pattern Recognition, 2017
    Co-Authors: Harm De Vries, Florian Strub, Jeremie Mary, Hugo Larochelle, Olivier Pietquin, Aaron Courville
    Abstract:

    It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected. This view dominates the current literature in computational models for language-vision tasks, where visual and Linguistic Inputs are mostly processed independently before being fused into a single representation. In this paper, we deviate from this classic pipeline and propose to modulate the entire visual processing by Linguistic Input. Specifically, we condition the batch normalization parameters of a pretrained residual network (ResNet) on a language embedding. This approach, which we call MOdulated RESnet (MRN), significantly improves strong baselines on two visual question answering tasks. Our ablation study shows that modulating from the early stages of the visual processing is beneficial.