Sign Languages

The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform.

Evie Malaia - One of the best experts on this subject based on the ideXlab platform.

  • Visual and linguistic components of short-term memory: Generalized neural model (GNM) for spoken and sign languages
    Cortex, 2019
    Co-Authors: Evie Malaia, Ronnie B Wilbur
    Abstract:

    The question of apparent discrepancies in short-term memory capacity for sign language and speech has long presented difficulties for models of verbal working memory. While short-term memory (STM) capacity for spoken language spans up to 7 ± 2 items, verbal working memory capacity for sign languages appears to be lower, at 5 ± 2. The assumption that auditory and visual communication (sign language) rely on the same memory buffers led to claims of impaired STM buffers in sign language users. Yet no common model deals with both the sensory and linguistic nature of spoken and sign languages. The authors present a generalized neural model (GNM) of short-term memory use across modalities, which accounts for experimental results in both sign and spoken languages. The GNM postulates that during hierarchically organized processing phases in language comprehension, spoken language users rely on neural resources for spatial representation in a sequential rehearsal strategy, i.e., the phonological loop. The spatial nature of sign language precludes signers from using a similar ‘overflow’ strategy, which speakers rely on to extend their STM capacity. This model offers a parsimonious neuroarchitectural explanation for the conflict between spatial and linguistic processing in spoken language, as well as for the differences observed in STM capacity for sign and speech.

Carlo Cecchetto - One of the best experts on this subject based on the ideXlab platform.

  • Another way to mark syntactic dependencies: The case for right-peripheral specifiers in sign languages
    Language, 2009
    Co-Authors: Carlo Cecchetto, Carlo Geraci, Sandro Zucchi
    Abstract:

    The occurrence of wh-items at the right edge of the sentence, while extremely rare in spoken languages, is quite common in sign languages. In particular, in sign languages like LIS (Italian Sign Language), wh-items cannot be positioned at the left edge. We argue that existing accounts of right-peripheral occurrences of wh-items are empirically inadequate and provide no clue as to why sign languages and spoken languages differ in this respect. We suggest that the occurrence of wh-items at the right edge of the sentence in sign languages be taken at face value: in these languages, wh-phrases undergo rightward movement. Based on data from LIS, we argue that this is due to the fact that wh-nonmanual marking (NMM) marks the dependency between an interrogative complementizer and the position that the wh-phrase occupies before it moves. The hypothesis that NMM can play this role also accounts for the spreading of negative NMM with LIS negative quantifiers. We discuss how our analysis can be extended to ASL (American Sign Language) and IPSL (Indo-Pakistani Sign Language). Our account is spelled out in the principles-and-parameters framework. In the last part of the article, we relate our proposal to recent work on prosody in spoken languages showing that wh-dependencies can be prosodically marked in spoken languages. Overt movement and prosodic marking of the wh-dependency do not normally co-occur in spoken languages, while both are possible in sign languages. We propose that this is because sign languages, unlike spoken languages, are multidimensional.

  • How grammar can cope with limited short-term memory: Simultaneity and seriality in sign languages
    Cognition, 2008
    Co-Authors: Carlo Geraci, Marta Gozzi, Costanza Papagno, Carlo Cecchetto
    Abstract:

    It is known that memory span in American Sign Language (ASL) is shorter than in English, but this discrepancy has never been systematically investigated using other pairs of signed and spoken languages. This finding is at odds with results showing that short-term memory (STM) for signs has an internal organization similar to STM for words. Moreover, some methodological questions remain open. We therefore measured the spans of deaf and matched hearing participants for Italian Sign Language (LIS) and Italian, respectively, controlling for all the variables that might be responsible for the discrepancy; yet a difference in span between deaf signers and hearing speakers was still found. However, the advantage of hearing subjects disappeared in a visuo-spatial STM task. We attribute the lower span to the internal structure of signs: unlike English (or Italian) words, signs contain both simultaneous and sequential components. Nonetheless, sign languages are fully fledged grammatical systems, probably because the overall architecture of the grammar of signed languages reduces the STM load. Our hypothesis is that the faculty of language depends on STM, while being flexible enough to develop even in a relatively hostile environment.

Ronnie B Wilbur - One of the best experts on this subject based on the ideXlab platform.

  • Visual and linguistic components of short-term memory: Generalized neural model (GNM) for spoken and sign languages
    Cortex, 2019
    Co-Authors: Evie Malaia, Ronnie B Wilbur
    Abstract:

    The question of apparent discrepancies in short-term memory capacity for sign language and speech has long presented difficulties for models of verbal working memory. While short-term memory (STM) capacity for spoken language spans up to 7 ± 2 items, verbal working memory capacity for sign languages appears to be lower, at 5 ± 2. The assumption that auditory and visual communication (sign language) rely on the same memory buffers led to claims of impaired STM buffers in sign language users. Yet no common model deals with both the sensory and linguistic nature of spoken and sign languages. The authors present a generalized neural model (GNM) of short-term memory use across modalities, which accounts for experimental results in both sign and spoken languages. The GNM postulates that during hierarchically organized processing phases in language comprehension, spoken language users rely on neural resources for spatial representation in a sequential rehearsal strategy, i.e., the phonological loop. The spatial nature of sign language precludes signers from using a similar ‘overflow’ strategy, which speakers rely on to extend their STM capacity. This model offers a parsimonious neuroarchitectural explanation for the conflict between spatial and linguistic processing in spoken language, as well as for the differences observed in STM capacity for sign and speech.

  • Internally headed relative clauses in sign languages
    Glossa: a journal of general linguistics, 2017
    Co-Authors: Ronnie B Wilbur
    Abstract:

    This article considers relative clause data from sign languages in light of their variation with respect to basic word order, nonmanual marking, and the presence or absence of internally headed and externally headed relative clauses. Syntactically, a double-merge cartographic model (Cinque 2005a; b), following Brunelli (2011), is adopted. The differences across sign languages are suggested to result from differences in raising requirements with respect to the relative clauses themselves and with respect to their heads, rather than from basic word order, the use of complementizers, relative pronouns, or nominalizers, or the (type of) nonmanual marking. Typologically, it is noted that several of the SVO sign languages have internally headed relative clauses (IHRCs), that at least one SOV sign language does not, and that three of the sign languages have both internally headed (IHRCs) and externally headed (EHRCs) relative clauses. This article is part of the special collection: Internally-headed Relative Clauses.

Carlo Geraci - One of the best experts on this subject based on the ideXlab platform.

  • Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases
    Proceedings of the National Academy of Sciences of the United States of America, 2015
    Co-Authors: Brent Strickland, Carlo Geraci, Emmanuel Chemla, Philippe Schlenker, Meltem Kelepir, Roland Pfau
    Abstract:

    One key issue in the study of human language is understanding what, if any, features of individual languages may be universally accessible. Sign languages offer a privileged perspective on this issue because the visual modality can help implement and detect certain properties that may be present but unmarked in spoken languages. The current work finds that fine-grained aspects of verb meanings visibly emerge across unrelated sign languages using identical mappings between meaning and visual form. Moreover, non-signers lacking prior exposure to sign languages can intuit these meanings from entirely unfamiliar signs. This strongly suggests that signers and non-signers share universally accessible notions of telicity as well as universally accessible “mapping biases” between telicity and visual form.

  • Determining argument structure in sign languages
    2014
    Co-Authors: Carlo Geraci, Josep Quer
    Abstract:

    In this paper we offer an overview of existing analyses of argument structure that sets the stage for further inquiry into this domain. The particular structure of the lexicon in sign languages (SLs) is introduced, with special attention to the agreement patterns found in lexical predicates, as overt agreement marking in the set of verbs that can realize it offers a window into verb meaning and overt argument realization. Classifier predicates, on the other hand, have proven to be a very rich domain for research on argument structure: unaccusative/unergative and unaccusative/transitive alternations have been identified in American Sign Language (ASL) classifier constructions and replicated in other SLs. As expected, the validity of valency tests is sometimes limited to one language, but the alternations are attested crosslinguistically and can be applied to lexical verbs as well. Especially interesting is the traditional divide between agreement marking in lexical predicates and spatial agreement marking in classifier constructions, often seen as different in nature. Given that the morphological exponence of agreement is superficially the same (i.e., the path or trajectory that the verbal sign traverses in signing space), the divide must be motivated by empirical arguments, which are not always compatible or consistent with broad empirical coverage. We identify a number of areas where research should be carried out in order to advance our understanding of argument structure in languages in the visual-gestural modality and to determine which of the observed properties are really modality-specific.

  • Another way to mark syntactic dependencies: The case for right-peripheral specifiers in sign languages
    Language, 2009
    Co-Authors: Carlo Cecchetto, Carlo Geraci, Sandro Zucchi
    Abstract:

    The occurrence of wh-items at the right edge of the sentence, while extremely rare in spoken languages, is quite common in sign languages. In particular, in sign languages like LIS (Italian Sign Language), wh-items cannot be positioned at the left edge. We argue that existing accounts of right-peripheral occurrences of wh-items are empirically inadequate and provide no clue as to why sign languages and spoken languages differ in this respect. We suggest that the occurrence of wh-items at the right edge of the sentence in sign languages be taken at face value: in these languages, wh-phrases undergo rightward movement. Based on data from LIS, we argue that this is due to the fact that wh-nonmanual marking (NMM) marks the dependency between an interrogative complementizer and the position that the wh-phrase occupies before it moves. The hypothesis that NMM can play this role also accounts for the spreading of negative NMM with LIS negative quantifiers. We discuss how our analysis can be extended to ASL (American Sign Language) and IPSL (Indo-Pakistani Sign Language). Our account is spelled out in the principles-and-parameters framework. In the last part of the article, we relate our proposal to recent work on prosody in spoken languages showing that wh-dependencies can be prosodically marked in spoken languages. Overt movement and prosodic marking of the wh-dependency do not normally co-occur in spoken languages, while both are possible in sign languages. We propose that this is because sign languages, unlike spoken languages, are multidimensional.

  • How grammar can cope with limited short-term memory: Simultaneity and seriality in sign languages
    Cognition, 2008
    Co-Authors: Carlo Geraci, Marta Gozzi, Costanza Papagno, Carlo Cecchetto
    Abstract:

    It is known that memory span in American Sign Language (ASL) is shorter than in English, but this discrepancy has never been systematically investigated using other pairs of signed and spoken languages. This finding is at odds with results showing that short-term memory (STM) for signs has an internal organization similar to STM for words. Moreover, some methodological questions remain open. We therefore measured the spans of deaf and matched hearing participants for Italian Sign Language (LIS) and Italian, respectively, controlling for all the variables that might be responsible for the discrepancy; yet a difference in span between deaf signers and hearing speakers was still found. However, the advantage of hearing subjects disappeared in a visuo-spatial STM task. We attribute the lower span to the internal structure of signs: unlike English (or Italian) words, signs contain both simultaneous and sequential components. Nonetheless, sign languages are fully fledged grammatical systems, probably because the overall architecture of the grammar of signed languages reduces the STM load. Our hypothesis is that the faculty of language depends on STM, while being flexible enough to develop even in a relatively hostile environment.

Asli Ozyurek - One of the best experts on this subject based on the ideXlab platform.

  • Does space structure spatial language? A comparison of spatial expression across sign languages
    Language, 2015
    Co-Authors: Pamela M Perniss, I E P Zwitserlood, Asli Ozyurek
    Abstract:

    The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in the linguistic encoding of space found in spoken languages. However, the possibility and nature of linguistic diversity in spatial encoding in sign languages has not been rigorously investigated through systematic crosslinguistic comparison. Here, we compare locative expression in two unrelated sign languages, Turkish Sign Language (Türk İşaret Dili, TİD) and German Sign Language (Deutsche Gebärdensprache, DGS), focusing on the expression of figure-ground (e.g., cup on table) and figure-figure (e.g., cup next to cup) relationships in a discourse context. In addition to similarities, we report qualitative and quantitative differences between the sign languages in the formal devices used (i.e., unimanual vs. bimanual; simultaneous vs. sequential) and in the degree of iconicity of the spatial devices. Our results suggest that sign languages may display more diversity in the spatial domain than has previously been assumed, and in a way more comparable with the diversity found in spoken languages. The study contributes to a more comprehensive understanding of how space gets encoded in language.

  • Does space structure spatial language? Linguistic encoding of space in sign languages
    Cognitive Science, 2011
    Co-Authors: Pamela M Perniss, I E P Zwitserlood, Asli Ozyurek
    Abstract:

    Spatial language in signed language is assumed to be shaped by the affordances of the visual-spatial modality – where the use of the hands and space allows the mapping of spatial relationships in an iconic, analogue way – and thus to be similar across sign languages. In this study, we test assumptions regarding the modality-driven similarity of spatial language by comparing locative expressions (e.g., cup is on the table) in two unrelated sign languages, TİD (Türk İşaret Dili, Turkish Sign Language) and DGS (Deutsche Gebärdensprache, German Sign Language), in a communicative, discourse context. Our results show that each sign language conventionalizes the structure of locative expressions in different ways, going beyond iconic and analogue representations, suggesting that the use of space to represent space does not uniformly and predictably drive spatial language in the visual-spatial modality. These results are important for our understanding of how language modality shapes the structure of language.

    Keywords: iconicity; language modality; spatial language; locative expression; sign language

    Introduction. Despite the difference in modality of expression, signed (visual-spatial) and spoken (vocal-aural) languages similarly conform to principles of grammatical structure and linguistic form (Klima & Bellugi, 1979; Liddell, 1980; Padden, 1983; Stokoe, 1960; Supalla, 1986). However, in signed language, the use of the hands as primary articulators within a visible spatial medium for expression (i.e., the space around the body) has special consequences for the expression of visual-spatial information (e.g., of referent size/shape, location, or motion). Spatial language, such as locative expressions, is a primary domain in which modality affects the structure of representation. Locative expressions in both signed and spoken language are characterized by linguistic encoding of entities and the spatial relationship between them (cf. Talmy, 1985). However, sign language locative expressions differ radically from those in spoken language in affording a visual similarity (or iconicity) with the real-world scenes being represented. For example, a signed expression of the spatial relationship between a house and a bicycle is clearly iconic of the scene itself. In the example from American Sign Language (ASL) in Figure 1, the signer depicts a bicycle as being located beside a house by placing her hands (her left hand representing the house in still 2; her right hand representing the bicycle in still 4) next to each other in sign space. The spatial relationship between the signer's hands represents the spatial relationship between the referents, whereby the handshapes are iconic with certain features of the referents (e.g., the inverted cupped hand to represent the bulk of a house). In contrast, there is no resemblance, or iconicity, between the actual scene and the linguistic form of a spoken language locative expression, as in the English expression "There is a bicycle next to the house."

    [Figure 1. Example of an ASL (American Sign Language) locative expression depicting the spatial relationship of a bicycle next to a house (Emmorey, 2002). The expression contains the lexical signs for house (still 1) and bicycle (still 3), each followed by a locative predicate localizing the referent in space. Gloss: HOUSE loc-here BICYCLE loc-next-to-house.]

    In general, spoken languages exhibit a wide range of crosslinguistic variation in the encoding of spatial relationships in locative expressions, both in the devices used and in their morphosyntactic arrangement (Grinevald, 2006; Levinson & Wilkins, 2006). For example, spoken language locative expressions exhibit the use of adpositions, like the spatial prepositions used in English or the case-marking postpositions used in Turkish, or different types of locative or postural verbs (as in Ewe (Ghana) or Tzeltal (Mexico)). Such variation is not expected in signed languages, however. Instead, signed languages are assumed to be structurally homogeneous in the expression of spatial relationships. The affordances of the visual-spatial modality for iconic, analogue spatial representation are assumed to be the primary force in shaping spatial expression, thus creating fundamental similarities in spatial language across different sign languages (e.g., Aronoff, Meir, Padden & Sandler, 2003; Emmorey, 2002). A consequence of this assumption of similarity, rooted in the notion that signers will exploit the iconic affordances of the modality where possible, has been a dearth of empirical investigation in this domain. Where the encoding of spatial relationships is mentioned in the literature, its iconic character is stated as fact, conforming to the underlying assumption that spatial relationships will be represented in an iconic, analogue way …