Signed Language

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 37,230 Experts worldwide, ranked by the ideXlab platform

Mairead MacSweeney - One of the best experts on this subject based on the ideXlab platform.

  • Language experience impacts brain activation for spoken and signed language in infancy: insights from unimodal and bimodal bilinguals
    Neurobiology of Language, 2019 (in press)
    Co-Authors: Mairead MacSweeney, Evelyne Mercure, Samuel Evans, Laura Pirazzoli, Laura Goldberg, Harriet Bowden-Howl, K Coulson-Thaker, Indie Beedie, Sarah Lloyd-Fox, Mark H Johnson
    Abstract:

    Recent neuroimaging studies suggest that monolingual infants activate a left-lateralized frontotemporal brain network in response to spoken language, which is similar to the network involved in processing spoken and signed language in adulthood. However, it is unclear how brain activation to language is influenced by early experience in infancy. To address this question, we present functional near-infrared spectroscopy (fNIRS) data from 60 hearing infants (4 to 8 months of age): 19 monolingual infants exposed to English, 20 unimodal bilingual infants exposed to two spoken languages, and 21 bimodal bilingual infants exposed to English and British Sign Language (BSL). Across all infants, spoken language elicited activation in a bilateral brain network including the inferior frontal and posterior temporal areas, whereas sign language elicited activation in the right temporoparietal area. A significant difference in brain lateralization was observed between groups. Activation in the posterior temporal region was not lateralized in monolinguals and bimodal bilinguals, but right lateralized in response to both language modalities in unimodal bilinguals. This suggests that the experience of two spoken languages influences brain activation for sign language when experienced for the first time. Multivariate pattern analyses (MVPAs) could classify distributed patterns of activation within the left hemisphere for spoken and signed language in monolinguals (proportion correct = 0.68; p = 0.039) but not in unimodal or bimodal bilinguals. These results suggest that bilingual experience in infancy influences brain activation for language and that unimodal bilingual experience has a greater impact on early brain lateralization than bimodal bilingual experience.

  • Corrigendum to "Fingerspelling, signed language, text and picture processing in deaf native signers: the role of the mid-fusiform gyrus" [NeuroImage 35 (2007) 1287–1302]
    NeuroImage, 2008
    Co-Authors: Dafydd Waters, Ruth Campbell, Cheryl M Capek, Bencie Woll, Anthony S David, Philip McGuire, Michael Brammer, Mairead MacSweeney
    Abstract:

    We recently noticed an error in the Methods section of this paper. Voxel size in Talairach space was reported as 3×3×3 mm. It was actually 3.3×3.3×3.3 mm. Due to this error, and to a systematic error in calculating metric volume, activation volumes were reported incorrectly throughout the paper. These errors have no impact, however, on the arguments presented. In the Results section, and in Tables 3–8, activation sizes are given as both number of activated voxels and metric volume. In all cases, the figure reported for the number of voxels is correct, whereas the volume reported is incorrect. The following formula may be used to calculate the correct volume for these activations:
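    The formula itself was dropped when this listing was extracted. Given the stated correction, the corrected metric volume presumably follows as the reported voxel count multiplied by the true voxel volume of 3.3 × 3.3 × 3.3 mm. A minimal sketch of that reconstruction (Python; this is an inference from the corrigendum's wording, not the paper's verbatim formula):

```python
# Reconstruction (not the paper's verbatim formula): voxels in Talairach
# space are actually 3.3 x 3.3 x 3.3 mm, not 3 x 3 x 3 mm as reported.
VOXEL_EDGE_MM = 3.3

def corrected_volume_mm3(n_voxels: int) -> float:
    """Convert a reported activation's voxel count to metric volume (mm^3)."""
    return n_voxels * VOXEL_EDGE_MM ** 3

# Example: an activation reported as 100 voxels corresponds to:
print(round(corrected_volume_mm3(100), 1))  # 3593.7
```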

  • Sign language and the brain: a review
    Journal of Deaf Studies and Deaf Education, 2008
    Co-Authors: Ruth Campbell, Mairead MacSweeney, Dafydd Waters
    Abstract:

    How are signed languages processed by the brain? This review briefly outlines some basic principles of brain structure and function, and the methodological principles and techniques that have been used to investigate this question. We then summarize a number of different studies exploring brain activity associated with sign language processing, especially as compared to speech processing. We focus on lateralization: is signed language lateralized to the left hemisphere (LH) of native signers, just as spoken language is lateralized to the LH of native speakers, or could sign processing involve the right hemisphere to a greater extent than speech processing? Experiments that have addressed this question are described, and some problems in obtaining a clear answer are outlined.

  • Fingerspelling, signed language, text and picture processing in deaf native signers: the role of the mid-fusiform gyrus
    NeuroImage, 2007
    Co-Authors: Dafydd Waters, Ruth Campbell, Cheryl M Capek, Bencie Woll, Anthony S David, Philip McGuire, Michael Brammer, Mairead MacSweeney
    Abstract:

    In fingerspelling, different hand configurations are used to represent the different letters of the alphabet. Signers use this method of representing written language to fill lexical gaps in a signed language. Using fMRI, we compared cortical networks supporting the perception of fingerspelled, signed, written, and pictorial stimuli in deaf native signers of British Sign Language (BSL). In order to examine the effects of linguistic knowledge, hearing participants who knew neither fingerspelling nor a signed language were also tested. All input forms activated a left fronto-temporal network, including portions of the left inferior temporal and mid-fusiform gyri, in both groups. To examine the extent to which activation in this region was influenced by orthographic structure, two contrasts of orthographic and non-orthographic stimuli were made: one using static stimuli (text vs. pictures), the other using dynamic stimuli (fingerspelling vs. signed language). Greater activation in the left and right inferior temporal and mid-fusiform gyri was found for pictures than for text in both deaf and hearing groups. In the fingerspelling vs. signed language contrast, a significant interaction indicated locations within the left and right mid-fusiform gyri. This showed greater activation for fingerspelling than signed language in deaf but not hearing participants. These results are discussed in light of recent proposals that the mid-fusiform gyrus may act as an integration region, mediating between visual input and higher-order stimulus properties.

Bencie Woll - One of the best experts on this subject based on the ideXlab platform.

  • FINISH variation and grammaticalization in a signed language: how far down this well-trodden pathway is Auslan (Australian Sign Language)?
    Language Variation and Change, 2015
    Co-Authors: Trevor Johnston, Adam Schembri, Donovan Cresdee, Bencie Woll
    Abstract:

    Language variation is often symptomatic of ongoing historical change, including grammaticalization. Signed languages lack detailed historical records and a written literature, so tracking grammaticalization in these languages is problematic. Grammaticalization can, however, also be observed synchronically through the comparison of data on variant word forms and multiword constructions in particular contexts and in different dialects and registers. In this paper, we report an investigation of language change and variation in Auslan (Australian Sign Language). Signs glossed as FINISH were tagged for function (e.g., verb, noun, adverb, auxiliary, conjunction), variation in production (number of hands used, duration, mouthing), position relative to the main verb (pre- or postmodifying), and event types of the clauses in which they appear (states, activities, achievements, accomplishments). The data suggest ongoing grammaticalization may be part of the explanation of the variation: variants correlate with different uses in different linguistic contexts, rather than with social and individual factors.
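    The multi-factor tagging described in the abstract can be pictured as one record per FINISH token. A hypothetical sketch in Python (the field names and values are illustrative, not the study's actual coding scheme):

```python
# Hypothetical annotation record for one token of a sign glossed FINISH.
# The tagged categories follow the abstract: function, production
# variation, position relative to the main verb, and clause event type.
finish_token = {
    "gloss": "FINISH",
    "function": "auxiliary",         # verb, noun, adverb, auxiliary, conjunction
    "hands": 1,                      # number of hands used
    "duration_ms": 240,
    "mouthing": True,
    "position": "postmodifying",     # pre- or postmodifying the main verb
    "event_type": "accomplishment",  # state, activity, achievement, accomplishment
}

# Grammaticalization analyses then look for correlations between factors,
# e.g. selecting all postmodifying auxiliary uses:
def is_aux_postmodifier(token: dict) -> bool:
    return token["function"] == "auxiliary" and token["position"] == "postmodifying"

print(is_aux_postmodifier(finish_token))  # True
```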

  • Corrigendum to "Fingerspelling, signed language, text and picture processing in deaf native signers: the role of the mid-fusiform gyrus" [NeuroImage 35 (2007) 1287–1302]
    NeuroImage, 2008
    Co-Authors: Dafydd Waters, Ruth Campbell, Cheryl M Capek, Bencie Woll, Anthony S David, Philip McGuire, Michael Brammer, Mairead MacSweeney
    Abstract:

    We recently noticed an error in the Methods section of this paper. Voxel size in Talairach space was reported as 3×3×3 mm. It was actually 3.3×3.3×3.3 mm. Due to this error, and to a systematic error in calculating metric volume, activation volumes were reported incorrectly throughout the paper. These errors have no impact, however, on the arguments presented. In the Results section, and in Tables 3–8, activation sizes are given as both number of activated voxels and metric volume. In all cases, the figure reported for the number of voxels is correct, whereas the volume reported is incorrect. The following formula may be used to calculate the correct volume for these activations:

  • The bimodal bilingual brain: fMRI investigations concerning the cortical distribution and differentiation of signed language and speechreading
    Rivista di Psicolinguistica Applicata, VIII(3), pp. 109–124, 2008
    Co-Authors: Cheryl M Capek, Ruth Campbell, Bencie Woll
    Abstract:

    Many users of signed languages also have access to a spoken language. They are bilingual in two modalities: spoken language and signed language. Here we consider some fMRI findings relevant to bimodal bilingualism. We explored comprehension of signs and of seen spoken words in bimodal bilinguals: native signers of British Sign Language (BSL) who are proficient speechreaders of English. Both deaf and hearing bimodal bilinguals were tested. Seen words and signs activated different regions of the temporal lobes bilaterally. Signs activated more posterior and inferior regions, whereas seen speech activated middle and superior posterior temporal regions to a greater extent. We also observed characteristic dissociations within BSL in the bimodal bilingual participants dependent on hearing status. In deaf respondents, manual signs with 'mouthings' (oral speechlike actions) and manual signs with 'mouth gestures' (oral non-speechlike actions) showed distinctive patterns that resembled those where speech and sign were contrasted directly. The dissociated pattern was only partly replicated in hearing bimodal bilinguals. That is, hearing status can moderate cortical activation related to oral and manual actions in sign language processing. A further analysis identified amodal language regions in the deaf bimodal brain. Superior temporal regions that were activated for both signs and seen speech in deaf bilinguals were only activated by seen speech in hearing monolinguals.

  • Fingerspelling, signed language, text and picture processing in deaf native signers: the role of the mid-fusiform gyrus
    NeuroImage, 2007
    Co-Authors: Dafydd Waters, Ruth Campbell, Cheryl M Capek, Bencie Woll, Anthony S David, Philip McGuire, Michael Brammer, Mairead MacSweeney
    Abstract:

    In fingerspelling, different hand configurations are used to represent the different letters of the alphabet. Signers use this method of representing written language to fill lexical gaps in a signed language. Using fMRI, we compared cortical networks supporting the perception of fingerspelled, signed, written, and pictorial stimuli in deaf native signers of British Sign Language (BSL). In order to examine the effects of linguistic knowledge, hearing participants who knew neither fingerspelling nor a signed language were also tested. All input forms activated a left fronto-temporal network, including portions of the left inferior temporal and mid-fusiform gyri, in both groups. To examine the extent to which activation in this region was influenced by orthographic structure, two contrasts of orthographic and non-orthographic stimuli were made: one using static stimuli (text vs. pictures), the other using dynamic stimuli (fingerspelling vs. signed language). Greater activation in the left and right inferior temporal and mid-fusiform gyri was found for pictures than for text in both deaf and hearing groups. In the fingerspelling vs. signed language contrast, a significant interaction indicated locations within the left and right mid-fusiform gyri. This showed greater activation for fingerspelling than signed language in deaf but not hearing participants. These results are discussed in light of recent proposals that the mid-fusiform gyrus may act as an integration region, mediating between visual input and higher-order stimulus properties.

Dafydd Waters - One of the best experts on this subject based on the ideXlab platform.

  • Corrigendum to "Fingerspelling, signed language, text and picture processing in deaf native signers: the role of the mid-fusiform gyrus" [NeuroImage 35 (2007) 1287–1302]
    NeuroImage, 2008
    Co-Authors: Dafydd Waters, Ruth Campbell, Cheryl M Capek, Bencie Woll, Anthony S David, Philip McGuire, Michael Brammer, Mairead MacSweeney
    Abstract:

    We recently noticed an error in the Methods section of this paper. Voxel size in Talairach space was reported as 3×3×3 mm. It was actually 3.3×3.3×3.3 mm. Due to this error, and to a systematic error in calculating metric volume, activation volumes were reported incorrectly throughout the paper. These errors have no impact, however, on the arguments presented. In the Results section, and in Tables 3–8, activation sizes are given as both number of activated voxels and metric volume. In all cases, the figure reported for the number of voxels is correct, whereas the volume reported is incorrect. The following formula may be used to calculate the correct volume for these activations:

  • Sign language and the brain: a review
    Journal of Deaf Studies and Deaf Education, 2008
    Co-Authors: Ruth Campbell, Mairead MacSweeney, Dafydd Waters
    Abstract:

    How are signed languages processed by the brain? This review briefly outlines some basic principles of brain structure and function, and the methodological principles and techniques that have been used to investigate this question. We then summarize a number of different studies exploring brain activity associated with sign language processing, especially as compared to speech processing. We focus on lateralization: is signed language lateralized to the left hemisphere (LH) of native signers, just as spoken language is lateralized to the LH of native speakers, or could sign processing involve the right hemisphere to a greater extent than speech processing? Experiments that have addressed this question are described, and some problems in obtaining a clear answer are outlined.

  • Fingerspelling, signed language, text and picture processing in deaf native signers: the role of the mid-fusiform gyrus
    NeuroImage, 2007
    Co-Authors: Dafydd Waters, Ruth Campbell, Cheryl M Capek, Bencie Woll, Anthony S David, Philip McGuire, Michael Brammer, Mairead MacSweeney
    Abstract:

    In fingerspelling, different hand configurations are used to represent the different letters of the alphabet. Signers use this method of representing written language to fill lexical gaps in a signed language. Using fMRI, we compared cortical networks supporting the perception of fingerspelled, signed, written, and pictorial stimuli in deaf native signers of British Sign Language (BSL). In order to examine the effects of linguistic knowledge, hearing participants who knew neither fingerspelling nor a signed language were also tested. All input forms activated a left fronto-temporal network, including portions of the left inferior temporal and mid-fusiform gyri, in both groups. To examine the extent to which activation in this region was influenced by orthographic structure, two contrasts of orthographic and non-orthographic stimuli were made: one using static stimuli (text vs. pictures), the other using dynamic stimuli (fingerspelling vs. signed language). Greater activation in the left and right inferior temporal and mid-fusiform gyri was found for pictures than for text in both deaf and hearing groups. In the fingerspelling vs. signed language contrast, a significant interaction indicated locations within the left and right mid-fusiform gyri. This showed greater activation for fingerspelling than signed language in deaf but not hearing participants. These results are discussed in light of recent proposals that the mid-fusiform gyrus may act as an integration region, mediating between visual input and higher-order stimulus properties.

Gregory Hickok - One of the best experts on this subject based on the ideXlab platform.

  • Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language
    Journal of Cognitive Neuroscience, 2008
    Co-Authors: Judy Pa, Ursula Bellugi, Stephen M Wilson, Herbert Pickell, Gregory Hickok
    Abstract:

    Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.

  • The role of the left frontal operculum in sign language aphasia
    Neurocase, 1996
    Co-Authors: Gregory Hickok, Mark Kritchevsky, Ursula Bellugi, Edward S Klima
    Abstract:

    Broca's area has long been implicated in aspects of speech production. But does this region play a role in the production of signed language in prelingually deaf individuals? In this report, we describe our findings in a patient, congenitally deaf and a native user of American Sign Language, who suffered an ischemic infarct involving the left frontal operculum. Our patient presented with an acute expressive aphasia that subsequently resolved, and a chronic deficit predominantly characterized by frequent phonemic-like paraphasias. We conclude that the left frontal operculum does, in fact, play a role in the production of signed language.

Trevor Johnston - One of the best experts on this subject based on the ideXlab platform.

  • FINISH variation and grammaticalization in a signed language: how far down this well-trodden pathway is Auslan (Australian Sign Language)?
    Language Variation and Change, 2015
    Co-Authors: Trevor Johnston, Adam Schembri, Donovan Cresdee, Bencie Woll
    Abstract:

    Language variation is often symptomatic of ongoing historical change, including grammaticalization. Signed languages lack detailed historical records and a written literature, so tracking grammaticalization in these languages is problematic. Grammaticalization can, however, also be observed synchronically through the comparison of data on variant word forms and multiword constructions in particular contexts and in different dialects and registers. In this paper, we report an investigation of language change and variation in Auslan (Australian Sign Language). Signs glossed as FINISH were tagged for function (e.g., verb, noun, adverb, auxiliary, conjunction), variation in production (number of hands used, duration, mouthing), position relative to the main verb (pre- or postmodifying), and event types of the clauses in which they appear (states, activities, achievements, accomplishments). The data suggest ongoing grammaticalization may be part of the explanation of the variation: variants correlate with different uses in different linguistic contexts, rather than with social and individual factors.

  • The reluctant oracle: using strategic annotations to add value to, and extract value from, a signed language corpus
    Corpora, 2014
    Co-Authors: Trevor Johnston
    Abstract:

    In this paper, I discuss the ways in which multimedia annotation software is being used to transform an archive of Auslan recordings into a true machine-readable language corpus. After the basic structure of the annotation files in the Auslan corpus is described and the exercise differentiated from transcription, the glossing and annotation conventions are explained. Following this, I exemplify the searching and pattern-matching at different levels of linguistic organisation that these annotations make possible. The paper shows how, in the creation of signed language corpora, it is important to be clear about the difference between transcription and annotation. Without an awareness of this distinction, and despite time-consuming and expensive processing of the video recordings, we may not be able to discern the types of patterns in our corpora that we hope to. The conventions are designed to ensure that the annotations really do enable researchers to identify regularities at different levels of linguistic organisation.

  • From archive to corpus: transcription and annotation in the creation of signed language corpora
    International Journal of Corpus Linguistics, 2010
    Co-Authors: Trevor Johnston
    Abstract:

    Annotations are an important resource in corpus-based linguistic research. In fact, the most important feature of a modern signed language corpus should be that it has been annotated rather than simply transcribed. Digital multi-media annotation software can now transform language recordings into machine-readable texts using gloss-based annotations without it first being necessary to transcribe these utterances, provided that sign tokens are identified and discriminated according to type. Further annotations can subsequently be appended to these units. However, unique identifiers of sign types (or ‘ID-glosses’) can only be used if a comprehensive reference lexical database of the language already exists. In order to create a basic multi-purpose reference signed language corpus, therefore, linguists should prioritize annotation using ID-glosses above transcription. The effort expended in creating a transcription that does not facilitate the unique identification of sign types will not result in a machine-readable corpus in any meaningful sense, contrary to expectations.
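    The role of ID-glosses can be illustrated with a toy example: each token annotation points to a unique type identifier in a reference lexical database, and it is this link that makes type-level searching and counting possible. A hypothetical Python sketch (the glosses, fields, and data are invented for illustration):

```python
from collections import Counter

# Toy reference lexical database: ID-gloss -> lexical entry.
lexicon = {
    "FINISH": {"category": "verb"},
    "HOUSE": {"category": "noun"},
}

# Token annotations from a recording: each token is tied to a time span
# and, crucially, to a unique ID-gloss rather than a free-form gloss.
tokens = [
    {"start_ms": 0,   "end_ms": 350,  "id_gloss": "HOUSE"},
    {"start_ms": 400, "end_ms": 640,  "id_gloss": "FINISH"},
    {"start_ms": 900, "end_ms": 1180, "id_gloss": "FINISH"},
]

# Because tokens share ID-glosses, type frequencies fall out of a simple
# count; a free-form transcription would not support this directly.
freq = Counter(t["id_gloss"] for t in tokens)
print(freq["FINISH"])  # 2
```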

  • From archive to corpus: transcription and annotation in the creation of signed language corpora
    Pacific Asia Conference on Language, Information and Computation, 2008
    Co-Authors: Trevor Johnston
    Abstract:

    The essential characteristic of a signed language corpus is that it has been annotated, and not, contrary to the practice of many signed language researchers, that it has been transcribed. Annotations are necessary for corpus-based investigations of signed or spoken languages. Multi-media annotation software can now be used to transform a recording into a machine-readable text without it first being necessary to transcribe the text, provided that linguistic units are uniquely identified and annotations subsequently appended to these units. These unique identifiers are here referred to as ID-glosses. The use of ID-glosses is only possible if a reference lexical database (i.e., dictionary) exists as the result of prior foundation research into the lexicon. In short, the creators of signed language corpora should prioritize annotation above transcription, and ensure that signs are identified using unique gloss-based annotations. Without this the whole rationale for corpus-creation is undermined.

  • Issues in the creation of a digital archive of a signed language
    2006
    Co-Authors: Trevor Johnston, Adam Schembri
    Abstract:

    In this paper we summarise the fieldwork involved in creating the Auslan (Australian Sign Language) archive and corpus. We briefly discuss some of the technical and ethical problems in data collection and management associated with visually based linguistic data for an archive intended to be as open as possible. We also discuss the new research questions opened up by the existence of signed language data in this new form. The Auslan archive is the output of a project funded by the Endangered Languages Documentation Programme within the School of Oriental and African Studies (SOAS) at the University of London and is to be submitted in mid-2007.