Speech Comprehension

The Experts below are selected from a list of 3,471 Experts worldwide, ranked by the ideXlab platform.

Asli Ozyurek - One of the best experts on this subject based on the ideXlab platform.

  • Degree of Language Experience Modulates Visual Attention to Visible Speech and Iconic Gestures During Clear and Degraded Speech Comprehension
    Cognitive Science, 2019
    Co-Authors: Linda Drijvers, Asli Ozyurek, Julija Vaitonyte
    Abstract:

    Visual information conveyed by iconic hand gestures and visible Speech can enhance Speech Comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during Speech Comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible Speech and gestures during clear and degraded Speech Comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest for all listeners when Speech was degraded, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during Comprehension, but non-native listeners gazed at gestures more often than native listeners did. However, only native, but not non-native, listeners' gaze allocation to gestures predicted gestural benefit during degraded Speech Comprehension. We conclude that non-native listeners might gaze at gestures more because it is more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible Speech. This diminished phonological knowledge might hinder non-native listeners' use of the semantic information conveyed by gestures, compared with native listeners. Our results demonstrate that the degree of language experience affects overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
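
The degradation named above, noise-vocoding, is a standard signal-processing manipulation: split the speech spectrum into frequency bands, extract each band's amplitude envelope, and use those envelopes to modulate band-limited noise. A minimal Python sketch follows; the logarithmic band spacing, filter order, and normalization are illustrative assumptions, not the authors' exact parameters.

```python
# Minimal noise-vocoding sketch (illustrative; not the authors' exact pipeline).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=6, lo=100.0, hi=8000.0):
    """Replace each band's temporal fine structure with envelope-modulated noise."""
    speech = np.asarray(speech, dtype=float)
    rng = np.random.default_rng(0)
    edges = np.geomspace(lo, hi, n_bands + 1)    # log-spaced band edges (assumed)
    noise = rng.standard_normal(len(speech))
    out = np.zeros_like(speech)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))              # band envelope via Hilbert transform
        carrier = sosfiltfilt(sos, noise)        # band-limited noise carrier
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-9)    # normalize to avoid clipping
```

Fewer bands preserve less spectral detail, which is why 6-band vocoded verbs remain partly intelligible but effortful to recognize.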

  • Non-native Listeners Benefit Less from Gestures and Visible Speech than Native Listeners During Degraded Speech Comprehension
    Language and Speech, 2019
    Co-Authors: Linda Drijvers, Asli Ozyurek
    Abstract:

    Native listeners benefit from both visible Speech and iconic gestures to enhance degraded Speech Comprehension (Drijvers & Ozyurek, 2017). We tested how highly proficient non-native listeners benefit from these visual articulators compared to native listeners. We presented videos of an actress uttering a verb in clear, moderately degraded, or severely degraded Speech, while her lips were blurred, visible, or visible and accompanied by a gesture. Our results revealed that, unlike native listeners, non-native listeners were less likely to benefit from the combined enhancement of visible Speech and gestures, especially because the benefit from visible Speech alone was minimal when signal quality was insufficient.

  • Hearing and Seeing Meaning in Noise: Alpha, Beta, and Gamma Oscillations Predict Gestural Enhancement of Degraded Speech Comprehension
    Human Brain Mapping, 2018
    Co-Authors: Linda Drijvers, Asli Ozyurek, Ole Jensen
    Abstract:

    During face-to-face communication, listeners integrate Speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid Speech Comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded Speech Comprehension. Participants watched videos of an actress uttering clear or degraded Speech, accompanied by a gesture or not, and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded Speech, alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), the medial temporal lobe, and occipital regions. These low- and high-frequency oscillatory modulations support general unification, integration, and lexical access processes during online language Comprehension, as well as simulation of, and increased visual attention to, manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded Speech Comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded Speech Comprehension and provide the first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level.
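
The reported effects are changes in band-limited oscillatory power relative to a baseline window. A toy single-channel version of that quantity, using band-pass filtering plus the Hilbert transform, is sketched below; the band edges and analysis windows are assumptions, and the study's actual MEG pipeline (source reconstruction, cluster statistics) is far more involved.

```python
# Toy sketch: percent power change vs. baseline in alpha/beta/gamma bands
# for one channel of epoched data. All specifics are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

BANDS = {"alpha": (8, 12), "beta": (15, 25), "gamma": (60, 90)}  # Hz (assumed)

def band_power_change(epochs, fs, t, base=(-0.5, 0.0), win=(0.0, 1.0)):
    """epochs: (n_trials, n_samples); t: time axis in s. Returns % change per band."""
    bmask = (t >= base[0]) & (t < base[1])
    wmask = (t >= win[0]) & (t < win[1])
    out = {}
    for name, (f1, f2) in BANDS.items():
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        analytic = hilbert(sosfiltfilt(sos, epochs, axis=1), axis=1)
        power = np.abs(analytic) ** 2            # instantaneous band power
        baseline = power[:, bmask].mean()
        out[name] = 100.0 * (power[:, wmask].mean() - baseline) / baseline
    return out
```

A negative alpha/beta value together with a positive gamma value would correspond to the suppression/increase pattern described above.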

  • Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension
    Journal of Speech, Language, and Hearing Research, 2017
    Co-Authors: Linda Drijvers, Asli Ozyurek
    Abstract:

    Purpose This study investigated whether and to what extent iconic co-Speech gestures contribute to information from visible Speech to enhance degraded Speech Comprehension at different levels of no...

Sonja A Kotz - One of the best experts on this subject based on the ideXlab platform.

  • Predictions in Speech Comprehension: fMRI evidence on the meter–semantic interface
    NeuroImage, 2013
    Co-Authors: Kathrin Rothermich, Sonja A Kotz
    Abstract:

    When listening to Speech, we not only form predictions about what is coming next, but also about when something is coming. For example, metric stress may be used to predict the next salient Speech event (i.e., the next stressed syllable) and in turn facilitate Speech Comprehension. However, Speech Comprehension can also be facilitated by semantic context, that is, by which content word is likely to appear next. In the current fMRI experiment we investigated (1) the brain networks that underlie metric and semantic predictions by means of prediction errors, (2) how semantic processing is influenced by a metrically regular or irregular sentence context, and (3) whether task demands influence both processes. The results are threefold: first, while metrically incongruent sentences activated a bilateral fronto-striatal network, semantically incongruent trials led to activation of fronto-temporal areas. Second, a metrically regular context facilitated Speech Comprehension in the left fronto-temporal language network. Third, attention directed to metric or semantic aspects of Speech engaged different subcomponents of the left inferior frontal gyrus (IFG). The current results suggest that Speech Comprehension relies on different forms of prediction, and they extend known Speech Comprehension networks to subcortical sensorimotor areas.

  • Bilateral Speech Comprehension Reflects Differential Sensitivity to Spectral and Temporal Features
    The Journal of Neuroscience, 2008
    Co-Authors: Jonas Obleser, Frank Eisner, Sonja A Kotz
    Abstract:

    Speech Comprehension has been shown to be a strikingly bilateral process, but the differential contributions of the subfields of the left and right auditory cortices have remained elusive. The hypothesis that left auditory areas engage predominantly in decoding fast temporal perturbations of a signal, whereas the right areas are relatively more driven by changes in the frequency spectrum, has not been directly tested in Speech or music. This brain-imaging study independently manipulated the Speech signal along the spectral and temporal domains using noise-band vocoding. In a parametric design with five temporal and five spectral degradation levels in word Comprehension, a functional distinction of the left and right auditory association cortices emerged: increases in the temporal detail of the signal were most effective in driving brain activation of the left anterolateral superior temporal sulcus (STS), whereas the right homologous areas exhibited stronger sensitivity to variations in spectral detail. In accordance with behavioral measures of Speech Comprehension acquired in parallel, changes in spectral detail exhibited stronger coupling with the STS BOLD signal. The relative pattern of lateralization (quantified using lateralization quotients) proved reliable in a jack-knifed iterative reanalysis of the group functional magnetic resonance imaging model. This study supplies direct evidence for the often-implied functional distinction of the two cerebral hemispheres in Speech processing. By applying direct manipulations to the Speech signal rather than to low-level surrogates, the results lend plausibility to the notion of complementary roles for the left and right superior temporal sulci in comprehending the Speech signal.
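
The lateralization quotients mentioned above are typically a normalized left/right difference. The paper's exact formula is not reproduced here, so treat the following as an assumed, common form:

```python
# Common lateralization-quotient form (an assumption; the paper's exact
# definition may differ): positive values indicate left lateralization.
def lateralization_quotient(left: float, right: float) -> float:
    """LQ = (L - R) / (L + R), bounded in [-1, 1] for non-negative inputs."""
    return (left - right) / (left + right)

# e.g., BOLD effect sizes of 1.8 (left STS) vs. 1.2 (right STS) -> LQ = 0.2
print(lateralization_quotient(1.8, 1.2))
```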

  • Role of the Corpus Callosum in Speech Comprehension: Interfacing Syntax and Prosody
    Neuron, 2007
    Co-Authors: Angela D Friederici, D Yves von Cramon, Sonja A Kotz
    Abstract:

    The role of the corpus callosum (CC) in the interhemispheric interaction of prosodic and syntactic information during Speech Comprehension was investigated in patients with lesions in the CC and in healthy controls. The event-related brain potential experiment examined the effect of prosodic phrase structure on the processing of a verb whose argument structure matched or did not match the prior prosody-induced syntactic structure. While controls showed an N400-like effect for prosodically mismatching verb argument structures, indicating a stable interplay between prosody and syntax, patients with lesions in the posterior third of the CC did not show this effect. Because these patients displayed a prosody-independent semantic N400 effect, the present data indicate that the posterior third of the CC is the crucial neuroanatomical structure for the interhemispheric interplay of suprasegmental prosodic information and syntactic information.

Linda Drijvers - One of the best experts on this subject based on the ideXlab platform.

  • Degree of Language Experience Modulates Visual Attention to Visible Speech and Iconic Gestures During Clear and Degraded Speech Comprehension
    Cognitive Science, 2019
    Co-Authors: Linda Drijvers, Asli Ozyurek, Julija Vaitonyte
    Abstract:

    Visual information conveyed by iconic hand gestures and visible Speech can enhance Speech Comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during Speech Comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible Speech and gestures during clear and degraded Speech Comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest for all listeners when Speech was degraded, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during Comprehension, but non-native listeners gazed at gestures more often than native listeners did. However, only native, but not non-native, listeners' gaze allocation to gestures predicted gestural benefit during degraded Speech Comprehension. We conclude that non-native listeners might gaze at gestures more because it is more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible Speech. This diminished phonological knowledge might hinder non-native listeners' use of the semantic information conveyed by gestures, compared with native listeners. Our results demonstrate that the degree of language experience affects overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.

  • Non-native Listeners Benefit Less from Gestures and Visible Speech than Native Listeners During Degraded Speech Comprehension
    Language and Speech, 2019
    Co-Authors: Linda Drijvers, Asli Ozyurek
    Abstract:

    Native listeners benefit from both visible Speech and iconic gestures to enhance degraded Speech Comprehension (Drijvers & Ozyurek, 2017). We tested how highly proficient non-native listeners benefit from these visual articulators compared to native listeners. We presented videos of an actress uttering a verb in clear, moderately degraded, or severely degraded Speech, while her lips were blurred, visible, or visible and accompanied by a gesture. Our results revealed that, unlike native listeners, non-native listeners were less likely to benefit from the combined enhancement of visible Speech and gestures, especially because the benefit from visible Speech alone was minimal when signal quality was insufficient.

  • Hearing and Seeing Meaning in Noise: Alpha, Beta, and Gamma Oscillations Predict Gestural Enhancement of Degraded Speech Comprehension
    Human Brain Mapping, 2018
    Co-Authors: Linda Drijvers, Asli Ozyurek, Ole Jensen
    Abstract:

    During face-to-face communication, listeners integrate Speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid Speech Comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded Speech Comprehension. Participants watched videos of an actress uttering clear or degraded Speech, accompanied by a gesture or not, and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded Speech, alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), the medial temporal lobe, and occipital regions. These low- and high-frequency oscillatory modulations support general unification, integration, and lexical access processes during online language Comprehension, as well as simulation of, and increased visual attention to, manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded Speech Comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded Speech Comprehension and provide the first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level.

  • Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension
    Journal of Speech, Language, and Hearing Research, 2017
    Co-Authors: Linda Drijvers, Asli Ozyurek
    Abstract:

    Purpose This study investigated whether and to what extent iconic co-Speech gestures contribute to information from visible Speech to enhance degraded Speech Comprehension at different levels of no...

Jonas Obleser - One of the best experts on this subject based on the ideXlab platform.

  • Transcranial Alternating Current Stimulation with Speech Envelopes Modulates Speech Comprehension
    NeuroImage, 2018
    Co-Authors: Anna Wilsch, Jonas Obleser, Toralf Neuling, Christoph Herrmann
    Abstract:

    Cortical entrainment of the auditory cortex to the broadband temporal envelope of a Speech signal is crucial for Speech Comprehension. Entrainment results in phases of high and low neural excitability, which structure and decode the incoming Speech signal. Entrainment to Speech is strongest in the theta frequency range (4–8 Hz), the average frequency of the Speech envelope. If a Speech signal is degraded, entrainment to the Speech envelope is weaker and Speech intelligibility declines. Besides perceptually evoked cortical entrainment, transcranial alternating current stimulation (tACS) entrains neural oscillations by applying an electric signal to the brain. Accordingly, tACS-induced entrainment in auditory cortex has been shown to improve auditory perception. The aim of the current study was to modulate Speech intelligibility externally by means of tACS such that the electric current corresponds to the envelope of the presented Speech stream (i.e., envelope-tACS). Participants performed the Oldenburg sentence test, with sentences presented in noise, in combination with envelope-tACS. Critically, tACS was induced at time lags of 0–250 ms in 50-ms steps relative to sentence onset (auditory stimuli were simultaneous with or preceded tACS). We performed single-subject sinusoidal, linear, and quadratic fits to sentence-Comprehension performance across the time lags, and found that the sinusoidal fit described the modulation of sentence Comprehension best. Importantly, the average frequency of the sinusoidal fit was 5.12 Hz, corresponding to the peaks of the amplitude spectrum of the stimulated envelopes. This finding was supported by a significant 5-Hz peak in the average power spectrum of individual performance time series. Altogether, envelope-tACS modulates intelligibility of Speech in noise, presumably by enhancing and disrupting (at time lags with in- or out-of-phase stimulation, respectively) cortical entrainment to the Speech envelope in auditory cortex.
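
Two computational ingredients recur here: the broadband speech envelope (the waveform delivered as tACS) and the sinusoidal fit of per-lag comprehension scores. A hedged sketch of both follows; the 10 Hz smoothing cutoff and all performance numbers are invented for illustration.

```python
# Sketch: (1) broadband speech envelope, (2) sinusoidal fit of comprehension
# across the 0-250 ms time lags. All specifics are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from scipy.optimize import curve_fit

def speech_envelope(speech, fs, cutoff=10.0):
    env = np.abs(hilbert(speech))                          # instantaneous amplitude
    sos = butter(4, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, env)                           # slow broadband envelope

def sinusoid(lag, amp, freq, phase, offset):
    return amp * np.sin(2 * np.pi * freq * lag + phase) + offset

lags = np.arange(0.0, 0.30, 0.05)                          # 0-250 ms in 50-ms steps
accuracy = np.array([0.62, 0.71, 0.66, 0.55, 0.58, 0.69])  # hypothetical per-lag scores
params, _ = curve_fit(sinusoid, lags, accuracy, p0=[0.05, 5.0, 0.0, 0.6], maxfev=10000)
print(f"best-fitting frequency: {params[1]:.2f} Hz")       # study reports ~5.12 Hz on average
```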

  • Tracking the signal, cracking the code: Speech and Speech Comprehension in non-invasive human electrophysiology
    Language, Cognition and Neuroscience, 2016
    Co-Authors: Malte Wöstmann, Lorenz Fiedler, Jonas Obleser
    Abstract:

    Magneto- and electroencephalographic (M/EEG) signals recorded from the human scalp have allowed substantial advances in neural models of Speech Comprehension over the past decades. These methods are currently advancing rapidly and continue to offer unparalleled insight into the near-real-time neural dynamics of Speech processing. We provide a historically informed overview of dependent measures in the time and frequency domains and highlight recent advances resulting from these measures. We discuss the notorious challenges (and solutions) Speech and language researchers face when studying auditory brain responses in M/EEG. We argue that a key to understanding the neural basis of Speech Comprehension will lie in studying interactions between the neural tracking of Speech and functional neural network dynamics. This article is intended both for non-experts who want to learn how to use M/EEG to study Speech Comprehension and for scholars aiming for an overview of state-of-the-art M/...
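
The "neural tracking of Speech" mentioned above is often operationalized as a lag-resolved relationship between the recorded signal and the speech envelope. Published studies typically use encoding/decoding models; the plain cross-correlation below is only a toy illustration with assumed inputs.

```python
# Toy "neural tracking" sketch: lag-resolved Pearson correlation between one
# EEG channel and the speech envelope, both sampled at `fs` and equal length.
import numpy as np

def tracking_curve(eeg, envelope, fs, max_lag_ms=300):
    """Return (lags_ms, r): correlation at each stimulus-to-brain lag."""
    n = min(len(eeg), len(envelope))
    lags = np.arange(int(max_lag_ms / 1000 * fs))
    r = np.empty(len(lags))
    for i, lag in enumerate(lags):            # EEG trails the stimulus by `lag`
        x, y = envelope[: n - lag], eeg[lag:n]
        r[i] = np.corrcoef(x, y)[0, 1]
    return lags * 1000.0 / fs, r
```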

  • Repetitive Transcranial Magnetic Stimulation over Left Angular Gyrus Modulates the Predictability Gain in Degraded Speech Comprehension
    Cortex, 2015
    Co-Authors: Thomas Golombek, Gesa Hartwigsen, Jonas Obleser
    Abstract:

    Increased neural activity in left angular gyrus (AG) accompanies successful Comprehension of acoustically degraded but highly predictable sentences, as previous functional imaging studies have shown. However, it remains unclear whether the left AG is causally relevant for the Comprehension of degraded Speech. Here, we applied transient virtual lesions to either the left AG or superior parietal lobe (SPL, as a control area) with repetitive transcranial magnetic stimulation (rTMS) while healthy volunteers listened to and repeated sentences with high- versus low-predictable endings and different noise vocoding levels. We expected that rTMS of AG should selectively modulate the predictability gain (i.e., the Comprehension benefit from sentences with high-predictable endings) at a medium degradation level. We found that rTMS of AG indeed reduced the predictability gain at a medium degradation level of 4-band noise vocoding (relative to control rTMS of SPL). In contrast, the behavioral perturbation induced by rTMS changed with increased signal quality. Hence, at 8-band noise vocoding, rTMS over AG versus SPL decreased the number of correctly repeated keywords for sentences with low-predictable endings. Together, these results show that the degree of the rTMS interference depended jointly on signal quality and predictability. Our results provide the first causal evidence that the left AG is a critical node for facilitating Speech Comprehension in challenging listening conditions.
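
The "predictability gain" itself is a simple difference score, as in the sketch below with invented accuracies (the study's actual values are not reproduced here).

```python
# Predictability gain = comprehension benefit for high- over low-predictable
# sentence endings, per rTMS site. All numbers are invented for illustration.
acc = {  # proportion of correctly repeated keywords at 4-band vocoding
    ("SPL", "high"): 0.74, ("SPL", "low"): 0.55,  # control site
    ("AG",  "high"): 0.66, ("AG",  "low"): 0.54,  # left angular gyrus rTMS
}
gain = {site: acc[site, "high"] - acc[site, "low"] for site in ("SPL", "AG")}
print(gain)  # a smaller gain after AG rTMS would mirror the reported effect
```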

  • Upregulation of Cognitive Control Networks in Older Adults' Speech Comprehension
    Frontiers in Systems Neuroscience, 2013
    Co-Authors: Jonas Obleser
    Abstract:

    Speech Comprehension abilities decline with age and with age-related hearing loss, but it is unclear how this decline expresses itself in terms of central neural mechanisms. The current study examined neural Speech processing in a group of older adults (aged 56–77, n = 16, with varying degrees of sensorineural hearing loss) and compared them to a cohort of young adults (aged 22–31, n = 30, self-reported normal hearing). In a functional MRI experiment, listeners heard and repeated back degraded sentences (4-band vocoded, where the temporal envelope of the acoustic signal is preserved while the spectral information is substantially degraded). Behaviorally, older adults adapted to degraded Speech at the same rate as young listeners, although their overall Comprehension of degraded Speech was lower. Neurally, both older and young adults relied on the left anterior insula more for degraded than for clear Speech perception. However, anterior insula engagement in older adults was dependent on hearing acuity. Young adults additionally employed the anterior cingulate cortex (ACC). Interestingly, this age group × degradation interaction was driven by a reduced dynamic range in older adults, who displayed elevated levels of ACC activity for both degraded and clear Speech, consistent with a persistent upregulation of cognitive control irrespective of task difficulty. For correct Speech Comprehension, older adults relied on the middle frontal gyrus in addition to the core Speech Comprehension network recruited by younger adults, suggestive of a compensatory mechanism. Taken together, the results indicate that older adults increasingly recruit cognitive control networks, even under optimal listening conditions, at the expense of these systems' dynamic range.

  • Speech Comprehension Aided by Multiple Modalities: Behavioural and Neural Interactions
    Neuropsychologia, 2012
    Co-Authors: Carolyn McGettigan, Andrew Faulkner, Irene Altarelli, Jonas Obleser, Harriet Baverstock, Sophie K Scott
    Abstract:

    Speech Comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources (e.g., voice, face, gesture, linguistic context) to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual, and linguistic information interact to facilitate Comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural Speech Comprehension scores to address sites of intelligibility-related activation in multifactorial Speech Comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring), and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual-differences analysis showed that greater Comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal Speech Comprehension paradigm demonstrates recruitment of a wide Comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual, and linguistic information, while left-dominant sites in temporal and frontal cortex support successful Comprehension.

Elzbieta Szelag - One of the best experts on this subject based on the ideXlab platform.

  • The Treatment Based on Temporal Information Processing Reduces Speech Comprehension Deficits in Aphasic Subjects
    Frontiers in Aging Neuroscience, 2017
    Co-Authors: Aneta Szymaszek, Tomasz Wolak, Elzbieta Szelag
    Abstract:

    Experimental studies have reported a close association between temporal information processing (TIP) and language Comprehension. Brain-injured subjects with aphasia show disturbed TIP, evidenced by an elevated temporal order threshold (TOT) compared with control subjects. The present study aimed at improving auditory Speech Comprehension in aphasic subjects using a specific temporal treatment. Fourteen patients with deficits in both Speech Comprehension and TIP were tested. The Token Test, phoneme discrimination tests (PDT), and the Voice-Onset-Time (VOT) Test were employed to assess Speech Comprehension. The TOT was measured using two 10-ms tones (400 Hz, 3000 Hz) presented binaurally. The patients participated in eight 45-min sessions of either the specific temporal treatment (n = 7), aimed at improving the perception of sequencing abilities, or a non-temporal control treatment (n = 7) based on volume discrimination. The temporal treatment yielded an improvement in TIP. Moreover, a transfer of improvement from the time domain to the language domain was observed. The control treatment improved neither TIP nor Speech Comprehension in any of the applied tests.
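
The TOT measurement described above rests on a simple stimulus: two brief tone pips whose onset order the listener must judge. Below is a Python sketch of generating one such trial; the cosine ramps and mono mixing are assumptions (the paper presents the tones binaurally), not the clinical procedure itself.

```python
# Sketch of a temporal-order-threshold (TOT) stimulus: two 10-ms tone pips
# (400 Hz and 3000 Hz) separated by a variable stimulus-onset asynchrony.
# Ramp duration and mono mixing are illustrative assumptions.
import numpy as np

def tone_pip(freq, fs=44100, dur=0.010, ramp=0.002):
    t = np.arange(int(dur * fs)) / fs
    pip = np.sin(2 * np.pi * freq * t)
    n = int(ramp * fs)                        # cosine on/off ramps avoid clicks
    w = np.ones_like(pip)
    r = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))
    w[:n], w[-n:] = r, r[::-1]
    return pip * w

def tot_trial(soa_ms, first="low", fs=44100):
    """One pip, then the other `soa_ms` later; the listener judges the order."""
    low, high = tone_pip(400, fs), tone_pip(3000, fs)
    a, b = (low, high) if first == "low" else (high, low)
    gap = int(soa_ms / 1000 * fs)
    out = np.zeros(gap + len(b))
    out[: len(a)] += a
    out[gap : gap + len(b)] += b
    return out

# Shrinking the SOA approximates a threshold search: the TOT is the smallest
# asynchrony at which order judgments remain reliably correct.
stimulus = tot_trial(soa_ms=60, first="high")
```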