Nonsense Syllable

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 3432 Experts worldwide ranked by the ideXlab platform

Juha Tapio Silvola - One of the best experts on this subject based on the ideXlab platform.

  • consonant and vowel confusions in well performing children and adolescents with cochlear implants measured by a Nonsense Syllable repetition test
    Frontiers in Psychology, 2019
    Co-Authors: Arne Kirkhorn Rodvik, Ole Tvete, Janne Von Koss Torkildsen, Ona Bo Wie, Ingebjorg Skaug, Juha Tapio Silvola
    Abstract:

    Although the majority of early implanted, profoundly deaf children with cochlear implants (CIs) will develop correct pronunciation if they receive adequate oral language stimulation, many of them have difficulties with perceiving minute details of speech. The main aim of this study is to measure the confusion of consonants and vowels in well-performing children and adolescents with CIs. The study also aims to investigate how age at onset of severe to profound deafness influences perception. The participants are 36 children and adolescents with CIs (18 girls), with a mean (SD) age of 11.6 (3.0) years (range: 5.9–16.0 years). Twenty-nine of them are prelingually deaf and seven are postlingually deaf. Two reference groups of normal-hearing (NH) 6- and 13-year-olds are included. Consonant and vowel perception is measured by repetition of 16 bisyllabic vowel-consonant-vowel Nonsense words and 9 monosyllabic consonant-vowel-consonant Nonsense words in an open-set design. For the participants with CIs, consonants were mostly confused with consonants of the same voicing and manner, and the mean (SD) voiced consonant repetition score, 63.9 (10.6)%, was considerably lower than the mean (SD) unvoiced consonant score, 76.9 (9.3)%. There was a devoicing bias for the stops: unvoiced stops were confused with other unvoiced stops and not with voiced stops, whereas voiced stops were confused with both unvoiced stops and other voiced stops. The mean (SD) vowel repetition score was 85.2 (10.6)%, and there was a bias in the confusions of [iː] and [yː]; [yː] was perceived as [iː] twice as often as [yː] was repeated correctly. Subgroup analyses showed no statistically significant differences between the consonant scores for pre- and postlingually deaf participants. For the NH participants, the consonant repetition scores were substantially higher, and the difference between voiced and unvoiced consonant repetition scores considerably lower, than for the participants with CIs.
The participants with CIs obtained scores close to ceiling on vowels and real-word monosyllables, but their perception was substantially lower for voiced consonants. This may partly be related to limitations in CI technology for the transmission of low-frequency sounds, such as the insertion depth of the electrode and the ability to convey temporal information.
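
The voiced/unvoiced scoring contrast reported above can be sketched in a few lines. This is only an illustrative sketch: the trial pairs and the voiced-consonant set below are invented stand-ins, not the study's stimuli or results.

```python
# Sketch of voiced vs. unvoiced repetition scoring for an open-set
# nonsense-syllable repetition test. All data here are invented.

VOICED = {"b", "d", "g", "v", "z"}  # illustrative voiced-consonant set

# Each trial: (target consonant, consonant the listener actually repeated).
trials = [
    ("b", "p"), ("b", "b"), ("d", "t"), ("d", "d"), ("g", "k"),
    ("p", "p"), ("p", "t"), ("t", "t"),
]

def repetition_score(pairs):
    """Fraction of trials in which the repetition matches the target."""
    return sum(t == r for t, r in pairs) / len(pairs) if pairs else 0.0

voiced = [(t, r) for t, r in trials if t in VOICED]
unvoiced = [(t, r) for t, r in trials if t not in VOICED]

voiced_score = repetition_score(voiced)      # 2 of 5 targets repeated correctly
unvoiced_score = repetition_score(unvoiced)  # 2 of 3 targets repeated correctly
```

With this toy data the voiced score comes out lower than the unvoiced score, mirroring the direction of the difference the study reports for CI users.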

Saeed Malayeri - One of the best experts on this subject based on the ideXlab platform.

  • Subcortical encoding of speech cues in children with congenital blindness.
    Restorative Neurology and Neuroscience, 2016
    Co-Authors: Zahra Jafari, Saeed Malayeri
    Abstract:

    Congenital visual deprivation induces neural plasticity in different brain areas, and provides an outstanding opportunity to study the neuroplastic capabilities of the brain. The present study aimed to investigate the effect of congenital blindness on subcortical auditory processing using electrophysiological and behavioral assessments in children. A total of 47 children aged 8-12 years, including 22 congenitally blind (CB) children and 25 normal-sighted (NS) controls, were studied. All children were tested using an auditory brainstem response (ABR) test with both click and speech stimuli. Speech recognition and musical abilities were tested using standard tools. Significant differences were observed between the two groups in speech ABR wave latencies A, F and O (p≤0.043), wave amplitude F (p = 0.039), V-A slope (p = 0.026), and three spectral magnitudes F0, F1 and HF (p≤0.002). CB children showed a superior performance compared to NS peers in all the subtests and the total score of musical abilities (p≤0.003). Moreover, they had significantly higher scores on the Nonsense Syllable test in noise than the NS children (p = 0.034). Significant negative correlations were found only in CB children between the total music score and both wave A (p = 0.039) and wave F (p = 0.029) latencies, as well as between the Nonsense-Syllable test in noise and wave A latency (p = 0.041). Our results suggest that neuroplasticity resulting from congenital blindness can be measured subcortically and heightens temporal, musical and speech processing abilities. The findings are discussed based on models of plasticity and the influence of corticofugal modulation in synthesizing complex auditory stimuli.

Shlomo Silman - One of the best experts on this subject based on the ideXlab platform.

  • Veterans Affairs
    2015
    Co-Authors: Shlomo Silman
    Abstract:

    Speech recognition performance on a modified Nonsense Syllable test.

  • Speech recognition performance on a modified Nonsense Syllable test
    Journal of rehabilitation research and development, 1992
    Co-Authors: Stanley A. Gelfand, Teresa Schwander, Harry Levitt, Mark Weiss, Shlomo Silman
    Abstract:

    A modification of the City University of New York Nonsense Syllable test (CUNY NST) has been developed in which (a) the several subtests of the original test are replaced with a 22-item consonant-vowel (CV) subtest and a 16-item vowel-consonant (VC) subtest; and (b) the response choices for each target Syllable include all 22 initial and all 16 final consonants, respectively. In addition, the test tokens are presented as isolated Syllables without a carrier phrase. These changes enable the resolution of confusions not possible on the original NST, as well as the construction of a single confusion matrix for CVs and another for VCs. The modified Nonsense Syllable test (MNST) provides results that compare favorably to those of the original NST.
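
The confusion-matrix construction that the MNST design enables can be sketched as follows. The consonant inventory and trial records below are illustrative placeholders, not the actual CUNY NST or MNST materials.

```python
# Sketch: building a consonant confusion matrix from (presented, responded)
# trial records, as an open-response nonsense-syllable test permits.
# The label set and trials are invented for illustration.

INITIAL_CONSONANTS = ["b", "p", "d", "t"]  # the real MNST CV subtest has 22

def confusion_matrix(trials, labels):
    """Return counts[i][j] = times labels[i] was presented and labels[j] responded."""
    index = {c: i for i, c in enumerate(labels)}
    counts = [[0] * len(labels) for _ in labels]
    for presented, responded in trials:
        counts[index[presented]][index[responded]] += 1
    return counts

trials = [("b", "b"), ("b", "p"), ("p", "p"), ("d", "t"), ("t", "t")]
m = confusion_matrix(trials, INITIAL_CONSONANTS)
# Row for "b" is [1, 1, 0, 0]: one correct repetition, one /b/ -> /p/ confusion.
```

Diagonal entries count correct repetitions; off-diagonal entries localize specific confusions, which is exactly the resolution a single full-response-set matrix provides over separate closed subtests.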

Zahra Jafari - One of the best experts on this subject based on the ideXlab platform.

  • Subcortical encoding of speech cues in children with congenital blindness.
    Restorative Neurology and Neuroscience, 2016
    Co-Authors: Zahra Jafari, Saeed Malayeri

David L Woods - One of the best experts on this subject based on the ideXlab platform.

  • perceptual training improves Syllable identification in new and experienced hearing aid users
    Journal of Rehabilitation Research and Development, 2006
    Co-Authors: Christopher G Stecker, William E. Yund, Glen A Bowman, Timothy J Herron, Christina M Roup, David L Woods
    Abstract:

    We assessed the effects of perceptual training of Syllable identification in noise on Nonsense Syllable test (NST) performance of new (Experiment 1) and experienced (Experiment 2) hearing aid (HA) users with sensorineural hearing loss. In Experiment 1, new HA users were randomly assigned to either immediate training (IT) or delayed training (DT) groups. IT subjects underwent 8 weeks of at-home Syllable identification training and in-laboratory testing, whereas DT subjects underwent identical in-laboratory testing without training. Training produced large improvements in Syllable identification in IT subjects, whereas spontaneous improvement was minimal in DT subjects. DT subjects then underwent training and showed performance improvements comparable with those of the IT group. Training-related improvement in NST scores significantly exceeded improvements due to amplification. In Experiment 2, experienced HA users underwent the same training and testing procedures as the users in Experiment 1. The experienced users also showed significant training benefit. Training-related improvements generalized to untrained voices and were maintained on retention tests. Perceptual training appears to be a promising tool for improving speech perception in new and experienced HA users. Key words: auditory, hearing aid, hearing loss, learning, masking noise, Nonsense Syllables, perception, perceptual training, personal computer, phonemes, presbycusis, rehabilitation, speech. INTRODUCTION Progressive high-frequency sensorineural hearing loss (SNHL) alters auditory processing at multiple levels of the auditory system. Most obviously, it alters cochlear function and deprives listeners of high-frequency speech cues that are critical in discriminating consonants [1].
In addition, gradual high-frequency SNHL results in a widespread reorganization of central auditory connections, diminishing high-frequency inputs and enhancing connections from nearby zones with intact cochlear function [2]. Phoneme processing strategies are correspondingly altered, with hearing-impaired subjects depending more on phonetic cues conveyed by low frequencies [3-6]. For example, hearing-impaired subjects rely disproportionately on vowel duration to discriminate voiced and voiceless fricative pairs such as /v/-/f/ and /z/-/s/ [4]. While hearing aids (HAs) can partially compensate for cochlear deficits by amplifying high-frequency sounds, long-standing peripheral hearing loss will also produce neuroplastic alterations within the central auditory system, including changes in synaptic connections and dendritic arborization [7]. While the newly amplified auditory inputs provided by HAs may enhance functional neuroplasticity [8], abnormal synaptic connections will not instantly renormalize. Normalization may be reflected in the gradual perceptual changes occurring during acclimatization [9-11]. However, acclimatization effects are generally small in magnitude and inconsistently obtained [12-15]. This minimal acclimatization benefit suggests that the neuroplastic changes needed to normalize auditory perception often may not occur. In the current study, we investigated the capability of adaptive perceptual training to force reorganization and consequent perceptual improvement in a high-level auditory task: Syllable identification. Cortical Plasticity and Perceptual Learning Research over the last decade has revealed extensive neuroplasticity in the auditory system that optimizes neuronal responses to the behaviorally relevant acoustic features present in the environment [16-18]. Changes in cortical organization occur reliably following both selective stimulation and deprivation. 
For example, explicit exposure to particular sound frequencies can enhance the number of cortical neurons driven by those stimuli and alter neuronal tuning properties and response latencies to favor behaviorally relevant sounds [19]. Environmental enrichment can also sharpen neuronal tuning curves in auditory cortex [20], while noisy environments impair the development of normal tuning [21]. …