Speech Perception

The Experts below are selected from a list of 318 Experts worldwide ranked by ideXlab platform

Christian Lorenzi - One of the best experts on this subject based on the ideXlab platform.

  • Speech Perception in noise deficits in dyslexia
    Developmental Science, 2009
    Co-Authors: Johannes C Ziegler, Catherine Pechgeorgel, Florence George, Christian Lorenzi
    Abstract:

    Speech Perception deficits in developmental dyslexia were investigated in quiet and various noise conditions. Dyslexics exhibited clear Speech Perception deficits in noise but not in silence. Place-of-articulation was more affected than voicing or manner-of-articulation. Speech-Perception-in-noise deficits persisted when performance of dyslexics was compared to that of much younger children matched on reading age, underscoring the fundamental nature of Speech-Perception-in-noise deficits. The deficits were not due to poor spectral or temporal resolution because dyslexics exhibited normal 'masking release' effects (i.e. better performance in fluctuating than in stationary noise). Moreover, Speech-Perception-in-noise predicted significant unique variance in reading even after controlling for low-level auditory, attentional, Speech output, short-term memory and phonological awareness processes. Finally, the presence of external noise did not seem to be a necessary condition for Speech Perception deficits to occur because similar deficits were obtained when Speech was degraded by eliminating temporal fine-structure cues without using external noise. In conclusion, the core deficit of dyslexics seems to be a lack of Speech robustness in the presence of external or internal noise.
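The 'masking release' comparison above (fluctuating vs stationary noise) hinges on the masker's envelope: an amplitude-modulated masker has periodic low-energy dips through which listeners can glimpse the Speech. A minimal Python sketch of the two masker types, assuming an illustrative 16 kHz sampling rate and 8 Hz modulation rate (not the study's actual stimulus parameters):

```python
import numpy as np

def make_masker(duration_s, fs=16000, mod_rate_hz=None, seed=0):
    """Gaussian noise masker; if mod_rate_hz is given, impose a raised-cosine
    amplitude modulation so the masker has periodic low-energy 'dips'."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    noise = rng.standard_normal(n)
    if mod_rate_hz is not None:
        t = np.arange(n) / fs
        modulator = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_rate_hz * t))
        noise = noise * modulator  # envelope sweeps between 0 and 1
    return noise

stationary = make_masker(1.0)                    # flat envelope
fluctuating = make_masker(1.0, mod_rate_hz=8.0)  # 8 dips per second
```

Mixing Speech with each masker at the same signal-to-noise ratio and comparing intelligibility yields the masking-release measure: normal listeners score better in the fluctuating masker, and the dyslexic groups in these studies showed the same benefit.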

Glenn E Schellenberg - One of the best experts on this subject based on the ideXlab platform.

  • Music training, music aptitude, and Speech Perception
    Proceedings of the National Academy of Sciences of the United States of America, 2019
    Co-Authors: Glenn E Schellenberg
    Abstract:

    In a paper published recently in PNAS, Mankel and Bidelman (1) challenge environmental accounts of associations between music training and Speech Perception. Such accounts claim that music training causes improvements in the neural encoding of Speech and in performance on related behavioral tasks (e.g., Speech-in-noise test) (2). On the one hand, Mankel and Bidelman (1) present a refreshing counterpoint to views that mistakenly consider music training to be an ideal model for the study of plasticity (3). On the other hand, they make questionable claims about the impact of music training on Speech Perception, overinterpreting results from the available literature and the data presented in their article.

Johannes C Ziegler - One of the best experts on this subject based on the ideXlab platform.

  • Speech Perception in noise deficits in dyslexia
    Developmental Science, 2009
    Co-Authors: Johannes C Ziegler, Catherine Pechgeorgel, Florence George, Christian Lorenzi
    Abstract:

    Speech Perception deficits in developmental dyslexia were investigated in quiet and various noise conditions. Dyslexics exhibited clear Speech Perception deficits in noise but not in silence. Place-of-articulation was more affected than voicing or manner-of-articulation. Speech-Perception-in-noise deficits persisted when performance of dyslexics was compared to that of much younger children matched on reading age, underscoring the fundamental nature of Speech-Perception-in-noise deficits. The deficits were not due to poor spectral or temporal resolution because dyslexics exhibited normal 'masking release' effects (i.e. better performance in fluctuating than in stationary noise). Moreover, Speech-Perception-in-noise predicted significant unique variance in reading even after controlling for low-level auditory, attentional, Speech output, short-term memory and phonological awareness processes. Finally, the presence of external noise did not seem to be a necessary condition for Speech Perception deficits to occur because similar deficits were obtained when Speech was degraded by eliminating temporal fine-structure cues without using external noise. In conclusion, the core deficit of dyslexics seems to be a lack of Speech robustness in the presence of external or internal noise.

David B Pisoni - One of the best experts on this subject based on the ideXlab platform.

  • Speech Perception and production.
    Wiley Interdisciplinary Reviews: Cognitive Science, 2010
    Co-Authors: Elizabeth D. Casserly, David B Pisoni
    Abstract:

    Until recently, research in Speech Perception and Speech production has largely focused on the search for psychological and phonetic evidence of discrete, abstract, context-free symbolic units corresponding to phonological segments or phonemes. Despite this common conceptual goal and intimately related objects of study, however, research in these two domains of Speech communication has progressed more or less independently for more than 60 years. In this article, we present an overview of the foundational works and current trends in the two fields, specifically discussing the progress made in both lines of inquiry as well as the basic fundamental issues that neither has been able to resolve satisfactorily so far. We then discuss theoretical models and recent experimental evidence that point to the deep, pervasive connections between Speech Perception and production. We conclude that although research focusing on each domain individually has been vital in increasing our basic understanding of spoken language processing, the human capacity for Speech communication is so complex that gaining a full understanding will not be possible until Speech Perception and production are conceptually reunited in a joint approach to problems shared by both modes. Copyright © 2010 John Wiley & Sons, Ltd. For further resources related to this article, please visit the WIREs website

  • Speech Perception and Spoken Word Recognition: Research and Theory
    Blackwell Handbook of Sensation and Perception, 2008
    Co-Authors: Miranda Cleary, David B Pisoni
    Abstract:

    The study of Speech Perception investigates how we are able to identify in the human voice the meaningful patterns that define spoken language. Research in this area has tended to focus on how humans perceive minimal linguistic contrasts known as "phonemes"--that which distinguishes 'pat' from 'bat' for example. Although Speech Perception has traditionally been the study of phoneme Perception, an account is also needed for how we identify spoken words and comprehend sentences in connected, fluent Speech. Defining Speech Perception very narrowly in terms of phoneme Perception or nonsense syllable identification was a useful and reasonable simplification in the early days of the field, but the drawbacks of conceptualizing the problem purely in these terms have become increasingly apparent in recent years. Subsections of the chapter address the following topics: (1) the acoustic Speech signal, (2) findings from the traditional domain of inquiry--phoneme Perception, (3) from phoneme Perception to perceiving fluent Speech, (4) general theoretical approaches to the study of Speech Perception, and (5) bridging the gap between Speech Perception and spoken word recognition. (PsycINFO Database Record (c) 2003 APA, all rights reserved).

  • Auditory-visual Speech Perception and synchrony detection for Speech and nonSpeech signals
    Journal of the Acoustical Society of America, 2006
    Co-Authors: Brianna Conrey, David B Pisoni
    Abstract:

    Previous research has identified a “synchrony window” of several hundred milliseconds over which auditory-visual (AV) asynchronies are not reliably perceived. Individual variability in the size of this AV synchrony window has been linked with variability in AV Speech Perception measures, but it was not clear whether AV Speech Perception measures are related to synchrony detection for Speech only or for both Speech and nonSpeech signals. An experiment was conducted to investigate the relationship between measures of AV Speech Perception and AV synchrony detection for Speech and nonSpeech signals. Variability in AV synchrony detection for both Speech and nonSpeech signals was found to be related to variability in measures of auditory-only (A-only) and AV Speech Perception, suggesting that temporal processing for both Speech and nonSpeech signals must be taken into account in explaining variability in A-only and multisensory Speech Perception.

  • The Handbook of Speech Perception
    2005
    Co-Authors: David B Pisoni, Robert E. Remez
    Abstract:

    List of Contributors. Preface: Michael Studdert-Kennedy (Haskins Laboratories). Introduction: David B. Pisoni (Indiana University) and Robert E. Remez (Barnard College). Part I: Sensing Speech. 1. Acoustic Analysis and Synthesis of Speech: James R. Sawusch (University at Buffalo). 2. Perceptual Organization of Speech: Robert E. Remez (Barnard College). 3. Primacy of Multimodal Speech Perception: Lawrence D. Rosenblum (University of California, Riverside). 4. Phonetic Processing by the Speech Perceiving Brain: Lynne E. Bernstein (House Ear Institute). 5. Event-related Evoked Potentials (ERPs) in Speech Perception: Dennis Molfese, Alexandra P. Fonaryova Key, Mandy J. Maguire, Guy O. Dove and Victoria J. Molfese (all University of Louisville). Part II: Perception of Linguistic Properties. 6. Features in Speech Perception and Lexical Access: Kenneth N. Stevens (Massachusetts Institute of Technology). 7. Speech Perception and Phonological Contrast: Edward Flemming (Stanford University). 8. Acoustic Cues to the Perception of Segmental Phonemes: Lawrence J. Raphael (Adelphi University). 9. Clear Speech: Rosalie M. Uchanski (CID at Washington University School of Medicine). 10. Perception of Intonation: Jacqueline Vaissiere (Laboratoire de Phonetique et de Phonologique, Paris). 11. Lexical Stress: Anne C. Cutler (Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands). 12. Slips of the Ear: Z. S. Bond (Ohio University). Part III: Perception of Indexical Properties. 13. Perception of Dialect Variation: Cynthia Clopper and David B. Pisoni (both Indiana University). 14. Perception of Voice Quality: Jody Kreiman (UCLA), Diana Vanlancker-Sidtis (New York University) and Bruce R. Gerratt (UCLA). 15. Speaker Normalization in Speech Perception: Keith A. Johnson (Ohio State University). 16. Perceptual Integration of Linguistic and Non-Linguistic Properties of Speech: Lynne C. Nygaard (Emory University). Part IV: Speech Perception by Special Listeners. 17. 
Speech Perception in Infants: Derek M. Houston (Indiana University School of Medicine). 18. Speech Perception in Childhood: Amanda C. Walley (University of Alabama, Birmingham). 19. Age-related Changes in Spoken Word Recognition: Mitchell S. Sommers (Washington University). 20. Speech Perception in Deaf Children with Cochlear Implants: David B. Pisoni (Indiana University). 21. Speech Perception following Focal Brain Injury: William Badecker (Johns Hopkins University). 22. Cross-Language Speech Perception: Nuria Sebastian-Galles (Parc Cientific de Barcelona - Hospital de San Joan de Deu). 23. Speech Perception in Specific Language Impairment: Susan Ellis Weismer (University of Wisconsin, Madison). Part V: Recognition of Spoken Words. 24. Spoken Word Recognition: The Challenge of Variation: Paul A. Luce and Conor T. McLennan (State University of New York, Buffalo). 25. Probabilistic Phonotactics in Spoken Word Recognition: Edward T. Auer, Jr. (House Ear Institute) and Paul A. Luce (State University of New York, Buffalo). Part VI: Theoretical Perspectives. 26. The Relation of Speech Perception and Speech Production: Carol A. Fowler and Bruno Galantucci (both Haskins Laboratories). 27. A Neuroethological Perspective on the Perception of Vocal Communication Signals: Timothy Gentner (University of Chicago) and Gregory F. Ball (Johns Hopkins University). Index

D. Poeppel - One of the best experts on this subject based on the ideXlab platform.

  • Speech Perception
    Brain Mapping, 2020
    Co-Authors: D. Poeppel
    Abstract:

    Speech Perception refers to the suite of (neural, computational, cognitive) operations that transform auditory input signals into representations that can make contact with internally stored information: the words in a listener’s mental lexicon. Speech Perception is typically studied using single Speech sounds (e.g., vowels or syllables), spoken words, or connected Speech. Based on neuroimaging, lesion, and electrophysiological data, dual stream neurocognitive models of Speech Perception have been proposed that identify ventral stream (mapping from sound to meaning) and dorsal stream functions (mapping from sound to articulation). Major outstanding research questions include cerebral lateralization, the role of neuronal oscillations, and the contribution of top-down, abstract knowledge in Perception.

  • Speech Perception
    The Oxford Handbook of Neurolinguistics, 2019
    Co-Authors: D. Poeppel, Gregory B. Cogan, Ido Davidesco, Adeen Flinker
    Abstract:

    Speech Perception can be thought of as the set of operations that take as input the continuously varying acoustic waveforms available at the auditory periphery (vibrations in the ear) and that generate as output those representations (abstractions in the head) that constitute the basis for the subsequent operations that mediate language comprehension (which can, of course, be fed by audition, vision, or touch). The neural basis of Speech Perception proper has been studied experimentally by every available neural recording and stimulation technique. The interpretation of the findings and the development of a comprehensive mechanistic theory are complicated by the fact that very different protocols are used: studies range from the identification and categorization of single vowels and syllables to decisions on single spoken words to intelligibility judgments on connected Speech. Within this broader context, three topics have received considerable attention: the hemispheric lateralization of Speech Perception, the role of the motor system, and the potential contribution of neural oscillations to perceptual analysis. This chapter discusses each of these areas in turn.

  • Speech Perception at the interface of neurobiology and linguistics
    Philosophical Transactions of the Royal Society B, 2008
    Co-Authors: D. Poeppel, William J Idsardi, Virginie Van Wassenhove
    Abstract:

    Speech Perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by the Speech Perception system enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the Speech perceptual processes. Consequently, theories of Speech Perception must, at some level, be tightly linked to theories of lexical representation. Minimally, Speech Perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, Speech Perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20–80 ms, approx. 150–300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that Perception proceeds on the basis of internal forward models, or uses an ‘analysis-by-synthesis’ approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in Speech.
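The abstract's implementational claim, that perceptual analysis proceeds concurrently on two time scales, can be pictured as computing features of the same waveform with two window lengths. A hypothetical Python sketch (window and hop sizes are chosen from the ~20–80 ms and ~150–300 ms ranges quoted above; the function name and the sine test signal are illustrative, not from the paper):

```python
import numpy as np

def windowed_rms(signal, fs, win_ms, hop_ms):
    """RMS energy of the signal in sliding windows win_ms long, hop_ms apart."""
    win = int(fs * win_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    starts = range(0, len(signal) - win + 1, hop)
    return np.array([np.sqrt(np.mean(signal[s:s + win] ** 2)) for s in starts])

fs = 16000
t = np.arange(fs) / fs                     # 1 s of signal
sig = np.sin(2.0 * np.pi * 200.0 * t)      # stand-in for a Speech waveform

fast = windowed_rms(sig, fs, win_ms=25, hop_ms=10)    # (sub)segmental scale
slow = windowed_rms(sig, fs, win_ms=200, hop_ms=50)   # syllabic scale
```

The 'fast' track resolves segment-sized events that the 'slow' track smears together, which is the sense in which the two concurrent analyses are commensurate with (sub)segmental and syllabic units respectively.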

  • Towards a functional neuroanatomy of Speech Perception
    Trends in Cognitive Sciences, 2000
    Co-Authors: Gregory Hickok, D. Poeppel
    Abstract:

    The functional neuroanatomy of Speech Perception has been difficult to characterize. Part of the difficulty, we suggest, stems from the fact that the neural systems supporting 'Speech Perception' vary as a function of the task. Specifically, the set of cognitive and neural systems involved in performing traditional laboratory Speech Perception tasks, such as syllable discrimination or identification, only partially overlap those involved in Speech Perception as it occurs during natural language comprehension. In this review, we argue that cortical fields in the posterior–superior temporal lobe, bilaterally, constitute the primary substrate for constructing sound-based representations of Speech, and that these sound-based representations interface with different supramodal systems in a task-dependent manner. Tasks that require access to the mental lexicon (i.e. accessing meaning-based representations) rely on auditory-to-meaning interface systems in the cortex in the vicinity of the left temporal–parietal–occipital junction. Tasks that require explicit access to Speech segments rely on auditory–motor interface systems in the left frontal and parietal lobes. This auditory–motor interface system also appears to be recruited in phonological working memory.