Word Recognition



The Experts below are selected from a list of 81,516 Experts worldwide, ranked by the ideXlab platform.

Jonathan Grainger - One of the best experts on this subject based on the ideXlab platform.

  • Frequency-tagged visual evoked responses track syllable effects in visual Word Recognition.
    Cortex; a journal devoted to the study of the nervous system and behavior, 2019
    Co-Authors: Veronica Montani, Jonathan Grainger, Valérie Chanoine, Johannes C. Ziegler
    Abstract:

    The processing of syllables in visual Word Recognition was investigated using a novel paradigm based on steady-state visual evoked potentials (SSVEPs). French Words were presented to proficient readers in a delayed naming task. Words were split into two segments, the first of which was flickered at 18.75 Hz and the second at 25 Hz. The first segment either matched (congruent condition) or did not match (incongruent condition) the first syllable. The SSVEP responses in the congruent condition showed increased power compared to the responses in the incongruent condition, providing new evidence that syllables are important sublexical units in visual Word Recognition and reading aloud. With respect to the neural correlates of the effect, syllables elicited an early activation of a right hemisphere network. This network is typically associated with the programming of complex motor sequences, cognitive control and timing. Subsequently, responses were obtained in left hemisphere areas related to phonological processing.
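The frequency-tagging analysis rests on reading out spectral power at the two flicker rates. Below is a minimal sketch of that readout on a synthetic signal; the sampling rate, epoch length, amplitudes, and noise level are illustrative choices, not the study's recording parameters.

```python
import numpy as np

# Illustrative parameters, not the study's recording setup
fs = 500.0                      # sampling rate in Hz
t = np.arange(0, 1.6, 1 / fs)   # 1.6 s epoch -> 0.625 Hz FFT resolution
f1, f2 = 18.75, 25.0            # the two tagging frequencies from the paper

# Synthetic "EEG": responses at both tagged rates buried in noise
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * f1 * t)
          + 0.8 * np.sin(2 * np.pi * f2 * t)
          + 0.5 * rng.standard_normal(t.size))

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f):
    """Spectral power at the FFT bin closest to frequency f."""
    return power[np.argmin(np.abs(freqs - f))]

# Power at the tagged frequencies dominates neighboring bins
print(power_at(f1) > power_at(17.5), power_at(f2) > power_at(23.75))
```

With a 1.6 s epoch the FFT resolution is 0.625 Hz, so both 18.75 Hz and 25 Hz fall exactly on analysis bins, one practical consideration when choosing tagging frequencies in paradigms like this.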

  • An ERP investigation of visual Word Recognition in syllabary scripts
    Cognitive Affective & Behavioral Neuroscience, 2013
    Co-Authors: Jonathan Grainger, Kana Okano, Phillip J Holcomb
    Abstract:

    The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading Words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined Word Recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese Words in which the prime and target Words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical–semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 (within-script priming), in which the prime and target Words shared visual features. Overall, the results provide support for the hypothesis that visual Word Recognition involves a generalizable set of neurocognitive processes that operate in similar manners across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes.

  • Native language influences on Word Recognition in a second language: A megastudy
    Journal of Experimental Psychology: Learning Memory and Cognition, 2008
    Co-Authors: Kristin Lemhofer, Jonathan Grainger, Ton Dijkstra, Herbert Schriefers, Harald R Baayen, Pienie Zwitserlood
    Abstract:

    Many studies have reported that Word Recognition in a second language (L2) is affected by the native language (L1). However, little is known about the role of the specific language combination of the bilinguals. To investigate this issue, the authors administered a Word identification task (progressive demasking) on 1,025 monosyllabic English (L2) Words to native speakers of French, German, and Dutch. A regression approach was adopted, including a large number of within- and between-language variables as predictors. A substantial overlap of reaction time patterns was found across the groups of bilinguals, showing that Word Recognition results obtained for one group of bilinguals generalize to bilinguals with different mother tongues. Moreover, among the set of significant predictors, only one between-language variable was present (cognate status); all others reflected characteristics of the target language. Thus, although influences across languages exist, Word Recognition in L2 by proficient bilinguals is primarily determined by within-language factors, whereas cross-language effects appear to be limited. An additional comparison of the bilingual data with a native control group showed that there are subtle but significant differences between L1 and L2 processing.
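The megastudy's regression logic, predicting item-level reaction times from a mix of within- and between-language predictors, can be sketched with ordinary least squares. Everything below is synthetic and illustrative: the three predictors stand in for the paper's much larger predictor set, and the coefficients are invented for the simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical number of Word items

# Invented item-level predictors: two within-language variables plus
# cognate status, the one between-language variable the study retained
length = rng.integers(3, 8, n).astype(float)
log_freq = rng.normal(2.0, 0.5, n)
cognate = rng.integers(0, 2, n).astype(float)

# Simulated RTs: longer Words slower, frequent Words and cognates faster
rt = 700 + 15 * length - 40 * log_freq - 25 * cognate + rng.normal(0, 20, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), length, log_freq, cognate])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
print(beta)  # estimates should lie near the simulated 700, 15, -40, -25
```

In the megastudy itself, comparing such coefficient patterns across the French, German, and Dutch groups is what showed the overlap of reaction time patterns described above.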

  • Effects of phonological and orthographic neighbourhood density interact in visual Word Recognition
    Quarterly Journal of Experimental Psychology, 2005
    Co-Authors: Jonathan Grainger, Mathilde Muneaux, Fernand Farioli, Johannes C. Ziegler
    Abstract:

    The present study investigated the role of phonological and orthographic neighbourhood density in visual Word Recognition. Three mechanisms were identified that predict distinct facilitatory or inh...

  • Neighborhood effects in auditory Word Recognition: Phonological competition and orthographic facilitation
    Journal of Memory and Language, 2003
    Co-Authors: Johannes C. Ziegler, Mathilde Muneaux, Jonathan Grainger
    Abstract:

    The present study investigated phonological and orthographic neighborhood effects in auditory Word Recognition in French. In an auditory lexical decision task, phonological neighborhood (PN) produced the standard inhibitory effect (Words with many neighbors produced longer latencies and more errors than Words with few neighbors). In contrast, orthographic neighborhood (ON) produced a facilitatory effect. In Experiment 2, the facilitatory ON effect was replicated while controlling for phonotactic probability, a variable that has previously been shown to produce facilitatory effects. In Experiment 3, the results were replicated in a shadowing task, ruling out the possibility that the ON effect results from a strategic and task-specific mechanism that might operate in the lexical decision task. It is argued that the PN effect reflects lexical competition between similar sounding Words while the ON effect reflects the consistency of the sublexical mapping between phonology and orthography. The results join an accumulating number of studies suggesting that orthographic information influences auditory Word Recognition.
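Orthographic neighborhood is conventionally operationalized as Coltheart's N: the number of Words obtainable by changing exactly one letter while preserving length and letter positions. A minimal sketch over a toy English lexicon (the study itself used French materials):

```python
def one_substitution_apart(a, b):
    """True if a and b have equal length and differ in exactly one position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def orthographic_neighbors(word, lexicon):
    """Coltheart's-N-style neighbors of `word` within `lexicon`."""
    return {w for w in lexicon if one_substitution_apart(word, w)}

# Toy lexicon for illustration only
lexicon = {"cat", "bat", "hat", "cot", "car", "dog", "cart"}
print(orthographic_neighbors("cat", lexicon))       # bat, hat, cot, car
print(len(orthographic_neighbors("cat", lexicon)))  # N = 4
```

Phonological neighborhood is computed the same way over phonemic transcriptions rather than spellings, which is why the two measures can dissociate in the ways these studies exploit.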

Arthur M. Jacobs - One of the best experts on this subject based on the ideXlab platform.

  • Pseudohomophone effects provide evidence of early lexico-phonological processing in visual Word Recognition.
    Human brain mapping, 2009
    Co-Authors: Mario Braun, Florian Hutzler, Johannes C. Ziegler, Michael Dambacher, Arthur M. Jacobs
    Abstract:

    Previous research using event-related brain potentials (ERPs) suggested that phonological processing in visual Word Recognition occurs rather late, typically after semantic or syntactic processing. Here, we show that phonological activation in visual Word Recognition can be observed much earlier. Using a lexical decision task, we show that ERPs to pseudohomophones (PsHs) (e.g., ROZE) differed from well-matched spelling controls (e.g., ROFE) as early as 150 ms (P150) after stimulus onset. The PsH effect occurred as early as the Word frequency effect suggesting that phonological activation occurs early enough to influence lexical access. Low-resolution electromagnetic tomography analysis (LORETA) revealed that left temporoparietal and right frontotemporal areas are the likely brain regions associated with the processing of phonological information at the lexical level. Altogether, the results show that phonological processes are activated early in visual Word Recognition and play an important role in lexical access.

  • Syllables and bigrams: orthographic redundancy and syllabic units affect visual Word Recognition at different processing levels.
    Journal of experimental psychology. Human perception and performance, 2009
    Co-Authors: Markus Conrad, Manuel Carreiras, Sascha Tamm, Arthur M. Jacobs
    Abstract:

    Over the last decade, there has been increasing evidence for syllabic processing during visual Word Recognition. If syllabic effects prove to be independent of orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic Words. Three experiments are presented to disentangle effects of the frequency of syllabic units and orthographic segments in lexical decision. In Experiment 1 we obtained an inhibitory syllable-frequency effect that was unaffected by the presence or absence of a “bigram trough” at the syllable boundary. In Experiments 2 and 3 an inhibitory effect of initial syllable-frequency but a facilitative effect of initial bigram-frequency emerged when manipulating one of the two measures and controlling for the other in Spanish Words starting with CV-syllables. We conclude that effects of syllable-frequency and letter cluster frequency are independent and arise at different processing levels of visual Word Recognition. Results are discussed within the framework of an interactive activation model of visual Word Recognition.

  • Inhibition and facilitation in visual Word Recognition: Prefrontal contribution to the orthographic neighborhood size effect
    NeuroImage, 2007
    Co-Authors: Christian J. Fiebach, Brigitte Ricker, Angela D. Friederici, Arthur M. Jacobs
    Abstract:

    The Recognition of Words is a central component of language processing. A major role for visual Word Recognition has been attributed to the orthographic neighbors of a Word, i.e., Words that are orthographically similar to a target Word. It has been demonstrated that the presence of orthographic neighbors facilitates the Recognition of Words, but hinders the rejection of nonWords. It is therefore assumed that representations of orthographic neighbors are at least partially activated during Word Recognition, and that they influence Word Recognition depending on the specific task context. In the present study, we used fMRI to examine the neural bases of the effect of orthographic neighborhood size on speeded lexical decisions to Words and nonWords. Our results demonstrate lexicality × neighborhood size interactions in mid-dorsolateral and medial prefrontal cortex, suggesting the involvement of a domain-general, extra-lexical process for orthographic neighborhood effects on Word and nonWord processing. This result challenges computational models that offer purely lexical accounts of the orthographic neighborhood effect and suggests an important role for executive control functions during visual Word Recognition.

  • Replicating syllable frequency effects in Spanish in German: One more challenge to computational models of visual Word Recognition
    Language and Cognitive Processes, 2004
    Co-Authors: Markus Conrad, Arthur M. Jacobs
    Abstract:

    Two experiments tested, in German, another shallow orthography, the role of syllable frequency in Word Recognition recently reported for Spanish. As in Spanish, Word Recognition performance was inhibited in both a lexical decision task and a perceptual identification task when the first syllable of a Word was of high frequency. Given this replication of the inhibitory syllable-frequency effect in a second language, we discuss whether and how computational models of Word Recognition would have to represent a Word’s syllabic structure in order to accurately describe the processing of polysyllabic Words.

  • Models of visual Word Recognition: Sampling the state of the art
    Journal of Experimental Psychology: Human Perception and Performance, 1994
    Co-Authors: Arthur M. Jacobs, Jonathan Grainger
    Abstract:

    A chart of models of visual Word Recognition is presented that facilitates formal comparisons between models of different formats. In the light of the theoretical contributions to this special section, sets of criteria for the evaluation of models are discussed, as well as strategies for model construction.

Johannes C. Ziegler - One of the best experts on this subject based on the ideXlab platform.

  • Frequency-tagged visual evoked responses track syllable effects in visual Word Recognition.
    Cortex; a journal devoted to the study of the nervous system and behavior, 2019
    Co-Authors: Veronica Montani, Jonathan Grainger, Valérie Chanoine, Johannes C. Ziegler
    Abstract:

    The processing of syllables in visual Word Recognition was investigated using a novel paradigm based on steady-state visual evoked potentials (SSVEPs). French Words were presented to proficient readers in a delayed naming task. Words were split into two segments, the first of which was flickered at 18.75 Hz and the second at 25 Hz. The first segment either matched (congruent condition) or did not match (incongruent condition) the first syllable. The SSVEP responses in the congruent condition showed increased power compared to the responses in the incongruent condition, providing new evidence that syllables are important sublexical units in visual Word Recognition and reading aloud. With respect to the neural correlates of the effect, syllables elicited an early activation of a right hemisphere network. This network is typically associated with the programming of complex motor sequences, cognitive control and timing. Subsequently, responses were obtained in left hemisphere areas related to phonological processing.

  • Pseudohomophone effects provide evidence of early lexico-phonological processing in visual Word Recognition.
    Human brain mapping, 2009
    Co-Authors: Mario Braun, Florian Hutzler, Johannes C. Ziegler, Michael Dambacher, Arthur M. Jacobs
    Abstract:

    Previous research using event-related brain potentials (ERPs) suggested that phonological processing in visual Word Recognition occurs rather late, typically after semantic or syntactic processing. Here, we show that phonological activation in visual Word Recognition can be observed much earlier. Using a lexical decision task, we show that ERPs to pseudohomophones (PsHs) (e.g., ROZE) differed from well-matched spelling controls (e.g., ROFE) as early as 150 ms (P150) after stimulus onset. The PsH effect occurred as early as the Word frequency effect suggesting that phonological activation occurs early enough to influence lexical access. Low-resolution electromagnetic tomography analysis (LORETA) revealed that left temporoparietal and right frontotemporal areas are the likely brain regions associated with the processing of phonological information at the lexical level. Altogether, the results show that phonological processes are activated early in visual Word Recognition and play an important role in lexical access.

  • Orthographic facilitation and phonological inhibition in spoken Word Recognition: A developmental study
    Psychonomic Bulletin & Review, 2007
    Co-Authors: Johannes C. Ziegler, Mathilde Muneaux
    Abstract:

    We investigated the extent to which learning to read and write affects spoken Word Recognition. Previous studies have reported orthographic effects on spoken language in skilled readers. However, very few studies have addressed the development of these effects as a function of reading expertise. We therefore studied orthographic neighborhood (ON) and phonological neighborhood (PN) effects in spoken Word Recognition in beginning and advanced readers and in children with developmental dyslexia. We predicted that whereas both beginning and advanced readers would show normal PN effects, only advanced readers would show ON effects. The results confirmed these predictions. The size of the ON effect on spoken Word Recognition was strongly predicted by written language experience and proficiency. In contrast, the size of the PN effect was not affected by reading level. Moreover, dyslexic readers showed no orthographic effects on spoken Word Recognition. In sum, these data suggest that orthographic effects on spoken Word Recognition are not artifacts of some uncontrolled spoken language property but reflect a genuine influence of orthographic information on spoken Word Recognition.

  • Effects of phonological and orthographic neighbourhood density interact in visual Word Recognition
    Quarterly Journal of Experimental Psychology, 2005
    Co-Authors: Jonathan Grainger, Mathilde Muneaux, Fernand Farioli, Johannes C. Ziegler
    Abstract:

    The present study investigated the role of phonological and orthographic neighbourhood density in visual Word Recognition. Three mechanisms were identified that predict distinct facilitatory or inh...

  • Neighborhood effects in auditory Word Recognition: Phonological competition and orthographic facilitation
    Journal of Memory and Language, 2003
    Co-Authors: Johannes C. Ziegler, Mathilde Muneaux, Jonathan Grainger
    Abstract:

    The present study investigated phonological and orthographic neighborhood effects in auditory Word Recognition in French. In an auditory lexical decision task, phonological neighborhood (PN) produced the standard inhibitory effect (Words with many neighbors produced longer latencies and more errors than Words with few neighbors). In contrast, orthographic neighborhood (ON) produced a facilitatory effect. In Experiment 2, the facilitatory ON effect was replicated while controlling for phonotactic probability, a variable that has previously been shown to produce facilitatory effects. In Experiment 3, the results were replicated in a shadowing task, ruling out the possibility that the ON effect results from a strategic and task-specific mechanism that might operate in the lexical decision task. It is argued that the PN effect reflects lexical competition between similar sounding Words while the ON effect reflects the consistency of the sublexical mapping between phonology and orthography. The results join an accumulating number of studies suggesting that orthographic information influences auditory Word Recognition.

Mathilde Muneaux - One of the best experts on this subject based on the ideXlab platform.

  • Orthographic facilitation and phonological inhibition in spoken Word Recognition: A developmental study
    Psychonomic Bulletin & Review, 2007
    Co-Authors: Johannes C. Ziegler, Mathilde Muneaux
    Abstract:

    We investigated the extent to which learning to read and write affects spoken Word Recognition. Previous studies have reported orthographic effects on spoken language in skilled readers. However, very few studies have addressed the development of these effects as a function of reading expertise. We therefore studied orthographic neighborhood (ON) and phonological neighborhood (PN) effects in spoken Word Recognition in beginning and advanced readers and in children with developmental dyslexia. We predicted that whereas both beginning and advanced readers would show normal PN effects, only advanced readers would show ON effects. The results confirmed these predictions. The size of the ON effect on spoken Word Recognition was strongly predicted by written language experience and proficiency. In contrast, the size of the PN effect was not affected by reading level. Moreover, dyslexic readers showed no orthographic effects on spoken Word Recognition. In sum, these data suggest that orthographic effects on spoken Word Recognition are not artifacts of some uncontrolled spoken language property but reflect a genuine influence of orthographic information on spoken Word Recognition.

  • Effects of phonological and orthographic neighbourhood density interact in visual Word Recognition
    Quarterly Journal of Experimental Psychology, 2005
    Co-Authors: Jonathan Grainger, Mathilde Muneaux, Fernand Farioli, Johannes C. Ziegler
    Abstract:

    The present study investigated the role of phonological and orthographic neighbourhood density in visual Word Recognition. Three mechanisms were identified that predict distinct facilitatory or inh...

  • Neighborhood effects in auditory Word Recognition: Phonological competition and orthographic facilitation
    Journal of Memory and Language, 2003
    Co-Authors: Johannes C. Ziegler, Mathilde Muneaux, Jonathan Grainger
    Abstract:

    The present study investigated phonological and orthographic neighborhood effects in auditory Word Recognition in French. In an auditory lexical decision task, phonological neighborhood (PN) produced the standard inhibitory effect (Words with many neighbors produced longer latencies and more errors than Words with few neighbors). In contrast, orthographic neighborhood (ON) produced a facilitatory effect. In Experiment 2, the facilitatory ON effect was replicated while controlling for phonotactic probability, a variable that has previously been shown to produce facilitatory effects. In Experiment 3, the results were replicated in a shadowing task, ruling out the possibility that the ON effect results from a strategic and task-specific mechanism that might operate in the lexical decision task. It is argued that the PN effect reflects lexical competition between similar sounding Words while the ON effect reflects the consistency of the sublexical mapping between phonology and orthography. The results join an accumulating number of studies suggesting that orthographic information influences auditory Word Recognition.

David B Pisoni - One of the best experts on this subject based on the ideXlab platform.

  • Clustering coefficients of lexical neighborhoods: Does neighborhood structure matter in spoken Word Recognition?
    The Mental Lexicon, 2010
    Co-Authors: Nicholas Altieri, Thomas M Gruenenfelder, David B Pisoni
    Abstract:

    High neighborhood density reduces the speed and accuracy of spoken Word Recognition. The two studies reported here investigated whether Clustering Coefficient (CC), a graph-theoretic variable measuring the degree to which a Word’s neighbors are neighbors of one another, has similar effects on spoken Word Recognition. In Experiment 1, we found that high CC Words were identified less accurately when spectrally degraded than low CC Words. In Experiment 2, using a Word repetition procedure, we observed longer response latencies for high CC Words compared to low CC Words. Taken together, the results of both studies indicate that higher CC leads to slower and less accurate spoken Word Recognition. The results are discussed in terms of activation-plus-competition models of spoken Word Recognition.
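The Clustering Coefficient here is the standard graph-theoretic quantity: the proportion of a Word's neighbor pairs that are themselves neighbors. A sketch over a toy lexicon, with neighbors simplified to one-letter substitutions (the studies defined neighbors phonologically, via one phoneme substituted, added, or deleted):

```python
from itertools import combinations

def neighbors(word, lexicon):
    """One-substitution neighbors (a simplification of the studies'
    phonological neighbor definition)."""
    return {w for w in lexicon
            if len(w) == len(word)
            and sum(a != b for a, b in zip(w, word)) == 1}

def clustering_coefficient(word, lexicon):
    """Fraction of the word's neighbor pairs that neighbor each other."""
    nbrs = neighbors(word, lexicon)
    if len(nbrs) < 2:
        return 0.0
    pairs = list(combinations(nbrs, 2))
    linked = sum(1 for a, b in pairs if b in neighbors(a, lexicon))
    return linked / len(pairs)

# Toy lexicon, illustration only
lexicon = {"cat", "bat", "hat", "rat", "mat", "cot"}
print(clustering_coefficient("cat", lexicon))  # 0.6: 6 of 10 pairs linked
```

Note that CC is defined over a Word's neighborhood structure, not its size: two Words can have the same neighborhood density but very different clustering, which is what lets these studies separate the two variables.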

  • Lexical effects on spoken Word Recognition by pediatric cochlear implant users
    Ear and Hearing, 1995
    Co-Authors: Karen Iler Kirk, David B Pisoni, Mary Joe Osberger
    Abstract:

    The Nucleus multichannel cochlear implant provides substantial auditory information to children with profound hearing impairments who are unable to benefit from conventional amplification. However, children who use the Nucleus cochlear implant greatly vary in their spoken Word Recognition skills (Staller, Beiter, Brimacombe, Mecklenburg, & Arndt, 1991a), depending in part on the age at onset and duration of their hearing loss (Fryauf-Bertschy, Tyler, Kelsay, & Gantz, 1992; Osberger, Todd, Berry, Robbins, & Miyamoto, 1991b; Staller et al., 1991a; Staller, Dowell, Beiter, & Brimacombe, 1991b), and on the length of cochlear implant use (Fryauf-Bertschy et al., 1992; Miyamoto et al., 1992, 1994; Osberger et al., 1991a; Waltzman, Cohen, & Shapiro, 1992; Waltzman et al., 1990). Several different types of tests have been used to assess the perceptual benefits of cochlear implant use in children because of this variability in performance. Closed-set tests, which provide the listener with a limited number of response alternatives, have been used to measure the perception of prosodic cues, vowel and consonant identification, and Word identification. According to Tyler (1993), approximately 50% of children with multichannel cochlear implants perform significantly above chance on closed-set tests of Word identification, and some obtain very high levels of performance (70% to 100% correct). For this latter group, more difficult open-set tests of spoken Word Recognition, wherein no response alternatives are provided, are needed to assess their perceptual capabilities. Historically, spoken Word Recognition tests were adapted from articulation tests used to evaluate military communications equipment during World War II (Hudgins, Hawkins, Karlin, & Stevens, 1947).
Several criteria were considered essential in selecting test items, including familiarity, homogeneity of audibility, and phonetic balancing (i.e., to have phonemes within a Word list represented in the same proportion as in English). Phonetic balancing was included as a criterion because it was assumed that all speech sounds must be included to test hearing (Hudgins et al., 1947), and that phonetic balancing ensured homogeneity across different lists (Hirsh et al., 1952). Subsequent research demonstrated that phonetic balancing was not necessary to achieve equivalent Word lists (Carhart, 1965; Hood & Poole, 1980; Tobias, 1964) and that other nonauditory factors, such as subject age or language level, also influence spoken Word Recognition (Hodgson, 1985; Jerger, 1984; Smith & Hodgson, 1970). Nonetheless, phonetically balanced Word Recognition tests still enjoy widespread use in both clinical and research settings because their psychometric properties have been well established (Hirsh et al., 1952; Hudgins et al., 1947). These tests also are widely used because recorded versions of the test materials are available commercially, thereby facilitating comparison of results obtained at different test sites. Phonetically balanced Word lists have been used to evaluate potential cochlear implant candidates, as well as to measure post-implant performance. Spoken Word Recognition is often assessed in children using phonetically balanced materials such as the Phonetically Balanced Kindergarten Word lists (PB-K) (Haskins, Reference Note 1). Children with multichannel cochlear implants generally perform poorly on these phonetically balanced tests (Fryauf-Bertschy et al., 1992; Miyamoto, Osberger, Robbins, Myres, & Kessler, 1993; Osberger et al., 1991a; Staller et al., 1991a). For example, Osberger et al. (1991a) reported that the mean PB-K score for 28 subjects with approximately 2 yr of cochlear implant use was 11% (range 0% to 36%). 
Only six of their subjects scored above 0% Words correct. Similarly, Staller et al. (1991a) reported mean PB-K scores of approximately 9% Words correct for 80 children who had 1 yr of multichannel cochlear implant experience. It is difficult to distinguish among children with differing spoken Word Recognition skills using the PB-K test, or to measure changes with increased device experience because the scores of these subjects cluster in a restricted range near 0% correct. Furthermore, the parents and educators of children with cochlear implants have sometimes reported a discrepancy between the observed performance on these phonetically balanced Word lists and real-world or everyday communication abilities in more natural settings. That is, children may obtain very low scores on phonetically balanced Word lists, but demonstrate relatively good performance during daily activities. The administration of spoken Word Recognition tests assesses the underlying peripheral and central perceptual processes employed in spoken Word Recognition (Lively, Pisoni, & Goldinger, 1994; Pisoni & Luce, 1986). Models of spoken Word Recognition generally propose an initial stage of processing wherein the speech signal is converted to a phonetic representation, followed by a second stage wherein the phonetic representations are matched to the target Words by comparing them to items stored in the mental lexicon (Luce, 1986; Luce, Pisoni, & Goldinger, 1990; Marslen-Wilson, 1987). (For an alternative view, see Klatt's Lexical Access From Spectra [LAFS] model [Klatt, 1980]). Poor performance on phonetically balanced speech identification tests may result from difficulties at either stage. If the auditory signal presented via the cochlear implant is too degraded to allow accurate phonetic encoding, Word Recognition performance will be impaired or reduced. 
The structure and organization of sound patterns in the mental lexicon can also influence Word Recognition (Pisoni, Nusbaum, Luce, & Slowiaczek, 1985). For example, when test item selection is constrained by phonetic balancing, the resulting lists may contain many Words that are unfamiliar to children with profound hearing losses, who typically have limited vocabularies (Dale, 1974; Lach, Ling, & Ling, 1970; Quigley & Paul, 1984). Children should be able to repeat unfamiliar Words if their sensory aid provides adequate auditory information for phoneme identification. If not, then children will most likely select a phonemically similar Word within their working vocabulary. In addition, lexical characteristics, such as the frequency with which Words occur in the language (Andrews, 1989; Elliot, Clifton, & Servi, 1983) and the number of phonemically similar Words in the language (Treisman, 1978a, 1978b) have been shown to affect the speed and accuracy of spoken Word Recognition (Luce, 1986; Luce et al., 1990). Phonetically balanced Word Recognition tests were not designed to assess the influence of these lexical factors on Word Recognition. This paper reports the development of two new Word Recognition tests in which lexical properties of the test items were carefully controlled; test development was motivated by several assumptions embodied in current theories of spoken Word Recognition discussed below. Pediatric cochlear implant subjects’ performance on these new tests will also be compared with their performance on a phonetically balanced, Word Recognition test, the PB-K test.
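Phonetic balancing, having phonemes appear in a list in the same proportion as in the language, is straightforward to check once transcriptions are available. A toy sketch (the transcriptions and ARPAbet-like symbols below are made up for illustration):

```python
from collections import Counter

def phoneme_proportions(words, transcriptions):
    """Proportion of each phoneme across a Word list."""
    counts = Counter()
    for w in words:
        counts.update(transcriptions[w])
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

# Made-up toy transcriptions, illustration only
trans = {
    "cat": ["K", "AE", "T"],
    "kit": ["K", "IH", "T"],
    "dog": ["D", "AO", "G"],
}
props = phoneme_proportions(["cat", "kit"], trans)
print(round(props["K"], 3), round(props["T"], 3))  # 0.333 0.333
```

A list counts as phonetically balanced to the extent these proportions match the language-wide phoneme distribution; the new tests described above instead controlled lexical properties such as Word frequency and neighborhood, which this criterion ignores.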

  • Stimulus variability and spoken Word Recognition: I. Effects of variability in speaking rate and overall amplitude
    Journal of the Acoustical Society of America, 1994
    Co-Authors: Mitchell S Sommers, Lynne C Nygaard, David B Pisoni
    Abstract:

    The present experiments investigated how several different sources of stimulus variability within speech signals affect spoken‐Word Recognition. The effects of varying talker characteristics, speaking rate, and overall amplitude on identification performance were assessed by comparing spoken‐Word Recognition scores for contexts with and without variability along a specified stimulus dimension. Identification scores for Word lists produced by single talkers were significantly better than for the identical items produced in multiple‐talker contexts. Similarly, Recognition scores for Words produced at a single speaking rate were significantly better than for the corresponding mixed‐rate condition. Simultaneous variations in both speaking rate and talker characteristics produced greater reductions in perceptual identification scores than variability along either dimension alone. In contrast, variability in the overall amplitude of test items over a 30‐dB range did not significantly alter spoken‐Word Recognition scores. The results provide evidence for one or more resource‐demanding normalization processes which function to maintain perceptual constancy by compensating for acoustic–phonetic variability in speech signals that can affect phonetic identification.