Visual Word Recognition

Arthur M. Jacobs - One of the best experts on this subject based on the ideXlab platform.

  • Pseudohomophone effects provide evidence of early lexico-phonological processing in Visual Word Recognition.
    Human brain mapping, 2009
    Co-Authors: Mario Braun, Florian Hutzler, Johannes C. Ziegler, Michael Dambacher, Arthur M. Jacobs
    Abstract:

    Previous research using event-related brain potentials (ERPs) suggested that phonological processing in Visual Word Recognition occurs rather late, typically after semantic or syntactic processing. Here, we show that phonological activation in Visual Word Recognition can be observed much earlier. Using a lexical decision task, we show that ERPs to pseudohomophones (PsHs) (e.g., ROZE) differed from well-matched spelling controls (e.g., ROFE) as early as 150 ms (P150) after stimulus onset. The PsH effect occurred as early as the word frequency effect, suggesting that phonological activation occurs early enough to influence lexical access. Low-resolution electromagnetic tomography analysis (LORETA) revealed that left temporoparietal and right frontotemporal areas are the likely brain regions associated with the processing of phonological information at the lexical level. Altogether, the results show that phonological processes are activated early in Visual Word Recognition and play an important role in lexical access.

  • Syllables and bigrams: orthographic redundancy and syllabic units affect Visual Word Recognition at different processing levels.
    Journal of experimental psychology. Human perception and performance, 2009
    Co-Authors: Markus Conrad, Manuel Carreiras, Sascha Tamm, Arthur M. Jacobs
    Abstract:

    Over the last decade, there has been increasing evidence for syllabic processing during Visual Word Recognition. If syllabic effects prove to be independent of orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and of orthographic segments in lexical decision. In Experiment 1, we obtained an inhibitory syllable-frequency effect that was unaffected by the presence or absence of a “bigram trough” at the syllable boundary. In Experiments 2 and 3, an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when one of the two measures was manipulated and the other controlled for, in Spanish words starting with CV syllables. We conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of Visual Word Recognition. Results are discussed within the framework of an interactive activation model of Visual Word Recognition.
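
    The “bigram trough” manipulation above refers to whether the bigram that spans the syllable boundary is markedly less frequent than the surrounding bigrams. As a rough illustration of the idea (not the authors' materials; the bigram frequency table is an assumed toy, not real corpus counts), a minimal sketch in Python:

      # Toy bigram counts (assumed for illustration only, not real corpus frequencies).
      TOY_BIGRAM_FREQ = {"ca": 900, "am": 120, "mi": 450, "in": 800, "no": 600}

      def bigram_profile(word, freq):
          """Frequency of each adjacent letter pair in the word (0 if unseen)."""
          return [freq.get(word[i:i + 2], 0) for i in range(len(word) - 1)]

      def has_bigram_trough(word, boundary, freq):
          """True if the bigram straddling the syllable break is rarer than its neighbours;
          `boundary` is the number of letters before the break."""
          profile = bigram_profile(word, freq)
          b = boundary - 1  # index of the boundary-spanning bigram
          left = profile[b - 1] if b - 1 >= 0 else float("inf")
          right = profile[b + 1] if b + 1 < len(profile) else float("inf")
          return profile[b] < left and profile[b] < right

      # Spanish 'camino' syllabified ca-mi-no: first syllable break after letter 2.
      print(bigram_profile("camino", TOY_BIGRAM_FREQ))        # [900, 120, 450, 800, 600]
      print(has_bigram_trough("camino", 2, TOY_BIGRAM_FREQ))  # True: 'am' is rarer than 'ca' and 'mi'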

  • Phonology as the source of syllable frequency effects in Visual Word Recognition: evidence from French.
    Memory & cognition, 2007
    Co-Authors: Markus Conrad, Jonathan Grainger, Arthur M. Jacobs
    Abstract:

    In order to investigate whether syllable frequency effects in Visual Word Recognition can be attributed to phonologically or orthographically defined syllables, we designed one experiment that allowed six critical comparisons. Whereas only a weak effect was obtained when both orthographic and phonological syllable frequency were conjointly manipulated in Comparison 1, robust effects for phonological and null effects for orthographic syllable frequency were found in Comparisons 2 and 3. Comparisons 4 and 5 showed that the syllable frequency effect does not result from a confound with the frequency of letter or phoneme clusters at the beginning of words. The syllable frequency effect was shown to diminish with increasing word frequency in Comparison 6. These results suggest that visually presented polysyllabic words are parsed into phonologically defined syllables during Visual Word Recognition. Materials and links may be accessed at www.psychonomic.org/archive.

  • Inhibition and facilitation in Visual Word Recognition: Prefrontal contribution to the orthographic neighborhood size effect
    NeuroImage, 2007
    Co-Authors: Christian J. Fiebach, Brigitte Ricker, Angela D. Friederici, Arthur M. Jacobs
    Abstract:

    The recognition of words is a central component of language processing. A major role in Visual Word Recognition has been attributed to the orthographic neighbors of a word, i.e., words that are orthographically similar to a target word. It has been demonstrated that the presence of orthographic neighbors facilitates the recognition of words, but hinders the rejection of nonwords. It is therefore assumed that representations of orthographic neighbors are at least partially activated during word recognition, and that they influence word recognition depending on the specific task context. In the present study, we used fMRI to examine the neural bases of the effect of orthographic neighborhood size on speeded lexical decisions to words and nonwords. Our results demonstrate lexicality × neighborhood size interactions in mid-dorsolateral and medial prefrontal cortex, suggesting the involvement of a domain-general, extra-lexical process in orthographic neighborhood effects on word and nonword processing. This result challenges computational models that offer purely lexical accounts of the orthographic neighborhood effect and suggests an important role for executive control functions during Visual Word Recognition.
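
    The orthographic neighborhood size referred to above is usually operationalized as Coltheart's N: the number of words in the lexicon that share a target's length and differ from it by exactly one letter. A minimal sketch of that count, using a toy lexicon rather than the stimulus lists or corpus used in the study:

      def is_neighbor(a, b):
          """True if a and b have the same length and differ in exactly one position."""
          return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

      def neighborhood_size(target, lexicon):
          """Coltheart's N: number of orthographic neighbors of `target` in `lexicon`."""
          return sum(is_neighbor(target, w) for w in lexicon if w != target)

      # Toy lexicon for illustration only.
      toy_lexicon = {"card", "care", "cart", "case", "cave", "core", "word", "ward", "wore"}
      print(neighborhood_size("care", toy_lexicon))  # 5 (card, cart, case, cave, core)
      print(neighborhood_size("wort", toy_lexicon))  # 2 for this nonword (word, wore)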

  • A phoneme effect in Visual Word Recognition
    Cognition, 1998
    Co-Authors: Arnaud Rey, Arthur M. Jacobs, Florian Schmidt-weigand, Johannes C. Ziegler
    Abstract:

    In alphabetic writing systems like English or French, many words are composed of more letters than phonemes (e.g., BEACH is composed of five letters and three phonemes, i.e., /bitʃ/). This is due to the presence of higher-order graphemes, that is, groups of letters that map into a single phoneme (e.g., EA and CH in BEACH map into the single phonemes /i/ and /tʃ/, respectively). The present study investigated the potential role of these subsyllabic components for the visual recognition of words in a perceptual identification task. In Experiment 1, we manipulated the number of phonemes in monosyllabic, low-frequency, five-letter English words, and found that identification times were longer for words with a small number of phonemes than for words with a large number of phonemes. In Experiment 2, this 'phoneme effect' was replicated in French for low-frequency, but not for high-frequency, monosyllabic words. These results suggest that subsyllabic components, also referred to as functional orthographic units, play a crucial role as elementary building blocks of Visual Word Recognition.
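
    The letter/phoneme asymmetry described above can be made concrete with a greedy parse over multi-letter graphemes. The grapheme inventory below is a deliberately tiny, assumed subset (real English grapheme-to-phoneme mapping is far more context dependent); it only illustrates why BEACH has five letters but three phonemes:

      # Illustrative subset of multi-letter graphemes (assumed, not exhaustive).
      MULTI_LETTER_GRAPHEMES = {"ea", "ch", "sh", "th", "oo", "ai", "ou"}

      def count_phonemes(word):
          """Greedy left-to-right parse: prefer a two-letter grapheme, else one letter."""
          word = word.lower()
          i, n_phonemes = 0, 0
          while i < len(word):
              i += 2 if word[i:i + 2] in MULTI_LETTER_GRAPHEMES else 1
              n_phonemes += 1
          return n_phonemes

      print(len("beach"), count_phonemes("beach"))  # 5 letters, 3 phonemes (B-EA-CH)
      print(len("blast"), count_phonemes("blast"))  # 5 letters, 5 phonemes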

Manuel Carreiras - One of the best experts on this subject based on the ideXlab platform.

  • The what, when, where, and how of Visual Word Recognition
    Trends in cognitive sciences, 2013
    Co-Authors: Manuel Carreiras, Manuel Perea, Blair C. Armstrong, Ram Frost
    Abstract:

    A long-standing debate in reading research is whether printed words are perceived in a feedforward manner on the basis of orthographic information, with other representations such as semantics and phonology activated subsequently, or whether the system is fully interactive and feedback from these representations shapes early Visual Word Recognition. We review recent evidence from behavioral, functional magnetic resonance imaging, electroencephalography, magnetoencephalography, and biologically plausible connectionist modeling approaches, focusing on how each approach provides insight into the temporal flow of information in the lexical system. We conclude that, consistent with interactive accounts, higher-order linguistic representations modulate early orthographic processing. We also discuss how biologically plausible interactive frameworks and coordinated empirical and computational work can advance theories of Visual Word Recognition and other domains (e.g., object recognition).

  • Consonants and vowels contribute differently to Visual Word Recognition: ERPs of relative position priming
    Cerebral Cortex, 2009
    Co-Authors: Manuel Carreiras, Jon Andoni Dunabeitia, Nicola Molinaro
    Abstract:

    This paper shows that the nature of letters (consonant vs. vowel) modulates the process of letter position assignment during Visual Word Recognition. We recorded event-related potentials (ERPs) while participants read words in a masked priming semantic categorization task. Half of the words included a vowel as the initial, third, and fifth letter (e.g., acero [steel]). The other half included a consonant as the initial, third, and fifth letter (e.g., farol [lantern]). Targets could be preceded (1) by their initial, third, and fifth letters (relative position; e.g., aeo-acero and frl-farol), (2) by three consonants or vowels that did not appear in the target word (control; e.g., iui-acero and tsb-farol), or (3) by the same words (identity: acero-acero, farol-farol). The results showed modulation in two time windows (175-250 and 350-450 ms). Relative position primes composed of consonants produced effects similar to the identity condition; these two differed from the unrelated control condition, which showed a larger negativity. In contrast, relative position primes composed of vowels produced effects similar to the unrelated control condition, and these two showed larger negativities compared with the identity condition. This finding has important consequences for cracking the orthographic code and for developing computational models of Visual Word Recognition.
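
    A minimal sketch of how the three prime types described above are built from a five-letter target (the unrelated control letters here are hand-picked placeholders, not the matched controls used in the experiment):

      def relative_position_prime(target):
          """Letters in positions 1, 3, and 5 of the target (counting from 1)."""
          return target[0] + target[2] + target[4]

      target = "acero"  # Spanish for 'steel'
      primes = {
          "relative position": relative_position_prime(target),  # 'aeo'
          "identity": target,                                     # 'acero'
          "unrelated control": "iui",  # letters absent from the target (placeholder)
      }
      print(primes)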

  • Syllables and bigrams: orthographic redundancy and syllabic units affect Visual Word Recognition at different processing levels.
    Journal of experimental psychology. Human perception and performance, 2009
    Co-Authors: Markus Conrad, Manuel Carreiras, Sascha Tamm, Arthur M. Jacobs
    Abstract:

    Over the last decade, there has been increasing evidence for syllabic processing during Visual Word Recognition. If syllabic effects prove to be independent of orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and of orthographic segments in lexical decision. In Experiment 1, we obtained an inhibitory syllable-frequency effect that was unaffected by the presence or absence of a “bigram trough” at the syllable boundary. In Experiments 2 and 3, an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when one of the two measures was manipulated and the other controlled for, in Spanish words starting with CV syllables. We conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of Visual Word Recognition. Results are discussed within the framework of an interactive activation model of Visual Word Recognition.

  • NoA’s ark: Influence of the number of associates in Visual Word Recognition
    Psychonomic bulletin & review, 2008
    Co-Authors: Jon Andoni Dunabeitia, Alberto Avilés, Manuel Carreiras
    Abstract:

    The main aim of this study was to explore the extent to which the number of associates of a word (NoA) influences lexical access, in four tasks that focus on different processes of Visual Word Recognition: lexical decision, reading aloud, progressive demasking, and online sentence reading. Results consistently showed that words with a dense associative neighborhood (high-NoA words) were processed faster than words with a sparse neighborhood (low-NoA words), extending previous findings from English lexical decision and categorization experiments. These results are interpreted in terms of the higher degree of semantic richness of high-NoA words as compared with low-NoA words.

  • Early event-related potential effects of syllabic processing during Visual Word Recognition
    Journal of Cognitive Neuroscience, 2005
    Co-Authors: Manuel Carreiras, Marta Vergara, Horacio A Barber
    Abstract:

    A number of behavioral studies have suggested that syllables might play an important role in Visual Word Recognition in some languages. We report two event-related potential (ERP) experiments using a new paradigm showing that syllabic units modulate early ERP components. In Experiment 1, words and pseudowords were presented visually and colored so that there was a match or a mismatch between the syllable boundaries and the color boundaries. The results showed color–syllable congruency effects in the time window of the P200. Lexicality modulated the N400 amplitude, but no effects of this variable were obtained at the P200 window. In Experiment 2, high- and low-frequency words and pseudowords were presented in the congruent and incongruent conditions. The results again showed congruency effects at the P200 for low-frequency words and pseudowords, but not for high-frequency words. Lexicality and lexical frequency effects showed up at the N400 component. The results suggest a dissociation between syllabic and lexical effects with important consequences for models of Visual Word Recognition.

Jonathan Grainger - One of the best experts on this subject based on the ideXlab platform.

  • Frequency-tagged visual evoked responses track syllable effects in Visual Word Recognition.
    Cortex; a journal devoted to the study of the nervous system and behavior, 2019
    Co-Authors: Veronica Montani, Jonathan Grainger, Valérie Chanoine, Johannes C. Ziegler
    Abstract:

    The processing of syllables in Visual Word Recognition was investigated using a novel paradigm based on steady-state visual evoked potentials (SSVEPs). French words were presented to proficient readers in a delayed naming task. Words were split into two segments, the first of which was flickered at 18.75 Hz and the second at 25 Hz. The first segment either matched (congruent condition) or did not match (incongruent condition) the first syllable. The SSVEP responses in the congruent condition showed increased power compared to the responses in the incongruent condition, providing new evidence that syllables are important sublexical units in Visual Word Recognition and reading aloud. With respect to the neural correlates of the effect, syllables elicited an early activation of a right-hemisphere network. This network is typically associated with the programming of complex motor sequences, cognitive control, and timing. Subsequently, responses were obtained in left-hemisphere areas related to phonological processing.
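
    A hedged sketch of the frequency-tagging logic: the response to each flickered segment can be read out as spectral power at its tagging frequency. The signal below is synthetic, and the sampling rate, epoch length, and single-bin power readout are illustrative assumptions, not the analysis pipeline used in the study:

      import numpy as np

      fs = 500.0                     # sampling rate in Hz (assumed)
      t = np.arange(0, 4.0, 1 / fs)  # 4 s epoch, so 18.75 Hz and 25 Hz fall on exact FFT bins
      rng = np.random.default_rng(0)

      # Synthetic epoch: a stronger response at 18.75 Hz plus a weaker one at 25 Hz and noise.
      epoch = (1.0 * np.sin(2 * np.pi * 18.75 * t)
               + 0.4 * np.sin(2 * np.pi * 25.0 * t)
               + 0.5 * rng.standard_normal(t.size))

      spectrum = np.abs(np.fft.rfft(epoch)) ** 2
      freqs = np.fft.rfftfreq(t.size, d=1 / fs)

      def power_at(f_target):
          """Power in the FFT bin closest to f_target."""
          return spectrum[np.argmin(np.abs(freqs - f_target))]

      print(f"power at 18.75 Hz: {power_at(18.75):.1f}")
      print(f"power at 25.00 Hz: {power_at(25.00):.1f}")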

  • Phonology as the source of syllable frequency effects in Visual Word Recognition: evidence from French.
    Memory & cognition, 2007
    Co-Authors: Markus Conrad, Jonathan Grainger, Arthur M. Jacobs
    Abstract:

    In order to investigate whether syllable frequency effects in Visual Word Recognition can be attributed to phonologically or orthographically defined syllables, we designed one experiment that allowed six critical comparisons. Whereas only a weak effect was obtained when both orthographic and phonological syllable frequency were conjointly manipulated in Comparison 1, robust effects for phonological and null effects for orthographic syllable frequency were found in Comparisons 2 and 3. Comparisons 4 and 5 showed that the syllable frequency effect does not result from a confound with the frequency of letter or phoneme clusters at the beginning of words. The syllable frequency effect was shown to diminish with increasing word frequency in Comparison 6. These results suggest that visually presented polysyllabic words are parsed into phonologically defined syllables during Visual Word Recognition. Materials and links may be accessed at www.psychonomic.org/archive.

  • Effects of phonological and orthographic neighbourhood density interact in Visual Word Recognition
    Quarterly Journal of Experimental Psychology, 2005
    Co-Authors: Jonathan Grainger, Mathilde Muneaux, Fernand Farioli, Johannes C. Ziegler
    Abstract:

    The present study investigated the role of phonological and orthographic neighbourhood density in Visual Word Recognition. Three mechanisms were identified that predict distinct facilitatory or inh...

  • Sublexical representations and the ‘front end’ of Visual Word Recognition
    Language and Cognitive Processes, 2004
    Co-Authors: Manuel Carreiras, Jonathan Grainger
    Abstract:

    In this introduction to the special issue on sublexical representations in Visual Word Recognition, we briefly discuss the importance of research that attempts to describe the functional units that intervene between low-level perceptual processes and access to whole-word representations in long-term memory. We comment on how the different contributions to this issue add to our growing knowledge of the role of orthographic, phonological, and morphological information in the overall task of assigning the appropriate meaning to a given string of letters during reading. We also show how the empirical findings reported in this special issue present a challenge for current computational models of Visual Word Recognition.

  • Models of Visual Word Recognition: Sampling the state of the art
    Journal of Experimental Psychology: Human Perception and Performance, 1994
    Co-Authors: Arthur M. Jacobs, Jonathan Grainger
    Abstract:

    A chart of models of Visual Word Recognition is presented that facilitates formal comparisons between models of different formats. In the light of the theoretical contributions to this special section, sets of criteria for the evaluation of models are discussed, as well as strategies for model construction.

Jennifer A. Stolz - One of the best experts on this subject based on the ideXlab platform.

  • Visual Word Recognition: On the reliability of repetition priming
    Visual Cognition, 2010
    Co-Authors: Stephanie Waechter, Jennifer A. Stolz, Derek Besner
    Abstract:

    Repetition priming is one of the most robust phenomena in cognitive psychology, but participants vary substantially in the amount of priming that they produce. The current experiments assessed the reliability of repetition priming within individuals. The results suggest that observed differences in the size of the repetition priming effect across participants are largely reliable and result primarily from systematic processes. We conclude that the unreliability of semantic priming observed by Stolz, Besner, and Carr (2005) is specific to uncoordinated processes in semantic memory, and that this unreliability does not generalize to other processes in Visual Word Recognition. We consider the implications of these results for theories of automatic and controlled processes that contribute to priming. Finally, we emphasize the importance of reliability for researchers who use similar paradigms to study individual and group differences in cognition.
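
    One common way to quantify the reliability of a priming effect across participants is an odd/even split-half correlation with a Spearman-Brown correction. The sketch below uses simulated response times and illustrates that general approach, not necessarily the exact procedure used in the study:

      import numpy as np

      rng = np.random.default_rng(2)
      n_subjects, n_trials = 40, 80  # per condition (assumed for the simulation)

      # Simulated RTs (ms): each subject has a true repetition priming effect.
      true_effect = rng.normal(50, 20, n_subjects)
      rt_new = 600 + rng.normal(0, 60, (n_subjects, n_trials))
      rt_repeated = 600 - true_effect[:, None] + rng.normal(0, 60, (n_subjects, n_trials))

      def priming_effect(new, repeated, trial_idx):
          """Per-subject priming effect (new minus repeated RT) on the selected trials."""
          return new[:, trial_idx].mean(axis=1) - repeated[:, trial_idx].mean(axis=1)

      odd, even = np.arange(1, n_trials, 2), np.arange(0, n_trials, 2)
      r_half = np.corrcoef(priming_effect(rt_new, rt_repeated, odd),
                           priming_effect(rt_new, rt_repeated, even))[0, 1]
      reliability = 2 * r_half / (1 + r_half)  # Spearman-Brown correction
      print(f"split-half r = {r_half:.2f}, corrected reliability = {reliability:.2f}")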

  • Basic processes in reading: is Visual Word Recognition obligatory?
    Psychonomic bulletin & review, 2005
    Co-Authors: Evan F. Risko, Jennifer A. Stolz, Derek Besner
    Abstract:

    Visual Word Recognition is commonly argued to be automatic in the sense that it is obligatory and ballistic. The present experiments combined Stroop and visual search paradigms to provide a novel test of this claim. An array of three, five, or seven words including one colored target (a word in Experiments 1 and 2, a bar in Experiment 3) was presented to participants. An irrelevant color word also appeared in the display and was either integrated with or separated from the colored target. The participants classified the color of the single colored item in Experiments 1 and 3 and determined whether a target color was present or absent in Experiment 2. A Stroop effect was observed in Experiment 1 when the color word and the color target were integrated, but not when they were separated. No Stroop effect was observed in Experiment 2. Visual Word Recognition is contingent on both the distribution of spatial attention and task demands.

  • Interactive activation in Visual Word Recognition: constraints imposed by the joint effects of spatial attention and semantics.
    Journal of experimental psychology. Human perception and performance, 2004
    Co-Authors: Jennifer A. Stolz, Biljana Stevanovski
    Abstract:

    Two lexical-decision experiments investigated the effects of semantic priming and stimulus intensity when target location varied and was cued by an abrupt onset. In Experiment 1, the spatial cue was a good predictor of target location, and in Experiment 2 it was not. The results indicate that word recognition processes were postponed until spatial attention was focused on the target and that whether attention further affected word recognition depended on cue validity. The joint effects of cue validity and priming interacted when cue validity was high but were additive when cue validity was low. The joint effects of stimulus intensity and semantic priming also varied according to cue validity (i.e., interactive when high and additive when low). The results are discussed in terms of their implications for Visual Word Recognition, the distinction between exogenous and endogenous spatial attention, and how attention is affected by Visual Word Recognition processes.

  • Visual Word Recognition: reattending to the role of spatial attention.
    Journal of experimental psychology. Human perception and performance, 2000
    Co-Authors: Jennifer A. Stolz, Robert S. Mccann
    Abstract:

    Three experiments examine whether spatial attention and Visual Word Recognition processes operate independently or interactively in a spatially cued lexical-decision task. Participants responded to target strings that had been preceded first by a prime word at fixation and then by an abrupt onset cue either above or below fixation. Targets appeared either in the cued (i.e., valid) or uncued (i.e., invalid) location. The proportion of validly cued trials and the proportion of semantically related prime-target pairs were manipulated independently. It is concluded that spatial attention and Visual Word Recognition processes are best seen as interactive. Spatial attention affects word recognition in two distinct ways: (a) it affects the uptake of orthographic information, possibly acting as "glue" to hold letters in their proper places in words, and (b) it (partly) determines whether or not activation from the semantic level feeds down to the lexical level during word recognition.

Manuel Perea - One of the best experts on this subject based on the ideXlab platform.

  • The time course of the lowercase advantage in Visual Word Recognition: An ERP investigation.
    Neuropsychologia, 2020
    Co-Authors: Marta Vergara-martínez, Manuel Perea, Barbara Leone-fernandez
    Abstract:

    Previous word identification and sentence reading experiments have consistently shown faster reading for lowercase than for uppercase words (e.g., table faster than TABLE). A theoretically relevant question for neural models of word recognition is whether the effect of letter case only affects the early prelexical stages of Visual Word Recognition or whether it also influences lexical-semantic processing. To examine the locus and nature of the lowercase advantage in Visual Word Recognition, we conducted an event-related potential (ERP) lexical decision experiment. ERPs were recorded to words and pseudowords presented in lowercase or uppercase. Words also varied in lexical frequency, thus allowing us to assess the time course of perceptual (letter case) and lexical-semantic (word frequency) processing. Together with a lowercase advantage in word recognition times, results showed that letter case influenced early perceptual components (N/P150), whereas word frequency influenced lexical-semantic components (N400). These findings are consistent with those models of written word recognition that assume that letter-case information from the visual input is quickly mapped onto the case-invariant letter and word units that drive lexical access.

  • Is there a cost at encoding words with joined letters during Visual Word Recognition?
    Psicológica Journal, 2018
    Co-Authors: Manuel Roldán, Ana Marcet, Manuel Perea
    Abstract:

    For simplicity, models of Visual-Word Recognition have focused on printed words composed of separated letters, thus overlooking the processing of cursive words. Manso de Zuniga, Humphreys, and Evett (1991) claimed that there is an early “cursive normalization” encoding stage when processing written words with joined letters. To test this claim, we conducted a lexical decision experiment in which words were presented either with separated or joined letters. To examine whether the cost of letter segmentation occurs early in processing, we also manipulated a factor (i.e., word frequency) that is posited to affect subsequent lexical processing. Results showed faster response times for the words composed of separated letters than for the words composed of joined letters. This effect occurred similarly for low- and high-frequency words. Thus, the present data offer some empirical support to Manso de Zuniga et al.’s (1991) idea of an early “cursive normalization” stage when processing joined-letter words. This pattern of data can be used to constrain the mapping of the visual input into letter and word units in future versions of models of Visual-Word Recognition.

  • Resolving the locus of cAsE aLtErNaTiOn effects in Visual Word Recognition: Evidence from masked priming
    Cognition, 2015
    Co-Authors: Manuel Perea, Marta Vergara-martínez, Pablo Gomez
    Abstract:

    Determining the factors that modulate the early access to abstract lexical representations is imperative for the formulation of a comprehensive neural account of Visual-Word identification. There is a current debate on whether the effects of case alternation (e.g., tRaIn vs. train) have an early or late locus in the word-processing stream. Here we report a lexical decision experiment using a technique that taps the early stages of Visual-Word Recognition (i.e., masked priming). In the design, uppercase targets could be preceded by an identity/unrelated prime that could be in lowercase or alternating case (e.g., table-TABLE vs. crash-TABLE; tAbLe-TABLE vs. cRaSh-TABLE). Results revealed that the lowercase and alternating-case primes were equally effective at producing an identity priming effect. This finding demonstrates that case alternation does not hinder the initial access to the abstract lexical representations during Visual-Word Recognition.

  • Decomposing encoding and decisional components in Visual-Word Recognition: a diffusion model analysis.
    Quarterly journal of experimental psychology (2006), 2014
    Co-Authors: Pablo Gomez, Manuel Perea
    Abstract:

    In a diffusion model, performance as measured by latency and accuracy in two-choice tasks is decomposed into different parameters that can be linked to underlying cognitive processes. Although the diffusion model has been utilized to account for lexical decision data, the effects of stimulus manipulations in previous experiments originated from just one parameter: the quality of the evidence. Here we examined whether the diffusion model can be used to effectively decompose the underlying processes during Visual-Word Recognition. We explore this issue in an experiment that features a lexical manipulation (word frequency) that we expected to affect mostly the quality of the evidence (the drift rate parameter), and a perceptual manipulation (stimulus orientation) that presumably affects the nondecisional time (the Ter parameter, time of encoding and response) more than it affects the drift rate. Results showed that although the manipulations do not affect only one parameter, word frequency and stimulus orientation had differential effects on the model's parameters. Thus, the diffusion model is a useful tool to decompose the effects of stimulus manipulations in Visual-Word Recognition.
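
    A minimal random-walk sketch of the two-choice diffusion process described above, showing how drift rate (v, quality of evidence) and nondecision time (Ter, encoding plus response execution) jointly produce a predicted response time. The parameter values are illustrative, not the estimates reported in the study:

      import numpy as np

      def simulate_trial(v, a=0.10, z=0.05, ter=0.40, s=0.1, dt=0.001, rng=None):
          """One trial: evidence starts at z and drifts with rate v (plus Gaussian noise
          with scale s) until it hits 0 ('nonword') or a ('word'). Returns (RT in s, response)."""
          rng = rng or np.random.default_rng()
          x, t = z, 0.0
          while 0.0 < x < a:
              x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return ter + t, int(x >= a)

      rng = np.random.default_rng(1)
      high_freq = [simulate_trial(v=0.40, rng=rng) for _ in range(500)]  # larger drift rate
      low_freq = [simulate_trial(v=0.25, rng=rng) for _ in range(500)]   # smaller drift rate
      print("mean RT, high-frequency words:", round(np.mean([rt for rt, _ in high_freq]), 3))
      print("mean RT, low-frequency words: ", round(np.mean([rt for rt, _ in low_freq]), 3))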

  • Does bold emphasis facilitate the process of Visual-Word Recognition?
    The Spanish journal of psychology, 2014
    Co-Authors: María Macaya, Manuel Perea
    Abstract:

    The study of the effects of typographical factors on lexical access has been rather neglected in the literature on Visual-Word Recognition. Indeed, current computational models of Visual-Word Recognition employ an unrefined letter feature level in their coding schemes. In a letter recognition experiment by Pelli, Burns, Farell, and Moore-Page (2006), letters in Bookman boldface produced greater efficiency (i.e., a higher ratio of thresholds of an ideal observer to those of a human observer) than letters in Bookman regular under visual noise. Here we examined whether the effect of bold emphasis can be generalized to a common Visual-Word Recognition task (lexical decision: "is the item a word?") under standard viewing conditions. Each stimulus was presented either with or without bold emphasis (e.g., actor vs. actor). To help determine the locus of the effect of bold emphasis, word frequency (low vs. high) was also manipulated. Results revealed that responses to words in boldface were faster than responses to words without emphasis, and this advantage was restricted to low-frequency words. Thus, typographical features play a non-negligible role during Visual-Word Recognition and, hence, the letter feature level of current models of Visual-Word Recognition should be amended.