Visual Search

The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform.

Jeremy M Wolfe - One of the best experts on this subject based on the ideXlab platform.

  • Visual search: How do we find what we are looking for?
    Annual Review of Vision Science, 2020
    Co-Authors: Jeremy M Wolfe
    Abstract:

    In visual search tasks, observers look for targets among distractors. In the lab, this often takes the form of multiple searches for a simple shape that may or may not be present among other items scattered at random on a computer screen (e.g., find a red T among other letters that are either black or red). In the real world, observers may search for multiple classes of target in complex scenes that occur only once (e.g., as I emerge from the subway, can I find lunch, my friend, and a street sign in the scene before me?). This article reviews work on how search is guided intelligently. I ask how serial and parallel processes collaborate in visual search, describe the distinction between search templates in working memory and target templates in long-term memory, and consider how searches are terminated.

  • Five factors that guide attention in visual search
    Nature Human Behaviour, 2017
    Co-Authors: Jeremy M Wolfe, Todd S Horowitz
    Abstract:

    How do we find what we are looking for? Even when the desired target is in the current field of view, we need to search because fundamental limits on visual processing make it impossible to recognize everything at once. Searching involves directing attention to objects that might be the target. This deployment of attention is not random. It is guided to the most promising items and locations by five factors discussed here: bottom-up salience, top-down feature guidance, scene structure and meaning, the previous history of search over timescales ranging from milliseconds to years, and the relative value of the targets and distractors. Modern theories of visual search need to incorporate all five factors and specify how these factors combine to shape search behaviour. An understanding of the rules of guidance can be used to improve the accuracy and efficiency of socially important search tasks, from security screening to medical image perception.
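
    The abstract describes guidance as several factor maps combining to prioritize items and locations. As a minimal illustrative sketch (not the authors' model; all maps, weights, and names here are assumptions), the following Python snippet sums five hypothetical factor maps into a single priority map and deploys attention to its peak:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    H, W = 20, 20  # hypothetical 20x20 grid of display locations

    # Hypothetical factor maps, each scoring every location in [0, 1].
    factors = {
        "bottom_up_salience": rng.random((H, W)),
        "feature_guidance":   rng.random((H, W)),  # match to target features
        "scene_structure":    rng.random((H, W)),  # plausible target regions
        "search_history":     rng.random((H, W)),  # priming, past searches
        "value":              rng.random((H, W)),  # payoff for targets there
    }

    # Assumed equal weights; in practice these would be estimated from behaviour.
    weights = {name: 1.0 for name in factors}

    # Priority map = weighted sum of the five guidance factors.
    priority = sum(weights[name] * fmap for name, fmap in factors.items())

    # Attention is deployed to the highest-priority location first.
    next_fixation = np.unravel_index(np.argmax(priority), priority.shape)
    print("Next attended location (row, col):", next_fixation)
    ```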

  • Signal detection evidence for limited capacity in visual search
    Attention, Perception, & Psychophysics, 2011
    Co-Authors: Evan M Palmer, Jeremy M Wolfe, David E Fencsik, Stephen J Flusberg, Todd S Horowitz
    Abstract:

    The nature of capacity limits (if any) in visual search has been a topic of controversy for decades. In 30 years of work, researchers have attempted to distinguish between two broad classes of visual search models. Attention-limited models have proposed two stages of perceptual processing: an unlimited-capacity preattentive stage and a limited-capacity selective attention stage. Conversely, noise-limited models have proposed a single, unlimited-capacity perceptual processing stage, with decision processes influenced only by stochastic noise. Here, we use signal detection methods to test a strong prediction of attention-limited models. In standard attention-limited models, performance of some searches (feature searches) should be limited only by a preattentive stage. Other search tasks (e.g., spatial configuration search for a “2” among “5”s) should be additionally limited by an attentional bottleneck. We equated average accuracies for a feature and a spatial configuration search over set sizes of 1-8 for briefly presented stimuli. The strong prediction of attention-limited models is that, given overall equivalence in performance, accuracy should be better on the spatial configuration search than on the feature search for set size 1, and worse for set size 8. We confirm this crossover interaction and show that it is problematic for at least one class of one-stage decision models.
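
    The noise-limited alternative mentioned above can be made concrete with a one-stage signal detection simulation. A minimal sketch of an unlimited-capacity "max rule" observer (an illustration of that model class; the d' and criterion values are assumptions): even without an attentional bottleneck, accuracy falls with set size because more noisy samples are monitored.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def max_rule_accuracy(d_prime, set_size, criterion, n_trials=20_000):
        """Proportion correct for an unlimited-capacity max-rule observer.

        On target-present trials one item has mean d', the rest mean 0;
        the observer responds 'present' if the largest sample exceeds
        the criterion. All parameter values here are assumptions.
        """
        # Target-present trials: one signal item among (set_size - 1) noise items.
        present = rng.normal(0.0, 1.0, (n_trials, set_size))
        present[:, 0] += d_prime
        hits = (present.max(axis=1) > criterion).mean()

        # Target-absent trials: every item is noise.
        absent = rng.normal(0.0, 1.0, (n_trials, set_size))
        correct_rejections = (absent.max(axis=1) <= criterion).mean()

        return 0.5 * (hits + correct_rejections)

    for n in (1, 2, 4, 8):
        print(n, round(max_rule_accuracy(d_prime=2.0, set_size=n, criterion=1.5), 3))
    ```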

  • Visual search in scenes involves selective and nonselective pathways
    Trends in Cognitive Sciences, 2011
    Co-Authors: Jeremy M Wolfe, Karla K Evans, Michelle R Greene
    Abstract:

    How does one find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This article argues that the mechanisms that govern artificial laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes might be best explained by a dual-path model: a 'selective' path in which candidate objects must be individually selected for recognition and a 'nonselective' path in which information can be extracted from global and/or statistical information.
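
    A schematic sketch of the dual-path idea (my illustration, not the authors' implementation; the scene representation and helper names are hypothetical): a cheap nonselective pass over global statistics restricts the candidates that the serial, selective recognition loop must inspect.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical scene: each object has a location, a coarse feature, a label.
    objects = [{"xy": rng.random(2), "color": rng.random(), "label": i}
               for i in range(50)]
    objects[42]["color"] = 0.9  # plant a target for the demo
    target_color = 0.9          # assumed coarse feature of the sought object

    def nonselective_pass(objs, tol=0.15):
        """Global/statistical route: a cheap parallel filter over the scene.

        No object is individually recognized; we keep only objects whose
        coarse statistics are compatible with the target."""
        return [o for o in objs if abs(o["color"] - target_color) < tol]

    def selective_pass(candidates, is_target):
        """Selective route: candidates are recognized one at a time."""
        for n_inspected, obj in enumerate(candidates, start=1):
            if is_target(obj):  # costly recognition step, applied serially
                return obj, n_inspected
        return None, len(candidates)

    candidates = nonselective_pass(objects)
    found, cost = selective_pass(candidates, lambda o: o["label"] == 42)
    print(f"Recognized {cost} of {len(objects)} objects before finding the target")
    ```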

  • What are the shapes of response time distributions in visual search?
    Journal of Experimental Psychology: Human Perception and Performance, 2011
    Co-Authors: Evan M Palmer, Todd S Horowitz, Antonio Torralba, Jeremy M Wolfe
    Abstract:

    Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search.
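
    One of the four fitted functions, the ex-Gaussian (a Gaussian convolved with an exponential), can be fit with off-the-shelf tools. A minimal sketch using scipy.stats.exponnorm on simulated RTs (the data below are synthetic, not the paper's):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Simulated RTs in ms: Gaussian stage (mu, sigma) plus exponential tail (tau).
    mu, sigma, tau = 450.0, 60.0, 150.0
    rts = rng.normal(mu, sigma, 1000) + rng.exponential(tau, 1000)

    # scipy parameterizes the ex-Gaussian as exponnorm(K, loc, scale),
    # where K = tau / sigma, loc = mu, and scale = sigma.
    K, loc, scale = stats.exponnorm.fit(rts)
    print(f"mu = {loc:.0f} ms, sigma = {scale:.0f} ms, tau = {K * scale:.0f} ms")
    ```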

Steven J. Luck - One of the best experts on this subject based on the ideXlab platform.

  • Visual search is slowed when visuospatial working memory is occupied
    Psychonomic Bulletin & Review, 2004
    Co-Authors: Geoffrey F Woodman, Steven J. Luck
    Abstract:

    Visual working memory plays a central role in most models of visual search. However, a recent study showed that search efficiency was not impaired when working memory was filled to capacity by a concurrent object memory task (Woodman, Vogel, & Luck, 2001). Objects and locations may be stored in separate working memory subsystems, and it is plausible that visual search relies on the spatial subsystem, but not on the object subsystem. In the present study, we sought to determine whether maintaining spatial information in visual working memory impairs the efficiency of a concurrent visual search task. Visual search efficiency and spatial memory accuracy were both impaired when the search and memory tasks were performed concurrently, as compared with when the tasks were performed separately. These findings suggest that common mechanisms are used to process information during difficult visual search tasks and to maintain spatial information in working memory.

  • Serial deployment of attention during visual search
    Journal of Experimental Psychology: Human Perception and Performance, 2003
    Co-Authors: Geoffrey F Woodman, Steven J. Luck
    Abstract:

    This study examined whether objects are attended in serial or in parallel during a demanding visual search task. A component of the event-related potential waveform, the N2pc wave, was used as a continuous measure of the allocation of attention to possible targets in the search arrays. Experiment 1 demonstrated that the relative allocation of attention shifts rapidly, favoring one item and then another. In Experiment 2, a paradigm was used that made it possible to track the absolute allocation of attention to individual items. This experiment showed that attention was allocated to one object for 100-150 ms before attention began to be allocated to the next object. These findings support models of attention that posit serial processing in demanding visual search tasks.
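
    The N2pc is conventionally computed as the contralateral-minus-ipsilateral voltage difference at posterior electrodes (e.g., PO7/PO8). A minimal sketch of that computation on synthetic single-trial data (array shapes, sampling rate, amplitudes, and electrode names are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    fs = 500                           # assumed sampling rate, Hz
    t = np.arange(-0.1, 0.5, 1 / fs)   # seconds relative to search array onset

    # Synthetic ERPs (trials x time) at left (PO7) and right (PO8) sites,
    # split by whether the target appeared in the left or right hemifield.
    def fake_erp(n_trials, n2pc_amp):
        bump = n2pc_amp * np.exp(-((t - 0.23) ** 2) / (2 * 0.03 ** 2))
        return bump + rng.normal(0, 2.0, (n_trials, t.size))

    po7_target_right = fake_erp(200, -1.5)  # PO7 is contralateral here
    po8_target_right = fake_erp(200, 0.0)
    po7_target_left  = fake_erp(200, 0.0)
    po8_target_left  = fake_erp(200, -1.5)  # PO8 is contralateral here

    # N2pc = mean(contralateral) - mean(ipsilateral), averaged over both sides.
    contra = 0.5 * (po7_target_right.mean(0) + po8_target_left.mean(0))
    ipsi   = 0.5 * (po7_target_left.mean(0)  + po8_target_right.mean(0))
    n2pc = contra - ipsi

    window = (t >= 0.18) & (t <= 0.28)
    print(f"Mean N2pc amplitude, 180-280 ms: {n2pc[window].mean():.2f} uV")
    ```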

  • Visual search remains efficient when visual working memory is full
    Psychological Science, 2001
    Co-Authors: Geoffrey F Woodman, Edward K. Vogel, Steven J. Luck
    Abstract:

    Many theories of attention have proposed that visual working memory plays an important role in visual search tasks. The present study examined the involvement of visual working memory in search using a dual-task paradigm in which participants performed a visual search task while maintaining no, two, or four objects in visual working memory. The presence of a working memory load added a constant delay to the visual search reaction times, irrespective of the number of items in the visual search array. That is, there was no change in the slope of the function relating reaction time to the number of items in the search array, indicating that the search process itself was not slowed by the memory load. Moreover, the search task did not substantially impair the maintenance of information in visual working memory. These results suggest that visual search requires minimal visual working memory resources, a conclusion that is inconsistent with theories that propose a close link between attention and working memory.
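
    The reasoning here is that a load taxing the search process itself should steepen the slope of the RT-by-set-size function, whereas a load that only delays other stages shifts the intercept. A sketch of that slope/intercept comparison with np.polyfit on made-up group means (illustrative numbers, not the paper's data):

    ```python
    import numpy as np

    set_sizes = np.array([4, 8, 12])

    # Illustrative mean RTs (ms): the load condition is shifted upward by a
    # roughly constant amount, so the slope barely changes.
    rt_no_load = np.array([620.0, 760.0, 900.0])
    rt_load    = np.array([700.0, 845.0, 985.0])

    slope0, intercept0 = np.polyfit(set_sizes, rt_no_load, 1)
    slope1, intercept1 = np.polyfit(set_sizes, rt_load, 1)

    print(f"no load: {slope0:.1f} ms/item, intercept {intercept0:.0f} ms")
    print(f"load:    {slope1:.1f} ms/item, intercept {intercept1:.0f} ms")
    # Similar slopes + larger intercept = constant delay; search itself unslowed.
    ```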

  • Neural sources of focused attention in visual search
    Cerebral Cortex, 2000
    Co-Authors: Jens Max Hopf, Steven J. Luck, Massimo Girelli, Tilman Hagner, George R Mangun, Henning Scheich, Hansjochen Heinze
    Abstract:

    Previous studies of visual search in humans using event-related potentials (ERPs) have revealed an ERP component called 'N2pc' (180-280 ms) that reflects the focusing of attention onto potential target items in the search array. The present study was designed to localize the neuroanatomical sources of this component by means of magnetoencephalographic (MEG) recordings, which provide greater spatial precision than ERP recordings. MEG recordings were obtained with an array of 148 magnetometers from six normal adult subjects, one of whom was tested in multiple sessions so that both single-subject and group analyses could be performed. Source localization procedures revealed that the N2pc is composed of two distinct neural responses: an early parietal source (180-200 ms) and a later occipito-temporal source (220-240 ms). These findings are consistent with the proposal that parietal areas are used to initiate a shift of attention within a visual search array and that the focusing of attention is implemented by extrastriate areas of the occipital and inferior temporal cortex.

  • Electrophysiological measurement of rapid shifts of attention during visual search
    Nature, 1999
    Co-Authors: Geoffrey F Woodman, Steven J. Luck
    Abstract:

    The perception of natural visual scenes that contain many objects poses computational problems that are absent when objects are perceived in isolation [1]. Vision researchers have captured this attribute of real-world perception in the laboratory by using visual search tasks, in which subjects search for a target object in arrays containing varying numbers of non-target distractor objects. Under many conditions, the amount of time required to detect a visual search target increases as the number of objects in the stimulus array increases, and some investigators have proposed that this reflects the serial application of attention to the individual objects in the array [2,3]. However, other investigators have argued that this pattern of results may instead be due to limitations in the processing capacity of a parallel processing system that identifies multiple objects concurrently [4,5]. Here we attempt to address this longstanding controversy by using an electrophysiological marker of the moment-by-moment direction of attention, the N2pc component of the event-related potential waveform, to show that attention shifts rapidly among objects during visual search.

Geoffrey F Woodman - One of the best experts on this subject based on the ideXlab platform.

  • Visual search is slowed when visuospatial working memory is occupied
    Psychonomic Bulletin & Review, 2004
    Co-Authors: Geoffrey F Woodman, Steven J. Luck
    Abstract:

    Visual working memory plays a central role in most models of visual search. However, a recent study showed that search efficiency was not impaired when working memory was filled to capacity by a concurrent object memory task (Woodman, Vogel, & Luck, 2001). Objects and locations may be stored in separate working memory subsystems, and it is plausible that visual search relies on the spatial subsystem, but not on the object subsystem. In the present study, we sought to determine whether maintaining spatial information in visual working memory impairs the efficiency of a concurrent visual search task. Visual search efficiency and spatial memory accuracy were both impaired when the search and memory tasks were performed concurrently, as compared with when the tasks were performed separately. These findings suggest that common mechanisms are used to process information during difficult visual search tasks and to maintain spatial information in working memory.

  • Serial deployment of attention during visual search
    Journal of Experimental Psychology: Human Perception and Performance, 2003
    Co-Authors: Geoffrey F Woodman, Steven J. Luck
    Abstract:

    This study examined whether objects are attended in serial or in parallel during a demanding visual search task. A component of the event-related potential waveform, the N2pc wave, was used as a continuous measure of the allocation of attention to possible targets in the search arrays. Experiment 1 demonstrated that the relative allocation of attention shifts rapidly, favoring one item and then another. In Experiment 2, a paradigm was used that made it possible to track the absolute allocation of attention to individual items. This experiment showed that attention was allocated to one object for 100-150 ms before attention began to be allocated to the next object. These findings support models of attention that posit serial processing in demanding visual search tasks.

  • Visual search remains efficient when visual working memory is full
    Psychological Science, 2001
    Co-Authors: Geoffrey F Woodman, Edward K. Vogel, Steven J. Luck
    Abstract:

    Many theories of attention have proposed that visual working memory plays an important role in visual search tasks. The present study examined the involvement of visual working memory in search using a dual-task paradigm in which participants performed a visual search task while maintaining no, two, or four objects in visual working memory. The presence of a working memory load added a constant delay to the visual search reaction times, irrespective of the number of items in the visual search array. That is, there was no change in the slope of the function relating reaction time to the number of items in the search array, indicating that the search process itself was not slowed by the memory load. Moreover, the search task did not substantially impair the maintenance of information in visual working memory. These results suggest that visual search requires minimal visual working memory resources, a conclusion that is inconsistent with theories that propose a close link between attention and working memory.

  • Electrophysiological measurement of rapid shifts of attention during visual search
    Nature, 1999
    Co-Authors: Geoffrey F Woodman, Steven J. Luck
    Abstract:

    The perception of natural visual scenes that contain many objects poses computational problems that are absent when objects are perceived in isolation [1]. Vision researchers have captured this attribute of real-world perception in the laboratory by using visual search tasks, in which subjects search for a target object in arrays containing varying numbers of non-target distractor objects. Under many conditions, the amount of time required to detect a visual search target increases as the number of objects in the stimulus array increases, and some investigators have proposed that this reflects the serial application of attention to the individual objects in the array [2,3]. However, other investigators have argued that this pattern of results may instead be due to limitations in the processing capacity of a parallel processing system that identifies multiple objects concurrently [4,5]. Here we attempt to address this longstanding controversy by using an electrophysiological marker of the moment-by-moment direction of attention, the N2pc component of the event-related potential waveform, to show that attention shifts rapidly among objects during visual search.

Gregor Schöner - One of the best experts on this subject based on the ideXlab platform.

  • Scene memory and spatial inhibition in visual search: A neural dynamic process model and new experimental evidence
    Attention, Perception, & Psychophysics, 2020
    Co-Authors: Raul Grieben, Jonas Lins, Sebastian Schneegans, Jan Tekulve, Stephan K. U. Zibner, Gregor Schöner
    Abstract:

    Any object-oriented action requires that the object first be brought into the attentional foreground, often through visual search. Outside the laboratory, this would always take place in the presence of a scene representation acquired from ongoing visual exploration. The interaction of scene memory with visual search is still not completely understood. Feature integration theory (FIT) has shaped both research on visual search, emphasizing the scaling of search times with set size when searches entail feature conjunctions, and research on visual working memory through the change detection paradigm. Despite its neural motivation, there is no consistently neural process account of FIT in both its dimensions. We propose such an account that integrates (1) visual exploration and the building of scene memory, (2) the attentional detection of visual transients and the extraction of search cues, and (3) visual search itself. The model uses dynamic field theory, in which networks of neural dynamic populations supporting stable activation states are coupled to generate sequences of processing steps. The neural architecture accounts for basic findings in visual search and proposes a concrete mechanism for the integration of working memory into the search process. In a behavioral experiment, we address the long-standing question of whether both the overall speed and the efficiency of visual search can be improved by scene memory. We find both effects and provide model fits of the behavioral results. In a second experiment, we show that the increase in efficiency is fragile, and trace that fragility to the resetting of spatial working memory.
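
    The building block of such a model is a dynamic neural field: a population activation u(x, t) evolving under an Amari-style equation with local excitation, global inhibition, and external input, so that localized peaks form as stable states. A minimal one-dimensional field simulation in that spirit (parameters and kernel are assumptions, not the paper's fitted values):

    ```python
    import numpy as np

    # 1-D dynamic neural field: tau * du/dt = -u + h + s(x)
    #                            + [w_exc * f(u)](x) - g_inh * sum f(u)
    n, dt, tau, h = 101, 1.0, 20.0, -5.0  # sites, step (ms), time const, rest level
    x = np.arange(n)

    # Local excitatory kernel plus global inhibition (assumed parameters).
    w_exc = 12.0 * np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 3.0 ** 2))
    g_inh = 1.0
    f = lambda u: 1.0 / (1.0 + np.exp(-u))  # sigmoid output rate

    stim = 6.0 * np.exp(-(x - 50) ** 2 / (2 * 3.0 ** 2))  # localized input at x = 50
    u = np.full(n, h)
    for _ in range(300):                    # 300 ms of Euler integration
        rate = f(u)
        u += (dt / tau) * (-u + h + stim + w_exc @ rate - g_inh * rate.sum())

    # A self-stabilized activation peak marks the attended location.
    print("peak site:", int(np.argmax(u)), " peak activation:", round(float(u.max()), 1))
    ```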

Minshik Kim - One of the best experts on this subject based on the ideXlab platform.

  • Predictive spatial working memory content guides visual search
    Visual Cognition, 2010
    Co-Authors: Jangjin Kim, Minshik Kim, Marvin M Chun
    Abstract:

    In visual search tasks, repeating spatial contexts that are predictive of target location facilitate detection (Chun & Jiang, 1998). In past studies, the predicted spatial configurations appeared concurrently with target information. Here we examined whether repeatedly presented working memory (WM) arrays could also serve as contextual cues. Participants performed visual search while maintaining a WM array presented at the beginning of each trial. In the learning phase, each WM array was paired with a specific target location. The paired displays were repeated throughout the entire learning session. In the test phase, half of the pairings remained constant (old condition); the other half switched to new, unpaired locations (new condition). If participants learn the associations between the representations maintained in WM and target locations in the search displays, then search performance should be better in the old condition than in the new condition. In Experiment 1, four colour patches were used as WM...

  • The role of spatial working memory in visual search efficiency
    Psychonomic Bulletin & Review, 2004
    Co-Authors: Minshik Kim
    Abstract:

    Many theories have proposed that visual working memory plays an important role in visual search. In contrast, by showing that a nonspatial working memory load did not interfere with search efficiency, Woodman, Vogel, and Luck (2001) recently proposed that the role of working memory in visual search is insignificant. However, the visual search process may interfere with spatial working memory. In the present study, a visual search task was performed concurrently with either a spatial working memory task (Experiment 1) or a nonspatial working memory task (Experiment 2). We found that the visual search process interfered with a spatial working memory load, but not with a nonspatial working memory load. These results suggest that there is a distinction between spatial and nonspatial working memory in terms of interactions with visual search tasks. These results imply that the visual search process and spatial working memory storage require the same limited-capacity mechanisms.