Gaze Behavior

The Experts below are selected from a list of 12,486 Experts worldwide, ranked by the ideXlab platform.

Caroline E Robertson - One of the best experts on this subject based on the ideXlab platform.

  • active vision in immersive 360 real world environments
    Scientific Reports, 2020
    Co-Authors: Amanda J Haskins, Jeff Mentch, Thomas L Botch, Caroline E Robertson
    Abstract:

    How do we construct a sense of place in a real-world environment? Real-world environments are actively explored via saccades, head turns, and body movements. Yet, little is known about how humans process real-world scene information during active viewing conditions. Here, we exploited recent developments in virtual reality (VR) and in-headset eye-tracking to test the impact of active vs. passive viewing conditions on Gaze Behavior while participants explored novel, real-world, 360° scenes. In one condition, participants actively explored 360° photospheres from a first-person perspective via self-directed motion (saccades and head turns). In another condition, photospheres were passively displayed to participants while they were head-restricted. We found that, relative to passive viewers, active viewers displayed increased attention to semantically meaningful scene regions, suggesting more exploratory, information-seeking Gaze Behavior. We also observed signatures of exploratory Behavior in eye movements, such as quicker, more entropic fixations during active as compared with passive viewing conditions. These results show that active viewing influences every aspect of Gaze Behavior, from the way we move our eyes to what we choose to attend to. Moreover, these results offer key benchmark measurements of Gaze Behavior in 360°, naturalistic environments.
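
    The "more entropic fixations" finding refers to the spatial spread of gaze across the scene. Below is a minimal sketch of one way such a measure could be quantified, assuming fixations arrive as (azimuth, elevation) coordinates in degrees; the grid resolution, function name, and toy data are illustrative assumptions, not the authors' pipeline:

        import numpy as np

        def fixation_entropy(azimuth_deg, elevation_deg, n_bins=(36, 18)):
            """Shannon entropy (bits) of a 2D histogram of fixation positions.

            Higher entropy means fixations spread over more of the 360° scene.
            Bin counts are illustrative; azimuth spans [-180, 180] degrees,
            elevation spans [-90, 90] degrees.
            """
            hist, _, _ = np.histogram2d(
                azimuth_deg, elevation_deg,
                bins=n_bins, range=[[-180, 180], [-90, 90]],
            )
            p = hist.ravel() / hist.sum()   # normalize counts to probabilities
            p = p[p > 0]                    # empty bins contribute 0 * log(0) = 0
            return float(-(p * np.log2(p)).sum())

        # Toy comparison: scattered (active-like) vs. clustered (passive-like) gaze.
        rng = np.random.default_rng(0)
        print(fixation_entropy(rng.uniform(-180, 180, 500), rng.uniform(-90, 90, 500)))
        print(fixation_entropy(rng.normal(0, 15, 500), rng.normal(0, 10, 500)))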

  • active vision in immersive 360 real world environments
    bioRxiv, 2020
    Co-Authors: Amanda J Haskins, Jeff Mentch, Thomas L Botch, Caroline E Robertson
    Abstract:

    Vision is an active process. Humans actively sample their sensory environment via saccades, head turns, and body movements. Yet, little is known about active visual processing in real-world environments. Here, we exploited recent advances in immersive virtual reality (VR) and in-headset eye-tracking to show that active viewing conditions impact how humans process complex, real-world scenes. Specifically, we used quantitative, model-based analyses to compare which visual features participants prioritize over others while encoding a novel environment in two experimental conditions: active and passive. In the active condition, participants used head-mounted VR displays to explore 360° scenes from a first-person perspective via self-directed motion (saccades and head turns). In the passive condition, 360° scenes were passively displayed to participants within the VR headset while they were head-restricted. Our results show that signatures of top-down attentional guidance increase in active viewing conditions: active viewers disproportionately allocate their attention to semantically relevant scene features, as compared with passive viewers. We also observed increased signatures of exploratory Behavior in eye movements, such as quicker, more entropic fixations during active as compared with passive viewing conditions. These results have broad implications for studies of visual cognition, suggesting that active viewing influences every aspect of Gaze Behavior, from the way we move our eyes to what we choose to attend to, as we construct a sense of place in a real-world environment.

    Significance Statement: Eye-tracking in immersive virtual reality offers an unprecedented opportunity to study human Gaze Behavior under naturalistic viewing conditions without sacrificing experimental control. Here, we advanced this new technique to show how humans deploy attention as they encode a diverse set of 360°, real-world scenes, actively explored from a first-person perspective using head turns and saccades. Our results build on classic studies in psychology, showing that active, as compared with passive, viewing conditions fundamentally alter perceptual processing. Specifically, active viewing conditions increase information-seeking Behavior in humans, producing faster, more entropic fixations, which are disproportionately deployed to scene areas that are rich in semantic meaning. In addition, our results offer key benchmark measurements of Gaze Behavior in 360°, naturalistic environments.
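
    The "model-based analyses" compare where people fixate against competing feature maps. A minimal sketch of that comparison, correlating a smoothed fixation-density map with a candidate feature map; the map shapes, smoothing width, and random placeholder maps are assumptions standing in for real model output:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def map_correlation(fix_x, fix_y, feature_map, sigma=2.0):
            """Pearson correlation between a smoothed fixation-density map
            and a candidate feature map of the same shape."""
            density = np.zeros(feature_map.shape)
            np.add.at(density, (fix_y, fix_x), 1)      # accumulate fixation counts
            density = gaussian_filter(density, sigma)  # smooth into a density map
            return float(np.corrcoef(density.ravel(), feature_map.ravel())[0, 1])

        # Placeholder maps; in a real analysis these would be semantic-relevance
        # and low-level salience predictions for each 360° scene.
        rng = np.random.default_rng(1)
        semantic_map = gaussian_filter(rng.random((45, 90)), 3)
        salience_map = gaussian_filter(rng.random((45, 90)), 3)
        fx, fy = rng.integers(0, 90, 300), rng.integers(0, 45, 300)
        print("semantic r:", map_correlation(fx, fy, semantic_map))
        print("salience r:", map_correlation(fx, fy, salience_map))

    A reliably higher correlation with the semantic map than with the salience map would be the signature of top-down guidance the abstract describes.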

Michael Schrauf - One of the best experts on this subject based on the ideXlab platform.

  • What determines the take-over time? An integrated model approach of driver take-over after automated driving
    Accident Analysis and Prevention, 2015
    Co-Authors: Kathrin Zeeb, Axel Buchner, Michael Schrauf
    Abstract:

    In recent years, the automation level of driver assistance systems has increased continuously. One of the major challenges for highly automated driving is to ensure a safe driver take-over of vehicle guidance, especially when the driver is engaged in non-driving-related secondary tasks. To this end, it is essential to find indicators of the driver's readiness to take over and to gain more knowledge about the take-over process in general. A simulator study was conducted to explore how drivers' allocation of visual attention during highly automated driving influences a take-over action in response to an emergency situation. We recorded drivers' Gaze Behavior during automated driving while they simultaneously engaged in a visually demanding secondary task, and measured their reaction times in a take-over situation. Based on their Gaze Behavior, drivers were categorized as "high-", "medium-", or "low-risk". The Gaze parameters proved suitable for predicting readiness to take over the vehicle: high-risk drivers reacted later and more often inappropriately in the take-over situation. However, the driver groups did not differ in the time required to establish motor readiness to intervene after the take-over request. An integrated model of driver Behavior in emergency take-over situations during automated driving is presented. It is argued that primarily cognitive, not motor, processes determine the take-over time. From this, insights can be derived for further research and for the development of automated systems.
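
    A minimal sketch of how gaze parameters might feed such a risk categorization; the metric names and thresholds below are illustrative assumptions, not the values reported by Zeeb et al.:

        from dataclasses import dataclass

        @dataclass
        class DriverGaze:
            eyes_on_road_ratio: float  # fraction of automated driving spent gazing at the road
            glances_per_minute: float  # monitoring glances toward road and instruments

        def risk_category(g: DriverGaze) -> str:
            """Classify take-over readiness from gaze during automated driving.
            Thresholds are illustrative placeholders, not the study's values."""
            if g.eyes_on_road_ratio >= 0.5 and g.glances_per_minute >= 6:
                return "low-risk"
            if g.eyes_on_road_ratio >= 0.25 or g.glances_per_minute >= 3:
                return "medium-risk"
            return "high-risk"

        for d in [DriverGaze(0.62, 8.0), DriverGaze(0.30, 4.5), DriverGaze(0.10, 1.0)]:
            print(d, "->", risk_category(d))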

Jeff Mentch - One of the best experts on this subject based on the ideXlab platform.

  • active vision in immersive 360 real world environments
    Scientific Reports, 2020
    Co-Authors: Amanda J Haskins, Jeff Mentch, Thomas L Botch, Caroline E Robertson
    Abstract: identical to the Scientific Reports entry listed under Caroline E Robertson above.

  • active vision in immersive 360 real world environments
    bioRxiv, 2020
    Co-Authors: Amanda J Haskins, Jeff Mentch, Thomas L Botch, Caroline E Robertson
    Abstract: identical to the bioRxiv entry listed under Caroline E Robertson above.

Christian Keysers - One of the best experts on this subject based on the ideXlab platform.

  • age related increase in inferior frontal gyrus activity and social functioning in autism spectrum disorder
    Biological Psychiatry, 2011
    Co-Authors: Jojanneke A Bastiaansen, Cees Ketelaars, Christian Keysers, Luca Nanetti, Christiaan Van Der Gaag, Marc Thioux, Ruud B Minderaa
    Abstract:

    Background: Hypoactivation of the inferior frontal gyrus during the perception of facial expressions has been interpreted as evidence for a deficit of the mirror neuron system in children with autism. We examined whether this dysfunction persists in adulthood and how brain activity in the mirror neuron system relates to social functioning outside the laboratory. Methods: Twenty-one adult males with autism spectrum disorders and 21 typically developing subjects matched for age, sex, and IQ were scanned in three conditions: observing short movies showing facial expressions, performing a facial movement, and experiencing a disgusting taste. Symptom severity and level of social adjustment were measured with the Autism Diagnostic Observation Schedule and the Social Functioning Scale. Results: Inferior frontal gyrus activity during the observation of facial expressions increased with age in subjects with autism, but not in control subjects. The age-related increase in activity was associated with changes in Gaze Behavior and improvements in social functioning. These age-related neurocognitive improvements were not found in a group of individuals with schizophrenia who had comparable levels of social functioning. Conclusions: The results of this cross-sectional study suggest that mirror neuron system activity increases with age in autism and that this increase is accompanied by changes in Gaze Behavior and improved social functioning. This is the first demonstration of an age-related neurocognitive improvement in autism. Increased motor simulation may contribute to the amelioration in social functioning documented in adolescence and adulthood. This finding should encourage the development of new therapeutic interventions directed at emotion simulation.
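
    The key analysis is a group difference in the age slope of inferior frontal gyrus activity. A minimal sketch of that analysis pattern, correlating activity with age separately per group; the data here are synthetic placeholders, not the study's measurements:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        age = rng.uniform(18, 50, 21)                  # 21 participants per group
        ifg_asd = 0.04 * age + rng.normal(0, 0.4, 21)  # activity rising with age
        ifg_ctrl = rng.normal(1.2, 0.4, 21)            # flat across age

        for label, activity in [("ASD", ifg_asd), ("control", ifg_ctrl)]:
            r, p = stats.pearsonr(age, activity)
            print(f"{label}: r = {r:+.2f}, p = {p:.3f}")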

David Traum - One of the best experts on this subject based on the ideXlab platform.

  • the effects of virtual agent humor and Gaze Behavior on human virtual agent proxemics
    Intelligent Virtual Agents, 2011
    Co-Authors: Peter Khooshabeh, Cade Mccall, Sudeep Gandhe, Jonathan Gratch, Jim Blascovich, David Traum
    Abstract:

    We study whether a virtual agent that delivers humor through verbal Behavior can affect an individual's proxemic Behavior toward the agent. Participants interacted with a virtual agent through natural language and, in a separate task, performed an embodied interpersonal interaction task in a virtual environment. The study used minimum distance as the dependent measure. Humor generated by the virtual agent through a text chat had no significant effect on the proxemic measure, likely because the experimental design only allowed participants to interact with a disembodied agent through a textual chat dialogue.
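
    Proxemics here reduces to one scalar per trial: the minimum distance the participant keeps from the agent. A minimal sketch, assuming 2D position samples in meters; the trajectory, sampling, and function name are hypothetical:

        import numpy as np

        def minimum_distance(participant_xy, agent_xy):
            """Minimum Euclidean distance (m) between participant and agent
            over a trial, given (n_samples, 2) position arrays."""
            return float(np.linalg.norm(participant_xy - agent_xy, axis=1).min())

        # Toy trial: participant walks toward a stationary agent at the origin.
        t = np.linspace(0.0, 1.0, 50)
        participant = np.column_stack([2.0 - 1.4 * t, np.full_like(t, 0.3)])
        agent = np.zeros((50, 2))
        print(f"minimum approach distance: {minimum_distance(participant, agent):.2f} m")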