Temporal Offset

The experts below are selected from a list of 10,776 experts worldwide, ranked by the ideXlab platform.

Mark T Wallace - One of the best experts on this subject based on the ideXlab platform.

  • The temporal binding window for audiovisual speech: children are like little adults
    Neuropsychologia, 2016
    Co-Authors: Andrea Hillock-Dunn, D. Wesley Grantham, Mark T Wallace
    Abstract:

    During a typical communication exchange, both auditory and visual cues contribute to speech comprehension. The influence of vision on speech perception can be measured behaviorally using a task where incongruent auditory and visual speech stimuli are paired to induce perception of a novel token reflective of multisensory integration (i.e., the McGurk effect). This effect is temporally constrained in adults, with illusion perception decreasing as the temporal offset between the auditory and visual stimuli increases. Here, we used the McGurk effect to investigate the development of the temporal characteristics of audiovisual speech binding in 7–24 year-olds. Surprisingly, results indicated that although older participants perceived the McGurk illusion more frequently, no age-dependent change in the temporal boundaries of audiovisual speech binding was observed.
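
    The measurement described above (illusion rate as a function of audio-visual offset) is often summarized by fitting a tuning curve and reporting its width as the temporal binding window. Below is a minimal sketch of that style of analysis on made-up numbers; the offsets, response proportions, Gaussian model, and FWHM convention are illustrative assumptions, not data or methods from the study.

```python
# Illustrative sketch: estimate a temporal binding window from McGurk-response
# proportions measured at several audio-visual offsets.  The offsets, proportions,
# Gaussian tuning model, and FWHM convention are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

# Stimulus-onset asynchronies in ms (negative = auditory stream leads) and the
# proportion of trials on which the illusory McGurk percept was reported (made up).
soa_ms = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0, 400.0])
p_mcgurk = np.array([0.15, 0.35, 0.60, 0.72, 0.68, 0.50, 0.30, 0.18])

def gaussian(soa, amplitude, center, width, baseline):
    """Gaussian tuning of illusion rate over audio-visual offset."""
    return baseline + amplitude * np.exp(-0.5 * ((soa - center) / width) ** 2)

params, _ = curve_fit(gaussian, soa_ms, p_mcgurk, p0=[0.6, 50.0, 150.0, 0.1])
amplitude, center, width, baseline = params

# One common convention: report the full width at half maximum of the fitted curve.
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * width
print(f"peak at {center:.0f} ms offset, binding window (FWHM) ~{fwhm:.0f} ms")
```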

Marc Pollefeys - One of the best experts on this subject based on the ideXlab platform.

  • 3DPVT - Visual-hull reconstruction from uncalibrated and unsynchronized video streams
    2004
    Co-Authors: Sudipta N. Sinha, Marc Pollefeys
    Abstract:

    We present an approach for automatic reconstruction of a dynamic event using multiple video cameras recording from different viewpoints. Those cameras do not need to be calibrated or even synchronized. Our approach recovers all the necessary information by analyzing the motion of the silhouettes in the multiple video streams. The first step consists of computing the calibration and synchronization for pairs of cameras. We compute the temporal offset and epipolar geometry using an efficient RANSAC-based algorithm to search for the epipoles as well as for robustness. In the next stage, the calibration and synchronization for the complete camera network is recovered and then refined through maximum likelihood estimation. Finally, a visual-hull algorithm is used to recover the dynamic shape of the observed object. For unsynchronized video streams, silhouettes are interpolated to deal with subframe temporal offsets. We demonstrate the validity of our approach by obtaining the calibration, synchronization and 3D reconstruction of a moving person from a set of 4 minute videos recorded from 4 widely separated video cameras.
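
    The pairwise step described above jointly estimates a temporal offset and the epipolar geometry between two views. The sketch below illustrates that idea in a much simplified form: it assumes tracked point correspondences on synthetic data, exhaustively scores integer offsets with a normalized 8-point fundamental-matrix fit and the Sampson error, and omits the paper's silhouette tangents, RANSAC search, and subframe refinement.

```python
# Simplified sketch: jointly recover an integer temporal offset and the epipolar
# geometry between two unsynchronized views, using assumed point correspondences
# instead of the paper's silhouette tangents and RANSAC search.
import numpy as np

def normalize(pts):
    """Hartley normalization: translate to centroid, scale mean distance to sqrt(2)."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - centroid, axis=1))
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T

def eight_point(x1, x2):
    """Estimate the fundamental matrix from Nx2 correspondences (N >= 8)."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt        # enforce rank 2
    return T2.T @ F @ T1

def sampson_error(F, x1, x2):
    """Per-correspondence Sampson distance (first-order geometric error)."""
    p1 = np.hstack([x1, np.ones((len(x1), 1))])
    p2 = np.hstack([x2, np.ones((len(x2), 1))])
    Fx1, Ftx2 = (F @ p1.T).T, (F.T @ p2.T).T
    num = np.sum(p2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def project(X, R, t, f=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of Nx3 world points with rotation R and translation t."""
    Xc = (R @ X.T).T + t
    return np.column_stack([f * Xc[:, 0] / Xc[:, 2] + cx,
                            f * Xc[:, 1] / Xc[:, 2] + cy])

# Synthetic scene: one feature moving along a non-planar 3D curve, seen by two cameras.
rng = np.random.default_rng(0)
frames = np.arange(200)
X = np.column_stack([2.0 * np.sin(0.11 * frames),
                     1.5 * np.cos(0.07 * frames),
                     6.0 + np.sin(0.05 * frames)])
theta = 0.3
R_b = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                [0.0, 1.0, 0.0],
                [-np.sin(theta), 0.0, np.cos(theta)]])
view_a = project(X, np.eye(3), np.zeros(3))
view_b = project(X, R_b, np.array([-1.5, 0.2, 0.5]))

true_offset = 7                                     # camera B starts 7 frames late
view_b = np.vstack([np.zeros((true_offset, 2)), view_b])[:len(frames)]
view_b += rng.normal(0.0, 0.3, view_b.shape)        # pixel noise

def offset_score(d, start=30, n=120):
    a, b = view_a[start:start + n], view_b[start + d:start + d + n]
    return np.mean(sampson_error(eight_point(a, b), a, b))

best = min(range(0, 16), key=offset_score)
print("estimated temporal offset (frames):", best)   # expect 7
```

    In the actual method, the correspondences come from silhouette frontier points, RANSAC provides robustness to outliers, and silhouettes are interpolated to refine the offset below one frame.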

  • ICPR (1) - Synchronization and calibration of camera networks from silhouettes
    2004
    Co-Authors: Sudipta N. Sinha, Marc Pollefeys
    Abstract:

    We propose an automatic approach to synchronize a network of uncalibrated and unsynchronized video cameras, and recover the complete calibration of all these cameras. In this paper, we extend recent work on computing the epipolar geometry from dynamic silhouettes, to deal with unsynchronized sequences and find the temporal offset between them. This is used to compute the fundamental matrices and the temporal offsets between many view-pairs in the network. Knowing the time-shifts between enough view-pairs allows us to robustly synchronize the whole network. The calibration of all the cameras is recovered from these fundamental matrices. The dynamic shape of the object can then be recovered using a visual-hull algorithm. Our method is especially useful for multi-camera shape-from-silhouette systems, as visual hulls can now be reconstructed without the need for a specific calibration session.
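
    Once time-shifts are available for enough view-pairs, a consistent per-camera offset can be recovered for the whole network. The sketch below shows the basic consistency solve with plain linear least squares and made-up pairwise measurements; the paper's robust procedure is more involved.

```python
# Sketch of the network-synchronization step: given measured pairwise time-shifts
# between view-pairs, solve for one consistent offset per camera (camera 0 pinned to
# zero) by linear least squares.  The pair list and values below are made up.
import numpy as np

n_cams = 4
# (i, j, measured offset of camera j relative to camera i, in frames)
pairwise = [(0, 1, 12.1), (1, 2, -5.0), (0, 2, 7.2), (2, 3, 3.9), (1, 3, -1.2)]

A = np.zeros((len(pairwise), n_cams))
b = np.zeros(len(pairwise))
for row, (i, j, dt) in enumerate(pairwise):
    A[row, j], A[row, i], b[row] = 1.0, -1.0, dt    # each row encodes t_j - t_i = dt

A = np.vstack([A, np.eye(1, n_cams)])               # gauge constraint: t_0 = 0
b = np.append(b, 0.0)
offsets, *_ = np.linalg.lstsq(A, b, rcond=None)
print("per-camera offsets (frames):", np.round(offsets, 2))
```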

  • CVPR (2) - Synchronization and calibration of a camera network for 3D event reconstruction from live video
    2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 1
    Co-Authors: Sudipta N. Sinha, Marc Pollefeys
    Abstract:

    We present an approach for automatic reconstruction of a dynamic event using multiple video cameras recording from different viewpoints. Our approach recovers all the necessary information by analyzing the motion of the silhouettes in the multiple video streams. The first step consists of computing the calibration and synchronization for pairs of cameras. We compute the temporal offset and epipolar geometry using an efficient RANSAC-based algorithm to search for the epipoles as well as for robustness. In the next stage, the calibration and synchronization for the complete camera network is recovered and then refined through maximum likelihood estimation. Finally, a visual hull algorithm is used to recover the dynamic shape of the observed object.

Andrea Hillock-Dunn - One of the best experts on this subject based on the ideXlab platform.

  • The temporal binding window for audiovisual speech: children are like little adults
    Neuropsychologia, 2016
    Co-Authors: Andrea Hillock-Dunn, D. Wesley Grantham, Mark T Wallace
    Abstract:

    During a typical communication exchange, both auditory and visual cues contribute to speech comprehension. The influence of vision on speech perception can be measured behaviorally using a task where incongruent auditory and visual speech stimuli are paired to induce perception of a novel token reflective of multisensory integration (i.e., the McGurk effect). This effect is temporally constrained in adults, with illusion perception decreasing as the temporal offset between the auditory and visual stimuli increases. Here, we used the McGurk effect to investigate the development of the temporal characteristics of audiovisual speech binding in 7–24 year-olds. Surprisingly, results indicated that although older participants perceived the McGurk illusion more frequently, no age-dependent change in the temporal boundaries of audiovisual speech binding was observed.

Sudipta N. Sinha - One of the best experts on this subject based on the ideXlab platform.

  • 3DPVT - Visual-hull reconstruction from uncalibrated and unsynchronized video streams
    2004
    Co-Authors: Sudipta N. Sinha, Marc Pollefeys
    Abstract:

    We present an approach for automatic reconstruction of a dynamic event using multiple video cameras recording from different viewpoints. Those cameras do not need to be calibrated or even synchronized. Our approach recovers all the necessary information by analyzing the motion of the silhouettes in the multiple video streams. The first step consists of computing the calibration and synchronization for pairs of cameras. We compute the temporal offset and epipolar geometry using an efficient RANSAC-based algorithm to search for the epipoles as well as for robustness. In the next stage, the calibration and synchronization for the complete camera network is recovered and then refined through maximum likelihood estimation. Finally, a visual-hull algorithm is used to recover the dynamic shape of the observed object. For unsynchronized video streams, silhouettes are interpolated to deal with subframe temporal offsets. We demonstrate the validity of our approach by obtaining the calibration, synchronization and 3D reconstruction of a moving person from a set of 4 minute videos recorded from 4 widely separated video cameras.

  • ICPR (1) - Synchronization and calibration of camera networks from silhouettes
    2004
    Co-Authors: Sudipta N. Sinha, Marc Pollefeys
    Abstract:

    We propose an automatic approach to synchronize a network of uncalibrated and unsynchronized video cameras, and recover the complete calibration of all these cameras. In this paper, we extend recent work on computing the epipolar geometry from dynamic silhouettes, to deal with unsynchronized sequences and find the temporal offset between them. This is used to compute the fundamental matrices and the temporal offsets between many view-pairs in the network. Knowing the time-shifts between enough view-pairs allows us to robustly synchronize the whole network. The calibration of all the cameras is recovered from these fundamental matrices. The dynamic shape of the object can then be recovered using a visual-hull algorithm. Our method is especially useful for multi-camera shape-from-silhouette systems, as visual hulls can now be reconstructed without the need for a specific calibration session.

  • CVPR (2) - Synchronization and calibration of a camera network for 3D event reconstruction from live video
    2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 1
    Co-Authors: Sudipta N. Sinha, Marc Pollefeys
    Abstract:

    We present an approach for automatic reconstruction of a dynamic event using multiple video cameras recording from different viewpoints. Our approach recovers all the necessary information by analyzing the motion of the silhouettes in the multiple video streams. The first step consists of computing the calibration and synchronization for pairs of cameras. We compute the temporal offset and epipolar geometry using an efficient RANSAC-based algorithm to search for the epipoles as well as for robustness. In the next stage, the calibration and synchronization for the complete camera network is recovered and then refined through maximum likelihood estimation. Finally, a visual hull algorithm is used to recover the dynamic shape of the observed object.

Kevin G. Munhall - One of the best experts on this subject based on the ideXlab platform.

  • The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.
    Seeing and Perceiving, 2012
    Co-Authors: Julie N. Buchan, Kevin G. Munhall
    Abstract:

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth, and more time was spent looking at the eyes when a concurrent cognitive load task was added to the speech task.

  • The influence of selective attention to auditory and visual speech on the integration of audiovisual speech information.
    Perception, 2011
    Co-Authors: Julie N. Buchan, Kevin G. Munhall
    Abstract:

    Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

  • Quantifying the time‐varying coordination between performer and audience.
    The Journal of the Acoustical Society of America, 2009
    Co-Authors: Adriano Vilela Barbosa, Kevin G. Munhall, Hani Camille Yehia, Eric Vatikiotis-Bateson
    Abstract:

    This study examines the coordination that occurs ubiquitously during behavioral interaction (e.g., linguistic, social, musical). Coordination, however, does not imply strict synchronization. Musicians in a quartet, for example, will deviate slightly from the measured beat, alternately playing slightly ahead or behind. This paper showcases a new algorithm for computing the continuous, instantaneous correlation between two signals at any temporal offset, resulting in a two-dimensional mapping of correlation and temporal offset that makes it possible to visualize and assess the time-varying nature of the coordination. The algorithm is demonstrated through analysis of the extraordinary performer-audience coordination evoked by Freddie Mercury during the rock group Queen's Live Aid concert at Wembley Stadium in 1985.
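
    As a rough illustration of the correlation-versus-offset map described above, the sketch below computes a windowed Pearson correlation between two synthetic signals at every time point over a grid of lags. The signals, window length, and lag range are invented, and the windowed correlation is a simplified stand-in for the instantaneous-correlation algorithm in the paper.

```python
# Simplified stand-in for a correlation-vs-offset map: a windowed Pearson correlation
# between two signals, evaluated at every time point over a grid of temporal offsets.
# The signals, window length, and lag range below are invented for illustration.
import numpy as np

def correlation_map(x, y, max_lag, window):
    """corr_map[l, t] = correlation of x around t with y around t + lags[l]."""
    lags = np.arange(-max_lag, max_lag + 1)
    corr_map = np.full((len(lags), len(x)), np.nan)
    for li, lag in enumerate(lags):
        for t in range(window + abs(lag), len(x) - window - abs(lag)):
            a = x[t - window:t + window + 1]
            b = y[t + lag - window:t + lag + window + 1]
            corr_map[li, t] = np.corrcoef(a, b)[0, 1]
    return lags, corr_map

# Synthetic "performer" and "audience" signals: the audience follows the performer
# with a lag that drifts slowly between 5 and 15 samples.
rng = np.random.default_rng(1)
t = np.arange(2000)
performer = np.sin(2 * np.pi * t / 120) + 0.2 * rng.normal(size=t.size)
drifting_lag = (10 + 5 * np.sin(2 * np.pi * t / 1500)).astype(int)
audience = performer[np.clip(t - drifting_lag, 0, t.size - 1)] + 0.2 * rng.normal(size=t.size)

lags, cmap = correlation_map(performer, audience, max_lag=25, window=60)
interior = cmap[:, 100:-100]                        # drop edge columns that stay NaN
best_lag = lags[np.nanargmax(interior, axis=0)]     # offset of peak correlation per time
print("median estimated lag (samples):", int(np.median(best_lag)))
```

    Plotting the resulting array as an image, with lag on one axis and time on the other, gives the kind of two-dimensional correlation map the abstract refers to.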