Parallax

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 318 experts worldwide, ranked by the ideXlab platform

Reinhard Koch - One of the best experts on this subject based on the ideXlab platform.

  • ICME Workshops - Parallax View Generation for Static Scenes Using Parallax-Interpolation Adaptive Separable Convolution
    2018 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2018
    Co-Authors: Yuan Gao, Reinhard Koch
    Abstract:

    Reconstructing a Densely-Sampled Light Field (DSLF) from a Sparsely-Sampled Light Field (SSLF) is a challenging problem, for which various kinds of algorithms have been proposed. However, very few of them treat the angular information in a light field as the temporal information of a video from a virtual camera, i.e., the parallax views of an SSLF for a static scene can be turned into the key frames of a video captured by a virtual camera moving along the parallax axis. To this end, in this paper, a novel parallax view generation method, Parallax-Interpolation Adaptive Separable Convolution (PIASC), is proposed. The presented PIASC method takes full advantage of the motion coherence of static objects captured by an SSLF device to enhance the motion-sensitive convolution kernels of a state-of-the-art video frame interpolation method, i.e., Adaptive Separable Convolution (AdaSep-Conv). Experimental results on three development datasets of the grand challenge demonstrate the superior performance of PIASC for DSLF reconstruction of static scenes.
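As a rough illustration of the separable-convolution idea that PIASC builds on, the sketch below blends two views using per-pixel pairs of 1D kernels whose outer product forms the 2D interpolation kernel. The kernels here are caller-supplied placeholders, whereas AdaSep-Conv and PIASC predict them with a neural network; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def adaptive_separable_interp(frame1, frame2, kv1, kh1, kv2, kh2):
    """Blend two frames with per-pixel separable convolution kernels.

    frame1, frame2 : (H, W) grayscale views.
    kv*, kh*       : (H, W, n) per-pixel vertical / horizontal 1D kernels.
    At each pixel the 2D kernel is the outer product of kv and kh, so only
    2n weights are needed instead of n*n (the "separable" trick).
    """
    H, W = frame1.shape
    n = kv1.shape[-1]
    r = n // 2
    p1, p2 = (np.pad(f, r, mode="edge") for f in (frame1, frame2))
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            patch1 = p1[y:y + n, x:x + n]
            patch2 = p2[y:y + n, x:x + n]
            k1 = np.outer(kv1[y, x], kh1[y, x])  # 2D kernel from two 1D ones
            k2 = np.outer(kv2[y, x], kh2[y, x])
            out[y, x] = (k1 * patch1).sum() + (k2 * patch2).sum()
    return out
```

With uniform kernels that each sum to 0.5 this reduces to a 50/50 blur-and-blend; learned kernels instead encode local motion, which is what PIASC adapts to motion along the parallax axis.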

Yuan Gao - One of the best experts on this subject based on the ideXlab platform.

  • ICME Workshops - Parallax View Generation for Static Scenes Using Parallax-Interpolation Adaptive Separable Convolution
    2018 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2018
    Co-Authors: Yuan Gao, Reinhard Koch
    Abstract:

    Reconstructing a Densely-Sampled Light Field (DSLF) from a Sparsely-Sampled Light Field (SSLF) is a challenging problem, for which various kinds of algorithms have been proposed. However, very few of them treat the angular information in a light field as the temporal information of a video from a virtual camera, i.e., the parallax views of an SSLF for a static scene can be turned into the key frames of a video captured by a virtual camera moving along the parallax axis. To this end, in this paper, a novel parallax view generation method, Parallax-Interpolation Adaptive Separable Convolution (PIASC), is proposed. The presented PIASC method takes full advantage of the motion coherence of static objects captured by an SSLF device to enhance the motion-sensitive convolution kernels of a state-of-the-art video frame interpolation method, i.e., Adaptive Separable Convolution (AdaSep-Conv). Experimental results on three development datasets of the grand challenge demonstrate the superior performance of PIASC for DSLF reconstruction of static scenes.

Gordon Wetzstein - One of the best experts on this subject based on the ideXlab platform.

  • Gaze-Contingent Ocular Parallax Rendering for Virtual Reality
    ACM Transactions on Graphics, 2020
    Co-Authors: Robert Konrad, Anastasios Angelopoulos, Gordon Wetzstein
    Abstract:

    Immersive computer graphics systems strive to generate perceptually realistic user experiences. Current-generation virtual reality (VR) displays are successful in accurately rendering many perceptually important effects, including perspective, disparity, motion parallax, and other depth cues. In this article, we introduce ocular parallax rendering, a technology that accurately renders small amounts of gaze-contingent parallax capable of improving depth perception and realism in VR. Ocular parallax describes the small, depth-dependent image shifts on the retina that are created as the eye rotates. The effect occurs because the centers of rotation and projection of the eye are not the same. We study the perceptual implications of ocular parallax rendering by designing and conducting a series of user experiments. Specifically, we estimate perceptual detection and discrimination thresholds for this effect and demonstrate that it is clearly visible in most VR applications. Additionally, we show that ocular parallax rendering provides an effective ordinal depth cue and improves the impression of realistic depth in VR.
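The geometry described in the abstract admits a quick back-of-the-envelope estimate: rotating the eye translates its center of projection (which sits a few millimetres in front of the center of rotation), and that small translation shifts near and far points by different visual angles. The offset value and function below are illustrative assumptions, not numbers taken from the paper.

```python
import math

def ocular_parallax_deg(theta_deg, z_near_m, z_far_m, d_m=0.006):
    """Estimate ocular parallax between two depths for an eye rotation.

    Assumes (hypothetical round number) the eye's center of projection
    sits d_m ~ 6 mm in front of its center of rotation, so a rotation by
    theta translates the projection center by roughly d_m * sin(theta);
    a pinhole translated by t shifts a point at depth z by about t / z rad.
    """
    t = d_m * math.sin(math.radians(theta_deg))       # projection-center travel (m)
    shift_rad = t * (1.0 / z_near_m - 1.0 / z_far_m)  # relative angular shift
    return math.degrees(shift_rad)

# e.g. a 10 degree gaze shift, near object at 0.3 m, background at infinity:
# roughly 0.2 degrees of relative image shift under these assumptions.
```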

  • Gaze-Contingent Ocular Parallax Rendering for Virtual Reality
    International Conference on Computer Graphics and Interactive Techniques, 2019
    Co-Authors: Robert Konrad, Anastasios Angelopoulos, Gordon Wetzstein
    Abstract:

    Current-generation virtual reality (VR) displays aim to generate perceptually realistic user experiences by accurately rendering many perceptually important effects, including perspective, disparity, motion parallax, and other depth cues. We introduce ocular parallax rendering, a technology that renders small amounts of gaze-contingent parallax capable of further increasing perceptual realism in VR. Ocular parallax, the small depth-dependent image shifts on the retina created as the eye rotates, occurs because the centers of rotation and projection of the eye are not the same. We study the perceptual implications of ocular parallax rendering by designing and conducting a series of user experiments. We estimate perceptual detection and discrimination thresholds for this effect and demonstrate that it is clearly visible in most VR applications. However, our studies also indicate that ocular parallax rendering does not significantly improve depth perception in VR.

Nobuyoshi Terashima - One of the best experts on this subject based on the ideXlab platform.

  • Parallax distribution for ease of viewing in stereoscopic HDTV
    electronic imaging, 2002
    Co-Authors: Shinji Ide, Hirokazu Yamanoue, Makoto Okui, Fumio Okano, Mineo Bitou, Nobuyoshi Terashima
    Abstract:

    In order to identify the conditions that make stereoscopic images easier to view, we analyzed the psychological effects using a stereoscopic HDTV system and examined the relationship between this analysis and the parallax distribution patterns. First, we evaluated the impression of 3-D pictures of the standard 3-D test chart and past 3-D video programs using a set of evaluation terms. Two factors were thus extracted: the first related to the sense of presence and the second related to ease of viewing. Second, we applied principal component analysis to the parallax distribution of the stereoscopic images used in the subjective evaluation tests in order to extract the features of the parallax distribution, and then examined the relationship between the factors and those features. The results indicated that the features of the parallax distribution are strongly related to ease of viewing: for ease of viewing 3-D images, the upper part of the screen should be located further from the viewer with less parallax irregularity, and the entire image should be positioned behind the screen.
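The study's second step, principal component analysis over parallax distributions, can be sketched generically: flatten each image's parallax (disparity) map into a feature vector, center the data, and read the dominant patterns off the SVD. The data and function name below are illustrative stand-ins, not the study's material.

```python
import numpy as np

def parallax_pca(disparity_maps, n_components=2):
    """PCA over per-image parallax (disparity) maps.

    Each map is flattened to a feature vector; the returned components
    are the dominant parallax-distribution patterns, and the scores are
    each image's loading on those patterns.
    """
    X = np.stack([d.ravel() for d in disparity_maps])  # (images, pixels)
    Xc = X - X.mean(axis=0)                            # center per pixel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                     # parallax patterns
    scores = U[:, :n_components] * S[:n_components]    # per-image loadings
    return components, scores
```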

Bill Triggs - One of the best experts on this subject based on the ideXlab platform.

  • ECCV (1) - Plane+Parallax, Tensors and Factorization
    Computer Vision - ECCV 2000, 2000
    Co-Authors: Bill Triggs
    Abstract:

    We study the special form that the general multi-image tensor formalism takes under the plane + parallax decomposition, including matching tensors and constraints, closure and depth recovery relations, and inter-tensor consistency constraints. Plane + parallax alignment greatly simplifies the algebra and uncovers the underlying geometric content. We relate plane + parallax to the geometry of translating, calibrated cameras, and introduce a new parallax-factorizing projective reconstruction method based on this. Initial plane + parallax alignment reduces the problem to a single rank-one factorization of a matrix of rescaled parallaxes into a vector of projection centres and a vector of projective heights above the reference plane. The method extends to 3D lines represented by via-points and 3D planes represented by homographies.
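The rank-one factorization at the heart of the method can be sketched with a plain SVD: given a (noise-free, already aligned) matrix of rescaled parallaxes, the leading singular triplet yields the per-image and per-point factors up to a shared scale. The names below are illustrative; a real implementation would also handle noise and missing entries.

```python
import numpy as np

def rank_one_factor(parallax_matrix):
    """Best rank-one factorization of a rescaled-parallax matrix.

    If rows index images and columns index points, the matrix is ideally
    an outer product of a projection-centre vector and a projective-height
    vector; the leading SVD triplet recovers both, up to a shared scale.
    """
    U, S, Vt = np.linalg.svd(parallax_matrix, full_matrices=False)
    centres = U[:, 0] * np.sqrt(S[0])  # per-image factor (up to scale)
    heights = Vt[0] * np.sqrt(S[0])    # per-point factor (up to scale)
    return centres, heights
```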