Video Players

14,000,000 Leading Edge Experts on the ideXlab platform

The experts below are selected from a list of 29,121 experts worldwide, ranked by the ideXlab platform.

Belen Masia - One of the best experts on this subject based on the ideXlab platform.

  • Motion Parallax for 360° RGBD Video
    IEEE Transactions on Visualization and Computer Graphics, 2019
    Co-Authors: Ana Serrano, Incheol Kim, Zhili Chen, Stephen DiVerdi, Diego Gutierrez, Aaron Hertzmann, Belen Masia
    Abstract:

    We present a method for adding parallax and real-time playback of 360° videos in Virtual Reality headsets. In current video players, the playback does not respond to translational head movement, which reduces the feeling of immersion and causes motion sickness for some viewers. Given a 360° video and its corresponding depth (provided by current stereo 360° stitching algorithms), a naive image-based rendering approach would use the depth to generate a 3D mesh around the viewer, then translate it appropriately as the viewer moves their head. However, this approach breaks at depth discontinuities, showing visible distortions, whereas cutting the mesh at such discontinuities leads to ragged silhouettes and holes at disocclusions. We address these issues by improving the given initial depth map to yield cleaner, more natural silhouettes. We rely on a three-layer scene representation, made up of a foreground layer and two static background layers, to handle disocclusions by propagating information from multiple frames for the first background layer, and then inpainting for the second one. Our system works with input from many of today's most popular 360° stereo capture devices (e.g., Yi Halo or GoPro Odyssey), and works well even if the original video does not provide depth information. Our user studies confirm that our method provides a more compelling viewing experience than without parallax, increasing immersion while reducing discomfort and nausea.
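To make the naive image-based rendering baseline described in the abstract concrete, the sketch below back-projects an equirectangular depth map to 3D points around the viewer and rigidly shifts them with a head translation. This is an illustrative NumPy sketch of the baseline only, not the authors' implementation; the function name, the latitude/longitude pixel convention, and the axis layout are assumptions.

```python
import numpy as np

def equirect_to_points(depth, head_offset=np.zeros(3)):
    """Back-project an equirectangular depth map to 3D points around the
    viewer, then apply a translational head offset (naive parallax).

    depth: (H, W) array of metric distances per pixel.
    Returns an (H, W, 3) array of points in the translated viewer's frame.
    """
    h, w = depth.shape
    # Pixel centers -> spherical angles: longitude in [-pi, pi),
    # latitude in (-pi/2, pi/2) (assumed convention).
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Unit view rays on the sphere (y up, z forward assumed).
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    points = dirs * depth[..., None]   # vertices of the depth "mesh"
    return points - head_offset        # naive: shift rigidly with the head

# Tiny example: constant depth of 2 m, head moved 0.1 m along +x.
depth = np.full((4, 8), 2.0)
pts = equirect_to_points(depth, head_offset=np.array([0.1, 0.0, 0.0]))
```

The abstract's point is that this simple shift breaks exactly where `depth` is discontinuous: neighboring mesh vertices on either side of an object edge move apart, stretching or tearing the surface, which is what the paper's three-layer representation is designed to repair.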

Ana Serrano - One of the best experts on this subject based on the ideXlab platform.

  • Motion Parallax for 360° RGBD Video
    IEEE Transactions on Visualization and Computer Graphics, 2019
    Co-Authors: Ana Serrano, Incheol Kim, Zhili Chen, Stephen DiVerdi, Diego Gutierrez, Aaron Hertzmann, Belen Masia

Zhili Chen - One of the best experts on this subject based on the ideXlab platform.

  • Motion Parallax for 360° RGBD Video
    IEEE Transactions on Visualization and Computer Graphics, 2019
    Co-Authors: Ana Serrano, Incheol Kim, Zhili Chen, Stephen DiVerdi, Diego Gutierrez, Aaron Hertzmann, Belen Masia

Aaron Hertzmann - One of the best experts on this subject based on the ideXlab platform.

  • Motion Parallax for 360° RGBD Video
    IEEE Transactions on Visualization and Computer Graphics, 2019
    Co-Authors: Ana Serrano, Incheol Kim, Zhili Chen, Stephen DiVerdi, Diego Gutierrez, Aaron Hertzmann, Belen Masia

Incheol Kim - One of the best experts on this subject based on the ideXlab platform.

  • Motion Parallax for 360° RGBD Video
    IEEE Transactions on Visualization and Computer Graphics, 2019
    Co-Authors: Ana Serrano, Incheol Kim, Zhili Chen, Stephen DiVerdi, Diego Gutierrez, Aaron Hertzmann, Belen Masia