3D Scenes

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 52,563 Experts worldwide, ranked by the ideXlab platform

Dieter Fox - One of the best experts on this subject based on the ideXlab platform.

  • Detection-Based Object Labeling in 3D Scenes
    International Conference on Robotics and Automation, 2012
    Co-Authors: Kevin Lai, Xiaofeng Ren, Dieter Fox
    Abstract:

    We propose a view-based approach for labeling objects in 3D scenes reconstructed from RGB-D (color + depth) videos. We utilize sliding-window detectors trained from object views to assign class probabilities to pixels in every RGB-D frame. These probabilities are projected into the reconstructed 3D scene and integrated using a voxel representation. We perform efficient inference on a Markov Random Field over the voxels, combining cues from view-based detection and 3D shape, to label the scene. Our detection-based approach produces accurate scene labeling on the RGB-D Scenes Dataset and improves the robustness of object detection.
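
    The projection-and-integration step can be sketched as follows. This is an illustrative toy, not the authors' code: it assumes a known intrinsic matrix K and per-frame pose (R, t), and the grid size, voxel size, and class count are invented values.

```python
import numpy as np

VOXEL_SIZE = 0.05          # 5 cm voxels (assumed toy value)
GRID_SHAPE = (40, 40, 40)  # toy grid extent
N_CLASSES = 3              # toy class count

def integrate_frame(grid_logodds, depth, class_probs, K, R, t):
    """Back-project each pixel to 3D and accumulate log-odds class
    evidence into its voxel, so repeated views reinforce each other."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    # Pixel -> camera coordinates via the pinhole model
    z = depth[valid]
    x = (us[valid] - K[0, 2]) * z / K[0, 0]
    y = (vs[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z], axis=1)
    # Camera -> world coordinates
    pts_world = pts_cam @ R.T + t
    idx = np.floor(pts_world / VOXEL_SIZE).astype(int)
    in_grid = np.all((idx >= 0) & (idx < np.array(GRID_SHAPE)), axis=1)
    idx = idx[in_grid]
    probs = np.clip(class_probs[valid][in_grid], 1e-6, 1.0 - 1e-6)
    # Convert probabilities to log-odds and accumulate (unbuffered add
    # so multiple pixels hitting the same voxel all contribute)
    logodds = np.log(probs) - np.log(1.0 - probs)
    np.add.at(grid_logodds, (idx[:, 0], idx[:, 1], idx[:, 2]), logodds)
    return grid_logodds
```

    The paper additionally runs MRF inference over the voxels; the sketch stops at the per-voxel evidence accumulation.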

Hao Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Language-Driven Synthesis of 3D Scenes from Scene Databases
    ACM Transactions on Graphics, 2019
    Co-Authors: Akshay Gadi Patil, Matthew Fisher, Sören Pirk, Binh-Son Hua, Sai-Kit Yeung, Xin Tong, Leonidas J. Guibas, Hao Zhang
    Abstract:

    We introduce a novel framework for using natural language to generate and edit 3D indoor scenes, harnessing scene semantics and text-scene grounding knowledge learned from large annotated 3D scene databases. The advantage of natural-language editing interfaces is strongest when performing semantic operations at the sub-scene level, acting on groups of objects. We learn how to manipulate these sub-scenes by analyzing existing 3D scenes. We perform edits by first parsing a natural language command from the user and transforming it into a semantic scene graph that is used to retrieve corresponding sub-scenes from the databases that match the command. We then augment this retrieved sub-scene by incorporating other objects that may be implied by the scene context. Finally, a new 3D scene is synthesized by aligning the augmented sub-scene with the user's current scene, where new objects are spliced into the environment, possibly triggering appropriate adjustments to the existing scene arrangement. A suggestive modeling interface with multiple interpretations of user commands is used to alleviate ambiguities in natural language. We conduct studies comparing our approach against both prior text-to-scene work and artist-made scenes and find that our method significantly outperforms prior work and is comparable to handmade scenes even when complex and varied natural sentences are used.
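
    The parse-then-retrieve pipeline can be illustrated with a deliberately tiny sketch. Nothing here is from the paper's system: the relation vocabulary, the (object, relation, anchor) triple format, and the sub-scene database are all invented for the example.

```python
# Toy illustration of command -> semantic scene graph -> sub-scene retrieval.
RELATIONS = {"next to": "next_to", "above": "above", "on": "on"}

SUBSCENE_DB = [
    {"graph": {("lamp", "on", "desk")}, "objects": ["lamp", "desk", "chair"]},
    {"graph": {("monitor", "on", "desk")}, "objects": ["monitor", "desk"]},
    {"graph": {("plant", "next_to", "sofa")}, "objects": ["plant", "sofa"]},
]

def parse_command(command):
    """Extract a set of (object, relation, anchor) triples from a flat
    command string; a real parser would use a proper NLP pipeline."""
    words = command.lower().replace("put a ", "").replace("the ", "")
    # Try longer relation phrases first so "next to" wins over "on"
    for phrase, rel in sorted(RELATIONS.items(), key=lambda kv: -len(kv[0])):
        if f" {phrase} " in f" {words} ":
            obj, anchor = [s.strip() for s in words.split(phrase, 1)]
            return {(obj, rel, anchor)}
    return set()

def retrieve(graph):
    """Return database sub-scenes whose graphs contain every query edge."""
    return [s for s in SUBSCENE_DB if graph <= s["graph"]]
```

    The paper goes much further (context-driven augmentation, alignment, and splicing into the current scene); the sketch covers only the graph-matching retrieval idea.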

  • Fully computed holographic stereogram based algorithm for computer-generated holograms with accurate depth cues
    Optics Express, 2015
    Co-Authors: Hao Zhang, Liangcai Cao, Yan Zhao, Guofan Jin
    Abstract:

    We propose an algorithm based on a fully computed holographic stereogram for calculating full-parallax computer-generated holograms (CGHs) with accurate depth cues. The proposed method integrates the point-source algorithm and the holographic-stereogram-based algorithm to reconstruct three-dimensional (3D) scenes. Precise accommodation cues and occlusion effects can be created, and computer graphics rendering techniques can be employed in the CGH generation to enhance image fidelity. Optical experiments have been performed using a spatial light modulator (SLM) and a fabricated high-resolution hologram; the results show that the proposed algorithm can produce high-quality reconstructions of 3D scenes with arbitrary depth information.
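
    The point-source part of such a method can be sketched as a sum of spherical waves. This is a minimal illustration, not the paper's algorithm: wavelength, pixel pitch, grid size, and the scene points are assumed toy values, and occlusion handling is omitted.

```python
import numpy as np

WAVELENGTH = 532e-9  # green laser wavelength (assumed)
PITCH = 8e-6         # hologram/SLM pixel pitch (assumed)
K_WAVE = 2 * np.pi / WAVELENGTH

def point_source_hologram(points, amplitudes, n=128):
    """Accumulate the complex field on an n x n hologram plane at z = 0:
    each 3D scene point contributes a spherical wave exp(j*k*r) / r."""
    coords = (np.arange(n) - n / 2) * PITCH
    X, Y = np.meshgrid(coords, coords)
    field = np.zeros((n, n), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * K_WAVE * r) / r
    return field
```

    An amplitude hologram would then come from interfering this field with a reference wave, e.g. `np.abs(field + ref) ** 2`; the stereogram side of the hybrid method, which enables graphics-rendered views, is not shown.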

Kevin Lai - One of the best experts on this subject based on the ideXlab platform.

  • Detection-Based Object Labeling in 3D Scenes
    International Conference on Robotics and Automation, 2012
    Co-Authors: Kevin Lai, Xiaofeng Ren, Dieter Fox
    Abstract:

    We propose a view-based approach for labeling objects in 3D scenes reconstructed from RGB-D (color + depth) videos. We utilize sliding-window detectors trained from object views to assign class probabilities to pixels in every RGB-D frame. These probabilities are projected into the reconstructed 3D scene and integrated using a voxel representation. We perform efficient inference on a Markov Random Field over the voxels, combining cues from view-based detection and 3D shape, to label the scene. Our detection-based approach produces accurate scene labeling on the RGB-D Scenes Dataset and improves the robustness of object detection.

Takeo Kanade - One of the best experts on this subject based on the ideXlab platform.

  • Modeling and Representations of Large-Scale 3D Scenes
    International Journal of Computer Vision, 2008
    Co-Authors: Zhigang Zhu, Takeo Kanade
    Abstract:

    Modeling large urban and historical scenes, both indoors and outdoors, has many applications, such as mapping, surveillance, transportation, development planning, archeology, and architecture. Research on large-scale 3D scene modeling has recently attracted increasing attention from both academia and industry, resulting in major research projects. In addition to aerial imagery, both closer-range airborne and ground video/lidar sensors are used to achieve rapid, accurate, and realistic modeling. Also critical for modeling large-scale 3D man-made urban or historical scenes is the choice of representations to accurately and properly capture fine structures, textureless regions, sharp depth changes, and occlusions. This IJCV special issue on Modeling and Representations of Large-Scale 3D Scenes is devoted to the latest research results in this interesting and challenging area. After a rigorous IJCV peer-review process, eight papers in four categories were selected:

P Anandan - One of the best experts on this subject based on the ideXlab platform.

  • A Unified Approach to Moving Object Detection in 2D and 3D Scenes
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998
    Co-Authors: Michal Irani, P Anandan
    Abstract:

    The detection of moving objects is important in many tasks. Previous approaches to this problem can be broadly divided into two classes: 2D algorithms, which apply when the scene can be approximated by a flat surface and/or when the camera is only undergoing rotations and zooms, and 3D algorithms, which work well only when significant depth variations are present in the scene and the camera is translating. We describe a unified approach to handling moving object detection in both 2D and 3D scenes, with a strategy to gracefully bridge the gap between those two extremes. Our approach is based on a stratification of the moving object detection problem into scenarios which gradually increase in their complexity. We present a set of techniques that match the above stratification, progressively increasing in complexity from 2D techniques to more complex 3D techniques. Moreover, the computations required for the solution at one complexity level become the initial processing step for the solution at the next complexity level. We illustrate these techniques using examples from real image sequences.
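
    The simplest 2D rung of such a stratification can be sketched as: estimate the dominant image motion, compensate for it, and flag pixels with large residual difference as independently moving. This is an illustrative simplification, not the authors' implementation: the dominant motion is reduced to a pure integer translation estimated by phase correlation, whereas the paper handles full 2D parametric and 3D motion models.

```python
import numpy as np

def dominant_translation(f0, f1):
    """Estimate the integer translation aligning f1 to f0 by phase
    correlation: the normalized cross-power spectrum peaks at the shift."""
    F0, F1 = np.fft.fft2(f0), np.fft.fft2(f1)
    cross = F0 * np.conj(F1)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular shifts into the signed range
    if dy > f0.shape[0] // 2: dy -= f0.shape[0]
    if dx > f0.shape[1] // 2: dx -= f0.shape[1]
    return dy, dx

def detect_moving(f0, f1, thresh=0.5):
    """Warp f1 by the dominant motion, then threshold the residual
    difference; surviving pixels violate the dominant-motion model."""
    dy, dx = dominant_translation(f0, f1)
    warped = np.roll(f1, (dy, dx), axis=(0, 1))
    return np.abs(f0 - warped) > thresh
```

    In the paper's stratification, this 2D result would then seed the more complex parallax-based 3D analysis when the residual cannot be explained by a single 2D transformation.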

  • A Unified Approach to Moving Object Detection in 2D and 3D Scenes
    International Conference on Pattern Recognition, 1996
    Co-Authors: Michal Irani, P Anandan
    Abstract:

    The detection of moving objects is important in many tasks. Previous approaches to this problem can be broadly divided into two classes: 2D algorithms, which apply when the scene can be approximated by a flat surface and/or when the camera is only undergoing rotations and zooms, and 3D algorithms, which work well only when significant depth variations are present in the scene and the camera is translating. In this paper, we describe a unified approach to handling moving object detection in both 2D and 3D scenes, with a strategy to gracefully bridge the gap between those two extremes. Our approach is based on a stratification of the moving object detection problem into scenarios and corresponding techniques which gradually increase in their complexity. Moreover, the computations required for the solution at one complexity level become the initial processing step for the solution at the next complexity level.