Scene Structure

14,000,000 Leading Edge Experts on the ideXlab platform

The experts below are selected from a list of 40710 experts worldwide, ranked by the ideXlab platform.

Christoph Schnörr - One of the best experts on this subject based on the ideXlab platform.

  • GCPR - Estimating Vehicle Ego-Motion and Piecewise Planar Scene Structure from Optical Flow in a Continuous Framework
    Lecture Notes in Computer Science, 2015
    Co-Authors: Andreas Neufeld, Johannes Berger, Florian Becker, Frank Lenzen, Christoph Schnörr
    Abstract:

    We propose a variational approach for estimating egomotion and the structure of a static scene from a pair of images recorded by a single moving camera. In our approach the scene structure is described by a set of 3D planar surfaces, which are linked to a SLIC superpixel decomposition of the image domain. The continuously parametrized planes are determined along with the extrinsic camera parameters by jointly minimizing a smooth non-convex objective function that comprises a data term, based on pre-calculated optical flow between the input images, and suitable priors on the scene variables. Our experiments demonstrate that our approach estimates egomotion and scene structure with high quality, reaching the accuracy of state-of-the-art stereo methods while relying on a single sensor that is more cost-efficient for autonomous systems.
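
    The geometric core of this line of work is that a 3D plane seen by a moving camera induces a homography, and hence a predictable optical-flow field; candidate plane parameters and egomotion can therefore be scored against a precomputed flow. The sketch below is a generic plane-induced-flow illustration, not the authors' objective function; the intrinsics `K`, motion `(R, t)`, and plane parameters `(n, d)` are hypothetical inputs.

    ```python
    import numpy as np

    def plane_induced_flow(pixels, K, R, t, n, d):
        """Optical flow induced at `pixels` (N x 2 array) by the plane n^T X = d
        under camera motion (R, t): points on the plane map between the two views
        via the homography H = K (R - t n^T / d) K^{-1}, and the flow is H(x) - x."""
        H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
        x1 = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous coords
        x2 = x1 @ H.T
        x2 = x2[:, :2] / x2[:, 2:3]                          # dehomogenize
        return x2 - pixels

    # A data term in the spirit of the abstract would penalize the residual
    # between this predicted flow and the pre-calculated flow on each superpixel.
    ```

    The model is easy to sanity-check: zero motion induces zero flow, and a pure sideways translation against a fronto-parallel plane shifts all pixels uniformly by an amount inversely proportional to the plane depth.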

  • Variational Recursive Joint Estimation of Dense Scene Structure and Camera Motion from Monocular High Speed Traffic Sequences
    International Conference on Computer Vision, 2011
    Co-Authors: Florian Becker, Frank Lenzen, Jörg Hendrik Kappes, Christoph Schnörr
    Abstract:

    We present an approach to jointly estimating camera motion and dense scene structure, in terms of depth maps, from monocular image sequences in driver-assistance scenarios. For two consecutive frames of a sequence taken with a single fast-moving camera, the approach combines numerical estimation of egomotion on the Euclidean manifold of motion parameters with variational regularization of dense depth-map estimation. Embedding this online joint estimator into a recursive framework achieves a pronounced spatio-temporal filtering effect and robustness. We report an evaluation on thousands of images taken from a car moving at speeds up to 100 km/h. The results compare favorably with two alternative settings that require more input data: stereo-based scene reconstruction, and camera motion estimation in batch mode using multiple frames. The employed benchmark dataset is publicly available.
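
    The recursive element of such a pipeline, propagating the previous frame's depth estimate and fusing it with the current measurement, can be illustrated with a deliberately simplified per-pixel filter. This is a hedged stand-in for the paper's variational spatio-temporal regularization; the blending weight `alpha` and the confidence map are hypothetical.

    ```python
    import numpy as np

    def recursive_depth_update(predicted_depth, measured_depth, confidence, alpha=0.7):
        """One step of a recursive depth filter: blend the depth map predicted from
        the previous frame with the new per-pixel measurement, trusting the
        measurement in proportion to its confidence in [0, 1]. Repeating this every
        frame yields a temporal filtering effect loosely analogous to the one
        described in the abstract (the spatial regularization is omitted here)."""
        w = alpha * np.clip(confidence, 0.0, 1.0)
        return (1.0 - w) * predicted_depth + w * measured_depth
    ```

    With full confidence and `alpha=1.0` the filter simply adopts the new measurement; with zero confidence it keeps the prediction, which is the behavior that stabilizes depth in poorly textured regions.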

Florian Becker - One of the best experts on this subject based on the ideXlab platform.

  • GCPR - Estimating Vehicle Ego-Motion and Piecewise Planar Scene Structure from Optical Flow in a Continuous Framework
    Lecture Notes in Computer Science, 2015
    Co-Authors: Andreas Neufeld, Johannes Berger, Florian Becker, Frank Lenzen, Christoph Schnörr
    Abstract:

    We propose a variational approach for estimating egomotion and the structure of a static scene from a pair of images recorded by a single moving camera. In our approach the scene structure is described by a set of 3D planar surfaces, which are linked to a SLIC superpixel decomposition of the image domain. The continuously parametrized planes are determined along with the extrinsic camera parameters by jointly minimizing a smooth non-convex objective function that comprises a data term, based on pre-calculated optical flow between the input images, and suitable priors on the scene variables. Our experiments demonstrate that our approach estimates egomotion and scene structure with high quality, reaching the accuracy of state-of-the-art stereo methods while relying on a single sensor that is more cost-efficient for autonomous systems.

  • Variational Recursive Joint Estimation of Dense Scene Structure and Camera Motion from Monocular High Speed Traffic Sequences
    International Conference on Computer Vision, 2011
    Co-Authors: Florian Becker, Frank Lenzen, Jörg Hendrik Kappes, Christoph Schnörr
    Abstract:

    We present an approach to jointly estimating camera motion and dense scene structure, in terms of depth maps, from monocular image sequences in driver-assistance scenarios. For two consecutive frames of a sequence taken with a single fast-moving camera, the approach combines numerical estimation of egomotion on the Euclidean manifold of motion parameters with variational regularization of dense depth-map estimation. Embedding this online joint estimator into a recursive framework achieves a pronounced spatio-temporal filtering effect and robustness. We report an evaluation on thousands of images taken from a car moving at speeds up to 100 km/h. The results compare favorably with two alternative settings that require more input data: stereo-based scene reconstruction, and camera motion estimation in batch mode using multiple frames. The employed benchmark dataset is publicly available.

Robert Pless - One of the best experts on this subject based on the ideXlab platform.

  • Two Cloud-Based Cues for Estimating Scene Structure and Camera Calibration
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013
    Co-Authors: Nathan Jacobs, Austin Abrams, Robert Pless
    Abstract:

    We describe algorithms that use cloud shadows as a form of stochastically structured light to support 3D scene geometry estimation. Taking video captured from a static outdoor camera as input, we use the relationship between the intensity time series of pairs of pixels as the primary input to our algorithms. We describe two cues that relate the 3D distance between a pair of points to the pair of intensity time series. The first cue results from the fact that two points that are nearby in the world are more likely to be under a cloud at the same time than two distant points. We describe methods for using this cue to estimate focal length and scene structure. The second cue is based on the motion of cloud shadows across the scene; it results in a set of linear constraints on scene structure. These constraints have an inherent ambiguity, which we show how to overcome by combining the cloud-motion cue with the spatial cue. We evaluate our method on several time lapses of real outdoor scenes.
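
    The first cue, that nearby points tend to be shadowed at the same time, reduces in its simplest form to correlating per-pixel intensity time series. A minimal sketch follows; the array layout and the use of raw correlation as the closeness score are my assumptions, not the authors' exact estimator.

    ```python
    import numpy as np

    def temporal_affinity(video):
        """Pairwise correlation of per-pixel intensity time series.

        `video` has shape (T, N): T frames, N (flattened) pixels. Two pixels
        that are often under the same cloud shadow have strongly correlated
        series, so a high entry in the returned (N, N) matrix is a soft cue
        that the two scene points are close in the world."""
        X = video - video.mean(axis=0)              # zero-mean each pixel's series
        X = X / (np.linalg.norm(X, axis=0) + 1e-12)
        return X.T @ X                              # entries in [-1, 1]
    ```

    Turning these affinities into metric structure still requires the calibration and ambiguity-resolution steps the abstract describes; this block only computes the raw cue.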

  • Using Cloud Shadows to Infer Scene Structure and Camera Calibration
    Computer Vision and Pattern Recognition, 2010
    Co-Authors: Nathan Jacobs, Brian Bies, Robert Pless
    Abstract:

    We explore the use of clouds as a form of structured lighting to capture the 3D structure of outdoor scenes observed over time from a static camera. We derive two cues that relate 3D distances to changes in pixel intensity due to cloud shadows. The first cue is primarily spatial, works with low frame-rate time lapses, and supports estimating focal length and scene structure up to a scale ambiguity. The second cue depends on cloud motion and has a more complex, but still linear, ambiguity. We describe a method that uses the spatial cue to estimate a depth map and a method that combines both cues. Results on time lapses of several outdoor scenes show that these cues enable estimating scene geometry and camera focal length.
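
    The cloud-motion cue can be illustrated by estimating the delay with which the same shadow pattern arrives at one pixel relative to another: under an assumed shadow velocity, that delay is linear in the displacement between the two ground points along the motion direction, which is the flavor of linear constraint the abstract mentions. The peak-of-cross-correlation estimator below is a hypothetical simplification, not the paper's formulation.

    ```python
    import numpy as np

    def shadow_delay(series_a, series_b):
        """Frame delay at which pixel b sees the shadow pattern that pixel a
        saw, found as the peak of the zero-mean cross-correlation of the two
        intensity time series. A positive value means b lags a."""
        a = series_a - series_a.mean()
        b = series_b - series_b.mean()
        corr = np.correlate(b, a, mode="full")    # index len(a) - 1 is zero lag
        return int(np.argmax(corr)) - (len(a) - 1)
    ```

    Multiplying such delays by an assumed cloud velocity yields one linear equation per pixel pair; the remaining ambiguity is what the spatial cue is combined with to resolve.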

Miaomiao Liu - One of the best experts on this subject based on the ideXlab platform.

  • Indoor Scene Structure Analysis for Single Image Depth Estimation
    Computer Vision and Pattern Recognition, 2015
    Co-Authors: Wei Zhuo, Mathieu Salzmann, Miaomiao Liu
    Abstract:

    We tackle the problem of single-image depth estimation, which, without additional knowledge, suffers from many ambiguities. Unlike previous approaches that only reason locally, we propose to exploit the global structure of the scene to estimate its depth. To this end, we introduce a hierarchical representation of the scene, which models local depth jointly with mid-level and global scene structures. We formulate single-image depth estimation as inference in a graphical model whose edges let us encode the interactions within and across the different layers of our hierarchy. Our method therefore still produces detailed depth estimates, but also leverages higher-level information about the scene. We demonstrate the benefits of our approach over local depth estimation methods on standard indoor datasets.
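
    The coupling of local depth evidence with higher-level structure can be sketched with a toy quadratic model: per-region depth unaries plus a pairwise smoothness prior over neighboring regions, solved exactly as a linear system. This is a generic illustration of inference in a depth graphical model, far simpler than the paper's hierarchical formulation; the chain topology and the weight `lam` are assumptions.

    ```python
    import numpy as np

    def fuse_depth_with_smoothness(local_depth, lam=1.0):
        """Minimize  sum_i (d_i - local_i)^2 + lam * sum_i (d_{i+1} - d_i)^2
        over a 1-D chain of regions. Because the objective is quadratic, the
        optimum solves the linear system (I + lam * L) d = local_depth, where
        L is the graph Laplacian of the chain."""
        n = len(local_depth)
        L = np.zeros((n, n))
        for i in range(n - 1):                    # assemble the chain Laplacian
            L[i, i] += 1.0
            L[i + 1, i + 1] += 1.0
            L[i, i + 1] -= 1.0
            L[i + 1, i] -= 1.0
        return np.linalg.solve(np.eye(n) + lam * L, local_depth)
    ```

    With `lam=0` the local estimates pass through unchanged, and as `lam` grows the solution is pulled toward piecewise-smooth depth, mimicking how higher layers of a hierarchy regularize the local ones.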

Frank Lenzen - One of the best experts on this subject based on the ideXlab platform.

  • GCPR - Estimating Vehicle Ego-Motion and Piecewise Planar Scene Structure from Optical Flow in a Continuous Framework
    Lecture Notes in Computer Science, 2015
    Co-Authors: Andreas Neufeld, Johannes Berger, Florian Becker, Frank Lenzen, Christoph Schnörr
    Abstract:

    We propose a variational approach for estimating egomotion and the structure of a static scene from a pair of images recorded by a single moving camera. In our approach the scene structure is described by a set of 3D planar surfaces, which are linked to a SLIC superpixel decomposition of the image domain. The continuously parametrized planes are determined along with the extrinsic camera parameters by jointly minimizing a smooth non-convex objective function that comprises a data term, based on pre-calculated optical flow between the input images, and suitable priors on the scene variables. Our experiments demonstrate that our approach estimates egomotion and scene structure with high quality, reaching the accuracy of state-of-the-art stereo methods while relying on a single sensor that is more cost-efficient for autonomous systems.

  • Variational Recursive Joint Estimation of Dense Scene Structure and Camera Motion from Monocular High Speed Traffic Sequences
    International Conference on Computer Vision, 2011
    Co-Authors: Florian Becker, Frank Lenzen, Jörg Hendrik Kappes, Christoph Schnörr
    Abstract:

    We present an approach to jointly estimating camera motion and dense scene structure, in terms of depth maps, from monocular image sequences in driver-assistance scenarios. For two consecutive frames of a sequence taken with a single fast-moving camera, the approach combines numerical estimation of egomotion on the Euclidean manifold of motion parameters with variational regularization of dense depth-map estimation. Embedding this online joint estimator into a recursive framework achieves a pronounced spatio-temporal filtering effect and robustness. We report an evaluation on thousands of images taken from a car moving at speeds up to 100 km/h. The results compare favorably with two alternative settings that require more input data: stereo-based scene reconstruction, and camera motion estimation in batch mode using multiple frames. The employed benchmark dataset is publicly available.
