Scene Motion

The experts below are selected from a list of 39,714 experts worldwide, ranked by the ideXlab platform.

Michael J. Black - One of the best experts on this subject based on the ideXlab platform.

  • Optical Flow with Semantic Segmentation and Localized Layers
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
    Co-Authors: Laura Sevilla-lara, Varun Jampani, Michael J. Black
    Abstract:

    Existing optical flow methods make generic, spatially homogeneous assumptions about the spatial structure of the flow. In reality, optical flow varies across an image depending on object class. Simply put, different objects move differently. Here we exploit recent advances in static semantic scene segmentation to segment the image into objects of different types. We define different models of image motion in these regions depending on the type of object. For example, we model the motion on roads with homographies, vegetation with spatially smooth flow, and independently moving objects like cars and planes with affine motion plus deviations. We then pose the flow estimation problem using a novel formulation of localized layers, which addresses limitations of traditional layered models for dealing with complex scene motion. Our semantic flow method achieves the lowest error of any published monocular method on the KITTI-2015 flow benchmark and produces qualitatively better flow and segmentation than recent top methods on a wide range of natural videos.
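
    A minimal sketch (assumed Python/OpenCV, not the authors' implementation) of the per-class idea above: given a semantic mask and sparse keypoint matches, fit a homography to a planar region such as a road, or an affine model to an independently moving object. The function name and RANSAC thresholds are illustrative choices.

    ```python
    import cv2
    import numpy as np

    def fit_region_motion(pts_prev, pts_next, mask, model="homography"):
        """Fit a parametric motion model to matches inside one semantic region.

        pts_prev, pts_next: (N, 2) float32 arrays of matched keypoints.
        mask: (H, W) boolean array marking one class (e.g. road or car).
        """
        # Keep only matches whose source point lies inside the region.
        inside = mask[pts_prev[:, 1].astype(int), pts_prev[:, 0].astype(int)]
        src, dst = pts_prev[inside], pts_next[inside]
        if model == "homography":   # planar surfaces such as roads
            M, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        else:                       # independently moving rigid objects
            M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
        return M
    ```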

  • On the Spatial Statistics of Optical Flow
    International Journal of Computer Vision, 2007
    Co-Authors: Stefan Roth, Michael J. Black
    Abstract:

    We present an analysis of the spatial and temporal statistics of "natural" optical flow fields and a novel flow algorithm that exploits their spatial statistics. Training flow fields are constructed using range images of natural scenes and 3D camera motions recovered from hand-held and car-mounted video sequences. A detailed analysis of optical flow statistics in natural scenes is presented and machine learning methods are developed to learn a Markov random field model of optical flow. The prior probability of a flow field is formulated as a Field-of-Experts model that captures the spatial statistics in overlapping patches and is trained using contrastive divergence. This new optical flow prior is compared with previous robust priors and is incorporated into a recent, accurate algorithm for dense optical flow computation. Experiments with natural and synthetic sequences illustrate how the learned optical flow prior quantitatively improves flow accuracy and how it captures the rich spatial structure found in natural scene motion.
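
    To make the Field-of-Experts prior concrete, here is an illustrative sketch (assumed NumPy/SciPy, with random placeholder filters) of the unnormalized negative log-prior with Student-t experts over linear filter responses, the functional form Fields-of-Experts models use; in the paper, the filters and weights are learned from flow data.

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    # Random placeholder filters and weights; the paper learns these
    # from training flow fields with contrastive divergence.
    rng = np.random.default_rng(0)
    filters = [rng.standard_normal((3, 3)) for _ in range(8)]
    alphas = np.full(8, 0.5)

    def foe_neg_log_prior(flow):
        """Unnormalized -log p(flow) under Student-t experts.

        flow: (H, W, 2) array of horizontal and vertical flow components.
        Each expert i contributes alpha_i * log(1 + 0.5 * (J_i * u)^2),
        summed over both components and all patch positions.
        """
        energy = 0.0
        for comp in range(2):
            u = flow[:, :, comp]
            for J, a in zip(filters, alphas):
                resp = convolve2d(u, J, mode="valid")
                energy += a * np.log1p(0.5 * resp**2).sum()
        return energy
    ```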

  • On the Spatial Statistics of Optical Flow
    International Conference on Computer Vision, 2005
    Co-Authors: Stefan Roth, Michael J. Black
    Abstract:

    We develop a method for learning the spatial statistics of optical flow fields from a novel training database. Training flow fields are constructed using range images of natural scenes and 3D camera motions recovered from handheld and car-mounted video sequences. A detailed analysis of optical flow statistics in natural scenes is presented and machine learning methods are developed to learn a Markov random field model of optical flow. The prior probability of a flow field is formulated as a Field-of-Experts model that captures the higher order spatial statistics in overlapping patches and is trained using contrastive divergence. This new optical flow prior is compared with previous robust priors and is incorporated into a recent, accurate algorithm for dense optical flow computation. Experiments with natural and synthetic sequences illustrate how the learned optical flow prior quantitatively improves flow accuracy and how it captures the rich spatial structure found in natural scene motion.
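
    The prior above is trained with contrastive divergence. The schematic below (hypothetical names and hyperparameters) sketches a CD-1 style update of the expert weights for a single flow component; a single noisy gradient step stands in for the MCMC sampler that contrastive divergence normally uses, so treat it purely as an illustration of the idea, not the paper's training code.

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def expert_stats(u, filters):
        """Per-expert summed log(1 + 0.5 r^2) statistics for flow component u."""
        return np.array([np.log1p(0.5 * convolve2d(u, J, mode="valid")**2).sum()
                         for J in filters])

    def energy_grad(u, filters, alphas):
        """Gradient of the Field-of-Experts energy with respect to u."""
        g = np.zeros_like(u)
        for J, a in zip(filters, alphas):
            r = convolve2d(u, J, mode="valid")
            psi = a * r / (1.0 + 0.5 * r**2)       # derivative of the log term
            g += convolve2d(psi, J[::-1, ::-1], mode="full")  # transpose conv
        return g

    def cd1_update(u_data, filters, alphas, lr=1e-4, step=1e-2, rng=np.random):
        # One noisy gradient step approximates a model sample
        # (the "reconstruction" in CD-1).
        noise = np.sqrt(2.0 * step) * rng.standard_normal(u_data.shape)
        u_model = u_data - step * energy_grad(u_data, filters, alphas) + noise
        # CD gradient for the weights: data statistics minus model statistics.
        return alphas - lr * (expert_stats(u_data, filters)
                              - expert_stats(u_model, filters))
    ```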

Senthil Yogamani - One of the best experts on this subject based on the ideXlab platform.

  • FuseMODNet: Real-Time Camera and LiDAR based Moving Object Detection for Robust Low-Light Autonomous Driving
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Hazem Rashed, Mohamed Ramzy, V. Vaquero, Ahmad El Sallab, Ganesh Sistu, Senthil Yogamani
    Abstract:

    Moving object detection is a critical task for autonomous vehicles. As dynamic objects represent a higher collision risk than static ones, our own ego-trajectories have to be planned taking the future states of the moving elements of the scene into account. Motion can be perceived using temporal information such as optical flow. Conventional optical flow computation is based on camera sensors only, which makes it prone to failure in conditions with low illumination. LiDAR sensors, on the other hand, are independent of illumination, as they measure the time-of-flight of their own emitted lasers. In this work, we propose a robust and real-time CNN architecture for Moving Object Detection (MOD) under low-light conditions by capturing motion information from both camera and LiDAR sensors. We demonstrate the impact of our algorithm on the KITTI dataset, where we simulate a low-light environment to create a novel dataset, "Dark-KITTI". We obtain a 10.1% relative improvement on Dark-KITTI and a 4.25% improvement on standard KITTI relative to our baselines. The proposed algorithm runs at 18 fps on a standard desktop GPU using 256×1224 resolution images.
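
    A hypothetical PyTorch sketch of the fusion idea, not the released FuseMODNet architecture: one lightweight encoder for a stacked RGB frame pair and one for a LiDAR motion image, with their features concatenated before a two-class (static vs. moving) decoder. All layer sizes and the FusionMOD name are invented for illustration.

    ```python
    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        # Stride-2 conv halves the spatial resolution.
        return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                             nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

    class FusionMOD(nn.Module):
        def __init__(self):
            super().__init__()
            self.rgb_enc = nn.Sequential(conv_block(6, 32), conv_block(32, 64))
            self.lidar_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
            # Decoder predicts a two-class map: static vs. moving.
            self.head = nn.Sequential(
                nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
                nn.Conv2d(64, 2, 1))

        def forward(self, rgb_pair, lidar_motion):
            # rgb_pair: two stacked RGB frames, (B, 6, H, W).
            # lidar_motion: LiDAR motion rendered as an image, (B, 3, H, W).
            fused = torch.cat([self.rgb_enc(rgb_pair),
                               self.lidar_enc(lidar_motion)], dim=1)
            return self.head(fused)
    ```

    For inputs whose height and width are multiples of 4, the forward pass returns a (B, 2, H, W) logit map over static/moving classes.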

  • FuseMODNet: Real-Time Camera and LiDAR based Moving Object Detection for Robust Low-Light Autonomous Driving
    International Conference on Computer Vision, 2019
    Co-Authors: Hazem Rashed, Mohamed Ramzy, V. Vaquero, Ahmad El Sallab, Ganesh Sistu, Senthil Yogamani
    Abstract:

    Moving object detection is a critical task for autonomous vehicles. As dynamic objects represent a higher collision risk than static ones, our own ego-trajectories have to be planned taking the future states of the moving elements of the scene into account. Motion can be perceived using temporal information such as optical flow. Conventional optical flow computation is based on camera sensors only, which makes it prone to failure in conditions with low illumination. LiDAR sensors, on the other hand, are independent of illumination, as they measure the time-of-flight of their own emitted lasers. In this work, we propose a robust and real-time CNN architecture for Moving Object Detection (MOD) under low-light conditions by capturing motion information from both camera and LiDAR sensors. We demonstrate the impact of our algorithm on the KITTI dataset, where we simulate a low-light environment to create a novel dataset, "Dark-KITTI". We obtain a 10.1% relative improvement on Dark-KITTI and a 4.25% improvement on standard KITTI relative to our baselines. The proposed algorithm runs at 29 fps on a standard desktop GPU using 256×1224 resolution images.
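
    The abstract says only that a low-light environment is simulated on KITTI; the exact Dark-KITTI generation procedure is not given here, so the snippet below is a crude stand-in that darkens an image with a gamma curve, a gain, and additive noise.

    ```python
    import numpy as np

    def darken(img, gamma=3.0, gain=0.4, noise_std=0.02, seed=0):
        """Darken an RGB image in [0, 1]: gamma curve, gain, sensor noise."""
        rng = np.random.default_rng(seed)
        low = gain * np.power(img, gamma)              # crush mid-tones and dim
        low += rng.normal(0.0, noise_std, img.shape)   # approximate sensor noise
        return np.clip(low, 0.0, 1.0)
    ```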

Hazem Rashed - One of the best experts on this subject based on the ideXlab platform.

  • FuseMODNet: Real-Time Camera and LiDAR based Moving Object Detection for Robust Low-Light Autonomous Driving
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Hazem Rashed, Mohamed Ramzy, V. Vaquero, Ahmad El Sallab, Ganesh Sistu, Senthil Yogamani
    Abstract:

    Moving object detection is a critical task for autonomous vehicles. As dynamic objects represent a higher collision risk than static ones, our own ego-trajectories have to be planned taking the future states of the moving elements of the scene into account. Motion can be perceived using temporal information such as optical flow. Conventional optical flow computation is based on camera sensors only, which makes it prone to failure in conditions with low illumination. LiDAR sensors, on the other hand, are independent of illumination, as they measure the time-of-flight of their own emitted lasers. In this work, we propose a robust and real-time CNN architecture for Moving Object Detection (MOD) under low-light conditions by capturing motion information from both camera and LiDAR sensors. We demonstrate the impact of our algorithm on the KITTI dataset, where we simulate a low-light environment to create a novel dataset, "Dark-KITTI". We obtain a 10.1% relative improvement on Dark-KITTI and a 4.25% improvement on standard KITTI relative to our baselines. The proposed algorithm runs at 18 fps on a standard desktop GPU using 256×1224 resolution images.

  • FuseMODNet: Real-Time Camera and LiDAR based Moving Object Detection for Robust Low-Light Autonomous Driving
    International Conference on Computer Vision, 2019
    Co-Authors: Hazem Rashed, Mohamed Ramzy, V. Vaquero, Ahmad El Sallab, Ganesh Sistu, Senthil Yogamani
    Abstract:

    Moving object detection is a critical task for autonomous vehicles. As dynamic objects represent a higher collision risk than static ones, our own ego-trajectories have to be planned taking the future states of the moving elements of the scene into account. Motion can be perceived using temporal information such as optical flow. Conventional optical flow computation is based on camera sensors only, which makes it prone to failure in conditions with low illumination. LiDAR sensors, on the other hand, are independent of illumination, as they measure the time-of-flight of their own emitted lasers. In this work, we propose a robust and real-time CNN architecture for Moving Object Detection (MOD) under low-light conditions by capturing motion information from both camera and LiDAR sensors. We demonstrate the impact of our algorithm on the KITTI dataset, where we simulate a low-light environment to create a novel dataset, "Dark-KITTI". We obtain a 10.1% relative improvement on Dark-KITTI and a 4.25% improvement on standard KITTI relative to our baselines. The proposed algorithm runs at 29 fps on a standard desktop GPU using 256×1224 resolution images.

Stefan Roth - One of the best experts on this subject based on the ideXlab platform.

  • On the Spatial Statistics of Optical Flow
    International Journal of Computer Vision, 2007
    Co-Authors: Stefan Roth, Michael J. Black
    Abstract:

    We present an analysis of the spatial and temporal statistics of "natural" optical flow fields and a novel flow algorithm that exploits their spatial statistics. Training flow fields are constructed using range images of natural scenes and 3D camera motions recovered from hand-held and car-mounted video sequences. A detailed analysis of optical flow statistics in natural scenes is presented and machine learning methods are developed to learn a Markov random field model of optical flow. The prior probability of a flow field is formulated as a Field-of-Experts model that captures the spatial statistics in overlapping patches and is trained using contrastive divergence. This new optical flow prior is compared with previous robust priors and is incorporated into a recent, accurate algorithm for dense optical flow computation. Experiments with natural and synthetic sequences illustrate how the learned optical flow prior quantitatively improves flow accuracy and how it captures the rich spatial structure found in natural scene motion.

  • On the Spatial Statistics of Optical Flow
    International Conference on Computer Vision, 2005
    Co-Authors: Stefan Roth, Michael J. Black
    Abstract:

    We develop a method for learning the spatial statistics of optical flow fields from a novel training database. Training flow fields are constructed using range images of natural scenes and 3D camera motions recovered from handheld and car-mounted video sequences. A detailed analysis of optical flow statistics in natural scenes is presented and machine learning methods are developed to learn a Markov random field model of optical flow. The prior probability of a flow field is formulated as a Field-of-Experts model that captures the higher order spatial statistics in overlapping patches and is trained using contrastive divergence. This new optical flow prior is compared with previous robust priors and is incorporated into a recent, accurate algorithm for dense optical flow computation. Experiments with natural and synthetic sequences illustrate how the learned optical flow prior quantitatively improves flow accuracy and how it captures the rich spatial structure found in natural scene motion.

Ganesh Sistu - One of the best experts on this subject based on the ideXlab platform.

  • FuseMODNet: Real-Time Camera and LiDAR based Moving Object Detection for Robust Low-Light Autonomous Driving
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Hazem Rashed, Mohamed Ramzy, V. Vaquero, Ahmad El Sallab, Ganesh Sistu, Senthil Yogamani
    Abstract:

    Moving object detection is a critical task for autonomous vehicles. As dynamic objects represent a higher collision risk than static ones, our own ego-trajectories have to be planned taking the future states of the moving elements of the scene into account. Motion can be perceived using temporal information such as optical flow. Conventional optical flow computation is based on camera sensors only, which makes it prone to failure in conditions with low illumination. LiDAR sensors, on the other hand, are independent of illumination, as they measure the time-of-flight of their own emitted lasers. In this work, we propose a robust and real-time CNN architecture for Moving Object Detection (MOD) under low-light conditions by capturing motion information from both camera and LiDAR sensors. We demonstrate the impact of our algorithm on the KITTI dataset, where we simulate a low-light environment to create a novel dataset, "Dark-KITTI". We obtain a 10.1% relative improvement on Dark-KITTI and a 4.25% improvement on standard KITTI relative to our baselines. The proposed algorithm runs at 18 fps on a standard desktop GPU using 256×1224 resolution images.

  • FuseMODNet: Real-Time Camera and LiDAR based Moving Object Detection for Robust Low-Light Autonomous Driving
    International Conference on Computer Vision, 2019
    Co-Authors: Hazem Rashed, Mohamed Ramzy, V. Vaquero, Ahmad El Sallab, Ganesh Sistu, Senthil Yogamani
    Abstract:

    Moving object detection is a critical task for autonomous vehicles. As dynamic objects represent a higher collision risk than static ones, our own ego-trajectories have to be planned taking the future states of the moving elements of the scene into account. Motion can be perceived using temporal information such as optical flow. Conventional optical flow computation is based on camera sensors only, which makes it prone to failure in conditions with low illumination. LiDAR sensors, on the other hand, are independent of illumination, as they measure the time-of-flight of their own emitted lasers. In this work, we propose a robust and real-time CNN architecture for Moving Object Detection (MOD) under low-light conditions by capturing motion information from both camera and LiDAR sensors. We demonstrate the impact of our algorithm on the KITTI dataset, where we simulate a low-light environment to create a novel dataset, "Dark-KITTI". We obtain a 10.1% relative improvement on Dark-KITTI and a 4.25% improvement on standard KITTI relative to our baselines. The proposed algorithm runs at 29 fps on a standard desktop GPU using 256×1224 resolution images.