Moving Camera

The experts below were selected from a list of 37,254 experts worldwide, ranked by the ideXlab platform.

Raj Rao Nadakuditi - One of the best experts on this subject based on the ideXlab platform.

  • Panoramic Robust PCA for Foreground-Background Separation on Noisy, Free-Motion Camera Video
    IEEE Transactions on Computational Imaging, 2019
    Co-Authors: Brian E Moore, Chen Gao, Raj Rao Nadakuditi
    Abstract:

    This paper presents a new robust PCA method for foreground-background separation on freely moving camera video with possible dense and sparse corruptions. Our proposed method registers the frames of the corrupted video and then encodes the varying perspective arising from camera motion as missing data in a global model. This formulation allows our algorithm to produce a panoramic background component that automatically stitches together corrupted data from partially overlapping frames to reconstruct the full field of view. We model the registered video as the sum of a low-rank component that captures the background, a smooth component that captures the dynamic foreground of the scene, and a sparse component that isolates possible outliers and other sparse corruptions in the video. The low-rank portion of our model is based on a recent low-rank matrix estimator (OptShrink) that has been shown to yield superior low-rank subspace estimates in practice. To estimate the smooth foreground component of our model, we use a weighted total variation framework that enables our method to reliably decouple the true foreground of the video from sparse corruptions. We perform extensive numerical experiments on both static and moving camera video subject to a variety of dense and sparse corruptions. Our experiments demonstrate the state-of-the-art performance of our proposed method compared to existing methods in terms of both foreground and background estimation accuracy.
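
To make the model concrete, here is a minimal NumPy sketch of the kind of masked low-rank-plus-sparse decomposition the abstract describes. It assumes the registered frames have already been vectorized into a matrix Y with a boolean observation mask M (missing entries encode the varying field of view), substitutes plain singular value thresholding for OptShrink, and omits the smooth TV-regularized foreground component for brevity; all names are illustrative, not the authors' code.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: a simple stand-in for OptShrink."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def masked_rpca(Y, M, lam=0.05, tau=1.0, n_iter=100):
    """Decompose a masked video matrix Y ~ L + S on observed entries M.

    Y : (pixels, frames) registered video matrix, zeros where unobserved.
    M : boolean mask of observed entries (missing data from registration).
    The paper additionally estimates a smooth, TV-regularized foreground;
    this sketch keeps only the low-rank + sparse split for brevity.
    """
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        # Low-rank update: impute unobserved entries with the current L.
        R = np.where(M, Y - S, L)
        L = svt(R, tau)
        # Sparse update acts only on observed entries.
        S = np.where(M, soft_threshold(Y - L, lam), 0.0)
    return L, S
```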

  • Panoramic Robust PCA for Foreground-Background Separation on Noisy, Free-Motion Camera Video
    arXiv: Machine Learning, 2017
    Co-Authors: Brian E Moore, Chen Gao, Raj Rao Nadakuditi
    Abstract:

    This work presents a new robust PCA method for foreground-background separation on freely moving camera video with possible dense and sparse corruptions. Our proposed method registers the frames of the corrupted video and then encodes the varying perspective arising from camera motion as missing data in a global model. This formulation allows our algorithm to produce a panoramic background component that automatically stitches together corrupted data from partially overlapping frames to reconstruct the full field of view. We model the registered video as the sum of a low-rank component that captures the background, a smooth component that captures the dynamic foreground of the scene, and a sparse component that isolates possible outliers and other sparse corruptions in the video. The low-rank portion of our model is based on a recent low-rank matrix estimator (OptShrink) that has been shown to yield superior low-rank subspace estimates in practice. To estimate the smooth foreground component of our model, we use a weighted total variation framework that enables our method to reliably decouple the true foreground of the video from sparse corruptions. We perform extensive numerical experiments on both static and moving camera video subject to a variety of dense and sparse corruptions. Our experiments demonstrate the state-of-the-art performance of our proposed method compared to existing methods in terms of both foreground and background estimation accuracy.
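
As a complement to the decomposition sketch above, the weighted total variation step used to estimate the smooth foreground can be approximated per frame as below. This is a hedged sketch, not the authors' solver: it minimizes a Charbonnier-smoothed weighted TV objective by plain gradient descent, with crude wrap-around boundary handling; F (a foreground frame) and W (a per-pixel weight map that downweights suspected corruptions) are illustrative names.

```python
import numpy as np

def weighted_tv_denoise(F, W, lam=0.1, step=0.2, n_iter=200, eps=1e-3):
    """Smooth a foreground frame F under a weighted TV penalty.

    Minimizes 0.5*||X - F||^2 + lam * sum_ij W_ij * |grad X|_ij by
    gradient descent on a Charbonnier-smoothed TV term.
    """
    X = F.astype(float).copy()
    for _ in range(n_iter):
        dx = np.diff(X, axis=1, append=X[:, -1:])   # forward differences
        dy = np.diff(X, axis=0, append=X[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps**2)       # smoothed gradient norm
        px, py = W * dx / mag, W * dy / mag
        # Discrete divergence of the weighted, normalized gradient field
        # (boundaries handled crudely via roll).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        X -= step * ((X - F) - lam * div)           # gradient step
    return X
```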

  • Augmented Robust PCA for Foreground-Background Separation on Noisy, Moving Camera Video
    IEEE Global Conference on Signal and Information Processing, 2017
    Co-Authors: Chen Gao, Brian E Moore, Raj Rao Nadakuditi
    Abstract:

    This work presents a novel approach for robust PCA with total variation regularization for foreground-background separation and denoising on noisy, moving camera video. Our proposed algorithm registers the raw (possibly corrupted) frames of a video and then jointly processes the registered frames to produce a decomposition of the scene into a low-rank background component that captures the static components of the scene, a smooth foreground component that captures the dynamic components of the scene, and a sparse component that isolates corruptions. Unlike existing methods, our proposed algorithm produces a panoramic low-rank component that spans the entire field of view, automatically stitching together corrupted data from partially overlapping scenes. The low-rank portion of our robust PCA model is based on a recently discovered optimal low-rank matrix estimator (OptShrink) that requires no parameter tuning. We demonstrate the performance of our algorithm on both static and moving camera videos corrupted by noise and outliers.
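
The OptShrink estimator referenced here replaces hard or soft singular value thresholding with data-driven shrinkage weights, which is why no tuning is needed beyond a target rank. Below is a simplified sketch restricted to the square-matrix case, where the D-transform factors coincide and the weight reduces to -phi(s_i)/phi'(s_i); the exact estimator in Nadakuditi's paper handles rectangular matrices with two distinct D-transform factors, so treat this as an assumption-laden illustration.

```python
import numpy as np

def optshrink_square(X, r):
    """OptShrink-style rank-r estimate for a square data matrix X.

    The top-r singular values are replaced by weights derived from the
    D-transform of the remaining "noise" singular values. For square X,
    D(z) = phi(z)^2, so w_i = -2*D(s_i)/D'(s_i) = -phi(s_i)/phi'(s_i).
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    sig, noise = s[:r], s[r:]
    k = noise.size

    def phi(z):
        return np.sum(z / (z**2 - noise**2)) / k

    def dphi(z):   # analytic derivative of phi
        return np.sum(-(z**2 + noise**2) / (z**2 - noise**2) ** 2) / k

    w = np.array([-phi(si) / dphi(si) for si in sig])  # shrunken values
    return (U[:, :r] * w) @ Vt[:r, :]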

  • Augmented Robust PCA for Foreground-Background Separation on Noisy, Moving Camera Video
    arXiv: Machine Learning, 2017
    Co-Authors: Chen Gao, Brian E Moore, Raj Rao Nadakuditi
    Abstract:

    This work presents a novel approach for robust PCA with total variation regularization for foreground-background separation and denoising on noisy, moving camera video. Our proposed algorithm registers the raw (possibly corrupted) frames of a video and then jointly processes the registered frames to produce a decomposition of the scene into a low-rank background component that captures the static components of the scene, a smooth foreground component that captures the dynamic components of the scene, and a sparse component that can isolate corruptions and other non-idealities. Unlike existing methods, our proposed algorithm produces a panoramic low-rank component that spans the entire field of view, automatically stitching together corrupted data from partially overlapping scenes. The low-rank portion of our robust PCA model is based on a recently discovered optimal low-rank matrix estimator (OptShrink) that requires no parameter tuning. We demonstrate the performance of our algorithm on both static and moving camera videos corrupted by noise and outliers.

Mubarak Shah - One of the best experts on this subject based on the ideXlab platform.

  • Detection of Independently Moving Objects in Non-Planar Scenes via Multi-Frame Monocular Epipolar Constraint
    European Conference on Computer Vision, 2012
    Co-Authors: Soumyabrata Dey, Vladimir Reilly, Imran Saleemi, Mubarak Shah
    Abstract:

    In this paper, we present a novel approach for the detection of independently moving foreground objects in non-planar scenes captured by a moving camera. We avoid the traditional assumptions that the stationary background of the scene is planar, that it can be approximated by dominant single or multiple planes, or that the camera used to capture the video is orthographic. Instead, we utilize a multi-frame monocular epipolar constraint on camera motion, derived for a moving monocular camera and defined by an evolving epipolar plane between the moving camera center and the 3D scene points. This constraint is parameterized as a polynomial function of time and, unlike repeated computations of the inter-frame fundamental matrix, requires the estimation of fewer unknowns and provides a more consistent separation between moving and static objects at different noise levels. This constraint allows us to segment out moving objects in a general 3D scene where other approaches fail because their initial assumptions do not hold, and it provides a natural way of fusing temporal information across multiple frames. We use a combination of optical flow and particle advection to capture all motion in the video across a number of frames, in the form of particle trajectories. We then apply the derived multi-frame epipolar constraint to these trajectories to determine which trajectories violate it, thus segmenting out the independently moving objects. We show superior results on a number of moving camera sequences observing non-planar scenes, where other methods fail.
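
For intuition, the sketch below scores trajectories with the ordinary pairwise epipolar constraint (Sampson distance under a RANSAC-estimated fundamental matrix per frame pair), which is exactly the repeated inter-frame computation the paper avoids by fitting a single polynomial-in-time constraint. The trajectory layout and threshold-free scoring are illustrative assumptions, and RANSAC is assumed to be dominated by the static background.

```python
import numpy as np
import cv2

def epipolar_violation_scores(tracks):
    """Score particle trajectories by epipolar-constraint violation.

    tracks : (N, T, 2) array of N trajectories over T frames. For each
    consecutive frame pair, a fundamental matrix is fit with RANSAC and
    the Sampson distance of every trajectory is accumulated; high scores
    suggest independently moving objects.
    """
    N, T, _ = tracks.shape
    scores = np.zeros(N)
    for t in range(T - 1):
        p1 = tracks[:, t].astype(np.float32)
        p2 = tracks[:, t + 1].astype(np.float32)
        F, _ = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC)
        if F is None:
            continue
        h1 = np.hstack([p1, np.ones((N, 1), np.float32)])
        h2 = np.hstack([p2, np.ones((N, 1), np.float32)])
        Fx1 = h1 @ F.T                 # epipolar lines of p1 in image 2
        Ftx2 = h2 @ F                  # epipolar lines of p2 in image 1
        num = np.sum(h2 * Fx1, axis=1) ** 2
        den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
        scores += num / (den + 1e-12)  # Sampson distance per trajectory
    return scores / (T - 1)
```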

  • City-Scale Geo-Spatial Trajectory Estimation of a Moving Camera
    Computer Vision and Pattern Recognition, 2012
    Co-Authors: Gonzalo Vaca-Castano, Amir Roshan Zamir, Mubarak Shah
    Abstract:

    This paper presents a novel method for estimating the geospatial trajectory of a moving camera with unknown intrinsic parameters in a city-scale urban environment. The proposed method is based on a three-step process: 1) finding the best visual matches of individual images to a dataset of geo-referenced street view images, 2) Bayesian tracking to estimate the frame localization and its temporal evolution, and 3) a trajectory reconstruction algorithm to eliminate inconsistent estimations. In the first step, matching features in the query image with the features in the reference geo-tagged images yields a distribution of geolocated votes, which is interpreted as the likelihood of the location (latitude and longitude) given the current observation. In the second step, a Bayesian tracking framework is used to estimate the temporal evolution of the frame geolocalization based on the previous state probabilities and the current likelihood. Finally, once a trajectory is estimated, we apply a minimum spanning tree (MST) based trajectory reconstruction algorithm to eliminate trajectory loops and noisy estimations. The proposed method was tested on sixty minutes of video, which included footage downloaded from YouTube and footage captured by random users in Orlando and Pittsburgh.
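
Steps 1 and 2 can be illustrated with a simple histogram Bayes filter over a latitude/longitude grid: blur the posterior to model motion, multiply by the per-frame vote likelihood, and take the MAP cell. The vote_maps preprocessing (histogramming the geolocations of matched street-view features) is an assumed input, and the paper's MST-based loop removal (step 3) is omitted here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bayes_track(vote_maps, motion_sigma=2.0):
    """Histogram Bayes filter over a lat/lon grid.

    vote_maps : (T, H, W) per-frame likelihoods built by histogramming
    the geolocations of matched reference features (hypothetical input).
    """
    T, H, W = vote_maps.shape
    belief = np.full((H, W), 1.0 / (H * W))            # uniform prior
    track = []
    for t in range(T):
        belief = gaussian_filter(belief, motion_sigma)  # predict (random walk)
        belief *= vote_maps[t] + 1e-9                   # update with votes
        belief /= belief.sum()                          # renormalize
        track.append(np.unravel_index(belief.argmax(), belief.shape))
    return track  # MAP grid cell per frame
```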

  • Action Recognition in Videos Acquired by a Moving Camera Using Motion Decomposition of Lagrangian Particle Trajectories
    International Conference on Computer Vision, 2011
    Co-Authors: Omar Oreifej, Mubarak Shah
    Abstract:

    Recognition of human actions in a video acquired by a moving camera typically requires standard preprocessing steps such as motion compensation, moving object detection, and object tracking. The errors from the motion compensation step propagate to the object detection stage, resulting in missed detections, which further complicate the tracking stage, resulting in cluttered and incorrect tracks. Therefore, action recognition from a moving camera is considered very challenging. In this paper, we propose a novel approach which does not follow the standard steps and accordingly avoids the aforementioned difficulties. Our approach is based on Lagrangian particle trajectories, which are a set of dense trajectories obtained by advecting optical flow over time, thus capturing the ensemble motions of a scene. This is done in the frames of the unaligned video, and no object detection is required. In order to handle the moving camera, we propose a novel approach based on low-rank optimization, in which we decompose the trajectories into their camera-induced and object-induced components. Having obtained the relevant object motion trajectories, we compute a compact set of chaotic invariant features which captures the characteristics of the trajectories. Finally, an SVM is employed to learn and recognize the human actions using the computed motion features. We performed extensive experiments on multiple benchmark datasets and two new aerial datasets called ARG and APHill, and obtained promising results.
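
A stripped-down version of the camera/object motion decomposition can be written as a low-rank-plus-sparse split of the trajectory matrix: stack trajectory coordinates as columns, model the camera-induced component as approximately rank-3, and take the sparse residual as object motion. This alternating heuristic is a hedged stand-in for the paper's actual low-rank optimization.

```python
import numpy as np

def decompose_trajectories(W, rank=3, lam=0.1, n_iter=50):
    """Split a trajectory matrix into camera- and object-induced parts.

    W : (2F, N) matrix stacking x/y coordinates of N Lagrangian particle
    trajectories over F frames. Camera motion is modeled as low rank,
    object motion as the sparse residual.
    """
    O = np.zeros_like(W)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(W - O, full_matrices=False)
        C = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]             # camera part
        O = np.sign(W - C) * np.maximum(np.abs(W - C) - lam, 0.0)  # object part
    return C, O
```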

Jan Kautz - One of the best experts on this subject based on the ideXlab platform.

  • Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation
    European Conference on Computer Vision, 2018
    Co-Authors: Kihwan Kim, Alejandro Troccoli, Deqing Sun, James M Rehg, Jan Kautz
    Abstract:

    Estimation of 3D motion in a dynamic scene from a temporal pair of images is a core task in many scene understanding problems. In real-world applications, a dynamic scene is commonly captured by a moving camera (i.e., panning, tilting, or hand-held), increasing the task complexity because the scene is observed from different viewpoints. The primary challenge is the disambiguation of the camera motion from the scene motion, which becomes more difficult as the amount of observed rigidity decreases, even with successful estimation of 2D image correspondences. In contrast to other state-of-the-art 3D scene flow estimation methods, in this paper we propose to learn the rigidity of a scene in a supervised manner from an extensive collection of dynamic scene data, and to directly infer a rigidity mask from two sequential images with depths. With the learned network, we show how we can effectively estimate camera motion and projected scene flow using the computed 2D optical flow and the inferred rigidity mask. For training and testing the rigidity network, we also provide a new semi-synthetic dynamic scene dataset (synthetic foreground objects with a real background) and an evaluation split that accounts for the percentage of observed non-rigid pixels. Through our evaluation, we show that the proposed framework outperforms current state-of-the-art scene flow estimation methods in challenging dynamic scenes.
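
Given the inferred rigidity mask, depths, and flow correspondences, the camera-motion step reduces to rigidly aligning the back-projected 3D points that the mask flags as rigid. The sketch below uses a least-squares Kabsch alignment and reads off the scene flow as the residual 3D motion; the exact estimator used in the paper may differ, and all names are illustrative.

```python
import numpy as np

def camera_motion_from_rigid(P1, P2, rigid_mask):
    """Estimate camera motion (R, t) from masked 3D correspondences.

    P1, P2 : (N, 3) 3D points from depth at frames t and t+1, matched by
    optical flow; rigid_mask : (N,) boolean rigidity predictions. Rigid
    points move only due to the camera, so a least-squares rigid alignment
    (Kabsch) of those points recovers the camera motion; the residual 3D
    motion of all points is the (camera-compensated) scene flow.
    """
    A, B = P1[rigid_mask], P2[rigid_mask]
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # proper rotation
    t = cb - R @ ca
    scene_flow = P2 - (P1 @ R.T + t)            # ~0 for truly rigid points
    return R, t, scene_flow
```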

  • Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation
    arXiv: Computer Vision and Pattern Recognition, 2018
    Co-Authors: Kihwan Kim, Alejandro Troccoli, Deqing Sun, James M Rehg, Jan Kautz
    Abstract:

    Estimation of 3D motion in a dynamic scene from a temporal pair of images is a core task in many scene understanding problems. In real-world applications, a dynamic scene is commonly captured by a moving camera (i.e., panning, tilting, or hand-held), increasing the task complexity because the scene is observed from different viewpoints. The main challenge is the disambiguation of the camera motion from the scene motion, which becomes more difficult as the amount of observed rigidity decreases, even with successful estimation of 2D image correspondences. In contrast to other state-of-the-art 3D scene flow estimation methods, in this paper we propose to learn the rigidity of a scene in a supervised manner from a large collection of dynamic scene data, and to directly infer a rigidity mask from two sequential images with depths. With the learned network, we show how we can effectively estimate camera motion and projected scene flow using the computed 2D optical flow and the inferred rigidity mask. For training and testing the rigidity network, we also provide a new semi-synthetic dynamic scene dataset (synthetic foreground objects with a real background) and an evaluation split that accounts for the percentage of observed non-rigid pixels. Through our evaluation, we show that the proposed framework outperforms current state-of-the-art scene flow estimation methods in challenging dynamic scenes.

  • Background Inpainting for Videos with Dynamic Objects and a Free-Moving Camera
    European Conference on Computer Vision, 2012
    Co-Authors: Miguel Granados, Jan Kautz, Kwang In Kim, James Tompkin, Christian Theobalt
    Abstract:

    We propose a method for removing marked dynamic objects from videos captured with a free-moving camera, so long as the objects occlude parts of the scene with a static background. Our approach takes as input a video, a mask marking the object to be removed, and a mask marking the dynamic objects to remain in the scene. To inpaint a frame, we align other candidate frames in which parts of the missing region are visible. Among these candidates, a single source is chosen to fill each pixel so that the final arrangement is color-consistent. Intensity differences between sources are smoothed using gradient-domain fusion. Our frame alignment process assumes that the scene can be approximated using piecewise planar geometry: a set of homographies is estimated for each frame pair, and one is selected for aligning each pixel such that the color discrepancy is minimized and the epipolar constraints are maintained. We provide experimental validation with several real-world video sequences to demonstrate that, unlike in previous work, inpainting videos shot with free-moving cameras does not necessarily require estimation of absolute camera positions and per-frame, per-pixel depth maps.
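
A minimal OpenCV sketch of the alignment-and-fill idea follows: estimate one homography per candidate frame from feature matches, warp the candidate into the target view, and copy warped pixels into the still-missing region. The real method fits a set of piecewise-planar homographies per frame pair, selects per-pixel sources for color consistency, and blends with gradient-domain fusion, none of which is reproduced here; treat this as a hedged illustration.

```python
import numpy as np
import cv2

def fill_from_candidates(target, hole_mask, candidates):
    """Fill a masked region of `target` from homography-aligned candidates.

    target : HxWx3 frame with a hole; hole_mask : HxW boolean array;
    candidates : list of other frames from the same video.
    """
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(target, None)
    out = target.copy()
    for cand in candidates:
        k2, d2 = orb.detectAndCompute(cand, None)
        if d1 is None or d2 is None:
            continue
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        if len(matches) < 4:
            continue
        src = np.float32([k2[m.trainIdx].pt for m in matches])
        dst = np.float32([k1[m.queryIdx].pt for m in matches])
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
        if H is None:
            continue
        warped = cv2.warpPerspective(cand, H, (target.shape[1], target.shape[0]))
        visible = hole_mask & (warped.sum(axis=2) > 0)  # warped data in the hole
        out[visible] = warped[visible]
        hole_mask = hole_mask & ~visible                # shrink remaining hole
    return out
```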