Video Stabilization

The experts below were selected from a list of 3,198 experts worldwide, ranked by the ideXlab platform.

Ravi Ramamoorthi - One of the best experts on this subject based on the ideXlab platform.

  • Selfie Video Stabilization
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
    Co-Authors: Ravi Ramamoorthi
    Abstract:

    We propose a novel algorithm for stabilizing selfie videos. Our goal is to automatically generate a stabilized video whose motion is optimally smooth for both the foreground and the background. The key insight is that non-rigid foreground motion in selfie videos can be analyzed using a 3D face model, while background motion can be analyzed using optical flow. We use the second derivative of the temporal trajectories of selected pixels as the measure of smoothness. Our algorithm stabilizes selfie videos by minimizing the smoothness measure of the background, regularized by the motion of the foreground. Experiments show that our method outperforms state-of-the-art general video stabilization techniques on selfie videos.
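
    As a rough illustration of the smoothness measure above, the following Python sketch scores pixel trajectories by their squared second temporal differences; the array layout, function names, and the lam weighting are assumptions for illustration, not the authors' implementation.

      import numpy as np

      def smoothness_energy(trajectories):
          """Sum of squared second temporal differences.

          trajectories: (P, T, 2) array of P pixel tracks over T frames.
          A constant-velocity track scores zero; shake raises the score.
          """
          # Discrete second derivative along the time axis.
          accel = trajectories[:, 2:] - 2.0 * trajectories[:, 1:-1] + trajectories[:, :-2]
          return float(np.sum(accel ** 2))

      def stabilization_objective(bg_tracks, fg_tracks, lam=0.1):
          # Background smoothness regularized by foreground motion,
          # mirroring the balance described in the abstract (lam is a guess).
          return smoothness_energy(bg_tracks) + lam * smoothness_energy(fg_tracks)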

  • Real-Time Selfie Video Stabilization
    arXiv: Computer Vision and Pattern Recognition, 2020
    Co-Authors: Ravi Ramamoorthi, Ke-li Cheng, Michel Sarkis
    Abstract:

    We propose a novel real-time selfie video stabilization method. Our method is completely automatic and runs at 26 fps. We use a 1D linear convolutional network to directly infer the rigid moving least squares warp, which implicitly balances global rigidity against local flexibility. Our network structure is specifically designed to stabilize the background and foreground at the same time, while giving the user optional control over the stabilization focus (the relative importance of foreground vs. background). To train our network, we collect a selfie video dataset of 1,005 videos, significantly larger than previous selfie video datasets. We also propose a grid approximation to the rigid moving least squares warp that enables real-time frame warping. Our method produces visually and quantitatively better results than previous real-time general video stabilization methods, and compared to previous offline selfie video methods it produces comparable quality with a speed improvement of orders of magnitude.
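
    The grid-approximation idea can be sketched generically: evaluate an otherwise expensive per-pixel warp only at coarse grid nodes and interpolate a dense sampling map. The Python/OpenCV sketch below assumes warp_fn is a backward (sampling) warp, e.g. the inverse of a rigid moving least squares deformation; it is an illustration, not the paper's implementation.

      import cv2
      import numpy as np

      def warp_with_grid_approximation(frame, warp_fn, grid_step=32):
          """Approximate a dense warp cheaply: evaluate warp_fn only at
          coarse grid nodes and bilinearly upsample the sampling maps.

          warp_fn: maps an (N, 2) array of destination (x, y) points to the
          source coordinates to sample, since cv2.remap applies a backward
          warp: dst(x, y) = src(map_x(x, y), map_y(x, y)).
          """
          h, w = frame.shape[:2]
          gy, gx = np.mgrid[0:h:grid_step, 0:w:grid_step].astype(np.float32)
          nodes = np.stack([gx.ravel(), gy.ravel()], axis=1)
          warped = warp_fn(nodes).astype(np.float32)

          # Coarse sampling maps, upsampled to full resolution (the
          # interpolation is the approximation that buys the speed).
          map_x = cv2.resize(warped[:, 0].reshape(gx.shape), (w, h),
                             interpolation=cv2.INTER_LINEAR)
          map_y = cv2.resize(warped[:, 1].reshape(gy.shape), (w, h),
                             interpolation=cv2.INTER_LINEAR)
          return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)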

  • Learning Video Stabilization Using Optical Flow
    IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020
    Co-Authors: Ravi Ramamoorthi
    Abstract:

    We propose a novel neural network that infers the per-pixel warp fields for video stabilization from the optical flow fields of the input video. While previous learning-based video stabilization methods attempt to implicitly learn frame motions from color videos, our method resorts to optical flow for motion analysis and directly learns the stabilization from the optical flow. We also propose a pipeline that uses optical flow principal components for motion inpainting and warp field smoothing, making our method robust to moving objects, occlusion, and optical flow inaccuracy, which are challenging for other video stabilization methods. Our method achieves quantitatively and visually better results than state-of-the-art optimization-based and deep-learning-based video stabilization methods, and gives a ~3x speed improvement over the optimization-based methods.
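
    A minimal numpy sketch of the flow-principal-components idea: re-express each flow field in the basis of the leading principal components of the whole stack, which suppresses energy outside that low-dimensional motion subspace. The shapes and the choice k=8 are illustrative assumptions, not the paper's pipeline.

      import numpy as np

      def pca_project_flows(flows, k=8):
          """Project a stack of optical flow fields onto their top-k
          principal components. flows: (T, H, W, 2). Outlier motion from
          moving objects or flow errors falls largely outside the basis,
          loosely mirroring the inpainting/smoothing step described above.
          """
          T = flows.shape[0]
          X = flows.reshape(T, -1)              # one flattened flow per row
          mean = X.mean(axis=0, keepdims=True)
          # Thin SVD of the centered data; rows of Vt are flow components.
          U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
          coeffs = (X - mean) @ Vt[:k].T        # (T, k) coefficients
          X_smooth = coeffs @ Vt[:k] + mean
          return X_smooth.reshape(flows.shape)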

  • Robust Video Stabilization by Optimization in CNN Weight Space
    Computer Vision and Pattern Recognition, 2019
    Co-Authors: Ravi Ramamoorthi
    Abstract:

    We propose a novel robust video stabilization method. Unlike traditional video stabilization techniques that involve complex motion models, we directly model the appearance change of the frames as the dense optical flow field between consecutive frames. We introduce a new formulation of the video stabilization task based on first principles, which leads to a large-scale non-convex problem. This problem is hard to solve, so previous optical-flow-based approaches have resorted to heuristics. In this paper, we propose a novel optimization routine that transfers this problem into the convolutional neural network parameter domain. While we exploit the general benefits of CNNs, including standard gradient-based optimization techniques, our method is a new approach to using CNNs purely as an optimizer rather than learning from data. Our method trains the CNN from scratch on each specific input example, and intentionally overfits the CNN parameters to produce the best result on that example. By solving the problem in the CNN weight space rather than directly for image pixels, we make it a viable formulation for video stabilization. Our method produces both visually and quantitatively better results than previous work, and is robust in situations acknowledged as limitations of current state-of-the-art methods.
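
    The CNN-as-optimizer idea can be sketched in PyTorch: a small network is trained from scratch on one input, and its weights are the optimization variables. The architecture and loss_fn below are placeholders; the paper's actual network and stabilization objective are not reproduced here.

      import torch
      import torch.nn as nn

      # A small CNN whose *weights* are the optimization variables: it maps
      # the input flow stack to a warp field and is overfit to one video.
      class WarpNet(nn.Module):
          def __init__(self, in_ch):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 2, 3, padding=1),   # (dx, dy) warp field
              )

          def forward(self, x):
              return self.net(x)

      def optimize_on_single_video(flow_stack, loss_fn, steps=500, lr=1e-3):
          """flow_stack: (1, C, H, W) tensor built from the video's optical
          flow. loss_fn scores the warp field; it stands in for the paper's
          stabilization objective, which is not reproduced here.
          """
          net = WarpNet(flow_stack.shape[1])
          opt = torch.optim.Adam(net.parameters(), lr=lr)
          for _ in range(steps):
              opt.zero_grad()
              warp = net(flow_stack)
              loss = loss_fn(warp, flow_stack)
              loss.backward()      # standard gradient-based optimization,
              opt.step()           # but over weights fit to this one input
          return net(flow_stack).detach()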

Ronggang Wang - One of the best experts on this subject based on the ideXlab platform.

  • ICME - Local Subspace Video Stabilization
    2014 IEEE International Conference on Multimedia and Expo (ICME), 2014
    Co-Authors: Chengzhou Tang, Ronggang Wang
    Abstract:

    Video stabilization enhances video quality by removing unstable motion. This paper proposes a new video stabilization method that simultaneously factors and smooths motion trajectories. We model the trajectories with a time-variant local subspace constraint: every column of the trajectory matrix is factored and smoothed in a separate local subspace. This model makes our method more flexible and accurate than subspace video stabilization. In addition, we design a novel outlier detection technique that exploits the relationship between consecutive local subspaces. Experiments on synthetic data validate the numerical performance of our factorization. Quantitative comparisons on real videos show that our local method outperforms subspace video stabilization, and our stabilized videos are comparable with the published results of other state-of-the-art methods.
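
    One plausible reading of the consecutive-subspace outlier test, as a numpy sketch: compare each trajectory's residual against the dominant subspace of two neighboring windows and flag tracks whose residual jumps. The rank and threshold are illustrative guesses, not the paper's rule.

      import numpy as np

      def subspace_outliers(W_prev, W_next, rank=9, thresh=2.0):
          """Flag trajectories whose residual against the dominant local
          subspace jumps between consecutive windows.

          W_prev, W_next: (2P, F) windows over the same P tracks, with x
          and y rows interleaved. rank and thresh are illustrative.
          """
          def residuals(W):
              U, _, _ = np.linalg.svd(W, full_matrices=False)
              proj = U[:, :rank] @ (U[:, :rank].T @ W)  # projection onto subspace
              per_row = np.linalg.norm(W - proj, axis=1)
              return per_row[0::2] + per_row[1::2]      # combine x and y rows

          r_prev, r_next = residuals(W_prev), residuals(W_next)
          # Inliers stay about equally well explained by both local subspaces.
          return r_next > thresh * (r_prev + 1e-8)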

  • ICASSP - Sparse Moving Factorization for Subspace Video Stabilization
    2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014
    Co-Authors: Chengzhou Tang, Ronggang Wang
    Abstract:

    This paper presents a new method for computing a low-rank approximation of a highly incomplete trajectory matrix for subspace video stabilization. We extend the moving factorization proposed in [1], a streamable method based on least squares. By utilizing a sparse representation of the trajectories, the proposed factorization is more accurate while remaining streamable. We test our sparse moving factorization on synthetic data as well as real videos. Experiments on a synthetic sequence demonstrate the numerical properties of our method, and the stabilized videos show that our method outperforms moving factorization for subspace video stabilization. Our results are also better than those of other state-of-the-art video stabilization methods.
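
    The streamable least-squares core of moving factorization can be sketched as follows: with the per-frame coefficients fixed, each newly appearing trajectory's factor row is fit by least squares over the frames where it is observed. The sparse weighting that gives this paper its accuracy gain is omitted; names and shapes are assumptions.

      import numpy as np

      def extend_factorization(basis_coeffs, new_rows, known_mask):
          """One streaming step of a moving-factorization-style update.

          basis_coeffs: (r, F) per-frame coefficient matrix fixed so far;
          new_rows: (M, F) coordinates of newly appearing tracks;
          known_mask: (M, F) boolean mask of observed entries.
          Returns the (M, r) factor rows for the new tracks.
          """
          r = basis_coeffs.shape[0]
          fitted = np.zeros((new_rows.shape[0], r))
          for i, (row, mask) in enumerate(zip(new_rows, known_mask)):
              A = basis_coeffs[:, mask].T          # frames where track i is seen
              b = row[mask]
              fitted[i], *_ = np.linalg.lstsq(A, b, rcond=None)
          return fitted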

Aseem Agarwala - One of the best experts on this subject based on the ideXlab platform.

  • Subspace Video Stabilization
    ACM Transactions on Graphics, 2011
    Co-Authors: Feng Liu, Hailin Jin, Michael Gleicher, Jue Wang, Aseem Agarwala
    Abstract:

    We present a robust and efficient approach to video stabilization that achieves high-quality camera motion for a wide range of videos. In this article, we focus on the problem of transforming a set of input 2D motion trajectories so that they are both smooth and resemble visually plausible views of the imaged scene; our key insight is that we can achieve this goal by enforcing subspace constraints on feature trajectories while smoothing them. Our approach assembles tracked features in the video into a trajectory matrix, factors it into two low-rank matrices, and performs filtering or curve fitting in a low-dimensional linear space. In order to process long videos, we propose a moving factorization that is both efficient and streamable. Our experiments confirm that our approach can efficiently provide stabilization results comparable with prior 3D methods in cases where those methods succeed, while also providing smooth camera motion in cases where such approaches often fail, such as videos that lack parallax. The presented approach is the first that both achieves high-quality video stabilization and is practical enough for consumer applications.
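
    A compact numpy sketch of the factor-then-smooth idea on a complete trajectory matrix: factor into a low-rank basis and per-frame eigen-trajectories, low-pass filter the eigen-trajectories, and recompose smoothed tracks. Handling incomplete matrices via the moving factorization is left out; the rank and filter width are illustrative choices.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def subspace_smooth(W, rank=9, sigma=5.0):
          """Smooth a trajectory matrix W (2P x F, complete for simplicity)
          under a low-rank subspace constraint: filtering the per-frame
          eigen-trajectories keeps the smoothed tracks inside the subspace,
          so they remain plausible views of the scene.
          """
          U, S, Vt = np.linalg.svd(W, full_matrices=False)
          C = U[:, :rank] * S[:rank]          # per-trajectory coefficients
          E = Vt[:rank]                       # eigen-trajectories, one column per frame
          E_smooth = gaussian_filter1d(E, sigma=sigma, axis=1)
          return C @ E_smooth                 # smoothed trajectory matrix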

  • Light Field Video Stabilization
    2009 IEEE 12th International Conference on Computer Vision, 2009
    Co-Authors: Brandon M. Smith, Hailin Jin, Li Zhang, Aseem Agarwala
    Abstract:

    We describe a method for producing a smooth, stabilized video from the shaky input of a hand-held light field video camera, specifically a small camera array. Traditional stabilization techniques dampen shake with 2D warps, and thus have limited ability to stabilize a significantly shaky camera motion through a 3D scene. Other recent stabilization techniques synthesize novel views as they would have been seen along a virtual, smooth 3D camera path, but are limited to static scenes. We show that video camera arrays enable much more powerful video stabilization, since they allow changes in viewpoint for a single time instant. Furthermore, we point out that the straightforward approach to light field video stabilization requires computing structure from motion, which can be brittle for typical consumer-level video of general dynamic scenes. We present a more robust approach that avoids reconstructing the input camera path. Instead, we employ a spacetime optimization that directly computes a sequence of relative poses between the virtual camera and the camera array while minimizing the acceleration of salient visual features in the virtual image plane. We validate our method by comparing it to state-of-the-art stabilization software, such as Apple iMovie and 2d3 SteadyMove Pro, on a number of challenging scenes.
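
    The acceleration-minimizing core of such a spacetime objective reduces, for a single 1D pose or feature coordinate over time, to a quadratic problem with a closed-form solution; the numpy sketch below shows only this reduced problem, with an illustrative weight lam, and omits the paper's relative-pose parameterization.

      import numpy as np

      def min_acceleration_path(z, lam=100.0):
          """Solve min_x ||x - z||^2 + lam * ||D2 x||^2, where D2 is the
          second-difference (acceleration) operator and z is the observed
          1D signal. Larger lam trades fidelity for smoothness.
          """
          z = np.asarray(z, dtype=float)
          n = len(z)
          D2 = np.zeros((n - 2, n))
          for i in range(n - 2):
              D2[i, i:i + 3] = [1.0, -2.0, 1.0]
          A = np.eye(n) + lam * (D2.T @ D2)   # normal equations
          return np.linalg.solve(A, z)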

Arcangelo Ranieri Bruna - One of the best experts on this subject based on the ideXlab platform.

  • Digital Photography - Random-Temporal Block Selection for Video Stabilization
    Digital Photography VII, 2011
    Co-Authors: Sebastiano Battiato, Arcangelo Ranieri Bruna, Giovanni Puglisi
    Abstract:

    Digital video stabilization makes it possible to acquire video sequences free of disturbing jerkiness by removing the effects of unwanted camera movements from the image sequence. One of the bottlenecks of these approaches is the local motion estimation step. In this paper we propose a block selector able to speed up block-matching-based video stabilization techniques without considerably degrading stabilization performance. Both history and random criteria are taken into account in the selection process. Experiments on real cases confirm the effectiveness of the proposed approach even in critical conditions.
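
    A hedged sketch of a history-plus-random block selector: keep the blocks that proved most reliable in earlier frames and fill the rest of the budget with random picks so unexplored regions are still sampled. The 50/50 split and the scoring are illustrative assumptions, not the paper's rules.

      import numpy as np

      def select_blocks(reliability, n_keep, rng=None):
          """Choose a subset of blocks for block matching.

          reliability: (B,) per-block score accumulated from earlier frames
          (history criterion); a random remainder keeps coverage broad.
          """
          rng = np.random.default_rng() if rng is None else rng
          reliability = np.asarray(reliability)
          n_hist = n_keep // 2
          by_history = np.argsort(reliability)[::-1][:n_hist]
          remaining = np.setdiff1d(np.arange(len(reliability)), by_history)
          by_random = rng.choice(remaining, size=n_keep - n_hist, replace=False)
          return np.concatenate([by_history, by_random])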

  • A Robust Video Stabilization System by Adaptive Motion Vectors Filtering
    International Conference on Multimedia and Expo, 2008
    Co-Authors: Sebastiano Battiato, Giovanni Puglisi, Arcangelo Ranieri Bruna
    Abstract:

    Digital video stabilization makes it possible to acquire video sequences free of disturbing jerkiness by removing unwanted camera movements. In this paper we propose a novel fast video stabilization algorithm based on block matching of local motion vectors. Some of these vectors are filtered out by ad-hoc rules that take into account local similarity, local "activity", and matching effectiveness. A temporal analysis of the relative error computed at each frame is also performed. The reliable information is then used to retrieve the inter-frame transformation parameters. Experiments on real cases confirm the effectiveness of the proposed approach even in critical conditions.
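
    The filter-then-fit flow can be sketched with OpenCV: discard block motion vectors with poor matching scores or far from the local consensus (a crude stand-in for the paper's similarity and "activity" rules), then robustly fit a similarity transform for the inter-frame motion. Thresholds and names are illustrative.

      import numpy as np
      import cv2

      def filter_and_fit(points_prev, points_curr, sad_errors, sad_thresh=1000.0):
          """Filter block motion vectors, then estimate inter-frame motion.

          points_prev, points_curr: (N, 2) block centers in two frames;
          sad_errors: (N,) block-matching residuals (matching effectiveness).
          """
          v = points_curr - points_prev
          med = np.median(v, axis=0)
          good = (sad_errors < sad_thresh) & (np.linalg.norm(v - med, axis=1) < 3.0)
          # Robust similarity (rotation, scale, translation) via RANSAC.
          M, inliers = cv2.estimateAffinePartial2D(
              points_prev[good].astype(np.float32),
              points_curr[good].astype(np.float32),
              method=cv2.RANSAC)
          return M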

  • ICPR - Regular Texture Removal for Video Stabilization
    2008 19th International Conference on Pattern Recognition, 2008
    Co-Authors: Sebastiano Battiato, Giovanni Puglisi, Arcangelo Ranieri Bruna
    Abstract:

    In this paper we propose a novel fast fuzzy classifier able to find regular and slightly distorted near-regular textures, taking into account the constraints of video stabilization applications. Digital video stabilization makes it possible to acquire video sequences free of disturbing jerkiness by removing unwanted camera movements. In the presence of regular or near-regular texture, video stabilization approaches typically fail: due to their periodicity, these patterns create multiple matches that degrade motion estimation performance. The proposed classifier has been used as a filtering module in a block-based video stabilization approach. Experiments on real sequences with (and without) regular texture confirm the effectiveness of the proposed approach.
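
    A crude spectral stand-in for the fuzzy texture classifier: a regular or near-regular texture concentrates its spectrum in a few strong non-DC peaks, so a peak-energy ratio gives a cheap periodicity test. The threshold and the 4-peak choice are illustrative, not the paper's features.

      import numpy as np

      def looks_regular(block, peak_ratio=0.3):
          """Periodicity test for a grayscale block: if a handful of
          spectral peaks carry a large share of the total energy, the block
          is likely a (near-)regular texture and should be excluded from
          block matching.
          """
          F = np.abs(np.fft.fft2(block - block.mean()))
          F[0, 0] = 0.0                          # drop any residual DC term
          total = F.sum() + 1e-8
          top = np.sort(F.ravel())[-4:].sum()    # energy of the 4 strongest peaks
          return (top / total) > peak_ratio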