Frame Generator

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 5274 Experts worldwide ranked by ideXlab platform

Chunhong Pan - One of the best experts on this subject based on the ideXlab platform.

  • Deep generative video prediction
    Pattern Recognition Letters, 2018
    Co-Authors: Lingfeng Wang, Shiming Xiang, Chunhong Pan
    Abstract:

    Video prediction plays a fundamental role in video analysis and pattern recognition. However, the generated future Frames are often blurred, which makes them insufficient for further research. To overcome this obstacle, this paper proposes a new deep generative video prediction network under the framework of generative adversarial nets. The network consists of three components: a motion encoder, a Frame Generator and a Frame discriminator. The motion encoder receives multiple Frame differences (also known as Eulerian motion) as input and outputs a global video motion representation. The Frame Generator is a pseudo-reverse two-stream network that generates the future Frame. The Frame discriminator is a discriminative 3D convolution network that determines whether a given Frame is drawn from the true future Frame distribution. The Frame Generator and Frame discriminator are trained jointly in an adversarial manner until they reach a Nash equilibrium. Motivated by theories on color filter arrays, this paper also designs a novel cross channel color gradient (3CG) loss as guidance for deblurring. Experiments on two state-of-the-art datasets demonstrate that the proposed network is promising.
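
    The two ingredients specific to this paper can be sketched in a few lines of NumPy. This is an illustrative interpretation, not the authors' implementation: the Eulerian motion input is simply the stack of consecutive Frame differences, and the 3CG loss is read here as comparing gradients taken across color channels (function names and shapes are assumptions).

    ```python
    import numpy as np

    def eulerian_motion(frames):
        """Frame differences used as the motion encoder's input.

        frames: array of shape (T, H, W, C); returns (T-1, H, W, C).
        """
        return np.diff(frames, axis=0)

    def cross_channel_gradient_loss(pred, target):
        """Hypothetical 3CG-style loss: compare gradients taken
        across the color channels of the predicted and true frame,
        rather than the raw pixel values."""
        g_pred = np.diff(pred, axis=-1)    # (H, W, C-1) cross-channel gradient
        g_true = np.diff(target, axis=-1)
        return float(np.mean(np.abs(g_pred - g_true)))

    frames = np.random.rand(4, 8, 8, 3)   # toy clip: 4 frames, 8x8, RGB
    motion = eulerian_motion(frames)       # shape (3, 8, 8, 3)
    loss = cross_channel_gradient_loss(frames[-1], frames[-1])  # 0.0 for identical frames
    ```

    Note that the loss is invariant to a constant offset added to every channel, which is one plausible reading of why a cross-channel gradient acts as a deblurring guide rather than a plain pixel loss.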

Victoria Paternostro - One of the best experts on this subject based on the ideXlab platform.

  • Dynamical Sampling for Shift-preserving Operators
    arXiv: Functional Analysis, 2019
    Co-Authors: Alejandra Aguilera, Diana Carbajal, Carlos Cabrelli, Victoria Paternostro
    Abstract:

    In this note, we solve the dynamical sampling problem for a class of shift-preserving operators $L:V\to V$ acting on a finitely generated shift-invariant space $V$. We find conditions on $L$ and on a finite set of functions in $V$ so that the iterations of the operator $L$ on the functions produce a Frame Generator set of $V$; that is, the integer translations of the Generators form a Frame of $V$.
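
    A finite-dimensional toy version of this iteration scheme can make the frame condition concrete (this is only an analogy to the shift-invariant setting, with all names and shapes assumed): iterate an operator on a seed vector and check whether the iterates form a frame by computing the eigenvalue bounds of the frame operator.

    ```python
    import numpy as np

    def frame_bounds_of_iterates(L, f, m):
        """Collect the iterates f, Lf, ..., L^{m-1}f and return the
        lower/upper frame bounds A, B: the extreme eigenvalues of the
        frame operator S = sum_j v_j v_j^T. The iterates form a frame
        of the ambient space exactly when A > 0."""
        vecs = []
        v = f.astype(float)
        for _ in range(m):
            vecs.append(v.copy())
            v = L @ v
        V = np.stack(vecs)            # rows are the iterates
        S = V.T @ V                   # frame operator
        eig = np.linalg.eigvalsh(S)
        return eig[0], eig[-1]

    L = np.array([[0.9, 0.2],
                  [0.0, 0.8]])
    f = np.array([1.0, 1.0])
    A, B = frame_bounds_of_iterates(L, f, m=3)
    # A > 0 here: f and Lf are linearly independent, so the iterates frame R^2
    ```

    In the paper's setting the same question is asked fiberwise for the translations of the iterated functions, but the shape of the condition, invertibility of a frame operator built from iterates, is the same.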

  • Frames by Iterations in Shift-invariant Spaces
    2019 13th International conference on Sampling Theory and Applications (SampTA), 2019
    Co-Authors: Alejandra Aguilera, Diana Carbajal, Carlos Cabrelli, Victoria Paternostro
    Abstract:

    In this note we solve the dynamical sampling problem for a class of shift-preserving (SP) operators acting on a finitely generated shift-invariant space (FSIS). We find conditions on the operator and on a finite set of functions in the FSIS under which the iterations of the operator on the functions produce a Frame Generator set; that is, the integer translations of the Frame Generator set form a Frame of the FSIS. To obtain these results, we study the structure of SP operators and prove a generalized finite-dimensional spectral theorem.
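
    The role of the spectral theorem can be illustrated in the simplest finite-dimensional case (an analogy only, with assumed names): for a diagonalizable operator with distinct eigenvalues, the iterates of a seed vector span the space exactly when the seed has a nonzero coefficient along every eigenvector, which follows from a Vandermonde-determinant argument.

    ```python
    import numpy as np

    def iterates_span(L, f, m):
        """Return True if f, Lf, ..., L^{m-1}f span the ambient space."""
        V = np.stack([np.linalg.matrix_power(L, j) @ f for j in range(m)])
        return np.linalg.matrix_rank(V) == L.shape[0]

    L = np.diag([1.0, 0.5, 0.25])      # distinct eigenvalues
    good = np.array([1.0, 1.0, 1.0])   # nonzero in every eigendirection
    bad = np.array([1.0, 1.0, 0.0])    # misses the third eigenvector

    assert iterates_span(L, good, m=3)
    assert not iterates_span(L, bad, m=3)
    ```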

Xunxiang Guo - One of the best experts on this subject based on the ideXlab platform.

Lingfeng Wang - One of the best experts on this subject based on the ideXlab platform.

  • Deep generative video prediction
    Pattern Recognition Letters, 2018
    Co-Authors: Lingfeng Wang, Shiming Xiang, Chunhong Pan
    Abstract:

    Video prediction plays a fundamental role in video analysis and pattern recognition. However, the generated future Frames are often blurred, which makes them insufficient for further research. To overcome this obstacle, this paper proposes a new deep generative video prediction network under the framework of generative adversarial nets. The network consists of three components: a motion encoder, a Frame Generator and a Frame discriminator. The motion encoder receives multiple Frame differences (also known as Eulerian motion) as input and outputs a global video motion representation. The Frame Generator is a pseudo-reverse two-stream network that generates the future Frame. The Frame discriminator is a discriminative 3D convolution network that determines whether a given Frame is drawn from the true future Frame distribution. The Frame Generator and Frame discriminator are trained jointly in an adversarial manner until they reach a Nash equilibrium. Motivated by theories on color filter arrays, this paper also designs a novel cross channel color gradient (3CG) loss as guidance for deblurring. Experiments on two state-of-the-art datasets demonstrate that the proposed network is promising.

Jiashi Feng - One of the best experts on this subject based on the ideXlab platform.

  • ACCV (6) - Better Guider Predicts Future Better: Difference Guided Generative Adversarial Networks
    Computer Vision – ACCV 2018, 2019
    Co-Authors: Guohao Ying, Yingtian Zou, Lin Wan, Jiashi Feng
    Abstract:

    Predicting the future may sound like fantasy, but it is practical work. It is a key component of intelligent agents such as self-driving vehicles, medical monitoring devices and robots. In this work, we consider generating unseen future Frames from previous observations, which is notoriously hard due to the uncertainty in Frame dynamics. While recent works based on generative adversarial networks (GANs) have made remarkable progress, accurate and realistic predictions remain an obstacle. In this paper, we propose a novel GAN based on inter-Frame differences to circumvent these difficulties. More specifically, our model is a multi-stage generative network named the Difference Guided Generative Adversarial Network (DGGAN). DGGAN learns to explicitly enforce future-Frame predictions guided by synthetic inter-Frame differences. Given a sequence of Frames, DGGAN first uses dual paths to generate meta information. One path, the Coarse Frame Generator, predicts the coarse details of future Frames; the other path, the Difference Guide Generator, generates the difference image, which includes complementary fine details. The coarse details are then refined under the guidance of the difference image with the support of GANs. With this model and novel architecture, we achieve state-of-the-art performance for future video prediction on UCF-101 and KITTI.
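
    The dual-path data flow described above can be sketched with stand-in functions (a hypothetical skeleton, not the authors' networks: the real Coarse Frame Generator and Difference Guide Generator are deep GAN generators, replaced here by the simplest possible baselines to keep the wiring visible).

    ```python
    import numpy as np

    def coarse_generator(frames):
        """Stand-in Coarse Frame Generator: repeat the last observed frame."""
        return frames[-1]

    def difference_guide_generator(frames):
        """Stand-in Difference Guide Generator: the last inter-frame difference."""
        return frames[-1] - frames[-2]

    def refine(coarse, diff, alpha=1.0):
        """Refinement stage: correct the coarse prediction using the
        predicted difference image as guidance."""
        return coarse + alpha * diff

    frames = np.random.rand(4, 8, 8, 3)            # toy clip: 4 frames, 8x8, RGB
    prediction = refine(coarse_generator(frames),
                        difference_guide_generator(frames))
    ```

    Even with these trivial stand-ins the split is instructive: the coarse path alone is a "repeat last frame" predictor, and adding the difference guidance turns it into a linear motion extrapolator, which is exactly the kind of complementary detail the abstract attributes to the second path.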

  • Better Guider Predicts Future Better: Difference Guided Generative Adversarial Networks
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Guohao Ying, Yingtian Zou, Lin Wan, Jiashi Feng
    Abstract:

    Predicting the future may sound like fantasy, but it is practical work. It is a key component of intelligent agents such as self-driving vehicles, medical monitoring devices and robots. In this work, we consider generating unseen future Frames from previous observations, which is notoriously hard due to the uncertainty in Frame dynamics. While recent works based on generative adversarial networks (GANs) have made remarkable progress, accurate and realistic predictions remain an obstacle. In this paper, we propose a novel GAN based on inter-Frame differences to circumvent these difficulties. More specifically, our model is a multi-stage generative network named the Difference Guided Generative Adversarial Network (DGGAN). DGGAN learns to explicitly enforce future-Frame predictions guided by synthetic inter-Frame differences. Given a sequence of Frames, DGGAN first uses dual paths to generate meta information. One path, the Coarse Frame Generator, predicts the coarse details of future Frames; the other path, the Difference Guide Generator, generates the difference image, which includes complementary fine details. The coarse details are then refined under the guidance of the difference image with the support of GANs. With this model and novel architecture, we achieve state-of-the-art performance for future video prediction on UCF-101 and KITTI.