Future Frame

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 123 Experts worldwide ranked by ideXlab platform

William T Freeman - One of the best experts on this subject based on the ideXlab platform.

  • visual dynamics probabilistic Future Frame synthesis via cross convolutional networks
    Neural Information Processing Systems, 2016
    Co-Authors: Tianfan Xue, Katherine L Bouman, William T Freeman
    Abstract:

    We study the problem of synthesizing a number of likely Future Frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose to model Future Frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible Future Frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world video Frames. We also show that our model can be applied to visual analogy-making, and present an analysis of the learned network representations.
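The cross convolution the abstract describes — the image encoded as feature maps, the motion encoded as convolutional kernels, and the two combined by convolving one with the other — can be sketched in a few lines. This is an illustrative numpy stand-in, not the paper's learned network: the feature map and motion kernel below are hand-made, where in the real model both are encoder outputs.

```python
import numpy as np

def cross_convolve(feature_map, motion_kernel):
    """Convolve one image feature map with a sample-specific motion kernel.

    In a Cross Convolutional Network, the image encoder produces feature
    maps and the motion encoder produces convolutional kernels; applying
    the kernels to the maps transfers the sampled motion onto the image.
    """
    kh, kw = motion_kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(feature_map, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(feature_map, dtype=float)
    for i in range(feature_map.shape[0]):
        for j in range(feature_map.shape[1]):
            # correlation form: weight the local patch by the motion kernel
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * motion_kernel)
    return out

# A "shift right by one pixel" motion kernel applied to a single activation.
fmap = np.zeros((5, 5))
fmap[2, 1] = 1.0
shift_right = np.zeros((3, 3))
shift_right[1, 0] = 1.0          # picks up the neighbour to the left
moved = cross_convolve(fmap, shift_right)
# the activation at (2, 1) has moved to (2, 2)
```

Because the kernels are sampled per input, drawing different motion kernels for the same feature maps yields different plausible future frames — which is what makes the model probabilistic rather than deterministic.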

  • visual dynamics probabilistic Future Frame synthesis via cross convolutional networks
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Tianfan Xue, Katherine L Bouman, William T Freeman
    Abstract:

    We study the problem of synthesizing a number of likely Future Frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models Future Frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible Future Frames from a single input image. Future Frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network, to aid in synthesizing Future Frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world videos. We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations.

Tianfan Xue - One of the best experts on this subject based on the ideXlab platform.

  • visual dynamics probabilistic Future Frame synthesis via cross convolutional networks
    Neural Information Processing Systems, 2016
    Co-Authors: Tianfan Xue, Katherine L Bouman, William T Freeman
    Abstract:

    We study the problem of synthesizing a number of likely Future Frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose to model Future Frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible Future Frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world video Frames. We also show that our model can be applied to visual analogy-making, and present an analysis of the learned network representations.

  • visual dynamics probabilistic Future Frame synthesis via cross convolutional networks
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Tianfan Xue, Katherine L Bouman, William T Freeman
    Abstract:

    We study the problem of synthesizing a number of likely Future Frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models Future Frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible Future Frames from a single input image. Future Frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network, to aid in synthesizing Future Frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world videos. We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations.

Inkwon Lee - One of the best experts on this subject based on the ideXlab platform.

  • Future Frame prediction for fast moving objects with motion blur
    Sensors, 2020
    Co-Authors: Dohae Lee, Inkwon Lee
    Abstract:

    We propose a deep neural network model that recognizes the position and velocity of a fast-moving object in a video sequence and predicts the object’s Future motion. When filming a fast-moving subject using a regular camera rather than a super-high-speed camera, there is often severe motion blur, making it difficult to recognize the exact location and speed of the object in the video. Additionally, because the fast-moving object usually moves rapidly out of the camera’s field of view, the number of captured Frames used as input for Future-motion prediction should be minimized. Our model takes a short video sequence of two Frames containing a high-speed moving object as input, uses motion blur as additional information to recognize the position and velocity of the object, and predicts the video Frame containing the Future motion of the object. Experiments show that our model has significantly better performance than existing Future-Frame prediction models in determining the Future position and velocity of an object in two physical scenarios where a fast-moving two-dimensional object appears.
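The core idea — estimate velocity from just two frames and extrapolate — has a simple classical analogue that helps build intuition. The sketch below is a crude constant-velocity stand-in for the learned two-frame predictor, using the intensity-weighted centroid of the blur streak (all frames and the centroid trick here are illustrative, not from the paper):

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centroid (row, col) of a possibly blurred object."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return np.array([(ys * frame).sum() / total, (xs * frame).sum() / total])

def predict_next_position(frame_t0, frame_t1):
    """Constant-velocity extrapolation from two consecutive frames.

    The displacement of the blur centroid between the frames estimates
    velocity in pixels/frame; the next position follows linearly.
    """
    c0, c1 = centroid(frame_t0), centroid(frame_t1)
    velocity = c1 - c0
    return c1 + velocity

# Object blurred over 3 pixels, moving 2 px/frame to the right.
f0 = np.zeros((9, 9)); f0[4, 1:4] = 1.0   # centroid at col 2
f1 = np.zeros((9, 9)); f1[4, 3:6] = 1.0   # centroid at col 4
pred = predict_next_position(f0, f1)       # → [4., 6.]
```

The learned model goes further than this sketch in two ways: it exploits the blur streak's length and direction within a single frame as a velocity cue, and it renders a full predicted frame rather than a point estimate.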

Katherine L Bouman - One of the best experts on this subject based on the ideXlab platform.

  • visual dynamics probabilistic Future Frame synthesis via cross convolutional networks
    Neural Information Processing Systems, 2016
    Co-Authors: Tianfan Xue, Katherine L Bouman, William T Freeman
    Abstract:

    We study the problem of synthesizing a number of likely Future Frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose to model Future Frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible Future Frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world video Frames. We also show that our model can be applied to visual analogy-making, and present an analysis of the learned network representations.

  • visual dynamics probabilistic Future Frame synthesis via cross convolutional networks
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Tianfan Xue, Katherine L Bouman, William T Freeman
    Abstract:

    We study the problem of synthesizing a number of likely Future Frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models Future Frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible Future Frames from a single input image. Future Frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network to aid in synthesizing Future Frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-wold videos. We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations.

Yang Wang - One of the best experts on this subject based on the ideXlab platform.

  • Future Frame prediction using convolutional vrnn for anomaly detection
    Advanced Video and Signal Based Surveillance, 2019
    Co-Authors: Mahesh K Kumar, Seyed Shahabeddin Nabavi, Yang Wang
    Abstract:

    Anomaly detection in videos aims at reporting anything that does not conform to normal behaviour or the normal distribution. However, because abnormal video clips are sparse in real life, collecting annotated data for supervised learning is exceptionally cumbersome. Inspired by the practicability of generative models for semi-supervised learning, we propose a novel sequential generative model based on a variational autoencoder (VAE) for Future Frame prediction with convolutional LSTM (ConvLSTM). To the best of our knowledge, this is the first work that considers temporal information, from the model perspective, in a Future Frame prediction based anomaly detection framework. Our experiments demonstrate that our approach is superior to state-of-the-art methods on three benchmark datasets.
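Prediction-based anomaly detection of this kind reduces, at test time, to scoring how badly the model anticipated each observed frame: a frame the predictor reconstructs well is normal, one it cannot anticipate is flagged. A minimal sketch of that scoring step, using a PSNR threshold (the threshold value and frames below are illustrative assumptions; real systems typically normalize the score over the whole test video):

```python
import numpy as np

def psnr(pred, actual, max_val=1.0):
    """Peak signal-to-noise ratio between a predicted and an observed frame."""
    mse = np.mean((pred - actual) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def is_anomalous(pred, actual, psnr_threshold=20.0):
    """Flag a frame as anomalous when prediction quality drops below the
    threshold, i.e. the model could not anticipate the observed motion."""
    return psnr(pred, actual) < psnr_threshold

predicted = np.zeros((8, 8))               # model's predicted future frame
normal_frame = np.full((8, 8), 0.01)       # small prediction error: PSNR 40 dB
abnormal_frame = np.full((8, 8), 0.5)      # large prediction error: PSNR ~6 dB
```

The VAE/ConvLSTM machinery in the paper improves the *predictor* by modelling temporal dynamics and uncertainty; the detection criterion itself stays this simple.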