Video Enhancement

The experts below are selected from a list of 12,372 experts worldwide, ranked by the ideXlab platform.

Heung-Yeung Shum - One of the best experts on this subject based on the ideXlab platform.

  • Full-frame Video stabilization with motion inpainting
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006
    Co-Authors: Yasuyuki Matsushita, Eyal Ofek, Xiaoou Tang, Weina Ge, Heung-Yeung Shum
    Abstract:

    Video stabilization is an important video enhancement technology that aims to remove annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts through local alignment of image data from neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels from neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer that naturally preserves the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments on a wide variety of videos.

  • Full-Frame Video Stabilization
    Computer Vision and Pattern Recognition, 2005
    Co-Authors: Yasuyuki Matsushita, Eyal Ofek, Xiaoou Tang, Heung-Yeung Shum
    Abstract:

    Video stabilization is an important video enhancement technology that aims to remove annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing low-resolution stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts through local alignment of image data from neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels from neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer that naturally preserves the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments on a wide variety of videos.
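The sharpness-transfer deblurring step described in these abstracts can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes the neighboring frames have already been aligned to the target frame, uses local Laplacian magnitude as a stand-in sharpness measure, and all function names are hypothetical.

```python
import numpy as np

def sharpness(frame):
    """Per-pixel sharpness proxy: magnitude of the discrete Laplacian."""
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
           np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
    return np.abs(lap)

def transfer_sharp_pixels(target, neighbors):
    """Blend in pixels from (already aligned) neighboring frames
    wherever they are locally sharper than the target frame."""
    out = target.astype(np.float64)
    best = sharpness(out)
    for nb in neighbors:
        nb = nb.astype(np.float64)
        s = sharpness(nb)
        mask = s > best            # neighbor is locally sharper here
        out[mask] = nb[mask]       # transfer the sharper pixel
        best[mask] = s[mask]
    return out
```

In the paper the transfer is interpolated smoothly rather than applied as a hard per-pixel mask; the hard mask above is only to keep the sketch short.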

Wei Meng - One of the best experts on this subject based on the ideXlab platform.

  • Fast Efficient Algorithm for Enhancement of Low Lighting Video
    International Conference on Multimedia and Expo, 2011
    Co-Authors: Xuan Dong, Guan Wang, Yi Pang, Jiangtao Wen, Wei Meng
    Abstract:

    We describe a novel and effective video enhancement algorithm for low-lighting video. The algorithm works by first inverting an input low-lighting video and then applying an optimized image de-haze algorithm to the inverted video. To facilitate faster computation, temporal correlations between subsequent frames are utilized to expedite the calculation of key algorithm parameters. Simulation results show excellent enhancement quality and a 4x speedup compared with frame-wise enhancement algorithms.
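The inversion-plus-dehaze pipeline can be sketched as below. This is a simplified single-frame illustration, not the authors' code: the dehazing here is a textbook dark-channel version, `omega` and `t0` are illustrative defaults, and the paper's 4x speedup would come from reusing parameters (such as the atmospheric light and transmission) across consecutive frames rather than recomputing them per frame as this sketch does.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel prior: minimum over color channels and a local patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def enhance_low_light(frame, omega=0.8, t0=0.1):
    """Invert -> dehaze (simplified dark-channel model) -> invert back.
    frame: float RGB image in [0, 1]."""
    inv = 1.0 - frame                          # inverted low-light frame looks hazy
    A = inv.max()                              # atmospheric light (simplified estimate)
    t = 1.0 - omega * dark_channel(inv / max(A, 1e-6))
    t = np.maximum(t, t0)                      # floor the transmission map
    dehazed = (inv - A) / t[..., None] + A     # invert the haze imaging model
    return np.clip(1.0 - dehazed, 0.0, 1.0)    # un-invert to get the enhanced frame
```

A per-frame version like this is the baseline the paper's temporal-correlation trick speeds up.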

Ming-Hsuan Yang - One of the best experts on this subject based on the ideXlab platform.

  • MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
    Co-Authors: Wenbo Bao, Xiaoyun Zhang, Wei-Sheng Lai, Zhiyong Gao, Ming-Hsuan Yang
    Abstract:

    Motion estimation (ME) and motion compensation (MC) have been widely used in classical video frame interpolation systems over the past decades. Recently, a number of data-driven frame interpolation methods based on convolutional neural networks have been proposed. However, existing learning-based methods typically estimate either flow or compensation kernels, thereby limiting performance in both computational efficiency and interpolation accuracy. In this work, we propose a motion estimation and compensation driven neural network for video frame interpolation. A novel adaptive warping layer is developed to integrate both optical flow and interpolation kernels to synthesize target frame pixels. This layer is fully differentiable, so the flow and kernel estimation networks can be optimized jointly. The proposed model benefits from the advantages of motion estimation and compensation methods without using hand-crafted features. Compared to existing methods, our approach is computationally efficient and generates more visually appealing results. Furthermore, the proposed MEMC-Net architecture can be seamlessly adapted to several video enhancement tasks, e.g., super-resolution, denoising, and deblocking. Extensive quantitative and qualitative evaluations demonstrate that the proposed method performs favorably against state-of-the-art video frame interpolation and enhancement algorithms on a wide range of datasets.

  • MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement
    arXiv: Computer Vision and Pattern Recognition, 2018
    Co-Authors: Wenbo Bao, Xiaoyun Zhang, Wei-Sheng Lai, Zhiyong Gao, Ming-Hsuan Yang
    Abstract:

    Motion estimation (ME) and motion compensation (MC) have been widely used in classical video frame interpolation systems over the past decades. Recently, a number of data-driven frame interpolation methods based on convolutional neural networks have been proposed. However, existing learning-based methods typically estimate either flow or compensation kernels, thereby limiting performance in both computational efficiency and interpolation accuracy. In this work, we propose a motion estimation and compensation driven neural network for video frame interpolation. A novel adaptive warping layer is developed to integrate both optical flow and interpolation kernels to synthesize target frame pixels. This layer is fully differentiable, so the flow and kernel estimation networks can be optimized jointly. The proposed model benefits from the advantages of motion estimation and compensation methods without using hand-crafted features. Compared to existing methods, our approach is computationally efficient and generates more visually appealing results. Furthermore, the proposed MEMC-Net can be seamlessly adapted to several video enhancement tasks, e.g., super-resolution, denoising, and deblocking. Extensive quantitative and qualitative evaluations demonstrate that the proposed method performs favorably against state-of-the-art video frame interpolation and enhancement algorithms on a wide range of datasets.
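A rough sketch of what an adaptive warping layer computes: each output pixel is a kernel-weighted sum of a small window around its flow-displaced source location, combining the flow and kernel estimates in one sampling step. This is illustrative only, not MEMC-Net's layer: it assumes integer flow and precomputed kernels, whereas the real layer samples bilinearly and is differentiable end to end.

```python
import numpy as np

def adaptive_warp(frame, flow, kernels):
    """Simplified adaptive warping.
    frame:   (H, W) grayscale source image
    flow:    (H, W, 2) per-pixel (dy, dx) displacement (integer here for brevity)
    kernels: (H, W, K, K) per-pixel interpolation kernels (each sums to 1)
    """
    H, W = frame.shape
    K = kernels.shape[2]
    r = K // 2
    padded = np.pad(frame, r, mode='edge')
    out = np.zeros_like(frame, dtype=np.float64)
    for y in range(H):
        for x in range(W):
            # flow-displaced source location, clipped to the image
            sy = int(np.clip(y + flow[y, x, 0], 0, H - 1))
            sx = int(np.clip(x + flow[y, x, 1], 0, W - 1))
            # K x K window centered on (sy, sx) in padded coordinates
            window = padded[sy:sy + K, sx:sx + K]
            # kernel-weighted sum blends flow-based and kernel-based sampling
            out[y, x] = (window * kernels[y, x]).sum()
    return out
```

With a centered delta kernel this reduces to plain flow warping; with zero flow it reduces to kernel-based interpolation, which is the point of integrating both.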

Wei Zhou - One of the best experts on this subject based on the ideXlab platform.

  • Histogram Equalization Image Enhancement Based on FPGA: Algorithm Design and Implementation
    International Conference on Frontier Computing, 2018
    Co-Authors: Huihua Jiao, Jieqing Xing, Wei Zhou
    Abstract:

    Histogram equalization is an image enhancement method widely used in image processing. The algorithm is traditionally implemented on a CPU or DSP, but with increasing image resolution and frame rate, CPUs and DSPs struggle to meet the needs of real-time video enhancement. This paper proposes a histogram equalization method based on an FPGA, using parallel pipelined modules to improve the real-time performance of the algorithm. A video enhancement experiment was carried out to verify the real-time performance at a resolution of 1600 × 1200 and 60 frames per second. The real-time image enhancement algorithm was implemented on an FPGA development board, and the experimental results show that it is suitable for real-time enhancement of high-resolution, low-contrast video.
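For reference, the mapping such a pipeline computes is the standard histogram-equalization lookup table, sketched below in software. The FPGA design's parallel pipelining is not reproduced here; one common hardware arrangement (not necessarily this paper's) builds the histogram for the current frame while applying the previous frame's lookup table.

```python
import numpy as np

def histogram_equalize(img):
    """Classic histogram equalization for an 8-bit grayscale image.
    Remaps intensities through the normalized cumulative histogram so
    output levels spread across the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first occupied bin of the CDF
    # standard equalization mapping, scaled to 0..255
    lut = np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]                           # apply the lookup table per pixel
```

The per-pixel work is a single table lookup, which is why the method maps so naturally onto a streaming FPGA pipeline.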

Yasuyuki Matsushita - One of the best experts on this subject based on the ideXlab platform.

  • Full-frame Video stabilization with motion inpainting
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006
    Co-Authors: Yasuyuki Matsushita, Eyal Ofek, Xiaoou Tang, Weina Ge, Heung-Yeung Shum
    Abstract:

    Video stabilization is an important video enhancement technology that aims to remove annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts through local alignment of image data from neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels from neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer that naturally preserves the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments on a wide variety of videos.

  • Full-Frame Video Stabilization
    Computer Vision and Pattern Recognition, 2005
    Co-Authors: Yasuyuki Matsushita, Eyal Ofek, Xiaoou Tang, Heung-Yeung Shum
    Abstract:

    Video stabilization is an important video enhancement technology that aims to remove annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing low-resolution stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts through local alignment of image data from neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels from neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer that naturally preserves the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments on a wide variety of videos.