Motion Vector

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 52,764 Experts worldwide ranked by the ideXlab platform

Mathias Wien - One of the best experts on this subject based on the ideXlab platform.

  • decoder side Motion Vector derivation for block based video coding
    IEEE Transactions on Circuits and Systems for Video Technology, 2012
    Co-Authors: Steffen Kamp, Mathias Wien
    Abstract:

A decoder-side Motion Vector derivation algorithm for hybrid video coding is proposed. The algorithm is based on template matching and aims to reduce the Motion parameter bit-rate by re-estimating the applicable Motion parameters at the decoder side. Average bit-rate savings of about 6%-8% are observed compared to the H.264/AVC reference. Decoder-side Motion Vector derivation was included in multiple proposals for the new High Efficiency Video Coding standard. This paper details and analyzes the algorithm and discusses its relation to other coding tools.
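
    The template-matching idea can be sketched as follows. This is a minimal illustration of the principle, not the authors' implementation; the function name, parameters, and toy data are hypothetical. The decoder takes the L-shaped template of already-reconstructed pixels above and to the left of the current block and searches the reference frame for the displacement whose template matches best:

```python
import numpy as np

def derive_mv_by_template_matching(ref, cur, bx, by, bsize, search=2, t=1):
    """Derive a motion vector at the decoder by matching the L-shaped
    template (t rows above and t columns left of the block at (bx, by))
    against the reference frame. Returns the (dy, dx) minimizing the SAD."""
    # Template pixels come from the *reconstructed* current frame, so the
    # encoder and decoder can derive the identical vector.
    top = cur[by - t:by, bx - t:bx + bsize]
    left = cur[by:by + bsize, bx - t:bx]
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = by + dy, bx + dx
            r_top = ref[ry - t:ry, rx - t:rx + bsize]
            r_left = ref[ry:ry + bsize, rx - t:rx]
            sad = np.abs(top - r_top).sum() + np.abs(left - r_left).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```

    Because the template only uses pixels both sides have already decoded, no motion bits need to be transmitted for blocks coded this way.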

  • fast decoder side Motion Vector derivation for inter frame video coding
    Picture Coding Symposium, 2009
    Co-Authors: Steffen Kamp, Benjamin Bross, Mathias Wien
    Abstract:

Decoder-side Motion Vector derivation (DMVD) using template matching has been shown to improve the coding efficiency of H.264/AVC-based video coding. Instead of explicitly coding Motion Vectors into the bitstream, the decoder performs Motion estimation to derive the Motion Vector used for Motion-compensated prediction. In previous work, DMVD was performed using a full template matching search over a limited search range. In this paper, a candidate-based fast search algorithm replaces the full search. While the complexity reduction, especially for the decoder, is quite significant, the coding efficiency remains comparable. Whereas the full search algorithm yields BD-rate savings of 7.4% averaged over CIF and HD sequences according to the VCEG common conditions for the IPPP High profile, the proposed fast search achieves bitrate reductions of up to 7.5% on average. Even when sub-pel refinement is omitted, average savings for CIF and HD are still up to 7%.
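
    The fast-search idea reduces to evaluating a short candidate list instead of an exhaustive window. The sketch below is a simplification under assumed names; a real codec would use the template SAD as the cost and neighbouring blocks' vectors as candidates:

```python
def derive_mv_fast(cost, candidates):
    """Candidate-based fast search: instead of scanning a full search
    window, evaluate only a short candidate list (e.g. the zero vector
    and the MVs of already-decoded neighbouring blocks) and keep the
    candidate with the lowest matching cost."""
    return min(candidates, key=cost)

# Toy cost minimized at (1, 2), standing in for the template-matching SAD.
cost = lambda mv: abs(mv[0] - 1) + abs(mv[1] - 2)
best = derive_mv_fast(cost, [(0, 0), (1, 2), (2, 2), (-1, 0)])  # -> (1, 2)
```

    The decoder-side saving comes from evaluating a handful of candidates rather than the (2s+1)^2 positions of a full search of range s.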

  • decoder side Motion Vector derivation for inter frame video coding
    International Conference on Image Processing, 2008
    Co-Authors: Steffen Kamp, M Evertz, Mathias Wien
    Abstract:

In this paper, a decoder-side Motion Vector derivation scheme for inter-frame video coding is proposed. Using a template matching algorithm, Motion information is derived at the decoder instead of being explicitly coded into the bitstream. Based on Lagrangian rate-distortion optimisation, the encoder locally signals whether Motion derivation or forward Motion coding is used. While our method exploits multiple reference pictures for improved prediction performance and bitrate reduction, only a small template matching search range is required. Derived Motion information is reused to improve the performance of predictive Motion Vector coding in subsequent blocks. An efficient conditional signalling scheme for Motion derivation in Skip blocks is employed. The Motion Vector derivation method has been implemented as an extension to H.264/AVC. Simulation results show that the proposed scheme achieves a bitrate reduction of up to 10.4% over H.264/AVC.
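
    The Lagrangian mode decision mentioned above can be sketched in a few lines. The per-block distortion and rate numbers below are hypothetical, chosen only to show how the encoder's choice flips with the Lagrange multiplier:

```python
def choose_mode(modes, lam):
    """Lagrangian mode decision: pick the mode minimizing J = D + lam * R,
    where D is the block distortion and R the rate in bits."""
    return min(modes, key=lambda m: modes[m][0] + lam * modes[m][1])

# Hypothetical per-block numbers: decoder-side derivation spends almost no
# bits (only a mode flag) at a small distortion penalty versus explicit MVs.
modes = {"derive": (120.0, 1), "explicit": (100.0, 14)}
choose_mode(modes, lam=2.0)   # J: 122.0 vs 128.0 -> "derive"
choose_mode(modes, lam=0.5)   # J: 120.5 vs 107.0 -> "explicit"
```

    At high lambda (low bitrates) the rate term dominates and the cheap derivation mode wins, which is consistent with motion bits being the scarce resource there.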

Joël Jung - One of the best experts on this subject based on the ideXlab platform.

  • rate distortion data hiding of Motion Vector competition information in chroma and luma samples for video compression
    IEEE Transactions on Circuits and Systems for Video Technology, 2011
    Co-Authors: Jean-marc Thiesse, Joël Jung, Marc Antonini
    Abstract:

New standardization activities have recently been launched by the JCT-VC experts group in order to challenge the current video compression standard, H.264/AVC. Several improvements of this standard, previously integrated in the JM Key Technical Area software, are already known and gathered in the High Efficiency Video Coding test model. In particular, competition-based Motion Vector prediction has proved its efficiency. However, the targeted 50% bitrate saving for equivalent quality has not yet been achieved. In this context, this paper proposes to reduce the signaling information resulting from this Motion Vector competition by using data hiding techniques. As data hiding and video compression traditionally have contradictory goals, an in-depth study of data hiding schemes is first performed. Then, an original way of using data hiding for video compression is proposed. The main idea of this paper is to hide the competition index in appropriately selected chroma and luma transform coefficients. To minimize the prediction errors, the transform coefficients are modified via a rate-distortion optimization. The proposed scheme is evaluated on several low- and high-resolution sequences. Objective improvements (up to 2.40% bitrate saving) and a subjective assessment of the chroma loss are reported.
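
    A minimal sketch of the hiding principle, assuming a parity-based embedding (the paper's actual embedding and its RD-optimized coefficient selection are more elaborate; function names and the heuristic below are our own):

```python
def hide_index_bit(coeffs, bit):
    """Hide one bit of the MV-competition index in the parity of the sum
    of quantized transform coefficients. If the parity already matches,
    nothing changes; otherwise nudge the largest-magnitude coefficient by
    +/-1 (a crude stand-in for a rate-distortion-optimized choice of
    which coefficient to modify)."""
    coeffs = list(coeffs)
    if sum(coeffs) % 2 != bit:
        i = max(range(len(coeffs)), key=lambda k: abs(coeffs[k]))
        coeffs[i] += 1 if coeffs[i] >= 0 else -1
    return coeffs

def extract_index_bit(coeffs):
    """The decoder recovers the hidden index bit for free from the parity:
    no explicit signaling bit is spent in the bitstream."""
    return sum(coeffs) % 2
```

    The rate saving comes from the extracted bit replacing an explicitly coded syntax element, at the cost of a bounded coefficient perturbation.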

  • Motion Vector quantization for efficient low bit rate video coding
    Visual Communications and Image Processing, 2009
    Co-Authors: Marco Cagnazzo, Guillaume Laroche, Marc Antonini, M A Agostini, Joël Jung
    Abstract:

The most recent video coding standard, H.264, achieves excellent compression performance at many different bit-rates. However, it has been noted that, at very high compression ratios, a large part of the available coding resources is used only to code Motion Vectors. This can lead to suboptimal coding performance. This paper introduces a new coding mode for an H.264-based video coder, using quantized Motion Vectors (QMV) to improve the management of the resource allocation between Motion information and transform coefficients. Several problems must be addressed to obtain an efficient implementation of QMV techniques, yet encouraging results are reported in preliminary tests, improving the performance of H.264 at low bit-rates over several sequences.
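
    The core operation can be sketched as rounding each vector component to a coarser grid. This is an illustrative simplification under our own naming, not the paper's coder:

```python
def quantize_mv(mv, step):
    """Quantize each motion-vector component (in quarter-pel units) to the
    nearest multiple of `step`: larger steps mean fewer distinct vectors
    and hence fewer bits, at the cost of coarser motion compensation."""
    return tuple(int(round(c / step)) * step for c in mv)

# A quarter-pel vector coded at full-pel accuracy (step 4):
quantize_mv((13, -7), 4)  # -> (12, -8)
```

    The rate/accuracy trade-off is then governed by `step`: at very low bitrates, spending fewer bits on motion can leave more for transform coefficients.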

  • rd optimized coding for Motion Vector predictor selection
    IEEE Transactions on Circuits and Systems for Video Technology, 2008
    Co-Authors: Guillaume Laroche, Joël Jung, Beatrice Pesquetpopescu
    Abstract:

The H.264/MPEG4-AVC video coding standard has achieved higher coding efficiency than its predecessors. The significant bitrate reduction is mainly obtained by efficient Motion compensation tools, such as variable block sizes, multiple reference frames, 1/4-pel Motion accuracy, and powerful prediction modes (e.g., SKIP and DIRECT). These tools have contributed to an increased proportion of the Motion information in the total bitstream. To achieve the performance required by the future ITU-T challenge, namely to provide a codec with a 50% bitrate reduction compared to the current H.264, reducing this Motion information cost is essential. This paper proposes a competing framework for better Motion Vector coding and SKIP mode. The predictors for the SKIP mode and the Motion Vector predictors are optimally selected by a rate-distortion criterion. These methods take advantage of the spatial and temporal redundancies in the Motion Vector fields, where the simple spatial median usually fails. An adaptation of the temporal predictors according to the temporal distances between Motion Vector fields is also described for multiple reference frames and B-slices. These two combined schemes lead to systematic bitrate savings on Baseline and High profiles compared to an H.264/MPEG4-AVC standard codec, reaching up to 45%.
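
    The selection step can be sketched as picking, from a set of competing predictors, the one minimizing the bits for the MV residual plus the predictor index. The rate model below is a rough exp-Golomb-style approximation of our own, not the paper's entropy coder:

```python
import math

def mvd_bits(mvd):
    """Crude rate model: about 2*floor(log2(|v|+1)) + 1 bits per component,
    mimicking an exp-Golomb code for the motion vector difference."""
    return sum(2 * math.floor(math.log2(abs(v) + 1)) + 1 for v in mvd)

def select_predictor(mv, predictors, index_bits=1):
    """Pick the predictor minimizing bits for the MV residual plus the
    predictor index -- the rate part of the RD criterion (the distortion
    term is unchanged by the predictor choice in this simplified view)."""
    costs = [mvd_bits((mv[0] - px, mv[1] - py)) + index_bits
             for px, py in predictors]
    return costs.index(min(costs))
```

    With, say, a spatial median and a temporal predictor competing, the index costs one extra bit but often pays for itself through a much shorter residual, which is where the reported savings come from.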

Yeping Su - One of the best experts on this subject based on the ideXlab platform.

  • global Motion estimation from coarsely sampled Motion Vector field and the applications
    IEEE Transactions on Circuits and Systems for Video Technology, 2005
    Co-Authors: Yeping Su
    Abstract:

Global Motion estimation is a powerful tool widely used in video processing and compression as well as in computer vision. We propose a new approach for estimating global Motion from coarsely sampled Motion Vector fields. The proposed method minimizes the fitting error between the input Motion Vectors and the Motion Vectors generated from the estimated Motion model, using the Newton-Raphson method with outlier rejection. Applications of the proposed method in video coding include fast global Motion estimation for MPEG-4 Advanced Simple Profile coding, MPEG-2 to MPEG-4 ASP transcoding, and error concealment. Simulation results and analyses are provided for the proposed method and the applications, showing the effectiveness of the method in terms of accuracy, robustness, and speed.
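
    The model-fitting-with-outlier-rejection loop can be sketched as follows. For an affine motion model the fit is linear, so plain least squares is used here as a stand-in for the paper's Newton-Raphson iteration (needed for perspective models); names and thresholds are our own:

```python
import numpy as np

def fit_global_motion(pts, mvs, iters=3, thresh=2.0):
    """Fit an affine global-motion model mv(x, y) = A^T (1, x, y) to a
    coarsely sampled motion-vector field by least squares, re-fitting
    after rejecting outlier vectors (residual norm > thresh), e.g. those
    belonging to independently moving foreground objects."""
    pts, mvs = np.asarray(pts, float), np.asarray(mvs, float)
    design = np.column_stack([np.ones(len(pts)), pts])  # rows (1, x, y)
    keep = np.ones(len(pts), bool)
    for _ in range(iters):
        A, *_ = np.linalg.lstsq(design[keep], mvs[keep], rcond=None)
        resid = np.linalg.norm(design @ A - mvs, axis=1)
        keep = resid < thresh
    return A  # 3x2: rows are (offset, x-coeff, y-coeff) for (mvx, mvy)
```

    Because the input field is coarsely sampled (e.g. one vector per macroblock), each iteration is cheap, which is what makes the approach fast enough for transcoding and concealment use cases.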

  • global Motion estimation from coarsely sampled Motion Vector field and the applications
    International Symposium on Circuits and Systems, 2003
    Co-Authors: Yeping Su
    Abstract:

    In this paper, we propose a new approach for estimating global Motions from a coarsely sampled Motion Vector field. The proposed method minimizes the fitting error between the input Motion Vectors and the Motion Vectors generated from the estimated Motion model using the Newton-Raphson method with outlier rejections. Applications of the proposed method in video coding include fast global Motion estimation for MPEG-4 Advanced Simple Profile (ASP) coding, MPEG-2 to MPEG-4 ASP transcoding, and error concealment of Motion Vectors. Simulation results and analyses are provided for the proposed method and applications, which show the accuracy and robustness of the proposed algorithm.

Hanli Wang - One of the best experts on this subject based on the ideXlab platform.

  • real time action recognition with deeply transferred Motion Vector cnns
    IEEE Transactions on Image Processing, 2018
    Co-Authors: Bowen Zhang, Limin Wang, Zhe Wang, Yu Qiao, Hanli Wang
    Abstract:

Two-stream CNNs have proven very successful for video-based action recognition. However, the classical two-stream CNNs are time-costly, mainly due to the bottleneck of calculating optical flow (OF). In this paper, we propose a two-stream-based real-time action recognition approach that uses Motion Vectors (MVs) to replace OF. MVs are encoded in the video stream and can be extracted directly without extra calculation. However, directly training a CNN on MVs degrades accuracy severely due to the noise and the lack of fine detail in MVs. To mitigate this problem, we propose four training strategies which leverage the knowledge learned by the OF CNN to enhance the accuracy of the MV CNN. Our insight is that MV and OF share inherently similar structures, which allows us to transfer knowledge from one domain to another. To fully utilize the knowledge learned in the OF domain, we develop a deeply transferred MV CNN. Experimental results on various datasets show the effectiveness of our training strategies. Our approach is significantly faster than OF-based approaches and achieves a processing speed of 390.7 frames per second, surpassing the real-time requirement. We release our model and code at https://github.com/zbwglory/MV-release to facilitate further research.
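
    Two of the transfer strategies can be sketched schematically (a toy illustration with our own names, using plain arrays in place of real networks; the paper's exact losses may differ):

```python
import numpy as np

def init_transfer(of_params):
    """Initialization transfer: start the MV network from a copy of the
    optical-flow network's trained weights rather than a random init,
    exploiting the structural similarity of the two input modalities."""
    return {name: w.copy() for name, w in of_params.items()}

def supervision_transfer_loss(mv_logits, of_logits):
    """Supervision transfer: an extra training loss pushing the MV
    network's outputs towards the optical-flow teacher's outputs on the
    same clip (plain mean-squared error on the logits here)."""
    mv, of = np.asarray(mv_logits), np.asarray(of_logits)
    return float(np.mean((mv - of) ** 2))
```

    In the combined strategy, the MV network is both initialized from the OF network and trained against its outputs, so the teacher's fine-grained motion knowledge compensates for the noise in MVs.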

  • real time action recognition with enhanced Motion Vector cnns
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Bowen Zhang, Limin Wang, Zhe Wang, Yu Qiao, Hanli Wang
    Abstract:

The deep two-stream architecture exhibits excellent performance on video-based action recognition. The most computationally expensive step in this approach is the calculation of optical flow, which prevents it from running in real time. This paper accelerates the architecture by replacing optical flow with Motion Vectors, which can be obtained directly from compressed videos without extra calculation. However, Motion Vectors lack fine structure and contain noisy and inaccurate Motion patterns, leading to an evident degradation of recognition performance. Our key insight for relieving this problem is that optical flow and Motion Vectors are inherently correlated, so transferring the knowledge learned by an optical flow CNN to a Motion Vector CNN can significantly boost the performance of the latter. Specifically, we introduce three strategies for this: initialization transfer, supervision transfer, and their combination. Experimental results show that our method achieves recognition performance comparable to the state of the art, while processing 390.7 frames per second, which is 27 times faster than the original two-stream method.
