Residual Coding


The experts below are selected from a list of 6585 experts worldwide, ranked by the ideXlab platform.

Wen Gao - One of the best experts on this subject based on the ideXlab platform.

  • Signal-Independent Separable KLT by Offline Training for Video Coding
    IEEE Access, 2019
    Co-Authors: Kui Fan, Ronggang Wang, Weisi Lin, Ling-yu Duan, Wen Gao
    Abstract:

    After the work on the High Efficiency Video Coding (HEVC) standard, the standardization organizations continued to study the next generation of video coding standard, named Versatile Video Coding (VVC). The compression capability of VVC is expected to improve substantially over HEVC by greatly evolving the candidate coding tools. The transform is a key technique for compression efficiency, and core experiment 6 (CE6) in JVET was established to explore transform-related coding tools. In this paper, we propose a novel signal-independent separable transform based on the Karhunen-Loeve transform (KLT) to improve the efficiency of both intra and inter residual coding. The proposed method addresses the drawbacks of the traditional KLT. A group of mode-independent intra transform matrices is calculated from abundant intra residual samples covering all intra modes, while the inter separable KLT matrices are trained on residuals that cannot be efficiently processed by the discrete cosine transform type II (DCT-II). The separable KLT is deployed as an additional transform type alongside DCT-II. Experimental results show that the proposed method achieves average bitrate savings of 2.7% and 1.5% under the All Intra and Random Access configurations, respectively, on top of the VVC reference software (VTM-1.1). In addition, the consistent performance improvement on the test set validates the signal independence and strong generalization capability of the proposed separable KLT.
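    As a toy illustration of the idea of training a KLT offline from residual samples (illustrative code, not the paper's implementation), the sketch below estimates the covariance of two-sample residual vectors and derives the 2-point KLT in closed form; the trained transform decorrelates the coefficients:

```python
import math
import random

def train_klt_2tap(residual_pairs):
    """Derive the 2-point KLT from zero-mean residual sample pairs.

    The KLT basis consists of the eigenvectors of the sample covariance;
    for a symmetric 2x2 covariance they are given in closed form by a
    rotation angle theta.
    """
    n = len(residual_pairs)
    c00 = sum(a * a for a, _ in residual_pairs) / n
    c11 = sum(b * b for _, b in residual_pairs) / n
    c01 = sum(a * b for a, b in residual_pairs) / n
    theta = 0.5 * math.atan2(2.0 * c01, c00 - c11)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [-s, c]]  # rows are the KLT basis vectors

def make_pair():
    # Synthetic correlated residual pair: the second sample tracks the first.
    a = random.gauss(0.0, 1.0)
    return (a, 0.9 * a + random.gauss(0.0, 0.2))

random.seed(0)
pairs = [make_pair() for _ in range(10000)]
T = train_klt_2tap(pairs)
coeffs = [(T[0][0] * a + T[0][1] * b, T[1][0] * a + T[1][1] * b) for a, b in pairs]
# Cross-covariance of the transform coefficients is ~0: decorrelated.
cross = sum(u * v for u, v in coeffs) / len(coeffs)
```

    In practice the paper trains larger separable matrices on intra and inter residuals separately, but the principle is the same: the transform is fitted to the residual statistics offline, not derived per block.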

  • Sorting Local Descriptors for Low Bit Rate Mobile Visual Search
    2011 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2011
    Co-Authors: Jie Chen, Ling-yu Duan, Hongxun Yao, Wen Gao
    Abstract:

    State-of-the-art mobile visual search systems put emphasis on developing compact visual descriptors [4][6], which enable low bit rate wireless transmission instead of delivering an entire query image. In this paper, we address the orderless nature of the transmitted set of query descriptors. We propose to adapt the order of local descriptors in transmission, which yields more consistent statistical distributions in each feature dimension and thereby more efficient residual-coding-based compression. Our scheme further enables lossy sorting through an adaptive quantization strategy within each feature dimension, which largely improves the compression rate of the residual coding in each dimension. We show that the performance degradation of such lossy sorting is acceptable in our mobile landmark search application. The effectiveness and efficiency of our approach are demonstrated via extensive experimental comparisons to state-of-the-art works in both mobile visual descriptors [2][4] and compact image signatures [5].
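    A minimal sketch of the core idea, sorting the descriptor set before delta-based residual coding (illustrative code under simplified assumptions, not the authors' implementation):

```python
def residual_encode(descriptors):
    """Sort the descriptors, then send per-dimension differences to the
    previous descriptor. Sorting makes neighbours similar, so residuals
    cluster near zero and entropy-code cheaply; since the descriptor set
    is orderless, reordering loses nothing."""
    ordered = sorted(descriptors)
    prev = [0] * len(ordered[0])
    stream = []
    for d in ordered:
        stream.append([x - p for x, p in zip(d, prev)])
        prev = d
    return stream

def residual_decode(stream):
    """Invert the delta coding, recovering the sorted descriptor set."""
    prev = [0] * len(stream[0])
    out = []
    for r in stream:
        prev = [p + x for p, x in zip(prev, r)]
        out.append(prev)
    return out

descs = [[7, 2, 5], [1, 9, 3], [2, 0, 8], [1, 8, 4]]
roundtrip = residual_decode(residual_encode(descs))
```

    The paper's lossy variant additionally quantizes within each dimension before sorting, trading a small retrieval loss for further rate reduction.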

Thomas Wiegand - One of the best experts on this subject based on the ideXlab platform.

  • Residual Coding for Transform Skip Mode in Versatile Video Coding
    2020 Data Compression Conference (DCC), 2020
    Co-Authors: Tung Nguyen, Benjamin Bross, Heiko Schwarz, Detlev Marpe, Thomas Wiegand
    Abstract:

    Support for screen content coding has received increased attention in the latest development in video compression, the upcoming Versatile Video Coding (VVC) standard. Among the dedicated screen content coding tools, the transform skip mode (TSM) represents a promising approach for improving coding efficiency at a low impact on implementation complexity. In this work, we present a dedicated residual coding for transform blocks coded in TSM. Because the energy compaction of the transform is absent, the quantization indexes of blocks coded in TSM have different statistical properties, which can be exploited in the entropy coding. Our coding experiments with screen content sequences yielded bit-rate savings of 3.9% for intra-only coding and 2.8% for typical random access configurations.
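    To see why transform-skip residuals call for different entropy coding, the toy example below (illustrative, not from the paper) contrasts the energy compaction of an orthonormal DCT-II on a smooth residual with the spatial domain that transform skip codes directly:

```python
import math

def dct2_ortho(x):
    """Orthonormal 1-D DCT-II, so signal energy is preserved (Parseval)."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def top2_energy_fraction(v):
    # Fraction of total energy held by the two largest-magnitude entries.
    e = sorted((c * c for c in v), reverse=True)
    return sum(e[:2]) / sum(e)

smooth = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]  # smooth, camera-like residual
frac_dct = top2_energy_fraction(dct2_ortho(smooth))  # energy packs into 2 coeffs
frac_skip = top2_energy_fraction(smooth)             # energy spread over samples
```

    With the transform skipped, significant samples are spread across the whole block rather than clustered at low frequencies, which is why TSM drops transform-domain assumptions such as the last-significant-position signaling.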

  • Data-Driven Optimization of Row-Column Transforms for Block-Based Hybrid Video Compression
    Picture Coding Symposium, 2019
    Co-Authors: Mischa Siekmann, Heiko Schwarz, Detlev Marpe, Sebastian Bosse, Thomas Wiegand
    Abstract:

    In state-of-the-art video compression, residual coding is performed by transforming the prediction error signal into a less correlated representation and carrying out quantization and entropy coding in the transform domain. For complexity reasons, separable transforms are usually used. A more flexible transform structure is given by row-column transforms, which apply a separate transform to each row and each column of a signal block. This paper describes a method for training such structured transforms by maximizing the data likelihood under a parameterized probabilistic model with an imposed structure. An explicit model is derived for the case of row-column transforms, and its efficiency is demonstrated in the application of video compression. It is shown that trained row-column transforms achieve almost the same coding gain as unconstrained KLTs when applied as secondary transforms, while encoder and decoder runtimes are the same as in the separable-transform case.
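    The row-column structure can be sketched as follows (hypothetical helper names; the separable transform is recovered as the special case where all per-row and per-column matrices coincide):

```python
def matvec(M, v):
    # Plain matrix-vector product on nested lists.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def row_column_transform(X, row_Ts, col_Ts):
    """Apply a distinct transform to each row, then to each column.

    row_Ts[i] transforms row i and col_Ts[j] transforms column j; with
    all matrices equal to one T this reduces to the separable T @ X @ T^T.
    """
    n = len(X)
    Y1 = [matvec(row_Ts[i], X[i]) for i in range(n)]
    cols = [[Y1[i][j] for i in range(n)] for j in range(n)]   # transpose
    Ycols = [matvec(col_Ts[j], cols[j]) for j in range(n)]
    return [[Ycols[j][i] for j in range(n)] for i in range(n)]  # transpose back

T = [[1, 1], [1, -1]]  # 2-point Hadamard as a stand-in transform
X = [[1, 2], [3, 4]]
Y = row_column_transform(X, [T, T], [T, T])  # separable special case
```

    The extra freedom is the per-row/per-column matrices; the paper's contribution is training them jointly under a likelihood criterion rather than picking them by hand.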

  • Generalized Binary Splits: A Versatile Partitioning Scheme for Block-Based Hybrid Video Coding
    Picture Coding Symposium, 2019
    Co-Authors: Adam Wieckowski, Heiko Schwarz, Detlev Marpe, Valeri George, Thomas Wiegand
    Abstract:

    Block partitioning is the basis of every modern hybrid video coding standard. It specifies how the video pictures can be subdivided into blocks for prediction and residual coding. In H.265/HEVC, quad-tree partitioning is one of the key technologies, allowing flexible mode allocation and providing a substantial part of the gains over H.264/AVC. The current draft of the upcoming Versatile Video Coding (VVC) standard provides over 30% bit-rate savings over HEVC, and almost one third of that gain comes from a more flexible partitioning scheme than the quad-tree used in HEVC. In this paper, we describe a partitioning concept that generalizes many of the ideas developed during the exploration and early standardization phase of VVC. In fact, our method includes the VVC partitioning as well as many other state-of-the-art methods. The proposed method can be implemented in a fully configurable design. For instance, it can be configured to match the performance of VTM-1.0 in much less runtime (69%), or to obtain additional bit-rate savings of up to 3% by exploiting additional degrees of freedom.
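    The flavour of such a generalized binary partitioning can be sketched with a configurable recursive splitter (illustrative; the toy split rule stands in for the encoder's rate-distortion decision):

```python
def partition(x, y, w, h, choose_split, leaves):
    """Recursively apply binary splits chosen by a pluggable rule.

    choose_split(w, h) returns None for a leaf, or ('h'|'v', num, den)
    describing a horizontal/vertical split at ratio num/den, which
    generalizes the symmetric (1/2) and asymmetric (e.g. 1/4) binary
    splits explored for VVC.
    """
    split = choose_split(w, h)
    if split is None:
        leaves.append((x, y, w, h))
        return
    direction, num, den = split
    if direction == 'v':
        w1 = w * num // den
        partition(x, y, w1, h, choose_split, leaves)
        partition(x + w1, y, w - w1, h, choose_split, leaves)
    else:
        h1 = h * num // den
        partition(x, y, w, h1, choose_split, leaves)
        partition(x, y + h1, w, h - h1, choose_split, leaves)

def rule(w, h):
    # Toy rule: halve the longer side until blocks reach 4x4.
    if w <= 4 and h <= 4:
        return None
    return ('v', 1, 2) if w >= h else ('h', 1, 2)

leaves = []
partition(0, 0, 16, 16, rule, leaves)  # 16 non-overlapping 4x4 leaves
```

    The configurability lives in the split rule and the allowed ratio set: restricting them recovers specific schemes such as the VVC multi-type tree, while widening them exposes the extra degrees of freedom mentioned in the abstract.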

  • Transform Skip Residual Coding for the Versatile Video Coding Standard
    Applications of Digital Image Processing XLII, 2019
    Co-Authors: Benjamin Bross, Tung Nguyen, Heiko Schwarz, Detlev Marpe, Thomas Wiegand
    Abstract:

    The development of the emerging Versatile Video Coding (VVC) standard was motivated by the need for significant bit-rate reductions for natural video content as well as content for different applications, such as computer-generated screen content. The signal characteristics of screen content video differ from those of natural content: they include sharp edges as well as flat areas of the same color. In block-based hybrid video coding designs, as employed in VVC and its predecessor standards, skipping the transform stage of the prediction residual can be beneficial for screen content signals due to the different residual signal characteristics. In this paper, a modified transform coefficient level coding tailored to transform skip residual signals is presented. This includes no signaling of the last significant position, a coded block flag for every subblock, modified context modeling and binarization, as well as a limit on the number of context-coded bins per sample. Experimental results show bit-rate savings of up to 3.45% and 9.55% for two different classes of screen content test sequences coded in a random access configuration.
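    The structural change can be sketched as a toy bitstream writer (heavily simplified: no context modeling or binarization, levels written as raw values). No last significant position is sent; instead, a coded-subblock flag precedes each subblock, since skipped residuals are not compacted toward the top-left:

```python
def encode_ts_block(block, sub=2):
    """Toy transform-skip residual syntax: a coded flag per subblock,
    then significance flags and (simplified, raw) levels inside coded
    subblocks. No last-significant-position is signaled."""
    n = len(block)
    stream = []
    for by in range(0, n, sub):
        for bx in range(0, n, sub):
            vals = [block[y][x] for y in range(by, by + sub)
                                for x in range(bx, bx + sub)]
            coded = any(vals)
            stream.append(int(coded))
            if coded:
                for v in vals:
                    stream.append(int(v != 0))
                    if v:
                        stream.append(v)
    return stream

def decode_ts_block(stream, n=4, sub=2):
    """Invert the toy syntax, reconstructing the residual block."""
    it = iter(stream)
    block = [[0] * n for _ in range(n)]
    for by in range(0, n, sub):
        for bx in range(0, n, sub):
            if next(it):
                for y in range(by, by + sub):
                    for x in range(bx, bx + sub):
                        if next(it):
                            block[y][x] = next(it)
    return block

blk = [[0, 0, 0, 5],
       [0, 0, 1, 0],
       [2, 0, 0, 0],
       [0, 3, 0, 0]]
bits = encode_ts_block(blk)
```

    The actual VVC design additionally adapts contexts to neighbouring significance, reverses the level binarization, and caps context-coded bins per sample, but the syntax skeleton above shows what is dropped and what is added relative to transform-domain coefficient coding.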

Eckehard Steinbach - One of the best experts on this subject based on the ideXlab platform.

  • Fast Motion Estimation-Based Reference Frame Generation in Wyner-Ziv Residual Video Coding
    International Conference on Multimedia and Expo, 2007
    Co-Authors: Hu Chen, Eckehard Steinbach
    Abstract:

    In practical Wyner-Ziv video coding, every frame is encoded independently of the others but decoded using side information generated from adjacent frames. In Wyner-Ziv residual coding of video, the residual of a frame with respect to a reference frame is Wyner-Ziv encoded, which leads to higher coding efficiency than directly Wyner-Ziv encoding the original frame. In previous work, the reference frame is simply copied from the previously reconstructed frame. In this paper, we generate the reference frame at the encoder using a low-complexity fast motion search. Experimental results show that the proposed scheme provides a significant gain in coding efficiency while only slightly increasing the encoding complexity. Using our approach, Wyner-Ziv residual video encoding becomes flexible and allows us to trade off encoding complexity against overall rate-distortion performance. Moreover, the reference frame can help refine the side information generated at the decoder, which contributes to improving the overall video coding efficiency.
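    The reference-generation step can be sketched as plain block matching at the encoder (illustrative names; the paper uses a fast search pattern rather than this exhaustive toy):

```python
import random

def generate_reference(prev, cur, block=4, search=2):
    """For each block of `cur`, find the best integer-offset match in
    `prev` (SAD criterion) and assemble the motion-compensated reference
    frame; the Wyner-Ziv coder then encodes the residual cur - reference."""
    h, w = len(cur), len(cur[0])
    ph, pw = len(prev), len(prev[0])
    ref = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            best, best_off = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    if not (0 <= by + dy and by + dy + block <= ph
                            and 0 <= bx + dx and bx + dx + block <= pw):
                        continue  # candidate block falls outside `prev`
                    sad = sum(abs(cur[by + y][bx + x] -
                                  prev[by + dy + y][bx + dx + x])
                              for y in range(block) for x in range(block))
                    if best is None or sad < best:
                        best, best_off = sad, (dy, dx)
            dy, dx = best_off
            for y in range(block):
                for x in range(block):
                    ref[by + y][bx + x] = prev[by + dy + y][bx + dx + x]
    return ref

random.seed(1)
prev = [[random.randrange(256) for _ in range(10)] for _ in range(8)]
cur = [[prev[y][x + 1] for x in range(8)] for y in range(8)]  # pure 1-px pan
ref = generate_reference(prev, cur)
residual = [[cur[y][x] - ref[y][x] for x in range(8)] for y in range(8)]
```

    For a pure pan within the search range the compensated reference matches the current frame exactly, so the residual collapses to zero; in general the residual has far lower energy than the frame itself, which is what makes Wyner-Ziv residual coding pay off.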