Loop Filter

The Experts below are selected from a list of 9219 Experts worldwide ranked by ideXlab platform

Xinfeng Zhang - One of the best experts on this subject based on the ideXlab platform.

  • VCIP - Spatial-temporal residue network based in-Loop Filter for video coding
    2017 IEEE Visual Communications and Image Processing (VCIP), 2017
    Co-Authors: Shiqi Wang, Shanshe Wang, Xinfeng Zhang, Siwei Ma
    Abstract:

    Deep learning has demonstrated tremendous breakthroughs in the area of image/video processing. In this paper, a spatial-temporal residue network (STResNet) based in-loop filter is proposed to suppress visual artifacts such as blocking and ringing in video coding. Specifically, spatial and temporal information is exploited jointly by taking both the current block and the co-located block in the reference frame into consideration during in-loop filtering. The architecture of STResNet consists of only four convolutional layers, which keeps its memory footprint and coding complexity modest. Moreover, to fully adapt to the input content and improve the performance of the proposed in-loop filter, a coding tree unit (CTU) level control flag is applied in the sense of rate-distortion optimization. Extensive experimental results show that our scheme provides up to 5.1% bit-rate reduction compared to the state-of-the-art video coding standard.
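The CTU-level switching described in the abstract amounts to a rate-distortion decision: the filter is enabled for a block only when the distortion saving outweighs the cost of signaling the flag. A minimal sketch, assuming a 1-D toy block, a one-bit flag cost, and an arbitrary lambda (none of these are the paper's exact values):

```python
# Hedged sketch: per-CTU on/off decision for an in-loop filter
# using a rate-distortion cost J = D + lambda * R.

def sse(block, reference):
    """Sum of squared errors between two equally sized blocks."""
    return sum((a - b) ** 2 for a, b in zip(block, reference))

def ctu_filter_flag(original, reconstructed, filtered, lam=10.0, flag_bits=1):
    """Return True if filtering this CTU lowers the RD cost.

    lam and the one-bit flag cost are illustrative assumptions.
    """
    cost_off = sse(original, reconstructed) + lam * flag_bits
    cost_on = sse(original, filtered) + lam * flag_bits
    return cost_on < cost_off

# Toy 1-D "CTU": filtering moves samples closer to the original.
orig = [10, 12, 14, 16]
recon = [8, 15, 11, 19]
filt = [9, 13, 13, 17]
print(ctu_filter_flag(orig, recon, filt))  # True: filtering reduces distortion
```

The same comparison runs the other way when the filter would hurt a block, so each CTU independently keeps whichever reconstruction costs less.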

  • Low-rank based nonlocal adaptive Loop Filter for high efficiency video compression
    IEEE Transactions on Circuits and Systems for Video Technology, 2017
    Co-Authors: Xinfeng Zhang, Ruiqin Xiong, Jian Zhang, Shiqi Wang
    Abstract:

    In video coding, in-loop filtering has emerged as a key module due to its significant improvement in compression performance since H.264/Advanced Video Coding. The in-loop filters incorporated in existing video coding standards mainly take advantage of the local smoothness prior model used for images. In this paper, we propose a novel adaptive loop filter utilizing nonlocal image prior knowledge by imposing a low-rank constraint on similar image patches for compression noise reduction. In the filtering process, the reconstructed frame is first divided into image patch groups according to patch similarity. The proposed in-loop filtering is formulated as an optimization problem with a low-rank constraint for every group of image patches independently. It can be solved efficiently by soft-thresholding the singular values of the matrix composed of the image patches in each group. To adapt to the properties of the input sequences and the bit budget, an adaptive threshold derivation model is established for every group of image patches according to the characteristics of the compressed image patches, the quantization parameters, and the coding modes. Moreover, frame-level and largest-coding-unit-level control flags are signaled to further improve adaptability in the sense of rate-distortion optimization. The performance of the proposed in-loop filter is analyzed when it collaborates with the existing in-loop filters in High Efficiency Video Coding. Extensive experimental results show that our proposed in-loop filter can further improve the performance of the state-of-the-art video coding standard significantly, with up to 16% bit-rate savings.
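The core solver step above, soft-thresholding the singular values of each patch-group matrix, can be illustrated in isolation. In this sketch the singular values and the threshold are made-up numbers; a real implementation would obtain them from an SVD of the patch-group matrix and derive the threshold per group from the quantization parameter and coding mode:

```python
def soft_threshold(x, tau):
    """Soft-thresholding (shrinkage) operator: sign(x) * max(|x| - tau, 0)."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

# Singular values of a hypothetical patch-group matrix (assumed values).
sigmas = [25.0, 9.0, 3.5, 1.2, 0.4]
tau = 2.0  # illustrative group-wise threshold
shrunk = [soft_threshold(s, tau) for s in sigmas]
print(shrunk)  # [23.0, 7.0, 1.5, 0.0, 0.0]
```

The small singular values, which mostly carry compression noise, are zeroed out; this is what enforces the low-rank structure on the filtered patch group.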

  • VCIP - Transform-domain in-Loop Filter with block similarity for HEVC
    2016 Visual Communications and Image Processing (VCIP), 2016
    Co-Authors: Xinfeng Zhang, Ke Gu, Qiaohong Li, Shanshe Wang, Siwei Ma
    Abstract:

    In-loop filtering is an important technique in modern video coding standards. In this paper, we propose a transform-domain in-loop filter to further improve the compression performance of the High Efficiency Video Coding (HEVC) standard. The proposed method estimates block transform coefficients by adaptively fusing two prediction sources according to their respective uncertainties. The first prediction is the block transform coefficients of the compressed video frames, whose uncertainty is related to the quantization parameters. The second prediction is the weighted average of transform blocks in a neighborhood, with weights designed according to block similarity; its uncertainty is estimated from the coefficient variance. To optimize the filtering performance, the parameters used in the proposed in-loop filter are learned offline from compressed videos for each quantization parameter, and frame-level flags switch the proposed in-loop filter on or off according to rate-distortion cost. Extensive experimental results show that the proposed in-loop filter further improves the compression efficiency of HEVC.
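The fusion step above can be written down directly: two estimates of the same transform coefficient are combined with weights inversely proportional to their variances, so the more reliable source dominates. A small sketch with assumed coefficient and variance values (the paper learns these relationships offline; the numbers here are purely illustrative):

```python
def fuse(coeff_compressed, var_q, coeff_nonlocal, var_n):
    """Fuse two estimates of a transform coefficient by inverse-variance weighting."""
    w1 = 1.0 / var_q
    w2 = 1.0 / var_n
    return (w1 * coeff_compressed + w2 * coeff_nonlocal) / (w1 + w2)

# Assumed numbers: the decoded coefficient is noisy (large quantization
# variance), the nonlocal neighborhood average is more reliable (small variance).
print(fuse(40.0, 16.0, 34.0, 4.0))  # 35.2: pulled toward the reliable estimate
```

When both sources agree, the fusion leaves the coefficient unchanged regardless of the weights, which is why the scheme degrades gracefully where the nonlocal prediction adds nothing.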

  • Nonlocal In-Loop Filter: The Way Toward Next-Generation Video Coding?
    IEEE MultiMedia, 2016
    Co-Authors: Siwei Ma, Xinfeng Zhang, Jian Zhang, Shiqi Wang
    Abstract:

    In-loop filtering has emerged as an essential coding tool since H.264/AVC, due to its delicate design, which reduces different kinds of compression artifacts. However, existing in-loop filters rely only on local image correlations, largely ignoring nonlocal similarities. In this article, the authors explore the design philosophy of in-loop filters and discuss their vision for the future of in-loop filter research by examining the potential of nonlocal similarities. Specifically, the group-based sparse representation, which jointly exploits an image's local and nonlocal self-similarities, lays novel and meaningful groundwork for in-loop filter design. Hard- and soft-thresholding filtering operations are applied to derive the sparse parameters appropriate for compression artifact reduction. Experimental results show that this in-loop filter design can significantly improve the compression performance of the High Efficiency Video Coding (HEVC) standard, pointing to a new direction for improving compression efficiency.
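The hard- and soft-thresholding operations mentioned above differ in how they treat coefficients that survive the threshold: hard thresholding keeps them unchanged, while soft thresholding also shrinks them by the threshold amount. A side-by-side sketch on assumed coefficient values (the threshold is arbitrary, not a value from the article):

```python
def hard_threshold(x, tau):
    """Keep x unchanged if |x| > tau, else zero it."""
    return x if abs(x) > tau else 0.0

def soft_threshold(x, tau):
    """Shrink x toward zero by tau; zero it if |x| <= tau."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

coeffs = [6.0, -3.0, 1.5, -0.5]
tau = 2.0
print([hard_threshold(c, tau) for c in coeffs])  # [6.0, -3.0, 0.0, 0.0]
print([soft_threshold(c, tau) for c in coeffs])  # [4.0, -1.0, 0.0, 0.0]
```

Both operators sparsify the representation; the choice between them trades bias (soft shrinks everything) against noise retention (hard keeps large noisy coefficients intact).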

Siwei Ma - One of the best experts on this subject based on the ideXlab platform.

  • In-Loop Filter
    Advanced Video Coding Systems, 2020
    Co-Authors: Siwei Ma
    Abstract:

    This chapter provides an introduction to the in-loop filters in AVS-2. The first part presents the characteristics of the compression artifacts caused by block-based video coding methods, and the necessity of in-loop filtering for improving video coding efficiency and the quality of compressed videos. The following three parts describe the three important in-loop filters, i.e., the deblocking filter (DF), sample adaptive offset (SAO), and the adaptive loop filter (ALF), respectively. The last part concludes the chapter.
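Of the three filters listed, sample adaptive offset is the simplest to illustrate: each sample is classified (here by intensity band, as in SAO's band-offset mode) and a signaled offset is added per class. A minimal sketch; the band width follows the common 8-bit convention of 32 bands, but the offsets themselves are made-up values, not from any standard:

```python
# Hedged sketch of SAO-style band offset: classify each sample by its
# intensity band and add the offset signaled for that band.

BAND_WIDTH = 8  # 8-bit samples split into 32 bands of width 8

def apply_band_offset(samples, offsets):
    """offsets maps a band index to an additive correction (assumed values)."""
    out = []
    for s in samples:
        band = s // BAND_WIDTH
        out.append(s + offsets.get(band, 0))
    return out

# Offsets for two bands, as an encoder might signal them (illustrative only).
offsets = {12: 2, 13: -1}
print(apply_band_offset([96, 100, 104, 110, 200], offsets))
# [98, 102, 103, 109, 200]
```

Samples outside the signaled bands pass through unchanged, which keeps the tool cheap where it offers no benefit.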

  • PCS - Residual in Residual Based Convolutional Neural Network In-Loop Filter for AVS3
    2019 Picture Coding Symposium (PCS), 2019
    Co-Authors: Zhenghui Zhao, Shanshe Wang, Li Wang, Siwei Ma
    Abstract:

    Deep learning based video coding tool development has been an emerging topic recently. In this paper, we propose a novel deep convolutional neural network (CNN) based in-loop filter algorithm for the third generation of the Audio Video Coding Standard (AVS3). Specifically, we first introduce a residual-block-based CNN model with a global identity connection for luminance in-loop filtering, replacing the conventional rule-based algorithms in AVS3. Subsequently, the reconstructed luminance channel is deployed as textural and structural guidance for chrominance filtering. The corresponding syntax elements are also designed for the CNN based in-loop filtering. In addition, we build a large-scale database for the learning-based in-loop filtering algorithm. Experimental results show that our method achieves on average 7.5%, 16.9%, and 18.6% BD-rate reduction under the all-intra (AI) configuration on common test sequences. In particular, the corresponding reductions for 4K videos are 6.4%, 15.5%, and 17.5%. Moreover, under the random-access (RA) configuration, the proposed method brings 3.3%, 14.4%, and 13.6% BD-rate reduction, respectively.
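The "global identity connection" mentioned above means the network predicts only a correction that is added back to its input, so the CNN learns the compression residue rather than the whole picture. A toy sketch in plain Python; the "network" here is a fixed stand-in function that nudges samples toward their neighbors, not the paper's trained CNN:

```python
def toy_network(samples):
    """Stand-in for the trained CNN: predicts a correction per sample.

    Here it simply nudges each sample halfway toward its neighbors' mean,
    purely for illustration.
    """
    corrections = []
    for i, s in enumerate(samples):
        left = samples[max(i - 1, 0)]
        right = samples[min(i + 1, len(samples) - 1)]
        corrections.append(((left + right) / 2 - s) * 0.5)
    return corrections

def filter_with_identity(samples):
    """Global identity connection: output = input + predicted residue."""
    return [s + c for s, c in zip(samples, toy_network(samples))]

print(filter_with_identity([10.0, 20.0, 10.0, 20.0]))  # [12.5, 15.0, 15.0, 17.5]
```

The identity path also guarantees a safe fallback: if the learned correction is zero everywhere, the filter output equals the unfiltered reconstruction.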

  • VCIP - Spatial-temporal residue network based in-Loop Filter for video coding
    2017 IEEE Visual Communications and Image Processing (VCIP), 2017
    Co-Authors: Shiqi Wang, Shanshe Wang, Xinfeng Zhang, Siwei Ma
    Abstract:

    Deep learning has demonstrated tremendous breakthroughs in the area of image/video processing. In this paper, a spatial-temporal residue network (STResNet) based in-loop filter is proposed to suppress visual artifacts such as blocking and ringing in video coding. Specifically, spatial and temporal information is exploited jointly by taking both the current block and the co-located block in the reference frame into consideration during in-loop filtering. The architecture of STResNet consists of only four convolutional layers, which keeps its memory footprint and coding complexity modest. Moreover, to fully adapt to the input content and improve the performance of the proposed in-loop filter, a coding tree unit (CTU) level control flag is applied in the sense of rate-distortion optimization. Extensive experimental results show that our scheme provides up to 5.1% bit-rate reduction compared to the state-of-the-art video coding standard.

  • VCIP - Transform-domain in-Loop Filter with block similarity for HEVC
    2016 Visual Communications and Image Processing (VCIP), 2016
    Co-Authors: Xinfeng Zhang, Ke Gu, Qiaohong Li, Shanshe Wang, Siwei Ma
    Abstract:

    In-loop filtering is an important technique in modern video coding standards. In this paper, we propose a transform-domain in-loop filter to further improve the compression performance of the High Efficiency Video Coding (HEVC) standard. The proposed method estimates block transform coefficients by adaptively fusing two prediction sources according to their respective uncertainties. The first prediction is the block transform coefficients of the compressed video frames, whose uncertainty is related to the quantization parameters. The second prediction is the weighted average of transform blocks in a neighborhood, with weights designed according to block similarity; its uncertainty is estimated from the coefficient variance. To optimize the filtering performance, the parameters used in the proposed in-loop filter are learned offline from compressed videos for each quantization parameter, and frame-level flags switch the proposed in-loop filter on or off according to rate-distortion cost. Extensive experimental results show that the proposed in-loop filter further improves the compression efficiency of HEVC.

  • Nonlocal In-Loop Filter: The Way Toward Next-Generation Video Coding?
    IEEE MultiMedia, 2016
    Co-Authors: Siwei Ma, Xinfeng Zhang, Jian Zhang, Shiqi Wang
    Abstract:

    In-loop filtering has emerged as an essential coding tool since H.264/AVC, due to its delicate design, which reduces different kinds of compression artifacts. However, existing in-loop filters rely only on local image correlations, largely ignoring nonlocal similarities. In this article, the authors explore the design philosophy of in-loop filters and discuss their vision for the future of in-loop filter research by examining the potential of nonlocal similarities. Specifically, the group-based sparse representation, which jointly exploits an image's local and nonlocal self-similarities, lays novel and meaningful groundwork for in-loop filter design. Hard- and soft-thresholding filtering operations are applied to derive the sparse parameters appropriate for compression artifact reduction. Experimental results show that this in-loop filter design can significantly improve the compression performance of the High Efficiency Video Coding (HEVC) standard, pointing to a new direction for improving compression efficiency.

Shiqi Wang - One of the best experts on this subject based on the ideXlab platform.

  • VCIP - Spatial-temporal residue network based in-Loop Filter for video coding
    2017 IEEE Visual Communications and Image Processing (VCIP), 2017
    Co-Authors: Shiqi Wang, Shanshe Wang, Xinfeng Zhang, Siwei Ma
    Abstract:

    Deep learning has demonstrated tremendous breakthroughs in the area of image/video processing. In this paper, a spatial-temporal residue network (STResNet) based in-loop filter is proposed to suppress visual artifacts such as blocking and ringing in video coding. Specifically, spatial and temporal information is exploited jointly by taking both the current block and the co-located block in the reference frame into consideration during in-loop filtering. The architecture of STResNet consists of only four convolutional layers, which keeps its memory footprint and coding complexity modest. Moreover, to fully adapt to the input content and improve the performance of the proposed in-loop filter, a coding tree unit (CTU) level control flag is applied in the sense of rate-distortion optimization. Extensive experimental results show that our scheme provides up to 5.1% bit-rate reduction compared to the state-of-the-art video coding standard.

  • Low-rank based nonlocal adaptive Loop Filter for high efficiency video compression
    IEEE Transactions on Circuits and Systems for Video Technology, 2017
    Co-Authors: Xinfeng Zhang, Ruiqin Xiong, Jian Zhang, Shiqi Wang
    Abstract:

    In video coding, in-loop filtering has emerged as a key module due to its significant improvement in compression performance since H.264/Advanced Video Coding. The in-loop filters incorporated in existing video coding standards mainly take advantage of the local smoothness prior model used for images. In this paper, we propose a novel adaptive loop filter utilizing nonlocal image prior knowledge by imposing a low-rank constraint on similar image patches for compression noise reduction. In the filtering process, the reconstructed frame is first divided into image patch groups according to patch similarity. The proposed in-loop filtering is formulated as an optimization problem with a low-rank constraint for every group of image patches independently. It can be solved efficiently by soft-thresholding the singular values of the matrix composed of the image patches in each group. To adapt to the properties of the input sequences and the bit budget, an adaptive threshold derivation model is established for every group of image patches according to the characteristics of the compressed image patches, the quantization parameters, and the coding modes. Moreover, frame-level and largest-coding-unit-level control flags are signaled to further improve adaptability in the sense of rate-distortion optimization. The performance of the proposed in-loop filter is analyzed when it collaborates with the existing in-loop filters in High Efficiency Video Coding. Extensive experimental results show that our proposed in-loop filter can further improve the performance of the state-of-the-art video coding standard significantly, with up to 16% bit-rate savings.

  • Nonlocal In-Loop Filter: The Way Toward Next-Generation Video Coding?
    IEEE MultiMedia, 2016
    Co-Authors: Siwei Ma, Xinfeng Zhang, Jian Zhang, Shiqi Wang
    Abstract:

    In-loop filtering has emerged as an essential coding tool since H.264/AVC, due to its delicate design, which reduces different kinds of compression artifacts. However, existing in-loop filters rely only on local image correlations, largely ignoring nonlocal similarities. In this article, the authors explore the design philosophy of in-loop filters and discuss their vision for the future of in-loop filter research by examining the potential of nonlocal similarities. Specifically, the group-based sparse representation, which jointly exploits an image's local and nonlocal self-similarities, lays novel and meaningful groundwork for in-loop filter design. Hard- and soft-thresholding filtering operations are applied to derive the sparse parameters appropriate for compression artifact reduction. Experimental results show that this in-loop filter design can significantly improve the compression performance of the High Efficiency Video Coding (HEVC) standard, pointing to a new direction for improving compression efficiency.

  • ISM - Nonlocal Adaptive In-Loop Filter via Content-Dependent Soft-Thresholding for HEVC
    2015 IEEE International Symposium on Multimedia (ISM), 2015
    Co-Authors: Xinfeng Zhang, Shiqi Wang, Siwei Ma
    Abstract:

    In-loop filters have been widely utilized in the latest video coding standards to improve video coding efficiency by reducing compression artifacts. However, existing in-loop filters only utilize local image correlations, leading to limited performance improvement. In this paper, we explore a novel adaptive in-loop filter that exploits nonlocal similar content to improve the quality of reconstructed video frames. In the proposed filter, the input video frame is first divided into image patch groups based on patch similarity, and then a soft-thresholding method is applied to the singular values of the matrices composed of the image patches in every group. Since compression noise is highly correlated with image content, we propose a group-wise threshold estimation method based on image statistical characteristics, coding modes, and quantization parameters. To ensure filtering efficiency, slice-level control flags are utilized and determined based on the distortion changes after filtering. The proposed in-loop filter is integrated into HM7.0, and experimental results show that it can significantly improve the performance of HEVC on top of the state-of-the-art in-loop filters.
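The first step above, grouping image patches by similarity, can be sketched as nearest-patch matching under a sum-of-squared-differences distance. A 1-D toy version; the patch size, candidate set, and similarity threshold are illustrative assumptions, not the paper's settings:

```python
def ssd(p, q):
    """Sum of squared differences between two patches."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def group_similar_patches(reference, candidates, max_ssd=8):
    """Collect candidate patches whose SSD to the reference is below a threshold."""
    group = [reference]
    for patch in candidates:
        if ssd(reference, patch) <= max_ssd:
            group.append(patch)
    return group

ref = [10, 12, 14]
cands = [[11, 12, 13], [30, 31, 32], [10, 13, 15]]
print(group_similar_patches(ref, cands))
# [[10, 12, 14], [11, 12, 13], [10, 13, 15]]
```

Each such group is then stacked into a matrix whose singular values are soft-thresholded, as the abstract describes; dissimilar patches (like the bright one here) are excluded so they do not pollute the group statistics.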

Dapeng Wu - One of the best experts on this subject based on the ideXlab platform.

  • Classified quadtree based adaptive Loop Filter
    International Conference on Multimedia and Expo, 2011
    Co-Authors: Qian Chen, Yunfei Zheng, Xiaoan Lu, Joel Sole, Qian Xu, Edouard Francois, Dapeng Wu
    Abstract:

    In this paper, we propose a classified quadtree-based adaptive loop filter (CQALF) for video coding. Pixels in a picture are classified into two categories according to the impact of the deblocking filter: pixels that are modified and pixels that are not modified by the deblocking filter. A Wiener filter is carefully designed for each category, and the filter coefficients are transmitted to the decoder. For the pixels that are modified by the deblocking filter, the filter is estimated at the encoder by minimizing the mean square error between the original input frame and a combined frame, which is a weighted average of the reconstructed frames before and after the deblocking filter. For pixels that the deblocking filter does not modify, the filter is estimated by minimizing the mean square error between the original frame and the reconstructed frame. The proposed algorithm is implemented on top of the KTA software and is compatible with the quadtree-based adaptive loop filter. Compared with the kta2.6r1 anchor, the proposed CQALF achieves 10.05%, 7.55%, and 6.19% BD bitrate reduction on average for intra-only, IPPP, and HB coding structures, respectively.
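The two-way classification and the per-category MSE minimization above can be sketched in scalar form: pixels are split by whether deblocking changed them, and for each class a least-squares gain w = Σ(x·y)/Σ(x²) minimizes the error between w·x and the original. This is a deliberate simplification; the actual CQALF estimates 2-D Wiener filter taps, and all pixel values here are made up:

```python
def classify(pre_deblock, post_deblock):
    """Split pixel indices by whether the deblocking filter changed them."""
    modified, unmodified = [], []
    for i, (a, b) in enumerate(zip(pre_deblock, post_deblock)):
        (modified if a != b else unmodified).append(i)
    return modified, unmodified

def scalar_wiener_gain(recon, original, indices):
    """Least-squares gain minimizing sum((original - w*recon)^2) over a class."""
    num = sum(recon[i] * original[i] for i in indices)
    den = sum(recon[i] ** 2 for i in indices)
    return num / den if den else 1.0

pre = [10, 20, 30, 40]
post = [10, 22, 30, 38]   # deblocking changed pixels 1 and 3
orig = [10, 21, 33, 39]
mod, unmod = classify(pre, post)
print(mod, unmod)  # [1, 3] [0, 2]
print(round(scalar_wiener_gain(post, orig, mod), 4))  # 1.0083
```

Fitting each class separately is the whole point: deblocked and untouched pixels have different error statistics, so a single shared filter would be a compromise for both.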

Ruiqin Xiong - One of the best experts on this subject based on the ideXlab platform.

  • Low-rank based nonlocal adaptive Loop Filter for high efficiency video compression
    IEEE Transactions on Circuits and Systems for Video Technology, 2017
    Co-Authors: Xinfeng Zhang, Ruiqin Xiong, Jian Zhang, Shiqi Wang
    Abstract:

    In video coding, in-loop filtering has emerged as a key module due to its significant improvement in compression performance since H.264/Advanced Video Coding. The in-loop filters incorporated in existing video coding standards mainly take advantage of the local smoothness prior model used for images. In this paper, we propose a novel adaptive loop filter utilizing nonlocal image prior knowledge by imposing a low-rank constraint on similar image patches for compression noise reduction. In the filtering process, the reconstructed frame is first divided into image patch groups according to patch similarity. The proposed in-loop filtering is formulated as an optimization problem with a low-rank constraint for every group of image patches independently. It can be solved efficiently by soft-thresholding the singular values of the matrix composed of the image patches in each group. To adapt to the properties of the input sequences and the bit budget, an adaptive threshold derivation model is established for every group of image patches according to the characteristics of the compressed image patches, the quantization parameters, and the coding modes. Moreover, frame-level and largest-coding-unit-level control flags are signaled to further improve adaptability in the sense of rate-distortion optimization. The performance of the proposed in-loop filter is analyzed when it collaborates with the existing in-loop filters in High Efficiency Video Coding. Extensive experimental results show that our proposed in-loop filter can further improve the performance of the state-of-the-art video coding standard significantly, with up to 16% bit-rate savings.

  • Adaptive Loop Filter with temporal prediction
    Picture Coding Symposium, 2012
    Co-Authors: Xinfeng Zhang, Ruiqin Xiong
    Abstract:

    In this paper, we propose a method to improve adaptive loop filter (ALF) efficiency with temporal prediction. For each frame, one of two sets of adaptive loop filter parameters is adaptively selected by rate-distortion optimization. The first set of ALF parameters is estimated by minimizing the mean square error between the original frame and the current reconstructed frame. The second set of filter parameters is the one used in the latest prior frame. The proposed algorithm is implemented in the HM3.0 software. Compared with the HM3.0 anchor, the proposed method achieves 0.4%, 0.3%, and 0.3% BD bitrate reduction on average for the high-efficiency low-delay B, high-efficiency low-delay P, and high-efficiency random-access configurations, respectively. The encoding and decoding times increase by 1% and 2% on average, respectively.
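The per-frame choice above can be sketched as comparing two rate-distortion costs: re-estimating filter parameters lowers distortion but costs bits to transmit, while reusing the previous frame's parameters is nearly free to signal. A scalar toy version; real ALF parameters are full tap sets, and the bit costs, lambda values, and gains here are all assumed numbers:

```python
def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def pick_alf_params(orig, recon, new_gain, prev_gain,
                    lam=0.1, new_bits=64, reuse_bits=1):
    """Choose between newly estimated and temporally predicted (reused)
    filter parameters by rate-distortion cost (all costs illustrative)."""
    def cost(gain, bits):
        filtered = [gain * x for x in recon]
        return mse(orig, filtered) + lam * bits

    return "new" if cost(new_gain, new_bits) < cost(prev_gain, reuse_bits) else "reuse"

orig = [10.0, 20.0, 30.0]
recon = [9.0, 19.0, 29.0]
# The re-estimated gain fits better, but transmitting it costs 64 bits.
print(pick_alf_params(orig, recon, new_gain=1.044, prev_gain=1.10))             # reuse
print(pick_alf_params(orig, recon, new_gain=1.044, prev_gain=1.10, lam=0.01))   # new
```

With a high lambda the cheap-to-signal reused parameters win despite their worse fit; with a low lambda the distortion term dominates and the freshly estimated parameters are chosen, mirroring the frame-by-frame selection in the paper.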
