Scalable Coding

The experts below are selected from a list of 12,009 experts worldwide, ranked by the ideXlab platform.

Suhas Diggavi - One of the best experts on this subject based on the ideXlab platform.

  • side information Scalable source Coding
    IEEE Transactions on Information Theory, 2008
    Co-Authors: Chao Tian, Suhas Diggavi
    Abstract:

    We consider the problem of side-information Scalable (SI-Scalable) source Coding, in which the encoder constructs a two-layer description such that the receiver with high-quality side information can use only the first layer to reconstruct the source in a lossy manner, while the receiver with low-quality side information has to receive both layers in order to decode. We provide inner and outer bounds on the rate-distortion (R-D) region for general discrete memoryless sources. The achievable region is tight when either one of the decoders requires a lossless reconstruction, as well as when the distortion measures are degraded and deterministic. Furthermore, the gap between the inner and outer bounds can be bounded by certain constants when the squared-error distortion measure is used. The notion of perfect scalability is introduced, for which necessary and sufficient conditions are given for sources satisfying a mild support condition. Using SI-Scalable Coding and successive-refinement Wyner-Ziv Coding as basic building blocks, we provide a complete characterization of the rate-distortion region for the important quadratic Gaussian source with multiple jointly Gaussian side informations, where the side-information quality is not necessarily monotonic along the Scalable Coding order. A partial result is provided for the doubly symmetric binary source under the Hamming distortion measure when the worse side information is a constant, for which one of the outer bounds is strictly tighter than the other.
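
    For context, the quadratic Gaussian benchmark referred to above is the single-decoder Wyner-Ziv rate-distortion function under squared error; this is a standard result rather than a contribution of the paper, and perfect scalability asks whether one layered code can meet it simultaneously at both decoders:

        R_{\mathrm{WZ}}(D) \;=\; \frac{1}{2}\log^{+}\!\left(\frac{\sigma_{X|Y}^{2}}{D}\right),
        \qquad
        \sigma_{X|Y}^{2} \;=\; \sigma_{X}^{2}\bigl(1-\rho_{XY}^{2}\bigr),
        \qquad
        \log^{+}(x) \;=\; \max\{\log x,\, 0\},

    where \rho_{XY} is the correlation coefficient between the source X and the jointly Gaussian side information Y.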

  • side information Scalable source Coding
    arXiv: Information Theory, 2007
    Co-Authors: Chao Tian, Suhas Diggavi
    Abstract:

    The problem of side-information Scalable (SI-Scalable) source Coding is considered in this work, where the encoder constructs a progressive description such that the receiver with high-quality side information can truncate the bitstream and reconstruct in the rate-distortion sense, while the receiver with low-quality side information has to receive further data in order to decode. We provide inner and outer bounds for general discrete memoryless sources. The achievable region is shown to be tight when either of the decoders requires a lossless reconstruction, as well as in the case of degraded deterministic distortion measures. Furthermore, we show that the gap between the achievable region and the outer bounds can be bounded by a constant when the squared-error distortion measure is used. The notion of perfectly Scalable Coding, in which both stages operate on the Wyner-Ziv bound, is introduced, and necessary and sufficient conditions for it are given for sources satisfying a mild support condition. Using SI-Scalable Coding and successive-refinement Wyner-Ziv Coding as basic building blocks, a complete characterization is provided for the important quadratic Gaussian source with multiple jointly Gaussian side informations, where the side-information quality does not have to be monotonic along the Scalable Coding order. A partial result is provided for the doubly symmetric binary source with Hamming distortion when the worse side information is a constant, for which one of the outer bounds is strictly tighter than the other.
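
    As a rough numerical illustration of the "both stages operate on the Wyner-Ziv bound" criterion, the sketch below computes the individual Wyner-Ziv rates of two decoders in the quadratic Gaussian setting; the correlation and distortion values are invented for illustration and the code is not taken from the paper.

        import math

        def conditional_variance(sigma_x2: float, rho: float) -> float:
            """Var(X | Y) for jointly Gaussian X, Y with correlation rho."""
            return sigma_x2 * (1.0 - rho * rho)

        def wyner_ziv_rate(sigma_x2: float, rho: float, distortion: float) -> float:
            """Quadratic-Gaussian Wyner-Ziv rate in bits/sample: 0.5 * log2+(Var(X|Y) / D)."""
            return max(0.0, 0.5 * math.log2(conditional_variance(sigma_x2, rho) / distortion))

        # Decoder 1 has the better side information (rho = 0.9) and should manage
        # with the first layer alone; decoder 2 (rho = 0.5) reads both layers.
        r1 = wyner_ziv_rate(1.0, 0.9, 0.05)
        r2 = wyner_ziv_rate(1.0, 0.5, 0.05)
        print(f"WZ bound with strong side info: {r1:.3f} bit/sample")
        print(f"WZ bound with weak side info:   {r2:.3f} bit/sample")
        # A perfectly scalable two-layer code would spend r1 bits in the first layer
        # and r2 bits in total, so each decoder sits on its own Wyner-Ziv bound.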

M Van Der Schaar - One of the best experts on this subject based on the ideXlab platform.

  • multiple description Scalable Coding using wavelet-based motion compensated temporal filtering
    International Conference on Image Processing, 2003
    Co-Authors: M Van Der Schaar, Deepak S Turaga
    Abstract:

    Packet delay jitter and loss due to network congestion pose significant challenges for designing and deploying delay-sensitive multimedia applications over best-effort packet-switched networks such as the Internet. Recent studies indicate that using multiple description Coding (MDC) in conjunction with path or server diversity can mitigate these effects. However, the proposed MDC Coding and streaming techniques are based on non-Scalable Coding techniques. A key disadvantage of these techniques is that they can only improve the error resilience of the transmitted video; they cannot address two other important challenges associated with robust transmission of video over unreliable networks: adaptation to bandwidth variations and to receiving-device characteristics. In this paper, we present a new paradigm, referred to as multiple description Scalable Coding (MDSC), that addresses all of the previously mentioned challenges by combining the advantages of Scalable Coding and MDC. This framework enables tradeoffs between throughput, redundancy, and complexity at transmission time, unlike previous non-Scalable MDC schemes. Furthermore, we propose a novel MDSC scheme based on motion compensated temporal filtering (MCTF), termed multiple description motion compensated temporal filtering (MD-MCTF), which exploits the lifting implementation of temporal filtering used in current MCTF schemes. We show how tradeoffs between throughput, redundancy, and complexity can easily be achieved by adaptively partitioning the video into several descriptions after MCTF. Simulations show that the proposed MD-MCTF framework outperforms existing MDC schemes over a variety of network conditions.
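
    To make the lifting-based temporal filtering and the post-MCTF partitioning concrete, here is a minimal sketch; it omits motion compensation and entropy coding, uses a plain Haar filter, and the even/odd split of the highpass frames is only one illustrative partitioning, not necessarily the one used in the paper.

        import numpy as np

        def haar_lifting_mctf(frames: np.ndarray):
            """One level of Haar temporal filtering via lifting (no motion compensation).
            frames has shape (T, H, W) with T even; returns (lowpass, highpass) subbands."""
            even, odd = frames[0::2].astype(float), frames[1::2].astype(float)
            high = odd - even              # predict step
            low = even + 0.5 * high        # update step
            return low, high

        def inverse_haar_lifting(low: np.ndarray, high: np.ndarray) -> np.ndarray:
            """Undo the lifting steps and re-interleave the even/odd frames."""
            even = low - 0.5 * high
            odd = high + even
            frames = np.empty((2 * low.shape[0],) + low.shape[1:], dtype=float)
            frames[0::2], frames[1::2] = even, odd
            return frames

        def split_into_descriptions(low, high):
            """Toy partition after MCTF: both descriptions carry the lowpass frames
            (baseline quality), but each carries only every other highpass frame,
            so losing one description degrades rather than destroys the video."""
            return ({"low": low, "high": high[0::2]}, {"low": low, "high": high[1::2]})

        frames = np.random.rand(8, 16, 16)          # 8 frames of 16x16 "video"
        low, high = haar_lifting_mctf(frames)
        desc0, desc1 = split_into_descriptions(low, high)
        assert np.allclose(inverse_haar_lifting(low, high), frames)   # lifting is invertible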

  • Scalable MPEG-4 video Coding with graceful packet-loss resilience over bandwidth-varying networks
    International Conference on Multimedia and Expo, 2000
    Co-Authors: M Van Der Schaar, Hayder Radha, C Dufour
    Abstract:

    We evaluate the packet-loss resilience of the MPEG-4 Fine-Granular-Scalability (FGS) video Coding method for Internet streaming applications. Since unrecoverable packet losses are very common over the Internet, the focus of this study is to determine how robust the MPEG-4 FGS Coding tool is under unrecoverable IP packet losses over a wide range of connection qualities (i.e., at various bit rates). Our extensive study covers a wide set of sequences, bit rates, and packet-loss rates. As shown in this paper, under unequal packet-loss protection, FGS provides a clear advantage over non-Scalable Coding. If equal packet-loss protection is employed, FGS performance is still considerably better, in particular under moderate-to-high packet-loss ratios.
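
    The fine-granular behaviour comes from bitplane Coding of the enhancement layer: the bitstream can be cut at almost any point, and every received plane still refines the picture. A minimal sketch of that idea (integer residuals only, no DCT or entropy coding; helper names are illustrative):

        import numpy as np

        NUM_PLANES = 8   # residuals assumed to fit in 8 bits for this toy example

        def fgs_bitplanes(residual: np.ndarray):
            """Split a non-negative integer residual block into bitplanes, MSB first."""
            return [(residual >> b) & 1 for b in range(NUM_PLANES - 1, -1, -1)]

        def reconstruct(planes):
            """Rebuild the residual from however many bitplanes actually arrived;
            truncating the enhancement layer simply drops the trailing planes."""
            out = np.zeros_like(planes[0])
            for i, plane in enumerate(planes):
                out |= plane << (NUM_PLANES - 1 - i)
            return out

        residual = np.random.randint(0, 256, size=(8, 8))
        planes = fgs_bitplanes(residual)
        coarse = reconstruct(planes[:3])             # only the 3 MSB planes arrived
        exact = reconstruct(planes)                  # everything arrived
        print(np.abs(residual - coarse).max())       # error bounded by 2**5 - 1 = 31
        assert np.array_equal(exact, residual)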

Chao Tian - One of the best experts on this subject based on the ideXlab platform.

  • side information Scalable source Coding
    IEEE Transactions on Information Theory, 2008
    Co-Authors: Chao Tian, Suhas Diggavi
    Abstract: identical to the entry listed above under Suhas Diggavi.

  • side information Scalable source Coding
    arXiv: Information Theory, 2007
    Co-Authors: Chao Tian, Suhas Diggavi
    Abstract: identical to the entry listed above under Suhas Diggavi.

Bernd Girod - One of the best experts on this subject based on the ideXlab platform.

  • robust internet video transmission based on Scalable Coding and unequal error protection
    Signal Processing-image Communication, 1999
    Co-Authors: Uwe Horn, K Stuhlmuller, Michael Link, Bernd Girod
    Abstract:

    In this article we describe and investigate an Internet video streaming system, based on a Scalable video coder combined with unequal error protection, that maintains acceptable picture quality over a wide range of connection qualities. The proposed approach does not require any specific support from the network layer and is especially suited for Internet multicast applications, where different users experience different transmission conditions and no feedback channel can be employed. We derive a theoretical framework for the overall system by which the Internet packet-loss behavior can be directly related to the picture quality perceived at the receiver. We demonstrate how this framework can be used to select appropriate parameter values for the overall system design. Experimental results show that the presented system achieves gracefully degrading picture quality for packet losses of up to 30%.
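
    The link between packet-loss behaviour and layer decodability can be sketched with a simple erasure-code model; the (n, k) values below are invented for illustration and the framework in the paper is more detailed, but the sketch shows why giving the base layer more parity yields graceful degradation:

        import math

        def decode_probability(n: int, k: int, loss_rate: float) -> float:
            """Probability that an (n, k) erasure code is decodable, i.e. that at most
            n - k of its n packets are lost (packet losses assumed independent)."""
            return sum(math.comb(n, i) * loss_rate**i * (1.0 - loss_rate) ** (n - i)
                       for i in range(n - k + 1))

        # Unequal error protection: the base layer gets the most parity packets,
        # the highest enhancement layer the fewest.
        layers = {"base": (20, 12), "enhancement-1": (20, 16), "enhancement-2": (20, 19)}
        for p in (0.05, 0.15, 0.30):
            survival = {name: round(decode_probability(n, k, p), 3)
                        for name, (n, k) in layers.items()}
            print(f"loss rate {p:.2f}: {survival}")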

Yong Man Ro - One of the best experts on this subject based on the ideXlab platform.

  • privacy protection in video surveillance systems: analysis of subband-adaptive scrambling in JPEG XR
    IEEE Transactions on Circuits and Systems for Video Technology, 2011
    Co-Authors: Hosik Sohn, Wesley De Neve, Yong Man Ro
    Abstract:

    This paper discusses a privacy-protected video surveillance system that makes use of JPEG extended range (JPEG XR). JPEG XR offers a low-complexity solution for the Scalable Coding of high-resolution images. To address privacy concerns, face regions are detected and scrambled in the transform domain, taking into account the quality and spatial scalability features of JPEG XR. Experiments were conducted to investigate the performance of our surveillance system, considering visual distortion, bit stream overhead, and security aspects. Our results demonstrate that subband-adaptive scrambling is able to conceal privacy-sensitive face regions with a feasible level of protection. In addition, our results show that subband-adaptive scrambling of face regions outperforms subband-adaptive scrambling of frames in terms of Coding efficiency, except when low video bit rates are in use.
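
    As a rough illustration of transform-domain scrambling: a generic key-driven sign flip over the coefficients of a face-region block, with an adjustable strength standing in for the subband-adaptive control; this is not the actual JPEG XR codestream operation, only a sketch of the principle.

        import numpy as np

        def scramble(coeffs: np.ndarray, key: int, strength: float = 1.0) -> np.ndarray:
            """Pseudo-randomly flip the signs of a fraction `strength` of the AC
            coefficients of a face-region transform block; DC is left intact.
            Applying the same key again restores the original block."""
            rng = np.random.default_rng(key)
            flip = rng.random(coeffs.shape) < strength
            out = np.where(flip, -coeffs, coeffs)
            out.flat[0] = coeffs.flat[0]         # keep the DC coefficient
            return out

        block = np.round(np.random.randn(4, 4) * 10)     # toy coefficient block
        protected = scramble(block, key=1234)            # done at the camera/encoder side
        restored = scramble(protected, key=1234)         # authorized decoder undoes it
        assert np.array_equal(restored, block)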