Nonuniform Quantizers

The Experts below are selected from a list of 69 Experts worldwide ranked by ideXlab platform

Jan P. Allebach - One of the best experts on this subject based on the ideXlab platform.

  • Quantization of accumulated diffused errors in error diffusion
    IEEE Transactions on Image Processing, 2005
    Co-Authors: Tichiun Chang, Jan P. Allebach
    Abstract:

    Due to its high image quality and moderate computational complexity, error diffusion is a popular halftoning algorithm for use with inkjet printers. However, error diffusion is an inherently serial algorithm that requires buffering a full row of accumulated diffused error (ADE) samples. For the best performance when the algorithm is implemented in hardware, the ADE data should be stored on the chip on which the error diffusion algorithm is implemented; however, this may result in an unacceptable hardware cost. In this paper, we examine the use of quantization of the ADE to reduce the amount of data that must be stored. We consider both uniform and nonuniform quantizers. For the nonuniform quantizers, we build on the concept of tone-dependency in error diffusion by proposing several novel feature-dependent quantizers that yield improved image quality at a given bit rate compared to memoryless quantizers. The optimal design of these quantizers is coupled with the design of the tone-dependent parameters associated with error diffusion, via a combination of the classical Lloyd-Max algorithm and the training framework for tone-dependent error diffusion. Our results show that 4-bit uniform quantization of the ADE yields the same halftone quality as error diffusion without quantization of the ADE. At rates that vary from 2 to 3 bits per pixel, depending on the selectivity of the feature on which the quantizer depends, the feature-dependent quantizers achieve essentially the same quality as 4-bit uniform quantization.
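The Lloyd-Max algorithm mentioned in the abstract is the standard way to design a nonuniform scalar quantizer for a given source. A minimal sketch (generic, not the paper's feature-dependent variant; the Laplacian test source and all parameter values are illustrative):

```python
import numpy as np

def lloyd_max(samples, n_levels, n_iters=50):
    """Lloyd-Max design of a nonuniform scalar quantizer.

    Alternates the two optimality conditions: each reconstruction
    level becomes the centroid (mean) of its cell, and each decision
    threshold becomes the midpoint between adjacent levels.
    """
    # Initialize levels from evenly spaced quantiles of the data.
    levels = np.quantile(samples, np.linspace(0.05, 0.95, n_levels))
    for _ in range(n_iters):
        thresholds = (levels[:-1] + levels[1:]) / 2.0
        cells = np.digitize(samples, thresholds)
        for i in range(n_levels):
            members = samples[cells == i]
            if members.size:
                levels[i] = members.mean()
    thresholds = (levels[:-1] + levels[1:]) / 2.0
    return levels, thresholds

# Heavy-tailed test source, loosely standing in for error statistics.
rng = np.random.default_rng(0)
data = rng.laplace(size=10_000)
levels, thr = lloyd_max(data, n_levels=8)
mse = np.mean((data - levels[np.digitize(data, thr)]) ** 2)
```

Because the cells track the source density, the resulting levels crowd together where samples are dense and spread apart in the tails, which is exactly what makes the quantizer nonuniform.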

  • Quantization of accumulated diffused errors in error diffusion
    Color Imaging Conference, 2005
    Co-Authors: Tichiun Chang, Jan P. Allebach
    Abstract:

    Quantization of the accumulated diffused error (ADE) is an effective means to reduce on-chip storage in a hardware implementation of error diffusion. A simple uniform quantizer can yield a factor-of-2 savings with no apparent loss in image quality. Nonuniform quantizers with memory that depend on the quantizer index or on various features can yield even greater savings -- up to a factor of 4 -- with essentially no loss in image quality. However, these quantizers depend on the trainability of the tone-dependent error diffusion (TDED) framework to achieve this level of quality. In addition, the design of the quantizers must be coupled to that of the TDED parameters in either a sequential or iterative fashion.
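The factor-of-2 saving from a uniform quantizer amounts to storing each ADE sample as a short index rather than a full-precision value. A hedged sketch (the bit widths, signal range, and function name are illustrative, not the paper's actual hardware parameters):

```python
import numpy as np

def quantize_ade(ade, n_bits, full_scale=1.0):
    """Uniformly quantize ADE values in [-full_scale, full_scale)
    to a signed n_bit index, returning the index and reconstruction."""
    n_levels = 2 ** n_bits
    step = 2.0 * full_scale / n_levels
    index = np.clip(np.round(ade / step),
                    -n_levels // 2, n_levels // 2 - 1).astype(int)
    return index, index * step

# Storing a 4-bit index instead of an 8-bit sample halves the row buffer.
idx, rec = quantize_ade(np.array([0.30, -0.72]), n_bits=4)
```

The reconstruction error is bounded by half a step, which for 4 bits over this range is 0.0625.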

Zoran Peric - One of the best experts on this subject based on the ideXlab platform.

  • Design of Quantizers with Huffman Coding for Laplacian Source
    Elektronika Ir Elektrotechnika, 2015
    Co-Authors: Milan R Dincic, Zoran Peric
    Abstract:

    Two new types of hybrid quantizers are proposed for low and moderate bit rates. They are compared with uniform and nonuniform quantizers in two cases: when a variable-length code (VLC) is not used, and when a Huffman VLC is used. The hybrid quantizers are shown to deliver excellent performance in both cases.
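Huffman VLC assigns shorter codewords to the more probable quantizer indices, which is what lowers the average bit rate. A minimal sketch of the codebook construction (generic; the symbol probabilities below are illustrative, not taken from the paper):

```python
import heapq

def huffman_code(probs):
    """Build a binary Huffman code: symbol -> bitstring."""
    # Each heap entry: [total probability, tie-breaker, partial codebook].
    heap = [[p, i, {s: ""}] for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        # Prefix the two cheapest subtrees with 0 and 1 and merge them.
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, merged])
        tie += 1
    return heap[0][2]

# Skewed index distribution, as produced by quantizing a peaked source.
probs = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}
code = huffman_code(probs)
avg_len = sum(p * len(code[s]) for s, p in probs.items())
```

For dyadic probabilities like these, the average codeword length equals the source entropy (1.75 bits here), the best any VLC can do.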

  • TWO-DIMENSIONAL VECTOR QUANTIZER WITH VARIABLE LENGTH LOSSLESS CODE FOR LAPLACIAN SOURCE
    Information Technology And Control, 2011
    Co-Authors: Milan R Dincic, Zoran Peric
    Abstract:

    The main aim of this paper is to apply a variable-length lossless code to the output points of a two-dimensional vector quantizer for a Laplacian source, a problem not previously solved in the literature. The two-dimensional quantizer is designed using the Helmert transform and then optimized. A new lossless code is introduced; it is very simple yet close to ideal, since it gives a bit rate very close to the entropy. Applying it to the two-dimensional quantizer, we show that the quantizer can achieve the same performance as scalar uniform and nonuniform quantizers while using a much smaller number of points per dimension, and therefore has lower execution complexity. http://dx.doi.org/10.5755/j01.itc.40.2.428
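A two-dimensional vector quantizer maps each pair of samples to the nearest codeword in a 2-D codebook. A minimal nearest-neighbor encoder (a generic sketch; it does not reproduce the paper's Helmert-transform design, and the codebook here is illustrative):

```python
import numpy as np

def vq_encode(pairs, codebook):
    """Index of the nearest codeword (Euclidean) for each 2-D sample."""
    # Broadcast to an (n_samples, n_codewords) distance matrix.
    dists = np.linalg.norm(pairs[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
indices = vq_encode(np.array([[0.1, 0.2], [0.9, 0.8]]), codebook)
```

Pairing samples lets the codebook exploit the joint distribution, which is why fewer points per dimension can match scalar performance.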

  • Design of a Hybrid Quantizer with Variable Length Code
    Fundamenta Informaticae, 2010
    Co-Authors: Zoran Peric, Milan R Dincic, Marko D Petkovic
    Abstract:

    In this paper a new model for compression of a Laplacian source is given. The model consists of a hybrid quantizer whose output levels are coded with a Golomb-Rice code. The hybrid quantizer is a combination of a uniform and a nonuniform quantizer, and can be considered a generalized quantizer whose special cases are the uniform and nonuniform quantizers. We propose a new generalized optimal compression function for companding quantizers. The hybrid quantizer has better performance (smaller bit rate and complexity for the same quality) than both the uniform and nonuniform quantizers, because it joins their good characteristics. The hybrid quantizer also allows great flexibility: many combinations of the number of levels in the uniform part and in the nonuniform part give similar quality, and each combination has a different bit rate and complexity, so we are free to choose the combination most appropriate for a given application with regard to quality, bit rate, and complexity. No such freedom of choice exists with purely uniform or nonuniform quantizers. Until now it has been thought that the uniform quantizer is the most appropriate to use with a lossless code, but in this paper we show that the combination of a hybrid quantizer and a lossless code gives better performance. As the lossless code we use the Golomb-Rice code, because it is especially suitable for a Laplacian source: it gives an average bit rate very close to the entropy and is easier to implement than the Huffman code. The Golomb-Rice code is used in many modern compression standards. Our model can be used for compression of all signals with a Laplacian distribution.
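A Golomb-Rice code with parameter k writes the quotient n >> k in unary and the remainder in k plain bits, which is why it suits the geometrically decaying index distributions a Laplacian source produces. A minimal sketch of a single-codeword encoder and decoder (generic, not tied to this paper's quantizer):

```python
def rice_encode(n, k):
    """Rice code of a nonnegative integer n with parameter k."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = "1" * q + "0"                 # unary quotient, 0-terminated
    if k:
        bits += format(r, "b").zfill(k)  # k-bit binary remainder
    return bits

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = bits.index("0")                  # unary part ends at first 0
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

For example, rice_encode(9, 2) gives "11001": quotient 2 in unary ("110") followed by remainder 1 in two bits ("01"). Small indices get short codewords with no codebook to store, which is part of why the code is easier to implement than Huffman coding.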

  • Analysis of Support Region for Laplacian Source's Scalar Quantizers
    TELSIKS 2005 - 7th International Conference on Telecommunication in Modern Satellite, Cable and Broadcasting Services, 2005
    Co-Authors: Zoran Peric, Jelena Nikolic, Dragoljub Pokrajac
    Abstract:

    The goal of this paper is to estimate the support region of optimal scalar quantizers for Laplacian input signals with unrestricted amplitude range, in terms of the number of quantization levels. The companding concept is used because of its usefulness in analyzing and optimizing nonuniform quantizers with a large number of levels. The support region is shown to grow logarithmically with the number of quantization levels.
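The logarithmic growth can be made plausible with a back-of-the-envelope overload argument (a heuristic sketch, not the paper's exact derivation or constant): the Laplacian tail mass beyond the support edge $t$ decays exponentially, while the granular distortion of an $N$-level quantizer decays like $1/N^2$, so balancing the two forces $t$ to grow like $\ln N$:

```latex
% Heuristic: match the tail mass to the granular distortion O(1/N^2).
p(x) = \tfrac{1}{2b}\,e^{-|x|/b}, \qquad
\Pr\{|X| > t\} = e^{-t/b} \sim \frac{1}{N^{2}}
\;\Longrightarrow\;
t \sim 2b \ln N .
```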

Hideaki Ishii - One of the best experts on this subject based on the ideXlab platform.

  • Stabilization of uncertain systems using quantized and lossy observations and uncertain control inputs
    Automatica, 2017
    Co-Authors: Kunihisa Okano, Hideaki Ishii
    Abstract:

    In this paper, we consider a stabilization problem for an uncertain system in a networked control setting. Due to the network, the measurements are quantized to finite-bit signals and may be randomly lost in the communication. We study uncertain autoregressive systems whose state and input parameters vary within given intervals. We derive conditions for the plant output to be mean square stable, characterizing limitations on the data rate, packet loss probabilities, and magnitudes of uncertainty. It is shown that a specific class of nonuniform quantizers can achieve stability at a lower data rate than the common uniform one.
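For context, the classical data-rate theorem gives the baseline such results refine (this is the well-known idealized bound, not the condition derived in this paper, which additionally accounts for parameter uncertainty and packet loss): a scalar plant $x_{k+1} = a x_k + u_k$ with $|a| > 1$ can be stabilized over a finite-rate channel only if the rate exceeds the plant's instability,

```latex
R > \log_2 |a| \quad \text{bits per sample}.
```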

Muhammad Taher Abuelma'atti - One of the best experts on this subject based on the ideXlab platform.

  • Harmonic and Intermodulation Owing to Nonuniform Quantization
    European Transactions on Telecommunications, 1993
    Co-Authors: Muhammad Taher Abuelma'atti
    Abstract:

    By representing the nonuniform quantizer characteristic as an infinite Fourier series, new, computationally simple analytical expressions for predicting the harmonic and intermodulation performance of nonuniform quantizers are presented. Using these expressions, nonuniform quantizer characteristics can be tailored to maintain predetermined maximum intermodulation levels under any scenario of input signals.
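The Fourier-series idea is easiest to see for a uniform mid-tread quantizer with step $\Delta$, where the quantization error is a sawtooth of period $\Delta$ (a textbook sketch of the approach; the paper generalizes such expansions to nonuniform characteristics):

```latex
Q(x) = \Delta\,\mathrm{round}\!\left(\frac{x}{\Delta}\right)
     = x + \sum_{n=1}^{\infty} \frac{\Delta}{n\pi}\,(-1)^{n}
       \sin\!\left(\frac{2\pi n x}{\Delta}\right).
```

Substituting a sum of sinusoids for $x$ then turns each series term into a source of harmonics and intermodulation products whose levels can be read off analytically.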
