Quantizer Step Size

The Experts below are selected from a list of 105 Experts worldwide, ranked by the ideXlab platform.

Soura Dasgupta - One of the best experts on this subject based on the ideXlab platform.

  • ICASSP - Average Quantizer adaptation rates for stable ADPCM
    1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings
    Co-Authors: Soura Dasgupta
    Abstract:

    This paper gives a sufficient condition on the average rate of Quantizer Step Size adaptation that preserves stability of adaptive differential pulse code modulation (ADPCM) under the assumption that the predictor is nonadaptive. The result appeals to some new developments in the passivity analysis of linear time-varying systems.
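
    For concreteness, a standard way the Quantizer Step Size adapts in ADPCM is a Jayant-style multiplicative recursion, whose average log-step drift is exactly the kind of adaptation-rate quantity such a condition constrains. The sketch below is an assumed illustration (a 2-bit quantizer with multipliers 0.9 and 1.6 chosen for the example), not the paper's construction:

      import numpy as np

      def jayant_step(x, step, m_inner=0.9, m_outer=1.6):
          # 2-bit mid-rise quantizer: |x| < step is the inner cell, beyond it the outer cell.
          mag = int(min(abs(x) // step, 1))          # 0 = inner cell, 1 = outer cell
          q = np.sign(x) * (mag + 0.5) * step        # reconstruction level
          # Multiplicative adaptation: shrink on inner codes, grow on outer codes.
          step_next = step * (m_outer if mag else m_inner)
          return q, step_next

      # Drive the quantizer with unit-variance noise and watch the step settle.
      rng = np.random.default_rng(0)
      step = 0.1
      for x in rng.normal(0.0, 1.0, 1000):
          q, step = jayant_step(x, step)
      print(f"adapted step size: {step:.3f}")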

  • Average Quantizer adaptation rates for stable nonstationary ADPCM
    Proceedings of the 35th IEEE Conference on Decision and Control
    Co-Authors: Soura Dasgupta
    Abstract:

    This paper considers finite-time error recovery in ADPCM systems under adaptive quantization with nonstationary prediction. Sufficient conditions are given on the average rate of Quantizer Step Size adaptation and on the variation in the predictor parameters under which the ADPCM system recovers from initial errors in finite time.
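
    One mechanism that yields this kind of recovery in practice is "leaky" (robust) step-size adaptation; the sketch below, with an assumed leakage factor and multipliers, shows the encoder/decoder step-size mismatch contracting geometrically once both see the same code stream. It illustrates the phenomenon only; the paper's conditions on average adaptation rates are more general.

      import numpy as np

      # Leaky ("robust") adaptation: log(step_next) = beta*log(step) + log m(code).
      # Encoder and decoder see the same code stream, so their log-step mismatch
      # e_n obeys e_{n+1} = beta * e_n and decays geometrically -- the decoder
      # recovers from a wrong initial step to any tolerance in finite time.
      beta = 0.96                                  # leakage; beta = 1 gives no recovery
      log_m = {0: np.log(0.9), 1: np.log(1.6)}     # assumed inner/outer multipliers

      rng = np.random.default_rng(1)
      codes = rng.integers(0, 2, 200)              # stand-in for the transmitted codes

      enc_log, dec_log = np.log(0.1), np.log(1.0)  # decoder starts with the wrong step
      for c in codes:
          enc_log = beta * enc_log + log_m[int(c)]
          dec_log = beta * dec_log + log_m[int(c)]
      print(f"remaining log-step mismatch: {abs(enc_log - dec_log):.2e}")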

Marcia G. Ramos - One of the best experts on this subject based on the ideXlab platform.

  • Quantifying Visual Distortion in Low-Rate Wavelet-Coded Images
    2007
    Co-Authors: Sheila S. Hemami, Marcia G. Ramos
    Abstract:

    An experiment quantifying human sensitivity to supra-threshold compression artifacts in wavelet-compressed images allows a computation and comparison of mean-squared errors producing equivalent visual distortion, and demonstrates that common assumptions used in wavelet image compression do not necessarily hold for low-rate images. The resulting MSE in the image is constant for a spatial frequency and is independent of orientation. Because the Quantizers are not operating in the granular region, however, equal MSE does not imply equal Quantizer Step Sizes. Computed threshold elevations for the second and third visible degradations are predicted well by the constant MSE model. Supra-threshold distortions in complex stimuli do not therefore behave according to sub-threshold assumptions, which suggest that the Quantizer Step Size should be constant for a spatial frequency.
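
    The point that equal MSE does not imply equal step sizes outside the granular region can be checked numerically; the sketch below uses Laplacian-distributed coefficients as an assumed stand-in for wavelet subband data.

      import numpy as np

      # Uniform mid-tread quantization of Laplacian stand-ins for wavelet
      # coefficients. In the granular (fine-step) regime, MSE tracks step**2/12;
      # at the coarse steps typical of low rates it saturates near the signal
      # variance instead, so equal MSE across subbands does not imply equal
      # Quantizer Step Sizes.
      rng = np.random.default_rng(0)
      coeffs = rng.laplace(0.0, 1.0, 100_000)      # variance 2

      for step in (0.1, 0.5, 2.0, 8.0):
          q = step * np.round(coeffs / step)       # mid-tread uniform quantizer
          mse = np.mean((coeffs - q) ** 2)
          print(f"step={step:4.1f}  mse={mse:6.4f}  step^2/12={step**2 / 12:7.4f}")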

  • ICIP (3) - Quantifying visual distortion in low-rate wavelet-coded images
    Proceedings of the 2001 International Conference on Image Processing (Cat. No. 01CH37205)
    Co-Authors: Sheila S. Hemami, Marcia G. Ramos
    Abstract:

    An experiment quantifying human sensitivity to supra-threshold compression artifacts in wavelet-compressed images allows a computation and comparison of mean-squared errors producing equivalent visual distortion, and demonstrates that common assumptions used in wavelet image compression do not necessarily hold for low-rate images. The resulting MSE in the image is constant for a spatial frequency and is independent of orientation. Because the Quantizers are not operating in the granular region, however, equal MSE does not imply equal Quantizer Step Sizes. Computed threshold elevations for the second and third visible degradations are predicted well by the constant MSE model. Supra-threshold distortions in complex stimuli do not therefore behave according to sub-threshold assumptions, which suggest that the Quantizer Step Size should be constant for a spatial frequency.

Tobias Mirco Koch - One of the best experts on this subject based on the ideXlab platform.

  • On the Information Dimension of Stochastic Processes
    Institute of Electrical and Electronics Engineers (IEEE), 2019
    Co-Authors: Bernhard C. Geiger, Tobias Mirco Koch
    Abstract:

    In 1959, Rényi proposed the information dimension and the d-dimensional entropy to measure the information content of general random variables. This paper proposes a generalization of information dimension to stochastic processes by defining the information dimension rate as the entropy rate of the uniformly quantized stochastic process divided by minus the logarithm of the Quantizer Step Size 1/m in the limit as m → ∞. It is demonstrated that the information dimension rate coincides with the rate-distortion dimension, defined as twice the rate-distortion function R(D) of the stochastic process divided by −log D in the limit as D ↓ 0. It is further shown that among all multivariate stationary processes with a given (matrix-valued) spectral distribution function (SDF), the Gaussian process has the largest information dimension rate, and the information dimension rate of multivariate stationary Gaussian processes is given by the average rank of the derivative of the SDF. The presented results reveal that the fundamental limits of almost zero-distortion recovery via compressible signal pursuit and almost lossless analog compression are different in general.

    The work of Bernhard C. Geiger has partly been funded by the Erwin Schrödinger Fellowship J 3765 of the Austrian Science Fund and by the German Ministry of Education and Research in the framework of an Alexander von Humboldt Professorship. The Know-Center is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Digital and Economic Affairs, and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG. The work of Tobias Koch has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement number 714161), from the 7th European Union Framework Programme under Grant 333680, from the Ministerio de Economía y Competitividad of Spain under Grants TEC2013-41718-R, RYC-2014-16332, and TEC2016-78434-C3-3-R (AEI/FEDER, EU), and from the Comunidad de Madrid under Grant S2103/ICE-2845.
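
    The defining ratio, entropy of the step-(1/m) quantization over log m, can be probed numerically in the simplest scalar case; the Monte-Carlo sketch below is an assumed illustration of Rényi's notion for single random variables, not of the paper's process-level results.

      import numpy as np

      def quantized_entropy(samples, m):
          # Empirical entropy (nats) of the samples quantized with step 1/m.
          _, counts = np.unique(np.floor(samples * m), return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log(p))

      # Ratio H([X]_m) / log m: tends to 1 for a continuous scalar and to 0 for
      # a discrete one; the gap h(X)/log m closes only slowly as m grows.
      rng = np.random.default_rng(0)
      gauss = rng.normal(size=1_000_000)                    # information dimension 1
      coin = rng.integers(0, 2, 1_000_000).astype(float)    # information dimension 0

      for m in (4, 64, 1024):
          print(f"m={m:5d}  gaussian={quantized_entropy(gauss, m) / np.log(m):.3f}"
                f"  coin={quantized_entropy(coin, m) / np.log(m):.3f}")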

  • On the information dimension rate of stochastic processes
    IEEE, 2017
    Co-Authors: Bernhard C. Geiger, Tobias Mirco Koch
    Abstract:

    Proceeding of: 2017 IEEE International Symposium on Information Theory, Aachen, Germany, 25-30 June 2017.

    Jalali and Poor ("Universal compressed sensing," arXiv:1406.7807v3, Jan. 2016) have recently proposed a generalization of Rényi's information dimension to stationary stochastic processes by defining the information dimension of the stochastic process as the information dimension of k samples divided by k in the limit as k → ∞. This paper proposes an alternative definition of information dimension as the entropy rate of the uniformly quantized stochastic process divided by minus the logarithm of the Quantizer Step Size 1/m in the limit as m → ∞. It is demonstrated that both definitions are equivalent for stochastic processes that are ψ*-mixing, but that they may differ in general. In particular, it is shown that for Gaussian processes with essentially bounded power spectral density (PSD), the proposed information dimension equals the Lebesgue measure of the PSD's support. This is in stark contrast to the information dimension proposed by Jalali and Poor, which is 1 if the process's PSD is positive on a set of positive Lebesgue measure, irrespective of its support size.

    The work of Bernhard C. Geiger has been funded by the Erwin Schrödinger Fellowship J 3765 of the Austrian Science Fund and by the German Ministry of Education and Research in the framework of an Alexander von Humboldt Professorship. The work of Tobias Koch has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement number 714161), from the 7th European Union Framework Programme under Grant 333680, from the Spanish Ministerio de Economía y Competitividad under Grants TEC2013-41718-R, RYC-2014-16332 and TEC2016-78434-C3-3-R (AEI/FEDER, EU), and from the Comunidad de Madrid under Grant S2103/ICE-2845.
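
    Restating the two competing definitions from this abstract in symbols (notation chosen here for the summary: d(·) is Rényi's information dimension of a random vector, and H̄(·) the entropy rate of the quantized process; since the step is 1/m, minus its logarithm is log m):

      \[
        d_{\mathrm{JP}}\bigl(\{X_t\}\bigr) = \lim_{k\to\infty} \frac{d(X_1,\dots,X_k)}{k},
        \qquad
        d'\bigl(\{X_t\}\bigr) = \lim_{m\to\infty}
            \frac{\bar{H}\bigl(\{\lfloor m X_t \rfloor / m\}\bigr)}{\log m}.
      \]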

Alan V. Oppenheim - One of the best experts on this subject based on the ideXlab platform.

  • Quantization and Compensation in Sampled Interleaved Multichannel Systems
    IEEE Transactions on Signal Processing, 2012
    Co-Authors: Shay Maymon, Alan V. Oppenheim
    Abstract:

    This paper considers interleaved, multichannel measurements as arise, for example, in time-interleaved analog-to-digital (A/D) converters and in distributed sensor networks. Such systems take the form of either uniform or recurrent nonuniform sampling, depending on the relative timing between the channels. Uniform (i.e., linear) quantization in each channel results in an effective overall signal-to-quantization-error ratio (SQNR) in the reconstructed output that depends on the Quantizer Step Size in each channel, the relative timing between the channels, and the oversampling ratio. It is shown that when the quantization Step Size is not restricted to be the same in each channel and the channel timing is not constrained to correspond to uniform sampling, it is often possible to increase the SQNR relative to the uniform case. The appropriate choice of these parameters, together with the design of the corresponding compensation filtering, is developed.
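
    A minimal numeric sketch of the setup (assumptions: two channels, even/odd interleaving, a sinusoidal test input, and no compensation filtering, which the paper designs but is not reproduced here):

      import numpy as np

      # Two interleaved channels quantizing the same bandlimited signal.
      # Each channel uses its own uniform step; the SQNR of the merged stream
      # depends on both steps and on the interleaving pattern.
      t = np.arange(4096)
      x = np.sin(2 * np.pi * 0.01 * t) + 0.5 * np.sin(2 * np.pi * 0.013 * t)

      def quantize(v, step):
          return step * np.round(v / step)

      steps = (0.05, 0.2)                      # unequal per-channel step sizes
      y = x.copy()
      y[0::2] = quantize(x[0::2], steps[0])    # channel 0 takes even samples
      y[1::2] = quantize(x[1::2], steps[1])    # channel 1 takes odd samples

      sqnr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - y)**2))
      print(f"merged-stream SQNR: {sqnr_db:.1f} dB")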

Jae Moon Jo - One of the best experts on this subject based on the ideXlab platform.

  • Some Adaptive Quantizers for HDTV Image Compression
    Signal Processing of HDTV, 2014
    Co-Authors: Juha Park, Jae Moon Jo, Jechang Jeong
    Abstract:

    This paper proposes scene-adaptive Quantizers for image sequence compression. The proposed Quantizers incorporate properties of the human visual system, as well as channel buffer status, in deciding the Quantizer Step Size. Both spatial-domain and DCT-domain analyses are performed to determine the activity of each macroblock, and the Quantizer Step Size is adjusted according to that activity. The performance of the proposed scene-adaptive Quantizers is demonstrated by computer simulation using an HDTV test sequence. Subjective tests show that picture quality is substantially improved with the proposed Quantizers. In particular, block artifacts are considerably reduced with the proposed methods compared to those produced by the conventional, buffer-controlled Quantizer.
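
    The decision logic described, a step size driven jointly by macroblock activity and buffer fullness, resembles the well-known MPEG-2 Test Model 5 rate control; the sketch below follows that TM5 style as an assumed stand-in, not the paper's exact quantizer.

      def adaptive_qstep(buffer_fullness, act, avg_act, q_max=31):
          # Buffer feedback sets the base step; fuller buffer -> coarser step.
          base_q = max(1, round(q_max * buffer_fullness))
          # Activity modulation in [0.5, 2]: busy blocks (artifacts masked)
          # get coarser quantization, flat blocks get finer quantization.
          n_act = (2 * act + avg_act) / (act + 2 * avg_act)
          return int(min(max(base_q * n_act, 1), q_max))

      # Flat block vs. busy block, both with a half-full channel buffer.
      print(adaptive_qstep(0.5, act=10.0, avg_act=100.0))   # finer step
      print(adaptive_qstep(0.5, act=400.0, avg_act=100.0))  # coarser step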

  • Adaptive Huffman coding of 2-D DCT coefficients for image sequence compression
    Signal Processing-image Communication, 1995
    Co-Authors: Jechang Jeong, Jae Moon Jo
    Abstract:

    This paper presents a new approach to adaptive Huffman coding of 2-D DCT coefficients for image sequence compression. Based on the popular motion-compensated interframe coding, the proposed method employs self-switching multiple Huffman codebooks for entropy coding of quantized transform coefficients. Unlike existing multiple-codebook approaches, where the type of block (intra/inter or luminance/chrominance) selects a codebook, the proposed method jointly utilizes the type of block, the Quantizer Step Size, and the zigzag scan position for the purpose of codebook selection. In addition, as another use of the Quantizer Step Size and the scan position, the proposed method employs a variable-length “Escape” sequence for encoding rare symbols. Experimental results show that the proposed method with two codebooks provides a 0.1–0.4 dB improvement over the single-codebook scheme, and this margin turns out to be substantially larger than the one the MPEG-2 two-codebook approach has over the single-codebook approach.
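
    A sketch of the joint codebook-selection rule described, in which block type, step size, and zigzag position together choose the table; the thresholds and two-table setup are assumptions for illustration:

      def select_codebook(block_type, qstep, scan_pos,
                          qstep_thresh=16, pos_thresh=10):
          # Coarse steps and late zigzag positions both make long zero runs
          # and small amplitudes likelier, so those cases share the codebook
          # tuned for that statistic; intra blocks at fine steps early in
          # the scan use the other. Thresholds are illustrative only.
          skewed = (qstep >= qstep_thresh) or (scan_pos >= pos_thresh)
          if block_type == "intra" and not skewed:
              return 0      # codebook tuned for dense, large coefficients
          return 1          # codebook tuned for sparse, small coefficients

      print(select_codebook("intra", qstep=4, scan_pos=2))    # -> 0
      print(select_codebook("inter", qstep=24, scan_pos=40))  # -> 1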