Nonzero Coefficient

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 60 Experts worldwide ranked by ideXlab platform

Tong Zhang - One of the best experts on this subject based on the ideXlab platform.

  • On the Consistency of Feature Selection using Greedy Least Squares Regression
    Journal of Machine Learning Research, 2009
    Co-Authors: Tong Zhang
    Abstract:

    This paper studies the feature selection problem using a greedy least squares regression algorithm. We show that under a certain irrepresentable condition on the design matrix (but independent of the sparse target), the greedy algorithm can select features consistently when the sample size approaches infinity. The condition is identical to a corresponding condition for Lasso. Moreover, under a sparse eigenvalue condition, the greedy algorithm can reliably identify features as long as each Nonzero Coefficient is larger than a constant times the noise level. In comparison, Lasso may require the Coefficients to be larger than O(√s) times the noise level in the worst case, where s is the number of Nonzero Coefficients.
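The greedy least squares procedure the abstract refers to can be sketched as forward selection with least-squares refitting (an orthogonal-matching-pursuit-style loop). This is an illustrative assumption of the algorithm's shape, not the paper's exact formulation; the function name and the fixed step count `k` are hypothetical:

```python
import numpy as np

def greedy_least_squares(X, y, k):
    """Greedy forward feature selection, sketched in the OMP style.

    At each step, pick the feature most correlated with the current
    residual, then refit least squares on all selected features.
    Illustrative sketch; stopping rules and tie-breaking are assumptions.
    """
    selected = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        # correlation of every feature with the current residual;
        # mask out features that were already selected
        scores = np.abs(X.T @ residual)
        scores[selected] = -np.inf
        selected.append(int(np.argmax(scores)))
        # refit least squares on the selected columns
        Xs = X[:, selected]
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        residual = y - Xs @ coef
    return selected, coef
```

On a noiseless sparse target, the loop recovers the support exactly once each Nonzero Coefficient dominates the residual correlations, which is the regime the consistency result describes.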

G Lakhani - One of the best experts on this subject based on the ideXlab platform.

  • Optimal Huffman Coding of DCT Blocks
    IEEE Transactions on Circuits and Systems for Video Technology, 2004
    Co-Authors: G Lakhani
    Abstract:

    It is a well-observed characteristic that, when a discrete cosine transform block is traversed in the zigzag order, ac Coefficients generally decrease in size and the runs of zero Coefficients increase in length. This paper presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this characteristic. During the run-length coding, instead of pairing a Nonzero ac Coefficient with the run-length of the preceding zero Coefficients, our encoder pairs it with the run-length of subsequent zeros. This small change makes it possible for our codec to code a pair using a separate Huffman code table optimized for the position of the Nonzero Coefficient denoted by the pair. These position-dependent code tables can be encoded efficiently without incurring a sizable overhead. Experimental results show that our encoder produces a further reduction in the ac Coefficient Huffman code size by about 10%-15%.
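The pairing change described above can be sketched as follows: instead of attaching each Nonzero ac Coefficient to the run of zeros *before* it (baseline JPEG), the encoder attaches it to the run of zeros *after* it. This is a minimal sketch of the pairing step only, under assumed names; a real JPEG codec would also need EOB/ZRL handling and the Huffman tables themselves:

```python
def pair_with_subsequent_zeros(ac):
    """Pair each Nonzero ac Coefficient (zigzag order) with the
    run-length of the zeros that FOLLOW it, per the paper's
    modification. Any zeros before the first nonzero coefficient
    are ignored in this simplified sketch.
    """
    pairs = []
    i, n = 0, len(ac)
    while i < n:
        if ac[i] != 0:
            # count the run of zeros immediately after this coefficient
            j = i + 1
            while j < n and ac[j] == 0:
                j += 1
            pairs.append((ac[i], j - i - 1))
            i = j
        else:
            i += 1
    return pairs
```

Because each pair is now keyed by the position of its Nonzero Coefficient, a separate, position-optimized Huffman table can be used per pair, which is where the reported 10%-15% code-size reduction comes from.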

Shih-hao Hung - One of the best experts on this subject based on the ideXlab platform.

  • ICASSP (2) - A Low Complexity Rate-Distortion Source Modeling Framework
    2006 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, 2006
    Co-Authors: Chun-yuan Chang, Tsung-nan Lin, Din-yuan Chan, Shih-hao Hung
    Abstract:

    An accurate rate-distortion model, which characterizes the relationship among bitrate, distortion, and quantization parameter (QP), is highly desirable for real-time video transmission. It has been reported that the actual coding bitrate can be estimated by a linear combination of two characteristic rate curves in the rho-domain, where rho is defined as the percentage of zeros among the quantized transform Coefficients; this process is referred to as "pseudocoding". Unfortunately, since rho values are real numbers, an interpolation step is required for the transformation from the rho-domain to the q-domain. The resulting prediction inaccuracy therefore cannot be avoided and can even be propagated by the interpolation uncertainty error. Hence, three parameters are proposed in this paper to support a more accurate and direct estimate of the encoding bitrate in the q-domain: the number of Nonzero Coefficients, the count of zeros before the last Nonzero Coefficient in the zigzag-scan order, and the sum of the absolute values of the quantized Nonzero Coefficients. We find, surprisingly, that the estimation accuracy in the q-domain is better than that of the currently well-known rho-domain-based R-Q model. In addition, a quantization-free extraction method, which involves only additions and a few multiplications, is developed, so the implementation complexity of the proposed mechanism is very low. Consequently, the proposed R-Q model is well suited to real-time applications.
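The three q-domain parameters the abstract lists can be extracted directly from a block of quantized coefficients in zigzag-scan order. A minimal sketch, assuming the coefficients are already zigzag-ordered and using hypothetical names:

```python
def q_domain_features(coeffs_zigzag):
    """Extract the three bitrate-estimation parameters from quantized
    transform coefficients in zigzag-scan order:
      1. number of Nonzero Coefficients,
      2. count of zeros before the last Nonzero Coefficient,
      3. sum of absolute values of the Nonzero Coefficients.
    Only additions and comparisons are needed, consistent with the
    paper's claim of low extraction complexity.
    """
    num_nonzero = 0
    abs_sum = 0
    last_nonzero = -1  # position of the last Nonzero Coefficient
    for idx, c in enumerate(coeffs_zigzag):
        if c != 0:
            num_nonzero += 1
            abs_sum += abs(c)
            last_nonzero = idx
    # zeros strictly before the last nonzero position
    zeros_before_last = (last_nonzero + 1 - num_nonzero) if last_nonzero >= 0 else 0
    return num_nonzero, zeros_before_last, abs_sum
```

Note that the zero count falls out of the positions arithmetically: among the `last_nonzero + 1` leading coefficients, all but the `num_nonzero` nonzero ones are zeros.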

Xinling Shi - One of the best experts on this subject based on the ideXlab platform.

  • A Wavelet-Based ECG Compression Algorithm Using Golomb Codes
    2006 International Conference on Communications Circuits and Systems, 2006
    Co-Authors: Jianhua Chen, Yufeng Zhang, Xinling Shi
    Abstract:

    A new wavelet-based method for the compression of electrocardiogram (ECG) data is presented. The discrete wavelet transform (DWT) is applied to the digitized ECG signal. The DWT Coefficients are first quantized with a uniform scalar dead-zone quantizer. Then, the quantized Coefficients are decomposed into two parts for efficient entropy coding: a Nonzero Coefficient stream and a binary significance symbol stream which indicates the locations of those Nonzero Coefficients. Exp-Golomb coding is used to code the lengths of the runs of zero Coefficients, and Golomb-Rice coding is used to code the Nonzero Coefficients. Experiments on several records from the MIT-BIH arrhythmia database show that the proposed coding algorithm outperforms other recently developed ECG signal compression algorithms.
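The decomposition and the two code families mentioned in the abstract can be sketched as follows. The stream split and both codes are standard constructions; the parameter choices, sign handling, and bit packing here are simplifying assumptions, not the paper's exact codec:

```python
def decompose(coeffs):
    """Split quantized DWT coefficients into a Nonzero Coefficient
    stream and a binary significance stream (1 where a Nonzero
    Coefficient sits, 0 elsewhere)."""
    significance = [1 if c != 0 else 0 for c in coeffs]
    nonzeros = [c for c in coeffs if c != 0]
    return significance, nonzeros

def exp_golomb(n):
    """Order-0 Exp-Golomb code of a non-negative integer n
    (e.g. a zero-run length): binary of n+1, prefixed by one
    leading zero per bit beyond the first."""
    b = bin(n + 1)[2:]
    return "0" * (len(b) - 1) + b

def golomb_rice(n, k):
    """Golomb-Rice code of a non-negative integer n with parameter k:
    unary-coded quotient n >> k, then the k-bit remainder.
    Sign mapping for negative coefficients is omitted for brevity."""
    q = n >> k
    if k == 0:
        return "1" * q + "0"
    return "1" * q + "0" + format(n & ((1 << k) - 1), f"0{k}b")
```

Both codes favor small values with short codewords, which matches the statistics of zero-run lengths and quantized Nonzero Coefficients.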

Zhimin Zhao - One of the best experts on this subject based on the ideXlab platform.

  • A novel image denoising algorithm in wavelet domain using total variation and grey theory
    Engineering Computations, 2010
    Co-Authors: Zhimin Zhao
    Abstract:

    Purpose – Traditional total variation (TV) models in the wavelet domain apply thresholding directly in Coefficient selection and suffer from the Gibbs phenomenon. The Nonzero Coefficient index set selected by hard-thresholding techniques may not be the best choice for obtaining the least oscillatory reconstructions near edges. This paper aims to propose an image denoising method based on TV and grey theory in the wavelet domain to overcome this defect of traditional methods. Design/methodology/approach – The authors divide the wavelet domain into two parts, a low-frequency area and a high-frequency area, and use different methods in each. They apply grey theory to wavelet Coefficient selection. The new algorithm gives a new method of wavelet Coefficient selection, handles the ordering of the Nonzero Coefficients, and achieves a good image denoising result while reducing the Gibbs phenomenon. Findings – The results show that the method proposed in this paper can distinguish between the information of ...
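The hard-thresholding baseline the abstract argues against selects the Nonzero Coefficient index set by magnitude alone. A minimal sketch of that baseline rule (the paper's grey-theory selection is not reproduced here):

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Hard thresholding of wavelet coefficients: keep every
    coefficient whose magnitude exceeds t, zero out the rest.
    This magnitude-only index-set selection is the baseline the
    paper identifies as a source of Gibbs oscillations near edges.
    """
    c = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(c) > t, c, 0.0)
```

The surviving indices form the Nonzero Coefficient index set; the paper's point is that choosing this set purely by magnitude can leave oscillatory reconstructions near edges.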