Likelihood Ratio

The Experts below are selected from a list of 122,826 Experts worldwide ranked by the ideXlab platform

Ahmed H. Tewfik - One of the best experts on this subject based on the ideXlab platform.

  • Deep Log-Likelihood Ratio Quantization
    European Signal Processing Conference, 2019
    Co-Authors: Marius Arvinte, Ahmed H. Tewfik, Sriram Vishwanath
    Abstract:

    In this work, a deep learning-based method for log-Likelihood Ratio (LLR) lossy compression and quantization is proposed, with emphasis on a single-input single-output uncorrelated fading communication setting. A deep autoencoder network is trained to compress, quantize, and reconstruct the bit log-Likelihood Ratios corresponding to a single transmitted symbol. Specifically, the encoder maps to a latent space with dimension equal to the number of sufficient statistics required to recover the inputs (three in this case), while the decoder aims to reconstruct a noisy version of the latent representation in order to model quantization effects in a differentiable way. Simulation results show that, when applied to a standard rate-1/2 low-density parity-check (LDPC) code, a finite-precision compression factor of nearly three is achieved when storing an entire codeword, with an incurred performance loss of less than 0.15 dB compared to straightforward scalar quantization of the log-Likelihood Ratios, and the method is competitive with state-of-the-art approaches.
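
    A minimal sketch of the idea above, assuming a PyTorch implementation (the abstract does not name a framework); the modulation order, layer widths, and noise level used to mimic quantization are illustrative assumptions rather than values from the paper:

    # Sketch: autoencoder that compresses per-symbol bit LLRs to a 3-dimensional
    # latent code and injects noise during training as a differentiable stand-in
    # for quantization. Assumed details: PyTorch, 16-QAM (4 bit LLRs per symbol),
    # layer widths, and the uniform-noise magnitude.
    import torch
    import torch.nn as nn

    BITS_PER_SYMBOL = 4   # assumed modulation order (16-QAM)
    LATENT_DIM = 3        # number of sufficient statistics, as stated in the abstract

    class LLRAutoencoder(nn.Module):
        def __init__(self, quant_noise=0.05):
            super().__init__()
            self.quant_noise = quant_noise
            self.encoder = nn.Sequential(
                nn.Linear(BITS_PER_SYMBOL, 32), nn.ReLU(),
                nn.Linear(32, LATENT_DIM), nn.Tanh(),   # bounded latent code
            )
            self.decoder = nn.Sequential(
                nn.Linear(LATENT_DIM, 32), nn.ReLU(),
                nn.Linear(32, BITS_PER_SYMBOL),
            )

        def forward(self, llr):
            z = self.encoder(llr)
            if self.training:
                # Additive noise on the latent code models finite-precision
                # quantization in a way that gradients can flow through.
                z = z + self.quant_noise * (2 * torch.rand_like(z) - 1)
            return self.decoder(z)

    # Usage sketch: train with a reconstruction loss on batches of per-symbol LLRs.
    model = LLRAutoencoder()
    llr_batch = torch.randn(128, BITS_PER_SYMBOL)   # placeholder LLRs
    loss = nn.functional.mse_loss(model(llr_batch), llr_batch)
    loss.backward()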

  • Deep Log-Likelihood Ratio Quantization
    arXiv: Learning, 2019
    Co-Authors: Marius Arvinte, Ahmed H. Tewfik, Sriram Vishwanath
    Abstract:

    In this work, a deep learning-based method for log-Likelihood Ratio (LLR) lossy compression and quantization is proposed, with emphasis on a single-input single-output uncorrelated fading communication setting. A deep autoencoder network is trained to compress, quantize, and reconstruct the bit log-Likelihood Ratios corresponding to a single transmitted symbol. Specifically, the encoder maps to a latent space with dimension equal to the number of sufficient statistics required to recover the inputs (three in this case), while the decoder aims to reconstruct a noisy version of the latent representation in order to model quantization effects in a differentiable way. Simulation results show that, when applied to a standard rate-1/2 low-density parity-check (LDPC) code, a finite-precision compression factor of nearly three is achieved when storing an entire codeword, with an incurred performance loss of less than 0.1 dB compared to straightforward scalar quantization of the log-Likelihood Ratios.
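
    For context, a hedged sketch of the scalar-quantization baseline mentioned above: exact per-bit LLRs for a Gray-mapped 4-PAM symbol on a fading channel, quantized component-wise by a uniform quantizer. The constellation, noise level, clipping range, and bit width are assumptions for illustration only:

    # Sketch: exact bit LLRs for one received 4-PAM symbol, then uniform scalar quantization.
    import numpy as np

    SYMBOLS = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5.0)   # unit-energy 4-PAM
    BITS = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])           # Gray bit labels

    def bit_llrs(y, h, noise_var):
        """Exact LLRs log p(y | b=0) / p(y | b=1) for each bit of one received symbol."""
        metrics = -(y - h * SYMBOLS) ** 2 / (2 * noise_var)     # per-symbol log-likelihoods
        return np.array([
            np.logaddexp.reduce(metrics[BITS[:, b] == 0])
            - np.logaddexp.reduce(metrics[BITS[:, b] == 1])
            for b in range(BITS.shape[1])
        ])

    def uniform_quantize(llr, n_bits=4, clip=8.0):
        """Straightforward scalar quantizer: clip, then round to 2**n_bits uniform levels."""
        levels = 2 ** n_bits
        step = 2 * clip / (levels - 1)
        idx = np.clip(np.round((np.clip(llr, -clip, clip) + clip) / step), 0, levels - 1)
        return idx * step - clip

    h, noise_var = 0.8, 0.5                                     # assumed channel gain and noise
    y = h * SYMBOLS[2] + np.sqrt(noise_var) * np.random.randn() # one received sample
    print(uniform_quantize(bit_llrs(y, h, noise_var)))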

  • Empirical Likelihood Ratio Test with Distribution Function Constraints
    IEEE Transactions on Signal Processing, 2013
    Co-Authors: Yingxi Liu, Ahmed H. Tewfik
    Abstract:

    In this work, we study a non-parametric hypothesis testing problem with distribution function constraints. The empirical Likelihood Ratio test has been widely used in testing problems with moment (in)equality constraints; however, some detection problems cannot be described using moment (in)equalities. We propose a distribution function constraint along with an empirical Likelihood Ratio test. This detector is applicable to a wide variety of robust parametric/non-parametric detection problems. Since the distribution function constraints provide a more exact description of the null hypothesis, the test outperforms the empirical Likelihood Ratio test with moment constraints as well as many popular goodness-of-fit tests, such as the robust Kolmogorov-Smirnov test and the Cramer-von Mises test. Examples from communication systems with real-world noise samples are provided to illustrate its performance. Specifically, the proposed test significantly outperforms the robust Kolmogorov-Smirnov test and the Cramer-von Mises test when the null hypothesis is nested in the alternative hypothesis. Repeating the same example under the assumption of no noise uncertainty shows that, in our setting, it is necessary to model uncertainty in the noise distribution. Additionally, the asymptotic optimality of the proposed test is established.
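
    For orientation, a hedged sketch (in LaTeX, with assumed notation) of the general empirical Likelihood Ratio construction underlying the test; the exact form of the distribution function constraint used in the paper is not reproduced here:

    % Empirical likelihood ratio with weights w_i on the observed samples x_1, ..., x_n.
    % A moment constraint uses a function g; the paper instead constrains the weighted
    % empirical distribution function (the form shown is an assumed illustration).
    \mathcal{R}(H_0) = \max_{w_1,\dots,w_n}
      \Bigl\{ \prod_{i=1}^{n} n w_i \;:\;
        w_i \ge 0,\ \sum_{i=1}^{n} w_i = 1,\ \text{constraints encoding } H_0 \Bigr\},
    \qquad
    \text{e.g. } \sum_{i=1}^{n} w_i\, \mathbf{1}\{x_i \le t\} \le F_0(t) \text{ for selected } t,
    \qquad
    \text{test statistic: } -2 \log \mathcal{R}(H_0).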

Sriram Vishwanath - One of the best experts on this subject based on the ideXlab platform.

  • Deep Log-Likelihood Ratio Quantization
    European Signal Processing Conference, 2019
    Co-Authors: Marius Arvinte, Ahmed H. Tewfik, Sriram Vishwanath
    Abstract:

    In this work, a deep learning-based method for log-Likelihood Ratio (LLR) lossy compression and quantization is proposed, with emphasis on a single-input single-output uncorrelated fading communication setting. A deep autoencoder network is trained to compress, quantize, and reconstruct the bit log-Likelihood Ratios corresponding to a single transmitted symbol. Specifically, the encoder maps to a latent space with dimension equal to the number of sufficient statistics required to recover the inputs (three in this case), while the decoder aims to reconstruct a noisy version of the latent representation in order to model quantization effects in a differentiable way. Simulation results show that, when applied to a standard rate-1/2 low-density parity-check (LDPC) code, a finite-precision compression factor of nearly three is achieved when storing an entire codeword, with an incurred performance loss of less than 0.15 dB compared to straightforward scalar quantization of the log-Likelihood Ratios, and the method is competitive with state-of-the-art approaches.

  • Deep Log-Likelihood Ratio Quantization
    arXiv: Learning, 2019
    Co-Authors: Marius Arvinte, Ahmed H. Tewfik, Sriram Vishwanath
    Abstract:

    In this work, a deep learning-based method for log-Likelihood Ratio (LLR) lossy compression and quantization is proposed, with emphasis on a single-input single-output uncorrelated fading communication setting. A deep autoencoder network is trained to compress, quantize, and reconstruct the bit log-Likelihood Ratios corresponding to a single transmitted symbol. Specifically, the encoder maps to a latent space with dimension equal to the number of sufficient statistics required to recover the inputs (three in this case), while the decoder aims to reconstruct a noisy version of the latent representation in order to model quantization effects in a differentiable way. Simulation results show that, when applied to a standard rate-1/2 low-density parity-check (LDPC) code, a finite-precision compression factor of nearly three is achieved when storing an entire codeword, with an incurred performance loss of less than 0.1 dB compared to straightforward scalar quantization of the log-Likelihood Ratios.

Fan Yang - One of the best experts on this subject based on the ideXlab platform.

  • Central Limit Theorems for Classical Likelihood Ratio Tests for High-Dimensional Normal Distributions
    Annals of Statistics, 2013
    Co-Authors: Tiefeng Jiang, Fan Yang
    Abstract:

    For random samples of size n obtained from p-variate normal distributions, we consider the classical Likelihood Ratio tests (LRT) for their means and covariance matrices in the high-dimensional setting. These test statistics have been extensively studied in multivariate analysis, and their limiting distributions under the null hypothesis were proved to be chi-square distributions as n goes to infinity while p remains fixed. In this paper, we consider the high-dimensional case where both p and n go to infinity with p/n → y ∈ (0, 1]. We prove that the Likelihood Ratio test statistics under this assumption converge in distribution to normal distributions with explicit means and variances. A simulation study shows that the Likelihood Ratio tests based on our central limit theorems outperform those using the traditional chi-square approximations for analyzing high-dimensional data.
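
    A minimal Monte Carlo sketch of the phenomenon described above, assuming one particular test (H0: the covariance matrix equals the identity, with the mean known to be zero; the paper treats several tests with unknown means): when p grows proportionally to n, the sample mean of -2 log Lambda drifts away from the classical chi-square degrees of freedom p(p+1)/2.

    # Sketch: classical LRT statistic for H0: Sigma = I_p with known zero mean,
    # -2 log Lambda = n * (trace(S) - log det(S) - p), S the MLE of the covariance.
    # Sample size, dimension, and repetition count are illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, n_rep = 200, 80, 2000                      # p/n = 0.4: high-dimensional regime

    stats = np.empty(n_rep)
    for r in range(n_rep):
        x = rng.standard_normal((n, p))              # data generated under H0
        s = x.T @ x / n                              # MLE of the covariance matrix
        _, logdet = np.linalg.slogdet(s)
        stats[r] = n * (np.trace(s) - logdet - p)

    df = p * (p + 1) / 2                             # classical chi-square degrees of freedom
    print("empirical mean of -2 log Lambda:", stats.mean())
    print("classical chi-square df        :", df)    # visibly smaller when p/n is not small
    print("empirical standard deviation   :", stats.std())  # compare with sqrt(2 * df)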

David Ruppert - One of the best experts on this subject based on the ideXlab platform.

  • Likelihood Ratio Tests for Goodness of Fit of a Nonlinear Regression Model
    Journal of Multivariate Analysis, 2004
    Co-Authors: Ciprian M Crainiceanu, David Ruppert
    Abstract:

    We propose Likelihood and restricted Likelihood Ratio tests for goodness of fit of a nonlinear regression model. The first-order Taylor approximation around the MLE of the regression parameters is used to approximate the null hypothesis, and the alternative is modeled nonparametrically using penalized splines. The exact finite-sample distribution of the test statistics is obtained for the linear model approximation and can easily be simulated. We recommend using the restricted Likelihood Ratio test instead of the Likelihood Ratio test because restricted maximum-Likelihood estimates are not as severely biased as the maximum-Likelihood estimates in the penalized splines framework.
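
    A hedged sketch (assumed notation) of the testing setup: the null model is the parametric nonlinear regression, and the alternative adds a penalized-spline deviation whose variance component is zero under the null, so that after the Taylor linearization the problem reduces to testing a zero variance component in a linear mixed model.

    % Assumed notation: f is the parametric mean function, m a penalized-spline deviation
    % with basis functions B_k and random coefficients b_k.
    y_i = f(x_i, \beta) + m(x_i) + \varepsilon_i, \qquad
    m(x) = \sum_{k=1}^{K} b_k B_k(x), \quad b_k \sim N(0, \sigma_b^2), \quad
    \varepsilon_i \sim N(0, \sigma_\varepsilon^2),
    \qquad H_0:\ \sigma_b^2 = 0 \ (\text{i.e., } m \equiv 0).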

  • Likelihood Ratio Tests in Linear Mixed Models with One Variance Component
    Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2004
    Co-Authors: Ciprian M Crainiceanu, David Ruppert
    Abstract:

    We consider the problem of testing null hypotheses that include restrictions on the variance component in a linear mixed model with one variance component, and we derive the finite-sample and asymptotic distributions of the Likelihood Ratio test and the restricted Likelihood Ratio test. The spectral representations of the Likelihood Ratio test and restricted Likelihood Ratio test statistics are used as the basis of efficient simulation algorithms for their null distributions. The large-sample chi-square mixture approximations based on the usual asymptotic theory for a null hypothesis on the boundary of the parameter space have been shown to be poor in simulation studies. Our asymptotic calculations explain these empirical results. The theory of Self and Liang applies only to linear mixed models for which the data vector can be partitioned into a large number of independent and identically distributed subvectors. One-way analysis of variance and penalized spline models illustrate the results.
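
    A hedged sketch (assumed notation) of the model and hypotheses in question, together with the usual boundary chi-square mixture approximation that the paper shows can be poor outside the independent-subvector setting of Self and Liang:

    % Linear mixed model with a single variance component.
    y = X\beta + Zb + \varepsilon, \qquad b \sim N(0, \sigma_b^2 \Sigma), \quad
    \varepsilon \sim N(0, \sigma_\varepsilon^2 I_n), \qquad
    H_0:\ \sigma_b^2 = 0 \quad \text{vs.} \quad H_1:\ \sigma_b^2 > 0,
    \qquad
    -2 \log \lambda_n \ \overset{\text{approx.}}{\sim}\ \tfrac{1}{2}\chi^2_0 + \tfrac{1}{2}\chi^2_1 .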

Min Tsao - One of the best experts on this subject based on the ideXlab platform.