Stopping Criterion

The Experts below are selected from a list of 7680 Experts worldwide, ranked by the ideXlab platform.

Kyoungwoo Heo - One of the best experts on this subject based on the ideXlab platform.

  • a Stopping Criterion for low density parity check codes
    IEICE Transactions on Communications, 2008
    Co-Authors: Donghyuk Shin, Kyoungwoo Heo, Hyuckjae Lee
    Abstract:

    We propose a new Stopping Criterion for decoding LDPC codes, consisting of a measure of decoder behavior and a decision rule that predicts decoding failure. We show that the proposed measure, the number of satisfied check nodes, requires no (or minimal) additional complexity, and that the decision rule is efficient and, more importantly, channel independent, which previous work did not achieve.

  • a Stopping Criterion for low density parity check codes
    Vehicular Technology Conference, 2007
    Co-Authors: Donghyuk Shin, Kyoungwoo Heo
    Abstract:

    Low-density parity-check (LDPC) codes have an inherent Stopping Criterion: the parity-check constraints (equations). By testing these constraints, an LDPC decoder can detect successful decoding and stop, which is not possible with turbo codes. In this paper, we propose a Stopping Criterion that predicts decoding failure of LDPC codes instead of detecting successful decoding. If the decoder predicts decoding failure in advance, the receiver can respond to the transmitter more rapidly and request additional parity bits through an automatic repeat request (ARQ) protocol, which reduces overall system latency. The receiver can also save power by avoiding unnecessary decoder iterations. The proposed Stopping Criterion uses the variation in the number of satisfied parity-check constraints during belief-propagation (BP) decoding, a quantity that conventional BP decoding already tests to detect successful decoding; the proposed Criterion therefore requires no additional complexity. Counting the satisfied parity-check constraints reveals the behavior of the BP decoding, information that would otherwise have to be obtained by observing changes in multi-bit log-likelihood ratio (LLR) values at additional complexity.
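
In essence, the criterion above tracks a quantity the decoder already computes: the number of satisfied parity checks per iteration. The following Python sketch shows one plausible reading, pairing a standard min-sum BP loop with a stall test on that count; the decoder scaffolding, the stall window, and the stall rule are illustrative assumptions, not the authors' exact decision rule.

```python
import numpy as np

def bp_decode_predict_failure(llr, H, max_iter=50, window=5):
    """Min-sum BP decoding of a binary LDPC code that, besides the usual
    syndrome test for success, predicts decoding failure when the number
    of satisfied parity checks stops growing. The `window` stall test is
    an illustrative assumption, not the paper's exact rule."""
    H = np.asarray(H, dtype=int)
    m, n = H.shape
    c2v = np.zeros((m, n))                    # check-to-variable messages
    history = []                              # satisfied-check count per iteration
    for it in range(1, max_iter + 1):
        # variable-to-check: total belief minus the incoming check message
        total = llr + c2v.sum(axis=0)
        v2c = np.where(H == 1, total - c2v, 0.0)
        # check-to-variable: min-sum update, one check row at a time
        for i in range(m):
            idx = np.flatnonzero(H[i])
            signs = np.sign(v2c[i, idx])
            mags = np.abs(v2c[i, idx])
            for k, j in enumerate(idx):
                c2v[i, j] = np.prod(np.delete(signs, k)) * np.delete(mags, k).min()
        hard = (llr + c2v.sum(axis=0) < 0).astype(int)    # LLR < 0 -> bit 1
        satisfied = int(np.sum(H.dot(hard) % 2 == 0))
        history.append(satisfied)
        if satisfied == m:                    # all checks satisfied: success
            return hard, it, True
        # failure prediction: no new check satisfied over the last `window` rounds
        if len(history) > window and max(history[-window:]) <= history[-window - 1]:
            return hard, it, False
    return hard, max_iter, False
```

On a binary-input AWGN channel, `llr` would hold the channel LLRs; the returned flag separates detected success from predicted failure, so a receiver could trigger an ARQ request as soon as failure is predicted rather than after the iteration limit.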

Xuanxuan Zhang - One of the best experts on this subject based on the ideXlab platform.

  • simplified early Stopping Criterion for belief propagation polar code decoder based on frozen bits
    IEEE Access, 2019
    Co-Authors: Yongli Yan, Xuanxuan Zhang
    Abstract:

    Polar codes were first proposed by E. Arikan in 2009 and have received significant attention in recent years. Successive-cancellation (SC) and belief-propagation (BP) decoding algorithms have both been applied to polar codes. However, unlike SC-based decoders, the performance optimization of BP-based decoders has not been fully explored, especially regarding the impact of the number of iterations on decoding complexity. In this paper, a novel early Stopping Criterion based on partial frozen bits is designed for belief-propagation polar code decoders. The proposed Criterion builds on the fact that some of the frozen bits, which are known to the decoder, have a higher average error probability than the information bits and can therefore be used to terminate the decoding. Furthermore, the hardware architecture of a BP-based polar code decoder with the proposed Stopping Criterion is presented. Simulation results show that the proposed early Stopping Criterion greatly reduces the number of iterations of BP-based polar code decoders without any performance loss and reduces the hardware complexity from O(N log N) to O(N) compared with state-of-the-art designs.
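
A minimal sketch of the frozen-bit test, assuming the BP decoder exposes u-domain LLRs after every iteration; which frozen positions to monitor (the "partial" subset) and the surrounding loop are hypothetical here, not taken from the paper.

```python
import numpy as np

def frozen_bits_agree(u_llr, watch_positions):
    """Frozen bits are known to be zero, so if the current u-domain LLRs
    decode every watched frozen position to 0, the decoder has plausibly
    converged and iteration can stop early. The choice of watched
    positions (the 'partial' frozen-bit subset) is an assumption."""
    hard = u_llr[np.asarray(sorted(watch_positions))] < 0   # LLR < 0 -> bit 1
    return not hard.any()

# Hypothetical use inside a BP polar decoding loop:
# for it in range(max_iter):
#     u_llr, x_llr = bp_iteration(u_llr, x_llr)   # one factor-graph update round
#     if frozen_bits_agree(u_llr, high_error_frozen_subset):
#         break                                   # terminate early
```

Monitoring only a subset of frozen positions, rather than a full CRC or G-matrix check, is what keeps the termination logic at O(N) in hardware.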

Scott C.-h. Huang - One of the best experts on this subject based on the ideXlab platform.

  • A new Stopping Criterion for fast low-density parity-check decoders
    2013 IEEE Global Communications Conference (GLOBECOM), 2013
    Co-Authors: Hsiao-chun Wu, Scott C.-h. Huang
    Abstract:

    Nonbinary low-density parity-check (LDPC) codes can achieve excellent error performance with codewords of short or moderate length. However, the high decoding complexity of nonbinary LDPC codes inevitably diminishes their practical value. The computational bottleneck arises from the check-node processing in the iterative message-passing (MP) algorithms, which terminate when either all parity checks are satisfied or the maximum iteration number is reached. We have observed that for undecodable blocks, the MP algorithms always run up to the maximum iteration limit and still cannot generate the correct codeword. Thus, it is better to terminate the algorithms early for undecodable blocks, saving unnecessary computation time and reducing power consumption. In this paper, we propose a new T-tolerance Stopping Criterion for LDPC decoders by exploiting the fact that the total a posteriori probability (APP) should increase as the iteration number grows. Simulation results demonstrate that our proposed T-tolerance Criterion can greatly reduce the average iteration number (complexity) while restricting the decoding performance degradation to within 0.1 dB in low bit-energy-to-noise-ratio scenarios.
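
A hedged rendering of the T-tolerance idea in Python: maintain a scalar total-APP trace across iterations and abort once it has failed to increase for T consecutive iterations. The APP statistic shown (sum of per-symbol maximum posteriors) and the default T are assumptions of this sketch, not the paper's exact definition.

```python
import numpy as np

def t_tolerance_should_stop(app_history, T=3):
    """Under successful message passing the total a posteriori probability
    (APP) should keep increasing; once it has failed to increase for T
    consecutive iterations, treat the block as undecodable and abort.
    The exact statistic and the choice of T are assumptions here."""
    if len(app_history) < T + 1:
        return False
    recent = app_history[-(T + 1):]
    # no increase anywhere in the last T steps -> predict failure
    return all(later <= earlier for earlier, later in zip(recent, recent[1:]))

# Hypothetical use inside a nonbinary LDPC message-passing loop:
# app_history = []
# for it in range(max_iter):
#     posteriors = mp_iteration(...)          # per-symbol APP distributions
#     app_history.append(float(np.max(posteriors, axis=1).sum()))
#     if syndrome_is_zero(...):
#         break                               # detected success
#     if t_tolerance_should_stop(app_history, T=3):
#         break                               # predicted failure: stop early
```

The tolerance T trades average iteration count against the risk of aborting a block that would still have converged, which is how the reported 0.1 dB degradation bound arises.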

Hsiao-chun Wu - One of the best experts on this subject based on the ideXlab platform.

  • New Stopping Criterion for Fast Low-Density Parity-Check Decoders
    IEEE Communications Letters, 2014
    Co-Authors: Hsiao-chun Wu, Hong Jiang
    Abstract:

    Low-density parity-check (LDPC) codes are favored in low bit-error-rate and high code-rate applications. However, the decoding complexity of LDPC codes is quite large, especially for nonbinary LDPC codes. In this paper, we propose a new T-tolerance Stopping Criterion for LDPC decoders by exploiting the fact that the total a posteriori probability (APP) should increase across iterations when message-passing (MP) algorithms are employed. Simulation results demonstrate that our proposed T-tolerance Criterion can greatly reduce the average iteration number (complexity) while the decoding performance degradation is kept within 0.1 dB in low bit-energy-to-noise-spectral-density ratio (Eb/N0) scenarios.

  • A new Stopping Criterion for fast low-density parity-check decoders
    2013 IEEE Global Communications Conference (GLOBECOM), 2013
    Co-Authors: Hsiao-chun Wu, Scott C.-h. Huang
    Abstract:

    Nonbinary low-density parity-check (LDPC) codes can achieve excellent error performance with codewords of short or moderate length. However, the high decoding complexity of nonbinary LDPC codes inevitably diminishes their practical value. The computational bottleneck arises from the check-node processing in the iterative message-passing (MP) algorithms, which terminate when either all parity checks are satisfied or the maximum iteration number is reached. We have observed that for undecodable blocks, the MP algorithms always run up to the maximum iteration limit and still cannot generate the correct codeword. Thus, it is better to terminate the algorithms early for undecodable blocks, saving unnecessary computation time and reducing power consumption. In this paper, we propose a new T-tolerance Stopping Criterion for LDPC decoders by exploiting the fact that the total a posteriori probability (APP) should increase as the iteration number grows. Simulation results demonstrate that our proposed T-tolerance Criterion can greatly reduce the average iteration number (complexity) while restricting the decoding performance degradation to within 0.1 dB in low bit-energy-to-noise-ratio scenarios.

Nicolas Meyer - One of the best experts on this subject based on the ideXlab platform.

  • a new universal resample stable bootstrap based Stopping Criterion for pls component construction
    Statistics and Computing, 2017
    Co-Authors: Jeremy Magnanensi, Frederic Bertrand, Myriam Maumybertrand, Nicolas Meyer
    Abstract:

    We develop a new robust Stopping Criterion for partial least squares regression (PLSR) component construction, characterized by a high level of stability. This new Criterion is universal since it is suitable both for PLSR and for its extension to generalized linear regression (PLSGLR). The Criterion is based on a non-parametric bootstrap technique and must be computed algorithmically. It allows each successive component to be tested at a preset significance level alpha. In order to assess its performance and robustness with respect to various noise levels, we perform dataset simulations in which there is a preset and known number of components. These simulations are carried out for datasets characterized both by n > p, with n the number of subjects and p the number of covariates, as well as for n < p.

  • a new universal resample stable bootstrap based Stopping Criterion in pls components construction
    arXiv: Methodology, 2015
    Co-Authors: Jeremy Magnanensi, Frederic Bertrand, Myriam Maumybertrand, Nicolas Meyer
    Abstract:

    We develop a new robust Stopping Criterion for Partial Least Squares Regression (PLSR) component construction, characterized by a high level of stability. This new Criterion is defined as universal since it is suitable both for PLSR and for its extension to Generalized Linear Regression (PLSGLR). The Criterion is based on a non-parametric bootstrap process and has to be computed algorithmically. It allows each successive component to be tested at a preset significance level alpha. In order to assess its performance and robustness with respect to different noise levels, we perform intensive dataset simulations, with a preset and known number of components to extract, both in the case n > p (n being the number of subjects and p the number of original predictors) and for datasets with n < p. Its robustness is particularly tested through resampling processes on a real allelotyping dataset. We conclude that our Criterion also yields better overall predictive performance, both in the PLSR and PLSGLR (logistic and Poisson) frameworks.
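
Both entries above describe the same procedure: bootstrap each candidate PLS component and retain it only while it tests significant at level alpha. The sketch below is one plausible Python rendering built on scikit-learn's PLSRegression; the tested statistic (the newest component's y-loading) and the percentile-interval test are simplifying assumptions, not the authors' exact bootstrap scheme.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def bootstrap_stop_pls(X, y, alpha=0.05, max_comp=10, n_boot=500, seed=0):
    """Add PLS components one at a time; keep the h-th component only if a
    bootstrap percentile interval for its y-loading excludes zero at level
    alpha. Statistic and interval are simplifying assumptions."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    for h in range(1, max_comp + 1):
        stats = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)         # resample rows with replacement
            pls = PLSRegression(n_components=h).fit(X[idx], y[idx])
            stats.append(pls.y_loadings_[0, h - 1])  # newest component's loading
        lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
        if lo <= 0.0 <= hi:                          # interval covers 0: not significant
            return h - 1                             # stop: keep h - 1 components
    return max_comp
```

The function returns the number of components retained before the first non-significant one, which would then serve as the stopping point for the final PLSR fit; the same loop structure applies in the PLSGLR case with a suitable bootstrap statistic.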