Implementation Complexity

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 360 Experts worldwide ranked by ideXlab platform

Hyuck M Kwon - One of the best experts on this subject based on the ideXlab platform.

  • An Improved Quasi-Cyclic Low-Density Parity-Check Code for Memory Channels
    Vehicular Technology Conference, 2004
    Co-Authors: M Jayabalan, Hyuck M Kwon
    Abstract:

    This paper presents an improved construction of quasi-cyclic low-density parity-check (QC-LDPC) codes based on circulant sub-matrices for fading channels. The proposed construction yields a performance gain of about 2 to 5 dB at a 10^-4 bit error rate (BER). The paper also proposes a decoding method that reduces the implementation complexity of the belief propagation algorithm, and studies the performance of circulant-sub-matrix-based QC-LDPC codes at higher rates under AWGN and fading channels.
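The abstract above describes building a QC-LDPC parity-check matrix from circulant sub-matrices. A minimal sketch of that general construction is below; the grid of shift values is illustrative only, not the construction proposed in the paper.

```python
# Hypothetical sketch of a quasi-cyclic LDPC parity-check matrix assembled
# from circulant sub-matrices. Shift values here are illustrative placeholders.

def circulant(size, shift):
    """size x size identity matrix cyclically right-shifted by `shift` columns."""
    return [[1 if (c - r) % size == shift else 0 for c in range(size)]
            for r in range(size)]

def qc_ldpc_h(shifts, sub_size):
    """Assemble H from a grid of circulant shift values (-1 = all-zero block)."""
    rows = []
    for shift_row in shifts:
        blocks = [circulant(sub_size, s) if s >= 0
                  else [[0] * sub_size for _ in range(sub_size)]
                  for s in shift_row]
        for r in range(sub_size):
            rows.append([bit for blk in blocks for bit in blk[r]])
    return rows

# Example: a 2x4 grid of 3x3 circulants gives a 6x12 parity-check matrix,
# with one 1 per row in each non-zero circulant block.
H = qc_ldpc_h([[0, 1, 2, -1], [2, 0, -1, 1]], 3)
```

Because each block is a shifted identity, the whole matrix is determined by the small grid of shifts, which is what makes QC-LDPC codes attractive for low-complexity hardware encoders and decoders.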

George A Constantinides - One of the best experts on this subject based on the ideXlab platform.

  • Error Modelling of Dual Fixed-Point Arithmetic and Its Application in Field Programmable Logic
    Field-Programmable Logic and Applications, 2005
    Co-Authors: Chun Te Ewe, P V K Cheung, George A Constantinides
    Abstract:

    Dual FiXed-point (DFX) is a new data representation that offers an efficient compromise between fixed-point and floating-point representations: it has an implementation complexity similar to that of a fixed-point system with the improved dynamic range of a floating-point system. Automating DFX scaling optimisation requires knowledge of its truncation/rounding noise properties, and traditional error models do not apply to DFX. This paper therefore presents truncation and rounding error models for DFX arithmetic. The models were tested on a 159-tap FIR filter, and the benefits of DFX over floating-point are demonstrated with implementations on a Xilinx Virtex II Pro.
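The core DFX idea, per the companion paper below, is a single exponent bit that selects one of two fixed-point scalings. A minimal quantizer sketch follows; the word-length split, fraction lengths, and breakpoint are assumptions for illustration, not the papers' parameters.

```python
# Illustrative sketch of Dual FiXed-point (DFX) quantization: one exponent bit
# selects between a fine scaling (small values) and a coarse scaling (large
# values). All numeric parameters below are hypothetical.

def dfx_quantize(x, frac_fine=12, frac_coarse=4, breakpoint=8.0):
    """Quantize x, returning (exponent_bit, quantized_value)."""
    if abs(x) < breakpoint:
        frac, e = frac_fine, 0    # small magnitude: more fraction bits
    else:
        frac, e = frac_coarse, 1  # large magnitude: more integer range
    step = 2.0 ** -frac
    q = round(x / step) * step    # rounding; truncation would use floor instead
    return e, q

# Small values keep fine resolution; large values trade precision for range.
e0, q0 = dfx_quantize(0.1)      # exponent bit 0, error bounded by 2^-13
e1, q1 = dfx_quantize(100.25)   # exponent bit 1, coarser step of 2^-4
```

The two regimes are why traditional single-scaling error models do not carry over: the rounding noise power depends on which scaling each sample lands in, which is exactly what the paper's error models capture.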

  • Dual Fixed-Point: An Efficient Alternative to Floating-Point Computation
    Lecture Notes in Computer Science, 2004
    Co-Authors: Peter Y. K. Cheung, George A Constantinides
    Abstract:

    This paper presents a new data representation known as Dual FiXed-point (DFX), which employs a single exponent bit to select between two different fixed-point scalings. DFX provides a compromise between conventional fixed-point and floating-point representations: it has an implementation complexity similar to that of a fixed-point system together with the improved dynamic range offered by a floating-point system. The benefit of DFX over both fixed-point and floating-point is demonstrated with an IIR filter implementation on a Xilinx Virtex II FPGA.

M Jayabalan - One of the best experts on this subject based on the ideXlab platform.

  • An Improved Quasi-Cyclic Low-Density Parity-Check Code for Memory Channels
    Vehicular Technology Conference, 2004
    Co-Authors: M Jayabalan, Hyuck M Kwon
    Abstract:

    This paper presents an improved construction of quasi-cyclic low-density parity-check (QC-LDPC) codes based on circulant sub-matrices for fading channels. The proposed construction yields a performance gain of about 2 to 5 dB at a 10^-4 bit error rate (BER). The paper also proposes a decoding method that reduces the implementation complexity of the belief propagation algorithm, and studies the performance of circulant-sub-matrix-based QC-LDPC codes at higher rates under AWGN and fading channels.

Warren J Gross - One of the best experts on this subject based on the ideXlab platform.

  • Improved Bit-Flipping Algorithm for Successive Cancellation Decoding of Polar Codes
    IEEE Transactions on Communications, 2019
    Co-Authors: Furkan Ercan, Carlo Condo, Warren J Gross
    Abstract:

    Interest in polar codes has increased significantly since their adoption in the 5th generation (5G) wireless systems standard. The successive cancellation (SC) decoding algorithm has low implementation complexity but yields mediocre error-correction performance at the code lengths of interest. The SC-Flip algorithm improves the error-correction performance of SC by identifying possibly erroneous decisions made by SC and re-running the decoder after flipping one bit. It was recently shown that only a portion of the bit-channels are likely to be in error. In this paper, we investigate the average log-likelihood ratio (LLR) values and their distribution for the erroneous bit-channels, and develop the Thresholded SC-Flip (TSCF) decoding algorithm. We also replace the LLR selection and sorting of SC-Flip with a comparator to reduce the implementation complexity. Simulation results demonstrate that, for practical code lengths and a wide range of rates, TSCF shows negligible loss compared with the error-correction performance obtained when all single errors are corrected. At matching maximum iterations, TSCF has an error-correction performance gain of up to 0.45 dB compared with SC-Flip decoding. At matching error-correction performance, the computational complexity of TSCF is reduced by up to 40% on average, and the maximum number of iterations is up to 5× lower.
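The complexity saving the abstract describes comes from replacing LLR sorting with a per-bit comparison against a threshold. A small sketch of the two selection strategies is below; the LLR values and threshold are illustrative, whereas the paper derives its threshold from the LLR distribution of erroneous bit-channels.

```python
# Hedged sketch contrasting flip-candidate selection in SC-Flip vs. the
# thresholding idea behind TSCF. LLR values and threshold are illustrative.

def scflip_candidates(llrs, n_candidates):
    """SC-Flip style: sort by reliability, take the least reliable positions."""
    order = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))
    return order[:n_candidates]

def tscf_candidates(llrs, threshold):
    """TSCF style: one comparison per bit, no sorting hardware needed."""
    return [i for i, v in enumerate(llrs) if abs(v) < threshold]

llrs = [4.2, -0.3, 5.1, 0.8, -3.9, 0.1]
flip_sorted = scflip_candidates(llrs, 2)   # least reliable positions first
flip_thresh = tscf_candidates(llrs, 1.0)   # every position below the threshold
```

Sorting costs O(n log n) comparisons (or a sorter network in hardware), while thresholding needs only one comparator per bit, which is the source of the reported complexity reduction.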

  • Improved Bit-Flipping Algorithm for Successive Cancellation Decoding of Polar Codes
    arXiv: Information Theory, 2018
    Co-Authors: Furkan Ercan, Carlo Condo, Warren J Gross
    Abstract:

    Interest in polar codes has increased significantly since their adoption in the 5th generation (5G) wireless systems standard. The successive cancellation (SC) decoding algorithm has low implementation complexity but yields mediocre error-correction performance at the code lengths of interest. The SC-Flip algorithm improves the error-correction performance of SC by identifying possibly erroneous decisions made by SC and re-running the decoder after flipping one bit. It was recently shown that only a portion of the bit-channels are likely to be in error. In this work, we investigate the average log-likelihood ratio (LLR) values and their distribution for the erroneous bit-channels, and develop the Thresholded SC-Flip (TSCF) decoding algorithm. We also replace the LLR selection and sorting of SC-Flip with a comparator to reduce the implementation complexity. Simulation results demonstrate that, for practical code lengths and a wide range of rates, TSCF shows negligible loss compared with the error-correction performance obtained when all single errors are corrected. At matching maximum iterations, TSCF has an error-correction performance gain of up to 0.45 dB compared with SC-Flip decoding. At matching error-correction performance, the computational complexity of TSCF is reduced by up to 40% on average, and the maximum number of iterations is up to 5× lower.

A A Chein - One of the best experts on this subject based on the ideXlab platform.

  • A Cost and Speed Model for k-ary n-cube Wormhole Routers
    IEEE Transactions on Parallel and Distributed Systems, 1998
    Co-Authors: A A Chein
    Abstract:

    The evaluation of advanced routing features must be based on both costs and benefits. To date, adaptive routers have generally been evaluated on the basis of achieved network throughput (channel utilization), ignoring the effects of implementation complexity. In this paper, we describe a parameterized cost model for router performance, characterized by two numbers: router delay and flow control time. Grounding the cost model in a 0.8-micron gate-array technology, we use it to compare a number of proposed routing algorithms. From these design studies, several insights into the implementation complexity of adaptive routers emerge. First, header update and selection are expensive in adaptive routers, suggesting that absolute addressing should be reconsidered. Second, virtual channels are expensive in terms of latency and cycle time, so decisions to include them to support adaptivity, or even virtual lanes, should not be taken lightly. Third, the requirements of larger crossbars and more complex arbitration cause some increase in the complexity of adaptive routers, but the rate of increase is small. Last, the complexity of adaptive routers significantly increases their setup delay and flow control cycle times, implying that claimed performance advantages in channel utilization and low-load latency must be carefully balanced against losses in achievable implementation speed.
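The parameterized cost model described above sums the delays of the router's pipeline components, some of which grow with adaptivity features. The sketch below conveys only that structure; all component delays and growth terms are placeholders, not the paper's 0.8-micron gate-array figures.

```python
# Minimal sketch of a parameterized router delay model in the spirit of the
# abstract: total delay is a sum of component delays that grow with
# adaptivity, virtual-channel count, and crossbar size. All coefficients
# below are hypothetical placeholders for illustration.
import math

def router_delay(num_vc, crossbar_ports, adaptive):
    """Illustrative router setup delay, in arbitrary time units."""
    t_header = 2.0 if adaptive else 1.0          # header update/selection cost
    t_vc = 0.5 * math.log2(max(num_vc, 1) + 1)   # VC arbitration grows with VCs
    t_xbar = 0.3 * math.log2(crossbar_ports)     # crossbar cost grows slowly
    return t_header + t_vc + t_xbar

# The model reproduces the abstract's qualitative findings: adaptivity and
# virtual channels add delay, while larger crossbars add comparatively little.
baseline = router_delay(num_vc=1, crossbar_ports=5, adaptive=False)
adaptive = router_delay(num_vc=1, crossbar_ports=5, adaptive=True)
```

A model of this shape lets a designer trade channel-utilization gains against cycle-time losses before committing to an adaptive design, which is the comparison the paper carries out against real gate-array timing data.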