Decoder

Neri Merhav - One of the best experts on this subject based on the ideXlab platform.

  • The Generalized Stochastic Likelihood Decoder: Random Coding and Expurgated Bounds
    IEEE Transactions on Information Theory, 2017
    Co-Authors: Neri Merhav
    Abstract:

    The likelihood Decoder is a stochastic Decoder that selects the decoded message at random, using the posterior distribution of the true underlying message given the channel output. In this paper, we study a generalized version of this Decoder, where the posterior is proportional to a general function that depends only on the joint empirical distribution of the output vector and the codeword. This framework allows both mismatched versions and universal versions of the likelihood Decoder, as well as the corresponding ordinary deterministic Decoders, among many others. We provide a direct analysis method that yields the exact random coding exponent (as opposed to separate upper bounds and lower bounds that turn out to be compatible, which were derived earlier by Scarlett et al.). We also extend the result from pure channel coding to combined source and channel coding (random binning followed by random channel coding) with side information available to the Decoder. Finally, returning to pure channel coding, we also derive an expurgated exponent for the stochastic likelihood Decoder, which turns out to be at least as tight (and in some cases, strictly so) as the classical expurgated exponent of the maximum likelihood Decoder, even though the stochastic likelihood Decoder is suboptimal.
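
    As a rough software illustration of this decoding rule (a minimal sketch, not the construction analyzed in the paper; the toy codebook, the BSC score function, and the inverse-temperature parameter beta are assumptions introduced here), the snippet below draws a message at random with probability proportional to an exponential function of a score computed from the joint empirical distribution of the codeword and the channel output. Letting beta grow recovers the corresponding deterministic (maximum-score) Decoder.

```python
import numpy as np

def joint_type(x, y, alphabet_x=2, alphabet_y=2):
    """Empirical joint distribution (type) of a codeword x and channel output y."""
    counts = np.zeros((alphabet_x, alphabet_y))
    for a, b in zip(x, y):
        counts[a, b] += 1
    return counts / len(x)

def stochastic_likelihood_decode(codebook, y, score, beta=1.0, rng=None):
    """Pick a message at random with probability proportional to
    exp(n * beta * score(joint type of the codeword and y))."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    scores = np.array([score(joint_type(x, y)) for x in codebook])
    weights = np.exp(n * beta * (scores - scores.max()))  # numerically stabilized
    return int(rng.choice(len(codebook), p=weights / weights.sum()))

# Illustrative "matched" score for a BSC with crossover 0.1 (an assumption):
def bsc_score(p_xy, eps=0.1):
    log_w = np.array([[np.log(1 - eps), np.log(eps)],
                      [np.log(eps), np.log(1 - eps)]])
    return float((p_xy * log_w).sum())  # expected log-likelihood under the joint type

codebook = np.array([[0, 0, 0, 0], [1, 1, 1, 1], [0, 1, 0, 1]])
y = np.array([1, 1, 0, 1])
print(stochastic_likelihood_decode(codebook, y, bsc_score))
```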

  • The Generalized Stochastic Likelihood Decoder: Random Coding and Expurgated Bounds
    International Symposium on Information Theory, 2016
    Co-Authors: Neri Merhav
    Abstract:

    The likelihood Decoder is a stochastic Decoder that selects the decoded message at random, using the posterior distribution of the true underlying message given the channel output. In this work, we study a generalized version of this Decoder where the posterior is proportional to a general function that depends only on the joint empirical distribution of the output vector and the codeword. This framework allows both mismatched versions and universal (MMI) versions of the likelihood Decoder, as well as the corresponding ordinary deterministic Decoders, among many others. We provide a direct analysis method that yields the exact random coding exponent (as opposed to separate upper bounds and lower bounds that turn out to be compatible, which were derived earlier by Scarlett et al.). We also extend the result from pure channel coding to combined source and channel coding (random binning followed by random channel coding) with side information (SI) available to the Decoder. Finally, returning to pure channel coding, we also derive an expurgated exponent for the stochastic likelihood Decoder, which turns out to be at least as tight (and in some cases, strictly so) as the classical expurgated exponent of the maximum likelihood Decoder, even though the stochastic likelihood Decoder is suboptimal.

  • Universal Decoding for Arbitrary Channels Relative to a Given Class of Decoding Metrics
    IEEE Transactions on Information Theory, 2013
    Co-Authors: Neri Merhav
    Abstract:

    We consider the problem of universal decoding for arbitrary, finite-alphabet unknown channels in the random coding regime. For a given random coding distribution and a given class of metric Decoders, we propose a generic universal Decoder whose average error probability is, within a subexponential multiplicative factor, no larger than that of the best Decoder within this class of Decoders. Since the optimum, maximum likelihood (ML) Decoder of the underlying channel is not necessarily assumed to belong to the given class of Decoders, this setting suggests a common generalized framework for: 1) mismatched decoding, 2) universal decoding for a given family of channels, and 3) universal coding and decoding for deterministic channels using the individual sequence approach. The proof of our universality result is fairly simple, and we demonstrate how some earlier results on universal decoding are obtained as special cases. We also demonstrate how our method extends to more complicated scenarios, such as the incorporation of noiseless feedback, the multiple-access channel, and continuous-alphabet channels.
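
    As a concrete point of reference for universal decoding (this is the classical maximum mutual information (MMI) rule for discrete memoryless channels, not the generic Decoder constructed in the paper; the toy codebook and output are assumptions made here), the sketch below picks the codeword whose joint empirical distribution with the channel output has maximum empirical mutual information.

```python
import numpy as np

def empirical_mi(x, y, ax=2, ay=2):
    """Empirical mutual information (in nats) of the joint type of (x, y)."""
    p = np.zeros((ax, ay))
    for a, b in zip(x, y):
        p[a, b] += 1
    p /= len(x)
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / (px @ py)[mask])).sum())

def mmi_decode(codebook, y):
    """Decode with the maximum mutual information (MMI) rule."""
    return int(np.argmax([empirical_mi(x, y) for x in codebook]))

# Toy usage with an assumed codebook and received word.
codebook = np.array([[0, 0, 1, 1], [0, 1, 0, 1], [1, 1, 1, 1]])
print(mmi_decode(codebook, np.array([0, 0, 1, 0])))
```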

Warren J Gross - One of the best experts on this subject based on the ideXlab platform.

  • PolarBear: A 28-nm FD-SOI ASIC for Decoding of Polar Codes
    IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2017
    Co-Authors: Pascal Giard, Thomas Christoph Müller, Warren J Gross, Alexios Balatsoukas-Stimming, Andrea Bonetti, Claude Thibeault, Philippe Flatresse, Andreas Burg
    Abstract:

    Polar codes are a recently proposed class of block codes that provably achieve the capacity of various communication channels. They have received a lot of attention because they do so with low-complexity encoding and decoding algorithms and have an explicit construction. Their recent inclusion in a 5G communication standard will only spur more research. However, only a couple of ASICs featuring Decoders for polar codes have been fabricated, and none of them implements a list-based decoding algorithm. In this paper, we present ASIC measurement results for a fabricated 28-nm CMOS chip that implements two different Decoders: the first Decoder is tailored toward error-correction performance and flexibility. It supports any code rate as well as three different decoding algorithms: successive cancellation (SC), SC flip, and SC list (SCL). The flexible Decoder can also decode both non-systematic and systematic polar codes. The second Decoder targets speed and energy efficiency. We present measurement results for the first silicon-proven SCL Decoder, whose coded throughput is shown to be 306.8 Mbps with a latency of 3.34 µs and an energy per bit of 418.3 pJ/b at a clock frequency of 721 MHz and a supply of 1.3 V. The energy per bit drops to 178.1 pJ/b at a more modest clock frequency of 308 MHz, a lower throughput of 130.9 Mbps, and a reduced supply voltage of 0.9 V. For the other two operating modes, the energy per bit is approximately 95 pJ/b. The less flexible high-throughput unrolled Decoder achieves a coded throughput of 9.2 Gbps and a latency of 628 ns for a measured energy per bit of 1.15 pJ/b at 451 MHz.

  • Hardware Implementation of Successive Cancellation Decoders for Polar Codes
    Signal Processing Systems, 2012
    Co-Authors: Camille Leroux, Alexandre J Raymond, Gabi Sarkis, Alexander Vardy, Warren J Gross
    Abstract:

    The recently discovered polar codes are seen as a major breakthrough in coding theory; they provably achieve the theoretical capacity of discrete memoryless channels using the low-complexity successive cancellation decoding algorithm. Motivated by recent developments in polar coding theory, we propose a family of efficient hardware implementations for successive cancellation (SC) polar Decoders. We show that such Decoders can be implemented with O(N) processing elements and O(N) memory elements. Furthermore, we show that SC decoding can be implemented in the logarithmic domain, thereby eliminating costly multiplication and division operations and greatly reducing the complexity of each processing element. We also present a detailed architecture for an SC Decoder and provide logic synthesis results confirming the linear complexity growth of the Decoder as the code length increases.
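
    The log-domain simplification mentioned above reduces the SC node updates to comparisons, sign manipulations, and additions. The sketch below is a software illustration only (the min-sum approximation, the natural-order bit layout, and the toy LLRs and frozen set are assumptions made here), not the hardware architecture of the paper.

```python
import numpy as np

def f(a, b):
    # Check-node update in the min-sum approximation: no multiplications or divisions.
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, u):
    # Variable-node update: an add/subtract selected by the partial sum u.
    return b + (1 - 2 * u) * a

def sc_decode(llr, frozen):
    """Successive cancellation decoding of a polar code (natural bit order).
    llr: channel LLRs, length N = 2^n; frozen: boolean mask of frozen positions.
    Returns (decoded u vector, re-encoded codeword estimate)."""
    N = len(llr)
    if N == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)
        return np.array([u]), np.array([u])
    a, b = llr[:N // 2], llr[N // 2:]
    u1, x1 = sc_decode(f(a, b), frozen[:N // 2])      # decode the upper half first
    u2, x2 = sc_decode(g(a, b, x1), frozen[N // 2:])  # then the lower half, using x1
    return np.concatenate([u1, u2]), np.concatenate([x1 ^ x2, x2])

# Toy example: N = 4, two frozen positions (assumed), noisy channel LLRs.
llr = np.array([2.1, -0.3, 1.7, 0.9])
frozen = np.array([True, True, False, False])
u_hat, _ = sc_decode(llr, frozen)
print(u_hat)
```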

  • Hardware Implementation of Successive Cancellation Decoders for Polar Codes
    arXiv: Hardware Architecture, 2011
    Co-Authors: Camille Leroux, Alexandre J Raymond, Gabi Sarkis, Alexander Vardy, Warren J Gross
    Abstract:

    The recently discovered polar codes are seen as a major breakthrough in coding theory; they provably achieve the theoretical capacity of discrete memoryless channels using the low-complexity successive cancellation (SC) decoding algorithm. Motivated by recent developments in polar coding theory, we propose a family of efficient hardware implementations for SC polar Decoders. We show that such Decoders can be implemented with O(n) processing elements and O(n) memory elements, and can provide a constant throughput for a given target clock frequency. Furthermore, we show that SC decoding can be implemented in the logarithmic domain, thereby eliminating costly multiplication and division operations and greatly reducing the complexity of each processing element. We also present a detailed architecture for an SC Decoder and provide logic synthesis results confirming the linear growth in complexity of the Decoder as the code length increases.

  • Majority-Based Tracking Forecast Memories for Stochastic LDPC Decoding
    IEEE Transactions on Signal Processing, 2010
    Co-Authors: Saeed Sharifi Tehrani, Ali Naderi, Guy-Armand Kamendje, Saied Hemati, Shie Mannor, Warren J Gross
    Abstract:

    This paper proposes majority-based tracking forecast memories (MTFMs) for area-efficient, high-throughput ASIC implementation of stochastic Low-Density Parity-Check (LDPC) Decoders. The proposed method is applied to the ASIC implementation of a fully parallel stochastic Decoder that decodes the (2048, 1723) LDPC code from the IEEE 802.3an (10GBASE-T) standard. The Decoder occupies a silicon core area of 6.38 mm² in 90-nm CMOS technology, achieves a maximum clock frequency of 500 MHz, and provides a maximum core throughput of 61.3 Gb/s. The Decoder also has good decoding performance and error-floor behavior, providing a bit error rate (BER) of about 4 × 10⁻¹³ at Eb/N0 = 5.15 dB. To the best of our knowledge, the implemented Decoder is the most area-efficient fully parallel soft-decision LDPC Decoder reported in the literature.
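
    In stochastic decoding, each channel probability is represented as a Bernoulli bit stream, parity-check nodes reduce to XORs, and variable nodes emit a bit only when their inputs agree, otherwise falling back on a memory element that tracks the stream statistics. The sketch below is a loose single-node illustration of these ideas (the stream length, the smoothing constant, and the exponential-average stand-in for a tracking forecast memory are assumptions made here), not the 10GBASE-T Decoder of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_stream(p, length):
    """Represent a probability p as a Bernoulli bit stream of the given length."""
    return (rng.random(length) < p).astype(np.uint8)

def check_node(streams):
    """Parity-check node in the stochastic domain: bitwise XOR of the input streams."""
    out = streams[0].copy()
    for s in streams[1:]:
        out ^= s
    return out

def variable_node(a, b, beta=0.05, init=0.5):
    """Degree-2 variable node: output the common bit when inputs agree; otherwise
    fall back on a running estimate of the output mean (a stand-in for a TFM)."""
    out = np.empty_like(a)
    forecast = init
    for i in range(len(a)):
        bit = a[i] if a[i] == b[i] else (1 if rng.random() < forecast else 0)
        forecast = (1 - beta) * forecast + beta * bit  # track the stream statistics
        out[i] = bit
    return out

# Toy streams with assumed channel probabilities 0.8 and 0.7.
s1, s2 = to_stream(0.8, 1000), to_stream(0.7, 1000)
print(check_node([s1, s2]).mean())   # ~0.38: probability that the parity is 1
print(variable_node(s1, s2).mean())  # approximates the combined belief (~0.9)
```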

Pascal Urard - One of the best experts on this subject based on the ideXlab platform.

  • Low-Complexity Decoding for Non-Binary LDPC Codes in High Order Fields
    IEEE Transactions on Communications, 2010
    Co-Authors: Adrian Voicila, David Declercq, Francois Verdier, Marc Fossorier, Pascal Urard
    Abstract:

    In this paper, we propose a new implementation of the Extended Min-Sum (EMS) Decoder for non-binary LDPC codes. A particularity of the new algorithm is that it takes into account the memory problem of non-binary LDPC Decoders, together with a significant complexity reduction per decoding iteration. The key feature of our Decoder is to truncate the vector messages of the Decoder to a limited number n_m of values in order to reduce the memory requirements. Using the truncated messages, we propose an efficient implementation of the EMS Decoder which reduces the order of complexity to O(n_m log2 n_m). This complexity becomes low enough to compete with binary Decoders. The performance of the low-complexity algorithm with proper compensation is quite good considering the substantial complexity reduction, which is shown both with a simulated density evolution approach and actual simulations.
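
    The truncation step at the heart of the algorithm can be pictured in a few lines. The sketch below (the field size GF(8), the random message, and n_m = 4 are assumptions chosen for illustration) keeps only the n_m most reliable field elements of a non-binary message, the operation whose sorting cost drives the O(n_m log2 n_m) behavior.

```python
import numpy as np

def truncate_message(llr_vector, n_m):
    """Keep only the n_m most reliable field elements of a non-binary message.
    llr_vector: log-domain reliabilities indexed by GF(q) symbol value.
    Returns (symbol indices, reliabilities), sorted from most to least reliable."""
    order = np.argsort(llr_vector)[::-1][:n_m]  # sort once, then truncate
    return order, llr_vector[order]

# Toy message over GF(8) (q = 8 and n_m = 4 are assumptions for illustration).
q, n_m = 8, 4
message = np.log(np.random.default_rng(1).dirichlet(np.ones(q)))
symbols, vals = truncate_message(message, n_m)
print(symbols, np.round(vals, 2))
```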

  • Low-Complexity, Low-Memory EMS Algorithm for Non-Binary LDPC Codes
    International Conference on Communications, 2007
    Co-Authors: Adrian Voicila, Francois Verdier, Marc Fossorier, David Declercq, Pascal Urard
    Abstract:

    In this paper, we propose a new implementation of the EMS Decoder for non-binary LDPC codes presented in (D. Declercq and M. Fossorier, 2007). A particularity of the new algorithm is that it takes into account the memory problem of non-binary LDPC Decoders, together with a significant complexity reduction per decoding iteration. The key feature of our Decoder is to truncate the vector messages of the Decoder to a limited number n_m of values in order to reduce the memory requirements. Using the truncated messages, we propose an efficient implementation of the EMS Decoder which reduces the order of complexity to O(n_m log2 n_m), which becomes low enough to compete with binary Decoders. The performance of the low-complexity algorithm with proper compensation is quite good considering the substantial complexity reduction, which is shown both with a simulated density evolution approach and actual FER simulations.

Hitoshi Kiya - One of the best experts on this subject based on the ideXlab platform.

  • ISPACS - A new color QR code forward compatible with the standard QR code Decoder
    2013 International Symposium on Intelligent Signal Processing and Communication Systems, 2013
    Co-Authors: Masanori Kikuchi, Masaaki Fujiyoshi, Hitoshi Kiya
    Abstract:

    This paper proposes a new color QR code that is forward compatible with standard QR code Decoders and increases the conveyable capacity of encoded information. The proposed method allocates three standard bicolor QR codes to the color channels of the YCbCr color space so that the QR code in the Y channel can be decoded by a standard QR Decoder. In addition, a proprietary Decoder further decodes the two additional QR codes in the Cb and Cr channels. The proposed method is based on the standard bicolor QR code in both its encoding and decoding processes, whereas conventional methods for increasing the conveyable capacity require complex proprietary codecs or different technology. Experimental results show the effectiveness of the proposed method.
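
    The channel allocation is straightforward to prototype in software. The sketch below (a rough illustration using NumPy and Pillow; the BT.601 conversion constants, the chroma range compression, and the placeholder file names are assumptions, and no claim is made that this reproduces the authors' exact mapping) places one bicolor QR image in the Y channel and two more in the Cb and Cr channels, then converts the result to RGB.

```python
import numpy as np
from PIL import Image

def combine_qr(y_img, cb_img, cr_img):
    """Embed three same-size bicolor QR images into the Y, Cb, and Cr channels
    and return an RGB image (full-range BT.601 conversion)."""
    to_plane = lambda im: np.asarray(im.convert("L"), dtype=np.float64)
    Y = to_plane(y_img)                    # this code stays decodable by a standard reader
    Cb = to_plane(cb_img) * 0.5 + 64.0     # compress chroma range so Y remains dominant
    Cr = to_plane(cr_img) * 0.5 + 64.0
    R = Y + 1.402 * (Cr - 128.0)
    G = Y - 0.344136 * (Cb - 128.0) - 0.714136 * (Cr - 128.0)
    B = Y + 1.772 * (Cb - 128.0)
    rgb = np.clip(np.stack([R, G, B], axis=-1), 0, 255).astype(np.uint8)
    return Image.fromarray(rgb, mode="RGB")

# Usage (the file names are placeholders):
# out = combine_qr(Image.open("qr_y.png"), Image.open("qr_cb.png"), Image.open("qr_cr.png"))
# out.save("color_qr.png")
```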

You Yin Chen - One of the best experts on this subject based on the ideXlab platform.

  • Inhibition of Long-Term Variability in Decoding Forelimb Trajectory Using Evolutionary Neural Networks With Error-Correction Learning
    Frontiers in Computational Neuroscience, 2020
    Co-Authors: Shih-Hung Yang, Han-Lin Wang, Yu Chun Lo, Kuan-Yu Chen, Chin Chou, Jyun-We Huang, Ching-Fu Wang, You Yin Chen
    Abstract:

    Objective: In brain-machine interfaces (BMIs), the functional mapping between neural activities and kinematic parameters varies over time owing to changes in neural recording conditions, and this variability can make long-term decoding performance unstable. Previous studies trained Decoders on several days of training data to make them inherently robust to changes in neural recording conditions. However, such Decoders may not remain robust when only a few days of training data are available. In time-series prediction and feedback control systems, error feedback is commonly adopted to reduce the effects of model uncertainty. This motivated us to introduce error feedback into a neural Decoder to cope with the variability in neural recording conditions.

    Approach: We proposed an evolutionary constructive and pruning neural network with error feedback (ECPNN-EF) as a neural Decoder. The ECPNN-EF, with a partially connected topology, decodes the instantaneous firing rates of each sorted unit into the forelimb movement of a rat. Furthermore, an error feedback is adopted as an additional input to provide kinematic information and thus compensate for changes in the functional mapping. The proposed Decoder was trained on data collected from a water-reward-related lever-pressing task for a rat: the first two days of data were used to train the Decoder, and the subsequent ten days of data were used to test it.

    Main results: The ECPNN-EF was evaluated under different settings to better understand the impact of the error feedback and the partially connected topology. The experimental results demonstrated that the ECPNN-EF achieved significantly higher daily decoding performance with smaller daily variability when the error feedback and partially connected topology were used.

    Significance: These results suggest that the ECPNN-EF with a partially connected topology can cope with both within-day and across-day changes in neural recording conditions. The error feedback compensates for decreases in decoding performance when neural recording conditions change. This mechanism makes the ECPNN-EF robust against changes in the functional mapping and thus improves long-term decoding stability when only a few days of training data are available.
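
    The error-feedback idea can be pictured with a compact sketch. The code below is schematic only (a linear LMS-style readout, the feature layout, and the synthetic data are assumptions introduced here; it is not the authors' ECPNN-EF, which uses an evolutionary constructive-and-pruning network): at each time step the previous decoding error is appended to the binned firing rates before the prediction is made.

```python
import numpy as np

class ErrorFeedbackDecoder:
    """Toy linear decoder with error feedback: the previous decoding error
    is appended to the neural features at every time step."""

    def __init__(self, n_units, lr=1e-3):
        self.w = np.zeros(n_units + 1)  # +1 input dimension for the fed-back error
        self.lr = lr

    def step(self, rates, prev_error):
        x = np.append(rates, prev_error)
        return self.w @ x, x

    def train(self, rates_seq, kinematics, epochs=20):
        for _ in range(epochs):
            prev_error = 0.0
            for rates, target in zip(rates_seq, kinematics):
                pred, x = self.step(rates, prev_error)
                err = target - pred
                self.w += self.lr * err * x  # simple LMS-style weight update
                prev_error = err             # fed back as an input at the next step
        return self

# Toy usage with synthetic firing rates (assumed shape: T x n_units).
rng = np.random.default_rng(0)
rates_seq = rng.poisson(5.0, size=(200, 16)).astype(float)
kinematics = rates_seq @ rng.normal(size=16) * 0.1 + rng.normal(0, 0.1, 200)
dec = ErrorFeedbackDecoder(n_units=16).train(rates_seq, kinematics)
```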