Longest Codeword

The Experts below are selected from a list of 33 Experts worldwide, ranked by the ideXlab platform.

Haibin Wang - One of the best experts on this subject based on the ideXlab platform.

  • ChinaCom - A fast algorithm for calculating minimum redundancy prefix codes with unsorted alphabet
    9th International Conference on Communications and Networking in China, 2014
    Co-Authors: Yupeng Tai, Haibin Wang
    Abstract:

    Minimum redundancy coding (also known as Huffman coding) is one of the best-known algorithms for data compression, and many efforts have been made to improve its efficiency. Most of them are based on the assumption that the input alphabet has already been sorted. In this paper, we propose an algorithm that calculates the minimum-redundancy codes directly from an unsorted alphabet. It consumes only O(n log(n/k)) time in the worst case, where n is the alphabet size and k is the Longest Codeword length. It is fast because only a fraction of the symbols needs to be sorted before the final minimum redundancy code is generated. Theoretical analysis and numerical simulation results show that this algorithm achieves a substantial improvement over the best previous O(n log n) algorithms for this problem.
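
    For context, here is a minimal sketch of the classic heap-based construction that the paper improves on (the paper's own algorithm is not reproduced in the abstract): computing minimum-redundancy codeword lengths directly from unsorted weights. The function name and the leaf-list bookkeeping are illustrative choices; the bookkeeping alone costs O(nk) on top of the O(n log n) heap operations, where k is again the Longest Codeword length.

    ```python
    import heapq

    def codeword_lengths(weights):
        """Minimum-redundancy (Huffman) codeword lengths for unsorted
        positive weights via the classic heap construction. This is the
        O(n log n) baseline the paper improves on, not its algorithm."""
        n = len(weights)
        if n == 1:
            return [1]
        # Each heap entry carries the indices of the leaves merged under it.
        heap = [(w, [i]) for i, w in enumerate(weights)]
        heapq.heapify(heap)
        depth = [0] * n
        while len(heap) > 1:
            w1, s1 = heapq.heappop(heap)
            w2, s2 = heapq.heappop(heap)
            merged = s1 + s2
            for i in merged:  # every leaf under the new node sinks one level
                depth[i] += 1
            heapq.heappush(heap, (w1 + w2, merged))
        return depth

    print(codeword_lengths([5, 1, 2, 9, 3]))  # [2, 4, 4, 1, 3]
    ```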

Saber H. - One of the best experts on this subject based on the ideXlab platform.

  • A new reliability-based incremental redundancy hybrid ARQ scheme using LDPC codes
    2015
    Co-Authors: Saber H.
    Abstract:

    We present a new reliability-based hybrid automatic repeat request (RB-HARQ) scheme based on low-density parity-check (LDPC) codes. With the proposed RB-HARQ, which uses a rate-compatible LDPC code with puncturing and extending, the Longest Codeword is divided into clusters of code bits. Unlike previous works, in the event of a decoding failure the receiver measures the reliability of the received clusters, rather than of individual code bits, and determines which cluster would be most beneficial to retransmit. Several metrics for determining the best cluster candidates for retransmission are derived analytically. We show that one of the new metrics outperforms the previous ones. We also show that, even with the feedback overhead taken into account, our RB-HARQ can still yield a significant gain over previous works, provided that the cluster size is appropriately selected.
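
    The abstract does not specify the derived metrics, so the sketch below uses a simple stand-in: partition the received codeword's log-likelihood ratios (LLRs) into fixed-size clusters and flag the cluster with the lowest mean |LLR| as the retransmission candidate. The function name and the mean-|LLR| metric are assumptions for illustration, not the paper's analytically derived metrics.

    ```python
    import numpy as np

    def least_reliable_cluster(llrs, cluster_size):
        """Split a codeword's bit LLRs into fixed-size clusters and return
        the index of the least reliable one. The mean-|LLR| metric is an
        illustrative stand-in, not one of the paper's derived metrics."""
        clusters = llrs.reshape(-1, cluster_size)        # length must divide evenly
        reliability = np.mean(np.abs(clusters), axis=1)  # per-cluster confidence
        return int(np.argmin(reliability))

    # Toy example: 12 bit LLRs split into 3 clusters of 4 bits each;
    # the receiver would request retransmission of the weakest cluster.
    rng = np.random.default_rng(0)
    llrs = rng.normal(loc=2.0, scale=1.5, size=12)  # hypothetical channel output
    print(least_reliable_cluster(llrs, cluster_size=4))
    ```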

Ian Marsland - One of the best experts on this subject based on the ideXlab platform.

  • CWIT - A new reliability-based incremental redundancy hybrid ARQ scheme using LDPC codes
    2015 IEEE 14th Canadian Workshop on Information Theory (CWIT), 2015
    Co-Authors: Hamid Saber, Ian Marsland
    Abstract:

    We present a new reliability-based hybrid automatic repeat request (RB-HARQ) scheme based on low-density parity-check (LDPC) codes. With the proposed RB-HARQ, which uses a rate-compatible LDPC code with puncturing and extending, the Longest Codeword is divided into clusters of code bits. Unlike previous works, in the event of a decoding failure the receiver measures the reliability of the received clusters, rather than of individual code bits, and determines which cluster would be most beneficial to retransmit. Several metrics for determining the best cluster candidates for retransmission are derived analytically. We show that one of the new metrics outperforms the previous ones. We also show that, even with the feedback overhead taken into account, our RB-HARQ can still yield a significant gain over previous works, provided that the cluster size is appropriately selected.
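
    To show where cluster selection fits in the protocol, here is a hypothetical end-to-end round of such a scheme: decode, and on failure feed back the index of the weakest cluster and combine the retransmitted LLRs with the stored ones. The decoder stub, the mean-|LLR| selection rule, and all names are illustrative assumptions; a real receiver would run LDPC belief propagation and check the parity equations.

    ```python
    import numpy as np

    def least_reliable_cluster(llrs, cluster_size):
        # Illustrative metric: lowest mean |LLR| marks the weakest cluster.
        return int(np.argmin(np.mean(np.abs(llrs.reshape(-1, cluster_size)), axis=1)))

    def decode(llrs):
        """Stand-in for an LDPC decoder: 'succeeds' once every bit LLR is
        confident. A real system would run belief propagation instead."""
        return bool(np.all(np.abs(llrs) > 1.0))

    def harq_rounds(channel_llrs, cluster_size, max_rounds=4):
        """channel_llrs[r] holds the LLRs observed on (re)transmission r."""
        llrs = channel_llrs[0].copy()
        for rnd in range(max_rounds):
            if decode(llrs):
                return rnd                                    # retransmissions used
            c = least_reliable_cluster(llrs, cluster_size)    # feedback message
            lo, hi = c * cluster_size, (c + 1) * cluster_size
            llrs[lo:hi] += channel_llrs[rnd + 1][lo:hi]       # combine the new copy
        return max_rounds

    rng = np.random.default_rng(1)
    channel_llrs = rng.normal(loc=1.2, scale=1.0, size=(5, 12))  # hypothetical channel
    print(harq_rounds(channel_llrs, cluster_size=4))
    ```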

Eduardo Sany Laber - One of the best experts on this subject based on the ideXlab platform.

  • SPIRE/CRIWG - A fast and space-economical algorithm for calculating minimum redundancy prefix codes
    6th International Symposium on String Processing and Information Retrieval. 5th International Workshop on Groupware (Cat. No.PR00268), 1999
    Co-Authors: Ruy Luiz Milidiú, Artur Alves Pessoa, Eduardo Sany Laber
    Abstract:

    The minimum redundancy prefix code problem is to determine, for a given list W = [w_1, ..., w_n] of n positive symbol weights, a list L = [l_1, ..., l_n] of n corresponding integer Codeword lengths such that Σ_{i=1..n} 2^(-l_i) ≤ 1 and Σ_{i=1..n} w_i·l_i is minimized. With the optimal list of Codeword lengths, an optimal canonical code can be easily obtained. If W is already sorted, then this optimal code can also be represented by the list M = [m_1, ..., m_H], where m_l, for l = 1, ..., H, denotes the number of Codewords with length l, and H is the length of the Longest Codeword. Fortunately, H is proved to be O(min{log(1/p_1), n}), where p_1 is the smallest symbol probability, given by p_1 = w_1 / Σ_{i=1..n} w_i. The E-LazyHuff algorithm uses a lazy approach to calculate optimal codes in O(n log(n/H)) time, requiring only O(H) additional space. In addition, the input weights are not destroyed during the code calculation. We propose a new technique, which we call homogenization, that can be used to improve the time efficiency of algorithms for constructing optimal prefix codes. Next, we introduce the Best LazyHuff algorithm (B-LazyHuff) as an application of this technique. B-LazyHuff is an O(n)-time variation of the E-LazyHuff algorithm. It also requires O(H) additional space and does not destroy the input data.
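
    Since the abstract notes that an optimal canonical code follows easily from M = [m_1, ..., m_H], here is a small sketch of that standard step (the function name is an illustrative choice): check the Kraft inequality, then assign numerically increasing codewords, shortest lengths first.

    ```python
    def canonical_codewords(m):
        """Given m[l-1] = m_l, the number of codewords of length l, assign
        canonical codewords: lengths in increasing order, numerically
        consecutive codes within each length."""
        H = len(m)
        # Kraft inequality from the abstract: sum of m_l * 2^(-l) must be <= 1.
        assert sum(m[l - 1] * 2.0 ** -l for l in range(1, H + 1)) <= 1
        codes, code = [], 0
        for l in range(1, H + 1):
            for _ in range(m[l - 1]):
                codes.append(format(code, f"0{l}b"))  # l-bit binary string
                code += 1
            code <<= 1  # moving to length l+1 appends a zero bit
        return codes

    # Codeword lengths {1, 2, 3, 4, 4} give M = [1, 1, 1, 2]:
    print(canonical_codewords([1, 1, 1, 2]))  # ['0', '10', '110', '1110', '1111']
    ```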

Yupeng Tai - One of the best experts on this subject based on the ideXlab platform.

  • ChinaCom - A fast algorithm for calculating minimum redundancy prefix codes with unsorted alphabet
    9th International Conference on Communications and Networking in China, 2014
    Co-Authors: Yupeng Tai, Haibin Wang
    Abstract:

    Minimum redundancy coding (also known as Huffman coding) is one of the best-known algorithms for data compression, and many efforts have been made to improve its efficiency. Most of them are based on the assumption that the input alphabet has already been sorted. In this paper, we propose an algorithm that calculates the minimum-redundancy codes directly from an unsorted alphabet. It consumes only O(n log(n/k)) time in the worst case, where n is the alphabet size and k is the Longest Codeword length. It is fast because only a fraction of the symbols needs to be sorted before the final minimum redundancy code is generated. Theoretical analysis and numerical simulation results show that this algorithm achieves a substantial improvement over the best previous O(n log n) algorithms for this problem.
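
    As a contrast with the sorted-input assumption the abstract mentions, below is a sketch of the classic two-queue construction (often credited to van Leeuwen) that finds the codeword lengths in O(n) comparisons once the weights are sorted. The function name is illustrative, and the leaf-list depth bookkeeping used for brevity adds O(nk) work that a real implementation would handle differently.

    ```python
    from collections import deque

    def lengths_from_sorted(weights):
        """Codeword lengths for weights sorted in non-decreasing order.
        Newly merged internal nodes also appear in non-decreasing weight
        order, so two FIFO queues replace the heap: O(n) comparisons."""
        n = len(weights)
        if n == 1:
            return [1]
        leaves = deque((w, [i]) for i, w in enumerate(weights))
        internals = deque()
        depth = [0] * n

        def pop_min():
            # The overall minimum is always at the front of one of the queues.
            if not internals or (leaves and leaves[0][0] <= internals[0][0]):
                return leaves.popleft()
            return internals.popleft()

        for _ in range(n - 1):  # n - 1 merges build the whole code tree
            w1, s1 = pop_min()
            w2, s2 = pop_min()
            for i in s1 + s2:   # leaves under the new node sink one level
                depth[i] += 1
            internals.append((w1 + w2, s1 + s2))
        return depth

    print(lengths_from_sorted([1, 2, 3, 5, 9]))  # [4, 4, 3, 2, 1]
    ```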