Source Alphabet

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 291 Experts worldwide ranked by ideXlab platform

Ken-ichi Iwata - One of the best experts on this subject based on the ideXlab platform.

  • ISIT - Countably Infinite Multilevel Source Polarization for Non-Stationary Erasure Distributions
    2019 IEEE International Symposium on Information Theory (ISIT), 2019
    Co-Authors: Yuta Sakai, Ken-ichi Iwata, Hiroshi Fujisaki
    Abstract:

    Polar transforms are central operations in the study of polar codes. This paper examines polar transforms for non-stationary memoryless Sources on possibly infinite Source Alphabets. This is the first attempt at Source polarization analysis over infinite Alphabets. The Source Alphabet is defined to be a Polish group, and we handle the Arikan-style two-by-two polar transform based on the group. Defining erasure distributions based on the normal subgroup structure, we give recursive formulas of the polar transform for erasure distributions. We then show concrete examples of multilevel Source polarization with countably infinite levels when the group is locally cyclic. We derive this result via elementary techniques in lattice theory.
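
The abstract's general setting is a Polish group, but the effect of the two-by-two polar transform is easiest to see in the classical binary erasure case, where a symbol with erasure probability z splits into a degraded branch 2z - z^2 and an upgraded branch z^2. A minimal numerical sketch of that special case (not the paper's group-theoretic construction):

```python
def polar_transform_erasure(z):
    """One Arikan-style two-by-two step on an erasure probability z.
    The degraded branch erases if either of two independent copies
    does (2z - z^2); the upgraded branch only if both do (z^2)."""
    return 2 * z - z * z, z * z

def polarize(z, depth):
    """Apply the transform recursively, returning 2**depth branch values."""
    levels = [z]
    for _ in range(depth):
        nxt = []
        for v in levels:
            minus, plus = polar_transform_erasure(v)
            nxt.extend([minus, plus])
        levels = nxt
    return levels

branches = polarize(0.5, 10)
extreme = sum(1 for v in branches if v < 1e-3 or v > 1 - 1e-3)
print(extreme, "of", len(branches), "branches are nearly deterministic")
```

Iterating drives almost every branch toward erasure probability 0 or 1, which is the polarization phenomenon the paper generalizes.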

  • Countably Infinite Multilevel Source Polarization for Non-Stationary Erasure Distributions.
    arXiv: Information Theory, 2019
    Co-Authors: Yuta Sakai, Ken-ichi Iwata, Hiroshi Fujisaki
    Abstract:

    Polar transforms are central operations in the study of polar codes. This paper examines polar transforms for non-stationary memoryless Sources on possibly infinite Source Alphabets. This is the first attempt at Source polarization analysis over infinite Alphabets. The Source Alphabet is defined to be a Polish group, and we handle the Arıkan-style two-by-two polar transform based on the group. Defining erasure distributions based on the normal subgroup structure, we give recursive formulas of the polar transform for our proposed erasure distributions. As a result, the recursive formulas lead to concrete examples of multilevel Source polarization with countably infinite levels when the group is locally cyclic. We derive this result via elementary techniques in lattice theory.

  • ITW - An iterative algorithm to construct optimal binary AIFV-m codes
    2017 IEEE Information Theory Workshop (ITW), 2017
    Co-Authors: Hirosuke Yamamoto, Ken-ichi Iwata
    Abstract:

    We propose an algorithm to construct an optimal code that achieves the minimum average codeword length in the class of binary AIFV-m codes with m code trees T_0, T_1, ..., T_{m-1} for a given stationary memoryless Source. The algorithm is iterative: the optimal T_k for a given set of costs is derived by dynamic programming (DP), and the costs are then updated from the resulting set of code trees (T_0, T_1, ..., T_{m-1}). The proposed DP runs in time and space polynomial in the Source Alphabet size. We prove that the AIFV-m code obtained by the proposed algorithm is optimal for m = 2, 3, 4, 5; the algorithm works for any m, and we conjecture that optimality also holds for m ≥ 6. Furthermore, we verify on several example Sources that the average codeword length of the optimal binary AIFV-m codes decreases as m becomes large.

  • ISITA - A dynamic programming algorithm to construct optimal code trees of AIFV codes
    2016
    Co-Authors: Ken-ichi Iwata, Hirosuke Yamamoto
    Abstract:

    Binary AIFV (almost instantaneous fixed-to-variable length) codes, which use two code trees, can attain better compression rates than binary Huffman codes. Although optimal binary AIFV codes can be constructed by combining an iterative algorithm that improves a parameter with an integer program (IP) that derives the optimal code trees for a given parameter, the IP problem is NP-hard in general. In this paper, we propose a dynamic programming algorithm that can be used instead of the IP. The proposed dynamic programming algorithm runs in O(n^5) time and O(n^3) space for Source Alphabet size n.

Sorina Dumitrescu - One of the best experts on this subject based on the ideXlab platform.

  • Fast Joint Source-Channel Decoding of Convolutional Coded Markov Sequences with Monge Property
    2020
    Co-Authors: Sorina Dumitrescu
    Abstract:

    This work addresses the problem of joint Source-channel decoding of a Markov sequence which is first encoded by a Source code, then encoded by a convolutional code, and sent through a noisy memoryless channel. It is shown that for Markov Sources satisfying the so-called Monge property, both the maximum a posteriori probability (MAP) sequence decoding and the soft-output Max-Log-MAP decoding can be accelerated by a factor of K without compromising optimality, where K is the size of the Markov Source Alphabet. The key to the higher decoding speed is a convenient organization of computations at the decoder combined with a fast matrix search technique enabled by the Monge property. The same decrease in complexity follows, as a by-product of the development, for soft-output Max-Log-MAP joint Source-channel decoding when the convolutional coder is absent, a result that was not known previously.

  • Fast joint Source-channel decoding of convolutional coded Markov sequences with Monge property
    IEEE Transactions on Communications, 2020
    Co-Authors: Sorina Dumitrescu
    Abstract:

    This work addresses the problem of joint Source-channel decoding of a Markov sequence which is first encoded by a Source code, then encoded by a convolutional code, and sent through a noisy memoryless channel. It is shown that for Markov Sources satisfying the so-called Monge property, both the maximum a posteriori probability (MAP) sequence decoding and the soft-output Max-Log-MAP decoding can be accelerated by a factor of K without compromising optimality, where K is the size of the Markov Source Alphabet. The key to achieving a higher decoding speed is a convenient organization of computations at the decoder combined with a fast matrix search technique enabled by the Monge property. The same decrease in complexity follows, as a by-product of the development, for soft-output Max-Log-MAP joint Source-channel decoding when the convolutional coder is absent, a result that was not known previously.
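
As a rough illustration of the structural assumption (not the authors' decoder), the Monge property of a cost matrix can be verified on adjacent 2x2 submatrices alone; it is this property that fast matrix search techniques exploit. A minimal check:

```python
def is_monge(C):
    """Check the Monge condition on all adjacent 2x2 submatrices:
    C[i][j] + C[i+1][j+1] <= C[i][j+1] + C[i+1][j].
    By a standard exchange argument this implies the condition
    for all row pairs i < k and column pairs j < l."""
    rows, cols = len(C), len(C[0])
    return all(
        C[i][j] + C[i + 1][j + 1] <= C[i][j + 1] + C[i + 1][j]
        for i in range(rows - 1)
        for j in range(cols - 1)
    )

# A classic Monge family: C[i][j] = f(a[i] - b[j]) with f convex
# and a, b sorted; here f(x) = x**2.
a = [0, 1, 3, 6]
b = [0, 2, 5, 7]
C = [[(x - y) ** 2 for y in b] for x in a]
print(is_monge(C))  # True
```

Matrices of this shape are exactly the ones on which row minima can be found in linear total time (e.g. by the SMAWK algorithm), which is the kind of speedup the abstract refers to.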

  • Optimal Design of a Two-Stage Wyner-Ziv Scalar Quantizer With Forwardly/Reversely Degraded Side Information
    IEEE Transactions on Communications, 2019
    Co-Authors: Qixue Zheng, Sorina Dumitrescu
    Abstract:

    This paper addresses the optimal design of a two-stage Wyner-Ziv scalar quantizer with forwardly or reversely degraded side information (SI) for finite-Alphabet Sources and SI. We assume that the binning is performed optimally and address the design of the quantizer partitions. The optimization problem is formulated as the minimization of a weighted sum of distortions and rates. The proposed solution is globally optimal when the cells in each partition are contiguous. The solution algorithm is based on solving the single-Source or the all-pairs minimum-weight path (MWP) problem in certain weighted directed acyclic graphs. When the conventional dynamic programming technique is used to solve the underlying MWP problems, the time complexity achieved is $O(N^{3})$, where $N$ is the size of the Source Alphabet. A so-called partial Monge property is additionally introduced, and a faster solution algorithm exploiting this property is proposed. Experimental results assess the practical performance of the proposed scheme.
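
The underlying MWP computation is, at its core, the textbook single-source relaxation over a topologically ordered DAG; the sketch below is that generic DP, not the paper's specific graph construction or edge weights:

```python
def dag_min_weight_paths(n, edges, src):
    """Single-source minimum-weight paths in a weighted DAG whose
    vertices 0..n-1 are given in topological order (every edge
    u -> v has u < v), by one relaxation pass in that order."""
    INF = float("inf")
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = [INF] * n
    dist[src] = 0
    for u in range(n):          # topological order
        if dist[u] == INF:
            continue            # not reachable from src
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

edges = [(0, 1, 2), (0, 2, 5), (1, 2, 1), (1, 3, 7), (2, 3, 3)]
print(dag_min_weight_paths(4, edges, 0))  # [0, 2, 3, 6]
```

Each relaxation is O(1), so the pass costs O(vertices + edges); in quantizer-design graphs the edge count dominates, which is where Monge-type properties buy the further speedup.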

  • Fast Joint Source-Channel Decoding of Convolutional Coded Markov Sequences with Monge Property
    2007 IEEE Information Theory Workshop, 2007
    Co-Authors: Sorina Dumitrescu
    Abstract:

    We address the problem of joint Source-channel maximum a posteriori (MAP) decoding of a Markov sequence which is first encoded by a Source code, then encoded by a convolutional code, and sent through a noisy memoryless channel. The existing joint Source-channel decoding algorithm for the case of a general convolutional encoder has O(M K^2 N) time complexity, where M is the length in bits of the information sequence, K is the size of the Markov Source Alphabet, and N is the number of states of the convolutional encoder. We show that for Markov Sources satisfying the so-called Monge property, the decoding complexity can be decreased to O(M K N) by applying a fast matrix search technique.

  • Optimal Two-Description Scalar Quantizer Design
    Algorithmica, 2005
    Co-Authors: Sorina Dumitrescu, Xiaolin Wu
    Abstract:

    Multiple description quantization is a signal compression technique for robust networked multimedia communication. In this paper we consider the problem of optimally quantizing a random variable into two descriptions, with each description being produced by a side quantizer of convex codecells. The optimization objective is to minimize the expected distortion given the probabilities of receiving either and both descriptions. The problem is formulated as one of shortest path in a weighted directed acyclic graph with constraints on the number and types of edges. An $O(K_1K_2N^3)$ time algorithm for designing the optimal two-description quantizer is presented, where $N$ is the cardinality of the Source Alphabet, and $K_1$, $K_2$ are the number of codewords of the two quantizers, respectively. This complexity is reduced to $O(K_1K_2N^2)$ by exploiting the Monge property of the objective function. Furthermore, if $K_1 = K_2 = K$ and the two descriptions are transmitted through two channels of the same statistics, then the optimal two-description quantizer design problem can be solved in $O(KN^2)$ time.
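
To make the shortest-path/DP formulation concrete in its simplest single-description form (not the paper's two-description construction), the sketch below computes the optimal K-cell partition with contiguous codecells for a discrete source under squared error; the O(KN^2) double loop is exactly the step a Monge-type property would accelerate:

```python
def optimal_scalar_quantizer(values, probs, K):
    """Optimal K-cell scalar quantizer with contiguous codecells for a
    discrete source, minimizing expected squared error, via DP over
    prefixes -- a single-description analogue of the shortest-path
    formulation. O(K * N^2) time without any Monge speedup."""
    N = len(values)
    # Prefix sums of mass and first/second moments for O(1) cell costs.
    P = [0.0] * (N + 1)
    M = [0.0] * (N + 1)
    S = [0.0] * (N + 1)
    for i, (x, p) in enumerate(zip(values, probs)):
        P[i + 1] = P[i] + p
        M[i + 1] = M[i] + p * x
        S[i + 1] = S[i] + p * x * x

    def cell_cost(i, j):
        # Expected squared error of values[i:j] around its centroid.
        mass = P[j] - P[i]
        if mass <= 0.0:
            return 0.0
        mean = (M[j] - M[i]) / mass
        return (S[j] - S[i]) - mass * mean * mean

    INF = float("inf")
    D = [[INF] * (N + 1) for _ in range(K + 1)]
    D[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(1, N + 1):
            D[k][j] = min(D[k - 1][i] + cell_cost(i, j) for i in range(j))
    return D[K][N]

vals = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
p = [1 / 6] * 6
print(optimal_scalar_quantizer(vals, p, 2))  # picks cells {0,1,2} and {10,11,12}
```

Viewed as a graph problem, each contiguous cell [i, j) is an edge of weight cell_cost(i, j), and the optimal quantizer is a minimum-weight path with exactly K edges from 0 to N.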

Hiroshi Fujisaki - One of the best experts on this subject based on the ideXlab platform.

  • ISIT - Countably Infinite Multilevel Source Polarization for Non-Stationary Erasure Distributions
    2019 IEEE International Symposium on Information Theory (ISIT), 2019
    Co-Authors: Yuta Sakai, Ken-ichi Iwata, Hiroshi Fujisaki
    Abstract:

    Polar transforms are central operations in the study of polar codes. This paper examines polar transforms for non-stationary memoryless Sources on possibly infinite Source Alphabets. This is the first attempt at Source polarization analysis over infinite Alphabets. The Source Alphabet is defined to be a Polish group, and we handle the Arikan-style two-by-two polar transform based on the group. Defining erasure distributions based on the normal subgroup structure, we give recursive formulas of the polar transform for erasure distributions. We then show concrete examples of multilevel Source polarization with countably infinite levels when the group is locally cyclic. We derive this result via elementary techniques in lattice theory.

  • Countably Infinite Multilevel Source Polarization for Non-Stationary Erasure Distributions.
    arXiv: Information Theory, 2019
    Co-Authors: Yuta Sakai, Ken-ichi Iwata, Hiroshi Fujisaki
    Abstract:

    Polar transforms are central operations in the study of polar codes. This paper examines polar transforms for non-stationary memoryless Sources on possibly infinite Source Alphabets. This is the first attempt at Source polarization analysis over infinite Alphabets. The Source Alphabet is defined to be a Polish group, and we handle the Arıkan-style two-by-two polar transform based on the group. Defining erasure distributions based on the normal subgroup structure, we give recursive formulas of the polar transform for our proposed erasure distributions. As a result, the recursive formulas lead to concrete examples of multilevel Source polarization with countably infinite levels when the group is locally cyclic. We derive this result via elementary techniques in lattice theory.

P De La Fuente - One of the best experts on this subject based on the ideXlab platform.

  • On the use of words as Source Alphabet symbols in PPM
    Data Compression Conference, 2006
    Co-Authors: Joaquin Adiego, P De La Fuente
    Abstract:

    Summary form only given. We explore the use of words as the basic unit in PPM, following two different approaches: (1) we add a preprocessing layer to PPM that replaces each word with a two-byte codeword, and the resulting codeword stream is then compressed with a conventional PPM; and (2) we modify PPM so that it treats words, rather than characters, as symbols, making its predictions over sequences of consecutive words. Experimental results show that both techniques outperform character-based compressors (including the PPM version used and adapted in the prototypes) for files larger than 1 MB; for smaller files the overhead of storing the vocabulary dominates. Prototype 1 compresses in about the same time while requiring slightly more memory (approaching PPMDi as the file size grows), and it is about 5000% faster and uses about 92% less memory than PPMZ. Prototype 2 demands much more time and memory and performs similarly to PPMZ, one of the better PPM variations.
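
The first approach (replacing words by two-byte codewords before a conventional PPM pass) can be sketched as a simple vocabulary-substitution front end; this is an illustrative simplification, not the authors' implementation:

```python
def word_codeword_layer(text):
    """Sketch of the word-to-codeword front end: assign each distinct
    word a fixed two-byte codeword and emit the codeword stream, which
    would then be fed to an ordinary byte-oriented PPM coder.
    First-come vocabulary numbering is a simplification."""
    vocab = {}
    out = bytearray()
    for w in text.split():
        if w not in vocab:
            if len(vocab) >= 1 << 16:
                raise ValueError("two-byte codeword space exhausted")
            vocab[w] = len(vocab)
        out += vocab[w].to_bytes(2, "big")
    return bytes(out), vocab

encoded, vocab = word_codeword_layer("to be or not to be")
print(len(vocab), len(encoded))  # 4 distinct words, 6 words * 2 bytes = 12
```

The vocabulary itself must also be stored for decoding, which is the fixed overhead that makes the scheme pay off only above a certain file size.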

E C Van Der Meulen - One of the best experts on this subject based on the ideXlab platform.