Alphabet Size

The Experts below are selected from a list of 12,105 Experts worldwide, ranked by the ideXlab platform.

David Burshtein - One of the best experts on this subject based on the ideXlab platform.

  • On the Finite Length Scaling of $q$-Ary Polar Codes
    IEEE Transactions on Information Theory, 2018
    Co-Authors: Dina Goldin, David Burshtein
    Abstract:

    The polarization process of polar codes over a prime q-ary Alphabet is studied. Recently, it has been shown that the blocklength of polar codes with prime Alphabet Size scales polynomially with respect to the inverse of the gap between code rate and channel capacity. However, except for the binary case, the degree of the polynomial in the bound is extremely large. In this paper, a different approach to computing the degree of this polynomial for any prime Alphabet Size is shown. This approach yields a lower-degree polynomial for the various values of the Alphabet Size that were examined. It is also shown that an even lower-degree polynomial can be computed with additional numerical effort.
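
A minimal numerical sketch of the polarization phenomenon behind these scaling results, assuming the q-ary erasure channel and the standard 2x2 kernel, for which one polarization step maps an erasure probability eps to 2*eps - eps^2 (the "bad" channel) and eps^2 (the "good" channel) regardless of the Alphabet Size. It only illustrates how quickly the synthesized channels approach the extremes; it is not the paper's analysis.

```python
# Sketch under the assumptions above: erasure-probability recursion of the
# polar transform, counting how many synthesized channels are still far from 0/1.
import numpy as np

def polarize(eps0=0.5, levels=16):
    """Erasure probabilities of the 2**levels synthesized channels."""
    z = np.array([eps0])
    for _ in range(levels):
        z = np.concatenate([2 * z - z**2, z**2])   # one polarization step
    return z

if __name__ == "__main__":
    delta = 1e-3
    for levels in (8, 12, 16, 20):
        z = polarize(0.5, levels)
        frac = np.mean((z > delta) & (z < 1 - delta))
        print(f"blocklength 2^{levels}: fraction of unpolarized channels = {frac:.4f}")
```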

  • On the Finite Length Scaling of Ternary Polar Codes
    arXiv: Information Theory, 2015
    Co-Authors: Dina Goldin, David Burshtein
    Abstract:

    The polarization process of polar codes over a ternary Alphabet is studied. Recently, it has been shown that the blocklength of polar codes with prime Alphabet Size scales polynomially with respect to the inverse of the gap between code rate and channel capacity. However, except for the binary case, the degree of the polynomial in the bound is extremely large. In this work, it is shown that a much lower-degree polynomial can be computed numerically for the ternary case. Similar results are conjectured for the general case of prime Alphabet Size.

  • ISIT - On the finite length scaling of ternary polar codes
    2015 IEEE International Symposium on Information Theory (ISIT), 2015
    Co-Authors: Dina Goldin, David Burshtein
    Abstract:

    The polarization process of polar codes over a ternary Alphabet is studied. Recently, it has been shown that the blocklength of polar codes with prime Alphabet Size scales polynomially with respect to the inverse of the gap between code rate and channel capacity. However, except for the binary case, the degree of the polynomial in the bound is extremely large. In this work, it is shown that a much lower-degree polynomial can be computed numerically for the ternary case. Similar results are conjectured for the general case of prime Alphabet Size.

Mohammadali Khosravifard - One of the best experts on this subject based on the ideXlab platform.

  • A New Code for Encoding All Monotone Sources With a Fixed Large Alphabet Size
    IEEE Transactions on Information Theory, 2020
    Co-Authors: Hamed Narimani, Mohammadali Khosravifard
    Abstract:

    The problem of designing a fixed code for encoding all monotone sources with a large Alphabet Size $n$ is studied. It is proved that if the codelengths of a code sequence are of order $O(\log n)$, then the redundancy for almost all monotone sources with $n$ symbols is very close to that for a specific distribution, the so-called average distribution. In light of this, for any large Alphabet Size $n$, a new code with a very simple structure is proposed whose redundancy is close to that of the Huffman code for almost all monotone sources with Alphabet Size $n$.
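
As a rough illustration of the idea (not the paper's construction), the sketch below builds one very simple fixed code from an assumed "average distribution" a_i = (1/n) * sum_{k=i..n} 1/k, i.e. the mean of a uniformly drawn source sorted in decreasing order, and checks empirically how the redundancy of this single code concentrates over random monotone sources. Both the distribution and the codelength rule are assumptions made for the example.

```python
# Hedged sketch: fixed Shannon codelengths ceil(-log2 a_i) for the ASSUMED
# average distribution; they satisfy the Kraft inequality and grow like O(log n).
import numpy as np

def average_distribution(n):
    inv = 1.0 / np.arange(1, n + 1)
    return np.cumsum(inv[::-1])[::-1] / n            # a_i = (1/n) * sum_{k>=i} 1/k

def redundancy(p, lengths):
    return np.sum(p * lengths) + np.sum(p * np.log2(p))   # E[L] - H(P), p > 0

if __name__ == "__main__":
    n = 10_000
    lengths = np.ceil(-np.log2(average_distribution(n)))  # one fixed codelength profile
    rng = np.random.default_rng(0)
    reds = [redundancy(np.sort(rng.dirichlet(np.ones(n)))[::-1], lengths)
            for _ in range(50)]                          # random monotone sources
    print(f"n = {n}: redundancy mean = {np.mean(reds):.4f} bits, std = {np.std(reds):.5f}")
```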

  • On the Penalty of Optimal Fix-Free Codes
    IEEE Transactions on Information Theory, 2015
    Co-Authors: Sayed Jalal Zahabi, Mohammadali Khosravifard
    Abstract:

    In this paper, the difference between the redundancy of the optimal asymmetric/symmetric fix-free code and that of the optimal prefix-free code is considered as the penalty of benefiting from the desirable properties of fix-free codes. This penalty is studied from different perspectives. In particular, it is shown that the average penalty of asymmetric fix-free codes is less than 0.21 bit per symbol. Moreover, it is proved that when the source Alphabet Size is sufficiently large, the penalty is less than or equal to 0.182 bit per symbol for almost all sources. Regarding symmetric fix-free codes, it is shown that the average penalty tends to infinity as the source Alphabet Size increases.
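
For readers unfamiliar with the terminology, a fix-free code is simultaneously prefix-free and suffix-free, so it can be decoded from either end; the small helper below (an illustration, not from the paper) checks this property for a set of binary codewords.

```python
# Check whether a set of binary codewords is prefix-free, suffix-free, and hence fix-free.
def is_prefix_free(words):
    return not any(a != b and b.startswith(a) for a in words for b in words)

def is_suffix_free(words):
    return not any(a != b and b.endswith(a) for a in words for b in words)

def is_fix_free(words):
    return is_prefix_free(words) and is_suffix_free(words)

if __name__ == "__main__":
    print(is_fix_free(["0", "10", "110", "111"]))  # False: "0" is also a suffix of "10"
    print(is_fix_free(["0", "11", "101"]))         # True: prefix-free and suffix-free
```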

  • Huffman Redundancy for Large Alphabet Sources
    IEEE Transactions on Information Theory, 2014
    Co-Authors: Hamed Narimani, Mohammadali Khosravifard
    Abstract:

    The performance of optimal prefix-free encoding for memoryless sources with a large Alphabet Size is studied. It is shown that the redundancy of the Huffman code for almost all sources with a large Alphabet Size n is very close to that of the average distribution of the monotone sources with n symbols. This value lies between 0.02873 and 0.02877 bit for sufficiently large n.
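
A hedged sketch of how such a number could be reproduced numerically: build a Huffman code for an assumed "average distribution" a_i = (1/n) * sum_{k=i..n} 1/k with a large Alphabet Size and compare its redundancy with the reported range. The form of the average distribution is an assumption made for this example.

```python
# Huffman redundancy of the ASSUMED average distribution, for comparison with
# the 0.02873--0.02877 range quoted above.
import heapq
import numpy as np

def huffman_lengths(p):
    """Codeword lengths of a binary Huffman code for probability vector p."""
    heap = [(float(pi), i, None) for i, pi in enumerate(p)]   # (prob, tiebreak, children)
    heapq.heapify(heap)
    nxt = len(p)
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        heapq.heappush(heap, (a[0] + b[0], nxt, (a, b)))
        nxt += 1
    lengths = np.zeros(len(p))
    stack = [(heap[0], 0)]
    while stack:                                              # leaf depth = codeword length
        (prob, idx, children), depth = stack.pop()
        if children is None:
            lengths[idx] = max(depth, 1)
        else:
            stack.append((children[0], depth + 1))
            stack.append((children[1], depth + 1))
    return lengths

if __name__ == "__main__":
    n = 50_000
    a = np.cumsum((1.0 / np.arange(1, n + 1))[::-1])[::-1] / n   # assumed average distribution
    L = huffman_lengths(a)
    print("Huffman redundancy of the average distribution:",
          np.sum(a * L) + np.sum(a * np.log2(a)))
```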

  • The overall performance of the Shannon code
    2008 International Symposium on Information Theory and Its Applications, 2008
    Co-Authors: Mohammadali Khosravifard, Hamed Narimani, M. Razaviyayn, T.a. Gulliver
    Abstract:

    It is well known that the redundancy of the Shannon code lies in the interval (0, 1). In order to study the overall performance of the Shannon code, we consider its redundancy as a random variable on the set of sources with n symbols, denoted $R_{sh}(n)$, and examine its statistical parameters. It is shown that the mean of $R_{sh}(n)$ approaches 0.5 for sources with a large Alphabet Size n. Moreover, we observe that its variance tends to zero as n increases. In short, for almost all sources with a large Alphabet Size, the redundancy of the Shannon code is close to 0.5 bit.
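
A quick empirical check of this statement, assuming the uniform (Dirichlet(1, ..., 1)) measure on the probability simplex as the ensemble of sources; the Shannon codelengths are ceil(-log2 p_i) and the redundancy is E[L] - H(P).

```python
# Mean and variance of the Shannon-code redundancy over random sources of size n.
import numpy as np

def shannon_redundancy(p):
    lengths = np.ceil(-np.log2(p))
    return np.sum(p * lengths) + np.sum(p * np.log2(p))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    for n in (10, 100, 1_000, 10_000):
        r = [shannon_redundancy(rng.dirichlet(np.ones(n))) for _ in range(200)]
        print(f"n = {n:6d}: mean = {np.mean(r):.4f}, variance = {np.var(r):.6f}")
```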

Tuvi Etzion - One of the best experts on this subject based on the ideXlab platform.

  • Network Coding Solutions for the Combination Network and its Subgraphs
    arXiv: Information Theory, 2019
    Co-Authors: Han Cai, Moshe Schwartz, Tuvi Etzion, Antonia Wachter-zeh
    Abstract:

    The combination network is one of the simplest and most insightful networks in coding theory. The vector network coding solutions for this network and some of its sub-networks are examined. For a fixed Alphabet Size of a vector network coding solution, an upper bound on the number of nodes in the network is obtained. This bound is an MDS bound for subspaces over a finite field. A family of sub-networks of combination networks is defined. It is proved that for this family of networks, which are minimal multicast networks, there is a gap in the minimum Alphabet Size between vector network coding solutions and scalar network coding solutions. This gap is obtained for any number of messages and is based on a coloring of the $q$-Kneser graph and a new hypergraph generalization of it.
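
As background for the MDS connection mentioned above, the sketch below illustrates the classical scalar linear solution of a combination network (one source with h messages, r middle nodes, and a receiver for every h-subset of middle nodes): giving middle node j the Vandermonde-type coding vector (1, a_j, ..., a_j^{h-1}) over a prime field GF(p) with distinct a_j lets every receiver invert its h x h system, which forces the scalar Alphabet Size to grow with r. This is a standard textbook construction, not the vector scheme analyzed in the paper.

```python
# Verify that every receiver of a small combination network can decode under the
# Vandermonde-type scalar assignment described above (requires p >= r).
from itertools import combinations

def invertible_mod_p(rows, p):
    """Gaussian elimination over GF(p); True iff the square matrix is invertible."""
    m = [list(row) for row in rows]
    n = len(m)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] % p != 0), None)
        if pivot is None:
            return False
        m[col], m[pivot] = m[pivot], m[col]
        inv = pow(m[col][col] % p, -1, p)            # modular inverse (Python 3.8+)
        for r in range(col + 1, n):
            factor = (m[r][col] * inv) % p
            m[r] = [(x - factor * y) % p for x, y in zip(m[r], m[col])]
    return True

def middle_node_vectors(h, r, p):
    """Coding vectors (1, a, a^2, ..., a^{h-1}) for the r middle nodes."""
    return [[pow(a, k, p) for k in range(h)] for a in range(r)]

if __name__ == "__main__":
    h, r, p = 3, 7, 7                                # C(7, 3) = 35 receivers
    vectors = middle_node_vectors(h, r, p)
    ok = all(invertible_mod_p([vectors[i] for i in subset], p)
             for subset in combinations(range(r), h))
    print("every receiver can decode all h messages:", ok)
```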

  • ISIT - Network Coding Solutions for the Combination Network and its Subgraphs
    2019 IEEE International Symposium on Information Theory (ISIT), 2019
    Co-Authors: Han Cai, Moshe Schwartz, Tuvi Etzion, Antonia Wachter-zeh
    Abstract:

    The combination network is one of the simplest and most insightful networks in coding theory. The vector network coding solutions for this network and some of its sub-networks are examined. For a fixed Alphabet Size of a vector network coding solution, an upper bound on the number of nodes in the network is obtained. This bound is an MDS bound for subspaces over a finite field. A family of sub-networks of combination networks is defined. It is proved that for this family of networks, which are minimal multicast networks, there is a gap in the minimum Alphabet Size between vector network coding solutions and scalar network coding solutions. This gap is obtained for any number of messages and is based on a coloring of the q-Kneser graph and a new hypergraph generalization of it.

  • vector network coding based on subspace codes outperforms scalar linear network coding
    IEEE Transactions on Information Theory, 2018
    Co-Authors: Tuvi Etzion, Antonia Wachterzeh
    Abstract:

    This paper considers vector network coding solutions based on rank-metric codes and subspace codes. The main result of this paper is that vector solutions can significantly reduce the required Alphabet Size compared to the optimal scalar linear solution for the same multicast network. The multicast networks considered in this paper have one source with $h$ messages, and the vector solution is over a field of Size $q$ with vectors of length $t$. For a given network, let the smallest field Size for which the network has a scalar linear solution be $q_{s}$; then the gap in the Alphabet Size between the vector solution and the scalar linear solution is defined to be $q_{s}-q^{t}$. In this contribution, the achieved gap is $q^{(h-2)t^{2}/h + o(t)}$ for any $q \geq 2$ and any even $h \geq 4$. If $h \geq 5$ is odd, then the achieved gap of the Alphabet Size is $q^{(h-3)t^{2}/(h-1) + o(t)}$. Previously, only a gap of Size one had been shown for networks with a very large number of messages. These results imply the same gap of the Alphabet Size between the optimal scalar linear and some scalar nonlinear network coding solution for multicast networks. For three messages, we also show an advantage of vector network coding, while for two messages the problem remains open. Several networks are considered, all of which are generalizations and modifications of the well-known combination networks. The vector network codes that are used as solutions for those networks are based on subspace codes, particularly subspace codes obtained from rank-metric codes. Some of these codes form a new family of subspace codes, which poses a new research problem.
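
A back-of-the-envelope evaluation of the stated gap exponents for a few small parameter choices, ignoring the o(t) term, so only the leading-order growth is shown; the parameter values are arbitrary examples chosen for illustration.

```python
# Leading-order size of the Alphabet-Size gap q^{(h-2)t^2/h} (even h) or
# q^{(h-3)t^2/(h-1)} (odd h >= 5), with the o(t) term dropped.
def gap_exponent(h, t):
    """Leading term of log_q(gap)."""
    if h >= 4 and h % 2 == 0:
        return (h - 2) * t * t / h
    if h >= 5 and h % 2 == 1:
        return (h - 3) * t * t / (h - 1)
    raise ValueError("gap stated for even h >= 4 or odd h >= 5")

if __name__ == "__main__":
    for q, h, t in [(2, 4, 2), (2, 4, 4), (3, 5, 3)]:
        e = gap_exponent(h, t)
        print(f"q={q}, h={h}, t={t}: gap grows like q^{e:g} = {q ** e:.3g}")
```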

  • vector network coding based on subspace codes outperforms scalar linear network coding
    arXiv: Information Theory, 2015
    Co-Authors: Tuvi Etzion, Antonia Wachterzeh
    Abstract:

    This paper considers vector network coding solutions based on rank-metric codes and subspace codes. The main result of this paper is that vector solutions can significantly reduce the required Alphabet Size compared to the optimal scalar linear solution for the same multicast network. The multicast networks considered in this paper have one source with $h$ messages, and the vector solution is over a field of Size $q$ with vectors of length $t$. For a given network, let the smallest field Size for which the network has a scalar linear solution be $q_s$; then the gap in the Alphabet Size between the vector solution and the scalar linear solution is defined to be $q_s-q^t$. In this contribution, the achieved gap is $q^{(h-2)t^2/h + o(t)}$ for any $q \geq 2$ and any even $h \geq 4$. If $h \geq 5$ is odd, then the achieved gap of the Alphabet Size is $q^{(h-3)t^2/(h-1) + o(t)}$. Previously, only a gap of Size one had been shown for networks with a very large number of messages. These results imply the same gap of the Alphabet Size between the optimal scalar linear and some scalar nonlinear network coding solution for multicast networks. For three messages, we also show an advantage of vector network coding, while for two messages the problem remains open. Several networks are considered, all of which are generalizations and modifications of the well-known combination networks. The vector network codes that are used as solutions for those networks are based on subspace codes, particularly subspace codes obtained from rank-metric codes. Some of these codes form a new family of subspace codes, which poses a new research problem.

Emina Soljanin - One of the best experts on this subject based on the ideXlab platform.

  • On average throughput and Alphabet Size in network coding
    IEEE ACM Transactions on Networking, 2006
    Co-Authors: Chandra Chekuri, Christina Fragouli, Emina Soljanin
    Abstract:

    We examine the throughput benefits that network coding offers with respect to the average throughput achievable by routing, where the average throughput refers to the average of the rates that the individual receivers experience. We relate these benefits to the integrality gap of a standard linear programming formulation for the directed Steiner tree problem. We describe families of configurations over which network coding at most doubles the average throughput, and analyze a class of directed graph configurations with N receivers where network coding offers benefits proportional to √N. We also discuss other throughput measures in networks and show how, in certain classes of networks, average throughput bounds can be translated into minimum throughput bounds by employing vector routing and channel coding. Finally, we show configurations where the use of randomized coding may require an Alphabet Size exponentially larger than the minimum required.

  • on average throughput and Alphabet Size in network coding
    International Symposium on Information Theory, 2005
    Co-Authors: Chandra Chekuri, Christina Fragouli, Emina Soljanin
    Abstract:

    We analyze a special class of configurations with h sources and N receivers to demonstrate the throughput benefits of network coding and deterministic code design. We show that the throughput benefits network coding offers can increase proportionally to √N, with respect to the average as well as the minimum throughput. For this class of configurations we also show that there exists a deterministic coding scheme that realizes these benefits using a binary Alphabet, whereas randomized coding may require an exponentially large Alphabet Size.

  • ISIT - On average throughput and Alphabet Size in network coding
    Proceedings. International Symposium on Information Theory 2005. ISIT 2005., 2005
    Co-Authors: Chandra Chekuri, Christina Fragouli, Emina Soljanin
    Abstract:

    We analyze a special class of configurations with h sources and N receivers to demonstrate the throughput benefits of network coding and deterministic code design. We show that the throughput benefits network coding offers can increase proportionally to √N, with respect to the average as well as the minimum throughput. For this class of configurations we also show that there exists a deterministic coding scheme that realizes these benefits using a binary Alphabet, whereas randomized coding may require an exponentially large Alphabet Size.

  • On Average Throughput Benefits and Alphabet Size in Network Coding
    2005
    Co-Authors: Chandra Chekuri, Christina Fragouli, Emina Soljanin
    Abstract:

    We analyze a special class of configurations with h sources and N receivers to demonstrate the throughput benefits of network coding and deterministic code design. We show that the throughput benefits network coding offers can increase proportionally to √N, with respect to the average as well as the minimum throughput. We also show that while for this class of configurations there exists a deterministic coding scheme that realizes these benefits using a binary Alphabet, randomized coding may require an exponentially large Alphabet Size.

Dina Goldin - One of the best experts on this subject based on the ideXlab platform.

  • On the Finite Length Scaling of $q$-Ary Polar Codes
    IEEE Transactions on Information Theory, 2018
    Co-Authors: Dina Goldin, David Burshtein
    Abstract:

    The polarization process of polar codes over a prime q-ary Alphabet is studied. Recently, it has been shown that the blocklength of polar codes with prime Alphabet Size scales polynomially with respect to the inverse of the gap between code rate and channel capacity. However, except for the binary case, the degree of the polynomial in the bound is extremely large. In this paper, a different approach to computing the degree of this polynomial for any prime Alphabet Size is shown. This approach yields a lower-degree polynomial for the various values of the Alphabet Size that were examined. It is also shown that an even lower-degree polynomial can be computed with additional numerical effort.

  • On the Finite Length Scaling of Ternary Polar Codes
    arXiv: Information Theory, 2015
    Co-Authors: Dina Goldin, David Burshtein
    Abstract:

    The polarization process of polar codes over a ternary Alphabet is studied. Recently, it has been shown that the blocklength of polar codes with prime Alphabet Size scales polynomially with respect to the inverse of the gap between code rate and channel capacity. However, except for the binary case, the degree of the polynomial in the bound is extremely large. In this work, it is shown that a much lower-degree polynomial can be computed numerically for the ternary case. Similar results are conjectured for the general case of prime Alphabet Size.

  • ISIT - On the finite length scaling of ternary polar codes
    2015 IEEE International Symposium on Information Theory (ISIT), 2015
    Co-Authors: Dina Goldin, David Burshtein
    Abstract:

    The polarization process of polar codes over a ternary Alphabet is studied. Recently, it has been shown that the blocklength of polar codes with prime Alphabet Size scales polynomially with respect to the inverse of the gap between code rate and channel capacity. However, except for the binary case, the degree of the polynomial in the bound is extremely large. In this work, it is shown that a much lower-degree polynomial can be computed numerically for the ternary case. Similar results are conjectured for the general case of prime Alphabet Size.