Memoryless Source

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 4770 Experts worldwide ranked by ideXlab platform

Tsachy Weissman - One of the best experts on this subject based on the ideXlab platform.

  • rateless lossy compression via the extremes
    IEEE Transactions on Information Theory, 2016
    Co-Authors: Albert No, Tsachy Weissman
    Abstract:

    We begin by presenting a simple lossy compressor operating at near-zero rate: The encoder merely describes the indices of the few maximal Source components, while the decoder’s reconstruction is a natural estimate of the Source components based on this information. This scheme turns out to be near-optimal for the Memoryless Gaussian Source in the sense of achieving the zero-rate slope of its distortion-rate function. Motivated by this finding, we then propose a scheme that iterates the above lossy compressor on an appropriately transformed version of the difference between the Source and its reconstruction from the previous iteration. The proposed scheme achieves the rate distortion function of the Gaussian Memoryless Source (under squared error distortion) when employed on any finite-variance ergodic Source. It further possesses desirable properties, which we respectively refer to as infinitesimal successive refinability, ratelessness, and complete separability. Its storage and computation requirements are of order no more than $n^{2}/\log^{\beta} n$ per Source symbol for $\beta > 0$ at both the encoder and the decoder. Though the details of its derivation, construction, and analysis differ considerably, we discuss similarities between the proposed scheme and the recently introduced Sparse Regression Codes of Venkataramanan et al.
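    The near-zero-rate step described in the abstract can be sketched in a few lines: the encoder sends only the indices of the k largest components, and the decoder plants a fixed estimate at those positions. A minimal sketch, not the authors' construction — the choice of estimate here (the typical size of the maximum of n standard Gaussians) is an illustrative assumption, where the paper derives the optimal one:

    ```python
    import math
    import random

    def encode_extremes(x, k):
        """Describe only the indices of the k largest components."""
        return sorted(range(len(x)), key=lambda i: x[i], reverse=True)[:k]

    def decode_extremes(indices, n, est):
        """Reconstruct with a fixed estimate at the reported positions, zero elsewhere."""
        xhat = [0.0] * n
        for i in indices:
            xhat[i] = est
        return xhat

    random.seed(0)
    n, k = 10_000, 10
    x = [random.gauss(0.0, 1.0) for _ in range(n)]

    # Illustrative stand-in for the decoder's estimate: the typical size of
    # the largest of n standard Gaussians, sqrt(2 ln n).
    est = math.sqrt(2 * math.log(n))

    idx = encode_extremes(x, k)
    xhat = decode_extremes(idx, n, est)

    mse_zero = sum(v * v for v in x) / n                      # rate-zero baseline
    mse_top = sum((a - b) ** 2 for a, b in zip(x, xhat)) / n  # after describing extremes
    print(mse_top < mse_zero)
    ```

    Iterating this compressor on the (transformed) residual, as the abstract describes, is what drives the distortion down the full rate-distortion curve.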

  • rateless lossy compression via the extremes
    Allerton Conference on Communication Control and Computing, 2014
    Co-Authors: Albert No, Tsachy Weissman
    Abstract:

    We begin by presenting a simple lossy compressor operating at near-zero rate: The encoder merely describes the indices of the few maximal Source components, while the decoder's reconstruction is a natural estimate of the Source components based on this information. This scheme turns out to be near-optimal for the Memoryless Gaussian Source in the sense of achieving the zero-rate slope of its distortion-rate function. Motivated by this finding, we then propose a scheme that iterates the above lossy compressor on an appropriately transformed version of the difference between the Source and its reconstruction from the previous iteration. The proposed scheme achieves the rate distortion function of the Gaussian Memoryless Source (under squared error distortion) when employed on any finite-variance ergodic Source. It further possesses desirable properties we respectively refer to as infinitesimal successive refinability, ratelessness, and complete separability. Its storage and computation requirements are of order no more than $n^2/\log^{\beta} n$ per Source symbol for $\beta > 0$ at both the encoder and decoder. Though the details of its derivation, construction, and analysis differ considerably, we discuss similarities between the proposed scheme and the recently introduced Sparse Regression Codes (SPARC) of Venkataramanan et al.

  • rateless lossy compression via the extremes
    arXiv: Information Theory, 2014
    Co-Authors: Albert No, Tsachy Weissman
    Abstract:

    We begin by presenting a simple lossy compressor operating at near-zero rate: The encoder merely describes the indices of the few maximal Source components, while the decoder's reconstruction is a natural estimate of the Source components based on this information. This scheme turns out to be near-optimal for the Memoryless Gaussian Source in the sense of achieving the zero-rate slope of its distortion-rate function. Motivated by this finding, we then propose a scheme that iterates the above lossy compressor on an appropriately transformed version of the difference between the Source and its reconstruction from the previous iteration. The proposed scheme achieves the rate distortion function of the Gaussian Memoryless Source (under squared error distortion) when employed on any finite-variance ergodic Source. It further possesses desirable properties we respectively refer to as infinitesimal successive refinability, ratelessness, and complete separability. Its storage and computation requirements are of order no more than $\frac{n^2}{\log^{\beta} n}$ per Source symbol for $\beta>0$ at both the encoder and decoder. Though the details of its derivation, construction, and analysis differ considerably, we discuss similarities between the proposed scheme and the recently introduced Sparse Regression Codes (SPARC) of Venkataramanan et al.

  • network compression worst case analysis
    International Symposium on Information Theory, 2013
    Co-Authors: Himanshu Asnani, Ilan Shomorony, Salman A Avestimehr, Tsachy Weissman
    Abstract:

    We consider the problem of communicating a distributed correlated Memoryless Source over a Memoryless network, from Source nodes to destination nodes, under quadratic distortion constraints. We show the following two complementary results: (a) for an arbitrary Memoryless network, among all distributed Memoryless Sources with a particular correlation, Gaussian Sources are the worst compressible, that is, they admit the smallest set of achievable distortion tuples, and (b) for any arbitrarily distributed Memoryless Source to be communicated over a Memoryless additive noise network, among all noise processes with a fixed correlation, Gaussian noise admits the smallest achievable set of distortion tuples. In each case, given a coding scheme for the corresponding Gaussian problem, we provide a technique for the construction of a new coding scheme that achieves the same distortion at the destination nodes in a non-Gaussian scenario with the same correlation structure.

  • network compression worst case analysis
    arXiv: Information Theory, 2013
    Co-Authors: Himanshu Asnani, Ilan Shomorony, Salman A Avestimehr, Tsachy Weissman
    Abstract:

    We study the problem of communicating a distributed correlated Memoryless Source over a Memoryless network, from Source nodes to destination nodes, under quadratic distortion constraints. We establish the following two complementary results: (a) for an arbitrary Memoryless network, among all distributed Memoryless Sources of a given correlation, Gaussian Sources are least compressible, that is, they admit the smallest set of achievable distortion tuples, and (b) for any Memoryless Source to be communicated over a Memoryless additive-noise network, among all noise processes of a given correlation, Gaussian noise admits the smallest achievable set of distortion tuples. We establish these results constructively by showing how schemes for the corresponding Gaussian problems can be applied to achieve similar performance for (Source or noise) distributions that are not necessarily Gaussian but have the same covariance.

Sergio Verdu - One of the best experts on this subject based on the ideXlab platform.

  • fixed length lossy compression in the finite blocklength regime discrete Memoryless Sources
    International Symposium on Information Theory, 2011
    Co-Authors: Victoria Kostina, Sergio Verdu
    Abstract:

    This paper studies the minimum achievable Source coding rate as a function of blocklength n and tolerable distortion level d. Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary Memoryless Sources with separable distortion, the minimum achievable rate is shown to be closely approximated by $R(n, d, \epsilon) \approx R(d) + \sqrt{V(d)/n}\, Q^{-1}(\epsilon)$, where R(d) is the rate-distortion function, V(d) is the rate dispersion, a characteristic of the Source which measures its stochastic variability, $Q^{-1}(\cdot)$ is the inverse of the standard Gaussian complementary cdf, and $\epsilon$ is the probability that the distortion exceeds d. The new bounds and the second-order approximation of the minimum achievable rate are evaluated for the discrete Memoryless Source with symbol error rate distortion. In this case, the second-order approximation takes a particularly simple form if the Source is non-redundant.
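    The second-order approximation above is easy to evaluate numerically. A sketch for a Bernoulli(p) source under bit-error-rate distortion, assuming (as an illustrative stand-in for the paper's dispersion term) that V(d) equals the source's varentropy $p(1-p)\log_2^2\frac{1-p}{p}$:

    ```python
    import math
    from statistics import NormalDist

    def h2(p):
        """Binary entropy in bits."""
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def approx_rate(p, d, n, eps):
        """Second-order approximation R(d) + sqrt(V(d)/n) * Qinv(eps)."""
        R = h2(p) - h2(d)                                    # rate-distortion function of a BMS
        V = p * (1 - p) * math.log2((1 - p) / p) ** 2        # assumed dispersion (varentropy)
        qinv = NormalDist().inv_cdf(1 - eps)                 # inverse complementary Gaussian cdf
        return R + math.sqrt(V / n) * qinv

    p, d, eps = 0.11, 0.05, 0.01
    for n in (100, 1000, 10000):
        print(n, round(approx_rate(p, d, n, eps), 4))
    # The rate penalty over R(d) = h2(0.11) - h2(0.05) shrinks like 1/sqrt(n).
    ```

    The finite-blocklength penalty over R(d) is substantial at n = 100 and decays like $1/\sqrt{n}$, which is the point of the dispersion analysis.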

  • minimum expected length of fixed to variable lossless compression without prefix constraints
    IEEE Transactions on Information Theory, 2011
    Co-Authors: Wojciech Szpankowski, Sergio Verdu
    Abstract:

    The minimum expected length for fixed-to-variable length encoding of an n-block Memoryless Source with entropy H grows as nH + O(1), where the O(1) term lies between 0 and 1. However, this well-known performance is obtained under the implicit constraint that the code assigned to the whole n-block is a prefix code. Dropping the prefix constraint, which is rarely necessary at the block level, we show that the minimum expected length for a finite-alphabet Memoryless Source with known distribution grows as nH − (1/2) log n + O(1) unless the Source is equiprobable. We also refine this result up to o(1) for those Memoryless Sources whose log probabilities do not reside on a lattice.
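    The size of the savings from dropping the prefix constraint can be read off the two asymptotics directly. A sketch that ignores the bounded O(1) terms (an assumption for illustration; the paper characterizes them precisely):

    ```python
    import math

    def entropy_bits(probs):
        """Shannon entropy of a distribution, in bits."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    H = entropy_bits([0.2, 0.8])  # Bernoulli(0.2) source

    for n in (100, 1000, 10000):
        prefix = n * H                           # nH + O(1): prefix codes
        one_to_one = n * H - 0.5 * math.log2(n)  # nH - (1/2)log n + O(1): no prefix constraint
        print(n, round(prefix - one_to_one, 3))  # savings = (1/2) log2 n bits per block
    ```

    The savings grow only logarithmically in the blocklength, so per symbol they vanish, consistent with both codes achieving entropy rate H.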

  • Rate-distortion in near-linear time
    2008 IEEE International Symposium on Information Theory, 2008
    Co-Authors: Ankit Gupta, Sergio Verdu, Tsachy Weissman
    Abstract:

    We present two results related to the computational complexity of lossy compression. The first result shows that for a Memoryless Source $P_S$ with rate-distortion function R(D), the rate-distortion pair $(R(D) + \gamma, D + \epsilon)$ can be achieved with constant decoding time per symbol and encoding time per symbol proportional to $C_1(\gamma)\epsilon^{-C_2(\gamma)}$. The second result establishes that for any given R, there exists a universal lossy compression scheme with O(n g(n)) encoding complexity and O(n) decoding complexity that achieves the point (R, D(R)) asymptotically for any ergodic Source with distortion-rate function D(·), where g(n) is an arbitrary non-decreasing unbounded function. A computationally feasible implementation of the first scheme outperforms many of the best previously proposed schemes for binary Sources with blocklengths of the order of 1000.

Toshiyasu Matsushima - One of the best experts on this subject based on the ideXlab platform.

  • cumulant generating function of codeword lengths in variable length lossy compression allowing positive excess distortion probability
    International Symposium on Information Theory, 2018
    Co-Authors: Shota Saito, Toshiyasu Matsushima
    Abstract:

    This paper considers the problem of variable-length lossy Source coding. The performance criteria are the excess distortion probability and the cumulant generating function of codeword lengths. We derive a non-asymptotic fundamental limit of the cumulant generating function of codeword lengths allowing positive excess distortion probability. It is shown that the achievability and converse bounds are characterized by a Rényi-entropy-based quantity. In the proof of the achievability result, an explicit code construction is provided. Further, we investigate an asymptotic single-letter characterization of the fundamental limit for a stationary Memoryless Source. A full version of this paper is accessible at: http://arxiv.org/abs/1801.02496
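    The connection between exponential moments of codeword lengths and Rényi entropy goes back to Campbell's classical result for lossless prefix codes, which makes a useful sanity check here: with lengths taken from the escort (tilted) distribution $q(x) \propto p(x)^{\alpha}$, the normalized cumulant generating function $\frac{1}{t}\log_2 \mathbb{E}[2^{t\,\ell(X)}]$ lies within one bit of the Rényi entropy of order $\alpha = 1/(1+t)$. A sketch of that classical lossless bound, not this paper's lossy construction:

    ```python
    import math

    def renyi_entropy(probs, alpha):
        """Rényi entropy of order alpha, in bits."""
        return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

    def cgf_rate(probs, lengths, t):
        """(1/t) * log2 E[2^(t * l(X))]: normalized CGF of codeword lengths."""
        return math.log2(sum(p * 2 ** (t * l) for p, l in zip(probs, lengths))) / t

    probs = [0.5, 0.25, 0.125, 0.125]
    t = 1.0
    alpha = 1 / (1 + t)

    # Escort distribution q(x) ∝ p(x)^alpha; lengths l(x) = ceil(-log2 q(x))
    # (near-)minimize the exponential length criterion (Campbell, 1965).
    Z = sum(p ** alpha for p in probs)
    lengths = [math.ceil(-math.log2(p ** alpha / Z)) for p in probs]

    rate = cgf_rate(probs, lengths, t)
    H_alpha = renyi_entropy(probs, alpha)
    print(H_alpha <= rate <= H_alpha + 1)  # within one bit of the Rényi entropy
    ```

    The lossy setting of this paper replaces the Rényi entropy of the source by a Rényi-type quantity tied to the distortion ball, but the role of the cumulant generating function is analogous.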

  • cumulant generating function of codeword lengths in variable length lossy compression allowing positive excess distortion probability
    arXiv: Information Theory, 2018
    Co-Authors: Shota Saito, Toshiyasu Matsushima
    Abstract:

    This paper considers the problem of variable-length lossy Source coding. The performance criteria are the excess distortion probability and the cumulant generating function of codeword lengths. We derive a non-asymptotic fundamental limit of the cumulant generating function of codeword lengths allowing positive excess distortion probability. It is shown that the achievability and converse bounds are characterized by a Rényi-entropy-based quantity. In the proof of the achievability result, an explicit code construction is provided. Further, we investigate an asymptotic single-letter characterization of the fundamental limit for a stationary Memoryless Source.

L L Campbell - One of the best experts on this subject based on the ideXlab platform.

  • error exponents for asymmetric two user discrete Memoryless Source channel coding systems
    IEEE Transactions on Information Theory, 2009
    Co-Authors: Yangfan Zhong, Fady Alajaji, L L Campbell
    Abstract:

    We study the transmission of two discrete Memoryless correlated Sources, consisting of a common and a private Source, over a discrete Memoryless multiterminal channel with two transmitters and two receivers. At the transmitter side, the common Source is observed by both encoders but the private Source can only be accessed by one encoder. At the receiver side, both decoders need to reconstruct the common Source, but only one decoder needs to reconstruct the private Source. We hence refer to this system as the asymmetric two-user Source-channel coding system. We derive a universally achievable lossless joint Source-channel coding (JSCC) error exponent pair for the two-user system by using a technique which generalizes Csiszár's type-packing lemma (1980) for the point-to-point (single-user) discrete Memoryless Source-channel system. We next investigate the largest convergence rate of asymptotic exponential decay of the system (overall) probability of erroneous transmission, i.e., the system JSCC error exponent. We obtain lower and upper bounds for the exponent. As a consequence, we establish a JSCC theorem with single-letter characterization and we show that the separation principle holds for the asymmetric two-user scenario. By introducing common randomization, we also provide a formula for the tandem (separate) Source-channel coding error exponent. Numerical examples show that for a large class of systems consisting of two correlated Sources and an asymmetric multiple-access channel with additive noise, the JSCC error exponent considerably outperforms the corresponding tandem coding error exponent.

  • error exponents for asymmetric two user discrete Memoryless Source channel systems
    International Symposium on Information Theory, 2007
    Co-Authors: Yangfan Zhong, Fady Alajaji, L L Campbell
    Abstract:

    Consider transmitting two discrete Memoryless correlated Sources, consisting of a common and a private Source, over a discrete Memoryless multi-terminal channel with two transmitters and two receivers. At the transmitter side, the common Source is observed by both encoders but the private Source can only be accessed by one encoder. At the receiver side, both decoders need to reconstruct the common Source, but only one decoder needs to reconstruct the private Source. We hence refer to this system as the asymmetric 2-user Source-channel system. In this work, we derive a universally achievable joint Source-channel coding (JSCC) error exponent pair for the 2-user system by using a technique which generalizes Csiszár's method (1980) for the point-to-point (single-user) discrete Memoryless Source-channel system. We next investigate the largest convergence rate of asymptotic exponential decay of the system (overall) probability of erroneous transmission, i.e., the system JSCC error exponent. We obtain lower and upper bounds for the exponent. As a consequence, we establish the JSCC theorem with single-letter characterization.

  • On the joint Source-channel coding error exponent for discrete Memoryless systems
    IEEE Transactions on Information Theory, 2006
    Co-Authors: Yangfan Zhong, F. Alajaji, L L Campbell
    Abstract:

    We investigate the computation of Csiszár's bounds for the joint Source-channel coding (JSCC) error exponent $E_J$ of a communication system consisting of a discrete Memoryless Source and a discrete Memoryless channel. We provide equivalent expressions for these bounds and derive explicit formulas for the rates where the bounds are attained. These equivalent representations can be readily computed for arbitrary Source-channel pairs via Arimoto's algorithm. When the channel's distribution satisfies a symmetry property, the bounds admit closed-form parametric expressions. We then use our results to provide a systematic comparison between the JSCC error exponent $E_J$ and the tandem coding error exponent $E_T$, which applies if the Source and channel are separately coded. It is shown that $E_T \le E_J \le 2E_T$. We establish conditions for which $E_J > E_T$ and for which $E_J = 2E_T$. Numerical examples indicate that $E_J$ is close to $2E_T$ for many Source-channel pairs. This gain translates into a power saving larger than 2 dB for a binary Source transmitted over additive white Gaussian noise (AWGN) channels and Rayleigh-fading channels with finite output quantization. Finally, we study the computation of the lossy JSCC error exponent under the Hamming distortion measure.

Yangfan Zhong - One of the best experts on this subject based on the ideXlab platform.

  • error exponents for asymmetric two user discrete Memoryless Source channel coding systems
    IEEE Transactions on Information Theory, 2009
    Co-Authors: Yangfan Zhong, Fady Alajaji, L L Campbell
    Abstract:

    We study the transmission of two discrete Memoryless correlated Sources, consisting of a common and a private Source, over a discrete Memoryless multiterminal channel with two transmitters and two receivers. At the transmitter side, the common Source is observed by both encoders but the private Source can only be accessed by one encoder. At the receiver side, both decoders need to reconstruct the common Source, but only one decoder needs to reconstruct the private Source. We hence refer to this system as the asymmetric two-user Source-channel coding system. We derive a universally achievable lossless joint Source-channel coding (JSCC) error exponent pair for the two-user system by using a technique which generalizes Csiszár's type-packing lemma (1980) for the point-to-point (single-user) discrete Memoryless Source-channel system. We next investigate the largest convergence rate of asymptotic exponential decay of the system (overall) probability of erroneous transmission, i.e., the system JSCC error exponent. We obtain lower and upper bounds for the exponent. As a consequence, we establish a JSCC theorem with single-letter characterization and we show that the separation principle holds for the asymmetric two-user scenario. By introducing common randomization, we also provide a formula for the tandem (separate) Source-channel coding error exponent. Numerical examples show that for a large class of systems consisting of two correlated Sources and an asymmetric multiple-access channel with additive noise, the JSCC error exponent considerably outperforms the corresponding tandem coding error exponent.

  • error exponents for asymmetric two user discrete Memoryless Source channel systems
    International Symposium on Information Theory, 2007
    Co-Authors: Yangfan Zhong, Fady Alajaji, L L Campbell
    Abstract:

    Consider transmitting two discrete Memoryless correlated Sources, consisting of a common and a private Source, over a discrete Memoryless multi-terminal channel with two transmitters and two receivers. At the transmitter side, the common Source is observed by both encoders but the private Source can only be accessed by one encoder. At the receiver side, both decoders need to reconstruct the common Source, but only one decoder needs to reconstruct the private Source. We hence refer to this system as the asymmetric 2-user Source-channel system. In this work, we derive a universally achievable joint Source-channel coding (JSCC) error exponent pair for the 2-user system by using a technique which generalizes Csiszár's method (1980) for the point-to-point (single-user) discrete Memoryless Source-channel system. We next investigate the largest convergence rate of asymptotic exponential decay of the system (overall) probability of erroneous transmission, i.e., the system JSCC error exponent. We obtain lower and upper bounds for the exponent. As a consequence, we establish the JSCC theorem with single-letter characterization.

  • on the joint Source channel coding error exponent for discrete Memoryless systems computation and comparison with separate coding
    arXiv: Information Theory, 2006
    Co-Authors: Yangfan Zhong, Fady Alajaji, Lorne L Campbell
    Abstract:

    We investigate the computation of Csiszár's bounds for the joint Source-channel coding (JSCC) error exponent, $E_J$, of a communication system consisting of a discrete Memoryless Source and a discrete Memoryless channel. We provide equivalent expressions for these bounds and derive explicit formulas for the rates where the bounds are attained. These equivalent representations can be readily computed for arbitrary Source-channel pairs via Arimoto's algorithm. When the channel's distribution satisfies a symmetry property, the bounds admit closed-form parametric expressions. We then use our results to provide a systematic comparison between the JSCC error exponent $E_J$ and the tandem coding error exponent $E_T$, which applies if the Source and channel are separately coded. It is shown that $E_T \le E_J \le 2E_T$. We establish conditions for which $E_J > E_T$ and for which $E_J = 2E_T$. Numerical examples indicate that $E_J$ is close to $2E_T$ for many Source-channel pairs. This gain translates into a power saving larger than 2 dB for a binary Source transmitted over additive white Gaussian noise channels and Rayleigh fading channels with finite output quantization. Finally, we study the computation of the lossy JSCC error exponent under the Hamming distortion measure.

  • On the joint Source-channel coding error exponent for discrete Memoryless systems
    IEEE Transactions on Information Theory, 2006
    Co-Authors: Yangfan Zhong, F. Alajaji, L L Campbell
    Abstract:

    We investigate the computation of Csiszár's bounds for the joint Source-channel coding (JSCC) error exponent $E_J$ of a communication system consisting of a discrete Memoryless Source and a discrete Memoryless channel. We provide equivalent expressions for these bounds and derive explicit formulas for the rates where the bounds are attained. These equivalent representations can be readily computed for arbitrary Source-channel pairs via Arimoto's algorithm. When the channel's distribution satisfies a symmetry property, the bounds admit closed-form parametric expressions. We then use our results to provide a systematic comparison between the JSCC error exponent $E_J$ and the tandem coding error exponent $E_T$, which applies if the Source and channel are separately coded. It is shown that $E_T \le E_J \le 2E_T$. We establish conditions for which $E_J > E_T$ and for which $E_J = 2E_T$. Numerical examples indicate that $E_J$ is close to $2E_T$ for many Source-channel pairs. This gain translates into a power saving larger than 2 dB for a binary Source transmitted over additive white Gaussian noise (AWGN) channels and Rayleigh-fading channels with finite output quantization. Finally, we study the computation of the lossy JSCC error exponent under the Hamming distortion measure.