Progressive Transmission

The experts below are selected from a list of 17,520 experts worldwide, ranked by the ideXlab platform.

Eve A Riskin - One of the best experts on this subject based on the ideXlab platform.

  • Progressive Transmission of images using MAP detection over channels with memory
    IEEE Transactions on Image Processing, 1999
    Co-Authors: B S Srinivas, Richard E Ladner, M Azizoglu, Eve A Riskin
    Abstract:

    We propose a new maximum a posteriori (MAP) detector, without the need for explicit channel coding, to lessen the impact of communication channel errors on compressed image sources. The MAP detector exploits the spatial correlation in the compressed bitstream as well as the temporal memory in the channel to correct channel errors. We first present a technique for computing the residual redundancy inherent in a compressed grayscale image (compressed using VQ). The performance of the proposed MAP detector is compared to that of a memoryless MAP detector. We also investigate the dependence of the performance on memory characteristics of the Gilbert-Elliott channel as well as average channel error rate. Finally, we study the robustness of the proposed MAP detector's performance to estimation errors.
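
    To make the channel model concrete: the Gilbert-Elliott channel is a two-state Markov channel whose burst memory the MAP detector exploits. The sketch below only simulates such a channel; the transition probabilities and error rates are hypothetical placeholders, and this is not the detector from the paper.

    ```python
    import random

    def gilbert_elliott(bits, p_g2b=0.01, p_b2g=0.10, err_good=0.001, err_bad=0.2, seed=0):
        """Flip bits according to a two-state (good/bad) Markov channel."""
        rng = random.Random(seed)
        state_bad = False
        out = []
        for b in bits:
            err = err_bad if state_bad else err_good
            out.append(b ^ (rng.random() < err))
            # Markov state transition: this memory is what the MAP detector exploits.
            if state_bad:
                state_bad = rng.random() >= p_b2g
            else:
                state_bad = rng.random() < p_g2b
        return out

    sent = [1, 0, 1, 1, 0, 0, 1, 0] * 4
    received = gilbert_elliott(sent)
    print(sum(s != r for s, r in zip(sent, received)), "bit errors")
    ```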

  • Embedded multilevel error diffusion
    IEEE Transactions on Image Processing, 1997
    Co-Authors: J R Goldschneider, Eve A Riskin, Ping Wah Wong
    Abstract:

    We present an algorithm for image browsing systems that embeds the output of binary Floyd-Steinberg (1975) error diffusion, or a low bit-depth gray-scale or color error-diffused image, into higher bit-depth gray-scale or color error-diffused images. The benefit of this algorithm is that a low bit-depth halftoned image can be obtained directly from a higher bit-depth halftone, for printing or Progressive Transmission, simply by masking one or more bits off the higher bit-depth image. The embedding can be done in any bits of the output, although the most significant or least significant bits are most convenient. Because of the constraints that embedding places on the palette, the image quality of the higher bit-depth halftone may be reduced. To preserve image quality, we present algorithms for color palette organization, or binary index assignment, to be used as a preprocessing step to the embedding algorithm.
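
    As a toy illustration of the masking step described above (random data, not the embedding algorithm itself), assuming the binary halftone has been embedded in the most significant bit of a 3-bit multilevel halftone:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    multilevel = rng.integers(0, 8, size=(4, 6), dtype=np.uint8)  # toy 3-bit halftone

    # Mask off the lower bits: the MSB alone is the embedded bilevel halftone,
    # usable directly for printing or as the first Progressive Transmission stage.
    binary = (multilevel >> 2) & 1
    print(binary)
    ```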

  • Codebook organization to enhance maximum a posteriori detection of Progressive Transmission of vector quantized images over noisy channels
    IEEE Transactions on Image Processing, 1996
    Co-Authors: Renyuh Wang, Eve A Riskin, Richard E Ladner
    Abstract:

    We describe a new way to organize a full-search vector quantization codebook so that images encoded with it can be sent Progressively and have resilience to channel noise. The codebook organization guarantees that the most significant bits (MSBs) of the codeword index are most important to the overall image quality and are highly correlated. Simulations show that the effective channel error rates of the MSBs can be substantially lowered by implementing a maximum a posteriori (MAP) detector similar to one suggested by Phamdo and Farvardin (see IEEE Trans. Inform. Theory, vol.40, no.1, p.156-193, 1994). The performance of the scheme is close to that of pseudo-Gray coding at lower bit error rates and outperforms it at higher error rates. No extra bits are used for channel error correction.
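
    A greatly simplified sketch of how residual redundancy can bias a bit-by-bit MAP decision. It assumes a memoryless binary symmetric channel and a first-order Markov prior on the MSB stream; the crossover probability and "stay" probability are placeholders, and this is not the detector of the paper.

    ```python
    def map_detect(received, eps=0.1, p_stay=0.9):
        """Per-bit MAP decisions combining channel likelihood with a Markov prior."""
        decisions = []
        prev = received[0]                                   # crude initialisation
        for y in received:
            scores = {}
            for x in (0, 1):
                likelihood = (1.0 - eps) if x == y else eps      # P(y | x), BSC
                prior = p_stay if x == prev else (1.0 - p_stay)  # P(x | previous bit)
                scores[x] = likelihood * prior
            prev = max(scores, key=scores.get)
            decisions.append(prev)
        return decisions

    print(map_detect([1, 1, 0, 1, 1, 1, 0, 0]))
    ```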

  • Index assignment for Progressive Transmission of full-search vector quantization
    IEEE Transactions on Image Processing, 1994
    Co-Authors: Eve A Riskin, Renyuh Wang, Richard E Ladner, Les Atlas
    Abstract:

    The authors study codeword index assignment to allow for Progressive image Transmission of fixed-rate full-search vector quantization (VQ). They develop three new methods of assigning indices to a vector quantization codebook and formulate these assignments as labels of the nodes of a full-search Progressive Transmission tree. The tree is used to design intermediate codewords for the decoder so that full-search VQ has a successive approximation character. The binary representation of the path through the tree forms the Progressive Transmission code. The tree design methods they apply are the generalized Lloyd algorithm, minimum cost perfect matching from optimization theory, and a method of principal component partitioning. Their empirical results show that the final method gives intermediate signal-to-noise ratios (SNRs) close to those obtained with tree-structured vector quantization, yet yields higher final SNRs.
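
    A toy sketch of a Progressive Transmission merge tree over VQ codewords. The paper designs the tree with the generalized Lloyd algorithm, minimum cost perfect matching, or principal component partitioning; the greedy nearest-pair merging below is only a simplified stand-in to show how intermediate codewords arise (here as unweighted midpoints).

    ```python
    import numpy as np

    def greedy_merge_tree(codewords):
        """Repeatedly merge the closest pair of nodes into an intermediate codeword."""
        nodes = [(np.asarray(c, float), None, None) for c in codewords]  # (vector, left, right)
        active = list(range(len(nodes)))
        while len(active) > 1:
            best = None
            for i in range(len(active)):
                for j in range(i + 1, len(active)):
                    a, b = active[i], active[j]
                    d = float(np.sum((nodes[a][0] - nodes[b][0]) ** 2))
                    if best is None or d < best[0]:
                        best = (d, a, b)
            _, a, b = best
            midpoint = (nodes[a][0] + nodes[b][0]) / 2.0   # intermediate codeword (unweighted)
            nodes.append((midpoint, a, b))
            active = [k for k in active if k not in (a, b)] + [len(nodes) - 1]
        return nodes                                        # last node is the root

    tree = greedy_merge_tree([[0, 0], [1, 0], [4, 4], [5, 4]])
    print(tree[-1][0])   # coarsest intermediate codeword, refined as more bits arrive
    ```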

  • Codebook organization to enhance maximum a posteriori detection of Progressive Transmission of vector quantized images over noisy channels
    International Conference on Acoustics Speech and Signal Processing, 1993
    Co-Authors: Renyuh Wang, Eve A Riskin, Richard E Ladner
    Abstract:

    The authors describe a new way to organize a full-search vector quantization codebook so that images encoded with it can be sent Progressively and have immunity against channel noise. Due to the codebook organization, the most significant bits (MSBs) of the codeword index are most important to the overall image quality and are highly correlated. Simulations show that the effective channel error rates of the MSBs can be substantially lowered by implementing a maximum a posteriori (MAP) detector similar to one suggested by N. Phamdo and N. Farvardin (1992). The performance of the scheme is close to that of pseudo-Gray coding at low bit error rates and outperforms it at higher error rates.

Richard E Ladner - One of the best experts on this subject based on the ideXlab platform.

  • Progressive Transmission of images using MAP detection over channels with memory
    IEEE Transactions on Image Processing, 1999
    Co-Authors: B S Srinivas, Richard E Ladner, M Azizoglu, Eve A Riskin
    Abstract:

    We propose a new maximum a posteriori (MAP) detector, without the need for explicit channel coding, to lessen the impact of communication channel errors on compressed image sources. The MAP detector exploits the spatial correlation in the compressed bitstream as well as the temporal memory in the channel to correct channel errors. We first present a technique for computing the residual redundancy inherent in a compressed grayscale image (compressed using VQ). The performance of the proposed MAP detector is compared to that of a memoryless MAP detector. We also investigate the dependence of the performance on memory characteristics of the Gilbert-Elliott channel as well as average channel error rate. Finally, we study the robustness of the proposed MAP detector's performance to estimation errors.

  • Codebook organization to enhance maximum a posteriori detection of Progressive Transmission of vector quantized images over noisy channels
    IEEE Transactions on Image Processing, 1996
    Co-Authors: Renyuh Wang, Eve A Riskin, Richard E Ladner
    Abstract:

    We describe a new way to organize a full-search vector quantization codebook so that images encoded with it can be sent Progressively and have resilience to channel noise. The codebook organization guarantees that the most significant bits (MSBs) of the codeword index are most important to the overall image quality and are highly correlated. Simulations show that the effective channel error rates of the MSBs can be substantially lowered by implementing a maximum a posteriori (MAP) detector similar to one suggested by Phamdo and Farvardin (see IEEE Trans. Inform. Theory, vol.40, no.1, p.156-193, 1994). The performance of the scheme is close to that of pseudo-Gray coding at lower bit error rates and outperforms it at higher error rates. No extra bits are used for channel error correction.

  • Index assignment for Progressive Transmission of full-search vector quantization
    IEEE Transactions on Image Processing, 1994
    Co-Authors: Eve A Riskin, Renyuh Wang, Richard E Ladner, Les Atlas
    Abstract:

    The authors study codeword index assignment to allow for Progressive image Transmission of fixed-rate full-search vector quantization (VQ). They develop three new methods of assigning indices to a vector quantization codebook and formulate these assignments as labels of the nodes of a full-search Progressive Transmission tree. The tree is used to design intermediate codewords for the decoder so that full-search VQ has a successive approximation character. The binary representation of the path through the tree forms the Progressive Transmission code. The tree design methods they apply are the generalized Lloyd algorithm, minimum cost perfect matching from optimization theory, and a method of principal component partitioning. Their empirical results show that the final method gives intermediate signal-to-noise ratios (SNRs) close to those obtained with tree-structured vector quantization, yet yields higher final SNRs.

  • Codebook organization to enhance maximum a posteriori detection of Progressive Transmission of vector quantized images over noisy channels
    International Conference on Acoustics Speech and Signal Processing, 1993
    Co-Authors: Renyuh Wang, Eve A Riskin, Richard E Ladner
    Abstract:

    The authors describe a new way to organize a full-search vector quantization codebook so that images encoded with it can be sent Progressively and have immunity against channel noise. Due to the codebook organization, the most significant bits (MSBs) of the codeword index are most important to the overall image quality and are highly correlated. Simulations show that the effective channel error rates of the MSBs can be substantially lowered by implementing a maximum a posteriori (MAP) detector similar to one suggested by N. Phamdo and N. Farvardin (1992). The performance of the scheme is close to that of pseudo-Gray coding at low bit error rates and outperforms it at higher error rates.

  • Index assignment for Progressive Transmission of full-search vector quantization
    International Symposium on Information Theory, 1993
    Co-Authors: Les Atlas, Renyuh Wang, Richard E Ladner
    Abstract:

    We address Progressive Transmission of full-search image vector quantization. We build a Progressive Transmission tree that defines binary mergings of codewords into successively smaller codebooks. The tree design methods we apply are the generalized Lloyd algorithm with splitting, minimum cost perfect matching, and a method of principal eigenvectors.

P P Vaidyanathan - One of the best experts on this subject based on the ideXlab platform.

  • Results on principal component filter banks: colored noise suppression and existence issues
    IEEE Transactions on Information Theory, 2001
    Co-Authors: S Akkarakaran, P P Vaidyanathan
    Abstract:

    We have made explicit the precise connection between the optimization of orthonormal filter banks (FBs) and the principal component property: the principal component filter bank (PCFB) is optimal whenever the minimization objective is a concave function of the subband variances of the FB. This explains PCFB optimality for compression, Progressive Transmission, and various hitherto unnoticed white-noise suppression applications such as subband Wiener filtering. The present work examines the nature of the FB optimization problems for such schemes when PCFBs do not exist. Using the geometry of the optimization search spaces, we explain exactly why these problems are usually analytically intractable. We show the relation between compaction filter design (i.e., variance maximization) and optimum FBs. A sequential maximization of subband variances produces a PCFB if one exists, but is otherwise suboptimal for several concave objectives. We then study PCFB optimality for colored noise suppression. Unlike the case when the noise is white, here the minimization objective is a function of both the signal and the noise subband variances. We show that for the transform coder class, if a common signal and noise PCFB (KLT) exists, it is optimal for a large class of concave objectives. Common PCFBs for general FB classes have a considerably more restricted optimality, as we show using the class of unconstrained orthonormal FBs. For this class, we also show how to find an optimum FB when the signal and noise spectra are both piecewise constant with all discontinuities at rational multiples of π.
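
    A small numerical sketch of the transform coder case mentioned above: for that class the KLT plays the role of the PCFB, and its subband variances are the eigenvalues of the input covariance. The data and the concave objective below (a sum of logarithms, related to high-rate coding gain) are illustrative assumptions only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 10000))
    x[1:] += 0.8 * x[:-1]                       # correlate the components

    R = np.cov(x)                               # input covariance matrix
    eigvals, _ = np.linalg.eigh(R)              # KLT: eigen-decomposition of R
    subband_vars = np.sort(eigvals)[::-1]       # PCFB subband variances (majorizing vector)

    def concave_objective(v):
        return float(np.sum(np.log(v)))         # a concave function of the variances

    print("KLT/PCFB subband variances:", subband_vars)
    print("objective value:", concave_objective(subband_vars))
    ```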

  • Nonuniform principal component filter banks: definitions, existence and optimality
    Proceedings of SPIE the International Society for Optical Engineering, 2000
    Co-Authors: S Akkarakaran, P P Vaidyanathan
    Abstract:

    The optimality of principal component filter banks (PCFBs) for data compression has been observed in many works to varying extents. Recent work by the authors has made explicit the precise connection between the optimality of uniform orthonormal filter banks (FBs) and the principal component property: the PCFB is optimal whenever the minimization objective is a concave function of the subband variances of the FB. This gives a unified explanation of PCFB optimality for compression, denoising and Progressive Transmission. However, not much is known for the case when the optimization is over a class of nonuniform FBs. In this paper we first define the notion of a PCFB for a class of nonuniform orthonormal FBs. We then show how it generalizes the uniform PCFBs by being optimal for a certain family of concave objectives. Lastly, we show that the existence of nonuniform PCFBs can imply severe restrictions on the input power spectrum. For example, for the class of unconstrained orthonormal nonuniform FBs with any given set of decimators that are not all equal, there is no PCFB if the input spectrum is strictly monotone.

  • Are nonuniform principal component filter banks optimal?
    European Signal Processing Conference, 2000
    Co-Authors: S Akkarakaran, P P Vaidyanathan
    Abstract:

    The notion of a principal component filter bank (PCFB) for a given class of uniform filter banks (FBs) has been well studied. Recent work by the authors has shown that PCFBs are optimal orthonormal FBs whenever the minimization objective is a concave function of the vector of subband variances of the FB. This result gives a unified explanation of PCFB optimality for Progressive Transmission, compression, noise suppression, and, as shown more recently, for use in DMT (discrete multi-tone modulation) systems. This paper generalizes such results to nonuniform FBs. We propose two distinct definitions of nonuniform PCFBs. Each definition results in PCFB optimality for certain types of concave objectives whose form is somewhat more restricted than in the case of uniform FBs. We study the existence of the defined PCFBs, and observe that it can be very delicate: small perturbations of the input spectra can sometimes destroy the existence of nonuniform PCFBs.

  • Principal component filter banks: existence issues and application to modulated filter banks
    International Symposium on Circuits and Systems, 2000
    Co-Authors: S Akkarakaran, P P Vaidyanathan
    Abstract:

    Principal component filter banks (PCFBs) sequentially compress most of the input signal energy into the first few subbands, and are mathematically defined using the notion of majorization. In a series of recent works, we have exploited connections between majorization and convexity theory to provide a unified explanation of PCFB optimality for numerous signal processing problems involving compression, noise suppression and Progressive Transmission. However, PCFBs are known to exist for all input spectra only for three special classes of orthonormal filter banks (FBs): any class of two-channel FBs, the transform coder class and the unconstrained class. This paper uses the developed theory to describe techniques to examine the existence of PCFBs. We prove that the classes of DFT and cosine-modulated FBs do not have PCFBs for large families of input spectra. This result is new and quite different from most known facts on the nonexistence of PCFBs, which usually involve very specific examples and proofs with numerical optimizations.
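
    The majorization relation used to define a PCFB can be checked directly. A small helper with toy vectors (equality of the totals is tested with a tolerance):

    ```python
    import numpy as np

    def majorizes(a, b, tol=1e-9):
        """True if vector a majorizes vector b (equal sums, dominating partial sums)."""
        a = np.sort(np.asarray(a, float))[::-1]
        b = np.sort(np.asarray(b, float))[::-1]
        if a.shape != b.shape or abs(a.sum() - b.sum()) > tol:
            return False
        return bool(np.all(np.cumsum(a) >= np.cumsum(b) - tol))

    # A PCFB's subband-variance vector majorizes that of every other FB in its class.
    print(majorizes([3.0, 1.5, 0.5], [2.0, 2.0, 1.0]))   # True
    print(majorizes([2.0, 2.0, 1.0], [3.0, 1.5, 0.5]))   # False
    ```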

Ping Wah Wong - One of the best experts on this subject based on the ideXlab platform.

  • Embedded multilevel error diffusion
    IEEE Transactions on Image Processing, 1997
    Co-Authors: J R Goldschneider, Eve A Riskin, Ping Wah Wong
    Abstract:

    We present an algorithm for image browsing systems that embeds the output of binary Floyd-Steinberg (1975) error diffusion, or a low bit-depth gray-scale or color error-diffused image, into higher bit-depth gray-scale or color error-diffused images. The benefit of this algorithm is that a low bit-depth halftoned image can be obtained directly from a higher bit-depth halftone, for printing or Progressive Transmission, simply by masking one or more bits off the higher bit-depth image. The embedding can be done in any bits of the output, although the most significant or least significant bits are most convenient. Because of the constraints that embedding places on the palette, the image quality of the higher bit-depth halftone may be reduced. To preserve image quality, we present algorithms for color palette organization, or binary index assignment, to be used as a preprocessing step to the embedding algorithm.

  • Adaptive error diffusion and its application in multiresolution rendering
    IEEE Transactions on Image Processing, 1996
    Co-Authors: Ping Wah Wong
    Abstract:

    Error diffusion is a procedure for generating high-quality bilevel images from continuous-tone images so that both the continuous-tone and halftone images appear similar when observed from a distance. It is well known that certain objectionable patterning artifacts can occur in error-diffused images. Here, we consider a method for adjusting the error-diffusion filter concurrently with the error-diffusion process so that an error criterion is minimized. The minimization is performed using the least mean squares (LMS) algorithm from adaptive signal processing. Using both raster and serpentine scanning, we show that such an algorithm produces better halftone image quality than traditional error diffusion with a fixed filter. Based on the adaptive error-diffusion algorithm, we propose a method for constructing a halftone image that can be rendered at multiple resolutions. Specifically, the method generates a halftone from a continuous-tone image such that if the halftone is downsampled, the result is a binary image that is also a high-quality rendition of the continuous-tone image at reduced resolution. Such a halftone image is suitable for Progressive Transmission, and for cases where rendition at several resolutions is required. Cases with noninteger scaling factors are also considered.
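
    A greatly simplified one-dimensional sketch of the idea of adapting the error-diffusion filter with an LMS step while halftoning. The paper works in two dimensions with raster or serpentine scanning and a different error criterion; the filter length and step size here are arbitrary placeholders, and only the flavor of the LMS update is shown.

    ```python
    import numpy as np

    def adaptive_error_diffusion_1d(x, taps=3, mu=0.01):
        h = np.ones(taps) / taps            # initial diffusion weights
        past_err = np.zeros(taps)           # most recent quantization errors
        out = np.zeros_like(x)
        for n, xn in enumerate(x):
            u = xn + h @ past_err           # pixel value plus diffused past error
            out[n] = 1.0 if u >= 0.5 else 0.0
            e = u - out[n]                  # quantization error at this pixel
            h -= mu * e * past_err          # LMS-style update of the diffusion weights
            past_err = np.roll(past_err, 1)
            past_err[0] = e
        return out, h

    x = 0.5 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, 64))
    halftone, weights = adaptive_error_diffusion_1d(x)
    print(weights)
    ```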

William A Pearlman - One of the best experts on this subject based on the ideXlab platform.

  • Hyperspectral image compression using three-dimensional wavelet coding: a lossy-to-lossless solution
    2004
    Co-Authors: Xiaoli Tang, William A Pearlman, James W. Modestino
    Abstract:

    We propose an embedded, block-based, image wavelet transform coding algorithm of low complexity. The embedded Set Partitioned Embedded bloCK (SPECK) coding algorithm is modified and extended to three dimensions. The resulting algorithm, three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), efficiently encodes 3D volumetric image data by exploiting the dependencies in all dimensions. 3D-SPECK generates an embedded bit stream and therefore provides Progressive Transmission. We describe two implementations of this coding algorithm, one using an integer wavelet transform and the other a floating-point wavelet transform; the former enables lossy and lossless decompression from the same bit stream, while the latter achieves better performance in lossy compression. A wavelet packet structure and coefficient scaling are used to make the integer filter transform approximately unitary. The structure of hyperspectral images reveals spectral responses that would seem ideal candidates for compression by 3D-SPECK. We demonstrate that 3D-SPECK, a wavelet-domain compression algorithm, can preserve spectral profiles well. Compared with the lossless version of the benchmark JPEG2000 (multi-component), the lossless 3D-SPECK algorithm produces an average decrease of 3.0% in compressed file size for Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images, typical hyperspectral imagery. We also compare the lossy implementation with other state-of-the-art algorithms such as three-Dimensional Set Partitioning In Hierarchical Trees (3D-SPIHT) and JPEG2000. We conclude that this algorithm, in addition to being very flexible, retains all the desirable features of these algorithms and is highly competitive with 3D-SPIHT and better than JPEG2000 in compression efficiency.
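
    A toy sketch of one level of a separable 3-D wavelet decomposition of the kind a volumetric coder such as 3D-SPECK operates on. A plain Haar pair is used along each axis purely for brevity; the paper uses longer filters and an integer variant for lossless coding.

    ```python
    import numpy as np

    def haar_1d(x, axis):
        a = np.take(x, range(0, x.shape[axis], 2), axis=axis)
        b = np.take(x, range(1, x.shape[axis], 2), axis=axis)
        return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)   # lowpass, highpass

    def dwt3d_one_level(volume):
        subbands = {"": volume}
        for axis in range(3):                                # filter along each dimension
            split = {}
            for key, band in subbands.items():
                low, high = haar_1d(band, axis)
                split[key + "L"], split[key + "H"] = low, high
            subbands = split
        return subbands                                      # LLL, LLH, ..., HHH

    volume = np.random.default_rng(0).random((4, 4, 4))
    print({k: v.shape for k, v in dwt3d_one_level(volume).items()})
    ```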

  • Embedded video subband coding with 3D SPIHT
    2002
    Co-Authors: William A Pearlman, Zixiang Xiong
    Abstract:

    This chapter is devoted to the exposition of a complete video coding system based on coding of three-dimensional (wavelet) subbands with the SPIHT (set partitioning in hierarchical trees) coding algorithm. The SPIHT algorithm, which has proved so successful in still image coding, is shown to be quite effective in video coding as well, while retaining its attributes of complete embeddedness and scalability by fidelity and resolution. Three-dimensional spatio-temporal orientation trees coupled with powerful SPIHT sorting and refinement render the 3D SPIHT video coder so efficient that it provides performance superior to that of MPEG-2 and comparable to that of H.263 with minimal system complexity. Extension to color-embedded video coding is accomplished without explicit bit allocation, and can be used for any color plane representation. In addition to being rate scalable, the proposed video coder allows multiresolution scalability in encoding and decoding in both time and space from one bit stream. These attributes of scalability, lacking in MPEG-2 and H.263, along with many desirable features, such as full embeddedness for Progressive Transmission, precise rate control for constant bit-rate (CBR) traffic, and low complexity for possible software-only video applications, make the proposed video coder an attractive candidate for multimedia applications. Moreover, the codec is fast and efficient from low to high rates, obviating the need for a different standard for each rate range.
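
    The bit-plane significance test that drives SPIHT-style set partitioning can be written in a few lines (illustration only; the real 3D coder maintains lists of insignificant sets and pixels over spatio-temporal descendant trees):

    ```python
    import numpy as np

    def significant(coeffs, n):
        """A set is significant at bit-plane n if any |coefficient| >= 2**n."""
        return bool(np.max(np.abs(coeffs)) >= (1 << n))

    block = np.array([3, -17, 6, 1, -2, 9])
    top = int(np.floor(np.log2(np.max(np.abs(block)))))   # most significant bit-plane
    for n in range(top, -1, -1):
        print(f"bit-plane {n}: significant = {significant(block, n)}")
    ```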

  • Low bit-rate scalable video coding with 3-D set partitioning in hierarchical trees (3-D SPIHT)
    IEEE Transactions on Circuits and Systems for Video Technology, 2000
    Co-Authors: Beongjo Kim, Zixiang Xiong, William A Pearlman
    Abstract:

    We propose a low bit-rate embedded video coding scheme that utilizes a 3-D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, which has proved so successful in still image coding. Three-dimensional spatio-temporal orientation trees coupled with powerful SPIHT sorting and refinement render the 3-D SPIHT video coder so efficient that it provides performance comparable to H.263, objectively and subjectively, when operated at bit rates of 30 to 60 kbit/s with minimal system complexity. Extension to color-embedded video coding is accomplished without explicit bit allocation, and can be used for any color plane representation. In addition to being rate scalable, the proposed video coder allows multiresolution scalability in encoding and decoding in both time and space from one bit stream. This added functionality, along with many desirable attributes such as full embeddedness for Progressive Transmission, precise rate control for constant bit-rate traffic, and low complexity for possible software-only video applications, makes the proposed video coder an attractive candidate for multimedia applications.

  • An image multiresolution representation for lossless and lossy compression
    IEEE Transactions on Image Processing, 1996
    Co-Authors: Amir Said, William A Pearlman
    Abstract:

    We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncations. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure and can efficiently compress the transformed image for Progressive Transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and at the same time the rate-distortion performance is comparable to that of the most efficient lossy compression methods.
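
    A minimal example of a reversible transform computed with only integer additions and bit shifts, in the spirit of the transform described above. This is the simple S transform (integer average and difference) applied along one dimension, shown here only to illustrate exact integer invertibility; it is not the full transform of the paper.

    ```python
    import numpy as np

    def s_forward(x):
        x = np.asarray(x, dtype=np.int64)
        a, b = x[0::2], x[1::2]                 # even/odd samples (even length assumed)
        return (a + b) >> 1, a - b              # integer average (floor) and difference

    def s_inverse(low, high):
        a = low + ((high + 1) >> 1)             # exact integer inverse
        b = a - high
        x = np.empty(2 * low.size, dtype=np.int64)
        x[0::2], x[1::2] = a, b
        return x

    x = np.array([12, 10, 9, 14, 200, 198, 3, 0])
    low, high = s_forward(x)
    print(np.array_equal(x, s_inverse(low, high)))   # True: losslessly recoverable
    ```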