Quantization Bin

The experts below were selected from a list of 126 experts worldwide, ranked by the ideXlab platform.

Gene Cheung - One of the best experts on this subject based on the ideXlab platform.

  • Soft Decoding of Light Field Images Using POCS and Fast Graph Spectral Filters
    2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018
    Co-Authors: Shuai Yang, Gene Cheung
    Abstract:

    Light field data captured by a lenslet-based image sensor is typically demosaicked, aligned and rearranged into a series of sub-aperture (viewpoint) images before a disparity-compensated coding scheme is employed for compression. In this paper, we focus on the problem of soft decoding of block-based compressed sub-aperture images at the decoder: given the Quantization Bin indices of the DCT coefficients of non-overlapping code blocks, we select appropriate coefficient values that are low-pass filtered using graph spectral filters and view-consistent across sub-aperture images via projections onto convex sets (POCS). Specifically, after an initial pixel estimate, we low-pass filter each pixel block using accelerated graph filters based on the Lanczos method. We then map the filtered pixels to a neighborhood of sub-aperture images based on estimated disparity to enforce the indexed Quantization Bin constraints of multiple images. Experimental results show that our algorithm achieves a PSNR gain of 2.34 dB over JPEG hard decoding.
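
    The interplay of the two constraint sets is easier to see in code. Below is a minimal single-image sketch of the POCS iteration, assuming an 8×8 block, a uniform 4-connected grid graph, and a rounding quantizer; it applies the low-pass graph filter exactly via a linear solve instead of the paper's Lanczos acceleration, and omits the cross-view disparity mapping. Each pass projects the estimate onto the smooth-signal set and then back onto the set of blocks consistent with the transmitted bin indices.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def grid_laplacian(n):
        """Combinatorial Laplacian of an n-by-n 4-connected grid graph."""
        N = n * n
        L = np.zeros((N, N))
        for r in range(n):
            for c in range(n):
                i = r * n + c
                for dr, dc in ((0, 1), (1, 0)):
                    rr, cc = r + dr, c + dc
                    if rr < n and cc < n:
                        j = rr * n + cc
                        L[i, i] += 1; L[j, j] += 1
                        L[i, j] -= 1; L[j, i] -= 1
        return L

    def soft_decode_block(q_idx, Q, n=8, tau=0.5, iters=20):
        """POCS-style soft decoding of one n-by-n block.

        q_idx : integer Quantization Bin indices of the block's DCT coefficients
        Q     : quantization step sizes (same shape as q_idx)
        """
        A = np.eye(n * n) + tau * grid_laplacian(n)      # low-pass filter (I + tau*L)^-1
        lo, hi = (q_idx - 0.5) * Q, (q_idx + 0.5) * Q    # quantization bin bounds
        x = idctn(q_idx * Q, norm='ortho')               # hard-decoded initial estimate
        for _ in range(iters):
            x = np.linalg.solve(A, x.ravel()).reshape(n, n)  # graph smoothing
            c = np.clip(dctn(x, norm='ortho'), lo, hi)       # project onto bins
            x = idctn(c, norm='ortho')
        return x
    ```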

  • Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images
    IEEE Transactions on Image Processing, 2018
    Co-Authors: Gene Cheung, Debin Zhao
    Abstract:

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates to a large storage cost. However, there exists an asymmetry between upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser Quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine Quantization Bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored to specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme, to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
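
    The storage saving comes from re-quantization, and the lossless mode hinges on when a coarse bin pins down the fine bin uniquely. A minimal sketch of that arithmetic, assuming rounding quantizers with fine step Qf and coarse step Qc (function names are illustrative, not from the paper):

    ```python
    import numpy as np

    def requantize(q_fine, Qf, Qc):
        """Store the coarse bin index of the fine-dequantized coefficient."""
        return int(np.round(q_fine * Qf / Qc))

    def candidate_fine_bins(q_coarse, Qf, Qc):
        """Fine bin indices whose reconstruction level k*Qf falls inside the
        coarse bin [(q_coarse - 0.5)*Qc, (q_coarse + 0.5)*Qc]."""
        lo, hi = (q_coarse - 0.5) * Qc, (q_coarse + 0.5) * Qc
        k_min, k_max = int(np.ceil(lo / Qf)), int(np.floor(hi / Qf))
        return list(range(k_min, k_max + 1))

    # If the list has one element, reverse mapping is deterministic (lossless
    # mode); otherwise the priors must choose among the candidates.
    print(candidate_fine_bins(requantize(7, Qf=4, Qc=10), Qf=4, Qc=10))  # [7, 8]
    ```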

  • Quantization Bin matching for cloud storage of JPEG images
    2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016
    Co-Authors: Gene Cheung, Debin Zhao
    Abstract:

    Social media sites like Facebook are obligated to store all photos uploaded by an ever-growing user base, which translates to an increasingly expensive storage cost, yet only a fraction of uploaded images are revisited thereafter. In this paper, we propose a cloud storage system that trades off computation on a small fraction of requested images against storage of all photos. The key idea is to re-encode uploaded JPEG photos with coarser Quantization parameters (QP) for permanent storage, then exploit a signal sparsity prior during inverse mapping to recover the fine Quantization Bin indices via a maximum a posteriori (MAP) formulation. Because by design the system guarantees recovery of the original compressed image (either with exactly the same fine Quantization Bin indices or with visual quality indistinguishable to the human eye), from the user's viewpoint it is normal cloud storage, while from the operator's viewpoint there is pure compression gain and hence lower storage cost. Experimental results show that our storage system can reap significant storage savings (up to 20%) at roughly the same image PSNR (within 0.13 dB).
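
    When the coarse bin admits several fine bins, the MAP formulation picks the most probable one under the prior. A hedged sketch, substituting a simple Laplacian prior on DCT coefficient magnitudes for the paper's learned sparsity prior (candidate_fine_bins is the helper sketched above):

    ```python
    def map_fine_bin(q_coarse, Qf, Qc, b=4.0):
        """MAP choice among the fine bins consistent with the stored coarse bin,
        under an illustrative Laplacian prior p(c) ~ exp(-|c| / b): the negative
        log-prior |k*Qf| / b is minimized, i.e., the smallest-magnitude level."""
        cands = candidate_fine_bins(q_coarse, Qf, Qc)
        return min(cands, key=lambda k: abs(k * Qf) / b)

    print(map_fine_bin(3, Qf=4, Qc=10))   # -> 7, the lower-magnitude candidate
    ```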

Debin Zhao - One of the best experts on this subject based on the ideXlab platform.

  • Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images
    IEEE Transactions on Image Processing, 2018
    Co-Authors: Gene Cheung, Debin Zhao
    Abstract:

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates to a large storage cost. However, there exists an asymmetry between upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser Quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine Quantization Bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored to specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme, to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).

  • Quantization Bin matching for cloud storage of JPEG images
    2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016
    Co-Authors: Gene Cheung, Debin Zhao
    Abstract:

    Social media sites like Facebook are obligated to store all photos uploaded by an ever-growing user base, which translates to an increasingly expensive storage cost, yet only a fraction of uploaded images are revisited thereafter. In this paper, we propose a cloud storage system that trades off computation on a small fraction of requested images against storage of all photos. The key idea is to re-encode uploaded JPEG photos with coarser Quantization parameters (QP) for permanent storage, then exploit a signal sparsity prior during inverse mapping to recover the fine Quantization Bin indices via a maximum a posteriori (MAP) formulation. Because by design the system guarantees recovery of the original compressed image (either with exactly the same fine Quantization Bin indices or with visual quality indistinguishable to the human eye), from the user's viewpoint it is normal cloud storage, while from the operator's viewpoint there is pure compression gain and hence lower storage cost. Experimental results show that our storage system can reap significant storage savings (up to 20%) at roughly the same image PSNR (within 0.13 dB).

  • Inter-block consistent soft decoding of JPEG images with sparsity and graph-signal smoothness priors
    2015 IEEE International Conference on Image Processing (ICIP), 2015
    Co-Authors: Gene Cheung, Xiaolin Wu, Debin Zhao
    Abstract:

    Given the prevalence of JPEG-compressed images on the Internet, image reconstruction from the compressed format remains an important and practical problem. Instead of simply reconstructing a pixel block from the centers of the assigned DCT coefficient Quantization Bins (hard decoding), we propose to jointly reconstruct a neighborhood group of pixel patches using two image priors while satisfying the Quantization Bin constraints. First, we assume that a pixel patch can be approximated as a sparse linear combination of atoms from an offline-learned over-complete dictionary. Second, we assume that a patch, when interpreted as a graph-signal, is smooth with respect to an appropriately defined graph that captures the estimated structure of the target image. Finally, neighboring patches in the optimization have sufficient overlaps and are forced to be consistent, so that the blocking artifacts typical of JPEG-decoded images are avoided. To find the optimal group of patches, we formulate a constrained optimization problem and propose a fast alternating algorithm to find locally optimal solutions. Experimental results show that our proposed algorithm outperforms state-of-the-art soft decoding algorithms by up to 1.47 dB in PSNR.
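
    Of the two priors, the graph-signal smoothness term is the less familiar one. Below is a minimal sketch, assuming a 4-connected grid graph whose edge weights are computed from an initial patch estimate so that smoothing respects the estimated image structure (weak weights across edges). The closed-form solve corresponds to one smoothness subproblem of the alternating algorithm; the sparse-coding step and the bin constraints are omitted.

    ```python
    import numpy as np

    def similarity_graph_laplacian(patch, sigma=10.0):
        """Laplacian of a 4-connected grid graph with edge weights that decay
        with pixel-intensity differences in the initial patch estimate."""
        n = patch.shape[0]
        L = np.zeros((n * n, n * n))
        for r in range(n):
            for c in range(n):
                i = r * n + c
                for dr, dc in ((0, 1), (1, 0)):
                    rr, cc = r + dr, c + dc
                    if rr < n and cc < n:
                        j = rr * n + cc
                        w = np.exp(-(patch[r, c] - patch[rr, cc]) ** 2
                                   / (2 * sigma ** 2))
                        L[i, i] += w; L[j, j] += w
                        L[i, j] -= w; L[j, i] -= w
        return L

    def smooth_patch(y, lam=0.3):
        """Solve min_x ||x - y||^2 + lam * x^T L x  =>  x = (I + lam*L)^{-1} y."""
        n = y.shape[0]
        L = similarity_graph_laplacian(y)
        x = np.linalg.solve(np.eye(n * n) + lam * L, y.ravel())
        return x.reshape(n, n)
    ```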

Feng Zhou - One of the best experts on this subject based on the ideXlab platform.

  • A Novel Technique for Improving Temperature Independency of Ring-ADC
    2008 Design, Automation and Test in Europe (DATE), 2008
    Co-Authors: Shun Li, Hua Chen, Feng Zhou
    Abstract:

    A new temperature compensation technique for ring-oscillator-based ADCs is proposed in this paper. It employs a novel fixed-number-based algorithm and a CTAT current biasing technique to compensate for the temperature-dependent variations of the output, thus eliminating the need for digital calibration. Simulation results show that, with the proposed technique, the resolution over the temperature range of 0 °C to 100 °C can reach a 2 mV Quantization Bin size with an input voltage span of 120 mV, at a sampling frequency of fs = 100 kHz.
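
    For orientation, the quoted figures imply roughly six bits of effective resolution, assuming the 2 mV bin size holds uniformly across the 120 mV span:

    ```python
    import math

    # Effective resolution implied by the reported figures (assumption: the
    # quoted 2 mV Quantization Bin size is uniform across the 120 mV span).
    span_mV, bin_mV = 120.0, 2.0
    levels = span_mV / bin_mV            # 60 quantization bins
    bits = math.log2(levels)             # ~5.9 effective bits
    print(f"{levels:.0f} bins -> {bits:.2f} bits of resolution")
    ```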

Jalal M. Fadili - One of the best experts on this subject based on the ideXlab platform.

  • Stabilizing Nonuniformly Quantized Compressed Sensing With Scalar Companders
    IEEE Transactions on Information Theory, 2013
    Co-Authors: Laurent Jacques, David K. Hammond, Jalal M. Fadili
    Abstract:

    This paper addresses the problem of stably recovering sparse or compressible signals from compressed sensing measurements that have undergone optimal nonuniform scalar Quantization, i.e., minimizing the common ℓ2-norm distortion. Generally, this quantized compressed sensing (QCS) problem is solved by minimizing the ℓ1-norm constrained by the ℓ2-norm distortion. In such cases, remeasurement and Quantization of the reconstructed signal do not necessarily match the initial observations, showing that the whole QCS model is not consistent. Our approach considers instead that the Quantization distortion more closely resembles heteroscedastic uniform noise, with variance depending on the observed Quantization Bin. Generalizing our previous work on uniform Quantization, we show that for nonuniform quantizers described by the "compander" formalism, the Quantization distortion may be better characterized as having a bounded weighted ℓp-norm (p ≥ 2), for a particular weighting. We develop a new reconstruction approach, termed Generalized Basis Pursuit DeNoise (GBPDN), which minimizes the ℓ1-norm of the reconstructed signal subject to this weighted ℓp-norm fidelity constraint. We prove that, for a standard Gaussian sensing matrix and K-sparse or compressible signals in ℝ^N, given at least Ω((K log(N/K))^(p/2)) measurements, i.e., under a strongly oversampled QCS scenario, GBPDN is ℓ2-ℓ1 instance optimal and stably recovers all such sparse or compressible signals. The reconstruction error decreases as O(2^(−B)/√(p+1)) given a budget of B bits per measurement. This yields a reduction of the reconstruction error by a factor of √(p+1) compared to that produced by ℓ2-norm constrained decoders. We also propose a primal-dual proximal splitting scheme to solve the GBPDN program, which is efficient for large-scale problems. Interestingly, extensive simulations testing the effectiveness of GBPDN confirm the trend predicted by the theory: the reconstruction error can indeed be reduced by increasing p, but this is achieved in a much less stringent oversampling regime than expected from the theoretical bounds. Besides the QCS scenario, we also show that GBPDN applies straightforwardly to the related case of CS measurements corrupted by heteroscedastic generalized Gaussian noise, with provable reconstruction error reduction.
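
    As a concrete reading of the GBPDN program, here is a minimal sketch using cvxpy, with a plain uniform quantizer standing in for the compander and an oracle fidelity radius for the demo; the weights w, exponent p, and radius eps are inputs that the paper derives from the compander analysis.

    ```python
    import cvxpy as cp
    import numpy as np

    def gbpdn(A, y, w, p, eps):
        """GBPDN sketch: min ||x||_1  s.t.  ||w . (A x - y)||_p <= eps."""
        x = cp.Variable(A.shape[1])
        constraint = [cp.norm(cp.multiply(w, A @ x - y), p) <= eps]
        cp.Problem(cp.Minimize(cp.norm1(x)), constraint).solve()
        return x.value

    # Toy QCS instance: K-sparse signal, Gaussian sensing, uniform quantization.
    rng = np.random.default_rng(0)
    N, M, K = 64, 48, 4
    A = rng.standard_normal((M, N)) / np.sqrt(M)
    x0 = np.zeros(N)
    x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    delta = 0.1
    y = delta * np.round(A @ x0 / delta)          # quantized measurements
    eps = np.linalg.norm(A @ x0 - y, 4)           # oracle radius, demo only
    x_hat = gbpdn(A, y, np.ones(M), p=4, eps=eps)
    ```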

Mostafa El-khamy - One of the best experts on this subject based on the ideXlab platform.

  • Variable Rate Deep Image Compression With a Conditional Autoencoder
    2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019
    Co-Authors: Yoojin Choi, Mostafa El-khamy
    Abstract:

    In this paper, we propose a novel variable-rate learned image compression framework with a conditional autoencoder. Previous learning-based image compression methods mostly require training separate networks for different compression rates so they can yield compressed images of varying quality. In contrast, we train and deploy only one variable-rate image compression network implemented with a conditional autoencoder. We provide two rate control parameters, i.e., the Lagrange multiplier and the Quantization Bin size, which are given as conditioning variables to the network. Coarse rate adaptation to a target is performed by changing the Lagrange multiplier, while the rate can be further fine-tuned by adjusting the Bin size used in quantizing the encoded representation. Our experimental results show that the proposed scheme provides a better rate-distortion trade-off than the traditional variable-rate image compression codecs such as JPEG2000 and BPG. Our model also shows comparable and sometimes better performance than the state-of-the-art learned image compression models that deploy multiple networks trained for varying rates.
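
    The two knobs reduce to a weighting in the training loss and a scaling in the quantizer. A minimal numpy sketch of just that mechanism, assuming a stand-in latent vector rather than an actual conditional autoencoder (function names are illustrative):

    ```python
    import numpy as np

    def quantize(y, delta):
        """Uniform scalar quantization of the latent with Bin size delta;
        a larger delta merges bins, lowering the rate at some distortion cost."""
        return delta * np.round(y / delta)

    def rd_objective(rate, distortion, lam):
        """Rate-distortion training objective R + lam * D (one common
        convention); one conditioned network covers all lambda values
        instead of one network per rate."""
        return rate + lam * distortion

    y = np.random.default_rng(1).standard_normal(1000)   # stand-in latent
    for delta in (0.25, 0.5, 1.0, 2.0):
        n_symbols = np.unique(quantize(y, delta)).size
        print(f"delta={delta}: {n_symbols} distinct symbols")  # fewer -> lower rate
    ```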