Lossless Image Compression

Martin Fuchs - One of the best experts on this subject based on the ideXlab platform.

  • GCPR - High Speed Lossless Image Compression
    Lecture Notes in Computer Science, 2015
    Co-Authors: Hendrik Siedelmann, Alexander Wender, Martin Fuchs
    Abstract:

    We introduce a simple approach to lossless image compression which makes use of SIMD vectorization at every processing step to provide very high speed on modern CPUs. This is achieved by basing the compression on delta coding for prediction and bit packing for the actual compression, allowing a tunable tradeoff between efficiency and speed via the block size used for bit packing. The maximum achievable speed surpasses main memory bandwidth on the tested CPU, as well as the speed of all previous methods that achieve at least the same coding efficiency.
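
    To make the pipeline concrete, here is a minimal scalar sketch of the delta-coding and bit-packing steps the abstract describes, with a round trip to show losslessness. The paper's implementation is SIMD-vectorized native code; this Python version only illustrates the data flow, and the zigzag mapping and block handling are illustrative assumptions rather than details taken from the paper.

    def pack_block(block, prev):
        """Delta-code one block against `prev`, zigzag-map to unsigned, bit-pack.

        The per-value bit width is chosen for the whole block, which is the
        efficiency/speed knob the abstract mentions: larger blocks amortize
        header overhead, smaller blocks adapt faster to local statistics.
        """
        deltas = []
        for x in block:
            d = x - prev
            prev = x
            deltas.append(2 * d if d >= 0 else -2 * d - 1)  # zigzag mapping
        width = max(d.bit_length() for d in deltas) or 1
        packed = 0
        for d in deltas:
            packed = (packed << width) | d  # fixed `width` bits per value
        return width, packed, prev

    def unpack_block(width, packed, n, prev):
        """Invert pack_block: extract n values, undo zigzag, undo delta."""
        vals = []
        for shift in range((n - 1) * width, -1, -width):
            u = (packed >> shift) & ((1 << width) - 1)
            d = u // 2 if u % 2 == 0 else -(u + 1) // 2
            prev += d
            vals.append(prev)
        return vals, prev

    pixels = [100, 101, 103, 102, 102, 110, 111, 109]
    w, bits, _ = pack_block(pixels, prev=0)
    restored, _ = unpack_block(w, bits, len(pixels), prev=0)
    assert restored == pixels  # the round trip is exactly lossless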

A. Fukunaga - One of the best experts on this subject based on the ideXlab platform.

B. Johnson - One of the best experts on this subject based on the ideXlab platform.

  • A chip set for Lossless Image Compression
    IEEE Journal of Solid-State Circuits, 1991
    Co-Authors: Imran Ali Shah, Olu Akiwumi-Assani, B. Johnson
    Abstract:

    The authors describe two chips which form the basis of a high-speed lossless image compression/decompression system. They present the transform and coding algorithms and the main architectural features of the chips, and outline some performance specifications. Lossless compression can be achieved by a transformation process followed by entropy coding. The two application-specific integrated circuits (ASICs) perform S-transform image decomposition and the Lempel-Ziv (L-Z) type of entropy coding. The S-transform, besides decorrelating the image, provides a convenient method of hierarchical image decomposition. The data compressor/decompressor IC is a fast and efficient implementation of the L-Z algorithm. The chips can be used independently or together for image compression.
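
    In one dimension, the S-transform reduces to a simple pair transform that is exactly invertible in integer arithmetic. The sketch below shows that 1D building block; applying it to rows and then columns, and recursing on the low band, yields the hierarchical decomposition the abstract mentions. The variable names and the test driver are mine, not the paper's, and the hardware details are of course not captured here.

    def s_transform(row):
        """Forward 1D S-transform: split pairs into (low, high) subbands.

        low  = floor((a + b) / 2)  -- a half-resolution coarse signal
        high = a - b               -- the detail needed to restore a and b
        """
        low, high = [], []
        for i in range(0, len(row) - 1, 2):
            a, b = row[i], row[i + 1]
            low.append((a + b) // 2)
            high.append(a - b)
        return low, high

    def inverse_s_transform(low, high):
        """Invert the pair transform exactly, using integer arithmetic only."""
        row = []
        for l, h in zip(low, high):
            a = l + (h + 1) // 2
            row.extend([a, a - h])
        return row

    row = [12, 10, 10, 11, 200, 203, 90, 90]
    lo, hi = s_transform(row)
    assert inverse_s_transform(lo, hi) == row  # exact reconstruction

    The decorrelated detail coefficients are what then go to the Lempel-Ziv entropy coder on the second chip.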

Khalid Sayood - One of the best experts on this subject based on the ideXlab platform.

  • Lossless Image Compression
    Introduction to Data Compression, 2012
    Co-Authors: Khalid Sayood
    Abstract:

    This chapter examines a number of schemes used for lossless compression of images: schemes for grayscale and color images as well as for binary images, several of which are part of international standards. The Joint Photographic Experts Group (JPEG) is a joint ISO/ITU committee responsible for developing standards for continuous-tone still-picture coding. The better-known standard produced by this group is the lossy image compression standard; however, at the time it created that famous standard, the committee also created a lossless one. The old JPEG lossless still-compression standard provides eight different predictive schemes from which the user can select. In addition, the context adaptive lossless image compression (CALIC) scheme, which came into being in response to a 1994 call for proposals for a new lossless image compression scheme, uses both context and prediction of the pixel values. CALIC operates in two modes, one for grayscale images and another for bi-level images. One of the approaches CALIC uses to reduce the size of its alphabet is a modification of a technique called recursive indexing, which represents a large range of numbers using only a small set of symbols.
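
    Two of the techniques named above are easy to make concrete. The first function below lists the eight predictor choices of the original lossless JPEG standard, in the usual neighbor notation (a = left, b = above, c = above-left). The second is a small sketch of recursive indexing as the chapter describes it: representing an arbitrarily large number with a small symbol alphabet. CALIC's actual use of the idea is more involved than this sketch.

    def jpeg_lossless_predict(mode, a, b, c):
        """The eight predictors of the original lossless JPEG standard.

        a = left neighbor, b = neighbor above, c = neighbor above-left;
        mode 0 (no prediction) is used only in the hierarchical mode.
        """
        return [0, a, b, c,
                a + b - c,
                a + (b - c) // 2,
                b + (a - c) // 2,
                (a + b) // 2][mode]

    def recursive_index(n, m):
        """Represent n >= 0 using only the symbols 0 .. m-1.

        Emit the largest symbol (m - 1) as many times as needed, then the
        remainder; the decoder sums the run until a symbol < m - 1 appears.
        """
        q, r = divmod(n, m - 1)
        return [m - 1] * q + [r]

    assert recursive_index(9, 4) == [3, 3, 3, 0]            # 3 + 3 + 3 + 0 = 9
    assert jpeg_lossless_predict(4, 100, 104, 102) == 102   # a + b - c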

  • A differential Lossless Image Compression scheme
    IEEE Transactions on Signal Processing, 1992
    Co-Authors: Khalid Sayood, K. Anderson
    Abstract:

    A low-complexity lossless image compression algorithm suitable for real-time implementation is presented. The algorithm codes each pixel into a set of two symbols, a prefix and a suffix. The prefix is the number of most significant bits that are identical to those of a reference pixel, while the suffix consists of the remaining bits except for the leftmost one. Three methods, two fixed and one adaptive, are investigated. The results compare favorably with those obtained using other schemes.
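
    Read one way, the prefix/suffix coding amounts to the following for 8-bit pixels. The leftmost differing bit can be dropped because it is necessarily the complement of the reference pixel's bit in that position. The exact bitstream layout and the choice of reference pixel are the paper's, so treat this as a plausible sketch rather than the published algorithm.

    def encode_pixel(pixel, ref, bits=8):
        """Code pixel against ref as (prefix length, suffix bits, suffix length).

        prefix: count of most significant bits shared with ref. The first
        differing bit is implied, so the suffix holds only the bits below it.
        """
        prefix = 0
        for i in range(bits - 1, -1, -1):
            if ((pixel ^ ref) >> i) & 1:
                break
            prefix += 1
        if prefix == bits:              # identical pixels: empty suffix
            return prefix, 0, 0
        n = bits - prefix - 1           # suffix length
        return prefix, pixel & ((1 << n) - 1), n

    def decode_pixel(prefix, suffix, n, ref, bits=8):
        """Invert encode_pixel given the same reference pixel."""
        if prefix == bits:
            return ref
        shared = ref >> (n + 1)              # copy the agreed prefix bits
        implied = 1 ^ ((ref >> n) & 1)       # complement of ref's next bit
        return (shared << (n + 1)) | (implied << n) | suffix

    ref, pixel = 0b10110100, 0b10111011
    p, s, n = encode_pixel(pixel, ref)
    assert decode_pixel(p, s, n, ref) == pixel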

  • ISCAS (4) - An evolvable predictor for Lossless Image Compression
    2002 IEEE International Symposium on Circuits and Systems, Proceedings (Cat. No.02CH37353), 2002
    Co-Authors: D. Leon, Sina Balkir, Khalid Sayood
    Abstract:

    This paper presents the adaptation, via evolutionary techniques, of a pixel predictor for lossless image compression. The prediction is a linear combination of neighboring pixels; the evolutionary algorithm selects both the coefficients and the pixels involved in the prediction. Experiments on gray-level images show that the proposed system performs comparably to, and in some cases better than, existing predictive coding techniques.
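
    As an illustration of the approach, the sketch below evolves the coefficients of a four-neighbor linear predictor by mutation and truncation selection, scoring candidates by the sum of absolute prediction residuals. The neighborhood, population size, and mutation scheme here are illustrative assumptions, not the paper's actual configuration.

    import random

    NEIGHBORS = [(-1, 0), (0, -1), (-1, -1), (-1, 1)]  # N, W, NW, NE offsets

    def residual_cost(img, coeffs):
        """Sum of absolute prediction errors over the image interior."""
        h, w = len(img), len(img[0])
        cost = 0
        for y in range(1, h):
            for x in range(1, w - 1):
                pred = sum(c * img[y + dy][x + dx]
                           for c, (dy, dx) in zip(coeffs, NEIGHBORS))
                cost += abs(img[y][x] - round(pred))
        return cost

    def evolve(img, pop_size=20, generations=50):
        """Mutate coefficient vectors and keep the fittest half each round."""
        pop = [[random.uniform(-1, 1) for _ in NEIGHBORS]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda c: residual_cost(img, c))
            parents = pop[: pop_size // 2]       # truncation selection
            pop = parents + [[c + random.gauss(0, 0.05)
                              for c in random.choice(parents)]
                             for _ in range(pop_size - len(parents))]
        return min(pop, key=lambda c: residual_cost(img, c))

    img = [[(3 * x + 5 * y) % 256 for x in range(16)] for y in range(16)]
    best = evolve(img)  # the residuals under `best` would feed an entropy coder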

Hendrik Siedelmann - One of the best experts on this subject based on the ideXlab platform.

  • GCPR - High Speed Lossless Image Compression
    Lecture Notes in Computer Science, 2015
    Co-Authors: Hendrik Siedelmann, Alexander Wender, Martin Fuchs
    Abstract: identical to the entry listed under Martin Fuchs above.