Compression Technique

The experts below are selected from a list of 77,427 experts worldwide, ranked by the ideXlab platform

Robert Wrembel - One of the best experts on this subject based on the ideXlab platform.

  • RLH: Bitmap Compression Technique Based on Run-Length and Huffman Encoding
    Information Systems, 2009
    Co-Authors: Michal Stabno, Robert Wrembel
    Abstract:

    In this paper we propose a technique for compressing bitmap indexes for application in data warehouses. This technique, called run-length Huffman (RLH), is based on run-length encoding and on Huffman encoding. Additionally, we present a variant of RLH, called RLH-N. In RLH-N a bitmap is divided into N-bit words that are compressed by RLH. RLH and RLH-N were implemented and experimentally compared to the well-known word-aligned hybrid (WAH) bitmap compression technique, which has been reported to provide the shortest query execution time. The experiments discussed in this paper show that: (1) RLH-compressed bitmaps are smaller than the corresponding WAH-compressed bitmaps, regardless of the cardinality of an indexed attribute; (2) RLH-N-compressed bitmaps are smaller than the corresponding WAH-compressed bitmaps for a certain range of cardinalities of an indexed attribute; (3) RLH- and RLH-N-compressed bitmaps offer shorter query response times than WAH-compressed bitmaps for a certain range of cardinalities of an indexed attribute; and (4) RLH-N ensures shorter update times of compressed bitmaps than RLH.
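
    The abstract names the two building blocks but shows no code, so here is a minimal Python sketch of the RLH pipeline under the obvious reading: run-length encode the bitmap, then Huffman-code the run lengths. The run delimiting, code-book layout, and function names are illustrative assumptions, not the paper's implementation.

    from collections import Counter
    import heapq

    def run_lengths(bits):
        """Split a 0/1 string into runs; return (first bit, list of run lengths)."""
        runs, count = [], 1
        for prev, cur in zip(bits, bits[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append(count)
                count = 1
        runs.append(count)
        return bits[0], runs

    def huffman_codes(symbols):
        """Build a Huffman code book {symbol: bitstring} from a symbol list."""
        heap = [[w, i, {s: ''}] for i, (s, w) in enumerate(Counter(symbols).items())]
        heapq.heapify(heap)
        if len(heap) == 1:                          # degenerate: one distinct symbol
            return {s: '0' for s in heap[0][2]}
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            merged = {s: '0' + c for s, c in lo[2].items()}
            merged.update({s: '1' + c for s, c in hi[2].items()})
            heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
        return heap[0][2]

    def rlh_compress(bits):
        first, runs = run_lengths(bits)
        book = huffman_codes(runs)                  # frequent run lengths get short codes
        return first, book, ''.join(book[r] for r in runs)

    first, book, payload = rlh_compress('0000000100000011000000010000')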

  • RLH: Bitmap Compression Technique Based on Run-Length and Huffman Encoding
    Data Warehousing and OLAP, 2007
    Co-Authors: Michal Stabno, Robert Wrembel
    Abstract:

    In this paper we present a technique for compressing bitmap indexes for application in data warehouses. The developed compression technique, called Run-Length Huffman (RLH), is based on run-length encoding and on Huffman encoding. RLH was implemented and experimentally compared to the well-known Word-Aligned Hybrid (WAH) bitmap compression technique, which has been reported to provide the shortest query execution time. The experiments discussed in this paper show that RLH offers shorter query response times than WAH for certain cardinalities of indexed attributes. Moreover, bitmaps compressed with RLH are smaller than the corresponding bitmaps compressed with WAH. Additionally, we propose a modified RLH, called RLH-1024, which is designed to better support bitmap updates.
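
    Both RLH papers benchmark against WAH, so a compact sketch of WAH's word-aligned idea may help for orientation: with 32-bit words, a literal word stores 31 bitmap bits verbatim, while a fill word encodes a run of identical all-0 or all-1 31-bit groups. This simplified encoder pads the tail and omits decoding and the practical details of production WAH implementations.

    def wah_compress(bits, w=32):
        """Simplified WAH encoder: pack the bitmap into (w-1)-bit groups, then
        emit literal words (MSB = 0) or fill words (MSB = 1, next bit = fill
        value, remaining w-2 bits = run length in groups)."""
        g = w - 1
        bits = bits + '0' * (-len(bits) % g)       # pad to a whole group
        groups = [bits[i:i + g] for i in range(0, len(bits), g)]
        words, i = [], 0
        while i < len(groups):
            if groups[i] in ('0' * g, '1' * g):    # run of identical groups
                j = i
                while j < len(groups) and groups[j] == groups[i]:
                    j += 1
                words.append('1' + groups[i][0] + format(j - i, f'0{g - 1}b'))
                i = j
            else:                                  # mixed group -> literal word
                words.append('0' + groups[i])
                i += 1
        return words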

Michal Stabno - One of the best experts on this subject based on the ideXlab platform.

  • RLH: Bitmap Compression Technique Based on Run-Length and Huffman Encoding
    Information Systems, 2009
    Co-Authors: Michal Stabno, Robert Wrembel
    Abstract: as given above under Robert Wrembel.

  • RLH: Bitmap Compression Technique Based on Run-Length and Huffman Encoding
    Data Warehousing and OLAP, 2007
    Co-Authors: Michal Stabno, Robert Wrembel
    Abstract: as given above under Robert Wrembel.

Cong Liu - One of the best experts on this subject based on the ideXlab platform.

  • A Wavelet-Based Data Compression Technique for Smart Grid
    IEEE Transactions on Smart Grid, 2011
    Co-Authors: Jiaxin Ning, Wenzhong Gao, Jianhui Wang, Cong Liu
    Abstract:

    This paper proposes a wavelet-based data compression approach for the smart grid (SG). In particular, wavelet transform (WT)-based multiresolution analysis (MRA), together with its properties, is studied for its data compression and denoising capabilities for power system signals in the SG. The order-2 Daubechies wavelet and decomposition scale 5 are shown to be, respectively, the best wavelet function and the optimal decomposition scale for disturbance signals, according to the criterion of maximum wavelet energy of the wavelet coefficients (WCs). To validate the proposed method, phasor data are simulated under disturbance conditions in the IEEE New England 39-bus system. The results indicate that WT-based MRA can not only compress disturbance signals but also suppress the sinusoidal and white noise contained in them.
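
    To make the MRA pipeline concrete, here is a minimal sketch using the PyWavelets library (an assumed toolkit; the paper does not prescribe one). It decomposes a signal with the order-2 Daubechies wavelet ('db2') at scale 5, discards small detail coefficients, which is where both the compression and the denoising come from, and reconstructs. The keep-fraction thresholding rule is illustrative, not the paper's criterion.

    import numpy as np
    import pywt  # PyWavelets, assumed here

    def mra_compress(signal, wavelet='db2', level=5, keep=0.1):
        """Decompose, zero out small detail coefficients, and reconstruct.
        `keep` is the fraction of detail coefficients retained (illustrative)."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        approx, details = coeffs[0], coeffs[1:]
        # Global threshold: keep only the largest `keep` fraction of detail coefficients.
        flat = np.concatenate([np.abs(d) for d in details])
        thresh = np.quantile(flat, 1 - keep)
        details = [pywt.threshold(d, thresh, mode='hard') for d in details]
        return pywt.waverec([approx] + details, wavelet)[:len(signal)]

    # Toy usage: a 50 Hz tone with a short transient disturbance plus white noise.
    t = np.linspace(0, 1, 4096)
    x = np.sin(2 * np.pi * 50 * t) + (np.abs(t - 0.5) < 0.01) + 0.05 * np.random.randn(t.size)
    x_hat = mra_compress(x)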

Jörg Sander - One of the best experts on this subject based on the ideXlab platform.

  • Independent Quantization: An Index Compression Technique for High-Dimensional Data Spaces
    Proceedings of the 16th International Conference on Data Engineering, 2000
    Co-Authors: Stefan Berchtold, Hans-Peter P Kriegel, Hosagrahar Visvesvaraya Jagadish, Christian Boehm, Jörg Sander
    Abstract:

    Two major approaches have been proposed to efficiently process queries in databases: speeding up the search by using index structures, and speeding up the search by operating on a compressed database, such as a signature file. Both approaches have their limitations: indexing techniques are inefficient in extreme configurations, such as high-dimensional spaces, where even a simple scan may be cheaper than an index-based search; compression techniques are not very efficient in all other situations. We propose to combine both techniques to search for nearest neighbors in a high-dimensional space. For this purpose, we develop a compressed index, called the IQ-tree, with a three-level structure: the first level is a regular (flat) directory consisting of minimum bounding boxes, the second level contains data points in a compressed representation, and the third level contains the actual data. We overcome several engineering challenges in constructing an effective index structure of this type. The most significant of these is to decide how much to compress at the second level. Too much compression will lead to many needless expensive accesses to the third level. Too little compression will increase both the storage and the access cost for the first two levels. We develop a cost model and an optimization algorithm based on this cost model that permits an independent determination of the degree of compression for each second-level page to minimize expected query cost. In an experimental evaluation, we demonstrate that the IQ-tree shows a performance that is the “best of both worlds” for a wide range of data distributions and dimensionalities.
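
    A rough sketch of the second-level idea, per-page scalar quantization of the points inside a bounding box, may help. The number of bits per dimension is exactly the degree of compression that the IQ-tree's cost model chooses independently for each page; everything below (names, grid rule) is illustrative rather than the authors' code.

    import numpy as np

    def quantize_page(points, lo, hi, bits):
        """Quantize points inside the box [lo, hi] onto a 2**bits grid per
        dimension. More bits -> tighter approximation but a larger page."""
        cells = (1 << bits) - 1
        scaled = (points - lo) / (hi - lo) * cells
        return np.clip(np.rint(scaled), 0, cells).astype(np.uint32)

    def dequantize(codes, lo, hi, bits):
        """Map grid codes back to approximate coordinates (grid vertices)."""
        cells = (1 << bits) - 1
        return lo + codes.astype(np.float64) / cells * (hi - lo)

    # Toy usage: 8 bits per dimension instead of a 64-bit float (8x smaller).
    pts = np.random.rand(1000, 16)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    approx = dequantize(quantize_page(pts, lo, hi, bits=8), lo, hi, bits=8)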

Krishnendu Chakrabarty - One of the best experts on this subject based on the ideXlab platform.

  • ATS - Core-Level Compression Technique Selection and SOC Test Architecture Design
    2008 17th Asian Test Symposium, 2008
    Co-Authors: Anders Larsson, Xin Zhang, Erik G. Larsson, Krishnendu Chakrabarty
    Abstract:

    The increasing test-data volumes needed for the testing of system-on-chip (SOC) integrated circuits lead to long test-application times and high tester memory requirements. Efficient test planning and test-data compression are therefore needed. We present an analysis to highlight the fact that the impact of a test-data compression technique on test time and compression ratio is both method-dependent and TAM-width-dependent. This implies that, for a given set of compression schemes, no single scheme is optimal with respect to test-time reduction and test-data compression at all TAM widths. We therefore propose a technique in which we integrate core wrapper design, test architecture design, and test scheduling with the selection of a test-data compression technique for each core, in order to minimize the SOC test-application time and the test-data volume. Experimental results for several SOCs crafted from industrial cores demonstrate that the proposed method leads to significant reductions in test-data volume and test time.
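
    The core observation, that which scheme wins depends on the TAM width a core is assigned, suggests a per-core selection step inside the scheduler. The sketch below is a hypothetical illustration only: the cost table and the exhaustive minimum stand in for the paper's characterized schemes and integrated heuristic.

    # Hypothetical per-core selection: pick, for each core, the compression
    # scheme with the lowest test time at the TAM width the core was assigned.
    # test_time[scheme][width] would come from characterizing each scheme;
    # the numbers below are made up for illustration.
    test_time = {
        "scheme_A": {8: 900, 16: 520, 32: 400},   # compresses well, scales poorly
        "scheme_B": {8: 1100, 16: 480, 32: 250},  # needs wide TAMs to pay off
    }

    def pick_scheme(width):
        """Return the (scheme, time) pair minimizing test time at this width."""
        return min(((s, t[width]) for s, t in test_time.items()), key=lambda p: p[1])

    assigned_widths = {"core1": 8, "core2": 32}
    plan = {core: pick_scheme(w) for core, w in assigned_widths.items()}
    # core1 -> scheme_A (best at narrow TAM), core2 -> scheme_B (best at wide TAM)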

  • ITC - SOC Test Optimization with Compression-Technique Selection
    2008 IEEE International Test Conference, 2008
    Co-Authors: Anders Larsson, Xin Zhang, Erik Larsson, Krishnendu Chakrabarty
    Abstract:

    The increasing test-data volumes needed for the testing of systems-on-chip (SOCs) lead to long test times and high memory requirements. We present an analysis to highlight the fact that the impact of a test-data compression technique on test time and compression ratio is both method-dependent and TAM-width-dependent. Therefore, we propose a technique in which compression-technique selection is integrated with core wrapper design, test architecture design, and test scheduling to minimize the SOC test time and the test-data volume.

  • Nine-Coded Compression Technique for Testing Embedded Cores in SOCs
    IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2005
    Co-Authors: Mohammad Tehranipoor, Mehrdad Nourani, Krishnendu Chakrabarty
    Abstract:

    This paper presents a new test-data compression technique that uses exactly nine codewords. Our technique targets the precomputed test data of intellectual-property cores in systems-on-chip and does not require any structural information about the cores. The technique is flexible in utilizing both fixed- and variable-length blocks. In spite of its simplicity, it provides a significant reduction in test-data volume and test-application time. The decompression logic is very small and can be implemented fully independently of the precomputed test-data set. Our technique is flexible and can be efficiently adopted for single- or multiple-scan-chain designs. Experimental results for the ISCAS'89 benchmarks illustrate the flexibility and efficiency of the proposed technique.
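
    The abstracts do not reproduce the nine-codeword table, so the sketch below substitutes a deliberately simplified block code in the same spirit: classify each fixed-length block and spend short codewords on the highly compressible cases. It is a stand-in for, not a reproduction of, the authors' nine-coded scheme.

    def block_compress(bits, block=8):
        """Simplified stand-in for a small-codeword block code: all-0 and all-1
        blocks get 2-bit codewords; anything else is a 2-bit prefix + literal.
        The real nine-coded technique uses a richer, 9-entry codeword table."""
        bits = bits + '0' * (-len(bits) % block)      # pad the tail block
        out = []
        for i in range(0, len(bits), block):
            b = bits[i:i + block]
            if b == '0' * block:
                out.append('00')                      # all-zero block
            elif b == '1' * block:
                out.append('01')                      # all-one block
            else:
                out.append('10' + b)                  # escape + literal block
        return ''.join(out)

    # Scan test data is dominated by long 0/1 runs, which is what this exploits.
    print(block_compress('0000000000000000111111110110'))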

  • Nine-Coded Compression Technique with Application to Reduced Pin-Count Testing and Flexible On-Chip Decompression
    Design, Automation and Test in Europe (DATE), 2004
    Co-Authors: M H Tehranipour, Mehrdad Nourani, Krishnendu Chakrabarty
    Abstract:

    This paper presents a new test-data compression technique based on a compression code that uses exactly nine codewords. In spite of its simplicity, it provides a significant reduction in test-data volume and test-application time. In addition, the decompression logic is very small and independent of the precomputed test-data set. Our technique leaves many don't-care bits unchanged in the compressed test set, and these bits can be filled randomly to detect non-modeled faults. The proposed technique can be efficiently adopted for single- or multiple-scan-chain designs to reduce test-application time and pin requirements. Experimental results for the ISCAS'89 benchmarks illustrate the flexibility and efficiency of the proposed technique.
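
    The random filling of the surviving don't-care bits mentioned above is simple enough to show directly. This sketch assumes test cubes are given as strings over {0, 1, X}; the representation is an assumption for illustration.

    import random

    def random_x_fill(cube, seed=None):
        """Replace don't-care bits ('X') in a test cube with random values.
        The Xs survived compression unchanged, so filling them randomly is
        free and gives the pattern a chance to detect non-modeled faults."""
        rng = random.Random(seed)
        return ''.join(rng.choice('01') if c == 'X' else c for c in cube)

    print(random_x_fill('1XX0XXXX01'))  # the Xs become random bits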