Decoding Process

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 11970 Experts worldwide ranked by ideXlab platform

Petia Radeva - One of the best experts on this subject based on the ideXlab platform.

  • On the Decoding Process in Ternary Error-Correcting Output Codes
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010
    Co-Authors: Sergio Escalera, Oriol Pujol, Petia Radeva
    Abstract:

    A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-correcting output codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the Decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC Decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require a redefinition of the Decoding design. A new type of Decoding measure is proposed, and two novel Decoding strategies are defined. We evaluate the state-of-the-art coding and Decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic-sign categorization problem. The experimental results show that, following the new Decoding strategies, the performance of the ECOC design is significantly improved.
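As a concrete illustration of the decoding issue the abstract raises, the sketch below decodes a sample against a ternary code matrix while excluding zero ("do not care") positions from the distance. The normalisation by the number of coded positions is an assumption for illustration, not the paper's exact decoding measure.

```python
import numpy as np

def ecoc_decode(code_matrix, predictions):
    """Decode one sample against a ternary ECOC matrix.

    code_matrix: (n_classes, n_classifiers) with entries in {-1, 0, +1};
    predictions: (n_classifiers,) binary outputs in {-1, +1}.
    Zero positions are excluded so classes with many zeros are not
    systematically favoured or penalised (the bias the paper analyzes).
    """
    distances = []
    for row in code_matrix:
        mask = row != 0                       # ignore "do not care" positions
        disagreements = np.sum(row[mask] != predictions[mask])
        distances.append(disagreements / max(mask.sum(), 1))
    return int(np.argmin(distances))

# Toy one-vs-one style ternary matrix: 3 classes, 3 binary classifiers
M = np.array([[+1, +1,  0],
              [-1,  0, +1],
              [ 0, -1, -1]])
print(ecoc_decode(M, np.array([-1, -1, +1])))  # → 1
```

Class 1 wins here because both of its coded (non-zero) positions agree with the classifier outputs.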

I A L Rassan - One of the best experts on this subject based on the ideXlab platform.

  • Speeding Up String Matching over Compressed Text on Handheld Devices Using Tagged Sub-Optimal Code (TSC)
    Real Time Technology and Applications Symposium, 2004
    Co-Authors: A Bellaachia, I A L Rassan
    Abstract:

    Tagged suboptimal code (TSC) is a new coding technique presented in this paper to speed up string matching over compressed databases on PDAs. TSC is a variable-length suboptimal code that satisfies the minimal prefix property: its codeword boundaries can always be determined without traversing a tree or a lookup table. The TSC technique may be beneficial in many types of applications: speeding up string matching over compressed text, speeding up the Decoding Process, and serving as a general-purpose integer representation code. Experimental results show that string matching with TSC is 8.9 times faster than string matching over compressed text using Huffman encoding, and that its Decoding Process is 3 times faster. On the other hand, the compression ratio of TSC is 6% lower than that of Huffman encoding. Additionally, the TSC compression Process is 14 times faster than byte pair encoding (BPE), and TSC achieves better search performance over compressed text than the BPE scheme on handheld devices.
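The abstract does not reproduce the exact TSC codeword format, so the sketch below illustrates only the general idea it relies on: a tagged, self-delimiting variable-length code. The 3-bit-group continuation-tag scheme here is a hypothetical stand-in, not the TSC format itself; it shows how a decoder can locate a codeword boundary directly from tag bits, with no Huffman tree or lookup table.

```python
def tagged_encode(n):
    """Encode a non-negative integer as a tagged, self-delimiting bit string.

    Hypothetical illustration (NOT the paper's TSC format): each 4-bit unit
    is a 1-bit tag plus 3 payload bits; tag 0 marks the final unit, so the
    codeword boundary is visible from the tags alone.
    """
    groups = []
    while True:
        groups.append(n & 0b111)   # 3 payload bits per group
        n >>= 3
        if n == 0:
            break
    bits = ""
    for i, g in enumerate(reversed(groups)):
        tag = "1" if i < len(groups) - 1 else "0"   # 0 = last group
        bits += tag + format(g, "03b")
    return bits

def tagged_decode(bits, pos=0):
    """Decode one codeword starting at pos; return (value, next_pos)."""
    value = 0
    while True:
        tag, group = bits[pos], bits[pos + 1:pos + 4]
        value = (value << 3) | int(group, 2)
        pos += 4
        if tag == "0":
            break
    return value, pos

print(tagged_encode(10))   # → 10010010
```

Because boundaries are self-evident, a search routine can skip whole codewords in compressed text without decoding them, which is the property the speed results above depend on.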

  • Fast Searching over Compressed Text Using a New Coding Technique: Tagged Sub-Optimal Code (TSC)
    Data Compression Conference, 2004
    Co-Authors: A Bellaachia, I A L Rassan
    Abstract:

    In this paper, a new coding technique called tagged sub-optimal code (TSC) is proposed. TSC is a variable-length sub-optimal code that satisfies the minimal prefix property. The TSC technique is beneficial in many types of applications: speeding up string matching over compressed text, speeding up the Decoding Process, improving error detection and recovery during transmission, and serving as a general-purpose integer representation code. The experimental results show that string matching with TSC is 8.9 times faster than string matching over compressed text using Huffman encoding, and that its Decoding Process is 3 times faster.

Satoshi Goto - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive low power Decoding Process with temporal prediction method for common video
    2011 IEEE 7th International Colloquium on Signal Processing and its Applications, 2011
    Co-Authors: Wenxin Yu, Ning Jiang, Satoshi Goto
    Abstract:

    This paper introduces an adaptive low-power Decoding Process with a temporal prediction method for common video. The method reduces Decoding time and power consumption by skipping the Decoding Process of some frames and lowering the frame rate. Thanks to the temporal prediction, it differs from the fixed frame-skipping scheme of the temporal scalable Decoding Process with frame rate down-conversion method (TSDP): it considers the video quality loss incurred when the current frame is skipped and chooses the skipping scheme with the minimum cost, so it can also be applied to common (non-surveillance) video. Compared with TSDP, the temporal prediction method improves video quality (PSNR) by about 0.01 - 2.4 dB in the experimental common-video cases. The Decoding time reduction depends on the number of skipped frames and amounts to about 65% - 86% of the frame rate reduction in the experimental cases.
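The minimum-cost skip selection described above can be sketched as a greedy choice under an assumed per-frame quality-loss model. Both the cost values and the greedy rule are illustrative assumptions, not the paper's actual algorithm:

```python
def choose_skips(frame_costs, target_ratio):
    """Pick which frames to skip, greedily minimising total quality loss.

    frame_costs: assumed per-frame quality-loss estimates if that frame
    is skipped (e.g. low for static frames, high for motion-heavy ones);
    target_ratio: fraction of frames to skip for the desired frame rate.
    """
    n_skip = int(len(frame_costs) * target_ratio)
    order = sorted(range(len(frame_costs)), key=lambda i: frame_costs[i])
    return sorted(order[:n_skip])   # skipped-frame indices in display order

# Skip 1/3 of six frames, preferring the low-cost (static) ones
print(choose_skips([0.9, 0.1, 0.4, 0.05, 0.8, 0.3], 1/3))  # → [1, 3]
```

The point of the adaptive scheme is exactly this content dependence: a fixed scheme would skip frames at regular positions regardless of their cost.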

  • Temporal scalable Decoding Process with frame rate conversion method for surveillance video
    Pacific Rim Conference on Multimedia, 2010
    Co-Authors: Wenxin Yu, Satoshi Goto
    Abstract:

    This paper proposes a temporal scalable Decoding Process with a frame rate conversion method for surveillance video. The method reduces the computational complexity of the Decoding Process while preserving video quality, and makes single-layer bit-stream sources much more flexible for various terminal devices. It is realized through a frame-skipping concept together with the proposed reference frame index decision algorithm, motion vector composition algorithm, and block-partition mode decision algorithm. Compared with frame rate conversion in a transcoding Process, it has much lower complexity and is more flexible. The experimental results show that the reduction in computational complexity (Decoding time) depends on the number of skipped frames: the more frames are skipped, the greater the reduction. The PSNR loss is very small (about 0.1 ∼ 0.2 dB) for B-frame skipping, while 2/3 P-frame skipping incurs a PSNR loss of about 0.7 ∼ 2 dB (an SSIM loss of only 0.002 ∼ 0.007) and reduces the computational complexity by about 60%.

  • Adaptive solution of temporal scalable Decoding Process with frame rate conversion method for surveillance video
    2010 International Symposium on Intelligent Signal Processing and Communication Systems, 2010
    Co-Authors: Wenxin Yu, Satoshi Goto
    Abstract:

    This paper proposes an adaptive solution for the temporal scalable Decoding Process with a frame rate conversion method for surveillance video. It realizes an adaptive skipping scheme in the temporal scalable Decoding Process based on the content of the pictures. By analyzing the probabilistic relationship between motion vector energy and the video quality loss of a frame, it chooses a suitable form of the motion vector value to quantify the video quality loss caused by skipping that frame, and it uses a selection algorithm based on an energy accumulation principle to realize adaptive frame skipping. With this frame rate-down conversion algorithm, the PSNR is improved by about 0.2-1.4 dB (compared with the fixed frame-skipping scheme) in the different skipping cases, while the loss of Decoding time reduction is below 5% in the worst case and only 0 ∼ 2% in most cases.
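The energy-accumulation principle can be sketched as follows, under the assumption that a frame remains skippable while the accumulated motion-vector energy stays below a threshold, and that decoding a frame resets the accumulator. The threshold and the energy values are illustrative, not taken from the paper:

```python
def adaptive_skip(mv_energy, threshold):
    """Content-adaptive frame skipping via energy accumulation (assumed form).

    mv_energy: per-frame motion-vector energy, a proxy for the quality
    loss of skipping that frame; low accumulated energy means little has
    changed since the last decoded frame, so skipping is cheap.
    """
    skipped, acc = [], 0.0
    for i, e in enumerate(mv_energy):
        acc += e
        if acc < threshold:
            skipped.append(i)       # low accumulated motion: safe to skip
        else:
            acc = 0.0               # decode this frame and reset the budget
    return skipped

print(adaptive_skip([0.2, 0.1, 0.9, 0.1, 0.1, 0.1], 0.5))  # → [0, 1, 3, 4, 5]
```

Note how the high-motion frame (index 2) is always decoded, while the static run after it is skipped, which is the content dependence the abstract contrasts with fixed skipping.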

Maria Balta - One of the best experts on this subject based on the ideXlab platform.

  • TSP - ARP and QPP interleavers selection based on the convergence of the iterative Decoding Process for the construction of 16-state Duo Binary Turbo Codes
    2011 34th International Conference on Telecommunications and Signal Processing (TSP), 2011
    Co-Authors: Horia Balta, Alexandru Isar, Maria Kovaci, Miranda Nafornita, Maria Balta
    Abstract:

    This paper presents the results obtained in the second algorithmic step of the design of Duo Binary Turbo Codes (DBTCs). The design as a whole is based on an exhaustive search over code-interleaver pairs, with the convergence of the iterative Decoding Process as the selection criterion. The first step of this search establishes a hierarchy of the recursive systematic duo-binary convolutional (RSDBC) codes with memory 2, 3, 4 and 5. The second step consists of searching the interleaver set individually for each of the best codes found previously. In this paper, the results obtained for the 16-state DBTCs are presented. The permutations considered correspond to two of the most efficient types of interleavers: Almost Regular Permutations (ARP) and Quadratic Polynomial Permutations (QPP). The Bit and Frame Error Rate (BER/FER) performances obtained with the best code-interleaver pairs are superior to those already reported in the literature.
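Of the two interleaver families, QPP has a compact standard definition, π(i) = (f1·i + f2·i²) mod N, where (f1, f2) must be chosen so the map is a permutation. The sketch below uses the LTE coefficient pair for block length 40 as an illustration; it is not one of the pairs selected in the paper:

```python
def qpp_interleaver(N, f1, f2):
    """Quadratic Polynomial Permutation: pi(i) = (f1*i + f2*i^2) mod N.

    The map is a permutation of 0..N-1 when gcd(f1, N) = 1 and f2 is
    divisible by every prime factor of N (sufficient condition); the
    search described in the abstract ranks such valid pairs by the
    convergence of the iterative Decoding Process.
    """
    return [(f1 * i + f2 * i * i) % N for i in range(N)]

# Illustrative coefficients: LTE's turbo interleaver uses (3, 10) for N = 40
perm = qpp_interleaver(40, 3, 10)
assert sorted(perm) == list(range(40))   # a valid QPP is a permutation
```

QPP interleavers are popular in turbo-code design partly because this closed form makes them contention-free for parallel Decoding, which keeps the exhaustive pair search tractable.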

  • Double-binary RSC convolutional codes selection based on convergence of iterative turbo-Decoding Process
    ISSCS 2011 - International Symposium on Signals Circuits and Systems, 2011
    Co-Authors: Horia Balta, Alexandru Isar, Maria Kovaci, Miranda Nafornita, Maria Balta
    Abstract:

    This paper presents an analysis of recursive systematic double-binary convolutional (RSDBC) codes and a performance criterion that can be used to establish their hierarchy. This hierarchy serves for the selection of high-performance turbo codes. The criterion consists of the convergence of the corresponding iterative turbo Decoding Process. We investigated the families of codes with memory 2, 3, 4 and 5. The simulation results are presented in two ways: statistically for the entire set of codes and nominally for the best ones.

  • QPP interleavers selection based on convergence of iterative turbo-Decoding Process at small block size
    ISSCS 2011 - International Symposium on Signals Circuits and Systems, 2011
    Co-Authors: Maria Kovaci, Horia Balta, Miranda Nafornita, Maria Balta
    Abstract:

    In this paper, the performance in terms of Bit Error Rate (BER) and Frame Error Rate (FER) of the best quadratic polynomial permutation (QPP) interleavers is presented. The selection criterion proposed here is the convergence of the Decoding Process of duo-binary turbo codes. The analyses are performed for the ten best duo-binary memory-3 convolutional component codes and for Asynchronous Transfer Mode (ATM) blocks of length N = 2×212 bits = 53 bytes. The best turbo-code and permutation combination we found achieves a frame error rate of 10^-4 at less than 0.75 dB from the Shannon limit.

Sergio Escalera - One of the best experts on this subject based on the ideXlab platform.

  • On the Decoding Process in Ternary Error-Correcting Output Codes
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010
    Co-Authors: Sergio Escalera, Oriol Pujol, Petia Radeva
    Abstract:

    A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-correcting output codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the Decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC Decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require a redefinition of the Decoding design. A new type of Decoding measure is proposed, and two novel Decoding strategies are defined. We evaluate the state-of-the-art coding and Decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic-sign categorization problem. The experimental results show that, following the new Decoding strategies, the performance of the ECOC design is significantly improved.