Lapped Orthogonal Transform

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 426 Experts worldwide ranked by ideXlab platform

K R Rao - One of the best experts on this subject based on the ideXlab platform.

  • HVS-Weighted Progressive Image Transmission Using the Lapped Orthogonal Transform
    2016
    Co-Authors: Ricardo L De Queiroz, K R Rao
    Abstract:

    Progressive transmission of images based on the Lapped Orthogonal Transform (LOT), adaptive classification, and human visual sensitivity (HVS) weighting is proposed. HVS weighting for the LOT is developed using a general technique that can be applied to any orthogonal transform. The method is compared with discrete cosine transform (DCT) based progressive image transmission (PIT). It is shown that LOT-based PIT yields subjectively improved images compared to those based on the DCT.
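The HVS-weighting idea can be sketched numerically: weight each 2-D frequency bin of a block transform by a contrast-sensitivity function (CSF) so that visually important coefficients are favored during bit allocation. The sketch below uses the classic Mannos–Sakrison CSF model; the 8x8 grid and the direct mapping of bin indices to cycles/degree are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

M = 8
u, v = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
f = np.sqrt(u ** 2 + v ** 2)        # nominal radial frequency per bin

# Mannos-Sakrison contrast-sensitivity model, normalized to peak 1
H = 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)
H /= H.max()

# De-emphasize coefficients the eye is less sensitive to
coeffs = np.random.default_rng(0).standard_normal((M, M))
weighted = H * coeffs
```

In an actual coder the weights would enter the classification and bit-allocation stages rather than simply scaling the coefficients, but the weight matrix itself is the reusable piece.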

  • Generalized Linear-Phase Lapped Orthogonal Transforms
    2008
    Co-Authors: K R Rao
    Abstract:

    The general factorization of a linear-phase paraunitary filter bank (LPPUFB) is revisited, and we introduce a class of Lapped Orthogonal Transforms with extended overlap (GenLOT). In this formulation, the discrete cosine transform (DCT) is the order-1 GenLOT, the Lapped Orthogonal Transform is the order-2 GenLOT, and so on, for any filter length which is an integer multiple of the block size. All GenLOTs are based on the DCT and have fast implementation algorithms. The degrees of freedom in the design of GenLOTs are described and design examples are presented along with some practical applications.

  • The GenLOT: Generalized Linear-Phase Lapped Orthogonal Transform
    IEEE Transactions on Signal Processing, 1996
    Co-Authors: R.l. De Queiroz, T.q. Nguyen, K R Rao
    Abstract:

    The general factorization of a linear-phase paraunitary filter bank (LPPUFB) is revisited. From this new perspective, a class of Lapped Orthogonal Transforms with extended overlap (generalized linear-phase Lapped Orthogonal Transforms (GenLOTs)) is developed as a subclass of the general class of LPPUFBs. In this formulation, the discrete cosine transform (DCT) is the order-1 GenLOT, the Lapped Orthogonal Transform is the order-2 GenLOT, and so on, for any filter length that is an integer multiple of the block size. The GenLOTs are based on the DCT and have fast implementation algorithms. The implementation of GenLOTs is explained, including the method to process finite-length signals. The degrees of freedom in the design of GenLOTs are described, and design examples are presented along with image compression tests.
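The order-2 GenLOT (the LOT, with filter length twice the block size) can be made concrete with Malvar's modulated lapped transform, one well-known LOT; the block size M = 8 and the sine window are our choices for the sketch, not the paper's design. The two orthogonality conditions checked below (orthonormal rows, plus orthogonality of the overlapping tails of adjacent blocks) are exactly what make overlap-add reconstruction exact.

```python
import numpy as np

M = 8                        # block size / number of subbands
L = 2 * M                    # basis length: 50% overlap with each neighbor
n = np.arange(L)
win = np.sin(np.pi / L * (n + 0.5))                  # sine window
k = np.arange(M)[:, None]
P = np.sqrt(2.0 / M) * win * np.cos(
    np.pi / M * (n + (M + 1) / 2.0) * (k + 0.5))     # M x 2M analysis matrix

# The two LOT orthogonality conditions:
assert np.allclose(P @ P.T, np.eye(M))               # orthonormal rows
assert np.allclose(P[:, M:] @ P[:, :M].T, 0.0)       # orthogonal overlaps

# Perfect reconstruction on a periodized signal via overlap-add synthesis
rng = np.random.default_rng(0)
x = rng.standard_normal(8 * M)
N = x.size
xe = np.concatenate([x, x[:M]])                      # wrap for the last block
coefs = [P @ xe[i:i + L] for i in range(0, N, M)]
xhat = np.zeros(N + M)
for m, c in enumerate(coefs):
    xhat[m * M:m * M + L] += P.T @ c                 # synthesis = transpose
xhat[:M] += xhat[N:]                                 # fold the wrapped tail
assert np.allclose(xhat[:N], x)
```

Periodizing the signal sidesteps the boundary handling that the paper treats properly for finite-length signals; only the interior behavior is illustrated here.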

  • Image coding based on classified Lapped Orthogonal Transform vector quantization
    IEEE Transactions on Circuits and Systems for Video Technology, 1995
    Co-Authors: S Venkatraman, J Y Nam, K R Rao
    Abstract:

    Classified transform coding of images using vector quantization (VQ) has proved to be an efficient technique. Transform VQ combines the energy compaction properties of transform coding and the superior performance of VQ. Classification improves the reconstructed image quality considerably because of adaptive bit allocation. A classified transform VQ technique using the Lapped Orthogonal Transform (LOT) is presented. Image blocks are transformed using the LOT and are classified into four classes based on their structural properties. These are further divided adaptively into subvectors depending on the LOT coefficient statistics, as this allows efficient distribution of bits. These subvectors are then vector quantized. Simulation results indicate subjectively improved images with LOT/VQ as compared to DCT/VQ.
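The classify-then-quantize pipeline can be sketched as follows. The structural features and thresholds in `classify`, the four class labels, and the per-class codebook size are illustrative stand-ins for the paper's rules, and random blocks stand in for LOT coefficients; only the overall shape of the scheme is shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def classify(block):
    """Label an 8x8 coefficient block by where its AC energy sits."""
    ac = block.copy()
    ac[0, 0] = 0.0                          # ignore the DC coefficient
    e_h = np.sum(ac[:4, 4:] ** 2)           # horizontal-detail energy
    e_v = np.sum(ac[4:, :4] ** 2)           # vertical-detail energy
    if np.sum(ac ** 2) < 32.0:
        return 0                            # low-activity block
    if e_h > 1.5 * e_v:
        return 1                            # horizontally oriented
    if e_v > 1.5 * e_h:
        return 2                            # vertically oriented
    return 3                                # mixed / diagonal

def train_codebook(vecs, K, iters=25):
    """Plain Lloyd (k-means) codebook training for one class."""
    cb = vecs[rng.choice(len(vecs), K, replace=False)].copy()
    for _ in range(iters):
        idx = ((vecs[:, None, :] - cb[None]) ** 2).sum(-1).argmin(1)
        for j in range(K):
            if np.any(idx == j):
                cb[j] = vecs[idx == j].mean(0)
    return cb

blocks = rng.standard_normal((512, 8, 8))   # toy "LOT coefficient" blocks
labels = np.array([classify(b) for b in blocks])
vecs = blocks.reshape(len(blocks), -1)
books = {c: train_codebook(vecs[labels == c], K=8)
         for c in np.unique(labels) if np.sum(labels == c) >= 8}
```

Each class gets its own codebook, which is what lets the bit allocation adapt to block structure; the paper additionally splits each class into subvectors before quantizing.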

  • Human visual system weighted progressive image transmission using Lapped Orthogonal Transform classified vector quantization
    Optical Engineering, 1993
    Co-Authors: Chansik Hwang, Suresh Venkatraman, K R Rao
    Abstract:

    A progressive image transmission (PIT) scheme based on the classified transform vector quantization (CVQ) technique using the Lapped Orthogonal Transform (LOT) and human visual system (HVS) weighting is proposed. Conventional block transform coding of images using the discrete cosine transform (DCT) produces, in general, undesirable blocking artifacts at low bit rates. Here, image blocks are transformed using the LOT, classified into four classes based on their structural properties, and further subdivided adaptively into subvectors depending on the LOT coefficient statistics, with HVS weighting to improve the reconstructed image quality by adaptive bit allocation. The subvectors are vector quantized and transmitted progressively. Coding tests using computer simulations show that the LOT/CVQ-based PIT of images is an effective coding scheme. The results are also compared with those obtained using DCT/VQ-based PIT. The LOT/CVQ-based PIT reduces the blocking artifact significantly.
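The progressive part of such a scheme can be miniaturized: send coefficients in decreasing-magnitude order and refine the reconstruction as each stage arrives. An orthonormal DCT-II on a 1-D toy signal stands in for the 2-D LOT purely to keep the sketch short; the staging and the monotone error decay are the point.

```python
import numpy as np

N = 16
n, k = np.meshgrid(np.arange(N), np.arange(N))
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0] /= np.sqrt(2.0)                       # orthonormal DCT-II matrix

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(N))      # correlated toy "scan line"
c = C @ x
order = np.argsort(-np.abs(c))             # most significant first

errs = []
c_rx = np.zeros(N)                         # coefficients received so far
for stage in np.array_split(order, 4):     # four transmission stages
    c_rx[stage] = c[stage]
    errs.append(np.sum((x - C.T @ c_rx) ** 2))
# the reconstruction error shrinks at every stage and hits zero at the end
```

Because the transform is orthonormal, the residual error after each stage is exactly the energy of the coefficients not yet transmitted, which is why magnitude ordering is the greedy-optimal schedule.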

Kjersti Engan - One of the best experts on this subject based on the ideXlab platform.

  • Optimized signal expansions for sparse representation
    IEEE Transactions on Signal Processing, 2001
    Co-Authors: S.o. Aase, John Hakon Husoy, J.h.h.k. Skretting, Kjersti Engan
    Abstract:

    Traditional signal decompositions such as transforms, filterbanks, and wavelets generate signal expansions using the analysis-synthesis setting: the expansion coefficients are found by taking the inner product of the signal with the corresponding analysis vector. In this paper, we try to free ourselves from the analysis-synthesis paradigm by concentrating on the synthesis or reconstruction part of the signal expansion. Ignoring the analysis issue completely, we construct sets of synthesis vectors, which are denoted waveform dictionaries, for efficient signal representation. Within this framework, we present an algorithm for designing waveform dictionaries that allow sparse representations: the objective is to approximate a training signal using a small number of dictionary vectors. Our algorithm optimizes the dictionary vectors with respect to the average nonlinear approximation error, i.e., the error resulting when keeping a fixed number n of expansion coefficients but not necessarily the first n coefficients. Using signals from a Gaussian autoregressive process with correlation factor 0.95, it is demonstrated that for established signal expansions like the Karhunen-Loève transform, the Lapped Orthogonal Transform, and the biorthogonal 7/9 wavelet, it is possible to improve the approximation capabilities by up to 30% by fine-tuning the expansion vectors.
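The nonlinear n-term error that this abstract optimizes can be made concrete for one of its baselines, the Karhunen-Loève transform of the same AR(0.95) source (the dictionary-design algorithm itself is not reproduced here; signal length, n, and trial count are our choices).

```python
import numpy as np

rng = np.random.default_rng(0)
N, rho, n_keep, trials = 32, 0.95, 4, 2000

# KLT basis for an AR(1) process: eigenvectors of its Toeplitz covariance
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
_, V = np.linalg.eigh(R)                   # columns = KLT basis vectors

# Stationary AR(1) training signals with correlation factor 0.95
x = np.empty((trials, N))
x[:, 0] = rng.standard_normal(trials)
for t in range(1, N):
    x[:, t] = rho * x[:, t - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal(trials)

def nterm_error(coefs, n):
    """Average NONLINEAR n-term error: per signal, keep the n
    largest-magnitude coefficients, not the first n."""
    drop = np.sort(np.abs(coefs), axis=1)[:, :-n]    # discarded magnitudes
    return np.mean(np.sum(drop ** 2, 1)) / np.mean(np.sum(coefs ** 2, 1))

err_klt = nterm_error(x @ V, n_keep)       # KLT coefficients
err_raw = nterm_error(x, n_keep)           # no transform at all
# the KLT's energy compaction makes its 4-term error far smaller
```

The paper's algorithm starts from expansions like this one and perturbs the vectors to push the average n-term error lower still.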

  • ICASSP - Design of signal expansions for sparse representation
    2000 IEEE International Conference on Acoustics Speech and Signal Processing. Proceedings (Cat. No.00CH37100), 2000
    Co-Authors: S.o. Aase, Karl Skretting, John Hakon Husoy, Kjersti Engan
    Abstract:

    Traditional signal decompositions generate signal expansions using the analysis-synthesis setting: the expansion coefficients are found by taking the inner product of the signal with the corresponding analysis vector. In this paper we try to free ourselves from the analysis-synthesis paradigm by concentrating on the synthesis or reconstruction part of the signal expansion. Ignoring the analysis issue completely, we construct sets of synthesis vectors, denoted waveform dictionaries, for sparse signal representation. The objective is to approximate a training signal using a small number of dictionary vectors. Our algorithm optimizes the dictionary vectors with respect to the average nonlinear approximation error. Using signals from a Gaussian autoregressive process with correlation factor 0.95, it is demonstrated that for established signal expansions like the Karhunen-Loève transform, the Lapped Orthogonal Transform, and the biorthogonal 7/9 wavelet, it is possible to improve the approximation capabilities by up to 30% by optimizing the expansion vectors.

Akira Kurematsu - One of the best experts on this subject based on the ideXlab platform.

  • Generalized unequal length Lapped Orthogonal Transform for subband image coding
    IEEE Transactions on Signal Processing, 2000
    Co-Authors: Takayuki Nagai, Masaaki Ikehara, Masahide Kaneko, Akira Kurematsu
    Abstract:

    Generalized linear-phase Lapped Orthogonal Transforms with unequal length basis functions (GULLOTs) are considered. The basis functions of the proposed GULLOT can have different lengths, whereas all the bases of the conventional GenLOT are of equal length. In general, for image coding applications, a long basis for a low-frequency band and a short basis for a high-frequency one are desirable, to reduce blocking and ringing artifacts simultaneously. The GULLOT is therefore especially suitable for subband image coding. In order to apply the GULLOT to subband image coding, we also investigate the size-limited structure for processing finite-length signals, which is important in practice. Finally, some design and image coding examples are shown to confirm the validity of the proposed GULLOT.

  • Generalized unequal length Lapped Orthogonal Transform for subband image coding
    International Conference on Acoustics Speech and Signal Processing, 2000
    Co-Authors: Takayuki Nagai, Masaaki Ikehara, Masahide Kaneko, Akira Kurematsu
    Abstract:

    In this paper, generalized linear-phase Lapped Orthogonal Transforms with unequal length basis functions (GULLOTs) are considered. The basis functions of the proposed GULLOT can have different lengths, while all the bases of the conventional GenLOT are of equal length. In order to apply the GULLOT to subband image coding, we also investigate the size-limited structure for processing finite-length signals, which is important in practice.

J N Gowdy - One of the best experts on this subject based on the ideXlab platform.

  • Subband Feature Extraction Using Lapped Orthogonal Transform For Speech Recognition
    2007
    Co-Authors: Zekeriya Tufekci, J N Gowdy
    Abstract:

    It is well known that dividing speech into frequency subbands can improve the performance of a speech recognizer. This is especially true for speech corrupted with noise. Subband (SUB) features are typically extracted by dividing the frequency band into subbands using non-overlapping rectangular windows and then processing each subband's spectrum separately. However, multiplying a signal by a rectangular window creates discontinuities, which produce large-amplitude frequency coefficients at high frequencies that degrade the performance of the speech recognizer. In this paper we propose the Lapped Subband (LAP) features, which are calculated by applying the Discrete Orthogonal Lapped Transform (DOLT) to the mel-scaled log-filterbank energies of a speech frame. Performance of the LAP features was evaluated on a phoneme recognition task and compared with the performance of SUB features and MFCC features. Experimental results have shown that the proposed LAP features outperform SUB features and Mel Frequency Cepstral Coefficients (MFCC) features under white noise, band-limited white noise, and no-noise conditions.
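The windowing argument above can be demonstrated in a few lines: a hard rectangular cut leaves a discontinuity at the block edge, leaking energy into high frequencies, while a smooth lapped-style taper suppresses that leakage. This toy uses a sine taper and an arbitrary cutoff bin; it does not reproduce the paper's DOLT pipeline on filterbank energies.

```python
import numpy as np

B = 128
t = np.arange(2 * B)
x = np.sin(2 * np.pi * 3.3 * t / (2 * B))    # smooth, not block-periodic

rect_seg = x[:B]                             # rectangular window: hard cut
sine_win = np.sin(np.pi * (np.arange(B) + 0.5) / B)
sine_seg = x[:B] * sine_win                  # tapered (lapped-style) cut

def hf_fraction(seg, cutoff=32):
    """Fraction of spectral energy above the cutoff bin."""
    mag2 = np.abs(np.fft.rfft(seg)) ** 2
    return mag2[cutoff:].sum() / mag2.sum()

# the tapered segment has far less spurious high-frequency energy
assert hf_fraction(sine_seg) < hf_fraction(rect_seg)
```

This is the same reason lapped transforms reduce blocking artifacts in image coding: the smooth, overlapping windows remove the artificial discontinuities that block boundaries introduce.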

  • Subband feature extraction using Lapped Orthogonal Transform for speech recognition
    International Conference on Acoustics Speech and Signal Processing, 2001
    Co-Authors: Zekeriya Tufekci, J N Gowdy
    Abstract:

    It is well known that dividing speech into frequency subbands can improve the performance of a speech recognizer. This is especially true for speech corrupted with noise. Subband (SUB) features are typically extracted by dividing the frequency band into subbands using non-overlapping rectangular windows and then processing each subband's spectrum separately. However, multiplying a signal by a rectangular window creates discontinuities, which produce large-amplitude frequency coefficients at high frequencies that degrade the performance of the speech recognizer. In this paper we propose the Lapped Subband (LAP) features, which are calculated by applying the Discrete Orthogonal Lapped Transform (DOLT) to the mel-scaled log-filterbank energies of a speech frame. Performance of the LAP features is evaluated on a phoneme recognition task and compared with the performance of SUB features and MFCC features. Experimental results show that the proposed LAP features outperform SUB features and mel frequency cepstral coefficient (MFCC) features under white noise, band-limited white noise, and no-noise conditions.
