Temporal Masking

The Experts below are selected from a list of 6342 Experts worldwide, ranked by the ideXlab platform.

Hari Kalva - One of the best experts on this subject based on the ideXlab platform.

  • Visually lossless coding based on Temporal Masking in human vision
    Proceedings of SPIE, 2014
    Co-Authors: Velibor Adzic, Howard S Hock, Hari Kalva
    Abstract:

    This paper presents a method for perceptual video compression that exploits the phenomenon of backward Temporal Masking. We present an overview of visual Temporal Masking and discuss models to identify portions of a video sequence that are masked due to this phenomenon exhibited by the human visual system. A quantization control model based on the psychophysical model of backward visual Temporal Masking was developed. We conducted two types of subjective evaluations and demonstrated that the proposed method achieves up to 10% bitrate savings on top of a state-of-the-art encoder while producing visually identical video. The proposed methods were evaluated using an HEVC encoder.
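
    The quantization control idea can be illustrated with a brief sketch. The Python fragment below is a hypothetical illustration, not the authors' model: it assumes scene-cut frame indices are already known and raises the quantization parameter (QP) for the few frames immediately preceding each cut, where backward Temporal Masking reduces the visibility of coding artifacts. The window length and offset values are placeholders.

```python
# Hypothetical sketch: raise QP for frames just before a scene cut, where
# backward temporal masking lowers sensitivity to distortion. The window
# length and offsets are illustrative placeholders, not values from the paper.

def qp_offsets(num_frames, scene_cuts, base_qp=32, window=4, max_offset=6):
    """Return a per-frame QP list with offsets applied before scene cuts."""
    qp = [base_qp] * num_frames
    for cut in scene_cuts:
        for k in range(1, window + 1):              # frames cut-1 ... cut-window
            idx = cut - k
            if 0 <= idx < num_frames:
                # Strongest masking closest to the cut, tapering off earlier.
                offset = (max_offset * (window - k + 1)) // window
                qp[idx] = base_qp + offset
    return qp

if __name__ == "__main__":
    print(qp_offsets(num_frames=12, scene_cuts=[6]))
    # -> [32, 32, 33, 35, 36, 38, 32, 32, 32, 32, 32, 32]
```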

  • Exploring visual Temporal Masking for video compression
    International Conference on Consumer Electronics, 2013
    Co-Authors: Velibor Adzic, Hari Kalva, Borko Furht
    Abstract:

    In this paper we present work on exploiting the visual Temporal Masking phenomenon for video compression. Our results show that it is possible to reduce the bitrate of a compressed video sequence without affecting subjective quality or quality of experience (perceptually lossless). The principles we present here are applicable to all modern hybrid coding systems and can be implemented seamlessly in video delivery platforms. Results show up to 6% of additional savings when implemented on top of a state-of-the-art encoder.
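
    As a rough illustration of how frames subject to backward Temporal Masking might be located, the sketch below flags abrupt temporal discontinuities (such as scene cuts) using the mean absolute difference between consecutive frames. This is a generic heuristic offered as an assumption, not the detection model used in the paper; the threshold is a placeholder.

```python
import numpy as np

# Hypothetical sketch: flag abrupt temporal discontinuities (e.g. scene cuts),
# where temporal masking is expected to be strongest. The threshold is a
# placeholder, not a value taken from the paper.

def scene_cut_indices(frames, threshold=25.0):
    """frames: iterable of 2-D numpy arrays (grayscale), all the same shape."""
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        frame = frame.astype(np.float64)
        if prev is not None:
            mad = np.mean(np.abs(frame - prev))  # mean absolute difference
            if mad > threshold:
                cuts.append(i)                   # frame i starts a new scene
        prev = frame
    return cuts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene_a = [np.full((16, 16), 50.0) + rng.normal(0, 1, (16, 16)) for _ in range(5)]
    scene_b = [np.full((16, 16), 200.0) + rng.normal(0, 1, (16, 16)) for _ in range(5)]
    print(scene_cut_indices(scene_a + scene_b))  # -> [5]
```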

Bradley W Dickinson - One of the best experts on this subject based on the ideXlab platform.

  • Temporally adaptive motion interpolation exploiting Temporal Masking in visual perception
    IEEE Transactions on Image Processing, 1994
    Co-Authors: Bradley W Dickinson
    Abstract:

    In this paper we present a novel technique to dynamically adapt motion interpolation structures by Temporal segmentation. The number of reference frames and the intervals between them are adjusted according to the Temporal variation of the input video. Bit-rate control for this dynamic group of pictures (GOP) structure is achieved by taking advantage of Temporal Masking in human vision. Constant picture quality can be obtained by variable-bit-rate coding using this approach. Further improvement can be made when the intervals between reference frames are chosen by minimizing a measure of the coding difficulty of a GOP. Advantages for low bit-rate coding and implications for variable-bit-rate coding are discussed. Simulations on test video are presented for various GOP structures and Temporal segmentation methods, and the results compare favorably with those for conventional fixed GOP structures.
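
    A toy version of the dynamic GOP idea might look like the sketch below. It is an assumption-laden illustration rather than the authors' algorithm: given a per-frame temporal-activity measure, it places reference frames more densely where the video changes quickly and spaces them out where the content is nearly static, subject to a maximum interval.

```python
# Hypothetical sketch of temporally adaptive reference-frame placement.
# Activity values, thresholds, and the maximum interval are illustrative
# placeholders, not parameters from the paper.

def place_reference_frames(activity, low_activity=2.0, max_interval=8):
    """activity[i]: temporal variation between frame i-1 and frame i.

    Returns indices of frames chosen as references. A new reference is
    inserted when accumulated activity exceeds a budget or the maximum
    interval since the last reference is reached.
    """
    refs = [0]                      # first frame is always a reference
    accumulated = 0.0
    for i in range(1, len(activity)):
        accumulated += activity[i]
        since_last = i - refs[-1]
        if accumulated > low_activity * max_interval or since_last >= max_interval:
            refs.append(i)
            accumulated = 0.0
    return refs

if __name__ == "__main__":
    # Quiet scene followed by a burst of motion.
    act = [0.0] + [0.5] * 10 + [6.0] * 6
    print(place_reference_frames(act))   # -> [0, 8, 13, 16]
```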

  • Scene adaptive motion interpolation structures based on Temporal Masking in human visual perception
    Visual Communications and Image Processing, 1993
    Co-Authors: Jungwoo Lee, Bradley W Dickinson
    Abstract:

    In this paper we present a novel technique to dynamically adapt motion interpolation structures by Temporal segmentation. The interval between two reference frames is adjusted according to the Temporal variation of the input video. The difficulty of bit rate control for this dynamic group of pictures (GOP) structure is resolved by taking advantage of Temporal Masking in human vision. Six different frame types are used for efficient bit rate control, and telescopic search is used for fast motion estimation because the distances between reference frames are dynamically varying. Constant picture quality can be obtained by variable bit rate coding using this approach, and the statistical bit rate behavior of the coder is discussed. Advantages for low bit rate coding and storage media applications and implications for HDTV coding are discussed. Simulations on test video including HDTV sequences are presented for various GOP structures and different bit rates, and the results compare favorably with those for conventional fixed GOP structures.
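
    The telescopic search mentioned in the abstract can be sketched generically. The block-matching details below (sum of absolute differences, a small refinement window) are assumptions, not the authors' implementation: the motion vector is refined frame by frame across the interval, with each step's estimate used as the search center for the next, so the search window stays small even when reference frames are far apart.

```python
import numpy as np

# Hypothetical sketch of telescopic motion search: instead of searching a
# large window between two distant reference frames, the motion vector is
# refined frame by frame, each time around the previously accumulated vector.
# Block size and search radius are illustrative placeholders.

def sad(a, b):
    return float(np.sum(np.abs(a.astype(np.float64) - b.astype(np.float64))))

def telescopic_search(frames, top, left, block=8, radius=2):
    """Track the block at (top, left) in frames[0] through the remaining frames."""
    target = frames[0][top:top + block, left:left + block]
    dy, dx = 0, 0                                  # accumulated motion vector
    h, w = frames[0].shape
    for frame in frames[1:]:
        best = (None, 0, 0)
        for ddy in range(-radius, radius + 1):     # small window around the
            for ddx in range(-radius, radius + 1): # accumulated vector
                y, x = top + dy + ddy, left + dx + ddx
                if 0 <= y <= h - block and 0 <= x <= w - block:
                    cost = sad(target, frame[y:y + block, x:x + block])
                    if best[0] is None or cost < best[0]:
                        best = (cost, ddy, ddx)
        dy, dx = dy + best[1], dx + best[2]
    return dy, dx

if __name__ == "__main__":
    base = np.arange(32 * 32, dtype=np.float64).reshape(32, 32)
    frames = [np.roll(base, shift=i, axis=1) for i in range(4)]  # 1 px/frame drift
    print(telescopic_search(frames, top=8, left=8))              # -> (0, 3)
```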

Velibor Adzic - One of the best experts on this subject based on the ideXlab platform.

  • Visually lossless coding based on Temporal Masking in human vision
    Proceedings of SPIE, 2014
    Co-Authors: Velibor Adzic, Howard S Hock, Hari Kalva
    Abstract:

    This paper presents a method for perceptual video compression that exploits the phenomenon of backward Temporal Masking. We present an overview of visual Temporal Masking and discuss models to identify portions of a video sequence that are masked due to this phenomenon exhibited by the human visual system. A quantization control model based on the psychophysical model of backward visual Temporal Masking was developed. We conducted two types of subjective evaluations and demonstrated that the proposed method achieves up to 10% bitrate savings on top of a state-of-the-art encoder while producing visually identical video. The proposed methods were evaluated using an HEVC encoder.

  • Exploring visual Temporal Masking for video compression
    International Conference on Consumer Electronics, 2013
    Co-Authors: Velibor Adzic, Hari Kalva, Borko Furht
    Abstract:

    In this paper we present work on exploiting the visual Temporal Masking phenomenon for video compression. Our results show that it is possible to reduce the bitrate of a compressed video sequence without affecting subjective quality or quality of experience (perceptually lossless). The principles we present here are applicable to all modern hybrid coding systems and can be implemented seamlessly in video delivery platforms. Results show up to 6% of additional savings when implemented on top of a state-of-the-art encoder.

Kuo-cheng Liu - One of the best experts on this subject based on the ideXlab platform.

  • Color Video JND Model Using Compound Spatial Masking and Structure-Based Temporal Masking
    IEEE Access, 2020
    Co-Authors: Kuo-cheng Liu
    Abstract:

    In order to provide high transmission efficiency and visual quality for color images and video on the internet, effective estimation of their inherent just noticeable distortion (JND) remains an important topic. In this paper, a discrete cosine transform (DCT)-based perceptual model for estimating spatio-Temporal JNDs of color videos is presented. First, a compound Masking adjustment that integrates different spatial Masking factors is proposed. Starting from the base detection threshold for DCT coefficients in the luminance and chrominance components of a color image, an adjustment combining luminance Masking, pattern-based contrast Masking, and cross Masking is exploited to measure visibility thresholds for the luminance component, while an adjustment derived from variance-based statistical properties is utilized to measure visibility thresholds for the chrominance components. The local Temporal statistics of the luminance component are then used to design a structure-based Temporal Masking adjustment that further refines the visibility thresholds for video signals. To verify the proposed method, a subjective viewing test was designed and a fair rating process for evaluating visual quality was carried out under a specified viewing condition. Experimental results show that the proposed method is able to estimate a larger visual distortion tolerance than the existing color-based methods at high visual quality.
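
    To make the structure of such a JND model concrete, the sketch below assembles a per-block threshold as a base detection threshold multiplied by luminance, contrast, and temporal elevation factors. All functional forms and constants here are assumptions for illustration only; they are not the compound and structure-based adjustments proposed in the paper.

```python
import numpy as np

# Hypothetical sketch of a DCT-domain JND threshold for one 8x8 luminance
# block: base threshold scaled by luminance, contrast (texture) and temporal
# masking elevation factors. All constants and functional forms are
# illustrative placeholders, not the model from the paper.

def jnd_block(dct_block, mean_luma, temporal_diff, base_threshold=1.0):
    """Return an 8x8 array of JND thresholds for one DCT block.

    dct_block:     8x8 DCT coefficients of the block
    mean_luma:     mean luminance of the block (0..255)
    temporal_diff: mean absolute luminance change vs. the previous frame
    """
    # Luminance masking: thresholds rise in very dark and very bright regions.
    lum_factor = 1.0 + 0.5 * abs(mean_luma - 128.0) / 128.0

    # Contrast (texture) masking: busier blocks hide more distortion.
    ac_energy = float(np.sum(dct_block ** 2) - dct_block[0, 0] ** 2)
    contrast_factor = 1.0 + min(1.0, ac_energy / 1e5)

    # Temporal masking: large frame-to-frame change raises the threshold.
    temporal_factor = 1.0 + min(1.5, temporal_diff / 32.0)

    return base_threshold * lum_factor * contrast_factor * temporal_factor * np.ones((8, 8))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    block = rng.normal(0, 20, (8, 8))
    block[0, 0] = 800.0
    print(jnd_block(block, mean_luma=60.0, temporal_diff=40.0).round(2)[0, :3])
```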

  • Block Based Temporal Masking Estimation for Video Sequences.
    2015
    Co-Authors: Kuo-cheng Liu
    Abstract:

    A method that models the visual Masking properties of the human visual system (HVS) is useful in perceptual data coding. In this paper, we propose a color DCT-based method to estimate the color spatio-Temporal just noticeable distortion (JND) for video coding. The spatio-Temporal JND profiles are assessed by incorporating a new Temporal Masking adjustment into the mathematical model for estimating the DCT-based spatial JND profiles of the luminance and chrominance components of color images. The new block-based Temporal Masking adjustment mainly considers the variation of local Temporal statistics of the luminance component between successive video frames. The spatio-Temporal JND profiles are used to tune the H.264 video codec for higher performance. The simulation results demonstrate the performance of the perceptual video coding in terms of bit rates and visual quality. The bit rates required by the perceptually tuned video codec are lower than those required by the un-tuned codec at nearly the same visual quality.
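
    The block-based temporal adjustment can be pictured with the following minimal sketch, which scales a spatial JND value by a factor derived from the variation of the luminance difference between co-located blocks in successive frames. The mapping and its constants are assumptions for illustration, not the adjustment defined in the paper.

```python
import numpy as np

# Hypothetical sketch: scale a spatial JND threshold by a block-based
# temporal masking factor computed from the inter-frame luminance difference.
# The mapping and its constants are illustrative placeholders.

def temporal_masking_factor(block_curr, block_prev, max_gain=2.0):
    """block_curr, block_prev: co-located luminance blocks from consecutive frames."""
    diff = block_curr.astype(np.float64) - block_prev.astype(np.float64)
    variation = float(np.std(diff))          # local temporal statistic
    # More temporal change -> higher tolerance to distortion (saturating).
    return 1.0 + (max_gain - 1.0) * (1.0 - np.exp(-variation / 10.0))

def spatio_temporal_jnd(spatial_jnd, block_curr, block_prev):
    return spatial_jnd * temporal_masking_factor(block_curr, block_prev)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    prev = rng.integers(0, 256, (8, 8))
    still = prev.copy()
    moving = np.clip(prev + rng.integers(-60, 60, (8, 8)), 0, 255)
    print(spatio_temporal_jnd(4.0, still, prev))    # ~4.0 (no temporal change)
    print(spatio_temporal_jnd(4.0, moving, prev))   # noticeably larger
```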

Eliathamby Ambikairajah - One of the best experts on this subject based on the ideXlab platform.

  • Perceptual speech enhancement exploiting Temporal Masking properties of human auditory system
    Speech Communication, 2010
    Co-Authors: Teddy Surya Gunawan, Eliathamby Ambikairajah, Julien Epps
    Abstract:

    The use of simultaneous Masking in speech enhancement has shown promise for a range of noise types. In this paper, a new speech enhancement algorithm based on a short-term Temporal Masking threshold-to-noise ratio (MNR) is presented. A novel functional model for forward Masking based on three parameters is incorporated into a speech enhancement framework based on speech boosting. The performance of the speech enhancement algorithm using the proposed forward Masking model was compared with seven other speech enhancement methods over 12 different noise types and four SNRs. Objective evaluation using PESQ revealed that, with the proposed forward Masking model, the speech enhancement algorithm outperforms the other algorithms by 6-20%, depending on the SNR. Moreover, subjective evaluation with 16 listeners confirmed the objective test results.
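
    The masking-threshold-to-noise-ratio idea can be sketched generically: per subband, compare a Temporal Masking threshold (here a simple decaying-peak forward-masking estimate, purely as an assumption) against a noise estimate and map the resulting MNR to a gain. This is not the paper's three-parameter functional model; all names and constants are placeholders.

```python
import numpy as np

# Hypothetical sketch of MNR-based subband gains for speech enhancement.
# The forward-masking estimate, noise floor handling and gain mapping are
# all illustrative placeholders, not the model proposed in the paper.

def forward_masking_threshold(band_energy_history, decay=0.7):
    """Simple recursive forward-masking estimate per subband."""
    threshold = np.zeros_like(band_energy_history[0])
    for energy in band_energy_history:
        threshold = np.maximum(energy, decay * threshold)  # decaying peak tracker
    return threshold

def mnr_gains(band_energy_history, noise_energy, floor=0.1):
    """Gain per subband from the masking-threshold-to-noise ratio (MNR)."""
    mask = forward_masking_threshold(band_energy_history)
    mnr = mask / np.maximum(noise_energy, 1e-12)
    # High MNR: noise is masked, little attenuation needed. Low MNR: attenuate.
    return np.clip(mnr / (1.0 + mnr), floor, 1.0)

if __name__ == "__main__":
    history = [np.array([4.0, 0.2, 9.0]), np.array([3.0, 0.1, 1.0])]
    noise = np.array([0.5, 0.5, 0.5])
    print(mnr_gains(history, noise).round(2))   # -> [0.86 0.22 0.93]
```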

  • Temporal Masking models for audio coding
    Irish Signals and Systems Conference 2004, 2004
    Co-Authors: Teddy Surya Gunawan, Eliathamby Ambikairajah, Edward Jones
    Abstract:

    Conventional audio coding algorithms do not fully exploit knowledge of the Temporal Masking properties of the human auditory system, as they generally rely on simultaneous Masking models. The paper presents a comparison of four auditory Temporal Masking models for audio coding applications. It is shown that the combination of Temporal Masking and simultaneous Masking in an audio coding algorithm results in a further bit rate reduction of approximately 20%, compared to using simultaneous Masking alone, while retaining perceptual quality. We also describe modifications to one of the models that enhance its performance; in particular, this modified model is shown to outperform the other models. Results from objective tests have been confirmed by semi-formal subjective testing on audio signals.
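
    One generic way to combine simultaneous and Temporal Masking thresholds, offered purely as an illustration (the abstract does not specify the combination rule used in the paper), is a power-law addition per frequency band, with the combined threshold then driving bit allocation.

```python
import numpy as np

# Hypothetical sketch: combine simultaneous and temporal (forward) masking
# thresholds per band, then allocate bits against the combined threshold.
# The power-law exponent and bit-allocation rule are illustrative placeholders.

def combined_threshold(simultaneous, temporal, p=0.3):
    """Power-law combination of two masking thresholds (per-band arrays)."""
    return (simultaneous ** p + temporal ** p) ** (1.0 / p)

def bits_per_band(band_energy, threshold):
    """Roughly one bit per 6 dB of signal-to-mask ratio, never negative."""
    smr_db = 10.0 * np.log10(np.maximum(band_energy / threshold, 1e-12))
    return np.maximum(np.ceil(smr_db / 6.0), 0).astype(int)

if __name__ == "__main__":
    sim = np.array([1.0, 2.0, 0.5])       # simultaneous masking thresholds
    tmp = np.array([0.5, 4.0, 0.1])       # temporal masking thresholds
    energy = np.array([100.0, 50.0, 10.0])
    thr = combined_threshold(sim, tmp)
    print(thr.round(2), bits_per_band(energy, thr))
```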

  • Speech and Audio Coding Using Temporal Masking
    Multimedia Systems and Applications Series, 1
    Co-Authors: Teddy Surya Gunawan, Eliathamby Ambikairajah, Deep Sen
    Abstract:

    This paper presents a comparison of three auditory Temporal Masking models for speech and audio coding applications. The first model was developed from existing forward Masking psychoacoustic data under the assumption that Masking persists for approximately 200 ms; the model's dynamic parameters were derived from these data. The second, previously developed model was based on the principle of an exponential decay following higher-energy stimuli, where the Masking effects have a relatively short duration. The existing third model best matches the previously reported forward Masking data using an exponential curve, but the effects of forward Masking are restricted to 100-200 ms. Objective assessments employing the PESQ measure reveal that these three Temporal models have potential for removing perceptually redundant information in speech and audio coding applications. Results show that incorporating Temporal Masking along with simultaneous Masking into a speech/audio coding algorithm results in a further bit rate reduction of approximately 17% compared with simultaneous Masking alone, while preserving perceptual quality.
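
    An exponential-decay forward-masking time course of the kind described can be written compactly. The sketch below is a generic parameterization (a level-dependent initial amount decaying with the time elapsed since masker offset), with placeholder constants rather than the parameters fitted in the paper.

```python
import math

# Hypothetical sketch of an exponential-decay forward-masking model:
# the amount of masking (in dB) decays with the time elapsed since the
# masker offset and scales with the masker level. Constants are
# illustrative placeholders, not the fitted parameters from the paper.

def forward_masking_db(masker_level_db, delta_t_ms, tau_ms=60.0,
                       slope=0.5, duration_ms=200.0):
    """Masking (dB above absolute threshold) delta_t_ms after masker offset."""
    if delta_t_ms < 0 or delta_t_ms > duration_ms:
        return 0.0
    initial = slope * masker_level_db              # level-dependent initial amount
    return initial * math.exp(-delta_t_ms / tau_ms)

if __name__ == "__main__":
    for t in (0, 25, 50, 100, 200):
        print(t, round(forward_masking_db(80.0, t), 1))
```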

  • Single channel speech enhancement using Temporal Masking
    The Ninth International Conference on Communications Systems, 2004 (ICCS 2004), 1
    Co-Authors: Teddy Surya Gunawan, Eliathamby Ambikairajah
    Abstract:

    A novel speech enhancement technique is presented based on the Temporal Masking properties of the human auditory system. The input signal is divided into a number of sub-bands that are individually and adaptively weighted in the time domain according to a short-term SNR estimate in each sub-band. The short-time SNR estimate is calculated using the Temporal Masking thresholds in each sub-band. Objective measures and informal listening tests demonstrated significant improvements over other methods when tested with speech signals corrupted by car noise at SNR values of 0, 10, and 20 dB.

  • Wavelet packet based audio coding using Temporal Masking
    Fourth International Conference on Information, Communications and Signal Processing 2003 and the Fourth Pacific Rim Conference on Multimedia. Proceedings, 1
    Co-Authors: F. Sinaga, Teddy Surya Gunawan, Eliathamby Ambikairajah
    Abstract:

    Conventional audio coding algorithms do not exploit knowledge of the Temporal Masking properties of the human auditory system, relying solely on simultaneous Masking models. This paper presents wavelet packet based audio coding incorporating both Temporal and simultaneous Masking models. By applying a novel Temporal Masking model after simultaneous Masking, combined Masking thresholds were calculated more accurately, resulting in a bit rate reduction of approximately 25 kbps while preserving perceptual quality. This result was confirmed by semi-formal subjective tests on audio signals.
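
    As a final illustration, the sketch below allocates bits greedily across wavelet packet subbands according to the ratio of quantization noise to a combined Masking threshold. The wavelet packet decomposition and the combined thresholds are assumed to be available, and the allocation rule is a generic placeholder rather than the coder described in the paper.

```python
import numpy as np

# Hypothetical sketch: greedy perceptual bit allocation across wavelet packet
# subbands. Band energies and combined (simultaneous + temporal) masking
# thresholds are assumed precomputed; the greedy rule is a generic placeholder.

def allocate_bits(band_energy, mask_threshold, total_bits):
    """Give each bit to the band with the highest remaining noise-to-mask ratio."""
    band_energy = np.asarray(band_energy, dtype=float)
    mask = np.asarray(mask_threshold, dtype=float)
    bits = np.zeros(len(band_energy), dtype=int)
    # Quantization noise model: each extra bit lowers noise by ~6 dB.
    noise = band_energy.copy()
    for _ in range(total_bits):
        nmr = noise / mask                     # noise-to-mask ratio per band
        b = int(np.argmax(nmr))
        if nmr[b] <= 1.0:                      # all noise already masked
            break
        bits[b] += 1
        noise[b] /= 4.0                        # ~6 dB per allocated bit
    return bits

if __name__ == "__main__":
    energy = np.array([120.0, 40.0, 5.0, 0.5])   # per-subband signal energy
    mask = np.array([2.0, 1.0, 1.0, 1.0])        # combined masking thresholds
    print(allocate_bits(energy, mask, total_bits=12))
```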