Luminance Masking

The experts below are selected from a list of 1,371 experts worldwide, ranked by the ideXlab platform.

Aydin A Alatan - One of the best experts on this subject based on the ideXlab platform.

  • Oblivious spatio-temporal watermarking of digital video by exploiting the human visual system
    IEEE Transactions on Circuits and Systems for Video Technology, 2008
    Co-Authors: Aydin A Alatan
    Abstract:

    The imperceptibility requirement in video watermarking is more challenging than its image counterpart due to the additional dimension present in video. The embedding system should not only yield spatially invisible watermarks in each frame of the video, but should also take the temporal dimension into account in order to avoid any flicker distortion between frames. While some methods in the literature approach this problem by allowing only arbitrarily small modifications within frames in different transform domains, others simply use implicit spatial properties of the human visual system (HVS), such as Luminance Masking, spatial Masking, and contrast Masking. In addition, some approaches explicitly exploit the spatial thresholds of the HVS to determine the location and strength of the watermark. However, none of these approaches has focused on guaranteeing temporal invisibility and achieving maximum watermark strength along the temporal direction. In this paper, the temporal dimension is exploited for video watermarking by utilizing the temporal sensitivity of the HVS. The proposed method utilizes the temporal contrast thresholds of the HVS to determine the maximum watermark strength that still yields imperceptible distortion after watermark insertion. Compared with some recognized methods in the literature, the proposed method avoids the typical visual degradations in the watermarked video while giving much better robustness, in terms of bit error rate, against common video distortions such as additive Gaussian noise, video coding, frame-rate conversions, and temporal shifts.
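
    The mechanism lends itself to a compact illustration. The Python sketch below caps the per-pixel watermark amplitude by a temporal contrast threshold, so regions that already change strongly between frames carry a stronger mark. The frame-difference activity measure and the constants base_jnd and gain are illustrative assumptions, not the paper's exact threshold model.

    ```python
    import numpy as np

    def temporal_contrast_threshold(frame_prev, frame_curr,
                                    base_jnd=2.0, gain=0.5):
        """Illustrative per-pixel temporal threshold: pixels that change
        a lot between frames can hide a stronger watermark. base_jnd and
        gain are hypothetical constants, not values from the paper."""
        activity = np.abs(frame_curr.astype(np.float64) -
                          frame_prev.astype(np.float64))
        return base_jnd + gain * activity

    def embed_watermark(frame_prev, frame_curr, wm_pattern):
        """Embed a {-1, +1} watermark pattern, scaled per pixel so the
        modification stays at the local temporal threshold."""
        t = temporal_contrast_threshold(frame_prev, frame_curr)
        marked = frame_curr.astype(np.float64) + t * wm_pattern
        return np.clip(marked, 0, 255).astype(np.uint8)

    # usage on two synthetic 8-bit frames
    rng = np.random.default_rng(0)
    f0 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    f1 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    wm = rng.choice([-1.0, 1.0], size=(64, 64))
    f1_marked = embed_watermark(f0, f1, wm)
    ```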

Guoxi Wang - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive spread-transform dither modulation using an improved Luminance-masked threshold
    2008 15th IEEE International Conference on Image Processing, 2008
    Co-Authors: Guoxi Wang
    Abstract:

    Based on an improved Watson's model, an adaptive spread-transform dither modulation (ASTDM) algorithm is proposed for efficient watermarking that resists amplitude-scaling and coefficient re-quantization attacks. First, the Luminance Masking component is modified to be consistent with amplitude scaling without affecting the original parameters. Then, by incorporating the improved model into the STDM algorithm, a maximum embedding strength and an appropriate quantization step size that adapts to the local values of the host image are achieved. Experiments demonstrate that, with a high embedding rate and transparency, ASTDM can greatly increase robustness against amplitude scaling while retaining its anti-re-quantization property, which means the two drawbacks of the QIM algorithm are solved simultaneously.
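
    A minimal sketch of the STDM mechanism with a luminance-adaptive step size follows. The embed/detect pair is the standard binary dither-modulation construction; luminance_masked_delta is a stand-in for the paper's modified Watson slacks, and its constants are assumptions. With alpha = 1 the step grows linearly with local luminance, so scaling the whole image by a constant scales the step by the same factor, which is the intuition behind the amplitude-scaling robustness; Watson's original luminance-masking exponent is roughly 0.649.

    ```python
    import numpy as np

    def luminance_masked_delta(block, base_delta=8.0, c0=128.0, alpha=1.0):
        """Luminance-masked quantization step: brighter blocks tolerate a
        larger step. Constants are illustrative, not the paper's values."""
        mean_lum = max(float(block.mean()), 1.0)
        return base_delta * (mean_lum / c0) ** alpha

    def stdm_embed(x, bit, u, delta):
        """Embed one bit: quantize the projection of host vector x onto
        the unit spread vector u with a bit-dependent dither."""
        proj = x @ u
        dither = delta / 4.0 if bit else -delta / 4.0
        q = np.round((proj - dither) / delta) * delta + dither
        return x + (q - proj) * u

    def stdm_detect(y, u, delta):
        """Decode by picking the dither lattice closest to the projection."""
        proj = y @ u
        d1, d0 = delta / 4.0, -delta / 4.0
        e1 = abs(proj - (np.round((proj - d1) / delta) * delta + d1))
        e0 = abs(proj - (np.round((proj - d0) / delta) * delta + d0))
        return int(e1 < e0)

    # usage: embed one bit into a flattened 8x8 block
    rng = np.random.default_rng(1)
    block = rng.integers(0, 256, (8, 8)).astype(np.float64)
    u = np.ones(64) / 8.0                     # unit-norm spread vector
    delta = luminance_masked_delta(block)
    y = stdm_embed(block.ravel(), 1, u, delta)
    assert stdm_detect(y, u, delta) == 1
    ```

    In practice the detector must recompute delta from the received, possibly attacked image, which is why the masking model itself has to behave consistently under amplitude scaling.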

Lee Prangnell - One of the best experts on this subject based on the ideXlab platform.

  • JND-Based Perceptual Video Coding for 4:4:4 Screen Content Data in HEVC
    2018 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2018
    Co-Authors: Lee Prangnell, Victor Sanchez
    Abstract:

    The JCT-VC standardized Screen Content Coding (SCC) extension in the HEVC HM RExt + SCM reference codec offers impressive coding efficiency compared with HM RExt alone; however, it is not significantly perceptually optimized. For instance, it does not include advanced HVS-based perceptual coding methods, such as Just Noticeable Distortion (JND)-based spatiotemporal Masking schemes. In this paper, we propose a novel JND-based perceptual video coding technique for HM RExt + SCM, named SC-PAQ. The proposed method is designed to further improve the compression performance of HM RExt + SCM when applied to YCbCr 4:4:4 SC video data. In the proposed technique, Luminance Masking and chrominance Masking are exploited to perceptually adjust the Quantization Step Size (QStep) at the Coding Block (CB) level. Compared with HM RExt 16.10 + SCM 8.0, the proposed method considerably reduces bitrates (kbps), with a maximum reduction of 48.3%. In addition, subjective evaluations reveal that SC-PAQ achieves visually lossless coding at very low bitrates.
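
    The QStep adjustment can be made concrete with a short Python sketch. Only the QP-to-QStep relationship (QStep doubling every 6 QP steps) is standard HEVC behaviour; the parabolic luma_masking_factor is a hypothetical placeholder for the paper's luminance and chrominance masking functions, with made-up constants.

    ```python
    import math

    def qstep_from_qp(qp):
        """Standard HEVC mapping: QStep doubles every 6 QP steps."""
        return 2.0 ** ((qp - 4) / 6.0)

    def qp_from_qstep(qstep):
        return 4.0 + 6.0 * math.log2(qstep)

    def luma_masking_factor(mean_luma, peak=255.0):
        """Hypothetical weighting: distortion is assumed least visible in
        very dark and very bright CBs, so QStep may grow there."""
        x = 2.0 * (mean_luma / peak) - 1.0    # -1 (dark) .. +1 (bright)
        return 1.0 + 0.5 * x * x

    def cb_perceptual_qp(base_qp, mean_luma):
        """Scale the CB's QStep by the masking factor, then convert back
        to the nearest valid QP."""
        qstep = qstep_from_qp(base_qp) * luma_masking_factor(mean_luma)
        return max(0, min(51, round(qp_from_qstep(qstep))))

    # a dark CB is quantized more coarsely than a mid-grey one
    assert cb_perceptual_qp(32, 10) >= cb_perceptual_qp(32, 128)
    ```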

  • JND-Based Perceptual Video Coding for 4:4:4 Screen Content Data in HEVC
    arXiv: Multimedia, 2017
    Co-Authors: Lee Prangnell, Victor Sanchez
    Abstract:

    The JCT-VC standardized Screen Content Coding (SCC) extension in the HEVC HM RExt + SCM reference codec offers impressive coding efficiency compared with HM RExt alone; however, it is not significantly perceptually optimized. For instance, it does not include advanced HVS-based perceptual coding methods, such as JND-based spatiotemporal Masking schemes. In this paper, we propose a novel JND-based perceptual video coding technique for HM RExt + SCM, named SC-PAQ. The proposed method is designed to further improve the compression performance of HM RExt + SCM when applied to YCbCr 4:4:4 SC video data. In the proposed technique, Luminance Masking and chrominance Masking are exploited to perceptually adjust the Quantization Step Size (QStep) at the Coding Block (CB) level. Compared with HM RExt 16.10 + SCM 8.0, the proposed method considerably reduces bitrates (kbps), with a maximum reduction of 48.3%. In addition, subjective evaluations reveal that SC-PAQ achieves visually lossless coding at very low bitrates.

  • Visually Lossless Coding in HEVC: A High Bit Depth and 4:4:4 Capable JND-Based Perceptual Quantisation Technique for HEVC
    arXiv: Multimedia, 2017
    Co-Authors: Lee Prangnell
    Abstract:

    Due to the increasing prevalence of high bit depth and YCbCr 4:4:4 video data, it is desirable to develop a JND-based visually lossless coding technique that can account for high bit depth 4:4:4 data in addition to standard 8-bit precision chroma-subsampled data. In this paper, we propose a Coding Block (CB)-level JND-based luma and chroma perceptual quantisation technique for HEVC named Pixel-PAQ. Pixel-PAQ exploits both Luminance Masking and chrominance Masking to achieve JND-based visually lossless coding; the proposed method is compatible with high bit depth YCbCr 4:4:4 video data of any resolution. When applied to YCbCr 4:4:4 high bit depth video data, Pixel-PAQ can achieve vast bitrate reductions of up to 75% (68.6% over four QP data points) compared with a state-of-the-art luma-based JND method for HEVC named IDSQ. Moreover, the participants in the subjective evaluations confirm that visually lossless coding is successfully achieved by Pixel-PAQ (at a PSNR value of 28.04 dB in one test).
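
    The JND side of the idea can be sketched as follows, assuming a Chou-and-Li-style background-luminance threshold rescaled for higher bit depths. The constants, the chroma_gain factor, and the function names are hypothetical, not Pixel-PAQ's actual model; the final check captures what "visually lossless" means operationally, namely that no pixel error exceeds its local JND.

    ```python
    import numpy as np

    def luma_jnd(bg, bit_depth=10):
        """Per-pixel luma JND as a function of background luminance,
        shaped like Chou & Li's classic 8-bit model (higher thresholds
        in dark and bright regions), rescaled to the target bit depth.
        Constants are illustrative."""
        scale = ((1 << bit_depth) - 1) / 255.0
        b = bg / scale                        # work in 8-bit units
        dark = 17.0 * (1.0 - np.sqrt(b / 127.0)) + 3.0
        bright = (3.0 / 128.0) * (b - 127.0) + 3.0
        return np.where(b <= 127, dark, bright) * scale

    def chroma_jnd(luma_thresholds, chroma_gain=1.5):
        """Assumption: chroma tolerates somewhat more distortion than
        luma, modelled here as a constant (hypothetical) scaling."""
        return chroma_gain * luma_thresholds

    def visually_lossless(orig, recon, jnd_map):
        """Treat the reconstruction as visually lossless if no pixel
        error exceeds its JND threshold."""
        err = np.abs(orig.astype(np.int64) - recon.astype(np.int64))
        return bool(np.all(err <= jnd_map))

    # usage on synthetic 10-bit data with small coding noise
    rng = np.random.default_rng(2)
    orig = rng.integers(0, 1024, (16, 16))
    recon = np.clip(orig + rng.integers(-2, 3, (16, 16)), 0, 1023)
    print(visually_lossless(orig, recon, luma_jnd(orig)))   # True
    ```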

Victor Sanchez - One of the best experts on this subject based on the ideXlab platform.

  • JND-Based Perceptual Video Coding for 4:4:4 Screen Content Data in HEVC
    2018 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2018
    Co-Authors: Lee Prangnell, Victor Sanchez
    Abstract:

    The JCT-VC standardized Screen Content Coding (SCC) extension in the HEVC HM RExt + SCM reference codec offers impressive coding efficiency compared with HM RExt alone; however, it is not significantly perceptually optimized. For instance, it does not include advanced HVS-based perceptual coding methods, such as Just Noticeable Distortion (JND)-based spatiotemporal Masking schemes. In this paper, we propose a novel JND-based perceptual video coding technique for HM RExt + SCM, named SC-PAQ. The proposed method is designed to further improve the compression performance of HM RExt + SCM when applied to YCbCr 4:4:4 SC video data. In the proposed technique, Luminance Masking and chrominance Masking are exploited to perceptually adjust the Quantization Step Size (QStep) at the Coding Block (CB) level. Compared with HM RExt 16.10 + SCM 8.0, the proposed method considerably reduces bitrates (kbps), with a maximum reduction of 48.3%. In addition, subjective evaluations reveal that SC-PAQ achieves visually lossless coding at very low bitrates.

  • JND-Based Perceptual Video Coding for 4:4:4 Screen Content Data in HEVC
    arXiv: Multimedia, 2017
    Co-Authors: Lee Prangnell, Victor Sanchez
    Abstract:

    The JCT-VC standardized Screen Content Coding (SCC) extension in the HEVC HM RExt + SCM reference codec offers impressive coding efficiency compared with HM RExt alone; however, it is not significantly perceptually optimized. For instance, it does not include advanced HVS-based perceptual coding methods, such as JND-based spatiotemporal Masking schemes. In this paper, we propose a novel JND-based perceptual video coding technique for HM RExt + SCM, named SC-PAQ. The proposed method is designed to further improve the compression performance of HM RExt + SCM when applied to YCbCr 4:4:4 SC video data. In the proposed technique, Luminance Masking and chrominance Masking are exploited to perceptually adjust the Quantization Step Size (QStep) at the Coding Block (CB) level. Compared with HM RExt 16.10 + SCM 8.0, the proposed method considerably reduces bitrates (kbps), with a maximum reduction of 48.3%. In addition, subjective evaluations reveal that SC-PAQ achieves visually lossless coding at very low bitrates.

Daniele D. Giusto - One of the best experts on this subject based on the ideXlab platform.

  • A multi-factors approach for image quality assessment based on a human visual system model
    Signal Processing: Image Communication, 2006
    Co-Authors: Giaime Ginesu, Francesco Massidda, Daniele D. Giusto
    Abstract:

    In this paper, a multi-factor full-reference image quality index is presented. The proposed visual quality metric is based on an effective Human Visual System model. Images are pre-processed in order to take into account Luminance Masking and contrast sensitivity effects. The proposed metric relies on the computation of three distortion factors: blockiness, edge errors and visual impairments, which take into account the typical artifacts introduced by several classes of coders. A pooling algorithm is used in order to obtain a single distortion index. Results show the effectiveness of the proposed approach and its consistency with subjective evaluations.
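
    The pooling stage is the part that is easy to make concrete. The Python sketch below assumes the three distortion factors have already been computed and normalized to [0, 1]; the weights and the Minkowski exponent are illustrative placeholders, not the values fitted in the paper.

    ```python
    import numpy as np

    def pool_factors(blockiness, edge_error, impairment,
                     weights=(0.5, 0.3, 0.2), p=2.0):
        """Weighted Minkowski pooling of three distortion factors into a
        single index (0 = pristine, 1 = worst case, assuming factors
        normalized to [0, 1])."""
        f = np.array([blockiness, edge_error, impairment], dtype=np.float64)
        w = np.array(weights, dtype=np.float64)
        return float((w @ f ** p) ** (1.0 / p))

    # e.g. a strongly block-distorted but otherwise clean image
    print(pool_factors(0.8, 0.1, 0.05))       # ~0.57
    ```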

  • No-reference video quality estimation based on the human visual system for 2.5/3G devices
    Human Vision and Electronic Imaging X, 2005
    Co-Authors: Francesco Massidda, Daniele D. Giusto, Cristian Perra
    Abstract:

    2.5/3G devices should achieve satisfactory QoS, overcoming the drawbacks of mobile standards. In-service (blind) quality monitoring is essential in order to improve perceptual quality according to the Human Visual System. Several techniques have been proposed for image/video quality assessment. A novel no-reference quality index that uses an effective HVS model is proposed. Luminance Masking, the Contrast Sensitivity Function, and temporal Masking are taken into account with fast in-service algorithms. The proposed index is able to assess blockiness distortion with a fast image-domain measure. Compression and post-processing blurring effects are measured with a standard approach. Moving-artifact distortion is evaluated by taking into account the standard deviation with respect to a natural-image statistical model. Several distortion effects that arise in noisy wireless channels at low video-streaming/playback bit rates (e.g., edge busyness and image persistence) are evaluated. A multi-level pooling algorithm (block, temporal-window, frame, and sequence levels) is used. Validation tests have been developed in order to assess index performance and computational complexity. The final measure provides a human-like threshold effect and high correlation with subjective data. Low-complexity algorithms can be derived for real-time, HVS-based QoS management on low-power consumer devices. Different distortion effects (e.g., ringing and jerkiness) can easily be included.
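
    A "fast image-domain measure" for blockiness can be sketched as a boundary-to-interior gradient ratio. The 8-pixel grid, the normalization, and the ratio form below are assumptions in the spirit of the description, not the paper's exact measure.

    ```python
    import numpy as np

    def blockiness(frame, block=8):
        """Mean absolute luminance jump across the coding-block grid,
        normalized by the mean jump elsewhere; values near 1 suggest no
        visible blocking, larger values suggest blocking artifacts."""
        f = frame.astype(np.float64)
        dh = np.abs(np.diff(f, axis=1))           # horizontal gradients
        cols = np.arange(dh.shape[1])
        on_edge = (cols % block) == (block - 1)   # 8-px column borders
        boundary = dh[:, on_edge].mean()
        interior = dh[:, ~on_edge].mean()
        return boundary / max(interior, 1e-6)

    # unstructured noise has no block grid, so the ratio is ~1
    rng = np.random.default_rng(3)
    print(blockiness(rng.integers(0, 256, (64, 64))))
    ```

    A full measure would treat vertical block boundaries symmetrically and could be pooled per block, per frame, and per sequence, as the abstract describes.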

  • A human visual system model for no-reference digital video quality estimation
    Image Quality and System Performance II, 2005
    Co-Authors: Francesco Massidda, Cristian Perra, Daniele D. Giusto
    Abstract:

    No-reference metrics are very useful for in-service streaming applications. In this paper, a blind measure for video quality assessment is presented. The proposed approach takes into account HVS Luminance Masking, Contrast Sensitivity, and Temporal Masking. The video distortion level is then computed by evaluating blockiness, blurring, and moving artifacts. A global quality index is obtained using a multi-dimensional pooling algorithm (block, temporal-window, frame, and sequence levels). Different video standards and several compression ratios have been used. A non-linear regression method has been derived in order to obtain high linear and rank-order correlation between human observer ratings and the proposed HVS-based index. Validation tests have been developed to assess index performance and computational complexity. Experimental results show that high correlation factors are obtained using the HVS models.
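
    The regression step is standard enough to sketch. Below, a four-parameter logistic (a common choice for mapping an objective index onto subjective scores, though the paper does not specify its exact form) is fitted with SciPy; the index values and mean opinion scores are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import spearmanr

    def logistic(q, a, b, c, d):
        """4-parameter logistic mapping an objective index to a
        predicted subjective score."""
        return a / (1.0 + np.exp(-b * (q - c))) + d

    # hypothetical objective index values and mean opinion scores
    q_idx = np.array([0.20, 0.35, 0.50, 0.65, 0.80, 0.90])
    mos = np.array([1.5, 2.2, 3.0, 3.9, 4.4, 4.7])

    params, _ = curve_fit(logistic, q_idx, mos, p0=[4.0, 8.0, 0.5, 1.0])
    pred = logistic(q_idx, *params)

    pearson = np.corrcoef(pred, mos)[0, 1]      # linear correlation
    rho = spearmanr(pred, mos).correlation      # rank-order correlation
    ```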