Luminance Signal

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 4269 Experts worldwide ranked by ideXlab platform

A Jacquin - One of the best experts on this subject based on the ideXlab platform.

  • automatic face location detection and tracking for model assisted coding of video teleconferencing sequences at low bit rates
    Signal Processing-image Communication, 1995
    Co-Authors: Alexandros Eleftheriadis, A Jacquin
    Abstract:

    We present a novel and practical way to integrate techniques from computer vision to low bit-rate coding systems for video teleconferencing applications. Our focus is to locate and track the faces of persons in typical head-and-shoulders video sequences, and to exploit the face location information in a ‘classical’ video coding/decoding system. The motivation is to enable the system to selectively encode various image areas and to produce psychologically pleasing coded images where faces are sharper. We refer to this approach as model-assisted coding. We propose a totally automatic, low-complexity algorithm, which robustly performs face detection and tracking. A priori assumptions regarding sequence content are minimal and the algorithm operates accurately even in cases of partial occlusion by moving objects. Face location information is exploited by a low bit-rate 3D subband-based video coder which uses both a novel model-assisted pixel-based motion compensation scheme, as well as model-assisted dynamic bit allocation with object-selective quantization. By transferring a small fraction of the total available bit-rate from the non-facial to the facial area, the coder produces images with better-rendered facial features. The improvement was found to be perceptually significant on video sequences coded at 96 kbps for an input Luminance Signal in CIF format. The technique is applicable to any video coding scheme that allows for fine-grain quantizer selection (e.g. MPEG, H.261), and can maintain full decoder compatibility.
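    The object-selective quantization idea described above can be sketched in a few lines: spend a finer quantizer step inside a face bounding box and a coarser one elsewhere, shifting bits from the background to the facial area. The box coordinates and step sizes below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def model_assisted_quantize(frame, face_box, q_face=8, q_bg=24):
        """Quantize a luminance frame with a finer step inside the face region.

        frame    : 2-D array of luminance samples
        face_box : (top, left, bottom, right) from a face tracker (hypothetical)
        q_face   : quantizer step inside the face box (finer -> sharper face)
        q_bg     : quantizer step elsewhere (coarser -> fewer bits spent)
        """
        t, l, b, r = face_box
        q = np.full(frame.shape, q_bg, dtype=float)
        q[t:b, l:r] = q_face                    # object-selective quantization
        return np.round(frame / q) * q          # uniform mid-tread quantizer

    # Toy usage: a 16x16 luminance patch with a "face" in the centre.
    frame = np.arange(256, dtype=float).reshape(16, 16)
    coded = model_assisted_quantize(frame, face_box=(4, 4, 12, 12))
    err = np.abs(frame - coded)
    # Reconstruction error is smaller inside the face box than on average.
    print(err[4:12, 4:12].mean() <= err.mean())  # True
    ```

    The same region-dependent quantizer map is what makes the scheme compatible with any coder allowing fine-grain quantizer selection: only the encoder's step-size choices change, not the bitstream syntax.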

  • model assisted coding of video teleconferencing sequences at low bit rates
    International Symposium on Circuits and Systems, 1994
    Co-Authors: Alexandros Eleftheriadis, A Jacquin
    Abstract:

    We present a novel and practical way to integrate techniques from computer vision to low bit rate coding systems for video teleconferencing applications. Our focus is to locate and track the faces of persons in typical head-and-shoulders video sequences, and to exploit the face location information in a "classical" video coding/decoding system. The motivation is to enable the system to selectively encode various image areas and to produce psychologically pleasing coded images where faces are sharper. We refer to this approach as model-assisted coding. We propose a totally automatic, low-complexity algorithm, which robustly performs face detection and tracking. A priori assumptions regarding sequence content are minimal and the algorithm operates accurately even in cases of occlusion by moving objects. Face location information is exploited by a low bit rate 3D subband-based video coder which uses a model-assisted dynamic bit allocation with object-selective quantization. By transferring a small fraction of the total available bit rate from the non-facial to the facial area, the coder produces images with better-rendered facial features. The improvement was found to be perceptually significant on video sequences coded at 96 kbps for an input Luminance Signal in CIF format. The technique is applicable to any video coding scheme that allows for fine-grain quantizer selection (e.g. MPEG, H.261), and can maintain full decoder compatibility.

Alexandros Eleftheriadis - One of the best experts on this subject based on the ideXlab platform.

  • automatic face location detection and tracking for model assisted coding of video teleconferencing sequences at low bit rates
    Signal Processing-image Communication, 1995
    Co-Authors: Alexandros Eleftheriadis, A Jacquin

  • model assisted coding of video teleconferencing sequences at low bit rates
    International Symposium on Circuits and Systems, 1994
    Co-Authors: Alexandros Eleftheriadis, A Jacquin

Sergio M C Nascimento - One of the best experts on this subject based on the ideXlab platform.

  • assessing the effects of dynamic Luminance contrast noise masking on a color discrimination task
    Journal of The Optical Society of America A-optics Image Science and Vision, 2016
    Co-Authors: Joao M M Linhares, Catarina Joao, Eva Silva, Vasco M N De Almeida, Jorge L A Santos, Leticia Alvaro, Sergio M C Nascimento
    Abstract:

    The aim of this work was to assess the influence of dynamic Luminance contrast noise masking (LCNM) on color discrimination for color normal and anomalous trichromats. The stimulus was a colored target on a background presented on a calibrated CRT display. In the static LCNM condition, the background and target consisted of packed circles with variable size and static random Luminance. In the dynamic LCNM condition, a 10 Hz square Luminance Signal was added to each circle. The phase of this Signal was randomized across circles. Discrimination thresholds were estimated along 20 hue directions concurrent at the color of the background. Six observers with normal color vision, six deuteranomalous observers, and three protanomalous observers performed the test in both conditions. With dynamic LCNM, thresholds were significantly lower for anomalous observers but not for normal observers, suggesting a facilitation effect of the masking for anomalous trichromats.
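    The dynamic LCNM stimulus described above can be sketched as per-circle luminance traces: each circle gets a static random base luminance, plus a 10 Hz square wave whose phase is drawn independently per circle. The luminance range, modulation depth, and refresh rate below are illustrative assumptions, not the paper's calibrated values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dynamic_lcnm_luminance(n_circles=50, refresh_hz=100, duration_s=1.0,
                               base_range=(20.0, 60.0), modulation=5.0,
                               signal_hz=10):
        """Per-circle luminance traces for a dynamic LCNM stimulus (sketch).

        Returns an (n_circles, n_frames) array: a static random base luminance
        per circle plus a square wave of `signal_hz` with randomized phase.
        """
        t = np.arange(int(refresh_hz * duration_s)) / refresh_hz
        base = rng.uniform(*base_range, size=(n_circles, 1))    # static random luminance
        phase = rng.uniform(0, 2 * np.pi, size=(n_circles, 1))  # random phase per circle
        square = np.sign(np.sin(2 * np.pi * signal_hz * t + phase))  # 10 Hz square wave
        return base + modulation * square

    lum = dynamic_lcnm_luminance()
    print(lum.shape)  # (50, 100)
    ```

    Randomizing the phase across circles is what turns the modulation into spatially incoherent luminance noise rather than a global flicker.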

Jeanny Herault - One of the best experts on this subject based on the ideXlab platform.

  • a model of colour processing in the retina of vertebrates from photoreceptors to colour opposition and colour constancy phenomena
    Neurocomputing, 1996
    Co-Authors: Jeanny Herault
    Abstract:

    Starting with a biological model of the foveal and parafoveal regions of the retina, this paper shows that simple considerations about the spatiotemporal filtering processed by the retinal neural network can account for colour processing at the level of early vision. We establish that the Red, Green, Blue Signal can be considered as a low-pass Luminance Signal plus a colour-modulated Signal. The structure of the Outer- and Inner Plexiform Layers of the retina leads to spatial low- and high-pass filters which account for the achromatic and transient characteristics of the Y ganglion cells as well as for the spatiotemporal colour-opponent properties of X ganglion cells. Considering this property and the logarithmic transduction of photoreceptors, it is easy to postulate that, after the retina at the level of the Lateral Geniculate Nucleus, a simple low-pass filtering can pave the way to the well known colour constancy phenomenon.
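    The decomposition argued above, an R,G,B signal viewed as a low-pass luminance component plus a colour-modulated remainder, can be sketched numerically. The box filter and the `log1p` transduction below are simplifying assumptions for illustration, not the paper's retinal model.

    ```python
    import numpy as np

    def retina_decompose(rgb, kernel_size=5):
        """Split a 1-D strip of R,G,B samples into a spatially low-pass
        luminance signal and a colour-modulated residual (sketch)."""
        log_rgb = np.log1p(rgb)                    # photoreceptor-like log transduction
        lum = log_rgb.mean(axis=-1)                # achromatic combination of cone signals
        k = np.ones(kernel_size) / kernel_size     # spatial low-pass (box filter)
        lum_lp = np.convolve(lum, k, mode='same')  # low-pass Luminance Signal
        colour = log_rgb - lum_lp[:, None]         # colour-modulated remainder
        return lum_lp, colour

    rgb = np.random.default_rng(1).uniform(0, 255, size=(64, 3))
    lum_lp, colour = retina_decompose(rgb)
    # Low-pass luminance plus the colour signal reconstructs log(1 + RGB).
    print(np.allclose(lum_lp[:, None] + colour, np.log1p(rgb)))  # True
    ```

    By construction the two components sum back to the transduced input, which is the sense in which the RGB signal "can be considered" a luminance term plus a colour-modulated term.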

Nascimento, Sérgio M. C. - One of the best experts on this subject based on the ideXlab platform.

  • Assessing the effects of dynamic Luminance contrast noise masking on a color discrimination task
    Optical Society of America, 2016
    Co-Authors: Linhares, João M. M., João, Catarina Alexandra Rodrigues, Silva, Eva D. G., De Almeida, Vasco M. N., Santos, Jorge L. A., Álvaro Leticia, Nascimento, Sérgio M. C.
    This work was supported by the Departamento de Física of University of Beira Interior, the Centro de Física of Minho University, FEDER through the COMPETE Program, and the Portuguese Foundation for Science and Technology (FCT) under project PTDC/MHC-PCN/4731/2012.