Illumination Variation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 13647 Experts worldwide ranked by ideXlab platform

John Liu - One of the best experts on this subject based on the ideXlab platform.

  • Fast lighting independent background subtraction
    International Journal of Computer Vision, 2000
    Co-Authors: Yuri Ivanov, Aaron Bobick, John Liu
    Abstract:

    This paper describes a new method of fast background subtraction based upon disparity verification that is invariant to arbitrarily rapid run-time changes in Illumination. Using two or more cameras, the method requires the off-line construction of disparity fields mapping the primary (or key) background image to each of the additional auxiliary background images. At runtime, segmentation is performed by checking color intensity values at corresponding pixels. If more than two cameras are available, more robust segmentation can be achieved and, in particular, the occlusion shadows can generally be eliminated as well. Because the method only assumes fixed background geometry, the technique allows for Illumination Variation at runtime. And because no disparity search is performed at run time, the algorithm is easily implemented in real-time on conventional hardware.
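
    The verification step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the integer horizontal-disparity representation, and the fixed intensity threshold are all assumptions introduced here. The key property is that both cameras see the same static surface point at corresponding pixels, so their intensities agree under any Illumination; a foreground object breaks that correspondence.

    ```python
    import numpy as np

    def background_mask(primary, auxiliary, disp, thresh=25.0):
        """Disparity-verification background subtraction (sketch).

        primary, auxiliary: HxW grayscale frames from two fixed cameras.
        disp: HxW integer horizontal disparity field, built OFF-LINE from
              the empty-scene background images; it maps each primary
              pixel to its corresponding auxiliary pixel.
        Returns a boolean mask, True where the pixel is labelled background.
        No disparity search happens at run time -- only a warp and compare.
        """
        h, w = primary.shape
        ys, xs = np.mgrid[0:h, 0:w]
        xa = np.clip(xs + disp, 0, w - 1)                   # warp into auxiliary view
        diff = np.abs(primary.astype(float) - auxiliary[ys, xa].astype(float))
        return diff < thresh                                # intensities agree -> background
    ```

    With more than two cameras, the per-camera masks can be combined (e.g. by intersection) to suppress the occlusion shadows a single auxiliary view leaves behind.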

Yuri Ivanov - One of the best experts on this subject based on the ideXlab platform.

  • Fast lighting independent background subtraction
    International Journal of Computer Vision, 2000
    Co-Authors: Yuri Ivanov, Aaron Bobick, John Liu
    Abstract:

    This paper describes a new method of fast background subtraction based upon disparity verification that is invariant to arbitrarily rapid run-time changes in Illumination. Using two or more cameras, the method requires the off-line construction of disparity fields mapping the primary (or key) background image to each of the additional auxiliary background images. At runtime, segmentation is performed by checking color intensity values at corresponding pixels. If more than two cameras are available, more robust segmentation can be achieved and, in particular, the occlusion shadows can generally be eliminated as well. Because the method only assumes fixed background geometry, the technique allows for Illumination Variation at runtime. And because no disparity search is performed at run time, the algorithm is easily implemented in real-time on conventional hardware.

Jingjing Chen - One of the best experts on this subject based on the ideXlab platform.

  • Illumination Variation resistant video based heart rate monitoring using lab color space
    Optics and Lasers in Engineering, 2021
    Co-Authors: Yuzhong Zhang, Zhe Dong, Kezun Zhang, Shuangbao Shu, Jingjing Chen
    Abstract:

    The remote photoplethysmography technology based on consumer cameras has been demonstrated to be an effective method for heart rate monitoring. However, artifact signals caused by ambient Illumination Variation and facial motion can severely distort the heart rate pulse signal and degrade the measurement accuracy. To address these issues, the conversion from RGB color space to LAB color space is performed to separate the luminance signal, and the smoothness prior approach is employed to remove the stationary artifacts from the raw signals of the A channel and B channel. On this basis, a simple combined signal, the difference between the A channel and B channel, is introduced to extract the heart rate pulse signal. Finally, the purified signal is decomposed into a set of intrinsic mode functions using the ensemble empirical mode decomposition algorithm, and heart rate is estimated from the intrinsic mode function with the highest energy and peak ratio in the range of 0.7 Hz to 3 Hz. To assess the performance of the proposed framework, experiments in different scenarios were performed; the results show that the method can effectively estimate heart rate, with a mean absolute bias of 2.59 beats/min (bpm) and a 95% confidence interval from -7.14 bpm to 3.40 bpm.
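
    The detrend-and-combine stage of the pipeline above can be sketched as follows. This is a hedged illustration, not the paper's code: the RGB-to-LAB conversion is omitted (the inputs are assumed to be per-frame A and B channel means of the face region), the regularization weight `lam` is an arbitrary choice, and the final EEMD/IMF-selection step is replaced here by a simple windowed FFT peak pick in the same 0.7 Hz to 3 Hz band.

    ```python
    import numpy as np

    def smoothness_priors_detrend(z, lam=10.0):
        # Smoothness-priors detrending (Tarvainen-style): subtracts the
        # slowly varying trend, e.g. residual illumination drift.
        n = len(z)
        I = np.eye(n)
        D2 = np.diff(I, n=2, axis=0)            # second-difference operator
        return (I - np.linalg.inv(I + lam**2 * D2.T @ D2)) @ z

    def estimate_hr(a_trace, b_trace, fps):
        """a_trace, b_trace: per-frame mean A and B values of the face ROI
        (assumed already converted from RGB to LAB). Returns HR in bpm."""
        pulse = smoothness_priors_detrend(a_trace - b_trace)   # A-B combined signal
        spec = np.abs(np.fft.rfft(pulse * np.hanning(len(pulse))))
        freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 3.0)                 # plausible HR range
        return 60.0 * freqs[band][np.argmax(spec[band])]
    ```

    The A-B difference is what gives the illumination robustness: a luminance change that moves both chrominance traces together largely cancels in the subtraction.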

Aaron Bobick - One of the best experts on this subject based on the ideXlab platform.

  • Fast lighting independent background subtraction
    International Journal of Computer Vision, 2000
    Co-Authors: Yuri Ivanov, Aaron Bobick, John Liu
    Abstract:

    This paper describes a new method of fast background subtraction based upon disparity verification that is invariant to arbitrarily rapid run-time changes in Illumination. Using two or more cameras, the method requires the off-line construction of disparity fields mapping the primary (or key) background image to each of the additional auxiliary background images. At runtime, segmentation is performed by checking color intensity values at corresponding pixels. If more than two cameras are available, more robust segmentation can be achieved and, in particular, the occlusion shadows can generally be eliminated as well. Because the method only assumes fixed background geometry, the technique allows for Illumination Variation at runtime. And because no disparity search is performed at run time, the algorithm is easily implemented in real-time on conventional hardware.

Josef Kittler - One of the best experts on this subject based on the ideXlab platform.

  • ambient Illumination Variation removal by active near ir imaging
    Lecture Notes in Computer Science, 2006
    Co-Authors: Xuan Zou, Josef Kittler, Kieron Messer
    Abstract:

    We investigate an active Illumination method to overcome the effect of Illumination Variation in face recognition. Active Near-Infrared (Near-IR) Illumination projected by a Light Emitting Diode (LED) light source is used to provide constant Illumination. The difference between two face images, captured with the LED light on and off respectively, is an image of the face under the LED Illumination alone, independent of ambient Illumination. In preliminary experiments across different Illuminations, across time, and their combinations, significantly better results are achieved in both automatic and semi-automatic face recognition on LED-illuminated faces than on face images under ambient Illuminations.
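
    The core cancellation step is simple enough to sketch directly. The function name and uint8 frame format are assumptions for illustration; the point is that under an additive image-formation model the LED-off frame captures exactly the ambient term, so subtracting it leaves only the LED-lit component.

    ```python
    import numpy as np

    def led_only_image(frame_led_on, frame_led_off):
        """Active near-IR differencing (sketch).

        frame_led_on  ~= ambient + led_component
        frame_led_off ~= ambient
        Subtracting cancels the unknown ambient Illumination, leaving the
        face lit only by the constant LED source (assuming no saturation
        and no motion between the two captures).
        """
        diff = frame_led_on.astype(np.int16) - frame_led_off.astype(np.int16)
        return np.clip(diff, 0, 255).astype(np.uint8)
    ```

    In practice the on/off frames must be captured close together in time, since any motion between them reintroduces artifacts the subtraction cannot cancel.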

  • independent component analysis in a local facial residue space for face recognition
    Pattern Recognition, 2004
    Co-Authors: Taekyun Kim, Hyunwoo Kim, Wonjun Hwang, Josef Kittler
    Abstract:

    In this paper, we propose an Independent Component Analysis (ICA) based face recognition algorithm, which is robust to Illumination and pose Variation. Generally, it is well known that the first few eigenfaces represent Illumination Variation rather than identity. Most Principal Component Analysis (PCA) based methods have overcome Illumination Variation by discarding the projection to a few leading eigenfaces. The space spanned after removing a few leading eigenfaces is called the “residual face space”. We found that ICA in the residual face space provides more efficient encoding in terms of redundancy reduction and robustness to pose Variation as well as Illumination Variation, owing to its ability to represent non-Gaussian statistics. Moreover, a face image is separated into several facial components, local spaces, and each local space is represented by the ICA bases (independent components) of its corresponding residual space. The statistical models of face images in local spaces are relatively simple and facilitate classification by a linear encoding. Various experimental results show that the accuracy of face recognition is significantly improved by the proposed method under large Illumination and pose Variations.

  • independent component analysis in a facial local residue space
    Computer Vision and Pattern Recognition, 2003
    Co-Authors: Taekyun Kim, Hyunwoo Kim, Wonjun Hwang, Seokcheol Kee, Josef Kittler
    Abstract:

    In this paper, we propose an ICA (Independent Component Analysis) based face recognition algorithm, which is robust to Illumination and pose Variation. Generally, it is well known that the first few eigenfaces represent Illumination Variation rather than identity. Most PCA (Principal Component Analysis)-based methods have overcome Illumination Variation by discarding the projection to a few leading eigenfaces. The space spanned after removing a few leading eigenfaces is called the "residual face space". We found that ICA in the residual face space provides more efficient encoding in terms of redundancy reduction and robustness to pose Variation as well as Illumination Variation, owing to its ability to represent non-Gaussian statistics. Moreover, a face image is separated into several facial components, local spaces, and each local space is represented by the ICA bases (independent components) of its corresponding residual space. The statistical models of face images in local spaces are relatively simple and facilitate classification by a linear encoding. Various experimental results show that the accuracy of face recognition is significantly improved by the proposed method under large Illumination and pose Variations.
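
    The residual-face-space construction these two papers build on can be sketched as follows. This is an illustrative sketch only: the function name and the choice of k=3 discarded eigenfaces are assumptions, not values from the papers, and the subsequent ICA step (which the papers run on these residuals, per facial component) is omitted here.

    ```python
    import numpy as np

    def residual_face_space(X, k=3):
        """Project face vectors into the 'residual face space' (sketch).

        X: (n_samples, n_pixels) matrix of vectorised face images.
        The first k eigenfaces mainly encode Illumination Variation rather
        than identity, so their contribution is subtracted out; ICA would
        then be applied to the returned residuals.
        """
        Xc = X - X.mean(axis=0)                 # centre the data
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        lead = Vt[:k]                           # k leading eigenfaces
        return Xc - Xc @ lead.T @ lead          # remove their projection
    ```

    By construction the returned residuals are orthogonal to the discarded eigenfaces, so the Illumination-dominated directions contribute nothing to the downstream ICA representation.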