Background Subtraction

The experts below are selected from a list of 26,784 experts worldwide, ranked by the ideXlab platform.

John Liu - One of the best experts on this subject based on the ideXlab platform.

  • Fast Lighting Independent Background Subtraction
    International Journal of Computer Vision, 2000
    Co-Authors: Yuri Ivanov, Aaron Bobick, John Liu
    Abstract:

    This paper describes a new method of fast background subtraction based upon disparity verification that is invariant to arbitrarily rapid run-time changes in illumination. Using two or more cameras, the method requires the off-line construction of disparity fields mapping the primary (or key) background image to each of the additional (auxiliary) background images. At runtime, segmentation is performed by checking color intensity values at corresponding pixels. If more than two cameras are available, more robust segmentation can be achieved and, in particular, the occlusion shadows can generally be eliminated as well. Because the method only assumes fixed background geometry, the technique allows for illumination variation at runtime. And, because no disparity search is performed at run time, the algorithm is easily implemented in real time on conventional hardware.
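
    The verification step can be pictured concretely. Below is a minimal Python/NumPy sketch, assuming rectified cameras with a purely horizontal, precomputed disparity field; the array names and the mismatch threshold are illustrative choices, not values from the paper.

    import numpy as np

    def disparity_verification_mask(primary, auxiliary, disp, thresh=30.0):
        """Label a pixel foreground when the primary and auxiliary views
        disagree at geometrically corresponding pixels.

        primary, auxiliary : HxWx3 float arrays (current camera frames)
        disp               : HxW integer horizontal disparities, built
                             off-line from images of the empty scene,
                             mapping primary pixels to auxiliary pixels
        """
        h, w = disp.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Column of the corresponding pixel in the auxiliary view,
        # clamped to the image border.
        xs_aux = np.clip(xs + disp, 0, w - 1)
        corresponding = auxiliary[ys, xs_aux]            # HxWx3
        # If the scene still matches the fixed background geometry, both
        # cameras see the same surface point and their colors agree even
        # when the illumination changes; a large mismatch means an object
        # now occupies the pixel.
        mismatch = np.linalg.norm(primary - corresponding, axis=2)
        return mismatch > thresh

    With a third camera, intersecting the masks computed against two auxiliary views suppresses most occlusion-shadow false positives, which is the multi-camera benefit the abstract mentions.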

Janusz Konrad - One of the best experts on this subject based on the ideXlab platform.

  • BSUV-Net: A Fully-Convolutional Neural Network for Background Subtraction of Unseen Videos
    Workshop on Applications of Computer Vision, 2020
    Co-Authors: Ozan M Tezcan, Prakash Ishwar, Janusz Konrad
    Abstract:

    Background subtraction is a basic task in computer vision and video processing, often applied as a pre-processing step for object tracking, people recognition, etc. Recently, a number of successful background-subtraction algorithms have been proposed; however, nearly all of the top-performing ones are supervised. Crucially, their success relies upon the availability of some annotated frames of the test video during training. Consequently, their performance on completely "unseen" videos is undocumented in the literature. In this work, we propose a new, supervised, background-subtraction algorithm for unseen videos (BSUV-Net) based on a fully-convolutional neural network. The input to our network consists of the current frame and two background frames captured at different time scales, along with their semantic segmentation maps. In order to reduce the chance of overfitting, we also introduce a new data-augmentation technique which mitigates the impact of illumination differences between the background frames and the current frame. On the CDNet-2014 dataset, BSUV-Net outperforms state-of-the-art algorithms evaluated on unseen videos in terms of several metrics, including F-measure, recall, and precision.
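
    To make the input description concrete, here is a minimal PyTorch sketch of the channel stacking the abstract describes, with a stand-in fully-convolutional network; the channel layout, the roles assigned to the two background frames, and the tiny architecture are this sketch's assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn

    def build_input(frame, bg_recent, bg_empty,
                    seg_frame, seg_recent, seg_empty):
        # frame / bg_* : 3xHxW RGB tensors (current frame plus two
        # background frames captured at different time scales).
        # seg_*        : 1xHxW semantic-segmentation maps.
        # Result: one 12xHxW tensor fed to the network.
        return torch.cat([frame, seg_frame,
                          bg_recent, seg_recent,
                          bg_empty, seg_empty], dim=0)

    class TinyFCN(nn.Module):
        # Stand-in fully-convolutional network: no fully-connected
        # layers, so any frame size maps to a same-size probability map.
        def __init__(self, in_ch=12):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1), nn.Sigmoid())

        def forward(self, x):      # x: Bx12xHxW
            return self.net(x)     # Bx1xHxW foreground probabilities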

  • Foreground-Adaptive Background Subtraction
    IEEE Signal Processing Letters, 2009
    Co-Authors: J. M. McHugh, Janusz Konrad, Venkatesh Saligrama, Pierre-Marc Jodoin
    Abstract:

    Background subtraction is a powerful mechanism for detecting change in a sequence of images that finds many applications. The most successful background subtraction methods apply probabilistic models to background intensities evolving in time; nonparametric and mixture-of-Gaussians models are but two examples. The main difficulty in designing a robust background subtraction algorithm is the selection of a detection threshold. In this paper, we adapt this threshold to varying video statistics by means of two statistical models. In addition to a nonparametric background model, we introduce a foreground model based on a small spatial neighborhood to improve discrimination sensitivity. We also apply a Markov model to the change labels to improve the spatial coherence of the detections. The proposed methodology is applicable to other background models as well.
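
    The threshold-adaptation idea can be sketched as a per-pixel likelihood-ratio test in Python/NumPy; the Gaussian kernel, the bandwidths, and the fallback threshold below are illustrative choices, not the letter's settings.

    import numpy as np

    def kde_likelihood(value, samples, bandwidth):
        # Nonparametric (kernel) density estimate of `value`
        # given past intensity samples.
        z = (value - samples) / bandwidth
        return np.mean(np.exp(-0.5 * z * z)) / (bandwidth * np.sqrt(2 * np.pi))

    def label_pixel(value, bg_samples, fg_neighborhood,
                    bw_bg=10.0, bw_fg=15.0):
        """Return True (foreground) when the foreground model explains
        the observed intensity better than the background model.

        bg_samples      : recent background intensities at this pixel
        fg_neighborhood : intensities from a small spatial neighborhood
                          recently labeled foreground (may be empty)
        """
        p_bg = kde_likelihood(value, np.asarray(bg_samples, float), bw_bg)
        if len(fg_neighborhood) == 0:
            # No local foreground evidence: plain background test.
            return p_bg < 1e-4
        p_fg = kde_likelihood(value, np.asarray(fg_neighborhood, float), bw_fg)
        # Likelihood-ratio test: the implied detection threshold now
        # adapts to local video statistics instead of being fixed.
        # A Markov model on the resulting labels (not shown) would add
        # the spatial coherence the abstract mentions.
        return p_fg > p_bg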

  • Statistical Background Subtraction Using Spatial Cues
    IEEE Transactions on Circuits and Systems for Video Technology, 2007
    Co-Authors: Pierre-Marc Jodoin, Max Mignotte, Janusz Konrad
    Abstract:

    Most statistical background subtraction techniques are based on the analysis of temporal color/intensity distributions. However, learning statistics on a series of time frames can be problematic, especially when no frame free of moving objects is available or when the available memory is not sufficient to store the series of frames needed for learning. In this letter, we propose a spatial variation to the traditional temporal framework. The proposed framework allows statistical motion detection with methods trained on one background frame instead of a series of frames, as is usually the case. Our framework includes two spatial background subtraction approaches suitable for different applications. The first approach is meant for scenes having a nonstatic background due to noise, camera jitter, or animation in the scene (e.g., waving trees, fluttering leaves). This approach models each pixel with two PDFs, one unimodal and one multimodal, both trained on one background frame. In this way, the method can handle backgrounds with static and nonstatic areas. The second spatial approach is designed to use as little processing time and memory as possible. Based on the assumption that neighboring pixels often share a similar temporal distribution, this second approach models the background with one global mixture of Gaussians.
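
    Here is a minimal sketch of the second, lightweight approach, using scikit-learn and assuming that the spatial samples of a single background frame can stand in for temporal ones; the component count, the threshold-based decision rule, and the log-likelihood cutoff are this sketch's assumptions, not the letter's exact values.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_background_gmm(bg_frame, n_components=5):
        # bg_frame: HxWx3 array holding one background frame. All of its
        # pixel colors train a single global mixture of Gaussians.
        pixels = bg_frame.reshape(-1, 3).astype(float)
        return GaussianMixture(n_components=n_components).fit(pixels)

    def detect_foreground(frame, gmm, log_thresh=-15.0):
        # Pixels whose color is unlikely under the global background
        # mixture are flagged as foreground.
        scores = gmm.score_samples(frame.reshape(-1, 3).astype(float))
        return (scores < log_thresh).reshape(frame.shape[:2])

    The first approach, with one unimodal and one multimodal PDF per pixel, is heavier and is not sketched here.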

Yuri Ivanov - One of the best experts on this subject based on the ideXlab platform.

  • Fast Lighting Independent Background Subtraction
    International Journal of Computer Vision, 2000
    Co-Authors: Yuri Ivanov, Aaron Bobick, John Liu
    Abstract:

    This paper describes a new method of fast background subtraction based upon disparity verification that is invariant to arbitrarily rapid run-time changes in illumination. Using two or more cameras, the method requires the off-line construction of disparity fields mapping the primary (or key) background image to each of the additional (auxiliary) background images. At runtime, segmentation is performed by checking color intensity values at corresponding pixels. If more than two cameras are available, more robust segmentation can be achieved and, in particular, the occlusion shadows can generally be eliminated as well. Because the method only assumes fixed background geometry, the technique allows for illumination variation at runtime. And, because no disparity search is performed at run time, the algorithm is easily implemented in real time on conventional hardware.

Aaron Bobick - One of the best experts on this subject based on the ideXlab platform.

  • Fast Lighting Independent Background Subtraction
    International Journal of Computer Vision, 2000
    Co-Authors: Yuri Ivanov, Aaron Bobick, John Liu
    Abstract:

    This paper describes a new method of fast background subtraction based upon disparity verification that is invariant to arbitrarily rapid run-time changes in illumination. Using two or more cameras, the method requires the off-line construction of disparity fields mapping the primary (or key) background image to each of the additional (auxiliary) background images. At runtime, segmentation is performed by checking color intensity values at corresponding pixels. If more than two cameras are available, more robust segmentation can be achieved and, in particular, the occlusion shadows can generally be eliminated as well. Because the method only assumes fixed background geometry, the technique allows for illumination variation at runtime. And, because no disparity search is performed at run time, the algorithm is easily implemented in real time on conventional hardware.

Ozan M Tezcan - One of the best experts on this subject based on the ideXlab platform.

  • BSUV-Net: A Fully-Convolutional Neural Network for Background Subtraction of Unseen Videos
    Workshop on Applications of Computer Vision, 2020
    Co-Authors: Ozan M Tezcan, Prakash Ishwar, Janusz Konrad
    Abstract:

    Background subtraction is a basic task in computer vision and video processing, often applied as a pre-processing step for object tracking, people recognition, etc. Recently, a number of successful background-subtraction algorithms have been proposed; however, nearly all of the top-performing ones are supervised. Crucially, their success relies upon the availability of some annotated frames of the test video during training. Consequently, their performance on completely "unseen" videos is undocumented in the literature. In this work, we propose a new, supervised, background-subtraction algorithm for unseen videos (BSUV-Net) based on a fully-convolutional neural network. The input to our network consists of the current frame and two background frames captured at different time scales, along with their semantic segmentation maps. In order to reduce the chance of overfitting, we also introduce a new data-augmentation technique which mitigates the impact of illumination differences between the background frames and the current frame. On the CDNet-2014 dataset, BSUV-Net outperforms state-of-the-art algorithms evaluated on unseen videos in terms of several metrics, including F-measure, recall, and precision.