Foreground Object

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 8601 Experts worldwide ranked by ideXlab platform

Marek Gorgon - One of the best experts on this subject based on the ideXlab platform.

  • Foreground Object segmentation in RGB-D data implemented on GPU
    KKA, 2020
    Co-Authors: Piotr Andrzej Janus, Tomasz Kryjak, Marek Gorgon
    Abstract:

    This paper presents a GPU implementation of two Foreground Object segmentation algorithms: Gaussian Mixture Model (GMM) and Pixel-Based Adaptive Segmenter (PBAS), modified to support RGB-D data. The simultaneous use of colour (RGB) and depth (D) data improves segmentation accuracy, especially in the case of colour camouflage, illumination changes and shadows. Three GPUs were used to accelerate the calculations: the embedded NVIDIA Jetson TX2 (Maxwell architecture), the mobile NVIDIA GeForce GTX 1050m (Pascal architecture) and the high-performance NVIDIA RTX 2070 (Turing architecture). Segmentation accuracy comparable to previously published work was obtained, and the GPU platform made real-time image processing possible. In addition, the system has been adapted to work with two Intel RealSense RGB-D sensors: the D415 and the D435.
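The per-pixel idea behind the first of the two algorithms can be sketched in plain Python. This is a minimal, single-pixel, scalar-intensity sketch of GMM background subtraction, not the authors' GPU implementation; the parameter values (learning rate, match threshold, variance floor) and the simplified weight-based Foreground test are illustrative assumptions.

```python
import math

class PixelGMM:
    """Gaussian Mixture Model for one pixel's intensity (illustrative sketch)."""

    def __init__(self, n_modes=3, alpha=0.05, init_var=30.0, match_sigmas=2.5):
        self.alpha = alpha                # learning rate (assumed value)
        self.match_sigmas = match_sigmas  # match band in standard deviations
        self.init_var = init_var
        # each mode is [weight, mean, variance]
        self.modes = [[1.0 / n_modes, 128.0, init_var] for _ in range(n_modes)]

    def update(self, value):
        """Absorb one intensity sample; return True if it looks like Foreground."""
        matched = None
        for mode in self.modes:
            weight, mean, var = mode
            if abs(value - mean) <= self.match_sigmas * math.sqrt(var):
                matched = mode
                break
        if matched is None:
            # no mode explains the sample: replace the weakest mode with it
            weakest = min(self.modes, key=lambda m: m[0])
            weakest[:] = [self.alpha, float(value), self.init_var]
            foreground = True
        else:
            weight, mean, var = matched
            matched[1] = mean + self.alpha * (value - mean)
            # variance floor keeps the match band from collapsing to zero
            matched[2] = max(var + self.alpha * ((value - mean) ** 2 - var), 4.0)
            matched[0] = weight + self.alpha * (1.0 - weight)
            # simplified decision: a well-established (heavy) mode is background
            foreground = matched[0] < 0.5
        total = sum(m[0] for m in self.modes)
        for m in self.modes:
            m[0] /= total                 # renormalise the mixture weights
        return foreground
```

Feeding a stable intensity makes its mode dominant, so later samples of that value are classified as background, while a sudden outlier replaces the weakest mode and is flagged as Foreground.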

  • Real-time Foreground Object Detection Combining the PBAS Background Modelling Algorithm and Feedback from Scene Analysis Module
    International Journal of Electronics and Telecommunications, 2014
    Co-Authors: Tomasz Kryjak, Mateusz Komorkiewicz, Marek Gorgon
    Abstract:

    The article presents a hardware implementation of the Foreground Object detection algorithm PBAS (Pixel-Based Adaptive Segmenter) with a scene analysis module. A mechanism for static Object detection is proposed, based on consecutive frame differencing. The method distinguishes stopped Foreground Objects (e.g. a car at an intersection, abandoned luggage) from false detections (so-called ghosts) using edge similarity. The improved algorithm was compared with the original version on popular test sequences from the changedetection.net dataset. The obtained results indicate that the proposed approach improves the performance of the method on sequences with stopped Objects. The algorithm has been implemented and successfully verified on a hardware platform with a Virtex 7 FPGA device. The PBAS segmentation, consecutive frame differencing, Sobel edge detection and advanced one-pass connected component analysis modules were designed. The system is capable of processing 50 frames per second at a resolution of 720 × 576 pixels.
    Keywords: PBAS algorithm, Foreground segmentation, Foreground Object detection, background generation, background subtraction, background modelling, image processing and analysis, FPGA, connected component analysis, consecutive frame differencing
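The static-Object mechanism, a pixel that the background model keeps labelling as Foreground while consecutive frames barely change there, can be sketched as follows. The thresholds and the per-pixel counter are illustrative assumptions; the actual design is an FPGA pipeline, and the edge-similarity test for ghost rejection is omitted here.

```python
def update_static_mask(prev_frame, frame, fg_mask, static_count,
                       diff_thr=10, static_frames=25):
    """All arguments are equally sized 2-D lists; returns the static-Object mask.

    static_count is mutated in place so it carries over between frames.
    """
    rows, cols = len(frame), len(frame[0])
    static_mask = [[False] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            moving = abs(frame[y][x] - prev_frame[y][x]) > diff_thr
            if fg_mask[y][x] and not moving:
                static_count[y][x] += 1   # Foreground, but not changing
            else:
                static_count[y][x] = 0    # motion or background resets the count
            static_mask[y][x] = static_count[y][x] >= static_frames
    return static_mask
```

A pixel only enters the static mask after `static_frames` consecutive unchanged Foreground frames (half a second at 50 fps with the assumed default), which is what separates a stopped car from a passing one.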

  • Real-time implementation of Foreground Object detection from a moving camera using the ViBE algorithm
    Computer Science and Information Systems, 2014
    Co-Authors: Tomasz Kryjak, Mateusz Komorkiewicz, Marek Gorgon
    Abstract:

    The article presents a real-time hardware implementation of Foreground Object detection for a non-static camera setup. The system consists of two parts: calculation of the displacement between two consecutive frames using a correlation-based corner tracker, and the background generation method ViBE (Visual Background Extractor). The paper discusses details of the hardware modules used, resource utilization, computing performance and power dissipation. The solution was evaluated on sequences recorded with both static and moving cameras, and was successfully tested on a hardware platform with an FPGA device. It processes a 720×576 pixel, 50 frames-per-second video stream in real time.
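The ViBE model at the heart of the system can be sketched for a single pixel: a bag of past samples, a radius-based match test, and conservative random updates. The defaults below follow the commonly cited ViBE parameters (20 samples, radius 20, 2 matches, subsampling factor 16); the spatial neighbour propagation of the full algorithm is omitted for brevity.

```python
import random

class ViBEPixel:
    """One pixel's ViBE background model (single-pixel sketch)."""

    def __init__(self, init_value, n_samples=20, radius=20,
                 min_matches=2, subsampling=16):
        self.samples = [init_value] * n_samples
        self.radius = radius
        self.min_matches = min_matches
        self.subsampling = subsampling    # refresh probability is 1/subsampling

    def classify_and_update(self, value):
        """Return True if the pixel is Foreground in this frame."""
        matches = sum(1 for s in self.samples if abs(s - value) < self.radius)
        if matches < self.min_matches:
            return True                   # too few nearby samples: Foreground
        # background: occasionally overwrite one random sample (conservative update)
        if random.randrange(self.subsampling) == 0:
            self.samples[random.randrange(len(self.samples))] = value
        return False
```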

Danna Gurari - One of the best experts on this subject based on the ideXlab platform.

  • ICCV - Unconstrained Foreground Object Search
    2019 IEEE CVF International Conference on Computer Vision (ICCV), 2019
    Co-Authors: Yinan Zhao, Brian Price, Scott Cohen, Danna Gurari
    Abstract:

    Many people search for Foreground Objects to use when editing images. While existing methods can retrieve candidates to aid in this, they are constrained to returning Objects that belong to a pre-specified semantic class. We instead propose a novel problem of unconstrained Foreground Object (UFO) search and introduce a solution that supports efficient search by encoding the background image in the same latent space as the candidate Foreground Objects. A key contribution of our work is a cost-free, scalable approach for creating a large-scale training dataset with a variety of Foreground Objects of differing semantic categories per image location. Quantitative and human-perception experiments with two diverse datasets demonstrate the advantage of our UFO search solution over related baselines.
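Once a background image and the candidate Foreground Objects are encoded into a shared latent space, the search step itself reduces to nearest-neighbour ranking. The encoders are learned networks and are not reproduced here; the toy embeddings and cosine ranking below are an illustrative sketch of that retrieval step only.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_candidates(background_vec, candidates):
    """candidates maps name -> embedding; returns names, most compatible first."""
    return sorted(candidates,
                  key=lambda name: cosine(background_vec, candidates[name]),
                  reverse=True)
```

In practice the candidate embeddings would be precomputed, so a query costs one encoder pass over the background plus a similarity scan (or an approximate nearest-neighbour index).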

  • Predicting Foreground Object Ambiguity and Efficiently Crowdsourcing the Segmentation(s)
    International Journal of Computer Vision, 2018
    Co-Authors: Danna Gurari, Kun He, Bo Xiong, Jianming Zhang, Mehrnoosh Sameki, Suyog Dutt Jain, Stan Sclaroff, Margrit Betke, Kristen Grauman
    Abstract:

    We propose the ambiguity problem for the Foreground Object segmentation task and motivate the importance of estimating and accounting for this ambiguity when designing vision systems. Specifically, we distinguish between images which lead multiple annotators to segment different Foreground Objects (ambiguous) versus minor inter-annotator differences of the same Object. Taking images from eight widely used datasets, we crowdsource labeling the images as “ambiguous” or “not ambiguous” to segment in order to construct a new dataset we call STATIC. Using STATIC, we develop a system that automatically predicts which images are ambiguous. Experiments demonstrate the advantage of our prediction system over existing saliency-based methods on images from vision benchmarks and images taken by blind people who are trying to recognize Objects in their environment. Finally, we introduce a crowdsourcing system to achieve cost savings for collecting the diversity of all valid “ground truth” Foreground Object segmentations by collecting extra segmentations only when ambiguity is expected. Experiments show our system eliminates up to 47% of human effort compared to existing crowdsourcing methods with no loss in capturing the diversity of ground truths.
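The cost-saving policy, collecting extra segmentations only when ambiguity is expected, can be sketched as a simple budget rule. The threshold and the number of extra annotations are illustrative assumptions; in the paper the ambiguity score comes from the learned prediction system.

```python
def allocate_annotations(ambiguity_scores, threshold=0.5, extra=4):
    """Per-image annotation counts: 1 normally, 1 + extra when flagged ambiguous."""
    return [1 + extra if s >= threshold else 1 for s in ambiguity_scores]

def effort_saved(ambiguity_scores, threshold=0.5, extra=4):
    """Fraction of labels saved versus always collecting 1 + extra per image."""
    adaptive = sum(allocate_annotations(ambiguity_scores, threshold, extra))
    always = len(ambiguity_scores) * (1 + extra)
    return 1 - adaptive / always
```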

Tomasz Kryjak - One of the best experts on this subject based on the ideXlab platform.

  • Foreground Object segmentation in RGB-D data implemented on GPU
    KKA, 2020
    Co-Authors: Piotr Andrzej Janus, Tomasz Kryjak, Marek Gorgon
    Abstract:

    This paper presents a GPU implementation of two Foreground Object segmentation algorithms: Gaussian Mixture Model (GMM) and Pixel-Based Adaptive Segmenter (PBAS), modified to support RGB-D data. The simultaneous use of colour (RGB) and depth (D) data improves segmentation accuracy, especially in the case of colour camouflage, illumination changes and shadows. Three GPUs were used to accelerate the calculations: the embedded NVIDIA Jetson TX2 (Maxwell architecture), the mobile NVIDIA GeForce GTX 1050m (Pascal architecture) and the high-performance NVIDIA RTX 2070 (Turing architecture). Segmentation accuracy comparable to previously published work was obtained, and the GPU platform made real-time image processing possible. In addition, the system has been adapted to work with two Intel RealSense RGB-D sensors: the D415 and the D435.

  • Foreground Object Segmentation in Dynamic Background Scenarios
    Image Processing and Communications, 2014
    Co-Authors: Tomasz Kryjak
    Abstract:

    In the paper, research on Foreground Object segmentation in dynamic background scenarios (i.e. flowing water, moving leaves or shrubs) is described. The effectiveness of different algorithms is evaluated: methods based on a FIFO sample buffer, single-variant and multi-variant (MOG, clustering) approaches, and the recently proposed ViBE and PBAS. A post-processing method that reduces false detections is also proposed. The solution was tested on sequences from the changedetection.net dataset. The obtained results indicate the usefulness of the proposed approach.

  • Real-time Foreground Object Detection Combining the PBAS Background Modelling Algorithm and Feedback from Scene Analysis Module
    International Journal of Electronics and Telecommunications, 2014
    Co-Authors: Tomasz Kryjak, Mateusz Komorkiewicz, Marek Gorgon
    Abstract:

    The article presents a hardware implementation of the Foreground Object detection algorithm PBAS (Pixel-Based Adaptive Segmenter) with a scene analysis module. A mechanism for static Object detection is proposed, based on consecutive frame differencing. The method distinguishes stopped Foreground Objects (e.g. a car at an intersection, abandoned luggage) from false detections (so-called ghosts) using edge similarity. The improved algorithm was compared with the original version on popular test sequences from the changedetection.net dataset. The obtained results indicate that the proposed approach improves the performance of the method on sequences with stopped Objects. The algorithm has been implemented and successfully verified on a hardware platform with a Virtex 7 FPGA device. The PBAS segmentation, consecutive frame differencing, Sobel edge detection and advanced one-pass connected component analysis modules were designed. The system is capable of processing 50 frames per second at a resolution of 720 × 576 pixels.
    Keywords: PBAS algorithm, Foreground segmentation, Foreground Object detection, background generation, background subtraction, background modelling, image processing and analysis, FPGA, connected component analysis, consecutive frame differencing

Qi Tian - One of the best experts on this subject based on the ideXlab platform.

  • Statistical modeling of complex backgrounds for Foreground Object detection
    IEEE Transactions on Image Processing, 2004
    Co-Authors: Liyuan Li, Irene Yu-hua Gu, Weimin Huang, Qi Tian
    Abstract:

    This paper addresses the problem of background modeling for Foreground Object detection in complex environments. A Bayesian framework that incorporates spectral, spatial, and temporal features to characterize the background appearance is proposed. Under this framework, the background is represented by the most significant and frequent features, i.e., the principal features, at each pixel. A Bayes decision rule is derived for background and Foreground classification based on the statistics of principal features. Principal feature representation for both the static and dynamic background pixels is investigated. A novel learning method is proposed to adapt to both gradual and sudden "once-off" background changes. The convergence of the learning process is analyzed and a formula to select a proper learning rate is derived. Under the proposed framework, a novel algorithm for detecting Foreground Objects from complex environments is then established. It consists of change detection, change classification, Foreground segmentation, and background maintenance. Experiments were conducted on image sequences containing targets of interest in a variety of environments, e.g., offices, public buildings, subway stations, campuses, parking lots, airports, and sidewalks. Good results of Foreground detection were obtained. Quantitative evaluation and comparison with the existing method show that the proposed method provides much improved results.
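The core decision rule can be sketched directly: with the class prior and per-class feature statistics in hand, a pixel's feature value v is labelled background when P(b)·P(v|b) exceeds P(f)·P(v|f). The quantised histograms below stand in for the paper's principal-feature statistics and are illustrative assumptions.

```python
def classify(v, p_background, hist_b, hist_f):
    """Bayes decision for a quantised feature value v.

    hist_b and hist_f map feature values to P(v | background) and
    P(v | Foreground); unseen values get probability 0.
    """
    p_foreground = 1.0 - p_background
    score_b = p_background * hist_b.get(v, 0.0)  # proportional to P(b | v)
    score_f = p_foreground * hist_f.get(v, 0.0)  # proportional to P(f | v)
    return "background" if score_b > score_f else "Foreground"
```

With the prior favouring background, only feature values strongly associated with Foreground statistics flip the decision, which is what makes the rule robust on busy but stationary scenes.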

  • Foreground Object detection from videos containing complex background
    ACM Multimedia, 2003
    Co-Authors: Liyuan Li, Irene Yu-hua Gu, Weimin Huang, Qi Tian
    Abstract:

    This paper proposes a novel method for detection and segmentation of Foreground Objects from a video which contains both stationary and moving background Objects and undergoes both gradual and sudden "once-off" changes. A Bayes decision rule for classification of background and Foreground from selected feature vectors is formulated. Under this rule, different types of background Objects will be classified from Foreground Objects by choosing a proper feature vector. The stationary background Object is described by the color feature, and the moving background Object is represented by the color co-occurrence feature. Foreground Objects are extracted by fusing the classification results from both stationary and moving pixels. Learning strategies for the gradual and sudden "once-off" background changes are proposed to adapt to various changes in background through the video. The convergence of the learning process is proved and a formula to select a proper learning rate is also derived. Experiments have shown promising results in extracting Foreground Objects from many complex backgrounds including wavering tree branches, flickering screens and water surfaces, moving escalators, opening and closing doors, switching lights and shadows of moving Objects.

Yinan Zhao - One of the best experts on this subject based on the ideXlab platform.

  • ICCV - Unconstrained Foreground Object Search
    2019 IEEE CVF International Conference on Computer Vision (ICCV), 2019
    Co-Authors: Yinan Zhao, Brian Price, Scott Cohen, Danna Gurari
    Abstract:

    Many people search for Foreground Objects to use when editing images. While existing methods can retrieve candidates to aid in this, they are constrained to returning Objects that belong to a pre-specified semantic class. We instead propose a novel problem of unconstrained Foreground Object (UFO) search and introduce a solution that supports efficient search by encoding the background image in the same latent space as the candidate Foreground Objects. A key contribution of our work is a cost-free, scalable approach for creating a large-scale training dataset with a variety of Foreground Objects of differing semantic categories per image location. Quantitative and human-perception experiments with two diverse datasets demonstrate the advantage of our UFO search solution over related baselines.
