Background Image

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 196,392 Experts worldwide, ranked by the ideXlab platform

John Liu - One of the best experts on this subject based on the ideXlab platform.

  • Fast lighting independent Background subtraction
    International Journal of Computer Vision, 2000
    Co-Authors: Yuri Ivanov, Aaron Bobick, John Liu
    Abstract:

    This paper describes a new method of fast Background subtraction based upon disparity verification that is invariant to arbitrarily rapid run-time changes in illumination. Using two or more cameras, the method requires the off-line construction of disparity fields mapping the primary (or key) Background Image to each of the additional auxiliary Background Images. At runtime, segmentation is performed by checking color intensity values at corresponding pixels. If more than two cameras are available, more robust segmentation can be achieved and, in particular, the occlusion shadows can generally be eliminated as well. Because the method only assumes fixed Background geometry, the technique allows for illumination variation at runtime. And, because no disparity search is performed at run time, the algorithm is easily implemented in real-time on conventional hardware.
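The disparity-verification idea above can be sketched in a few lines of NumPy. This is not the authors' implementation: the per-pixel disparity field (`disp_x`, `disp_y`), the chromaticity-based comparison, and the threshold `tau` are illustrative assumptions; the point is only that a precomputed warp plus a pixelwise comparison needs no run-time disparity search.

```python
import numpy as np

def background_mask(primary, aux, disp_x, disp_y, tau=0.1):
    """Label a pixel as background when the primary and auxiliary views
    agree at the disparity-mapped location (fixed background geometry).
    Colours are compared as chromaticities so the test is tolerant of
    global illumination changes (illustrative choice, not the paper's)."""
    h, w = primary.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Warp primary pixel coordinates into the auxiliary view using the
    # offline-computed disparity field (no search at run time).
    ax = np.clip(xs + disp_x, 0, w - 1).astype(int)
    ay = np.clip(ys + disp_y, 0, h - 1).astype(int)
    warped = aux[ay, ax]

    def chroma(img):
        # Normalised colour is less sensitive to illumination level.
        s = img.sum(axis=-1, keepdims=True) + 1e-6
        return img / s

    diff = np.abs(chroma(primary) - chroma(warped)).sum(axis=-1)
    return diff < tau  # True where the views agree -> background
```

A foreground object at a different depth breaks the correspondence, so its pixels disagree under the warp and fall outside the mask.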

Yuri Ivanov - One of the best experts on this subject based on the ideXlab platform.

  • Fast lighting independent Background subtraction
    International Journal of Computer Vision, 2000
    Co-Authors: Yuri Ivanov, Aaron Bobick, John Liu
    Abstract:

    This paper describes a new method of fast Background subtraction based upon disparity verification that is invariant to arbitrarily rapid run-time changes in illumination. Using two or more cameras, the method requires the off-line construction of disparity fields mapping the primary (or key) Background Image to each of the additional auxiliary Background Images. At runtime, segmentation is performed by checking color intensity values at corresponding pixels. If more than two cameras are available, more robust segmentation can be achieved and, in particular, the occlusion shadows can generally be eliminated as well. Because the method only assumes fixed Background geometry, the technique allows for illumination variation at runtime. And, because no disparity search is performed at run time, the algorithm is easily implemented in real-time on conventional hardware.

Aaron Bobick - One of the best experts on this subject based on the ideXlab platform.

  • Fast lighting independent Background subtraction
    International Journal of Computer Vision, 2000
    Co-Authors: Yuri Ivanov, Aaron Bobick, John Liu
    Abstract:

    This paper describes a new method of fast Background subtraction based upon disparity verification that is invariant to arbitrarily rapid run-time changes in illumination. Using two or more cameras, the method requires the off-line construction of disparity fields mapping the primary (or key) Background Image to each of the additional auxiliary Background Images. At runtime, segmentation is performed by checking color intensity values at corresponding pixels. If more than two cameras are available, more robust segmentation can be achieved and, in particular, the occlusion shadows can generally be eliminated as well. Because the method only assumes fixed Background geometry, the technique allows for illumination variation at runtime. And, because no disparity search is performed at run time, the algorithm is easily implemented in real-time on conventional hardware.

Chin-chen Chang - One of the best experts on this subject based on the ideXlab platform.

  • A novel aesthetic QR code algorithm based on hybrid basis vector matrices
    Symmetry, 2018
    Co-Authors: Weiling Cheng, Zaorang Yang, Shanqing Zhang, Chin-chen Chang
    Abstract:

    Recently, more and more research has focused on the beautification technology of QR (Quick Response) codes. In this paper, a novel algorithm based on the XOR (exclusive OR) mechanism of hybrid basis vector matrices and a Background Image synthetic strategy is proposed. The hybrid basis vector matrices include the reverse basis vector matrix (RBVM) and the positive basis vector matrix (PBVM). Firstly, the RBVM and PBVM are obtained by the Gauss–Jordan elimination method, according to the characteristics of the RS code. Secondly, the parity area of the QR code can be modified with the XOR operation of the RBVM, and the XOR operation of the PBVM is used to change the data area of the QR code. Thus, the QR code can be modified to be very close to the Background Image without impacting the error-correction ability. Finally, in order to further decrease the difference between the QR code and the Background Image, a new synthesis strategy is adopted to obtain a better aesthetic effect. The experimental results show that the method obtains a better visual effect without sacrificing the recognition rate.
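The algebraic core named above — Gauss–Jordan elimination over GF(2), where every row operation is an XOR — can be sketched generically. The exact construction of the RBVM and PBVM from the RS code is specific to the paper; the generator and parity-check matrices in the usage below are a hypothetical (7,4) Hamming code used only to illustrate why XORing basis vectors into a codeword preserves decodability.

```python
import numpy as np

def gf2_rref(M):
    """Gauss-Jordan elimination over GF(2): reduce the rows of a 0/1
    matrix to reduced row-echelon form, using XOR as the row operation."""
    M = M.copy() % 2
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]     # move the pivot row up
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]              # XOR clears the 1 below/above
        r += 1
        if r == rows:
            break
    return M

def gf2_is_codeword(H, v):
    """A word is in the code iff its syndrome w.r.t. parity-check H is zero,
    so XORing any combination of generator rows into a codeword keeps it
    decodable -- the property the basis-vector modification relies on."""
    return not ((H @ v) % 2).any()
```

Because the code is linear, `codeword ^ basis_vector` is again a codeword, which is what lets the module pattern be steered toward the Background Image without breaking the RS decoder.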

  • An aesthetic QR code solution based on error correction mechanism
    Journal of Systems and Software, 2016
    Co-Authors: Jianfeng Lu, Chin-chen Chang
    Abstract:

    Highlights: Our aesthetic QR code generation algorithm is based on the error correction mechanism and uses the characteristics of the QR code. To highlight the important regions of the Background Image, we combine it with saliency detection to find the significant regions. Compared with existing methods, our algorithm can maximize the changeable areas. The algorithm has better aesthetic effects while keeping the rate of successful decoding.

    A QR code (Quick Response code) is a popular two-dimensional matrix that consists of seemingly random black and white square modules. Since the appearance of QR codes is often visually unpleasant, there is increasing demand for aesthetic QR codes. However, a QR code may turn out to be unreadable if its modules are changed inappropriately. Therefore, to resolve this conflict, we propose a method to generate an aesthetic QR code based on the RS (Reed-Solomon) error correction mechanism in the QR code encoding rules. First, according to the characteristics of the QR code, we mark the positions of codewords as a codeword layout. Then, we detect salient regions of the Background Image to generate a saliency map. The next step is to combine the saliency map with the codeword layout to calculate saliency values, then sort and select proper codewords as changeable regions. Finally, we propose hierarchical module replacement rules. The theoretical maximum of the changeable areas is the redundancy capacity T of RS error correction. Compared with existing methods, our algorithm maximizes the changeable areas and highlights the important regions of the Background Image. The algorithm achieves better aesthetic effects while maintaining the rate of successful decoding.
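The sort-and-select step described above might be sketched as follows. The `layout` mapping (codeword index to module positions) and the mean-saliency score are illustrative assumptions, not the paper's exact procedure; the bound `T` stands for the RS redundancy capacity it names.

```python
import numpy as np

def select_changeable_codewords(saliency, layout, T):
    """Rank codewords by the mean saliency of the modules they cover and
    mark up to T of the most salient ones as changeable regions; the RS
    decoder can correct at most T corrupted codewords.
    `layout` maps codeword index -> list of (row, col) module positions."""
    scores = {cw: float(np.mean([saliency[r, c] for r, c in mods]))
              for cw, mods in layout.items()}
    # Highest-saliency codewords first, truncated to the RS capacity T.
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:T]
```

Selecting by saliency concentrates the module changes where the Background Image matters most, while the cap at T keeps the total corruption within what the decoder can correct.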

Sciacchitano A. - One of the best experts on this subject based on the ideXlab platform.

  • Elimination of unsteady Background reflections in PIV Images by anisotropic diffusion
    IOP Publishing, 2019
    Co-Authors: Adatrao S., Sciacchitano A.
    Abstract:

    A novel approach is introduced that allows the elimination of undesired laser light reflections from particle Image velocimetry (PIV) Images. The approach relies upon anisotropic diffusion of the light intensity, which is used to generate a Background Image to be subtracted from the original Image. The intensity is diffused only along the edges and not across them, thus preserving, in the Background Image, the shape of boundaries such as laser light reflections on solid surfaces. Due to its ability to produce a Background Image from a single snapshot, as opposed to most methods that make use of intensity information in time, the technique is particularly suitable for elimination of reflections in PIV Images of unsteady models, such as transiting objects, propellers, and flapping and pitching wings. The technique is assessed on an experimental test case which considers the flow in front of a propeller, where the laser light reflections on the model's surface preclude accurate determination of the flow velocity. Comparison of the anisotropic diffusion approach with conventional techniques for suppression of light reflections shows the advantages of the former method, especially when reflections need to be removed from individual Images.
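The single-snapshot background estimate can be illustrated with the classic Perona-Malik anisotropic diffusion scheme; the paper's exact diffusion formulation and parameter choices may differ, and the periodic boundary handling via `np.roll` is a simplification for brevity.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.1, lam=0.2):
    """Perona-Malik anisotropic diffusion: smooth the image while the
    edge-stopping function g suppresses diffusion across strong edges,
    so sharp reflection boundaries survive in the estimated background.
    Boundaries are treated as periodic (np.roll) for brevity."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # small where gradients are large
    for _ in range(n_iter):
        # Nearest-neighbour differences (N, S, E, W)
        dN = np.roll(u, 1, 0) - u
        dS = np.roll(u, -1, 0) - u
        dE = np.roll(u, -1, 1) - u
        dW = np.roll(u, 1, 1) - u
        u += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

def remove_reflections(piv_image, **kw):
    """Subtract the diffused background estimate from a single snapshot,
    clipping at zero so particle intensities stay non-negative."""
    bg = perona_malik(piv_image, **kw)
    return np.clip(piv_image - bg, 0, None)
```

Because the background comes from one snapshot rather than a time series, the subtraction remains valid even when the reflecting model moves between frames.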
