Luminance Information

The Experts below are selected from a list of 9891 Experts worldwide ranked by the ideXlab platform

Christophe Collewet - One of the best experts on this subject based on the ideXlab platform.

  • IROS - Using image gradient as a visual feature for visual servoing
    2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010
    Co-Authors: Eric Marchand, Christophe Collewet
    Abstract:

    Direct photometric visual servoing has proved to be an efficient approach to robot positioning. Instead of using classical geometric features such as points, straight lines, pose, or a homography, as is usually done, the Information provided by all pixels in the image is considered. In the past, mainly Luminance Information has been used. In this paper, observing that most of the useful Information in an image lies in its high-frequency areas (i.e. contours), we consider various possible combinations of global visual features based on Luminance and gradient. Experimental results are presented to show the behavior of these features.
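
For readers curious what such a combined feature might look like in practice, the following is a minimal sketch (not the authors' implementation; function names and weights are illustrative) that stacks per-pixel Luminance and gradient magnitude into a single global feature vector whose error could drive a visual-servoing control law.

```python
# Illustrative sketch only: luminance- and gradient-based global visual features.
import numpy as np
import cv2  # OpenCV, assumed available

def photometric_features(image, w_lum=1.0, w_grad=1.0):
    """Stack per-pixel luminance and gradient magnitude into one feature vector."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float64)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    grad_mag = np.sqrt(gx**2 + gy**2)
    return np.concatenate([w_lum * gray.ravel(), w_grad * grad_mag.ravel()])

def feature_error(current, desired):
    """Error e = s(current) - s(desired) that a servoing control law would minimize."""
    return photometric_features(current) - photometric_features(desired)
```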

Eric Marchand - One of the best experts on this subject based on the ideXlab platform.

  • Three-Dimensional Visual Tracking and Pose Estimation in Scanning Electron Microscopes
    2016
    Co-Authors: Le Cui, Eric Marchand, Sinan Haliyo, Stéphane Régnier
    Abstract:

    Visual tracking and estimation of the 3D posture of a micro/nano-object is a key issue in the development of automated manipulation tasks using visual feedback. The 3D posture of the micro-object is estimated based on a template matching algorithm. Nevertheless, a key challenge for visual tracking in a scanning electron microscope (SEM) is the difficulty of observing motion along the depth direction. In this paper, we propose a template-based hybrid visual tracking scheme that uses Luminance Information to estimate the object displacement in the x-y plane and defocus Information to estimate the object depth. This approach is experimentally validated on 4-DoF motion of a sample in an SEM.
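
A rough sketch of the two ingredients named in the abstract is given below, assuming OpenCV: normalized cross-correlation template matching for the x-y displacement, and a variance-of-Laplacian focus measure as a defocus proxy. The mapping from focus measure to depth is left as a hypothetical calibrated function, since the paper's actual defocus-depth model is not reproduced here.

```python
# Minimal sketch of a luminance + defocus tracking step (illustrative only).
import numpy as np
import cv2

def track_xy(frame, template):
    """Locate the template in the frame via normalized cross-correlation."""
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(scores)  # best-match top-left corner
    return x, y

def sharpness(patch):
    """Variance of the Laplacian: a common focus measure; lower = more defocused."""
    return cv2.Laplacian(patch.astype(np.float64), cv2.CV_64F).var()

def estimate_depth(patch, focus_to_depth):
    """Map the focus measure to depth through a (hypothetical) calibrated function."""
    return focus_to_depth(sharpness(patch))
```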

  • IROS - Using image gradient as a visual feature for visual servoing
    2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010
    Co-Authors: Eric Marchand, Christophe Collewet
    Abstract:

    Direct photometric visual servoing has proved to be an efficient approach to robot positioning. Instead of using classical geometric features such as points, straight lines, pose, or a homography, as is usually done, the Information provided by all pixels in the image is considered. In the past, mainly Luminance Information has been used. In this paper, observing that most of the useful Information in an image lies in its high-frequency areas (i.e. contours), we consider various possible combinations of global visual features based on Luminance and gradient. Experimental results are presented to show the behavior of these features.

Shuqiang Jiang - One of the best experts on this subject based on the ideXlab platform.

  • An effective local invariant descriptor combining Luminance and color Information
    International Conference on Multimedia and Expo, 2007
    Co-Authors: Dong Zhang, Weiqiang Wang, Wen Gao, Shuqiang Jiang
    Abstract:

    Extraction of stable local invariant features is very important in many computer vision applications, such as image matching, object recognition, and image retrieval. Most existing local invariant features mainly characterize Luminance Information and neglect color Information. In this paper, we present a new local invariant descriptor characterizing both, which combines three photometric invariant color descriptors with the well-known SIFT descriptor. To reduce the dimension of the combined high-dimensional invariant feature, principal component analysis (PCA) is used. Our experiments show that the proposed local descriptor, combining Luminance and color Information, outperforms descriptors that utilize only a single category of Information, and that combining the three color feature representations is more effective than using only one.
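
The sketch below illustrates the general recipe of the abstract, concatenating SIFT descriptors with a per-keypoint colour descriptor and compressing the result with PCA. The simple chromaticity statistic used here is a stand-in; the paper's specific photometric invariant colour descriptors are not reproduced.

```python
# Illustrative sketch: SIFT + a simple colour descriptor, reduced with PCA.
import numpy as np
import cv2
from sklearn.decomposition import PCA

def combined_descriptors(image_bgr, n_components=64):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()                       # requires OpenCV >= 4.4
    keypoints, sift_desc = sift.detectAndCompute(gray, None)
    # Stand-in colour descriptor: mean chromaticity in a patch around each keypoint.
    color_desc = []
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        r = max(int(kp.size / 2), 1)
        patch = image_bgr[max(y - r, 0):y + r, max(x - r, 0):x + r].astype(np.float64)
        s = patch.sum(axis=2, keepdims=True) + 1e-9
        color_desc.append((patch / s).reshape(-1, 3).mean(axis=0))
    combined = np.hstack([sift_desc, np.array(color_desc)])
    # PCA shrinks the concatenated high-dimensional descriptor.
    n = min(n_components, combined.shape[0], combined.shape[1])
    return PCA(n_components=n).fit_transform(combined)
```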

Frederick A. A. Kingdom - One of the best experts on this subject based on the ideXlab platform.

  • Interactions of Color Vision with Other Visual Modalities
    Human Color Vision, 2016
    Co-Authors: Frederick A. A. Kingdom
    Abstract:

    Color vision is useful not only for seeing hues but also for seeing other visual dimensions, or “modalities,” such as form, depth, material, and motion. The latter use of color vision relies in part on exploiting physical constraints that hold between the patterns of color and Luminance in the natural visual world. On its own, however, that is, in the absence of Luminance Information, color vision is in most cases less effective than Luminance Information for processing other modalities, often requiring more contrast relative to detection threshold to achieve comparable levels of performance. Reasons for this are discussed.

  • Separating colour and Luminance Information in the visual system
    Spatial Vision, 1995
    Co-Authors: Frederick A. A. Kingdom, Kathy T. Mullen
    Abstract:

    In our visual world we can distinguish with ease between chromatic and Luminance contrasts. However, in our retinae most neurones are responsive to both chromatic and Luminance changes and therefore send ambiguous or 'multiplexed' Information to the higher visual centres. Psychophysical evidence suggests that some cortical process must subsequently separate this Information into its chromatic and Luminance components. The purpose of this communication is to review and critically evaluate the different existing schemes for doing this. To assist in this evaluation, a linear systems analysis is employed in which model cortical neurones are imputed with the property of providing Information about either colour or Luminance. It is concluded that there is currently no unified scheme available to explain a separation of colour and Luminance Information in the visual system. Some theoretical considerations and the most promising approaches to solving the problem are noted, but it is suggested that there may be definite limits to the ability of the visual system to achieve complete separation of colour and Luminance from the retinal signal.
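
As a toy illustration of the linear-systems idea (with made-up weights, not a model from the paper), the snippet below mixes a luminance and a chromatic signal into two 'multiplexed' channels and recovers the components by inverting the mixing matrix.

```python
# Toy 'demultiplexing' example with hypothetical mixing weights.
import numpy as np

# Each row: how strongly a channel responds to (luminance, chromatic) contrast.
mixing = np.array([[1.0, 0.6],    # channel 1: mostly luminance, some colour
                   [0.4, 1.0]])   # channel 2: mostly colour, some luminance

signal = np.array([0.8, -0.3])                   # true (luminance, chromatic) contrast
responses = mixing @ signal                      # what the 'retina' transmits (multiplexed)
recovered = np.linalg.solve(mixing, responses)   # a cortical 'unmixing' stage
print(recovered)                                 # -> [ 0.8 -0.3]
```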

S. Weik - One of the best experts on this subject based on the ideXlab platform.

  • 3DIM - Registration of 3-D partial surface models using Luminance and depth Information
    Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling (Cat. No.97TB100134), 1997
    Co-Authors: S. Weik
    Abstract:

    Textured surface models of three-dimensional objects are gaining importance in computer graphics applications. These models often have to be merged from several overlapping partial models, which have to be registered (i.e. the relative transformation between the partial models has to be determined) prior to the merging process. In this paper a method is presented that makes use of both camera-based depth Information (e.g. from stereo) and the Luminance image. The Luminance Information is exploited to determine corresponding point sets on the partial surfaces using an optical flow approach. Quaternions are then employed to determine the transformation between the partial models that minimizes the sum of the 3-D Euclidean distances between the corresponding point sets. In order to find corresponding points on the partial surfaces, the Luminance Information is linearized. The procedure is iterated until convergence is reached. In contrast to using only depth Information, employing Luminance speeds up convergence and reduces remaining degrees of freedom (e.g. when registering sphere-like shapes).
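
The quaternion step mentioned in the abstract corresponds to a standard closed-form solution (Horn's method) for the rigid transform between two sets of already-matched 3-D points. The sketch below shows that step only; the optical-flow correspondence search on the Luminance images is not reproduced, and the code is illustrative rather than the author's implementation.

```python
# Horn's quaternion-based rigid alignment for matched 3-D point sets (sketch).
import numpy as np

def rigid_transform_from_correspondences(P, Q):
    """Return R, t minimizing sum ||Q_i - (R @ P_i + t)||^2 for matched rows of P, Q."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    S = (P - Pc).T @ (Q - Qc)                      # 3x3 cross-covariance
    (Sxx, Sxy, Sxz), (Syx, Syy, Syz), (Szx, Szy, Szz) = S
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    eigvals, eigvecs = np.linalg.eigh(N)
    w, x, y, z = eigvecs[:, np.argmax(eigvals)]    # optimal unit quaternion (w, x, y, z)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
    t = Qc - R @ Pc
    return R, t
```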