Visual Attribute

The Experts below are selected from a list of 13,008 Experts worldwide, ranked by the ideXlab platform.

Minsuk Kang - One of the best experts on this subject based on the ideXlab platform.

  • Visual attribute modulates the time course of iconic memory decay
    Visual Cognition, 2018
    Co-Authors: Woojong Yi, Minsuk Kang
    Abstract:

    Studies on iconic memory demonstrate that rich information from a visual scene quickly becomes unavailable with the passage of time. The decay rate of iconic memory refers to the dynamics of memory availability. The present study investigated the iconic memory decay of different stimulus attributes that comprise an object. Specifically, in Experiment 1, participants were presented with eight coloured numbers (e.g., a red 4) and were required to remember only one attribute, either colour or number, over different blocks of trials. The participants then reported the cued attribute, with the cue's stimulus onset asynchrony (SOA) from the memory array onset varied across trials (0, 100, 200, 300, 500, and 1000 ms). We found that numerical information became unavailable more quickly than colour information, even though memory accuracies at the 0 and 1000 ms SOAs were comparable between the two attributes. In Experiment 2, we replicated the finding that a numerical representation was lost more quickly than a...
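
To make the notion of a decay rate concrete, here is a minimal Python sketch of the kind of exponential-decay model commonly fit to partial-report data: accuracy falls from an initial iconic level toward a durable working-memory asymptote as cue SOA grows, and a smaller decay constant produces the "lost more quickly" pattern. All parameter values are illustrative assumptions, not the fits reported in the paper.

```python
import numpy as np

# Partial-report accuracy is often modeled as exponential decay from an
# initial (iconic) level toward a durable working-memory asymptote:
#   p(soa) = a + (p0 - a) * exp(-soa / tau)
# where tau is the attribute-specific decay constant. The values below
# are illustrative only, not the fits reported by the study.

def partial_report_accuracy(soa_ms, p0=0.95, asymptote=0.60, tau_ms=200.0):
    """Predicted report accuracy as a function of cue SOA (ms)."""
    soa = np.asarray(soa_ms, dtype=float)
    return asymptote + (p0 - asymptote) * np.exp(-soa / tau_ms)

soas = np.array([0, 100, 200, 300, 500, 1000])  # cue SOAs used in Experiment 1

# A faster-decaying attribute (e.g., number) gets a smaller tau than a
# slower-decaying one (e.g., colour); the endpoints stay comparable.
for label, tau in [("number (fast decay)", 120.0), ("colour (slow decay)", 350.0)]:
    acc = partial_report_accuracy(soas, tau_ms=tau)
    print(label, np.round(acc, 3))
```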

  • Visual attribute modulates the time course of iconic memory decay
    Visual Cognition, 2017
    Co-Authors: Minsuk Kang, Kyoung Min Lee
    Abstract:

    Studies on iconic memory demonstrate that rich information from a visual scene quickly becomes unavailable with the passage of time. The decay rate of iconic memory refers to the dynamics of memory...

Sing Bing Kang - One of the best experts on this subject based on the ideXlab platform.

  • Visual attribute transfer through deep image analogy
    International Conference on Computer Graphics and Interactive Techniques, 2017
    Co-Authors: Jing Liao, Yuan Yao, Lu Yuan, Gang Hua, Sing Bing Kang
    Abstract:

    We propose a new technique for visual attribute transfer across images that may have very different appearance but perceptually similar semantic structure. By visual attribute transfer, we mean the transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of "image analogy" [Hertzmann et al. 2001] with features extracted from a Deep Convolutional Neural Network for matching; we call our technique deep image analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style/texture transfer, color/style swap, sketch/painting to photo, and time lapse.
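
The core matching step can be sketched compactly. The fragment below is a simplified illustration rather than the authors' implementation: it computes a dense nearest-neighbor field between two images in VGG-19 feature space at a single coarse layer using brute-force cosine similarity, whereas the published method refines the field coarse-to-fine with a PatchMatch-style search and bidirectional constraints. The file names are placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Simplified sketch of the matching step in deep image analogy: compute a
# dense nearest-neighbor field (NNF) between two images in CNN feature
# space at one coarse layer. "photo.jpg" and "painting.jpg" are placeholder
# inputs; swap in any pair of semantically similar images.

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load(path):
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0)

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()

def features(x, layer=20):  # index 20 is relu4_1 in torchvision's vgg19
    with torch.no_grad():
        for i, m in enumerate(vgg):
            x = m(x)
            if i == layer:
                return x
    return x

fa = features(load("photo.jpg"))     # 1 x C x H x W
fb = features(load("painting.jpg"))

# L2-normalize each location's feature vector, then match by cosine similarity.
a = F.normalize(fa.flatten(2).squeeze(0), dim=0)  # C x (H*W)
b = F.normalize(fb.flatten(2).squeeze(0), dim=0)
sim = a.t() @ b                                   # (Ha*Wa) x (Hb*Wb)
nnf = sim.argmax(dim=1)  # for each location in A, its best match in B
print(nnf.shape)
```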

  • Visual attribute transfer through deep image analogy
    arXiv: Computer Vision and Pattern Recognition, 2017
    Co-Authors: Jing Liao, Yuan Yao, Lu Yuan, Gang Hua, Sing Bing Kang
    Abstract:

    We propose a new technique for visual attribute transfer across images that may have very different appearance but perceptually similar semantic structure. By visual attribute transfer, we mean the transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of "image analogy" with features extracted from a Deep Convolutional Neural Network for matching; we call our technique Deep Image Analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style/texture transfer, color/style swap, sketch/painting to photo, and time lapse.

Nazli Ikizler-Cinbis - One of the best experts on this subject based on the ideXlab platform.

  • Low-level features for visual attribute recognition
    Pattern Recognition Letters, 2016
    Co-Authors: Emine Gul Danaci, Nazli Ikizler-Cinbis
    Abstract:

    Highlights: a comprehensive evaluation of the use of low-level features in attribute recognition; several color, texture, shape, and deep (CNN) features are evaluated; experiments show that the best feature may vary across attribute types; although CNN features outperform the others, HOG and CSIFT are also competitive; weighted late fusion is a more effective strategy for combining low-level features.

    In recent years, visual attributes, which are mid-level representations that describe human-understandable aspects of objects and scenes, have become a popular topic of computer vision research. Visual attributes are used in various tasks, including object recognition, people search, scene recognition, and many more. A critical step in attribute recognition is the extraction of low-level features, which encode the local visual characteristics of images and provide the representation used in the attribute prediction step. In this work, we explore the effects of utilizing different low-level features on learning visual attributes. In particular, we analyze the performance of various shape, color, texture, and deep neural network features. Experiments have been carried out on four different datasets, collected for different visual recognition tasks, and extensive evaluations are reported. Our results show that, while supervised deep features are effective, using them in combination with low-level features can lead to significant improvements in attribute recognition performance.
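
Weighted late fusion, the combination strategy the paper favors, is straightforward to sketch: train one classifier per feature type and mix their probabilistic outputs with fixed weights. The snippet below uses synthetic stand-in features and arbitrary weights purely to illustrate the mechanics; it is not the paper's HOG/CSIFT/CNN pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of weighted late fusion for binary attribute prediction:
# one classifier per low-level feature type, scores combined with fixed
# weights. Features and weights are synthetic placeholders.

rng = np.random.default_rng(0)
n_train, n_test = 200, 50
y_train = rng.integers(0, 2, n_train)
y_test = rng.integers(0, 2, n_test)

# Stand-ins for e.g. HOG, colorSIFT, and CNN descriptors of differing dims.
feature_dims = {"hog": 64, "csift": 128, "cnn": 256}
train = {k: rng.normal(size=(n_train, d)) for k, d in feature_dims.items()}
test = {k: rng.normal(size=(n_test, d)) for k, d in feature_dims.items()}

# Late fusion trains each per-feature classifier independently.
clfs = {k: LogisticRegression(max_iter=1000).fit(train[k], y_train)
        for k in feature_dims}

# Weighted late fusion: a convex combination of per-feature scores.
# In practice the weights would be tuned on held-out validation data.
weights = {"hog": 0.2, "csift": 0.3, "cnn": 0.5}
fused = sum(w * clfs[k].predict_proba(test[k])[:, 1]
            for k, w in weights.items())
y_pred = (fused >= 0.5).astype(int)
print("fused accuracy:", (y_pred == y_test).mean())
```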

Kyoung Min Lee - One of the best experts on this subject based on the ideXlab platform.

Emine Gul Danaci - One of the best experts on this subject based on the ideXlab platform.

  • Low-level features for visual attribute recognition
    Pattern Recognition Letters, 2016
    Co-Authors: Emine Gul Danaci, Nazli Ikizler-Cinbis

  • Görsel nitelik öğrenmede alt düzey özniteliklerin karşılaştırılması (A comparison of low-level features for visual attribute recognition)
    2015
    Co-Authors: Emine Gul Danaci (Hacettepe Üniversitesi)
    Abstract:

    Recently, visual attribute learning and usage have become a popular research topic in computer vision. In this work, we aim to explore which low-level features contribute most to the modeling of visual attributes. In this context, several low-level features that encode color and shape information at various levels are explored, and their contribution to the recognition of attributes is evaluated experimentally. Experimental results demonstrate that the colorSIFT features, which encode local shape information together with color information, and the LBP features, which encode local structure, are both effective for visual attribute recognition. Keywords: visual attributes, low-level features, colorSIFT, LBP.
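
As an illustration of one of the features the paper highlights, the sketch below extracts a uniform LBP histogram with scikit-image; the parameters (8 neighbors, radius 1) are common defaults, not necessarily the paper's settings.

```python
import numpy as np
from skimage import data
from skimage.feature import local_binary_pattern

# Sketch of the LBP descriptor: each pixel is coded by thresholding its
# circular neighborhood against the center value, and the image is
# summarized by a histogram of those codes.

image = data.camera()  # sample grayscale image shipped with scikit-image
P, R = 8, 1
lbp = local_binary_pattern(image, P, R, method="uniform")

# "uniform" LBP yields P + 2 distinct codes; the normalized histogram of
# codes is the final texture descriptor.
n_bins = P + 2
hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
print(hist.round(3))
```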