Saliency Map

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 7155 Experts worldwide ranked by ideXlab platform

Minho Lee - One of the best experts on this subject based on the ideXlab platform.

  • Biologically Motivated Vergence Control System Based on Stereo Saliency Map Model
    'IntechOpen', 2021
    Co-Authors: Sang-woo Ban, Minho Lee
    Abstract:

    We proposed a new biologically motivated vergence control method for an active stereo vision system that mimics human-like stereo visual selective attention. We used a trainable selective attention model that can decide an interesting area through a low-level top-down mechanism, implemented by a Fuzzy ART training model in conjunction with the bottom-up static SM model. In the system, we proposed a landmark selection method using the low-level top-down trainable selective attention model and the IOR regions. Also, a depth estimation method was applied to reflect stereo Saliency. Based on the proposed algorithm, we implemented a human-like active stereo vision system. Computer simulation and experimental results showed the effectiveness of the proposed vergence control method based on the stereo SM model. The practical purpose of the proposed system is to obtain depth information for robot vision with a small computation load, by considering only an interesting object rather than the whole input image. Depth information from the developed system will be used for obstacle avoidance in a robotic system. Also, we are considering a look-up table method to reduce the computation load of the Saliency Map for real-time applications. As further work, we are developing an artificial agent system that tracks a moving person as the main practical application of the proposed system.

  • affective Saliency Map considering psychological distance
    Neurocomputing, 2011
    Co-Authors: Sangwoo Ban, Youngmin Jang, Minho Lee
    Abstract:

    This paper proposes a new affective Saliency Map (SM) model considering psychological distance as well as the pop-out property based on the relative spatial distribution of primitive visual features such as intensity, edge, color, and orientation. By reflecting the congruency between spatial distance, caused by proximal and distal placement in a visual scene, and psychological distance, caused by the way people think about visual stimuli, the proposed SM model can produce more human-like visual selective attention than a conventional SM model based on primary visual perception. In the proposed model, a psychological distance caused by social distance is considered, in which a proximal entity such as a friend becomes more attractive when located near the observer, whereas a distal entity such as an enemy becomes more attractive when located far away. In the experiments, two types of visual stimuli are considered: mono-stimuli and stereo-stimuli. In the case of mono-stimuli, visual stimuli on a picture with psychological depth cues were considered. In the case of stereo-stimuli, depth perception is also considered for obtaining the real spatial distance of a visual target in a visual scene. In order to verify the proposed affective SM model, an eye tracking system was used to measure the visual scan path and the fixation time on specific local areas while human subjects monitored the visual scenes. Experimental results show that the proposed model can generate plausible visual selective attention, properly reflecting both psychological distance and primitive visual stimuli inducing pop-out bottom-up features.
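As a rough illustration of this congruency idea (not the authors' model), the sketch below combines normalized feature maps into a bottom-up map and then reweights it so that positive-valence (proximal) entities gain saliency when near and negative-valence (distal) entities gain saliency when far; the feature maps, depth map, and per-pixel valence labels are all hypothetical inputs:

```python
import numpy as np

def normalize(fmap):
    """Scale a feature map to [0, 1]; a flat map becomes all zeros."""
    rng = fmap.max() - fmap.min()
    return (fmap - fmap.min()) / rng if rng > 0 else np.zeros_like(fmap)

def affective_saliency(feature_maps, spatial_depth, valence):
    """Toy affective SM: average the bottom-up feature maps, then weight
    by a congruency term. `spatial_depth` in [0, 1] (0 = near, 1 = far);
    `valence` per pixel (+1 proximal/"friend", -1 distal/"enemy")."""
    bottom_up = np.mean([normalize(f) for f in feature_maps], axis=0)
    # Congruent cases get full weight: near positive, far negative
    congruency = np.where(valence > 0, 1.0 - spatial_depth, spatial_depth)
    return normalize(bottom_up * (0.5 + 0.5 * congruency))
```

The 0.5 baseline keeps purely bottom-up pop-out visible even where congruency is low; in practice the valence labels would come from an affective or recognition module.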

  • a traffic surveillance system using dynamic Saliency Map and svm boosting
    International Journal of Control Automation and Systems, 2010
    Co-Authors: Jeongwoo Woo, Wono Lee, Minho Lee
    Abstract:

    This paper proposes a traffic surveillance system that can efficiently detect an interesting object and identify vehicles and pedestrians in real traffic situations. The proposed system consists of a moving object detection model and an object identification model. A dynamic Saliency Map, obtained by analyzing the dynamics of successive static Saliency Maps, localizes an attention area in dynamic scenes to focus on a specific moving object for traffic surveillance purposes. The candidate local areas of a moving object are then refined by blob detection processing, including binarization, morphological closing, and labeling. For identifying the class of a moving object, the proposed system uses a hybrid of global and local information in each local area. Although global feature analysis is a compact way to identify an object and provides good accuracy for non-occluded objects, it is sensitive to image translation and occlusion. Therefore, local feature analysis is also considered and combined with the global feature analysis. In order to construct an efficient classifier using the global and local features, this study proposes a novel classifier based on boosting of support vector machines. The proposed object identification model can identify the class of a moving object and discard unexpected candidate areas that do not include an interesting object. As a result, the proposed road surveillance system is able to detect a moving object and identify its class. Experimental results show that the proposed traffic surveillance system can successfully detect specific moving objects.
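The blob-detection step named above (binarization, morphological closing, labeling) can be sketched with standard tools; the threshold and structuring-element size are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy import ndimage

def detect_moving_blobs(dynamic_sm, threshold=0.5, close_size=3):
    """Blob detection on a dynamic saliency map: binarize, apply
    morphological closing to fill small gaps, then label connected
    components; returns each blob's bounding slices and the count."""
    binary = dynamic_sm > threshold
    closed = ndimage.binary_closing(
        binary, structure=np.ones((close_size, close_size)))
    labels, n = ndimage.label(closed)
    return ndimage.find_objects(labels), n
```

Each returned bounding box would then be passed to the identification model (the SVM-boosting classifier) to accept or discard the candidate.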

  • autonomous detector using Saliency Map model and modified mean shift tracking for a blind spot monitor in a car
    International Conference on Machine Learning and Applications, 2008
    Co-Authors: Sungmoon Jeong, Sangwoo Ban, Minho Lee
    Abstract:

    We propose an autonomous blind spot monitoring method using a morphology-based Saliency Map (SM) model and a method combining the scale invariant feature transform (SIFT) with a mean-shift tracking algorithm. The proposed method decides a region of interest (ROI) that includes the blind spot from the successive image frames obtained by side-view cameras. Topology information of the salient areas obtained from the SM model is used to detect candidates for dangerous situations in the ROI, and the SIFT algorithm is used to verify whether the localized candidate area contains an automobile. We developed a modified mean-shift algorithm to track the detected automobile in the blind spot area; it uses an orientation probability histogram for tracking the automobile around the localized area. Experimental results show that the proposed algorithm successfully provides an alarm signal to the driver in dangerous situations caused by an automobile approaching from the side view.
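The paper's modified mean-shift builds its target model from an orientation probability histogram; as a sketch of just the window-shifting core, assuming a back-projected weight map is already available (the weight map, window format, and rounding are illustrative assumptions):

```python
import numpy as np

def mean_shift_window(weights, window, n_iter=20):
    """Iteratively shift a (row, col, height, width) search window to the
    centroid of the weights inside it, as in mean-shift tracking, where
    `weights` would be a histogram back-projection of the target model."""
    r, c, h, w = window
    rows = np.arange(weights.shape[0])[:, None]
    cols = np.arange(weights.shape[1])[None, :]
    for _ in range(n_iter):
        patch = np.zeros_like(weights)
        patch[r:r + h, c:c + w] = weights[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:                       # lost the target entirely
            break
        # Move the window so its centre sits on the weight centroid
        nr = int(np.floor((patch * rows).sum() / total - h / 2 + 0.5))
        nc = int(np.floor((patch * cols).sum() / total - w / 2 + 0.5))
        nr = min(max(nr, 0), weights.shape[0] - h)
        nc = min(max(nc, 0), weights.shape[1] - w)
        if (nr, nc) == (r, c):               # converged
            break
        r, c = nr, nc
    return r, c
```

With an orientation histogram as the model, `weights` would assign each pixel the probability of its local gradient orientation under the target's histogram.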

  • 2008 special issue stereo Saliency Map considering affective factors and selective motion analysis in a dynamic environment
    Neural Networks, 2008
    Co-Authors: Sungmoon Jeong, Sangwoo Ban, Minho Lee
    Abstract:

    We propose new integrated Saliency Map and selective motion analysis models partly inspired by a biological visual attention mechanism. The proposed models consider not only binocular stereopsis, to identify a final attention area so that the system focuses on the closer area as in human binocular vision based on the single eye alignment hypothesis, but also both the static and dynamic features of an input scene. Moreover, the proposed Saliency Map model includes an affective computing process that skips an unwanted area and pays attention to a desired area, reflecting human preference and refusal in subsequent visual search processes. In addition, we show the effectiveness of considering a symmetry feature determined by a neural network and an independent component analysis (ICA) filter, which are helpful for constructing an object-preferable attention model. Also, we propose a selective motion analysis model by integrating the proposed Saliency Map with a neural network for motion analysis. The neural network for motion analysis responds selectively to rotation, expansion, contraction, and planar motion of the optical flow in a selected area. Experiments show that the proposed model can generate plausible scan paths and selective motion analysis results for natural input scenes.
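As an illustrative stand-in for the motion-analysis network (the paper uses a neural network, not this closed-form rule), the dominant flow pattern in a selected area can be read off from the mean divergence and curl of the optical flow field; the 0.1 threshold is an arbitrary assumption:

```python
import numpy as np

def classify_flow(u, v, eps=0.1):
    """Classify a dense flow field (u, v) as expansion, contraction,
    rotation, or planar motion using its mean divergence and curl."""
    div = np.gradient(u, axis=1) + np.gradient(v, axis=0)   # du/dx + dv/dy
    curl = np.gradient(v, axis=1) - np.gradient(u, axis=0)  # dv/dx - du/dy
    d, c = div.mean(), curl.mean()
    if abs(d) < eps and abs(c) < eps:
        return "planar"
    if abs(d) >= abs(c):
        return "expansion" if d > 0 else "contraction"
    return "rotation"
```

A radial field (u = x, v = y) has divergence 2 and no curl, while a circulating field (u = -y, v = x) has curl 2 and no divergence, so the two measures separate the four motion classes cleanly.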

Li Zhaoping - One of the best experts on this subject based on the ideXlab platform.

  • from the optic tectum to the primary visual cortex migration through evolution of the Saliency Map for exogenous attentional guidance
    Current Opinion in Neurobiology, 2016
    Co-Authors: Li Zhaoping
    Abstract:

    Recent data have supported the hypothesis that, in primates, the primary visual cortex (V1) creates a Saliency Map from visual input. The exogenous guidance of attention is then realized by means of monosynaptic projections to the superior colliculus, which can select the most salient location as the target of a gaze shift. V1 is less prominent, or even absent, in lower vertebrates such as fish, whereas the superior colliculus, called the optic tectum in lower vertebrates, also receives retinal input. I review the literature and propose that the Saliency Map has migrated from the tectum to V1 over evolution. In addition, attentional benefits manifested as cueing effects in humans should also be present in lower vertebrates.

  • primary visual cortex as a Saliency Map a parameter free prediction and its test by behavioral data
    PLOS Computational Biology, 2015
    Co-Authors: Li Zhaoping, Li Zhe
    Abstract:

    It has been hypothesized that neural activities in the primary visual cortex (V1) represent a Saliency Map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis' first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the Saliency of that location in guiding attention exogenously. In a visual input containing many bars, one of which is saliently different from all the other, identical, bars, the Saliency at the singleton's location can be measured by the shortness of the reaction time in a visual search for the singleton. The hypothesis quantitatively predicts the whole distribution of reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to the color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention, so that they can be devoted to other functions such as visual decoding and endogenous attention.

  • a theory of a Saliency Map in primary visual cortex v1 tested by psychophysics of colour orientation interference in texture segmentation
    Visual Cognition, 2006
    Co-Authors: Li Zhaoping, Robert Jefferson Snowden
    Abstract:

    It has been proposed that V1 creates a bottom-up Saliency Map, where the Saliency of any location increases with the firing rate of the most active V1 output cell responding to it, regardless of the feature selectivity of the cell. Thus, a red vertical bar may have its Saliency signalled by a cell tuned to red colour, or one tuned to vertical orientation, whichever cell is the most active. This theory predicts interference between colour and orientation features in texture segmentation tasks where bottom-up processes are significant. The theory not only explains existing data but also provides a prediction. A subsequent psychophysical test confirmed the prediction by showing that segmentation of textures of oriented bars became more difficult as the colours of the bars were randomly drawn from more colour categories.
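The max rule at the heart of this hypothesis is simple to state computationally; the dictionary-of-feature-response-maps interface below is an assumption for illustration, not code from the paper:

```python
import numpy as np

def v1_saliency(feature_responses):
    """Max rule of the V1 saliency hypothesis: the saliency at each
    location is the response of the most active cell there, whatever
    feature (colour, orientation, ...) that cell happens to prefer."""
    return np.maximum.reduce(list(feature_responses.values()))
```

A feature-summation model would use a sum over the maps instead; the colour-orientation interference prediction follows from the max rule because a strongly driven colour-tuned cell can dominate the maximum even in a purely orientation-based task.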

  • psychophysical tests of the hypothesis of a bottom up Saliency Map in primary visual cortex
    PLOS Computational Biology, 2005
    Co-Authors: Li Zhaoping, Keith A May
    Abstract:

    A unique vertical bar among horizontal bars is salient and pops out perceptually. Physiological data have suggested that mechanisms in the primary visual cortex (V1) contribute to the high Saliency of such a unique basic feature, but have indicated little regarding whether V1 plays an essential or peripheral role in input-driven or bottom-up Saliency. Meanwhile, a biologically based V1 model has suggested that V1 mechanisms can also explain bottom-up saliencies beyond the pop-out of basic features, such as the low Saliency of a unique conjunction feature (e.g., a red vertical bar among red horizontal and green vertical bars), under the hypothesis that the bottom-up Saliency at any location is signaled by the activity of the most active cell responding to it, regardless of the cell's preferred features such as color and orientation. The model can account for phenomena such as the difficulty of conjunction feature search, asymmetries in visual search, and how background irregularities affect the ease of search. In this paper, we report nontrivial predictions from the V1 Saliency hypothesis, and their psychophysical tests and confirmations. The prediction that most clearly distinguishes the V1 Saliency hypothesis from other models is that task-irrelevant features can interfere in visual search or segmentation tasks that rely significantly on bottom-up Saliency. For instance, irrelevant colors can interfere in an orientation-based task, and the presence of horizontal and vertical bars can impair performance in a task based on oblique bars. Furthermore, properties of the intracortical interactions and neural selectivities in V1 predict specific emergent phenomena associated with visual grouping. Our findings support the idea that a bottom-up Saliency Map can reside at a lower visual area than traditionally expected, with implications for top-down selection mechanisms.

Yasar Abbas Ur Rehman - One of the best experts on this subject based on the ideXlab platform.

  • SCGAN: Saliency Map-guided Colorization with Generative Adversarial Network
    IEEE Transactions on Circuits and Systems for Video Technology, 2020
    Co-Authors: Yuzhi Zhao, Kwok-wai Cheung, Yasar Abbas Ur Rehman
    Abstract:

    Given a grayscale photograph, a colorization system estimates a visually plausible colorful image. Conventional methods often use semantics to colorize grayscale images. However, in these methods only classification semantic information is embedded, resulting in semantic confusion and color bleeding in the final colorized image. To address these issues, we propose a fully automatic Saliency Map-guided Colorization with Generative Adversarial Network (SCGAN) framework. It jointly predicts the colorization and the Saliency Map to minimize semantic confusion and color bleeding in the colorized image. Since global features from a pre-trained VGG-16-Gray network are embedded into the colorization encoder, the proposed SCGAN can be trained with much less data than state-of-the-art methods to achieve perceptually reasonable colorization. In addition, we propose a novel Saliency Map-based guidance method: branches of the colorization decoder are used to predict the Saliency Map as a proxy target. Moreover, two hierarchical discriminators are utilized for the generated colorization and Saliency Map, respectively, in order to strengthen visual perception performance. The proposed system is evaluated on the ImageNet validation set. Experimental results show that SCGAN can generate more reasonable colorized images than state-of-the-art techniques.


Jinwen Tian - One of the best experts on this subject based on the ideXlab platform.

  • aircraft detection in high resolution sar images based on a gradient textural Saliency Map
    Sensors, 2015
    Co-Authors: Yihua Tan, Jinwen Tian
    Abstract:

    This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural Saliency Map under the contextual cues of the apron area. Firstly, candidate regions where aircraft may exist are detected from the apron area. Secondly, a directional local gradient distribution detector is used to obtain a gradient textural Saliency Map restricted to the candidate regions. Finally, the targets are detected by segmenting the Saliency Map with a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that the algorithm can detect aircraft targets quickly and accurately while decreasing the false alarm rate.
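The paper does not spell out which CFAR variant is used; a minimal cell-averaging CFAR sketch over the saliency map might look like the following, with the guard/training window sizes and the threshold factor as illustrative parameters:

```python
import numpy as np
from scipy import ndimage

def ca_cfar_detect(saliency, guard=2, train=4, factor=3.0):
    """Cell-averaging CFAR-style segmentation: each pixel is compared
    against the mean of a surrounding training ring (a guard window
    around the pixel is excluded); pixels exceeding `factor` times the
    local ring mean are declared targets."""
    size = 2 * (guard + train) + 1        # full window side length
    inner = 2 * guard + 1                 # guard window side length
    # Ring sum = full-window sum minus guard-window sum
    big = ndimage.uniform_filter(saliency, size) * size ** 2
    small = ndimage.uniform_filter(saliency, inner) * inner ** 2
    ring_mean = (big - small) / (size ** 2 - inner ** 2)
    return saliency > factor * np.maximum(ring_mean, 1e-12)
```

Because the threshold adapts to the local clutter level, a bright aircraft stands out against the apron background while uniform clutter stays below `factor` times its own mean.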

  • Saliency Map based active contour method for automatic image segmentation
    6th International Symposium on Advanced Optical Manufacturing and Testing Technologies: Optical System Technologies for Manufacturing and Testing, 2012
    Co-Authors: Changcai Yang, Xinyi Zheng, Jinwen Tian, Sheng Zheng
    Abstract:

    In this paper, we present a novel automatic image segmentation method that combines the active contour method with the Saliency Map method. The Saliency Map, obtained by inverse-transforming the spectral residual of the image, brings a priori knowledge to bear on the image segmentation. The initial level set function is constructed from the Saliency Map, so that the level set evolution is initialized automatically. This minimizes the number of iterations of the level set evolution. The efficiency and accuracy of the method are demonstrated by experiments on synthetic and real images.

Sangwoo Ban - One of the best experts on this subject based on the ideXlab platform.

  • affective Saliency Map considering psychological distance
    Neurocomputing, 2011
    Co-Authors: Sangwoo Ban, Youngmin Jang, Minho Lee
    Abstract:

    This paper proposes a new affective Saliency Map (SM) model considering psychological distance as well as the pop-out property based on the relative spatial distribution of primitive visual features such as intensity, edge, color, and orientation. By reflecting the congruency between spatial distance, caused by proximal and distal placement in a visual scene, and psychological distance, caused by the way people think about visual stimuli, the proposed SM model can produce more human-like visual selective attention than a conventional SM model based on primary visual perception. In the proposed model, a psychological distance caused by social distance is considered, in which a proximal entity such as a friend becomes more attractive when located near the observer, whereas a distal entity such as an enemy becomes more attractive when located far away. In the experiments, two types of visual stimuli are considered: mono-stimuli and stereo-stimuli. In the case of mono-stimuli, visual stimuli on a picture with psychological depth cues were considered. In the case of stereo-stimuli, depth perception is also considered for obtaining the real spatial distance of a visual target in a visual scene. In order to verify the proposed affective SM model, an eye tracking system was used to measure the visual scan path and the fixation time on specific local areas while human subjects monitored the visual scenes. Experimental results show that the proposed model can generate plausible visual selective attention, properly reflecting both psychological distance and primitive visual stimuli inducing pop-out bottom-up features.

  • autonomous detector using Saliency Map model and modified mean shift tracking for a blind spot monitor in a car
    International Conference on Machine Learning and Applications, 2008
    Co-Authors: Sungmoon Jeong, Sangwoo Ban, Minho Lee
    Abstract:

    We propose an autonomous blind spot monitoring method using a morphology-based Saliency Map (SM) model and a method combining the scale invariant feature transform (SIFT) with a mean-shift tracking algorithm. The proposed method decides a region of interest (ROI) that includes the blind spot from the successive image frames obtained by side-view cameras. Topology information of the salient areas obtained from the SM model is used to detect candidates for dangerous situations in the ROI, and the SIFT algorithm is used to verify whether the localized candidate area contains an automobile. We developed a modified mean-shift algorithm to track the detected automobile in the blind spot area; it uses an orientation probability histogram for tracking the automobile around the localized area. Experimental results show that the proposed algorithm successfully provides an alarm signal to the driver in dangerous situations caused by an automobile approaching from the side view.

  • 2008 special issue stereo Saliency Map considering affective factors and selective motion analysis in a dynamic environment
    Neural Networks, 2008
    Co-Authors: Sungmoon Jeong, Sangwoo Ban, Minho Lee
    Abstract:

    We propose new integrated Saliency Map and selective motion analysis models partly inspired by a biological visual attention mechanism. The proposed models consider not only binocular stereopsis, to identify a final attention area so that the system focuses on the closer area as in human binocular vision based on the single eye alignment hypothesis, but also both the static and dynamic features of an input scene. Moreover, the proposed Saliency Map model includes an affective computing process that skips an unwanted area and pays attention to a desired area, reflecting human preference and refusal in subsequent visual search processes. In addition, we show the effectiveness of considering a symmetry feature determined by a neural network and an independent component analysis (ICA) filter, which are helpful for constructing an object-preferable attention model. Also, we propose a selective motion analysis model by integrating the proposed Saliency Map with a neural network for motion analysis. The neural network for motion analysis responds selectively to rotation, expansion, contraction, and planar motion of the optical flow in a selected area. Experiments show that the proposed model can generate plausible scan paths and selective motion analysis results for natural input scenes.
