Superpixel

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 10,989 Experts worldwide ranked by the ideXlab platform

Zhi Liu - One of the best experts on this subject based on the ideXlab platform.

  • Spatiotemporal Saliency Detection Based on Superpixel-level Trajectory
    Signal Processing: Image Communication, 2015
    Co-Authors: Zhi Liu, Xiang Zhang, Olivier Le Meur, Liquan Shen
    Abstract:

    In this paper, we propose a novel spatiotemporal saliency model based on Superpixel-level trajectories for saliency detection in videos. The input video is first decomposed into a set of temporally consistent Superpixels, on which Superpixel-level trajectories are directly generated, and motion histograms at Superpixel level as well as frame level are extracted. Based on motion vector fields of multiple successive frames, the inside–outside maps are estimated to roughly indicate whether pixels are inside or outside objects with motion different from background. Then two descriptors, i.e. accumulated motion histogram and trajectory velocity entropy, are exploited to characterize the short-term and long-term temporal features of Superpixel-level trajectories. Based on trajectory descriptors and inside–outside maps, Superpixel-level trajectory distinctiveness is evaluated and trajectory classification is performed to obtain trajectory-level temporal saliency. Superpixel-level and pixel-level temporal saliency maps are generated in turn by exploiting motion similarity with neighboring Superpixels around each trajectory, and color-spatial similarity with neighboring Superpixels around each pixel, respectively. Finally, a quality-guided fusion method is proposed to integrate the pixel-level temporal saliency map with the pixel-level spatial saliency map, which is generated based on global contrast and spatial sparsity of Superpixels, to generate the pixel-level spatiotemporal saliency map with reasonable quality. Experimental results on two public video datasets demonstrate that the proposed model outperforms the state-of-the-art spatiotemporal saliency models on saliency detection performance.
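
The trajectory velocity entropy descriptor mentioned above lends itself to a compact illustration. The sketch below is our own construction based on the abstract's description (the function name, direction binning, and bin count are assumptions, not the paper's exact formulation): a superpixel trajectory with coherent long-term motion concentrates its velocity directions in few bins and scores low, while erratic motion spreads across bins and scores high.

```python
import numpy as np

def velocity_entropy(trajectory, n_bins=8):
    """Entropy of the velocity-direction distribution along one
    superpixel-level trajectory, given as a sequence of (x, y)
    centroids over frames. Coherent long-term motion gives low
    entropy; erratic motion gives high entropy."""
    pts = np.asarray(trajectory, dtype=float)
    v = np.diff(pts, axis=0)                  # per-frame velocity vectors
    angles = np.arctan2(v[:, 1], v[:, 0])     # motion direction in [-pi, pi]
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A straight trajectory concentrates in one direction bin (entropy 0),
# while a jittery random-walk trajectory spreads across bins.
straight = [(i, 2 * i) for i in range(10)]
rng = np.random.default_rng(0)
jitter = np.cumsum(rng.normal(size=(10, 2)), axis=0)
```

The same histogram machinery extends to the accumulated motion histogram descriptor by binning velocity magnitudes as well as directions.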

  • Superpixel-Based Spatiotemporal Saliency Detection
    IEEE Transactions on Circuits and Systems for Video Technology, 2014
    Co-Authors: Zhi Liu, Xiang Zhang, Shuhua Luo, Olivier Le Meur
    Abstract:

    This paper proposes a Superpixel-based spatiotemporal saliency model for saliency detection in videos. Based on the Superpixel representation of video frames, motion histograms and color histograms are extracted at the Superpixel level as local features and frame level as global features. Then, Superpixel-level temporal saliency is measured by integrating motion distinctiveness of Superpixels with a scheme of temporal saliency prediction and adjustment, and Superpixel-level spatial saliency is measured by evaluating global contrast and spatial sparsity of Superpixels. Finally, a pixel-level saliency derivation method is used to generate pixel-level temporal and spatial saliency maps, and an adaptive fusion method is exploited to integrate them into the spatiotemporal saliency map. Experimental results on two public datasets demonstrate that the proposed model outperforms six state-of-the-art spatiotemporal saliency models in terms of both saliency detection and human fixation prediction.
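
The adaptive fusion step can be illustrated with a small sketch. The confidence measure used below (how far a map's mean sits below its peak, so a concentrated map earns more weight) and the function names are our assumptions for illustration; the paper's actual fusion weights are not reproduced here.

```python
import numpy as np

def adaptive_fusion(temporal, spatial, eps=1e-8):
    """Fuse pixel-level temporal and spatial saliency maps with weights
    proportional to each map's 'confidence': a map whose saliency is
    concentrated (peak far above the mean) gets more weight than a
    flat, uninformative one. Illustrative stand-in for the paper's
    adaptive fusion, not the authors' exact formula."""
    def confidence(m):
        return (m.max() - m.mean()) + eps
    wt, ws = confidence(temporal), confidence(spatial)
    fused = (wt * temporal + ws * spatial) / (wt + ws)
    return np.clip(fused, 0.0, 1.0)

# Example: a concentrated temporal map dominates a flat spatial map.
temporal = np.zeros((10, 10)); temporal[5, 5] = 1.0
spatial = np.full((10, 10), 0.5)
fused = adaptive_fusion(temporal, spatial)
```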

  • Superpixel-based saliency detection
    2013
    Co-Authors: Zhi Liu, Olivier Le Meur, Shuhua Luo
    Abstract:

    In this paper, we propose an effective Superpixel-based saliency model. First, the original image is simplified by performing Superpixel segmentation and adaptive color quantization. On the basis of Superpixel representation, inter-Superpixel similarity measures are then calculated based on difference of histograms and spatial distance between each pair of Superpixels. For each Superpixel, its global contrast measure and spatial sparsity measure are evaluated, and refined with the integration of inter-Superpixel similarity measures to finally generate the Superpixel-level saliency map. Experimental results on a dataset containing 1,000 test images with ground truths demonstrate that the proposed saliency model outperforms state-of-the-art saliency models.
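
The two cues named in the abstract, global contrast and spatial sparsity, can be sketched in a toy form. The version below uses mean superpixel colors and centroids rather than the paper's quantized color histograms, and the combination by product is our choice; it only illustrates the intuition that a superpixel is salient when it stands out in color globally and its similar-colored superpixels are spatially compact.

```python
import numpy as np

def superpixel_saliency(colors, centers):
    """Toy per-superpixel saliency from two cues:
    - global contrast: mean color distance to all other superpixels;
    - spatial sparsity: superpixels whose similar-colored peers are
      spatially spread out (background-like) are suppressed.
    `colors` is (N, 3) mean colors, `centers` is (N, 2) centroids in [0, 1]."""
    cdist = np.linalg.norm(colors[:, None] - colors[None, :], axis=2)  # (N, N)
    contrast = cdist.mean(axis=1)
    # color-similarity weights between superpixel pairs
    sim = np.exp(-cdist / (cdist.mean() + 1e-8))
    # similarity-weighted spatial spread of each superpixel's peers
    wmean = (sim[:, :, None] * centers[None]).sum(1) / sim.sum(1, keepdims=True)
    spread = (sim * np.linalg.norm(centers[None] - wmean[:, None], axis=2)).sum(1) / sim.sum(1)
    sparsity = 1.0 / (spread + 1e-8)
    sal = contrast * sparsity
    return sal / (sal.max() + 1e-8)

# One red superpixel among spatially spread blue ones: the red one
# should receive the highest saliency.
colors = np.array([[1, 0, 0], [0, 0, 1], [0, 0, 1], [0, 0, 1], [0, 0, 1]], float)
centers = np.array([[0.5, 0.5], [0, 0], [1, 0], [0, 1], [1, 1]], float)
sal = superpixel_saliency(colors, centers)
```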

  • WIAMIS - Superpixel-based saliency detection
    2013 14th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), 2013
    Co-Authors: Zhi Liu, Olivier Le Meur, Shuhua Luo
    Abstract:

    In this paper, we propose an effective Superpixel-based saliency model. First, the original image is simplified by performing Superpixel segmentation and adaptive color quantization. On the basis of Superpixel representation, inter-Superpixel similarity measures are then calculated based on difference of histograms and spatial distance between each pair of Superpixels. For each Superpixel, its global contrast measure and spatial sparsity measure are evaluated, and refined with the integration of inter-Superpixel similarity measures to finally generate the Superpixel-level saliency map. Experimental results on a dataset containing 1,000 test images with ground truths demonstrate that the proposed saliency model outperforms state-of-the-art saliency models.

Olivier Le Meur - One of the best experts on this subject based on the ideXlab platform.

  • Spatiotemporal Saliency Detection Based on Superpixel-level Trajectory
    Signal Processing: Image Communication, 2015
    Co-Authors: Zhi Liu, Xiang Zhang, Olivier Le Meur, Liquan Shen
    Abstract:

    In this paper, we propose a novel spatiotemporal saliency model based on Superpixel-level trajectories for saliency detection in videos. The input video is first decomposed into a set of temporally consistent Superpixels, on which Superpixel-level trajectories are directly generated, and motion histograms at Superpixel level as well as frame level are extracted. Based on motion vector fields of multiple successive frames, the inside–outside maps are estimated to roughly indicate whether pixels are inside or outside objects with motion different from background. Then two descriptors, i.e. accumulated motion histogram and trajectory velocity entropy, are exploited to characterize the short-term and long-term temporal features of Superpixel-level trajectories. Based on trajectory descriptors and inside–outside maps, Superpixel-level trajectory distinctiveness is evaluated and trajectory classification is performed to obtain trajectory-level temporal saliency. Superpixel-level and pixel-level temporal saliency maps are generated in turn by exploiting motion similarity with neighboring Superpixels around each trajectory, and color-spatial similarity with neighboring Superpixels around each pixel, respectively. Finally, a quality-guided fusion method is proposed to integrate the pixel-level temporal saliency map with the pixel-level spatial saliency map, which is generated based on global contrast and spatial sparsity of Superpixels, to generate the pixel-level spatiotemporal saliency map with reasonable quality. Experimental results on two public video datasets demonstrate that the proposed model outperforms the state-of-the-art spatiotemporal saliency models on saliency detection performance.

  • Superpixel-Based Spatiotemporal Saliency Detection
    IEEE Transactions on Circuits and Systems for Video Technology, 2014
    Co-Authors: Zhi Liu, Xiang Zhang, Shuhua Luo, Olivier Le Meur
    Abstract:

    This paper proposes a Superpixel-based spatiotemporal saliency model for saliency detection in videos. Based on the Superpixel representation of video frames, motion histograms and color histograms are extracted at the Superpixel level as local features and frame level as global features. Then, Superpixel-level temporal saliency is measured by integrating motion distinctiveness of Superpixels with a scheme of temporal saliency prediction and adjustment, and Superpixel-level spatial saliency is measured by evaluating global contrast and spatial sparsity of Superpixels. Finally, a pixel-level saliency derivation method is used to generate pixel-level temporal and spatial saliency maps, and an adaptive fusion method is exploited to integrate them into the spatiotemporal saliency map. Experimental results on two public datasets demonstrate that the proposed model outperforms six state-of-the-art spatiotemporal saliency models in terms of both saliency detection and human fixation prediction.

  • Superpixel-based saliency detection
    2013
    Co-Authors: Zhi Liu, Olivier Le Meur, Shuhua Luo
    Abstract:

    In this paper, we propose an effective Superpixel-based saliency model. First, the original image is simplified by performing Superpixel segmentation and adaptive color quantization. On the basis of Superpixel representation, inter-Superpixel similarity measures are then calculated based on difference of histograms and spatial distance between each pair of Superpixels. For each Superpixel, its global contrast measure and spatial sparsity measure are evaluated, and refined with the integration of inter-Superpixel similarity measures to finally generate the Superpixel-level saliency map. Experimental results on a dataset containing 1,000 test images with ground truths demonstrate that the proposed saliency model outperforms state-of-the-art saliency models.

Chang-su Kim - One of the best experts on this subject based on the ideXlab platform.

  • Superpixels for image and video processing based on proximity-weighted patch matching
    Multimedia Tools and Applications, 2020
    Co-Authors: Se-ho Lee, Won-dong Jang, Chang-su Kim
    Abstract:

    In this paper, a temporal Superpixel algorithm using proximity-weighted patch matching (PPM) is proposed to yield temporally consistent Superpixels for image and video processing. PPM estimates the motion vector of a Superpixel robustly, by considering the patch matching distances of neighboring Superpixels as well as the Superpixel itself. In each frame, we initialize Superpixels by transferring the Superpixel labels of the previous frame using PPM motion vectors. Then, we update the Superpixel labels of boundary pixels by minimizing a cost function, which is composed of feature distance, compactness, contour, and temporal consistency terms. Finally, we carry out Superpixel splitting, merging, and relabeling to regularize Superpixel sizes and correct inaccurate labels. Extensive experimental results confirm that the proposed algorithm outperforms the state-of-the-art conventional algorithms significantly. Also, it is demonstrated that the proposed algorithm can be applied to video object segmentation and video saliency detection tasks.
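
The core PPM idea, scoring each candidate motion vector by the target superpixel's patch distance plus the proximity-weighted patch distances of its neighbors, can be sketched as follows. Square patches, the inverse-distance weight formula, and the exhaustive search window are our simplifications of what the abstract describes.

```python
import numpy as np

def ppm_motion(prev, curr, center, neighbors, patch=5, search=6):
    """Estimate a superpixel's motion vector by proximity-weighted patch
    matching: every candidate (dy, dx) is scored by the target patch's
    matching distance plus the distances of neighboring superpixels'
    patches, weighted by spatial proximity to the target."""
    r = patch // 2
    pts = [np.asarray(center)] + [np.asarray(n) for n in neighbors]
    # proximity weights: closer neighbors influence the score more
    w = np.array([1.0] + [1.0 / (1.0 + np.linalg.norm(p - pts[0]))
                          for p in pts[1:]])
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost, ok = 0.0, True
            for wi, p in zip(w, pts):
                y, x = int(p[0]), int(p[1])
                y2, x2 = y + dy, x + dx
                if not (r <= y < prev.shape[0] - r and r <= x < prev.shape[1] - r
                        and r <= y2 < curr.shape[0] - r
                        and r <= x2 < curr.shape[1] - r):
                    ok = False
                    break
                a = prev[y - r:y + r + 1, x - r:x + r + 1]
                b = curr[y2 - r:y2 + r + 1, x2 - r:x2 + r + 1]
                cost += wi * np.abs(a - b).mean()
            if ok and cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# Synthetic check: the current frame is the previous one shifted by
# (dy, dx) = (2, 3), which PPM should recover.
rng = np.random.default_rng(1)
prev = rng.random((40, 40))
curr = np.roll(np.roll(prev, 2, axis=0), 3, axis=1)
mv = ppm_motion(prev, curr, center=(20, 20),
                neighbors=[(16, 20), (24, 20), (20, 16), (20, 24)])
```

Pooling the neighbors' patch distances is what makes the estimate robust: even if the target superpixel's own patch is ambiguous, the jointly weighted cost is minimized only at the shared motion.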

  • CVPR - Contour-Constrained Superpixels for Image and Video Processing
    2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017
    Co-Authors: Se-ho Lee, Won-dong Jang, Chang-su Kim
    Abstract:

    A novel contour-constrained Superpixel (CCS) algorithm is proposed in this work. We initialize Superpixels and regions in a regular grid and then refine the Superpixel label of each region hierarchically from block to pixel levels. To make Superpixel boundaries compatible with object contours, we propose the notion of contour pattern matching and formulate an objective function including the contour constraint. Furthermore, we extend the CCS algorithm to generate temporal Superpixels for video processing. We initialize Superpixel labels in each frame by transferring those in the previous frame and refine the labels to make Superpixels temporally consistent as well as compatible with object contours. Experimental results demonstrate that the proposed algorithm provides better performance than the state-of-the-art Superpixel methods.
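
The contour-constrained refinement can be illustrated at the pixel level. In the sketch below, a pixel adopts whichever of its own or its 4-neighbors' labels minimizes a color term plus a smoothness term, and the smoothness penalty is waived on strong contours, so superpixel boundaries settle onto the edge map. The single-scale loop, the hand-made cost, and all weights are our simplifications, not the paper's objective function.

```python
import numpy as np

def refine_labels(img, labels, edges, lam=0.3, iters=3):
    """Pixel-level label refinement: color fidelity plus a smoothness
    term gated by the contour map `edges` (label disagreement between
    neighbors is only 'free' where an edge is present)."""
    h, w = labels.shape
    for _ in range(iters):
        means = {l: img[labels == l].mean(axis=0) for l in np.unique(labels)}
        new = labels.copy()
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                cands = {labels[y, x]} | {labels[ny, nx] for ny, nx in nbrs}
                def cost(l):
                    color = np.linalg.norm(img[y, x] - means[l])
                    disagree = sum(labels[ny, nx] != l for ny, nx in nbrs)
                    return color + lam * (1.0 - edges[y, x]) * disagree
                new[y, x] = min(cands, key=cost)
        labels = new
    return labels

# A misplaced initial boundary (col 4) snaps to the color edge and the
# supplied contour map (col 6); border pixels are left untouched.
img = np.zeros((10, 12, 1)); img[:, 6:] = 1.0
labels0 = np.zeros((10, 12), dtype=int); labels0[:, 4:] = 1
edges = np.zeros((10, 12)); edges[:, 6] = 1.0
refined = refine_labels(img, labels0, edges)
```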

  • ICCV - Temporal Superpixels Based on Proximity-Weighted Patch Matching
    2017 IEEE International Conference on Computer Vision (ICCV), 2017
    Co-Authors: Se-ho Lee, Won-dong Jang, Chang-su Kim
    Abstract:

    A temporal Superpixel algorithm based on proximity-weighted patch matching (TS-PPM) is proposed in this work. We develop the proximity-weighted patch matching (PPM), which estimates the motion vector of a Superpixel robustly, by considering the patch matching distances of neighboring Superpixels as well as the target Superpixel. In each frame, we initialize Superpixels by transferring the Superpixel labels of the previous frame using PPM motion vectors. Then, we update the Superpixel labels of boundary pixels, based on a cost function, composed of color, spatial, contour, and temporal consistency terms. Finally, we execute Superpixel splitting, merging, and relabeling to regularize Superpixel sizes and reduce incorrect labels. Experiments show that the proposed algorithm outperforms the state-of-the-art conventional algorithms significantly.

Ling Shao - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive Nonlocal Random Walks for Image Superpixel Segmentation
    IEEE Transactions on Circuits and Systems for Video Technology, 2020
    Co-Authors: Hui Wang, Jianbing Shen, Junbo Yin, Xingping Dong, Hanqiu Sun, Ling Shao
    Abstract:

    In this paper, we propose a novel Superpixel segmentation method using an adaptive nonlocal random walk (ANRW) algorithm. Our method, based on the random walk model, consists of three main steps. In the first step, seed points are produced by a gradient-based method to generate the initial Superpixels. In the second step, the ANRW algorithm refines the initial Superpixels by adaptively adjusting the nonlocal random walk, yielding a better Superpixel segmentation. In the last step, small Superpixels are merged to obtain the final regular and compact Superpixels. The experimental results demonstrate that our method achieves better Superpixel performance than the state-of-the-art methods. Our source code will be available at: http://github.com/shenjianbing/ANRW.
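
The method builds on the seeded random-walk model, and that building block can be shown in a minimal dense-matrix form. The sketch below is the classic seeded random walker on a 4-connected grid (not the paper's adaptive nonlocal weighting), with an illustrative `beta`: each unseeded pixel receives the label whose seeds a random walker is most likely to reach first, found by solving the Laplacian system L_u X = -B M.

```python
import numpy as np

def random_walk_labels(img, seeds, beta=10.0):
    """Seeded random-walk segmentation on a 4-connected pixel grid.
    Edge weights are exp(-beta * |intensity difference|^2); `seeds`
    maps (y, x) positions to integer labels."""
    h, w = img.shape[:2]
    n = h * w
    idx = lambda y, x: y * w + x
    W = np.zeros((n, n))
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y + 1, x), (y, x + 1)):
                if ny < h and nx < w:
                    wt = np.exp(-beta * np.sum((img[y, x] - img[ny, nx]) ** 2))
                    W[idx(y, x), idx(ny, nx)] = W[idx(ny, nx), idx(y, x)] = wt
    L = np.diag(W.sum(1)) - W                         # graph Laplacian
    seed_ids = {idx(y, x): l for (y, x), l in seeds.items()}
    s = sorted(seed_ids)
    u = [i for i in range(n) if i not in seed_ids]
    labels_set = sorted(set(seed_ids.values()))
    M = np.array([[1.0 if seed_ids[i] == l else 0.0 for l in labels_set]
                  for i in s])                        # seed indicator matrix
    B = L[np.ix_(u, s)]
    Lu = L[np.ix_(u, u)]
    X = np.linalg.solve(Lu, -B @ M)                   # label probabilities
    out = np.empty(n, dtype=int)
    for i, l in seed_ids.items():
        out[i] = l
    out[u] = np.array(labels_set)[np.argmax(X, axis=1)]
    return out.reshape(h, w)

# Two flat regions with one seed each: the walker respects the weak
# cross-boundary links and labels each half after its own seed.
img = np.zeros((6, 6)); img[:, 3:] = 1.0
seg = random_walk_labels(img, {(0, 0): 0, (0, 5): 1})
```

A dense Laplacian is fine at this toy scale; practical implementations use sparse solvers.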

  • Real-Time Superpixel Segmentation by DBSCAN Clustering Algorithm
    IEEE Transactions on Image Processing, 2016
    Co-Authors: Jianbing Shen, Wenguan Wang, Xiaopeng Hao, Zhiyuan Liang, Yu Liu, Ling Shao
    Abstract:

    In this paper, we propose a real-time image Superpixel segmentation method that runs at 50 frames/s, based on the density-based spatial clustering of applications with noise (DBSCAN) algorithm. To reduce the computational cost of Superpixel algorithms, we adopt a fast two-step framework. In the first, clustering stage, the DBSCAN algorithm with color-similarity and geometric restrictions rapidly clusters the pixels; in the second, merging stage, small clusters are merged into Superpixels with their neighbors using a distance measure defined on color and spatial features. A robust and simple distance function is defined to obtain better Superpixels in both steps. The experimental results demonstrate that our real-time Superpixel algorithm (50 frames/s) based on DBSCAN clustering outperforms the state-of-the-art Superpixel segmentation methods in terms of both accuracy and efficiency.
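
The two-step framework can be sketched on a grayscale image. The growing rule, the cluster-size cap standing in for the geometric restriction, and all thresholds below are our illustrative choices rather than the paper's exact distance functions.

```python
import numpy as np
from collections import deque

def dbscan_superpixels(img, color_tol=0.1, max_size=64, min_size=8):
    """Two-step scheme in the spirit of the abstract, on a grayscale image:
    (1) DBSCAN-style region growing clusters 4-connected pixels whose
        intensities differ by less than color_tol, capped at max_size
        pixels as a stand-in for the geometric restriction;
    (2) clusters smaller than min_size are merged into the adjacent
        cluster with the closest mean intensity."""
    h, w = img.shape
    labels = -np.ones((h, w), dtype=int)
    cur = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            labels[sy, sx] = cur
            q, count = deque([(sy, sx)]), 1
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (count < max_size and 0 <= ny < h and 0 <= nx < w
                            and labels[ny, nx] == -1
                            and abs(img[ny, nx] - img[y, x]) < color_tol):
                        labels[ny, nx] = cur
                        count += 1
                        q.append((ny, nx))
            cur += 1
    # merging stage: fold undersized clusters into their most similar neighbor
    sizes = np.bincount(labels.ravel(), minlength=cur)
    means = np.array([img[labels == l].mean() for l in range(cur)])
    adj = {l: set() for l in range(cur)}
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y + 1, x), (y, x + 1)):
                if ny < h and nx < w and labels[ny, nx] != labels[y, x]:
                    adj[labels[y, x]].add(labels[ny, nx])
                    adj[labels[ny, nx]].add(labels[y, x])
    for l in range(cur):
        if sizes[l] < min_size and adj[l]:
            tgt = min(adj[l], key=lambda m: abs(means[m] - means[l]))
            labels[labels == l] = tgt
    return labels

# Example: two flat regions plus one outlier pixel that gets its own
# tiny cluster in step 1 and is merged into the left region in step 2.
img = np.zeros((8, 8)); img[:, 4:] = 1.0; img[0, 0] = 0.5
sp = dbscan_superpixels(img)
```

Separating the cheap growing pass from the merging pass is what keeps the cost low: neither step revisits pixels more than a constant number of times.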

Se-ho Lee - One of the best experts on this subject based on the ideXlab platform.

  • Superpixels for image and video processing based on proximity-weighted patch matching
    Multimedia Tools and Applications, 2020
    Co-Authors: Se-ho Lee, Won-dong Jang, Chang-su Kim
    Abstract:

    In this paper, a temporal Superpixel algorithm using proximity-weighted patch matching (PPM) is proposed to yield temporally consistent Superpixels for image and video processing. PPM estimates the motion vector of a Superpixel robustly, by considering the patch matching distances of neighboring Superpixels as well as the Superpixel itself. In each frame, we initialize Superpixels by transferring the Superpixel labels of the previous frame using PPM motion vectors. Then, we update the Superpixel labels of boundary pixels by minimizing a cost function, which is composed of feature distance, compactness, contour, and temporal consistency terms. Finally, we carry out Superpixel splitting, merging, and relabeling to regularize Superpixel sizes and correct inaccurate labels. Extensive experimental results confirm that the proposed algorithm outperforms the state-of-the-art conventional algorithms significantly. Also, it is demonstrated that the proposed algorithm can be applied to video object segmentation and video saliency detection tasks.

  • CVPR - Contour-Constrained Superpixels for Image and Video Processing
    2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017
    Co-Authors: Se-ho Lee, Won-dong Jang, Chang-su Kim
    Abstract:

    A novel contour-constrained Superpixel (CCS) algorithm is proposed in this work. We initialize Superpixels and regions in a regular grid and then refine the Superpixel label of each region hierarchically from block to pixel levels. To make Superpixel boundaries compatible with object contours, we propose the notion of contour pattern matching and formulate an objective function including the contour constraint. Furthermore, we extend the CCS algorithm to generate temporal Superpixels for video processing. We initialize Superpixel labels in each frame by transferring those in the previous frame and refine the labels to make Superpixels temporally consistent as well as compatible with object contours. Experimental results demonstrate that the proposed algorithm provides better performance than the state-of-the-art Superpixel methods.

  • ICCV - Temporal Superpixels Based on Proximity-Weighted Patch Matching
    2017 IEEE International Conference on Computer Vision (ICCV), 2017
    Co-Authors: Se-ho Lee, Won-dong Jang, Chang-su Kim
    Abstract:

    A temporal Superpixel algorithm based on proximity-weighted patch matching (TS-PPM) is proposed in this work. We develop the proximity-weighted patch matching (PPM), which estimates the motion vector of a Superpixel robustly, by considering the patch matching distances of neighboring Superpixels as well as the target Superpixel. In each frame, we initialize Superpixels by transferring the Superpixel labels of the previous frame using PPM motion vectors. Then, we update the Superpixel labels of boundary pixels, based on a cost function, composed of color, spatial, contour, and temporal consistency terms. Finally, we execute Superpixel splitting, merging, and relabeling to regularize Superpixel sizes and reduce incorrect labels. Experiments show that the proposed algorithm outperforms the state-of-the-art conventional algorithms significantly.