Image Boundary

The experts below are selected from a list of 294 experts worldwide, ranked by the ideXlab platform.

Shaoyi Chien - One of the best experts on this subject based on the ideXlab platform.

  • Real-Time Salient Object Detection with a Minimum Spanning Tree
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
    Co-Authors: Qingxiong Yang, Shaoyi Chien
    Abstract:

    In this paper, we present a real-time salient object detection system based on the minimum spanning tree. Because background regions are typically connected to the image boundaries, salient objects can be extracted by computing their distances to the boundaries. However, measuring image boundary connectivity efficiently is a challenging problem: existing methods either rely on a superpixel representation to reduce the number of processing units or approximate the distance transform. Instead, we propose an exact, iteration-free solution on a minimum spanning tree. The minimum spanning tree representation of an image inherently reveals the object geometry in a scene and, at the same time, greatly reduces the search space of shortest paths, resulting in an efficient, high-quality distance transform algorithm. We further introduce a boundary dissimilarity measure to complement the shortcomings of the distance transform for salient object detection. Extensive evaluations show that the proposed algorithm achieves leading performance compared with state-of-the-art methods in terms of both efficiency and accuracy.
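
The core idea in the abstract above — build a minimum spanning tree over the image grid and measure each pixel's connectivity to the image boundary along tree paths — can be sketched in a few lines. This is not the authors' exact algorithm (which uses ordered passes over the tree and combines the distance with a boundary dissimilarity term); the snippet below is a minimal illustration assuming a grayscale float image, and the function name, the epsilon, and the use of SciPy's generic MST and Dijkstra routines are choices made for the example.

```python
# Minimal sketch: boundary-connectivity saliency via an MST over the pixel grid.
# Illustrative only; not the published algorithm.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, dijkstra

def mst_boundary_saliency(img, eps=1e-6):
    """img: 2-D grayscale array in [0, 1]. Returns a rough saliency map."""
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)

    # 4-connected grid graph; edge weight = intensity difference (plus eps so
    # that zero-weight edges are not dropped by the sparse MST routine).
    rows, cols, weights = [], [], []
    rows.append(idx[:, :-1].ravel()); cols.append(idx[:, 1:].ravel())      # horizontal edges
    weights.append(np.abs(img[:, :-1] - img[:, 1:]).ravel() + eps)
    rows.append(idx[:-1, :].ravel()); cols.append(idx[1:, :].ravel())      # vertical edges
    weights.append(np.abs(img[:-1, :] - img[1:, :]).ravel() + eps)

    graph = coo_matrix((np.concatenate(weights),
                        (np.concatenate(rows), np.concatenate(cols))),
                       shape=(h * w, h * w))
    mst = minimum_spanning_tree(graph)          # sparse tree over all pixels

    # Distance from every pixel to the nearest image-boundary pixel,
    # measured along MST paths only.
    boundary = np.unique(np.concatenate([idx[0, :], idx[-1, :],
                                         idx[:, 0], idx[:, -1]]))
    dist = dijkstra(mst, directed=False, indices=boundary, min_only=True)

    sal = dist.reshape(h, w)
    return sal / (sal.max() + 1e-12)            # pixels far from the boundary look salient
```

In the published method, the generic shortest-path call would be replaced by two ordered sweeps over the tree (which is what makes it exact and iteration-free), and the raw distance is complemented by the boundary dissimilarity measure mentioned in the abstract.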

Sergey Komech - One of the best experts on this subject based on the ideXlab platform.

  • Boundary distortion in dynamical systems and in image analysis
    Electronic Notes in Discrete Mathematics, 2013
    Co-Authors: Sergey Komech
    Abstract:

    We develop a geometrical approach to Kolmogorov entropy, based on the study of boundary distortion. We also develop a shape descriptor based on image boundary distortion.

  • Shape descriptor based on the volume of transformed image boundary
    Pattern Recognition and Machine Intelligence, 2011
    Co-Authors: Xavier Descombes, Sergey Komech
    Abstract:

    In this paper, we derive new shape descriptors based on a directional characterization. The main idea is to study the behavior of the shape neighborhood under a family of transformations. We obtain a description that is invariant with respect to rotation, reflection, translation and scaling. We consider a family of volume-preserving transformations, and our descriptor is based on the volume of the neighbourhood of the transformed image. A well-defined metric is then proposed on the associated feature space, and we show the continuity of this metric. Results on shape retrieval on the Kimia 216 database and part of the MPEG-7 CE-Shape-1 database demonstrate the accuracy of the proposed shape metric.
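
A minimal sketch of the descriptor described above: take the boundary points of a binary shape, apply a family of volume-preserving (determinant-one) linear stretches in several directions, and record the area of an epsilon-neighbourhood of each transformed boundary. The normalization, the exact transformation family, and how the paper achieves rotation invariance may differ; the function names, grid resolution, and parameter values below are illustrative assumptions.

```python
# Sketch: descriptor = area of the eps-neighbourhood of the shape boundary
# after volume-preserving anisotropic stretches in several directions.
# Illustrative only; not the exact construction used in the paper.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def boundary_points(mask):
    """Boundary pixels of a binary mask as (N, 2) float coordinates."""
    boundary = mask & ~binary_erosion(mask)
    pts = np.argwhere(boundary).astype(float)
    pts -= pts.mean(axis=0)                      # translation invariance
    pts /= np.sqrt(mask.sum())                   # scale invariance: unit shape area
    return pts

def neighbourhood_area(pts, eps, grid_step):
    """Area of {x : dist(x, pts) <= eps}, estimated on a regular grid."""
    lo, hi = pts.min(axis=0) - eps, pts.max(axis=0) + eps
    ys, xs = [np.arange(lo[d], hi[d], grid_step) for d in (0, 1)]
    grid = np.stack(np.meshgrid(ys, xs, indexing="ij"), axis=-1).reshape(-1, 2)
    d, _ = cKDTree(pts).query(grid)
    return np.count_nonzero(d <= eps) * grid_step ** 2

def shape_descriptor(mask, ts=(0.25, 0.5), angles=8, eps=0.05, grid_step=0.01):
    """Matrix of neighbourhood areas over (stretch t, direction theta)."""
    pts = boundary_points(mask)
    desc = np.empty((len(ts), angles))
    for i, t in enumerate(ts):
        for j, theta in enumerate(np.linspace(0, np.pi, angles, endpoint=False)):
            c, s = np.cos(theta), np.sin(theta)
            rot = np.array([[c, -s], [s, c]])
            stretch = np.diag([np.exp(t), np.exp(-t)])   # det = 1: volume-preserving
            transform = rot @ stretch @ rot.T
            desc[i, j] = neighbourhood_area(pts @ transform.T, eps, grid_step)
    return desc
```

Comparing two shapes would then amount to a distance between such descriptor matrices (e.g., after aligning over the direction axis), which is where the metric and continuity result mentioned in the abstract come in.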

Qingxiong Yang - One of the best experts on this subject based on the ideXlab platform.

  • Real-Time Salient Object Detection with a Minimum Spanning Tree
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
    Co-Authors: Qingxiong Yang, Shaoyi Chien
    Abstract:

    In this paper, we present a real-time salient object detection system based on the minimum spanning tree. Because background regions are typically connected to the image boundaries, salient objects can be extracted by computing their distances to the boundaries. However, measuring image boundary connectivity efficiently is a challenging problem: existing methods either rely on a superpixel representation to reduce the number of processing units or approximate the distance transform. Instead, we propose an exact, iteration-free solution on a minimum spanning tree. The minimum spanning tree representation of an image inherently reveals the object geometry in a scene and, at the same time, greatly reduces the search space of shortest paths, resulting in an efficient, high-quality distance transform algorithm. We further introduce a boundary dissimilarity measure to complement the shortcomings of the distance transform for salient object detection. Extensive evaluations show that the proposed algorithm achieves leading performance compared with state-of-the-art methods in terms of both efficiency and accuracy.
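
The "exact and iteration-free" claim rests on the fact that, on a tree, the cheapest path cost from every node to any seed (here, a boundary-connected node) can be computed with just two ordered sweeps instead of an iterative shortest-path search. The sketch below illustrates that two-pass pattern on an abstract tree; the node layout, seed set, and weights are made up for the example, and the paper's cost may aggregate edge weights differently (e.g., a barrier-style maximum along the path rather than the additive sum used here).

```python
# Sketch: exact distance-to-seed transform on a tree in two passes
# (leaves -> root, then root -> leaves), with no iterative refinement.
import numpy as np

def tree_distance_transform(parent, weight, seeds):
    """
    parent[i]: parent of node i (the root has parent[i] == i)
    weight[i]: cost of the edge between i and parent[i] (ignored at the root)
    seeds:     node indices with distance 0 (e.g. boundary nodes)
    Returns the cheapest path cost from every node to its nearest seed.
    """
    n = len(parent)
    dist = np.full(n, np.inf)
    dist[list(seeds)] = 0.0

    # Order nodes so that every child appears after its parent (BFS from root).
    children = [[] for _ in range(n)]
    root = None
    for i, p in enumerate(parent):
        if p == i:
            root = i
        else:
            children[p].append(i)
    order = [root]
    for node in order:                      # list grows while iterating: BFS
        order.extend(children[node])

    # Pass 1 (leaves -> root): best seed reachable through the subtree below.
    for node in reversed(order):
        if node != root:
            dist[parent[node]] = min(dist[parent[node]], dist[node] + weight[node])

    # Pass 2 (root -> leaves): best seed reachable through the parent side.
    for node in order:
        if node != root:
            dist[node] = min(dist[node], dist[parent[node]] + weight[node])

    return dist

# Tiny made-up example: a path 0-1-2-3 with node 0 as the only seed.
print(tree_distance_transform(parent=[0, 0, 1, 2],
                              weight=[0.0, 1.0, 2.0, 0.5],
                              seeds=[0]))          # -> [0.  1.  3.  3.5]
```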

Jan C. Van Gemert - One of the best experts on this subject based on the ideXlab platform.

  • On Translation Invariance in CNNs: Convolutional Layers can Exploit Absolute Spatial Location
    arXiv: Computer Vision and Pattern Recognition, 2020
    Co-Authors: Osman Semih Kayhan, Jan C. Van Gemert
    Abstract:

    In this paper we challenge the common assumption that convolutional layers in modern CNNs are translation invariant. We show that CNNs can and will exploit absolute spatial location by learning filters that respond exclusively to particular absolute positions, exploiting image boundary effects. Because modern CNN filters have huge receptive fields, these boundary effects operate even far from the image boundary, allowing the network to exploit absolute spatial location all over the image. We give a simple solution that removes this spatial location encoding, which improves translation invariance and thus gives a stronger visual inductive bias that particularly benefits small datasets. We broadly demonstrate these benefits on several architectures and various applications, including image classification, patch matching, and two video classification datasets.
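
The boundary effect the abstract refers to is easy to reproduce: with zero ("same") padding, even a perfectly uniform image produces different responses near the border than in the interior, so a filter can read out absolute position. The small demonstration below uses SciPy rather than any particular deep learning framework, and the image size and kernel are arbitrary choices for the example.

```python
# Demonstration: zero padding leaks absolute spatial location into conv outputs.
import numpy as np
from scipy.signal import convolve2d

image = np.ones((8, 8))              # uniform input: no local structure anywhere
kernel = np.ones((3, 3))             # an all-ones 3x3 "filter"

# 'same' output size, borders filled with zeros (a common CNN default).
response = convolve2d(image, kernel, mode="same", boundary="fill", fillvalue=0)

print(response[0, 0], response[0, 4], response[4, 4])   # 4.0  6.0  9.0
# Corner, edge, and interior positions give different values even though the
# input is constant everywhere - the output encodes where the pixel is.
# Stacking layers spreads this effect further inward (large receptive fields),
# which is how a deep CNN can exploit absolute location far from the border.
```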

Bruce Elliot Hirsch - One of the best experts on this subject based on the ideXlab platform.

  • User-steered image boundary segmentation
    Medical Imaging 1996: Image Processing, 1996
    Co-Authors: Alexandre X Falcao, Jayaram K Udupa, Supun Samarasekera, Bruce Elliot Hirsch
    Abstract:

    In multidimensional imaging, there are, and will continue to be, situations in which automatic image segmentation methods fail and extensive user assistance is needed. For such situations, we introduce a novel user-steered image boundary segmentation paradigm with two new methods, live-wire and live-lane. The methods are designed to reduce the time the user spends on segmentation while providing tight user control as the process executes. The strategy is to exploit the synergy between the superior abilities of human observers (compared to computer algorithms) in boundary recognition and of computer algorithms (compared to human observers) in boundary delineation. We describe evaluation studies that compare the utility of the new methods with that of manual tracing, based on the speed and repeatability of tracing and on data taken from a large ongoing application. We conclude that the new methods are more repeatable and on average two times faster than manual tracing. Live-wire and live-lane operate slice by slice in their present form. Their 3D and 4D extensions, which we are currently developing, can further reduce the total segmentation time significantly.
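
The live-wire idea in the abstract — the user fixes an anchor point and the computer delineates the boundary as a minimum-cost path that follows strong edges toward the current cursor position — can be sketched as Dijkstra's algorithm on the pixel grid with costs that are low on high-gradient pixels. The cost function and neighbourhood below are simplified assumptions; the published method uses richer, user-trainable boundary features.

```python
# Sketch of the live-wire step: cheapest path from a user-placed seed pixel
# to the cursor, where "cheap" means following strong image gradients.
# Simplified cost model; not the features of the published method.
import heapq
import numpy as np
from scipy.ndimage import sobel

def live_wire_path(img, seed, cursor):
    """img: 2-D grayscale array; seed/cursor: (row, col). Returns path pixels."""
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    cost = 1.0 - grad / (grad.max() + 1e-12)     # strong edges are cheap to follow
    h, w = img.shape

    dist = np.full((h, w), np.inf)
    prev = {}
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]

    while heap:                                  # standard Dijkstra on the grid
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == cursor:
            break
        if d > dist[r, c]:
            continue                             # stale heap entry
        for dr, dc in neighbours:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc] * np.hypot(dr, dc)
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))

    path, node = [], cursor                      # walk predecessors back to the seed
    while node != seed:
        path.append(node)
        node = prev[node]
    path.append(seed)
    return path[::-1]
```

In interactive use, this search is rerun (or incrementally reused) as the cursor moves, which is what gives the user the tight, real-time control described in the abstract.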
