Transformed Image

The experts below are selected from a list of 32,427 experts worldwide, ranked by the ideXlab platform.

Sergey Komech - One of the best experts on this subject based on the ideXlab platform.

  • Shape descriptor based on the volume of transformed image boundary
    Pattern Recognition and Machine Intelligence, 2011
    Co-Authors: Xavier Descombes, Sergey Komech
    Abstract:

    In this paper, we derive new shape descriptors based on a directional characterization. The main idea is to study the behavior of the shape neighborhood under a family of transformations. We obtain a description that is invariant with respect to rotation, reflection, translation, and scaling. We consider a family of volume-preserving transformations, and our descriptor is based on the volume of the neighborhood of the transformed image. A well-defined metric is then proposed on the associated feature space, and we show that this metric is continuous. Results on shape retrieval over the Kimia 216 database and part of the MPEG-7 CE-Shape-1 database demonstrate the accuracy of the proposed shape metric.
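
    A minimal sketch of the idea (our own illustration, assuming a binary shape mask, an anisotropic area-preserving scaling as the family of transformations, and morphological dilation as the neighborhood; the paper's exact construction differs):

```python
import numpy as np
from scipy import ndimage

def boundary_neighbourhood_volume(mask, t, r=3):
    """Volume (pixel count) of an r-neighbourhood of the transformed
    shape boundary, under the area-preserving scaling (x, y) -> (t*x, y/t)."""
    # Area-preserving anisotropic scaling (determinant = 1).
    scaled = ndimage.zoom(mask.astype(float), (t, 1.0 / t), order=0) > 0.5
    # Boundary = shape minus its erosion.
    boundary = scaled & ~ndimage.binary_erosion(scaled)
    # Neighbourhood of the boundary via dilation; its size is the "volume".
    return ndimage.binary_dilation(boundary, iterations=r).sum()

def shape_descriptor(mask, ts=(0.5, 0.75, 1.0, 1.5, 2.0)):
    # The descriptor collects the volumes over the family of transformations.
    v = np.array([boundary_neighbourhood_volume(mask, t) for t in ts], float)
    return v / v.sum()  # normalise so descriptors of different shapes compare
```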

  • PReMI - Shape descriptor based on the volume of transformed image boundary
    Lecture Notes in Computer Science, 2011
    Co-Authors: Xavier Descombes, Sergey Komech

Xavier Descombes - One of the best experts on this subject based on the ideXlab platform.

  • Shape descriptor based on the volume of transformed image boundary
    Pattern Recognition and Machine Intelligence, 2011
    Co-Authors: Xavier Descombes, Sergey Komech

  • PReMI - Shape descriptor based on the volume of transformed image boundary
    Lecture Notes in Computer Science, 2011
    Co-Authors: Xavier Descombes, Sergey Komech

Lindeberg Tony - One of the best experts on this subject based on the ideXlab platform.

  • Understanding when spatial transformer networks do not support invariance, and what to do about it
    KTH Beräkningsvetenskap och beräkningsteknik (CST), 2021
    Co-Authors: Finnveden Lukas, Jansson Ylva, Lindeberg Tony
    Abstract:

    Spatial transformer networks (STNs) were designed to enable convolutional neural networks (CNNs) to learn invariance to image transformations. STNs were originally proposed to transform CNN feature maps as well as input images, which enables the use of more complex features when predicting transformation parameters. However, since STNs perform a purely spatial transformation, they do not, in the general case, have the ability to align the feature maps of a transformed image with those of its original. STNs are therefore unable to support invariance when transforming CNN feature maps. We present a simple proof for this and study the practical implications, showing that this inability is coupled with decreased classification accuracy. We therefore investigate alternative STN architectures that make use of complex features. We find that while deeper localization networks are difficult to train, localization networks that share parameters with the classification network remain stable as they grow deeper, which allows for higher classification accuracy on difficult datasets. Finally, we explore the interaction between localization network complexity and iterative image alignment.
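
    The core obstruction is easy to see in a toy example (a sketch of our own, not the paper's experiments): transforming the input and then extracting features generally differs from extracting features and then spatially transforming them.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(1, 4, kernel_size=3, padding=1)  # stand-in feature extractor

x = torch.randn(1, 1, 16, 16)            # original image
x_rot = torch.rot90(x, 1, dims=(2, 3))   # transformed image (90-degree rotation)

f_of_transformed = conv(x_rot)                         # features of the transformed image
transformed_f = torch.rot90(conv(x), 1, dims=(2, 3))   # spatially transformed feature maps

# Nonzero in general: a purely spatial warp of the feature maps cannot
# reproduce the features of the transformed image unless the features
# themselves are invariant/equivariant under the transformation.
print((f_of_transformed - transformed_f).abs().max())
```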

  • Spatial transformations in convolutional networks and invariant recognition.
    KTH Beräkningsvetenskap och beräkningsteknik (CST), 2020
    Co-Authors: Jansson Ylva, Maydanskiy Maksim, Finnveden Lukas, Lindeberg Tony
    Abstract:

    We show that spatial transformations of CNN feature maps cannot align the feature maps of a transformed image to match those of its original for general affine transformations. This implies that methods that spatially transform CNN feature maps, such as spatial transformer networks, dilated or deformable convolutions, or spatial pyramid pooling, cannot enable true invariance. Our proof is based on elementary analysis for both the single- and multi-layer network cases.

  • The problems with using STNs to align CNN feature maps
    KTH Beräkningsvetenskap och beräkningsteknik (CST), 2020
    Co-Authors: Finnveden Lukas, Jansson Ylva, Lindeberg Tony
    Abstract:

    Spatial transformer networks (STNs) were designed to enable CNNs to learn invariance to image transformations. STNs were originally proposed to transform CNN feature maps as well as input images, which enables the use of more complex features when predicting transformation parameters. However, since STNs perform a purely spatial transformation, they do not, in the general case, have the ability to align the feature maps of a transformed image and its original. We present a theoretical argument for this and investigate the practical implications, showing that this inability is coupled with decreased classification accuracy. We advocate taking advantage of more complex features in deeper layers by instead sharing parameters between the classification and the localisation network.
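
    The parameter-sharing idea can be sketched as follows (our own toy PyTorch module with hypothetical names, not the authors' architecture): the localisation head reuses the classifier's feature trunk, the predicted affine transformation is applied to the input image, and features are then re-extracted from the aligned image for classification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSTN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # Feature trunk shared by the classifier and the localisation head.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.loc_head = nn.Linear(16, 6)   # predicts a 2x3 affine matrix
        self.loc_head.weight.data.zero_()  # standard STN trick: start at identity
        self.loc_head.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        theta = self.loc_head(self.features(x).mean(dim=(2, 3))).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x_aligned = F.grid_sample(x, grid, align_corners=False)  # warp the input image
        # Re-extract features from the aligned image (rather than warping feature maps).
        return self.classifier(self.features(x_aligned).mean(dim=(2, 3)))
```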

  • Inability of spatial transformations of CNN feature maps to support invariant recognition
    KTH Beräkningsvetenskap och beräkningsteknik (CST), 2020
    Co-Authors: Jansson Ylva, Maydanskiy Maksim, Finnveden Lukas, Lindeberg Tony
    Abstract:

    A large number of deep learning architectures use spatial transformations of CNN feature maps or filters to better deal with variability in object appearance caused by natural image transformations. In this paper, we prove that spatial transformations of CNN feature maps cannot align the feature maps of a transformed image to match those of its original, for general affine transformations, unless the extracted features are themselves invariant. Our proof is based on elementary analysis for both the single- and multi-layer network case. The results imply that methods based on spatial transformations of CNN feature maps or filters cannot replace image alignment of the input and cannot enable invariant recognition for general affine transformations, specifically not for scaling transformations or shear transformations. For rotations and reflections, spatially transforming feature maps or filters can enable invariance, but only for networks with learnt or hardcoded rotation- or reflection-invariant features.
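
    The scaling case can be illustrated directly (our own toy sketch, not the paper's proof): a fixed-size kernel sees different image structure at the two scales, so rescaling the feature maps cannot mimic the features of the rescaled image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
conv = nn.Conv2d(1, 4, kernel_size=3, padding=1)

x = torch.randn(1, 1, 32, 32)
x_half = F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)

f_of_scaled = conv(x_half)                    # features of the scaled image
scaled_f = F.interpolate(conv(x), scale_factor=0.5,
                         mode='bilinear', align_corners=False)  # scaled feature maps

print((f_of_scaled - scaled_f).abs().max())   # far from zero in general
```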

John M Pauly - One of the best experts on this subject based on the ideXlab platform.

  • Sparse MRI: the application of compressed sensing for rapid MR imaging
    Magnetic Resonance in Medicine, 2007
    Co-Authors: Michael Lustig, David L Donoho, John M Pauly
    Abstract:

    The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images, such as angiograms, are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain, for example in terms of spatial finite differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference, and a nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase encodes. The reconstruction is performed by minimizing the ℓ1 norm of a transformed image, subject to data fidelity constraints.
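
    The recovery scheme can be sketched in a few lines (a toy illustration assuming the image itself is sparse, as for angiograms; the paper minimizes the ℓ1 norm in a transform domain such as wavelets, with a more careful formulation):

```python
import numpy as np

def cs_recon(kspace, mask, iters=100, lam=0.01):
    """Toy compressed-sensing reconstruction: alternate a k-space data
    consistency step with soft thresholding (the proximal map of the l1 norm)."""
    img = np.zeros(kspace.shape, dtype=complex)
    for _ in range(iters):
        k = np.fft.fft2(img)
        k[mask] = kspace[mask]          # enforce consistency with measured samples
        img = np.fft.ifft2(k)
        mag = np.abs(img)
        shrink = np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)
        img = shrink * img              # soft-threshold pixel magnitudes
    return img

# Usage sketch: kspace = np.fft.fft2(image) * mask, with mask a pseudo-random
# variable-density boolean pattern over phase encodes.
```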

Abdulmotaleb El Saddik - One of the best experts on this subject based on the ideXlab platform.

  • A survey of RST invariant image watermarking algorithms
    ACM Computing Surveys, 2007
    Co-Authors: Dong Zheng, Jiying Zhao, Abdulmotaleb El Saddik
    Abstract:

    In this article, we review algorithms for rotation, scaling, and translation (RST) invariant image watermarking. There are two main categories of RST invariant image watermarking algorithms: one rectifies the RST-transformed image before conducting watermark detection; the other embeds and detects the watermark in an RST invariant or semi-invariant domain. To help readers understand, we first introduce the fundamental theories and techniques used in existing RST invariant image watermarking algorithms. Then, we discuss in detail the working principles, embedding process, and detection process of the typical RST invariant image watermarking algorithms. Finally, we analyze and evaluate these typical algorithms through implementation, and point out their advantages and disadvantages.
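
    The second category is often built on a Fourier-Mellin-style construction; a minimal sketch of such a semi-invariant representation (our own illustration, not any specific algorithm from the survey):

```python
import numpy as np
from scipy import ndimage

def rst_semi_invariant(img, n_r=64, n_a=64):
    """Log-polar resampling of the Fourier magnitude: |FFT| discards
    translation, while rotation and scaling of the image become shifts
    of this map, giving an RST semi-invariant domain for watermarking."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = mag.shape[0] / 2.0, mag.shape[1] / 2.0
    radii = np.exp(np.linspace(0.0, np.log(min(cy, cx) - 1.0), n_r))
    angles = np.linspace(0.0, np.pi, n_a, endpoint=False)
    ys = cy + radii[:, None] * np.sin(angles)[None, :]
    xs = cx + radii[:, None] * np.cos(angles)[None, :]
    return ndimage.map_coordinates(mag, [ys, xs], order=1)
```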