Tangent Space

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The experts below are selected from a list of 11,904 experts worldwide, ranked by the ideXlab platform

I-fan Shen - One of the best experts on this subject based on the ideXlab platform.

  • ICPR (2) - Image Tangent Space for Image Retrieval
    18th International Conference on Pattern Recognition (ICPR'06), 2006
    Co-Authors: Hongyu Li, Wenbin Chen, I-fan Shen
    Abstract:

    Image tangent space is a high-level semantic space learned from a low-level feature space by a modified local tangent space alignment, a method originally proposed for nonlinear manifold learning. Under the assumption that a data point in image space can be linearly approximated by nearest neighbors in its local neighborhood, we develop a lazy learning method that locally approximates the optimal mapping between image space and image tangent space: the semantics of a new query image can then be inferred from the local approximation in its corresponding image tangent space. Although the Euclidean distance induced by the ambient space is often used to measure the difference between images, their natural distance may well differ from it. Here we compare three distance metrics, Chebyshev, Manhattan, and Euclidean, and find that Chebyshev distance outperforms the other two in measuring semantic similarity during retrieval. Experimental results show that our approach is effective in improving the performance of image retrieval systems.
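
    The metric comparison described in the abstract can be sketched in a few lines; the feature vectors below are made-up stand-ins for learned embeddings, not data from the paper.

```python
import numpy as np

def pairwise_distance(a, b, metric="chebyshev"):
    """Distance between two feature vectors under three common metrics."""
    d = np.abs(np.asarray(a, float) - np.asarray(b, float))
    if metric == "chebyshev":   # L-infinity: largest coordinate difference
        return d.max()
    if metric == "manhattan":   # L1: sum of coordinate differences
        return d.sum()
    if metric == "euclidean":   # L2: usual straight-line distance
        return np.sqrt((d ** 2).sum())
    raise ValueError(f"unknown metric: {metric}")

# Hypothetical low-dimensional embeddings of a query and two database images.
query = [0.2, 0.5, 0.1]
db = {"img_a": [0.25, 0.45, 0.15], "img_b": [0.9, 0.5, 0.1]}

for metric in ("chebyshev", "manhattan", "euclidean"):
    ranked = sorted(db, key=lambda k: pairwise_distance(query, db[k], metric))
    print(metric, ranked)
```

    Retrieval then amounts to sorting the database by distance to the query under the chosen metric.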

  • Supervised local tangent space alignment for classification
    International Joint Conference on Artificial Intelligence, 2005
    Co-Authors: Hongyu Li, Wenbin Chen, I-fan Shen
    Abstract:

    Supervised local tangent space alignment (SLTSA) extends local tangent space alignment (LTSA) to supervised feature extraction. Two algorithmic improvements are made upon LTSA for classification. First, a simple technique is proposed to map new data into the embedded low-dimensional space, making LTSA suitable for a changing, dynamic environment. Then SLTSA is introduced to handle data sets containing multiple classes by exploiting class membership information.
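
    The out-of-sample step, mapping new data through a local linear approximation, can be sketched as follows. This is a generic neighborhood-weights construction under stated assumptions, not the authors' exact formulation.

```python
import numpy as np

def out_of_sample_embed(x_new, X, Y, k=5, reg=1e-3):
    """Map a new point into an existing low-dimensional embedding.

    X: (n, D) training points; Y: (n, d) their embedded coordinates.
    The new point is written as an affine (sum-to-one) combination of
    its k nearest neighbors in the high-dimensional space, and the same
    weights are applied to the neighbors' embedded coordinates.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    x_new = np.asarray(x_new, float)
    idx = np.argsort(np.linalg.norm(X - x_new, axis=1))[:k]
    Z = X[idx] - x_new                   # center neighbors on the new point
    G = Z @ Z.T                          # local Gram matrix
    # Regularize so the system is solvable even for degenerate neighborhoods.
    G = G + reg * (np.trace(G) + 1.0) * np.eye(len(idx))
    w = np.linalg.solve(G, np.ones(len(idx)))
    w = w / w.sum()                      # enforce sum(w) == 1
    return w @ Y[idx]
```

    For a point midway between two embedded neighbors, the weights come out (almost) equal and the estimate lands midway between their embedded coordinates.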

  • Supervised learning on local tangent space
    International Symposium on Neural Networks, 2005
    Co-Authors: Hongyu Li, Wenbin Chen, Li Teng, I-fan Shen
    Abstract:

    A novel supervised learning method is proposed in this paper as an extension of local tangent space alignment (LTSA) to supervised feature extraction. First, LTSA is improved to suit a changing, dynamic environment: it can now map new data into the embedded low-dimensional space. Next, class membership information is introduced to construct the local tangent space when data sets contain multiple classes. The method has been applied to a number of data sets for classification and performs well when combined with simple classifiers.
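
    One simple way to realize the class-membership idea is to restrict each point's neighborhood to same-class points before extracting the local tangent basis. The helper names below are illustrative, a minimal sketch rather than the paper's algorithm.

```python
import numpy as np

def supervised_neighbors(i, X, labels, k=5):
    """Pick the k nearest neighbors of point i that share its class label.

    Restricting neighborhoods to same-class points is one way to inject
    class membership into the construction of local tangent spaces.
    """
    same = np.flatnonzero((labels == labels[i]) & (np.arange(len(X)) != i))
    dists = np.linalg.norm(X[same] - X[i], axis=1)
    return same[np.argsort(dists)[:k]]

def local_tangent_basis(i, X, labels, k=5, d=1):
    """PCA on the supervised neighborhood yields a d-dimensional tangent basis."""
    nb = X[supervised_neighbors(i, X, labels, k)]
    centered = nb - nb.mean(axis=0)
    # The leading right singular vectors span the local tangent directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:d]
```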

Hongyu Li - One of the best experts on this subject based on the ideXlab platform.

  • Local tangent space based manifold entropy for image retrieval
    Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), 2012
    Co-Authors: Yi Wang, Hongyu Li, Lin Zhang
    Abstract:

    This paper proposes a new manifold entropy function based on the local tangent space (LTS). With this entropy function, we further propose a framework for image retrieval in which retrieval is treated as a search for cycles, ordered by category, in an image data set; the optimal cycles are found by minimizing the manifold entropy of the images.

  • ICPR (2) - Image Tangent Space for Image Retrieval
    18th International Conference on Pattern Recognition (ICPR'06), 2006
    Co-Authors: Hongyu Li, Wenbin Chen, I-fan Shen
    Abstract:

    Image tangent space is a high-level semantic space learned from a low-level feature space by a modified local tangent space alignment, a method originally proposed for nonlinear manifold learning. Under the assumption that a data point in image space can be linearly approximated by nearest neighbors in its local neighborhood, we develop a lazy learning method that locally approximates the optimal mapping between image space and image tangent space: the semantics of a new query image can then be inferred from the local approximation in its corresponding image tangent space. Although the Euclidean distance induced by the ambient space is often used to measure the difference between images, their natural distance may well differ from it. Here we compare three distance metrics, Chebyshev, Manhattan, and Euclidean, and find that Chebyshev distance outperforms the other two in measuring semantic similarity during retrieval. Experimental results show that our approach is effective in improving the performance of image retrieval systems.

  • Supervised local tangent space alignment for classification
    International Joint Conference on Artificial Intelligence, 2005
    Co-Authors: Hongyu Li, Wenbin Chen, I-fan Shen
    Abstract:

    Supervised local tangent space alignment (SLTSA) extends local tangent space alignment (LTSA) to supervised feature extraction. Two algorithmic improvements are made upon LTSA for classification. First, a simple technique is proposed to map new data into the embedded low-dimensional space, making LTSA suitable for a changing, dynamic environment. Then SLTSA is introduced to handle data sets containing multiple classes by exploiting class membership information.

Wenbin Chen - One of the best experts on this subject based on the ideXlab platform.

  • ICPR (2) - Image Tangent Space for Image Retrieval
    18th International Conference on Pattern Recognition (ICPR'06), 2006
    Co-Authors: Hongyu Li, Wenbin Chen, I-fan Shen
    Abstract:

    Image tangent space is a high-level semantic space learned from a low-level feature space by a modified local tangent space alignment, a method originally proposed for nonlinear manifold learning. Under the assumption that a data point in image space can be linearly approximated by nearest neighbors in its local neighborhood, we develop a lazy learning method that locally approximates the optimal mapping between image space and image tangent space: the semantics of a new query image can then be inferred from the local approximation in its corresponding image tangent space. Although the Euclidean distance induced by the ambient space is often used to measure the difference between images, their natural distance may well differ from it. Here we compare three distance metrics, Chebyshev, Manhattan, and Euclidean, and find that Chebyshev distance outperforms the other two in measuring semantic similarity during retrieval. Experimental results show that our approach is effective in improving the performance of image retrieval systems.

  • Supervised local tangent space alignment for classification
    International Joint Conference on Artificial Intelligence, 2005
    Co-Authors: Hongyu Li, Wenbin Chen, I-fan Shen
    Abstract:

    Supervised local tangent space alignment (SLTSA) extends local tangent space alignment (LTSA) to supervised feature extraction. Two algorithmic improvements are made upon LTSA for classification. First, a simple technique is proposed to map new data into the embedded low-dimensional space, making LTSA suitable for a changing, dynamic environment. Then SLTSA is introduced to handle data sets containing multiple classes by exploiting class membership information.

  • Supervised learning on local tangent space
    International Symposium on Neural Networks, 2005
    Co-Authors: Hongyu Li, Wenbin Chen, Li Teng, I-fan Shen
    Abstract:

    A novel supervised learning method is proposed in this paper as an extension of local tangent space alignment (LTSA) to supervised feature extraction. First, LTSA is improved to suit a changing, dynamic environment: it can now map new data into the embedded low-dimensional space. Next, class membership information is introduced to construct the local tangent space when data sets contain multiple classes. The method has been applied to a number of data sets for classification and performs well when combined with simple classifiers.

Martin Horn - One of the best experts on this subject based on the ideXlab platform.

  • CDC - Sliding Mode Tangent Space Observer for LTV Systems with Unknown Inputs
    2018 IEEE Conference on Decision and Control (CDC), 2018
    Co-Authors: Markus Tranninger, Sergiy Zhuk, Martin Steinberger, Leonid M. Fridman, Martin Horn
    Abstract:

    This paper presents a cascaded observer structure for linear time-varying systems that yields finite-time exact state estimates despite an unknown input. The observer is based on a tangent space observer and a higher-order sliding mode reconstruction scheme. Theoretical insights into the construction of the observer are given, along with conditions for the stability of the tangent space observer in the presence of unknown inputs. A numerical simulation example shows the applicability of the proposed approach.
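
    For illustration only, here is a minimal discrete-time simulation of state estimation for a time-varying system using a plain Luenberger-style output-injection observer. The system matrices and gain are assumptions; the paper's cascaded sliding-mode design and its finite-time, unknown-input guarantees are not reproduced here.

```python
import numpy as np

def simulate(steps=8000, dt=1e-3):
    """Simulate a 2-state LTV system and a simple output-injection observer."""
    x = np.array([1.0, -1.0])        # true state (initial condition assumed)
    xh = np.zeros(2)                 # observer estimate, started at zero
    L = np.array([2.0, 1.0])         # fixed output-injection gain (assumed)
    for k in range(steps):
        t = k * dt
        A = np.array([[0.0, 1.0 + 0.5 * np.sin(t)],
                      [-2.0, -1.0]])               # A(t): time-varying dynamics
        y = x[0]                                   # measured output, C = [1, 0]
        x = x + dt * (A @ x)                       # forward-Euler plant step
        xh = xh + dt * (A @ xh + L * (y - xh[0]))  # copy of plant + correction
    return x, xh
```

    The estimation error here decays only asymptotically; finite-time exactness is precisely what the sliding-mode reconstruction layer of the paper adds.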

Frank Verstraete - One of the best experts on this subject based on the ideXlab platform.

  • Tangent-space methods for truncating uniform MPS
    arXiv: Quantum Physics, 2020
    Co-Authors: Bram Vanhecke, Laurens Vanderstraeten, Jutho Haegeman, Maarten Van Damme, Frank Verstraete
    Abstract:

    A central primitive in quantum tensor network simulations is the problem of approximating a matrix product state by one of lower bond dimension. This problem forms the central bottleneck in algorithms for time evolution and for contracting projected entangled-pair states. We formulate a tangent-space based variational algorithm that achieves this for uniform (infinite) matrix product states. The algorithm exhibits favourable scaling of the computational cost, and we demonstrate its usefulness with several examples involving the multiplication of a matrix product state by a matrix product operator.
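
    For contrast, the standard SVD-based truncation of a single bond of a finite MPS, the operation whose uniform-MPS analogue the paper treats variationally, can be sketched as follows (shapes and tolerances here are assumptions):

```python
import numpy as np

def truncate_bond(theta, chi_max):
    """Truncate one MPS bond: SVD a two-site block, keep chi_max values.

    theta: matrix of shape (chi_l * d, d * chi_r) grouping two sites.
    Returns the truncated factors, the renormalized singular values,
    and the norm of the discarded weight.
    """
    u, s, vt = np.linalg.svd(theta, full_matrices=False)
    chi = min(chi_max, int(np.sum(s > 1e-14)))   # drop numerically-zero values
    err = np.sqrt(max(np.sum(s[chi:] ** 2), 0.0))  # discarded weight
    s_t = s[:chi] / np.linalg.norm(s[:chi])        # renormalize the state
    return u[:, :chi], s_t, vt[:chi], err
```

    For infinite (uniform) MPS this local SVD is no longer optimal, which is what motivates the tangent-space variational formulation of the paper.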

  • Tangent-space methods for uniform matrix product states
    arXiv: Strongly Correlated Electrons, 2019
    Co-Authors: Laurens Vanderstraeten, Jutho Haegeman, Frank Verstraete
    Abstract:

    In these lecture notes we give a technical overview of tangent-space methods for matrix product states in the thermodynamic limit. We introduce the manifold of uniform matrix product states, show how to compute different types of observables, and discuss the concept of a tangent space. We explain how to variationally optimize ground-state approximations, implement real-time evolution, and describe elementary excitations for a given model Hamiltonian. We also explain how matrix product states approximate the fixed points of one-dimensional transfer matrices, and show how all these methods carry over to the language of continuous matrix product states for one-dimensional field theories. We conclude with some extensions of the tangent-space formalism and an outlook on new applications.
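
    A basic building block of these methods, finding the fixed point of the transfer matrix of a uniform MPS, can be sketched with power iteration. The tensor conventions below are one common choice, not necessarily those of the lecture notes.

```python
import numpy as np

def transfer_fixed_point(A, tol=1e-12, max_iter=10000):
    """Dominant fixed point of the MPS transfer map rho -> sum_s A_s rho A_s^†.

    A: (d, D, D) uniform MPS tensor with physical dimension d and bond
    dimension D. Power iteration returns (lam, rho) with
    sum_s A_s rho A_s^† = lam * rho; rho encodes expectation values in
    the thermodynamic limit.
    """
    d, D, _ = A.shape
    rho = np.eye(D) / D                  # positive initial guess
    lam = 1.0
    for _ in range(max_iter):
        new = sum(A[s] @ rho @ A[s].conj().T for s in range(d))
        lam = np.linalg.norm(new)
        new = new / lam                  # renormalize each iteration
        if np.linalg.norm(new - rho) < tol:
            return lam, new
        rho = new
    return lam, rho
```

    Since the transfer map is completely positive, iterating from a positive matrix generically converges to a positive-semidefinite fixed point.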

  • Excitations and the tangent space of projected entangled-pair states
    Physical Review B, 2015
    Co-Authors: Laurens Vanderstraeten, Frank Verstraete, Michael Marien, Jutho Haegeman
    Abstract:

    We develop tangent space methods for projected entangled-pair states (PEPS) that provide direct access to the low-energy sector of strongly correlated two-dimensional quantum systems. More specifically, we construct a variational ansatz for elementary excitations on top of PEPS ground states that allows for computing gaps, dispersion relations, and spectral weights directly in the thermodynamic limit. Solving the corresponding variational problem requires evaluating momentum-transformed two-point and three-point correlation functions on a PEPS background, which we can compute efficiently using a contraction scheme. As an application we study the spectral properties of the magnons of the Affleck-Kennedy-Lieb-Tasaki model on the square lattice and of the anyonic excitations in a perturbed version of Kitaev's toric code.

  • Post-matrix product state methods: to tangent space and beyond
    Physical Review B, 2013
    Co-Authors: Jutho Haegeman, Tobias J Osborne, Frank Verstraete
    Abstract:

    We develop in full detail the formalism of tangent states to the manifold of matrix product states, and show how they naturally appear in the study of time evolution, excitations, and spectral functions. We focus on systems with translation invariance in the thermodynamic limit, where momentum is a well-defined quantum number. We present some illustrative results, discuss analogous constructions for other variational classes, and consider generalizations and extensions beyond the tangent space, ending with a general outlook towards post-matrix product state methods.