Laplace-Beltrami Operator

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform.

Dejan Slepcev - One of the best experts on this subject based on the ideXlab platform.

  • Error estimates for spectral convergence of the graph Laplacian on random geometric graphs toward the Laplace-Beltrami operator
    Foundations of Computational Mathematics, 2020
    Co-Authors: Nicolas Garcia Trillos, Moritz Gerlach, Matthias Hein, Dejan Slepcev
    Abstract:

    We study the convergence of the graph Laplacian of a random geometric graph generated by an i.i.d. sample from an $m$-dimensional submanifold $\mathcal{M}$ in $\mathbb{R}^d$ as the sample size $n$ increases and the neighborhood size $h$ tends to zero. We show that eigenvalues and eigenvectors of the graph Laplacian converge at a rate of $O\big((\tfrac{\log n}{n})^{\frac{1}{2m}}\big)$ to the eigenvalues and eigenfunctions of the weighted Laplace-Beltrami operator of $\mathcal{M}$. No information on the submanifold $\mathcal{M}$ is needed in the construction of the graph or the "out-of-sample extension" of the eigenvectors. Of independent interest is a generalization of the rate of convergence of empirical measures on submanifolds in $\mathbb{R}^d$ in infinity transportation distance.
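The spectral convergence above can be illustrated numerically. The sketch below is a toy example, not the paper's exact construction: it samples the unit circle (an $m = 1$ submanifold of $\mathbb{R}^2$), builds a Gaussian-weighted graph Laplacian from ambient distances alone, and compares low eigenvalue ratios with the $k^2$ spectrum of the circle's Laplace-Beltrami operator; the sample size and bandwidth are chosen for convenience.

```python
import numpy as np

# Toy sketch: n i.i.d. samples on the unit circle; only ambient
# (Euclidean) distances are used -- no information about the manifold.
rng = np.random.default_rng(0)
n, h = 400, 0.3                      # sample size and neighborhood scale
theta = rng.uniform(0, 2 * np.pi, n)
X = np.column_stack([np.cos(theta), np.sin(theta)])

# Gaussian edge weights and the unnormalized graph Laplacian L = D - W.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / h**2)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

evals = np.linalg.eigvalsh(L)        # ascending; evals[0] ~ 0
# Eigenvalue ratios are scale-free, so no normalization constant is
# needed: the Laplace-Beltrami spectrum of S^1 is 0, 1, 1, 4, 4, ...,
# so evals[3] / evals[1] should sit near 2^2 = 4.
ratio = evals[3] / evals[1]
print(round(ratio, 2))
```

Working with eigenvalue ratios sidesteps the normalization constant of the graph Laplacian; the agreement sharpens as $n$ grows and $h$ shrinks, consistent with the quoted rate.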

  • Error estimates for spectral convergence of the graph Laplacian on random geometric graphs towards the Laplace-Beltrami operator
    arXiv: Machine Learning, 2018
    Co-Authors: Nicolas Garcia Trillos, Moritz Gerlach, Matthias Hein, Dejan Slepcev
    Abstract:

    We study the convergence of the graph Laplacian of a random geometric graph generated by an i.i.d. sample from an $m$-dimensional submanifold $M$ in $\mathbb{R}^d$ as the sample size $n$ increases and the neighborhood size $h$ tends to zero. We show that eigenvalues and eigenvectors of the graph Laplacian converge at a rate of $O\big((\tfrac{\log n}{n})^{\frac{1}{2m}}\big)$ to the eigenvalues and eigenfunctions of the weighted Laplace-Beltrami operator of $M$. No information on the submanifold $M$ is needed in the construction of the graph or the "out-of-sample extension" of the eigenvectors. Of independent interest is a generalization of the rate of convergence of empirical measures on submanifolds in $\mathbb{R}^d$ in infinity transportation distance.

Partha Niyogi - One of the best experts on this subject based on the ideXlab platform.

  • Towards a theoretical foundation for Laplacian-based manifold methods
    Lecture Notes in Computer Science, 2005
    Co-Authors: Mikhail Belkin, Partha Niyogi
    Abstract:

    In recent years, manifold methods have attracted a considerable amount of attention in machine learning. However, most algorithms in that class may be termed manifold-motivated, as they lack any explicit theoretical guarantees. In this paper we take a step towards closing the gap between theory and practice for a class of Laplacian-based manifold methods. We show that under certain conditions the graph Laplacian of a point cloud converges to the Laplace-Beltrami operator on the underlying manifold. Theorem 1 contains the first result showing convergence of a random graph Laplacian to the manifold Laplacian in the machine learning context.

  • Semi-supervised learning on Riemannian manifolds
    Machine Learning, 2004
    Co-Authors: Mikhail Belkin, Partha Niyogi
    Abstract:

    We consider the general problem of utilizing both labeled and unlabeled data to improve classification accuracy. Under the assumption that the data lie on a submanifold in a high-dimensional space, we develop an algorithmic framework to classify a partially labeled data set in a principled manner. The central idea of our approach is that classification functions are naturally defined only on the submanifold in question, rather than on the total ambient space. Using the Laplace-Beltrami operator, one produces a basis (the Laplacian Eigenmaps) for a Hilbert space of square-integrable functions on the submanifold. To recover such a basis, only unlabeled examples are required. Once such a basis is obtained, training can be performed using the labeled data set. Our algorithm models the manifold using the adjacency graph for the data and approximates the Laplace-Beltrami operator by the graph Laplacian. We provide details of the algorithm, its theoretical justification, and several practical applications for image, speech, and text classification.
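The pipeline described above can be sketched in a few lines. This is an illustrative toy, not the paper's experiments: the data (two concentric circles, one per class), the bandwidth, and the number of labeled points are all chosen for the example. All points fix a basis of graph-Laplacian eigenvectors; a handful of labeled points then determine the classifier by least squares in that basis.

```python
import numpy as np

# Toy data: two concentric circles, one class per circle.
rng = np.random.default_rng(2)
n = 150
t = rng.uniform(0, 2 * np.pi, 2 * n)
r = np.repeat([1.0, 3.0], n)
X = np.column_stack([r * np.cos(t), r * np.sin(t)])
y = np.repeat([1.0, -1.0], n)                # ground-truth labels

# Graph Laplacian built from ALL points (labeled and unlabeled).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / 0.5**2)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# Basis: eigenvectors of L with the smallest eigenvalues
# (the discrete analogue of the Laplacian Eigenmaps basis).
_, vecs = np.linalg.eigh(L)
Phi = vecs[:, :2]

# Pretend only 4 points carry labels; fit coefficients by least squares.
labeled = np.array([0, 1, n, n + 1])
c, *_ = np.linalg.lstsq(Phi[labeled], y[labeled], rcond=None)
pred = np.sign(Phi @ c)                      # classify every point

accuracy = (pred == y).mean()
print(accuracy)
```

Because the classifier lives in the span of smooth eigenvectors, the unlabeled points shape the hypothesis space and four labels suffice to separate the two circles.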

  • Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering
    NIPS, 2001
    Co-Authors: Mikhail Belkin, Partha Niyogi
    Abstract:

    Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low-dimensional manifold embedded in a higher-dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Several applications are considered.
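A minimal sketch of the eigenmap construction (toy data and parameters, chosen for illustration): sample a one-dimensional curve, here a helix segment in R^3, form the Gaussian-weighted graph Laplacian, and use the first nontrivial eigenvector as a one-dimensional embedding. For an interval-like manifold that eigenvector behaves like cos(pi s / length) in arclength s, so it orders the points along the curve.

```python
import numpy as np

# Toy data: a helix segment, a 1-d manifold embedded in R^3.
rng = np.random.default_rng(3)
n = 300
t = np.sort(rng.uniform(0, 3, n))            # parameter ~ arclength
X = np.column_stack([np.cos(t), np.sin(t), t])

# Gaussian-weighted graph Laplacian from ambient distances.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / 0.2**2)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

_, vecs = np.linalg.eigh(L)
v1 = vecs[:, 1]                              # first nontrivial eigenvector

# The 1-d embedding coordinate should be (anti)monotone in the curve
# parameter, i.e. strongly correlated with t (sign is arbitrary).
corr = np.corrcoef(v1, t)[0, 1]
print(round(abs(corr), 2))
```

The same recipe with the first $k$ nontrivial eigenvectors gives a $k$-dimensional embedding, which is the locality-preserving representation the abstract refers to.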

Mikhail Belkin - One of the best experts on this subject based on the ideXlab platform.

  • Constructing Laplace operator from point clouds in R^d
    Symposium on Discrete Algorithms, 2009
    Co-Authors: Mikhail Belkin, Jian Sun, Yusu Wang
    Abstract:

    We present an algorithm for approximating the Laplace-Beltrami operator from an arbitrary point cloud obtained from a k-dimensional manifold embedded in d-dimensional space. We show that this PCD Laplace (Point-Cloud Data Laplace) operator converges to the Laplace-Beltrami operator on the underlying manifold as the point cloud becomes denser. Unlike previous work, we do not assume that the data samples are drawn independently and identically from a probability distribution, and we do not require a global mesh. The resulting algorithm is easy to implement. We present experimental results indicating that, even for point sets sampled from a uniform distribution, the PCD Laplace operator converges faster than the weighted graph Laplacian. We also show that using our PCD Laplacian we can directly estimate certain geometric invariants, such as manifold area.
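The heat-kernel idea behind such operators can be shown in a simplified form. The sketch below is not the PCD construction of the paper (which estimates local "area" weights from the point cloud); it uses a deterministic grid on the unit circle, where the length weight per point is known exactly, which also illustrates that no i.i.d. sampling assumption is involved. In the (D - W) form, the weighted heat-kernel operator applied to f = cos recovers -Δf = cos on the circle up to O(t).

```python
import numpy as np

# Deterministic (non-i.i.d.) grid on the unit circle.
N, t = 1000, 0.01
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])
mu = 2 * np.pi / N                     # exact length weight per point

# m = 1 ambient heat kernel evaluated at Euclidean distances.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
G = (4 * np.pi * t) ** -0.5 * np.exp(-D2 / (4 * t))

f = np.cos(theta)                      # eigenfunction: Delta f = -f
deg = mu * G.sum(1)                    # operator applied to the constant 1
Lf = (deg * f - mu * (G @ f)) / t      # (D - W) form cancels the bias terms

err = np.max(np.abs(Lf - f))           # compare with -Delta f = cos
print(err < 0.05)
```

The (D - W) form is essential: applying the kernel average directly leaves an O(1) curvature bias, while the difference against the operator's action on the constant function cancels it.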

  • Discrete Laplace operator on meshed surfaces
    Symposium on Computational Geometry, 2008
    Co-Authors: Mikhail Belkin, Jian Sun, Yusu Wang
    Abstract:

    In recent years a considerable amount of work in graphics and geometric optimization has used tools based on the Laplace-Beltrami operator on a surface. Applications of the Laplacian include mesh editing, surface smoothing, and shape interpolation, among others. However, it has been shown [13, 24, 26] that the popular cotangent approximation schemes do not provide convergent point-wise (or even L2) estimates, while many applications rely on point-wise estimation. The existence of such schemes has been an open question [13]. In this paper we propose the first algorithm for approximating the Laplace operator of a surface from a mesh with point-wise convergence guarantees applicable to arbitrary meshed surfaces. We show that for a sufficiently fine mesh over an arbitrary surface, our mesh Laplacian is close to the Laplace-Beltrami operator on the surface at every point of the surface. Moreover, the proposed algorithm is simple and easily implementable. Experimental evidence shows that our algorithm exhibits convergence empirically and compares favorably with cotangent-based methods in providing accurate approximations of the Laplace operator for various meshes.
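A simplified illustration of the point-wise claim (not the paper's mesh-Laplace scheme, which sums the heat kernel over mesh faces with per-face areas): on a flat, regularly meshed patch the per-vertex area weights are exact, and the heat-kernel operator in (D - W) form evaluated at interior vertices returns -Δf point-wise. The patch, function, and time parameter below are all chosen for the example.

```python
import numpy as np

# A flat square patch meshed by a regular 80 x 80 vertex grid.
t = 0.01
n = 80
g = np.linspace(0, 2 * np.pi, n)
xx, yy = np.meshgrid(g, g)
V = np.column_stack([xx.ravel(), yy.ravel()])   # mesh vertices
A = (g[1] - g[0]) ** 2                          # area weight per vertex
f = np.cos(V[:, 0])                             # Delta f = -cos(x)

# Evaluate only well inside the patch, away from boundary effects.
inner = np.where((V[:, 0] > 0.8) & (V[:, 0] < 2 * np.pi - 0.8)
                 & (V[:, 1] > 0.8) & (V[:, 1] < 2 * np.pi - 0.8))[0][:200]

# m = 2 heat kernel with area weights, evaluated at inner vertices.
D2 = ((V[inner, None, :] - V[None, :, :]) ** 2).sum(-1)
G = A / (4 * np.pi * t) * np.exp(-D2 / (4 * t))
Lf = (G.sum(1) * f[inner] - G @ f) / t          # (D - W) form

err = np.max(np.abs(Lf - f[inner]))             # vs -Delta f = cos(x)
print(err < 0.05)
```

The point-wise error here shrinks with the time parameter t and the mesh spacing, which is the flat-patch analogue of the convergence guarantee stated in the abstract.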

  • Towards a theoretical foundation for Laplacian-based manifold methods
    Lecture Notes in Computer Science, 2005
    Co-Authors: Mikhail Belkin, Partha Niyogi
    Abstract:

    In recent years, manifold methods have attracted a considerable amount of attention in machine learning. However, most algorithms in that class may be termed manifold-motivated, as they lack any explicit theoretical guarantees. In this paper we take a step towards closing the gap between theory and practice for a class of Laplacian-based manifold methods. We show that under certain conditions the graph Laplacian of a point cloud converges to the Laplace-Beltrami operator on the underlying manifold. Theorem 1 contains the first result showing convergence of a random graph Laplacian to the manifold Laplacian in the machine learning context.

  • Semi-supervised learning on Riemannian manifolds
    Machine Learning, 2004
    Co-Authors: Mikhail Belkin, Partha Niyogi
    Abstract:

    We consider the general problem of utilizing both labeled and unlabeled data to improve classification accuracy. Under the assumption that the data lie on a submanifold in a high-dimensional space, we develop an algorithmic framework to classify a partially labeled data set in a principled manner. The central idea of our approach is that classification functions are naturally defined only on the submanifold in question, rather than on the total ambient space. Using the Laplace-Beltrami operator, one produces a basis (the Laplacian Eigenmaps) for a Hilbert space of square-integrable functions on the submanifold. To recover such a basis, only unlabeled examples are required. Once such a basis is obtained, training can be performed using the labeled data set. Our algorithm models the manifold using the adjacency graph for the data and approximates the Laplace-Beltrami operator by the graph Laplacian. We provide details of the algorithm, its theoretical justification, and several practical applications for image, speech, and text classification.

  • Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering
    NIPS, 2001
    Co-Authors: Mikhail Belkin, Partha Niyogi
    Abstract:

    Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low-dimensional manifold embedded in a higher-dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Several applications are considered.

Francesca Antoci - One of the best experts on this subject based on the ideXlab platform.

Nicolas Garcia Trillos - One of the best experts on this subject based on the ideXlab platform.

  • Error estimates for spectral convergence of the graph Laplacian on random geometric graphs toward the Laplace-Beltrami operator
    Foundations of Computational Mathematics, 2020
    Co-Authors: Nicolas Garcia Trillos, Moritz Gerlach, Matthias Hein, Dejan Slepcev
    Abstract:

    We study the convergence of the graph Laplacian of a random geometric graph generated by an i.i.d. sample from an $m$-dimensional submanifold $\mathcal{M}$ in $\mathbb{R}^d$ as the sample size $n$ increases and the neighborhood size $h$ tends to zero. We show that eigenvalues and eigenvectors of the graph Laplacian converge at a rate of $O\big((\tfrac{\log n}{n})^{\frac{1}{2m}}\big)$ to the eigenvalues and eigenfunctions of the weighted Laplace-Beltrami operator of $\mathcal{M}$. No information on the submanifold $\mathcal{M}$ is needed in the construction of the graph or the "out-of-sample extension" of the eigenvectors. Of independent interest is a generalization of the rate of convergence of empirical measures on submanifolds in $\mathbb{R}^d$ in infinity transportation distance.

  • Error estimates for spectral convergence of the graph Laplacian on random geometric graphs towards the Laplace-Beltrami operator
    arXiv: Machine Learning, 2018
    Co-Authors: Nicolas Garcia Trillos, Moritz Gerlach, Matthias Hein, Dejan Slepcev
    Abstract:

    We study the convergence of the graph Laplacian of a random geometric graph generated by an i.i.d. sample from an $m$-dimensional submanifold $M$ in $\mathbb{R}^d$ as the sample size $n$ increases and the neighborhood size $h$ tends to zero. We show that eigenvalues and eigenvectors of the graph Laplacian converge at a rate of $O\big((\tfrac{\log n}{n})^{\frac{1}{2m}}\big)$ to the eigenvalues and eigenfunctions of the weighted Laplace-Beltrami operator of $M$. No information on the submanifold $M$ is needed in the construction of the graph or the "out-of-sample extension" of the eigenvectors. Of independent interest is a generalization of the rate of convergence of empirical measures on submanifolds in $\mathbb{R}^d$ in infinity transportation distance.