Gaussian

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 360 Experts worldwide ranked by ideXlab platform

Ofer Zeitouni - One of the best experts on this subject based on the ideXlab platform.

  • on the limitation of spectral methods from the Gaussian hidden clique problem to rank one perturbations of Gaussian tensors
    IEEE Transactions on Information Theory, 2017
    Co-Authors: Andrea Montanari, Daniel Reichman, Ofer Zeitouni
    Abstract:

    We consider the following detection problem: given a realization of a symmetric matrix X of dimension $n$, distinguish between the hypothesis that all upper triangular variables are independent and identically distributed (i.i.d.) Gaussian variables with mean 0 and variance 1, and the hypothesis where X is the sum of such a matrix and an independent rank-one perturbation. This setup applies to the situation where, under the alternative, there is a planted principal submatrix B of size $L$ for which all upper triangular variables are i.i.d. Gaussians with mean 1 and variance 1, whereas all other upper triangular elements of X not in B are i.i.d. Gaussian variables with mean 0 and variance 1. We refer to this as the “Gaussian hidden clique problem.” When $L=(1+\epsilon )\sqrt {n}$ ( $\epsilon >0$ ), it is possible to solve this detection problem with probability $1-o_{n}(1)$ by computing the spectrum of X and considering the largest eigenvalue of X. We prove that this condition is tight in the following sense: when $L<(1-\epsilon )\sqrt {n}$, no algorithm that examines only the eigenvalues of X can detect the existence of a hidden Gaussian clique, with error probability vanishing as $n\to \infty $. We prove this result as an immediate consequence of a more general result on rank-one perturbations of $k$ -dimensional Gaussian tensors. In this context, we establish a lower bound on the critical signal-to-noise ratio below which a rank-one signal cannot be detected.
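The spectral test the abstract describes can be sketched numerically. The following is a minimal illustration (not the paper's proof apparatus): a GOE-like symmetric noise matrix, a mean-one planted block, and a detector that thresholds the top eigenvalue near the bulk edge $2\sqrt{n}$; the sizes `n`, `L`, and the 0.5 margin are choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_x(n, L=0):
    """Symmetric Gaussian noise matrix; if L > 0, plant a mean-1
    principal submatrix on the first L indices."""
    A = rng.normal(size=(n, n))
    X = (A + A.T) / np.sqrt(2)  # off-diagonal entries ~ N(0, 1); the
                                # diagonal variance differs, which is
                                # immaterial for this illustration
    if L:
        X[:L, :L] += 1.0        # planted "Gaussian hidden clique"
    return X

def spectral_detect(X, margin=0.5):
    """Declare a planted block iff the top eigenvalue clears the
    bulk edge 2*sqrt(n) by a small margin."""
    n = X.shape[0]
    lam_max = np.linalg.eigvalsh(X)[-1]
    return lam_max > 2 * np.sqrt(n) + margin

n = 400
L = int(2 * np.sqrt(n))  # L = 40 > sqrt(n): the detectable regime
print(spectral_detect(sample_x(n)))     # null hypothesis
print(spectral_detect(sample_x(n, L)))  # planted block
```

With $L$ well above $\sqrt{n}$ the perturbation pushes the top eigenvalue to roughly $L + n/L$, far beyond the edge; below $\sqrt{n}$, per the theorem, the spectrum alone carries no usable signal.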

  • on the limitation of spectral methods from the Gaussian hidden clique problem to rank one perturbations of Gaussian tensors
    Neural Information Processing Systems, 2015
    Co-Authors: Andrea Montanari, Daniel Reichman, Ofer Zeitouni
    Abstract:

    We consider the following detection problem: given a realization of a symmetric matrix X of dimension n, distinguish between the hypothesis that all upper triangular variables are i.i.d. Gaussian variables with mean 0 and variance 1 and the hypothesis that there is a planted principal submatrix B of dimension L for which all upper triangular variables are i.i.d. Gaussians with mean 1 and variance 1, whereas all other upper triangular elements of X not in B are i.i.d. Gaussian variables with mean 0 and variance 1. We refer to this as the 'Gaussian hidden clique problem'. When L = (1 + ∊) √n (∊ > 0), it is possible to solve this detection problem with probability 1 - o_n(1) by computing the spectrum of X and considering the largest eigenvalue of X. We prove that when L < (1 - ∊) √n no algorithm that examines only the eigenvalues of X can detect the existence of a hidden Gaussian clique, with error probability vanishing as n → ∞. The result above is an immediate consequence of a more general result on rank-one perturbations of k-dimensional Gaussian tensors. In this context we establish a lower bound on the critical signal-to-noise ratio below which a rank-one signal cannot be detected.

  • on the limitation of spectral methods from the Gaussian hidden clique problem to rank one perturbations of Gaussian tensors
    arXiv: Statistics Theory, 2014
    Co-Authors: Andrea Montanari, Daniel Reichman, Ofer Zeitouni
    Abstract:

    We consider the following detection problem: given a realization of a symmetric matrix ${\mathbf{X}}$ of dimension $n$, distinguish between the hypothesis that all upper triangular variables are i.i.d. Gaussian variables with mean 0 and variance $1$ and the hypothesis where ${\mathbf{X}}$ is the sum of such a matrix and an independent rank-one perturbation. This setup applies to the situation where under the alternative, there is a planted principal submatrix ${\mathbf{B}}$ of size $L$ for which all upper triangular variables are i.i.d. Gaussians with mean $1$ and variance $1$, whereas all other upper triangular elements of ${\mathbf{X}}$ not in ${\mathbf{B}}$ are i.i.d. Gaussian variables with mean 0 and variance $1$. We refer to this as the `Gaussian hidden clique problem.' When $L=(1+\epsilon)\sqrt{n}$ ($\epsilon>0$), it is possible to solve this detection problem with probability $1-o_n(1)$ by computing the spectrum of ${\mathbf{X}}$ and considering the largest eigenvalue of ${\mathbf{X}}$. We prove that this condition is tight in the following sense: when $L<(1-\epsilon)\sqrt{n}$ no algorithm that examines only the eigenvalues of ${\mathbf{X}}$ can detect the existence of a hidden Gaussian clique, with error probability vanishing as $n\to\infty$. We prove this result as an immediate consequence of a more general result on rank-one perturbations of $k$-dimensional Gaussian tensors. In this context we establish a lower bound on the critical signal-to-noise ratio below which a rank-one signal cannot be detected.

Jurg Hutter - One of the best experts on this subject based on the ideXlab platform.

  • fast evaluation of solid harmonic Gaussian integrals for local resolution of the identity methods and range separated hybrid functionals
    arXiv: Chemical Physics, 2017
    Co-Authors: Dorothea Golze, Jan Wilhelm, Niels Benedikter, Marcella Iannuzzi, Jurg Hutter
    Abstract:

    An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.
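For reference on what such two-center integrals look like in the simplest case: the overlap of two s-type (zero angular momentum) primitive Gaussians has a closed form via the Gaussian product theorem. The sketch below is illustrative only (it is not the solid-harmonic recursion scheme of the paper) and checks the closed form against brute-force quadrature; exponents and centers are arbitrary example values.

```python
import numpy as np

def s_overlap(alpha, beta, A, B):
    """Analytic overlap of two unnormalized s-type primitive Gaussians
    exp(-alpha|r-A|^2) and exp(-beta|r-B|^2), by the Gaussian product
    theorem: S = (pi/p)^(3/2) * exp(-mu*|A-B|^2), p = alpha+beta."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    p = alpha + beta
    mu = alpha * beta / p
    return (np.pi / p) ** 1.5 * np.exp(-mu * np.dot(A - B, A - B))

def s_overlap_numeric(alpha, beta, A, B, lim=8.0, npts=2001):
    """Brute-force quadrature, axis by axis (the integrand factorizes
    over Cartesian directions)."""
    val = 1.0
    for a, b in zip(A, B):
        x = np.linspace(min(a, b) - lim, max(a, b) + lim, npts)
        val *= np.trapz(np.exp(-alpha * (x - a) ** 2 - beta * (x - b) ** 2), x)
    return val

S_a = s_overlap(0.5, 1.2, [0.0, 0.0, 0.0], [0.0, 0.0, 1.4])
S_n = s_overlap_numeric(0.5, 1.2, [0.0, 0.0, 0.0], [0.0, 0.0, 1.4])
print(S_a, S_n)  # the two values agree to quadrature accuracy
```

Higher angular momenta, contraction, and the non-local operators in the abstract add substantial structure on top of this base case, which is exactly where the paper's solid-harmonic scheme pays off.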

  • fast evaluation of solid harmonic Gaussian integrals for local resolution of the identity methods and range separated hybrid functionals
    Journal of Chemical Physics, 2017
    Co-Authors: Dorothea Golze, Jan Wilhelm, Niels Benedikter, Marcella Iannuzzi, Jurg Hutter
    Abstract:

    An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.

Steve Renals - One of the best experts on this subject based on the ideXlab platform.

  • joint uncertainty decoding for noise robust subspace Gaussian mixture models
    IEEE Transactions on Audio Speech and Language Processing, 2013
    Co-Authors: Liang Lu, Arnab Ghoshal, K K Chin, Steve Renals
    Abstract:

    Joint uncertainty decoding (JUD) is a model-based noise compensation technique for conventional Gaussian Mixture Model (GMM) based speech recognition systems. Unlike vector Taylor series (VTS) compensation, which operates on the individual Gaussian components in an acoustic model, JUD clusters the Gaussian components into a smaller number of classes, sharing the compensation parameters for the set of Gaussians in a given class. This significantly reduces the computational cost. In this paper, we investigate noise compensation for subspace Gaussian mixture model (SGMM) based speech recognition systems using JUD. The total number of Gaussian components in an SGMM is typically very large. Therefore, direct compensation of the individual Gaussian components, as performed by VTS, is computationally expensive. In this paper we show that JUD-based noise compensation can be successfully applied to SGMMs in a computationally efficient way. We evaluate the JUD/SGMM technique on the standard Aurora 4 corpus. Our experimental results indicate that the JUD/SGMM system results in lower word error rates compared with a conventional GMM system with either VTS-based or JUD-based noise compensation.
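The saving from sharing compensation parameters across a class of Gaussians can be caricatured as follows. This is a deliberately simplified sketch: a single additive bias estimated per regression class, ignoring the VTS-style Jacobians and covariance compensation that real JUD involves; all sizes and the class assignment are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "acoustic model": M Gaussian mean vectors in d dimensions,
# pre-grouped into R regression classes (stand-in for a clustering
# of the components, as JUD assumes).
M, d, R = 1000, 13, 8
means = rng.normal(size=(M, d))
classes = rng.integers(0, R, size=M)

# Simulated noisy condition: one global additive shift plus jitter.
true_bias = rng.normal(size=d)
noisy = means + true_bias + 0.05 * rng.normal(size=(M, d))

# JUD-style idea: estimate ONE compensation offset per class
# (R parameters sets) instead of one per Gaussian (M of them),
# then share the offset within each class.
offsets = np.zeros((R, d))
for r in range(R):
    idx = classes == r
    offsets[r] = (noisy[idx] - means[idx]).mean(axis=0)

compensated = noisy - offsets[classes]
err = np.abs(compensated - means).mean()
print(err)  # small residual: shared offsets recover the clean means
```

The point of the caricature is the parameter count: R compensation estimates serve all M components, which is why JUD remains tractable for SGMMs even when M is very large.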

  • noise adaptive training for subspace Gaussian mixture models
    Conference of the International Speech Communication Association, 2013
    Co-Authors: Liang Lu, Arnab Ghoshal, Steve Renals
    Abstract:

    Noise adaptive training (NAT) is an effective approach to normalise the environmental distortions in the training data. This paper investigates the model-based NAT scheme using joint uncertainty decoding (JUD) for subspace Gaussian mixture models (SGMMs). A typical SGMM acoustic model has a much larger number of surface Gaussian components, which makes it computationally infeasible to compensate each Gaussian explicitly. JUD tackles the problem by sharing the compensation parameters among the Gaussians and hence reduces the computational and memory demands. For noise adaptive training, JUD is reformulated into a generative model, which leads to an efficient expectation-maximisation (EM) based algorithm to update the SGMM acoustic model parameters. We evaluated the SGMMs with NAT on the Aurora 4 database, and obtained higher recognition accuracy compared to systems without adaptive training. Index Terms: adaptive training, noise robustness, joint uncertainty decoding, subspace Gaussian mixture models

  • noise compensation for subspace Gaussian mixture models
    Conference of the International Speech Communication Association, 2012
    Co-Authors: Liang Lu, Arnab Ghoshal, K K Chin, Steve Renals
    Abstract:

    Joint uncertainty decoding (JUD) is an effective model-based noise compensation technique for conventional Gaussian mixture model (GMM) based speech recognition systems. In this paper, we apply JUD to subspace Gaussian mixture model (SGMM) based acoustic models. The total number of Gaussians in the SGMM acoustic model is usually much larger than for conventional GMMs, which limits the application of approaches that explicitly compensate each Gaussian, such as vector Taylor series (VTS). However, by clustering the Gaussian components into a number of regression classes, JUD-based noise compensation can be successfully applied to SGMM systems. We evaluate the JUD/SGMM technique using the Aurora 4 corpus, and the experimental results indicate that it is more accurate than conventional GMM-based systems using either VTS or JUD noise compensation.

Xilin Chen - One of the best experts on this subject based on the ideXlab platform.

  • discriminant analysis on riemannian manifold of Gaussian distributions for face recognition with image sets
    IEEE Transactions on Image Processing, 2018
    Co-Authors: Wen Wang, Ruiping Wang, Zhiwu Huang, Shiguang Shan, Xilin Chen
    Abstract:

    To address the problem of face recognition with image sets, we aim to capture the underlying data distribution in each set and thus facilitate more robust classification. To this end, we represent each image set as a Gaussian mixture model (GMM) comprising a number of Gaussian components with prior probabilities, and seek to discriminate Gaussian components from different classes. Since, in the light of information geometry, the Gaussians lie on a specific Riemannian manifold, this paper presents a method named discriminant analysis on Riemannian manifold of Gaussian distributions (DARG). We investigate several distance metrics between Gaussians, and accordingly two discriminative learning frameworks are presented to meet the geometric and statistical characteristics of the specific manifold. The first framework derives a series of provably positive definite probabilistic kernels to embed the manifold into a high-dimensional Hilbert space, where conventional discriminant analysis methods developed in Euclidean space can be applied; a weighted kernel discriminant analysis is devised which learns discriminative representations of the Gaussian components in GMMs, with their prior probabilities as sample weights. Alternatively, the other framework extends the classical graph embedding method to the manifold by utilizing the distance metrics between Gaussians to construct the adjacency graph; the original manifold is thereby embedded into a lower-dimensional, discriminative target manifold with the geometric structure preserved and the interclass separability maximized. The proposed method is evaluated on face identification and verification tasks on four of the most challenging and largest databases, YouTube Celebrities, COX, YouTube Face DB, and Point-and-Shoot Challenge, to demonstrate its superiority over the state-of-the-art.
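One classical example of a provably positive definite kernel between Gaussian densities is the probability product (expected likelihood) kernel, which has a closed form for Gaussians. The sketch below is offered only as an illustration of the kernel-embedding idea in the abstract; it is not necessarily among the specific kernels the paper derives.

```python
import numpy as np

def expected_likelihood_kernel(mu1, S1, mu2, S2):
    """Probability product kernel (rho = 1) between two Gaussians:
    K(p, q) = integral N(x; mu1, S1) N(x; mu2, S2) dx
            = N(mu1; mu2, S1 + S2).
    It is the L2 inner product of the two densities, hence any Gram
    matrix built from it is provably positive semidefinite."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    S = np.asarray(S1, float) + np.asarray(S2, float)
    diff = mu1 - mu2
    d = mu1.size
    quad = diff @ np.linalg.solve(S, diff)
    return (2 * np.pi) ** (-d / 2) / np.sqrt(np.linalg.det(S)) * np.exp(-0.5 * quad)

# Gram matrix over a handful of random diagonal-covariance Gaussians
# (stand-ins for GMM components extracted from image sets):
rng = np.random.default_rng(0)
comps = [(rng.normal(size=3), np.diag(rng.uniform(0.5, 2.0, size=3)))
         for _ in range(5)]
K = np.array([[expected_likelihood_kernel(m1, S1, m2, S2)
               for m2, S2 in comps] for m1, S1 in comps])
print(np.linalg.eigvalsh(K).min())  # non-negative up to round-off: K is PSD
```

With such a kernel in hand, standard kernelized discriminant analysis can operate directly on Gaussian components, which is the mechanism the first framework in the abstract relies on.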

  • discriminant analysis on riemannian manifold of Gaussian distributions for face recognition with image sets
    Computer Vision and Pattern Recognition, 2015
    Co-Authors: Wen Wang, Ruiping Wang, Zhiwu Huang, Shiguang Shan, Xilin Chen
    Abstract:

    This paper presents a method named Discriminant Analysis on Riemannian manifold of Gaussian distributions (DARG) to solve the problem of face recognition with image sets. Our goal is to capture the underlying data distribution in each set and thus facilitate more robust classification. To this end, we represent each image set as a Gaussian Mixture Model (GMM) comprising a number of Gaussian components with prior probabilities, and seek to discriminate Gaussian components from different classes. In the light of information geometry, the Gaussians lie on a specific Riemannian manifold. To encode such Riemannian geometry properly, we investigate several distances between Gaussians and further derive a series of provably positive definite probabilistic kernels. Through these kernels, a weighted Kernel Discriminant Analysis is finally devised which treats the Gaussians in GMMs as samples and their prior probabilities as sample weights. The proposed method is evaluated by face identification and verification tasks on four of the most challenging and largest databases, YouTube Celebrities, COX, YouTube Face DB, and Point-and-Shoot Challenge, to demonstrate its superiority over the state-of-the-art.

Andrea Montanari - One of the best experts on this subject based on the ideXlab platform.

  • on the limitation of spectral methods from the Gaussian hidden clique problem to rank one perturbations of Gaussian tensors
    IEEE Transactions on Information Theory, 2017
    Co-Authors: Andrea Montanari, Daniel Reichman, Ofer Zeitouni
    Abstract:

    We consider the following detection problem: given a realization of a symmetric matrix X of dimension $n$, distinguish between the hypothesis that all upper triangular variables are independent and identically distributed (i.i.d.) Gaussian variables with mean 0 and variance 1, and the hypothesis where X is the sum of such a matrix and an independent rank-one perturbation. This setup applies to the situation where, under the alternative, there is a planted principal submatrix B of size $L$ for which all upper triangular variables are i.i.d. Gaussians with mean 1 and variance 1, whereas all other upper triangular elements of X not in B are i.i.d. Gaussian variables with mean 0 and variance 1. We refer to this as the “Gaussian hidden clique problem.” When $L=(1+\epsilon )\sqrt {n}$ ( $\epsilon >0$ ), it is possible to solve this detection problem with probability $1-o_{n}(1)$ by computing the spectrum of X and considering the largest eigenvalue of X. We prove that this condition is tight in the following sense: when $L<(1-\epsilon )\sqrt {n}$, no algorithm that examines only the eigenvalues of X can detect the existence of a hidden Gaussian clique, with error probability vanishing as $n\to \infty $. We prove this result as an immediate consequence of a more general result on rank-one perturbations of $k$ -dimensional Gaussian tensors. In this context, we establish a lower bound on the critical signal-to-noise ratio below which a rank-one signal cannot be detected.

  • on the limitation of spectral methods from the Gaussian hidden clique problem to rank one perturbations of Gaussian tensors
    Neural Information Processing Systems, 2015
    Co-Authors: Andrea Montanari, Daniel Reichman, Ofer Zeitouni
    Abstract:

    We consider the following detection problem: given a realization of a symmetric matrix X of dimension n, distinguish between the hypothesis that all upper triangular variables are i.i.d. Gaussian variables with mean 0 and variance 1 and the hypothesis that there is a planted principal submatrix B of dimension L for which all upper triangular variables are i.i.d. Gaussians with mean 1 and variance 1, whereas all other upper triangular elements of X not in B are i.i.d. Gaussian variables with mean 0 and variance 1. We refer to this as the 'Gaussian hidden clique problem'. When L = (1 + ∊) √n (∊ > 0), it is possible to solve this detection problem with probability 1 - o_n(1) by computing the spectrum of X and considering the largest eigenvalue of X. We prove that when L < (1 - ∊) √n no algorithm that examines only the eigenvalues of X can detect the existence of a hidden Gaussian clique, with error probability vanishing as n → ∞. The result above is an immediate consequence of a more general result on rank-one perturbations of k-dimensional Gaussian tensors. In this context we establish a lower bound on the critical signal-to-noise ratio below which a rank-one signal cannot be detected.

  • on the limitation of spectral methods from the Gaussian hidden clique problem to rank one perturbations of Gaussian tensors
    arXiv: Statistics Theory, 2014
    Co-Authors: Andrea Montanari, Daniel Reichman, Ofer Zeitouni
    Abstract:

    We consider the following detection problem: given a realization of a symmetric matrix ${\mathbf{X}}$ of dimension $n$, distinguish between the hypothesis that all upper triangular variables are i.i.d. Gaussian variables with mean 0 and variance $1$ and the hypothesis where ${\mathbf{X}}$ is the sum of such a matrix and an independent rank-one perturbation. This setup applies to the situation where under the alternative, there is a planted principal submatrix ${\mathbf{B}}$ of size $L$ for which all upper triangular variables are i.i.d. Gaussians with mean $1$ and variance $1$, whereas all other upper triangular elements of ${\mathbf{X}}$ not in ${\mathbf{B}}$ are i.i.d. Gaussian variables with mean 0 and variance $1$. We refer to this as the `Gaussian hidden clique problem.' When $L=(1+\epsilon)\sqrt{n}$ ($\epsilon>0$), it is possible to solve this detection problem with probability $1-o_n(1)$ by computing the spectrum of ${\mathbf{X}}$ and considering the largest eigenvalue of ${\mathbf{X}}$. We prove that this condition is tight in the following sense: when $L<(1-\epsilon)\sqrt{n}$ no algorithm that examines only the eigenvalues of ${\mathbf{X}}$ can detect the existence of a hidden Gaussian clique, with error probability vanishing as $n\to\infty$. We prove this result as an immediate consequence of a more general result on rank-one perturbations of $k$-dimensional Gaussian tensors. In this context we establish a lower bound on the critical signal-to-noise ratio below which a rank-one signal cannot be detected.