Statistical Efficiency

The Experts below are selected from a list of 297 Experts worldwide ranked by the ideXlab platform.

Martin J. Wainwright - One of the best experts on this subject based on the ideXlab platform.

  • High-dimensional Variable Selection with Sparse Random Projections: Measurement Sparsity and Statistical Efficiency
    Journal of Machine Learning Research, 2010
    Co-Authors: Dapo Omidiran, Martin J. Wainwright
    Abstract:

    We consider the problem of high-dimensional variable selection: given n noisy observations of a k-sparse vector β* ∈ Rp, estimate the subset of non-zero entries of β*. A significant body of work has studied behavior of ℓ1-relaxations when applied to random measurement matrices that are dense (e.g., Gaussian, Bernoulli). In this paper, we analyze sparsified measurement ensembles, and consider the trade-off between measurement sparsity, as measured by the fraction γ of non-zero entries, and the Statistical Efficiency, as measured by the minimal number of observations n required for correct variable selection with probability converging to one. Our main result is to prove that it is possible to let the fraction of non-zero entries γ → 0 at some rate, yielding measurement matrices with a vanishing fraction of non-zeros per row, while retaining the same Statistical Efficiency as dense ensembles. A variety of simulation results confirm the sharpness of our theoretical predictions.

  • High-dimensional subset recovery in noise: Sparsified measurements without loss of Statistical Efficiency
    arXiv: Machine Learning, 2008
    Co-Authors: Dapo Omidiran, Martin J. Wainwright
    Abstract:

    We consider the problem of estimating the support of a vector $\beta^* \in \mathbb{R}^{p}$ based on observations contaminated by noise. A significant body of work has studied behavior of $\ell_1$-relaxations when applied to measurement matrices drawn from standard dense ensembles (e.g., Gaussian, Bernoulli). In this paper, we analyze \emph{sparsified} measurement ensembles, and consider the trade-off between measurement sparsity, as measured by the fraction $\gamma$ of non-zero entries, and the Statistical Efficiency, as measured by the minimal number of observations $n$ required for exact support recovery with probability converging to one. Our main result is to prove that it is possible to let $\gamma \to 0$ at some rate, yielding measurement matrices with a vanishing fraction of non-zeros per row while retaining the same Statistical Efficiency as dense ensembles. A variety of simulation results confirm the sharpness of our theoretical predictions.

  • ISIT - High-dimensional subset recovery in noise: Sparse measurements and Statistical Efficiency
    2008 IEEE International Symposium on Information Theory, 2008
    Co-Authors: Dapo Omidiran, Martin J. Wainwright
    Abstract:

    We consider the problem of estimating the support of a vector β* ∈ Rp based on observations contaminated by noise. A significant body of work has studied behavior of ℓ1-relaxations when applied to measurement matrices drawn from standard dense ensembles (e.g., Gaussian, Bernoulli). In this paper, we analyze sparsified measurement ensembles, and consider the trade-off between measurement sparsity, as measured by the fraction γ of non-zero entries, and the Statistical Efficiency, as measured by the minimal number of observations n required for exact support recovery with probability converging to one. Our main result is to prove that it is possible to let γ → 0 at some rate, yielding measurement matrices with a vanishing fraction of non-zeros per row while retaining the same Statistical Efficiency (sample size n) as dense ensembles. A variety of simulation results confirm the sharpness of our theoretical predictions.
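
The trade-off described in the abstracts above can be illustrated numerically: draw a measurement matrix whose entries are non-zero only with probability γ, solve an ℓ1-penalized least-squares problem (the Lasso), and check whether the support of β* is recovered. The sketch below is a minimal illustration with assumed problem sizes (p, k, n, γ) and a plain ISTA solver; it is not the authors' experimental code.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, n, gamma = 512, 8, 200, 0.1   # assumed problem sizes, not taken from the paper

# k-sparse signal beta* and a gamma-sparsified +/-1 measurement ensemble
beta_star = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta_star[support] = rng.choice([-1.0, 1.0], size=k)

mask = rng.random((n, p)) < gamma                      # fraction gamma of non-zeros per entry
X = mask * rng.choice([-1.0, 1.0], size=(n, p)) / np.sqrt(n * gamma)
y = X @ beta_star + 0.1 * rng.standard_normal(n)       # noisy observations

# Lasso via ISTA: minimize 0.5 * ||y - X b||^2 + lam * ||b||_1
lam = 0.3
L = np.linalg.norm(X, 2) ** 2                          # Lipschitz constant of the gradient
beta = np.zeros(p)
for _ in range(2000):
    z = beta - X.T @ (X @ beta - y) / L                # gradient step
    beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

recovered = set(np.flatnonzero(np.abs(beta) > 0.3))
print("exact support recovery:", recovered == set(support))
```

Whether recovery succeeds depends on how n scales with k, p, and γ; rerunning the sketch over a grid of (n, γ) values reproduces the kind of trade-off curves the papers study.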

Dapo Omidiran - One of the best experts on this subject based on the ideXlab platform.

  • High-dimensional Variable Selection with Sparse Random Projections: Measurement Sparsity and Statistical Efficiency
    Journal of Machine Learning Research, 2010
    Co-Authors: Dapo Omidiran, Martin J. Wainwright
    Abstract:

    We consider the problem of high-dimensional variable selection: given n noisy observations of a k-sparse vector β* ∈ Rp, estimate the subset of non-zero entries of β*. A significant body of work has studied behavior of ℓ1-relaxations when applied to random measurement matrices that are dense (e.g., Gaussian, Bernoulli). In this paper, we analyze sparsified measurement ensembles, and consider the trade-off between measurement sparsity, as measured by the fraction γ of non-zero entries, and the Statistical Efficiency, as measured by the minimal number of observations n required for correct variable selection with probability converging to one. Our main result is to prove that it is possible to let the fraction of non-zero entries γ → 0 at some rate, yielding measurement matrices with a vanishing fraction of non-zeros per row, while retaining the same Statistical Efficiency as dense ensembles. A variety of simulation results confirm the sharpness of our theoretical predictions.

  • High-dimensional subset recovery in noise: Sparsified measurements without loss of Statistical Efficiency
    arXiv: Machine Learning, 2008
    Co-Authors: Dapo Omidiran, Martin J. Wainwright
    Abstract:

    We consider the problem of estimating the support of a vector $\beta^* \in \mathbb{R}^{p}$ based on observations contaminated by noise. A significant body of work has studied behavior of $\ell_1$-relaxations when applied to measurement matrices drawn from standard dense ensembles (e.g., Gaussian, Bernoulli). In this paper, we analyze \emph{sparsified} measurement ensembles, and consider the trade-off between measurement sparsity, as measured by the fraction $\gamma$ of non-zero entries, and the Statistical Efficiency, as measured by the minimal number of observations $n$ required for exact support recovery with probability converging to one. Our main result is to prove that it is possible to let $\gamma \to 0$ at some rate, yielding measurement matrices with a vanishing fraction of non-zeros per row while retaining the same Statistical Efficiency as dense ensembles. A variety of simulation results confirm the sharpness of our theoretical predictions.

  • ISIT - High-dimensional subset recovery in noise: Sparse measurements and Statistical Efficiency
    2008 IEEE International Symposium on Information Theory, 2008
    Co-Authors: Dapo Omidiran, Martin J. Wainwright
    Abstract:

    We consider the problem of estimating the support of a vector β* ∈ Rp based on observations contaminated by noise. A significant body of work has studied behavior of ℓ1-relaxations when applied to measurement matrices drawn from standard dense ensembles (e.g., Gaussian, Bernoulli). In this paper, we analyze sparsified measurement ensembles, and consider the trade-off between measurement sparsity, as measured by the fraction γ of non-zero entries, and the Statistical Efficiency, as measured by the minimal number of observations n required for exact support recovery with probability converging to one. Our main result is to prove that it is possible to let γ → 0 at some rate, yielding measurement matrices with a vanishing fraction of non-zeros per row while retaining the same Statistical Efficiency (sample size n) as dense ensembles. A variety of simulation results confirm the sharpness of our theoretical predictions.

Kohei Hayashi - One of the best experts on this subject based on the ideXlab platform.

  • On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
    arXiv: Machine Learning, 2017
    Co-Authors: Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
    Abstract:

    Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of Statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop an alternating optimization method with a randomization technique, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor.

  • On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
    Neural Information Processing Systems, 2017
    Co-Authors: Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
    Abstract:

    Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of Statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor.

  • NIPS - On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
    2017
    Co-Authors: Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
    Abstract:

    Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of Statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor.
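
As background for the abstracts above, the TT format itself can be computed by the standard TT-SVD procedure (sequential truncated SVDs). The sketch below decomposes a small synthetic tensor with assumed sizes and ranks and reports the storage saving; it illustrates only the TT representation, not the paper's convex relaxation or its randomized solver.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Standard TT-SVD: sequential truncated SVDs produce the TT cores."""
    shape = tensor.shape
    cores, r_prev, mat = [], 1, tensor
    for k in range(len(shape) - 1):
        mat = mat.reshape(r_prev * shape[k], -1)       # unfold: (r_{k-1} * n_k) x rest
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                      # truncate to the target TT rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = s[:r, None] * Vt[:r]                     # carry the remainder forward
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

# A small 10 x 10 x 10 tensor with exact TT ranks (2, 2), built from random cores.
rng = np.random.default_rng(0)
G = [rng.standard_normal(s) for s in [(1, 10, 2), (2, 10, 2), (2, 10, 1)]]
full = np.einsum('aib,bjc,ckd->ijk', *G)

cores = tt_svd(full, max_rank=2)
approx = np.einsum('aib,bjc,ckd->ijk', *cores)
print("relative error:", np.linalg.norm(full - approx) / np.linalg.norm(full))
print("entries stored in TT form:", sum(c.size for c in cores), "vs full tensor:", full.size)
```

Storage drops from 1,000 entries to 80 here; for a d-way tensor with mode size n and TT ranks r the cost is O(d n r^2) rather than n^d, which is the space efficiency the abstracts refer to.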

Pierre Comon - One of the best experts on this subject based on the ideXlab platform.

  • Statistical Efficiency of structured CPD estimation applied to Wiener-Hammerstein modeling
    2015
    Co-Authors: José Henrique De Morais Goulart, Maxime Boizard, Remy Boyer, Gérard Favier, Pierre Comon
    Abstract:

    The computation of a structured canonical polyadic decomposition (CPD) is useful to address several important modeling problems in real-world applications. In this paper, we consider the identification of a nonlinear system by means of a Wiener-Hammerstein model, assuming a high-order Volterra kernel of that system has been previously estimated. Such a kernel, viewed as a tensor, admits a CPD with banded circulant factors which comprise the model parameters. To estimate them, we formulate specialized estimators based on recently proposed algorithms for the computation of structured CPDs. Then, considering the presence of additive white Gaussian noise, we derive a closed-form expression for the Cramér-Rao bound (CRB) associated with this estimation problem. Finally, we assess the Statistical performance of the proposed estimators via Monte Carlo simulations, by comparing their mean-square error with the CRB.

  • EUSIPCO - Statistical Efficiency of structured CPD estimation applied to Wiener-Hammerstein modeling
    2015 23rd European Signal Processing Conference (EUSIPCO), 2015
    Co-Authors: José Henrique De Morais Goulart, Maxime Boizard, Remy Boyer, Gérard Favier, Pierre Comon
    Abstract:

    The computation of a structured canonical polyadic decomposition (CPD) is useful to address several important modeling problems in real-world applications. In this paper, we consider the identification of a nonlinear system by means of a Wiener-Hammerstein model, assuming a high-order Volterra kernel of that system has been previously estimated. Such a kernel, viewed as a tensor, admits a CPD with banded circulant factors which comprise the model parameters. To estimate them, we formulate specialized estimators based on recently proposed algorithms for the computation of structured CPDs. Then, considering the presence of additive white Gaussian noise, we derive a closed-form expression for the Cramér-Rao bound (CRB) associated with this estimation problem. Finally, we assess the Statistical performance of the proposed estimators via Monte Carlo simulations, by comparing their mean-square error with the CRB.
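
The evaluation methodology in the abstracts above, comparing an estimator's Monte Carlo mean-square error against the Cramér-Rao bound, can be illustrated on a far simpler model. The sketch below estimates a scalar amplitude in additive white Gaussian noise rather than the structured-CPD parameters of the paper; the signal, noise level, and trial count are assumed values chosen only to show the MSE-versus-CRB comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, theta_true = 64, 0.5, 1.3                 # assumed toy values
s = np.sin(2 * np.pi * 0.1 * np.arange(n))          # known signal shape

# Model: y = theta * s + w, with w ~ N(0, sigma^2 I).
# For this model the Cramer-Rao bound on var(theta_hat) is sigma^2 / ||s||^2.
crb = sigma ** 2 / (s @ s)

# Least-squares (maximum-likelihood) estimator, assessed by Monte Carlo.
trials = 10000
sq_err = np.empty(trials)
for t in range(trials):
    y = theta_true * s + sigma * rng.standard_normal(n)
    theta_hat = (s @ y) / (s @ s)
    sq_err[t] = (theta_hat - theta_true) ** 2

print(f"Monte Carlo MSE: {sq_err.mean():.5f}   CRB: {crb:.5f}")
```

For this linear Gaussian model the estimator is efficient, so the two numbers essentially coincide; in the papers the same comparison is carried out for the structured-CPD estimators of the Wiener-Hammerstein parameters.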

Masaaki Imaizumi - One of the best experts on this subject based on the ideXlab platform.

  • On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
    arXiv: Machine Learning, 2017
    Co-Authors: Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
    Abstract:

    Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of Statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop an alternating optimization method with a randomization technique, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor.

  • On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
    Neural Information Processing Systems, 2017
    Co-Authors: Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
    Abstract:

    Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of Statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor.

  • NIPS - On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
    2017
    Co-Authors: Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
    Abstract:

    Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of Statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor.