Identity Covariance Matrix

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 135 Experts worldwide ranked by ideXlab platform

Richard J. Samworth - One of the best experts on this subject based on the ideXlab platform.

  • Small confidence sets for the mean of a spherically symmetric distribution
    Journal of The Royal Statistical Society Series B-statistical Methodology, 2005
    Co-Authors: Richard J. Samworth
    Abstract:

    Suppose that "X" has a "k"-variate spherically symmetric distribution with mean vector "t" and Identity Covariance Matrix. We present two spherical confidence sets for "t", both centred at a positive part Stein estimator . In the first, we obtain the radius by approximating the upper "a"-point of the sampling distribution of by the first two non-zero terms of its Taylor series about the origin. We can analyse some of the properties of this confidence set and see that it performs well in terms of coverage probability, volume and conditional behaviour. In the second method, we find the radius by using a parametric bootstrap procedure. Here, even greater improvement in terms of volume over the usual confidence set is possible, at the expense of having a less explicit radius function. A real data example is provided, and extensions to the unknown Covariance Matrix and elliptically symmetric cases are discussed. Copyright 2005 Royal Statistical Society.

  • Small confidence sets for the mean of a spherically symmetric distribution
    Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2005
    Co-Authors: Richard J. Samworth
    Abstract:

    Summary. Suppose that X has a k-variate spherically symmetric distribution with mean vector θ and Identity Covariance Matrix. We present two spherical confidence sets for θ, both centred at a positive part Stein estimator T_{S+}(X). In the first, we obtain the radius by approximating the upper α-point of the sampling distribution of ||T_{S+}(X) − θ||² by the first two non-zero terms of its Taylor series about the origin. We can analyse some of the properties of this confidence set and see that it performs well in terms of coverage probability, volume and conditional behaviour. In the second method, we find the radius by using a parametric bootstrap procedure. Here, even greater improvement in terms of volume over the usual confidence set is possible, at the expense of having a less explicit radius function. A real data example is provided, and extensions to the unknown Covariance Matrix and elliptically symmetric cases are discussed.

Nisheeth K Vishnoi - One of the best experts on this subject based on the ideXlab platform.

  • Faster polytope rounding, sampling, and volume computation via a sub-linear "Ball Walk"
    Foundations of Computer Science, 2019
    Co-Authors: Oren Mangoubi, Nisheeth K Vishnoi
    Abstract:

    This paper studies the problem of "isotropically rounding" a polytope K ⊆ R^n, that is, computing a linear transformation which makes the uniform distribution on the polytope have roughly Identity Covariance Matrix. It is assumed that K ⊆ R^n is defined by m linear inequalities. We introduce a new variant of the ball walk Markov chain and show that, roughly, the expected number of arithmetic operations per step of this Markov chain is O(m), which is sub-linear in the input size mn, the per-step time of all prior Markov chains. Subsequently, we apply this new variant of the ball walk to obtain a rounding algorithm that gives a factor of √n improvement in the number of arithmetic operations over the previous bound, which uses the hit-and-run algorithm. Since the cost of the rounding pre-processing step is in many cases the bottleneck in improving sampling or volume computation running time bounds, our results imply improved bounds for these tasks. Our algorithm achieves this improvement by a novel method of computing polytope membership, where one avoids checking inequalities which are estimated to have a very low probability of being violated. We believe that this method is likely to be of independent interest for constrained sampling and optimization problems.
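    The chain the paper modifies can be illustrated with a basic version. A minimal Python sketch, assuming K = {x : Ax ≤ b}: each step proposes a uniform point in a small ball and performs the naive O(mn) full membership check that the paper's sub-linear variant is designed to avoid; the function names are ours, not the authors'.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def ball_walk(A, b, x0, radius, n_steps):
        """Basic ball walk on K = {x : Ax <= b}: propose a uniform point
        in a ball of the given radius around the current point; accept
        only if it stays inside K.  Each membership check here scans all
        m inequalities (O(mn) work); the paper's variant skips constraints
        estimated to have a very low probability of being violated."""
        x = x0.copy()
        n = len(x0)
        for _ in range(n_steps):
            # Uniform proposal in the ball: random direction, radius ~ U^(1/n).
            d = rng.normal(size=n)
            d *= radius * rng.random() ** (1.0 / n) / np.linalg.norm(d)
            y = x + d
            if np.all(A @ y <= b):   # naive full membership check
                x = y
        return x

    # Example: the cube [-1, 1]^3 written as m = 6 linear inequalities.
    A = np.vstack([np.eye(3), -np.eye(3)])
    b = np.ones(6)
    x = ball_walk(A, b, np.zeros(3), radius=0.5, n_steps=1000)
    print(x)
    ```

    In this naive form every step touches all m rows of A, which is exactly the per-step cost the paper's membership method reduces.
    
    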

  • Faster polytope rounding, sampling, and volume computation via a sublinear "Ball Walk"
    arXiv: Data Structures and Algorithms, 2019
    Co-Authors: Oren Mangoubi, Nisheeth K Vishnoi
    Abstract:

    We study the problem of "isotropically rounding" a polytope $K\subset\mathbb{R}^n$, that is, computing a linear transformation which makes the uniform distribution on the polytope have roughly Identity Covariance Matrix. We assume $K$ is defined by $m$ linear inequalities, with the guarantee that $rB\subset K\subset RB$, where $B$ is the unit ball. We introduce a new variant of the ball walk Markov chain and show that, roughly, the expected number of arithmetic operations per step of this Markov chain is $O(m)$, which is sublinear in the input size $mn$, the per-step time of all prior Markov chains. Subsequently, we give a rounding algorithm that succeeds with probability $1-\varepsilon$ in $\tilde{O}(mn^{4.5}\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r}))$ arithmetic operations. This gives a factor of $\sqrt{n}$ improvement on the previous bound of $\tilde{O}(mn^5\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r}))$ for rounding, which uses the hit-and-run algorithm. Since the rounding preprocessing step is in many cases the bottleneck in improving sampling or volume computation, our results imply these tasks can also be achieved in roughly $\tilde{O}(mn^{4.5}\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r})+mn^4\delta^{-2})$ operations for computing the volume of $K$ up to a factor $1+\delta$ and $\tilde{O}(mn^{4.5}\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r}))$ for uniformly sampling on $K$ with TV error $\varepsilon$. This improves on the previous bounds of $\tilde{O}(mn^5\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r})+mn^4\delta^{-2})$ for volume computation when roughly $m\geq n^{2.5}$, and $\tilde{O}(mn^5\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r}))$ for sampling when roughly $m\geq n^{1.5}$. We achieve this improvement by a novel method of computing polytope membership, where one avoids checking inequalities estimated to have a very low probability of being violated.
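    Given approximately uniform samples from $K$, the rounding transformation itself is whitening by an inverse square root of the empirical covariance. A hedged sketch (names hypothetical; this is not the paper's algorithm, which also controls how the samples are produced):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def rounding_transform(samples):
        """Estimate the linear map T that makes the sampled distribution
        roughly isotropic: T = Sigma^{-1/2} for the empirical covariance
        Sigma, computed via an eigendecomposition."""
        centered = samples - samples.mean(axis=0)
        sigma = centered.T @ centered / len(samples)
        evals, evecs = np.linalg.eigh(sigma)
        return evecs @ np.diag(evals ** -0.5) @ evecs.T

    # Example: points uniform in a stretched box.  After centering and
    # applying T, the empirical covariance is close to the identity.
    pts = rng.uniform(-1, 1, size=(20000, 3)) * np.array([5.0, 1.0, 0.2])
    T = rounding_transform(pts)
    rounded = (pts - pts.mean(axis=0)) @ T.T
    print(np.cov(rounded.T))
    ```

    The expensive part in the paper's setting is generating the samples in the first place, which is why a cheaper per-step Markov chain improves the overall rounding bound.
    
    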

  • FOCS - Faster Polytope Rounding, Sampling, and Volume Computation via a Sub-Linear Ball Walk
    2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), 2019
    Co-Authors: Oren Mangoubi, Nisheeth K Vishnoi
    Abstract:

    This paper studies the problem of "isotropically rounding" a polytope K ⊆ R^n, that is, computing a linear transformation which makes the uniform distribution on the polytope have roughly Identity Covariance Matrix. It is assumed that K ⊆ R^n is defined by m linear inequalities. We introduce a new variant of the ball walk Markov chain and show that, roughly, the expected number of arithmetic operations per step of this Markov chain is O(m), which is sub-linear in the input size mn, the per-step time of all prior Markov chains. Subsequently, we apply this new variant of the ball walk to obtain a rounding algorithm that gives a factor of √n improvement in the number of arithmetic operations over the previous bound, which uses the hit-and-run algorithm. Since the cost of the rounding pre-processing step is in many cases the bottleneck in improving sampling or volume computation running time bounds, our results imply improved bounds for these tasks. Our algorithm achieves this improvement by a novel method of computing polytope membership, where one avoids checking inequalities which are estimated to have a very low probability of being violated. We believe that this method is likely to be of independent interest for constrained sampling and optimization problems.

Oren Mangoubi - One of the best experts on this subject based on the ideXlab platform.

  • Faster polytope rounding, sampling, and volume computation via a sub-linear "Ball Walk"
    Foundations of Computer Science, 2019
    Co-Authors: Oren Mangoubi, Nisheeth K Vishnoi
    Abstract:

    This paper studies the problem of "isotropically rounding" a polytope K ⊆ R^n, that is, computing a linear transformation which makes the uniform distribution on the polytope have roughly Identity Covariance Matrix. It is assumed that K ⊆ R^n is defined by m linear inequalities. We introduce a new variant of the ball walk Markov chain and show that, roughly, the expected number of arithmetic operations per step of this Markov chain is O(m), which is sub-linear in the input size mn, the per-step time of all prior Markov chains. Subsequently, we apply this new variant of the ball walk to obtain a rounding algorithm that gives a factor of √n improvement in the number of arithmetic operations over the previous bound, which uses the hit-and-run algorithm. Since the cost of the rounding pre-processing step is in many cases the bottleneck in improving sampling or volume computation running time bounds, our results imply improved bounds for these tasks. Our algorithm achieves this improvement by a novel method of computing polytope membership, where one avoids checking inequalities which are estimated to have a very low probability of being violated. We believe that this method is likely to be of independent interest for constrained sampling and optimization problems.

  • Faster polytope rounding, sampling, and volume computation via a sublinear "Ball Walk"
    arXiv: Data Structures and Algorithms, 2019
    Co-Authors: Oren Mangoubi, Nisheeth K Vishnoi
    Abstract:

    We study the problem of "isotropically rounding" a polytope $K\subset\mathbb{R}^n$, that is, computing a linear transformation which makes the uniform distribution on the polytope have roughly Identity Covariance Matrix. We assume $K$ is defined by $m$ linear inequalities, with the guarantee that $rB\subset K\subset RB$, where $B$ is the unit ball. We introduce a new variant of the ball walk Markov chain and show that, roughly, the expected number of arithmetic operations per step of this Markov chain is $O(m)$, which is sublinear in the input size $mn$, the per-step time of all prior Markov chains. Subsequently, we give a rounding algorithm that succeeds with probability $1-\varepsilon$ in $\tilde{O}(mn^{4.5}\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r}))$ arithmetic operations. This gives a factor of $\sqrt{n}$ improvement on the previous bound of $\tilde{O}(mn^5\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r}))$ for rounding, which uses the hit-and-run algorithm. Since the rounding preprocessing step is in many cases the bottleneck in improving sampling or volume computation, our results imply these tasks can also be achieved in roughly $\tilde{O}(mn^{4.5}\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r})+mn^4\delta^{-2})$ operations for computing the volume of $K$ up to a factor $1+\delta$ and $\tilde{O}(mn^{4.5}\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r}))$ for uniformly sampling on $K$ with TV error $\varepsilon$. This improves on the previous bounds of $\tilde{O}(mn^5\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r})+mn^4\delta^{-2})$ for volume computation when roughly $m\geq n^{2.5}$, and $\tilde{O}(mn^5\mbox{polylog}(\frac{1}{\varepsilon},\frac{R}{r}))$ for sampling when roughly $m\geq n^{1.5}$. We achieve this improvement by a novel method of computing polytope membership, where one avoids checking inequalities estimated to have a very low probability of being violated.

  • FOCS - Faster Polytope Rounding, Sampling, and Volume Computation via a Sub-Linear Ball Walk
    2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), 2019
    Co-Authors: Oren Mangoubi, Nisheeth K Vishnoi
    Abstract:

    This paper studies the problem of "isotropically rounding" a polytope K ⊆ R^n, that is, computing a linear transformation which makes the uniform distribution on the polytope have roughly Identity Covariance Matrix. It is assumed that K ⊆ R^n is defined by m linear inequalities. We introduce a new variant of the ball walk Markov chain and show that, roughly, the expected number of arithmetic operations per step of this Markov chain is O(m), which is sub-linear in the input size mn, the per-step time of all prior Markov chains. Subsequently, we apply this new variant of the ball walk to obtain a rounding algorithm that gives a factor of √n improvement in the number of arithmetic operations over the previous bound, which uses the hit-and-run algorithm. Since the cost of the rounding pre-processing step is in many cases the bottleneck in improving sampling or volume computation running time bounds, our results imply improved bounds for these tasks. Our algorithm achieves this improvement by a novel method of computing polytope membership, where one avoids checking inequalities which are estimated to have a very low probability of being violated. We believe that this method is likely to be of independent interest for constrained sampling and optimization problems.

Jan H. Van Schuppen - One of the best experts on this subject based on the ideXlab platform.

  • Characterization of Conditional Independence and Weak Realizations of Multivariate Gaussian Random Variables: Applications to Networks
    arXiv: Information Theory, 2020
    Co-Authors: Charalambos D. Charalambous, Jan H. Van Schuppen
    Abstract:

    The Gray and Wyner lossy source coding for a simple network for sources that generate a tuple of jointly Gaussian random variables (RVs) $X_1 : \Omega \rightarrow {\mathbb R}^{p_1}$ and $X_2 : \Omega \rightarrow {\mathbb R}^{p_2}$, with respect to square-error distortion at the two decoders, is re-examined using (1) Hotelling's geometric approach of Gaussian RVs (the canonical variable form), and (2) van Putten's and van Schuppen's parametrization of joint distributions ${\bf P}_{X_1, X_2, W}$ by Gaussian RVs $W : \Omega \rightarrow {\mathbb R}^n$ which make $(X_1,X_2)$ conditionally independent, and the weak stochastic realization of $(X_1, X_2)$. Item (2) is used to parametrize the lossy rate region of the Gray and Wyner source coding problem for joint decoding with mean-square error distortions ${\bf E}\big\{||X_i-\hat{X}_i||_{{\mathbb R}^{p_i}}^2 \big\}\leq \Delta_i \in [0,\infty], i=1,2$, by the Covariance Matrix of the RV $W$. It then follows that Wyner's common information $C_W(X_1,X_2)$ (information definition) is achieved by $W$ with Identity Covariance Matrix, while a formula for Wyner's lossy common information (operational definition) is derived, given by $C_{WL}(X_1,X_2)=C_W(X_1,X_2) = \frac{1}{2} \sum_{j=1}^n \ln \left( \frac{1+d_j}{1-d_j} \right)$, for the distortion region $0\leq \Delta_1 \leq \sum_{j=1}^n(1-d_j)$, $0\leq \Delta_2 \leq \sum_{j=1}^n(1-d_j)$, where $1 > d_1 \geq d_2 \geq \ldots \geq d_n > 0$ are {\em the canonical correlation coefficients} computed from the canonical variable form of the tuple $(X_1, X_2)$. The methods are of fundamental importance to other problems of multi-user communication, where conditional independence is imposed as a constraint.
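    The closed-form expression above depends only on the canonical correlation coefficients $d_1, \ldots, d_n$. As an illustrative sketch (not the authors' code; helper names are ours), the coefficients of a jointly Gaussian pair can be computed from the blocks of its covariance matrix, and $C_W$ then follows directly:

    ```python
    import numpy as np

    def wyner_common_information(d):
        """C_W = (1/2) * sum_j ln((1 + d_j) / (1 - d_j)) for canonical
        correlation coefficients 0 < d_j < 1."""
        d = np.asarray(d, dtype=float)
        return 0.5 * np.sum(np.log((1 + d) / (1 - d)))

    def canonical_correlations(S11, S22, S12):
        """Canonical correlation coefficients from the covariance blocks
        of (X1, X2): the singular values of S11^{-1/2} S12 S22^{-1/2}."""
        def inv_sqrt(S):
            w, v = np.linalg.eigh(S)
            return v @ np.diag(w ** -0.5) @ v.T
        return np.linalg.svd(inv_sqrt(S11) @ S12 @ inv_sqrt(S22),
                             compute_uv=False)

    # Example: a scalar pair with correlation 0.6, unit variances.
    S11 = np.array([[1.0]])
    S22 = np.array([[1.0]])
    S12 = np.array([[0.6]])
    d = canonical_correlations(S11, S22, S12)
    print(d, wyner_common_information(d))
    ```

    For the scalar example, $C_W = \frac{1}{2}\ln\frac{1.6}{0.4} = \ln 2$ nats, in line with the formula in the abstract.
    
    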

  • ISIT - Characterization of Conditional Independence and Weak Realizations of Multivariate Gaussian Random Variables: Applications to Networks
    2020 IEEE International Symposium on Information Theory (ISIT), 2020
    Co-Authors: Charalambos D. Charalambous, Jan H. Van Schuppen
    Abstract:

    The Gray and Wyner lossy source coding for a simple network for sources that generate a tuple of jointly Gaussian random variables (RVs) $X_1 : \Omega \rightarrow {\mathbb R}^{p_1}$ and $X_2 : \Omega \rightarrow {\mathbb R}^{p_2}$, with respect to square-error distortion at the two decoders, is re-examined using (1) Hotelling's geometric approach of Gaussian RVs (the canonical variable form), and (2) van Putten's and van Schuppen's parametrization of joint distributions ${\bf P}_{X_1, X_2, W}$ by Gaussian RVs $W : \Omega \rightarrow {\mathbb R}^n$ which make $(X_1,X_2)$ conditionally independent, and the weak stochastic realization of $(X_1, X_2)$. Item (2) is used to parametrize the lossy rate region of the Gray and Wyner source coding problem for joint decoding with mean-square error distortions ${\bf E}\big\{||X_i-\hat{X}_i||_{{\mathbb R}^{p_i}}^2 \big\}\leq \Delta_i \in [0,\infty], i=1,2$, by the Covariance Matrix of the RV $W$. It then follows that Wyner's common information $C_W(X_1,X_2)$ (information definition) is achieved by $W$ with Identity Covariance Matrix, while a formula for Wyner's lossy common information (operational definition) is derived, given by $C_{WL}(X_1,X_2)=C_W(X_1,X_2) = \frac{1}{2} \sum_{j=1}^n \ln \left( \frac{1+d_j}{1-d_j} \right)$, for the distortion region $0\leq \Delta_1 \leq \sum_{j=1}^n(1-d_j)$, $0\leq \Delta_2 \leq \sum_{j=1}^n(1-d_j)$, where $1 > d_1 \geq d_2 \geq \ldots \geq d_n > 0$ are the canonical correlation coefficients computed from the canonical variable form of the tuple $(X_1, X_2)$. The methods are of fundamental importance to other problems of multi-user communication, where conditional independence is imposed as a constraint.

Charalambos D. Charalambous - One of the best experts on this subject based on the ideXlab platform.

  • Characterization of Conditional Independence and Weak Realizations of Multivariate Gaussian Random Variables: Applications to Networks
    arXiv: Information Theory, 2020
    Co-Authors: Charalambos D. Charalambous, Jan H. Van Schuppen
    Abstract:

    The Gray and Wyner lossy source coding for a simple network for sources that generate a tuple of jointly Gaussian random variables (RVs) $X_1 : \Omega \rightarrow {\mathbb R}^{p_1}$ and $X_2 : \Omega \rightarrow {\mathbb R}^{p_2}$, with respect to square-error distortion at the two decoders, is re-examined using (1) Hotelling's geometric approach of Gaussian RVs (the canonical variable form), and (2) van Putten's and van Schuppen's parametrization of joint distributions ${\bf P}_{X_1, X_2, W}$ by Gaussian RVs $W : \Omega \rightarrow {\mathbb R}^n$ which make $(X_1,X_2)$ conditionally independent, and the weak stochastic realization of $(X_1, X_2)$. Item (2) is used to parametrize the lossy rate region of the Gray and Wyner source coding problem for joint decoding with mean-square error distortions ${\bf E}\big\{||X_i-\hat{X}_i||_{{\mathbb R}^{p_i}}^2 \big\}\leq \Delta_i \in [0,\infty], i=1,2$, by the Covariance Matrix of the RV $W$. It then follows that Wyner's common information $C_W(X_1,X_2)$ (information definition) is achieved by $W$ with Identity Covariance Matrix, while a formula for Wyner's lossy common information (operational definition) is derived, given by $C_{WL}(X_1,X_2)=C_W(X_1,X_2) = \frac{1}{2} \sum_{j=1}^n \ln \left( \frac{1+d_j}{1-d_j} \right)$, for the distortion region $0\leq \Delta_1 \leq \sum_{j=1}^n(1-d_j)$, $0\leq \Delta_2 \leq \sum_{j=1}^n(1-d_j)$, where $1 > d_1 \geq d_2 \geq \ldots \geq d_n > 0$ are {\em the canonical correlation coefficients} computed from the canonical variable form of the tuple $(X_1, X_2)$. The methods are of fundamental importance to other problems of multi-user communication, where conditional independence is imposed as a constraint.

  • ISIT - Characterization of Conditional Independence and Weak Realizations of Multivariate Gaussian Random Variables: Applications to Networks
    2020 IEEE International Symposium on Information Theory (ISIT), 2020
    Co-Authors: Charalambos D. Charalambous, Jan H. Van Schuppen
    Abstract:

    The Gray and Wyner lossy source coding for a simple network for sources that generate a tuple of jointly Gaussian random variables (RVs) $X_1 : \Omega \rightarrow {\mathbb R}^{p_1}$ and $X_2 : \Omega \rightarrow {\mathbb R}^{p_2}$, with respect to square-error distortion at the two decoders, is re-examined using (1) Hotelling's geometric approach of Gaussian RVs (the canonical variable form), and (2) van Putten's and van Schuppen's parametrization of joint distributions ${\bf P}_{X_1, X_2, W}$ by Gaussian RVs $W : \Omega \rightarrow {\mathbb R}^n$ which make $(X_1,X_2)$ conditionally independent, and the weak stochastic realization of $(X_1, X_2)$. Item (2) is used to parametrize the lossy rate region of the Gray and Wyner source coding problem for joint decoding with mean-square error distortions ${\bf E}\big\{||X_i-\hat{X}_i||_{{\mathbb R}^{p_i}}^2 \big\}\leq \Delta_i \in [0,\infty], i=1,2$, by the Covariance Matrix of the RV $W$. It then follows that Wyner's common information $C_W(X_1,X_2)$ (information definition) is achieved by $W$ with Identity Covariance Matrix, while a formula for Wyner's lossy common information (operational definition) is derived, given by $C_{WL}(X_1,X_2)=C_W(X_1,X_2) = \frac{1}{2} \sum_{j=1}^n \ln \left( \frac{1+d_j}{1-d_j} \right)$, for the distortion region $0\leq \Delta_1 \leq \sum_{j=1}^n(1-d_j)$, $0\leq \Delta_2 \leq \sum_{j=1}^n(1-d_j)$, where $1 > d_1 \geq d_2 \geq \ldots \geq d_n > 0$ are the canonical correlation coefficients computed from the canonical variable form of the tuple $(X_1, X_2)$. The methods are of fundamental importance to other problems of multi-user communication, where conditional independence is imposed as a constraint.