The experts below are selected from a list of 24,405 experts worldwide ranked by the ideXlab platform.
Ashok Vardhan Makkuva - One of the best experts on this subject based on the ideXlab platform.
-
Equivalence of additive combinatorial linear inequalities for Shannon entropy and differential entropy
IEEE Transactions on Information Theory, 2018. Co-Authors: Ashok Vardhan Makkuva.
Abstract: This paper addresses the correspondence between linear inequalities for Shannon entropy and differential entropy for sums of independent group-valued random variables. We show that any balanced (with the sum of coefficients being zero) linear inequality for Shannon entropy holds if and only if its differential entropy counterpart also holds; moreover, any linear inequality for differential entropy must be balanced. In particular, our result shows that the differential entropy inequalities recently proved by Kontoyiannis and Madiman can be deduced from their discrete counterparts due to Tao in a unified manner. Generalizations to certain abelian groups are also obtained. Our proof that inequalities for Shannon entropy extend to differential entropy relies on a result of Rényi which relates the Shannon entropy of a finely discretized random variable to its differential entropy, and which also helps in establishing that the entropy of the sum of quantized random variables is asymptotically equal to that of the quantized sum; the converse uses the asymptotics of the differential entropy of convolutions with weak additive noise.
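The Rényi relation invoked in the proof can be illustrated numerically. The sketch below (not from the paper; the Exp(1) source and mesh sizes are illustrative choices) shows that if X_m quantizes X to a mesh of width 1/m, then H(X_m) − log m converges to the differential entropy h(X).

```python
import math

# Numerical sketch of Rényi's relation: H(X_m) - log m -> h(X) as m -> infinity,
# where X_m is X quantized to bins of width 1/m.
def quantized_entropy(pdf, m, hi):
    """Shannon entropy (nats) of X quantized to bins of width 1/m on [0, hi),
    with bin probabilities approximated by the midpoint rule."""
    H = 0.0
    for i in range(int(hi * m)):
        p = pdf((i + 0.5) / m) / m  # P(X in bin i), midpoint approximation
        if p > 0:
            H -= p * math.log(p)
    return H

exp_pdf = lambda x: math.exp(-x)  # Exp(1), whose differential entropy is h = 1 nat

for m in (10, 100, 1000):
    print(m, quantized_entropy(exp_pdf, m, hi=40.0) - math.log(m))  # tends to h = 1
```

The truncation at `hi=40.0` discards only a tail of mass about e^(−40), so the printed values approach 1 nat as the mesh is refined.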
-
On additive combinatorial affine inequalities for Shannon entropy and differential entropy
International Symposium on Information Theory, 2016. Co-Authors: Ashok Vardhan Makkuva.
Abstract: This paper addresses the extent to which discrete entropy inequalities for weighted sums of independent group-valued random variables continue to hold for differential entropies. We show that all balanced affine inequalities (with the sum of coefficients being zero) for Shannon entropy extend to differential entropy; conversely, any affine inequality for differential entropy must be balanced. In particular, this result recovers the differential entropy inequalities recently proved by Kontoyiannis and Madiman [1] from their discrete counterparts due to Tao [2] in a unified manner. Our proof relies on a result of Rényi which relates the Shannon entropy of a finely discretized random variable to its differential entropy, and which also helps in establishing that the entropy of the sum of quantized random variables is asymptotically equal to that of the quantized sum.
-
Equivalence of additive combinatorial linear inequalities for Shannon entropy and differential entropy
arXiv: Information Theory, 2016. Co-Authors: Ashok Vardhan Makkuva.
Abstract: This paper addresses the correspondence between linear inequalities for Shannon entropy and differential entropy for sums of independent group-valued random variables. We show that any balanced (with the sum of coefficients being zero) linear inequality for Shannon entropy holds if and only if its differential entropy counterpart also holds; moreover, any linear inequality for differential entropy must be balanced. In particular, our result shows that the differential entropy inequalities recently proved by Kontoyiannis and Madiman [KM14] can be deduced from their discrete counterparts due to Tao [Tao10] in a unified manner. Generalizations to certain abelian groups are also obtained. Our proof that inequalities for Shannon entropy extend to differential entropy relies on a result of Rényi [Renyi59] which relates the Shannon entropy of a finely discretized random variable to its differential entropy, and which also helps in establishing that the entropy of the sum of quantized random variables is asymptotically equal to that of the quantized sum; the converse uses the asymptotics of the differential entropy of convolutions with weak additive noise.
-
On additive combinatorial affine inequalities for Shannon entropy and differential entropy
arXiv: Information Theory, 2016. Co-Authors: Ashok Vardhan Makkuva.
Abstract: This paper addresses the extent to which discrete entropy inequalities for weighted sums of independent group-valued random variables continue to hold for differential entropies. We show that all balanced (with the sum of coefficients being zero) affine inequalities for Shannon entropy extend to differential entropy; conversely, any affine inequality for differential entropy must be balanced. In particular, this result shows that the differential entropy inequalities recently proved by Kontoyiannis and Madiman [KM14] can be deduced from their discrete counterparts due to Tao [Tao10] in a unified manner. Generalizations to certain abelian groups are also obtained. Our proof relies on a result of Rényi [Renyi59] which relates the Shannon entropy of a finely discretized random variable to its differential entropy, and which also helps in establishing that the entropy of the sum of quantized random variables is asymptotically equal to that of the quantized sum.
Michael T Woodside - One of the best experts on this subject based on the ideXlab platform.
-
Conformational Shannon entropy of mRNA structures from force spectroscopy measurements predicts the efficiency of −1 programmed ribosomal frameshift stimulation
Physical Review Letters, 2021. Co-Authors: Matthew T J Halma, Dustin B Ritchie, Michael T Woodside.
Abstract: −1 programmed ribosomal frameshifting (−1 PRF) is stimulated by structures in messenger RNA (mRNA), but the factors determining −1 PRF efficiency are unclear. We show that −1 PRF efficiency varies directly with the conformational heterogeneity of the stimulatory structure, quantified as the Shannon entropy of the state occupancy, for a panel of stimulatory structures with efficiencies from 2% to 80%. The correlation is force dependent and vanishes at forces above those applied by the ribosome. These results support the hypothesis that heterogeneous conformational dynamics are a key factor in stimulating −1 PRF.
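The quantity correlated with −1 PRF efficiency is simply the Shannon entropy of the structure's state-occupancy distribution. A minimal sketch, with hypothetical occupancy fractions (in the paper these come from force-spectroscopy trajectories):

```python
import math

# Conformational Shannon entropy of a state-occupancy distribution.
# The occupancy vectors below are illustrative, not measured values.
def occupancy_entropy(occupancies, base=2.0):
    """Shannon entropy (bits by default) of a state-occupancy distribution."""
    total = sum(occupancies)
    probs = [o / total for o in occupancies if o > 0]  # normalize, drop empty states
    return -sum(p * math.log(p, base) for p in probs)

# One dominant conformation: low conformational entropy.
print(occupancy_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits
# Heterogeneous ensemble over four states: high conformational entropy.
print(occupancy_entropy([0.4, 0.3, 0.2, 0.1]))      # ~1.85 bits
```

Under the paper's hypothesis, the second, more heterogeneous structure would be the stronger frameshift stimulator.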
-
Conformational Shannon entropy of mRNA structures from force spectroscopy measurements predicts the efficiency of −1 programmed ribosomal frameshift stimulation
bioRxiv, 2020. Co-Authors: Matthew T J Halma, Dustin B Ritchie, Michael T Woodside.
Abstract: −1 programmed ribosomal frameshifting (−1 PRF) is stimulated by structures in mRNA, but the factors determining −1 PRF efficiency are unclear. We show that −1 PRF efficiency varies directly with the conformational heterogeneity of the stimulatory structure, quantified as the Shannon entropy of the state occupancy, for a panel of stimulatory structures with efficiencies ranging from 2% to 80%. The correlation is force-dependent and vanishes at forces above those applied by the ribosome. This work supports the hypothesis that heterogeneous conformational dynamics are a key factor in stimulating −1 PRF.
Tsachy Weissman - One of the best experts on this subject based on the ideXlab platform.
-
Does Dirichlet prior smoothing solve the Shannon entropy estimation problem?
International Symposium on Information Theory, 2015. Co-Authors: Yanjun Han, Jiantao Jiao, Tsachy Weissman.
Abstract: The Dirichlet prior is widely used in estimating discrete distributions and functionals of discrete distributions. For Shannon entropy estimation, one approach is to plug the Dirichlet-prior-smoothed distribution into the entropy functional, while the other is to compute the Bayes estimator of entropy under the Dirichlet prior for squared error, which is the conditional expectation. We show that in general they do not improve over the maximum likelihood estimator, which plugs the empirical distribution into the entropy functional. No matter how the parameters of the Dirichlet prior are tuned, this approach cannot achieve the minimax rates of entropy estimation recently characterized by Jiao, Venkat, Han, and Weissman [1] and by Wu and Yang [2]. The performance of the minimax rate-optimal estimator with n samples is essentially at least as good as that of the Dirichlet-smoothed entropy estimators with n ln n samples. We harness the theory of approximation via positive linear operators to analyze the bias of plug-in estimators for general functionals under arbitrary statistical models, thereby further consolidating the interplay between these two fields, which was thoroughly exploited by Jiao, Venkat, Han, and Weissman [3] in estimating various functionals of discrete distributions. We establish new results in approximation theory and apply them to analyze the bias of the Dirichlet-prior-smoothed plug-in entropy estimator. This interplay between bias analysis and approximation theory is of relevance and consequence far beyond the specific problem setting of this paper.
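The two plug-in estimators contrasted in the abstract are easy to state concretely. The sketch below compares the MLE (empirical distribution plugged into the entropy functional) with the Dirichlet(α)-smoothed plug-in; α = 1 and the uniform source are illustrative choices, not taken from the paper.

```python
import math
import random

# MLE vs Dirichlet(alpha)-smoothed plug-in entropy estimators.
def entropy(probs):
    """Shannon entropy in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def plugin_estimates(counts, alpha=1.0):
    n, S = sum(counts), len(counts)
    mle = entropy([c / n for c in counts])                        # empirical plug-in
    smoothed = entropy([(c + alpha) / (n + alpha * S) for c in counts])
    return mle, smoothed

random.seed(0)
S, n = 1000, 1000                        # alphabet size comparable to sample size
counts = [0] * S
for _ in range(n):
    counts[random.randrange(S)] += 1     # i.i.d. draws from Uniform(S)

mle, smoothed = plugin_estimates(counts)
print(math.log(S), mle, smoothed)        # both plug-ins fall short of ln S here
```

For a uniform source, smoothing toward uniform naturally narrows the gap; the abstract's claim is stronger: no tuning of the Dirichlet parameters attains the minimax rate over all distributions.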
-
Adaptive Estimation of Shannon Entropy
arXiv: Information Theory, 2015. Co-Authors: Yanjun Han, Jiantao Jiao, Tsachy Weissman.
Abstract: We consider estimating the Shannon entropy of a discrete distribution $P$ from $n$ i.i.d. samples. Recently, Jiao, Venkat, Han, and Weissman, and Wu and Yang, constructed approximation-theoretic estimators that achieve the minimax $L_2$ rates for estimating entropy. Their estimators are consistent given $n \gg \frac{S}{\ln S}$ samples, where $S$ is the alphabet size, and this is the best possible sample complexity. In contrast, the maximum likelihood estimator (MLE), which is the empirical entropy, requires $n \gg S$ samples. In the present paper we significantly refine the minimax results of existing work. To alleviate the pessimism of minimaxity, we adopt the adaptive estimation framework and show that the minimax rate-optimal estimator of Jiao, Venkat, Han, and Weissman achieves the minimax rates simultaneously over a nested sequence of subsets of distributions $P$, without knowing the alphabet size $S$ or which subset $P$ lies in. In other words, their estimator is adaptive with respect to this nested sequence of the parameter space, which is characterized by the entropy of the distribution. We also characterize the maximum risk of the MLE over this nested sequence and show, for every subset in the sequence, that the performance of the minimax rate-optimal estimator with $n$ samples is essentially that of the MLE with $n \ln n$ samples, thereby further substantiating the generality of the phenomenon identified by Jiao, Venkat, Han, and Weissman.
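The $n \gg S$ requirement for the MLE can be seen in a small simulation. The sketch below (not from the paper; the uniform source is an illustrative choice) shows the empirical entropy underestimating the truth by roughly $(S-1)/(2n)$, the classical first-order bias behind the Miller-Madow correction, which is only small once $n$ is well beyond $S$.

```python
import math
import random

# Bias of the empirical (MLE) entropy versus the first-order prediction (S-1)/(2n).
def empirical_entropy(samples, S):
    """Plug-in (MLE) Shannon entropy, in nats, of i.i.d. samples over {0,...,S-1}."""
    counts = [0] * S
    for s in samples:
        counts[s] += 1
    n = len(samples)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

random.seed(1)
S = 200
truth = math.log(S)  # entropy of Uniform(S)
for n in (200, 2000, 20000):
    est = empirical_entropy([random.randrange(S) for _ in range(n)], S)
    print(n, truth - est, (S - 1) / (2 * n))  # actual bias vs first-order prediction
```

At $n = S$ the bias is severe (and larger than the first-order term); it shrinks roughly like $(S-1)/(2n)$ as $n$ grows, consistent with the MLE needing $n \gg S$ samples.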
-
Does Dirichlet prior smoothing solve the Shannon entropy estimation problem?
arXiv: Information Theory, 2015. Co-Authors: Yanjun Han, Jiantao Jiao, Tsachy Weissman.
Abstract: The Dirichlet prior is widely used in estimating discrete distributions and functionals of discrete distributions. For Shannon entropy estimation, one approach is to plug the Dirichlet-prior-smoothed distribution into the entropy functional, while the other is to compute the Bayes estimator of entropy under the Dirichlet prior for squared error, which is the conditional expectation. We show that in general they do not improve over the maximum likelihood estimator, which plugs the empirical distribution into the entropy functional. No matter how the parameters of the Dirichlet prior are tuned, this approach cannot achieve the minimax rates of entropy estimation recently characterized by Jiao, Venkat, Han, and Weissman and by Wu and Yang. The performance of the minimax rate-optimal estimator with $n$ samples is essentially at least as good as that of the Dirichlet-smoothed entropy estimators with $n \ln n$ samples. We harness the theory of approximation via positive linear operators to analyze the bias of plug-in estimators for general functionals under arbitrary statistical models, thereby further consolidating the interplay between these two fields, which was thoroughly developed and exploited by Jiao, Venkat, Han, and Weissman. We establish new results in approximation theory and apply them to analyze the bias of the Dirichlet-prior-smoothed plug-in entropy estimator. This interplay between bias analysis and approximation theory is of relevance and consequence far beyond the specific problem setting of this paper.
Matthew T J Halma - One of the best experts on this subject based on the ideXlab platform.
-
Conformational Shannon entropy of mRNA structures from force spectroscopy measurements predicts the efficiency of −1 programmed ribosomal frameshift stimulation
Physical Review Letters, 2021. Co-Authors: Matthew T J Halma, Dustin B Ritchie, Michael T Woodside.
Abstract: −1 programmed ribosomal frameshifting (−1 PRF) is stimulated by structures in messenger RNA (mRNA), but the factors determining −1 PRF efficiency are unclear. We show that −1 PRF efficiency varies directly with the conformational heterogeneity of the stimulatory structure, quantified as the Shannon entropy of the state occupancy, for a panel of stimulatory structures with efficiencies from 2% to 80%. The correlation is force dependent and vanishes at forces above those applied by the ribosome. These results support the hypothesis that heterogeneous conformational dynamics are a key factor in stimulating −1 PRF.
-
Conformational Shannon entropy of mRNA structures from force spectroscopy measurements predicts the efficiency of −1 programmed ribosomal frameshift stimulation
bioRxiv, 2020. Co-Authors: Matthew T J Halma, Dustin B Ritchie, Michael T Woodside.
Abstract: −1 programmed ribosomal frameshifting (−1 PRF) is stimulated by structures in mRNA, but the factors determining −1 PRF efficiency are unclear. We show that −1 PRF efficiency varies directly with the conformational heterogeneity of the stimulatory structure, quantified as the Shannon entropy of the state occupancy, for a panel of stimulatory structures with efficiencies ranging from 2% to 80%. The correlation is force-dependent and vanishes at forces above those applied by the ribosome. This work supports the hypothesis that heterogeneous conformational dynamics are a key factor in stimulating −1 PRF.
Yanjun Han - One of the best experts on this subject based on the ideXlab platform.
-
Does Dirichlet prior smoothing solve the Shannon entropy estimation problem?
International Symposium on Information Theory, 2015. Co-Authors: Yanjun Han, Jiantao Jiao, Tsachy Weissman.
Abstract: The Dirichlet prior is widely used in estimating discrete distributions and functionals of discrete distributions. For Shannon entropy estimation, one approach is to plug the Dirichlet-prior-smoothed distribution into the entropy functional, while the other is to compute the Bayes estimator of entropy under the Dirichlet prior for squared error, which is the conditional expectation. We show that in general they do not improve over the maximum likelihood estimator, which plugs the empirical distribution into the entropy functional. No matter how the parameters of the Dirichlet prior are tuned, this approach cannot achieve the minimax rates of entropy estimation recently characterized by Jiao, Venkat, Han, and Weissman [1] and by Wu and Yang [2]. The performance of the minimax rate-optimal estimator with n samples is essentially at least as good as that of the Dirichlet-smoothed entropy estimators with n ln n samples. We harness the theory of approximation via positive linear operators to analyze the bias of plug-in estimators for general functionals under arbitrary statistical models, thereby further consolidating the interplay between these two fields, which was thoroughly exploited by Jiao, Venkat, Han, and Weissman [3] in estimating various functionals of discrete distributions. We establish new results in approximation theory and apply them to analyze the bias of the Dirichlet-prior-smoothed plug-in entropy estimator. This interplay between bias analysis and approximation theory is of relevance and consequence far beyond the specific problem setting of this paper.
-
Adaptive Estimation of Shannon Entropy
arXiv: Information Theory, 2015. Co-Authors: Yanjun Han, Jiantao Jiao, Tsachy Weissman.
Abstract: We consider estimating the Shannon entropy of a discrete distribution $P$ from $n$ i.i.d. samples. Recently, Jiao, Venkat, Han, and Weissman, and Wu and Yang, constructed approximation-theoretic estimators that achieve the minimax $L_2$ rates for estimating entropy. Their estimators are consistent given $n \gg \frac{S}{\ln S}$ samples, where $S$ is the alphabet size, and this is the best possible sample complexity. In contrast, the maximum likelihood estimator (MLE), which is the empirical entropy, requires $n \gg S$ samples. In the present paper we significantly refine the minimax results of existing work. To alleviate the pessimism of minimaxity, we adopt the adaptive estimation framework and show that the minimax rate-optimal estimator of Jiao, Venkat, Han, and Weissman achieves the minimax rates simultaneously over a nested sequence of subsets of distributions $P$, without knowing the alphabet size $S$ or which subset $P$ lies in. In other words, their estimator is adaptive with respect to this nested sequence of the parameter space, which is characterized by the entropy of the distribution. We also characterize the maximum risk of the MLE over this nested sequence and show, for every subset in the sequence, that the performance of the minimax rate-optimal estimator with $n$ samples is essentially that of the MLE with $n \ln n$ samples, thereby further substantiating the generality of the phenomenon identified by Jiao, Venkat, Han, and Weissman.
-
Does Dirichlet prior smoothing solve the Shannon entropy estimation problem?
arXiv: Information Theory, 2015. Co-Authors: Yanjun Han, Jiantao Jiao, Tsachy Weissman.
Abstract: The Dirichlet prior is widely used in estimating discrete distributions and functionals of discrete distributions. For Shannon entropy estimation, one approach is to plug the Dirichlet-prior-smoothed distribution into the entropy functional, while the other is to compute the Bayes estimator of entropy under the Dirichlet prior for squared error, which is the conditional expectation. We show that in general they do not improve over the maximum likelihood estimator, which plugs the empirical distribution into the entropy functional. No matter how the parameters of the Dirichlet prior are tuned, this approach cannot achieve the minimax rates of entropy estimation recently characterized by Jiao, Venkat, Han, and Weissman and by Wu and Yang. The performance of the minimax rate-optimal estimator with $n$ samples is essentially at least as good as that of the Dirichlet-smoothed entropy estimators with $n \ln n$ samples. We harness the theory of approximation via positive linear operators to analyze the bias of plug-in estimators for general functionals under arbitrary statistical models, thereby further consolidating the interplay between these two fields, which was thoroughly developed and exploited by Jiao, Venkat, Han, and Weissman. We establish new results in approximation theory and apply them to analyze the bias of the Dirichlet-prior-smoothed plug-in entropy estimator. This interplay between bias analysis and approximation theory is of relevance and consequence far beyond the specific problem setting of this paper.