The experts below are drawn from a list of 360 experts worldwide, ranked by the ideXlab platform.
Elena Grigorescu  One of the best experts on this subject based on the ideXlab platform.

NP-Hardness of Reed–Solomon Decoding and the Prouhet–Tarry–Escott Problem
arXiv: Information Theory, 2016. Co-authors: Venkata Gandikota, Badih Ghazi, Elena Grigorescu. Abstract: Establishing the complexity of {\em Bounded Distance Decoding} for Reed–Solomon codes is a fundamental open problem in coding theory, explicitly asked by Guruswami and Vardy (IEEE Trans. Inf. Theory, 2005). The problem is motivated by the large current gap between the regime where it is NP-hard and the regime where it is efficiently solvable (i.e., the Johnson radius). We show the first NP-hardness results for asymptotically smaller decoding radii than the maximum-likelihood decoding radius of Guruswami and Vardy. Specifically, for Reed–Solomon codes of length $N$ and dimension $K=O(N)$, we show that it is NP-hard to decode more than $N-K-c\frac{\log N}{\log\log N}$ errors (with $c>0$ an absolute constant). Moreover, we show that the problem is NP-hard under quasi-polynomial-time reductions for an error amount $> N-K-c\log{N}$ (with $c>0$ an absolute constant). These results follow from the NP-hardness of a generalization of the classical Subset Sum problem to higher moments, called {\em Moments Subset Sum}, which has been a known open problem and which may be of independent interest. We further reveal a strong connection with the well-studied Prouhet–Tarry–Escott problem in number theory, which turns out to capture a main barrier to extending our techniques. We believe the Prouhet–Tarry–Escott problem deserves further study in the theoretical computer science community.
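The Prouhet–Tarry–Escott problem asks for two distinct multisets of integers whose power sums agree up to a given degree, which is exactly the kind of moment condition that the Moments Subset Sum problem generalizes. A minimal sketch in plain Python (the size-3 solution $\{1,5,6\}$, $\{2,3,7\}$ is classical; the helper names are illustrative, not from the paper):

```python
def power_sums(xs, k):
    """Return the power sums sum(x**j) for j = 1..k."""
    return [sum(x ** j for x in xs) for j in range(1, k + 1)]

def is_pte_solution(a, b, k):
    """Check whether multisets a and b agree on all power sums of degree 1..k."""
    return sorted(a) != sorted(b) and power_sums(a, k) == power_sums(b, k)

a, b = [1, 5, 6], [2, 3, 7]
print(power_sums(a, 2))          # [12, 62]
print(power_sums(b, 2))          # [12, 62]
print(is_pte_solution(a, b, 2))  # True
print(is_pte_solution(a, b, 3))  # False: cubes differ (342 vs. 378)
```

Agreement on the first two power sums while the sets themselves differ is what makes such pairs a barrier for subset-sum-style reductions on higher moments.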

NP-Hardness of Reed–Solomon Decoding and the Prouhet–Tarry–Escott Problem
Foundations of Computer Science (FOCS), 2016. Co-authors: Venkata Gandikota, Badih Ghazi, Elena Grigorescu. Abstract: Establishing the complexity of Bounded Distance Decoding for Reed–Solomon codes is a fundamental open problem in coding theory, explicitly asked by Guruswami and Vardy (IEEE Trans. Inf. Theory, 2005). The problem is motivated by the large current gap between the regime where it is NP-hard and the regime where it is efficiently solvable (i.e., the Johnson radius). We show the first NP-hardness results for asymptotically smaller decoding radii than the maximum-likelihood decoding radius of Guruswami and Vardy. Specifically, for Reed–Solomon codes of length N and dimension K = O(N), we show that it is NP-hard to decode more than N - K - O(log N / log log N) errors. Moreover, we show that the problem is NP-hard under quasi-polynomial-time reductions for an error amount > N - K - c log N (with c > 0 an absolute constant). An alternative natural reformulation of the Bounded Distance Decoding problem for Reed–Solomon codes is as a Polynomial Reconstruction problem. In this view, our results show that it is NP-hard to decide whether there exists a degree-K polynomial passing through K + O(log N / log log N) points from a given set of points (a1, b1), (a2, b2), ..., (aN, bN). Furthermore, it is NP-hard under quasi-polynomial-time reductions to decide whether there is a degree-K polynomial passing through K + c log N many points (with c > 0 an absolute constant). These results follow from the NP-hardness of a generalization of the classical Subset Sum problem to higher moments, called Moments Subset Sum, which has been a known open problem and which may be of independent interest. We further reveal a strong connection with the well-studied Prouhet–Tarry–Escott problem in number theory, which turns out to capture a main barrier to extending our techniques. We believe the Prouhet–Tarry–Escott problem deserves further study in the theoretical computer science community.
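The Polynomial Reconstruction view above asks whether some low-degree polynomial passes through sufficiently many of the given points. A brute-force sketch in plain Python (exponential time, for tiny illustrative inputs only; the parameterization "at least k + 1 + t points" and the helper names are mine, not the paper's) using exact rational Lagrange interpolation:

```python
from itertools import combinations
from fractions import Fraction

def interpolates(points, k):
    """True if one polynomial of degree <= k passes through all the points.
    Fits the first k+1 points by Lagrange interpolation, then checks the rest."""
    base, rest = points[:k + 1], points[k + 1:]
    def p(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(base):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(base):
                if i != j:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return all(p(x) == y for x, y in rest)

def reconstructible(points, k, t):
    """Brute force: is there a degree-<=k polynomial through >= k+1+t points?"""
    return any(interpolates(list(sub), k)
               for sub in combinations(points, k + 1 + t))

# y = x^2 passes through 4 of these 5 points; (3, 0) plays the role of an error.
pts = [(0, 0), (1, 1), (2, 4), (3, 0), (4, 16)]
print(reconstructible(pts, 2, 1))  # True
```

The subset enumeration is what the hardness results rule out doing efficiently in the stated error regimes; list-decoding algorithms handle only radii up to the Johnson bound.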
Venkata Gandikota  One of the best experts on this subject based on the ideXlab platform.

Badih Ghazi  One of the best experts on this subject based on the ideXlab platform.

Makrand Sinha  One of the best experts on this subject based on the ideXlab platform.

Lower Bounds for Approximating the Matching Polytope
Symposium on Discrete Algorithms (SODA), 2018. Co-authors: Makrand Sinha. Abstract: We prove that any linear program that approximates the matching polytope on n-vertex graphs up to a factor of (1 + ε) for any 2/n ≤ ε ≤ 1 must have at least (n choose α/ε) inequalities, where 0 < α < 1 is an absolute constant. This is tight, as exhibited by the (1 + ε)-approximating linear program obtained by dropping the odd-set constraints of size larger than (1 + ε)/ε from the description of the matching polytope. Previously, a tight lower bound of 2^Ω(n) was only known for ε = O(1/n) [22, 5], whereas for 2/n ≤ ε ≤ 1 the best lower bound was 2^Ω(1/ε) [22]. The key new ingredient in our proof is a close connection to the nonnegative rank of a lopsided version of the unique disjointness matrix.

Lower Bounds for Approximating the Matching Polytope
arXiv: Computational Complexity, 2017. Co-authors: Makrand Sinha. Abstract: We prove that any extended formulation that approximates the matching polytope on $n$-vertex graphs up to a factor of $(1+\varepsilon)$ for any $\frac2n \le \varepsilon \le 1$ must have at least $\binom{n}{{\alpha}/{\varepsilon}}$ defining inequalities, where $0<\alpha<1$ is an absolute constant. This is tight, as exhibited by the $(1+\varepsilon)$-approximating linear program obtained by dropping the odd-set constraints of size larger than $({1+\varepsilon})/{\varepsilon}$ from the description of the matching polytope. Previously, a tight lower bound of $2^{\Omega(n)}$ was only known for $\varepsilon = O\left(\frac{1}{n}\right)$ [Rothvoss, STOC '14; Braun and Pokutta, IEEE Trans. Information Theory '15], whereas for $\frac2n \le \varepsilon \le 1$ the best lower bound was $2^{\Omega\left({1}/{\varepsilon}\right)}$ [Rothvoss, STOC '14]. The key new ingredient in our proof is a close connection to the nonnegative rank of a lopsided version of the unique disjointness matrix.
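The odd-set constraints referenced above bound the edge weight a fractional matching may place inside any odd-size vertex set $S$ by $(|S|-1)/2$. A minimal sketch in plain Python (the triangle example and helper names are illustrative, not from the paper) that enumerates odd sets violated by a given fractional point:

```python
from itertools import combinations

def violated_odd_sets(n, x, max_size=None):
    """Return odd vertex sets S whose interior edge weight exceeds (|S|-1)/2.
    x maps edges (u, v) with u < v to fractional weights."""
    max_size = max_size or n
    bad = []
    for size in range(3, max_size + 1, 2):  # odd sets of size >= 3
        for S in combinations(range(n), size):
            inside = sum(w for (u, v), w in x.items() if u in S and v in S)
            if inside > (len(S) - 1) / 2 + 1e-9:
                bad.append(S)
    return bad

# Triangle with weight 1/2 per edge: every vertex sees total weight 1, so the
# degree constraints hold, but the odd set {0, 1, 2} allows only (3-1)/2 = 1 < 3/2.
x = {(0, 1): 0.5, (1, 2): 0.5, (0, 2): 0.5}
print(violated_odd_sets(3, x))  # [(0, 1, 2)]
```

Dropping all odd-set constraints of size above roughly $1/\varepsilon$, as the abstract describes, leaves a polynomially sized program that is still a $(1+\varepsilon)$-approximation.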
I G Shevtsova  One of the best experts on this subject based on the ideXlab platform.

On Non-Uniform Convergence Rate Estimates in the Central Limit Theorem
Theory of Probability and Its Applications, 2013. Co-authors: Yu. S. Nefedova, I. G. Shevtsova. Abstract: We sharpen the upper bounds for the absolute constant in non-uniform convergence rate estimates in the central limit theorem for sums of independent identically distributed random variables possessing absolute moments of order $2+\delta$ for some $0<\delta\le1$. In particular, we demonstrate that under the existence of the third moment this constant does not exceed $18.2$. We also show that the absolute constant in the estimates under consideration can be replaced by a function $C^*(x,\delta)$ of the argument $x$ of the difference between the pre-limit and the limit normal distribution functions, for which a positive, bounded, non-increasing majorant is found. Moreover, for $\delta=1$ this majorant is asymptotically exact (unimprovable) as $x\to\infty$ and sharpens the estimates due to Nikulin [preprint, arXiv:1004.0552v1 [math.ST], 2010] for all $x$. For the first time a similar result is obtained for the case $\delta\in(0,1)$. As a corollary, we obtain upper estimates for the Kolmogorov functions which ...
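For $\delta=1$, the non-uniform estimate discussed above takes the classical form $|F_n(x)-\Phi(x)|\le C\,\beta_3/(\sigma^3\sqrt{n}\,(1+|x|)^3)$, with $C\le 18.2$ per the abstract. A numerical sketch in plain Python, standard library only (the Bernoulli(1/2) example is my illustration, not from the paper), checking this bound pointwise for a standardized binomial sum:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def binom_cdf(n, k):
    """P(Binomial(n, 1/2) <= k), computed exactly."""
    if k < 0:
        return 0.0
    k = min(k, n)
    return sum(math.comb(n, j) for j in range(k + 1)) / 2 ** n

n = 50
sigma, beta3 = 0.5, 0.125  # Bernoulli(1/2): sigma = 1/2, E|X - mu|^3 = 1/8
for x in [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]:
    # F_n(x) = P((B - n/2) / (sigma * sqrt(n)) <= x) for B ~ Binomial(n, 1/2)
    fn = binom_cdf(n, math.floor(n / 2 + x * sigma * math.sqrt(n)))
    lhs = abs(fn - phi(x))
    rhs = 18.2 * beta3 / (sigma ** 3 * math.sqrt(n) * (1 + abs(x)) ** 3)
    print(f"x = {x:5.1f}   |F_n - Phi| = {lhs:.4f}   bound = {rhs:.4f}")
```

Unlike a uniform estimate, the right-hand side here shrinks like $(1+|x|)^{-3}$ in the tails, which is the point of the non-uniform formulation.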

On the Upper Bound for the Absolute Constant in the Berry–Esseen Inequality
Theory of Probability and Its Applications, 2010. Co-authors: Yu. V. Korolev, I. G. Shevtsova. Abstract: This paper describes the history of the search for unconditional and conditional upper bounds on the absolute constant in the Berry–Esseen inequality for sums of independent identically distributed random variables. Computational procedures are described. New estimates are presented from which it follows that the absolute constant in the classical Berry–Esseen inequality does not exceed 0.5129.
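The Berry–Esseen inequality bounds $\sup_x |F_n(x)-\Phi(x)| \le C\,\beta_3/(\sigma^3\sqrt{n})$ for the standardized sum of $n$ i.i.d. variables with third absolute moment $\beta_3$. A numerical sketch in plain Python (the Bernoulli(1/2) example is my illustration) comparing the exact sup-distance for a binomial sum against the bound with the constant $0.5129$ from the abstract:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sup_distance(n):
    """sup_x |F_n(x) - Phi(x)| for the standardized Binomial(n, 1/2) sum,
    evaluated on both sides of every jump point of the step function F_n."""
    sigma = 0.5
    cdf, worst = 0.0, 0.0
    for k in range(n + 1):
        z = (k - n / 2) / (sigma * math.sqrt(n))
        worst = max(worst, abs(cdf - phi(z)))  # just below the jump at k
        cdf += math.comb(n, k) / 2 ** n
        worst = max(worst, abs(cdf - phi(z)))  # just after the jump
    return worst

beta3, sigma = 0.125, 0.5  # Bernoulli(1/2), so beta3 / sigma^3 = 1
for n in [10, 100, 1000]:
    bound = 0.5129 * beta3 / (sigma ** 3 * math.sqrt(n))
    print(f"n = {n:5d}   sup|F_n - Phi| = {sup_distance(n):.5f}   bound = {bound:.5f}")
```

The symmetric Bernoulli case is a natural test because two-point distributions are the extremal candidates in the known lower bounds on the constant, so the gap between the computed distance and the bound stays modest.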

Sharpening of the Upper Bound of the Absolute Constant in the Berry–Esseen Inequality
Theory of Probability and Its Applications, 2007. Co-authors: I. G. Shevtsova. Abstract: The upper bound of the absolute constant in the classical Berry–Esseen inequality for sums of independent identically distributed random variables with finite third moments is lowered to $C\leqslant 0.7056$.