The experts below are selected from a list of 69,639 experts worldwide ranked by the ideXlab platform.
Stephen Mussmann - One of the best experts on this subject based on the ideXlab platform.
-
a tight analysis of greedy yields subexponential time Approximation for uniform decision tree
Symposium on Discrete Algorithms, 2020. Co-Authors: Percy Liang, Stephen Mussmann. Abstract: Decision Tree is a classic formulation of active learning: given n hypotheses with nonnegative weights summing to 1 and a set of tests that each partition the hypotheses, output a decision tree using the provided tests that uniquely identifies each hypothesis and has minimum (weighted) average depth. Previous works showed that the greedy algorithm achieves an O(log n) approximation ratio for this problem and that it is NP-hard to beat an O(log n) approximation, settling the complexity of the problem. However, for Uniform Decision Tree, i.e. Decision Tree with uniform weights, the story is more subtle. The greedy algorithm's O(log n) approximation ratio was the best known, but the largest approximation ratio known to be NP-hard is 4 − ε. We prove that the greedy algorithm gives an O(log n / log C_OPT) approximation for Uniform Decision Tree, where C_OPT is the cost of the optimal tree, and show this is best possible for the greedy algorithm. As a corollary, we resolve a conjecture of Kosaraju, Przytycka, and Borgstrom [20]. Our results also hold for instances of Decision Tree whose weights are not too far from uniform. Leveraging this result, for all α ∈ (0, 1), we exhibit a 9.01/α approximation algorithm for Uniform Decision Tree running in subexponential time 2^Õ(n^α). As a corollary, achieving any super-constant approximation ratio on Uniform Decision Tree is not NP-hard, assuming the Exponential Time Hypothesis. This work therefore adds approximating Uniform Decision Tree to a small list of natural problems that have subexponential time algorithms but no known polynomial time algorithms. Like the analysis of the greedy algorithm, our analysis of the subexponential time algorithm gives similar approximation guarantees even for slightly nonuniform weights. A key technical contribution of our work is showing a connection between greedy algorithms for Uniform Decision Tree and for Min Sum Set Cover.
-
a tight analysis of greedy yields subexponential time Approximation for uniform decision tree
arXiv: Data Structures and Algorithms, 2019. Co-Authors: Percy Liang, Stephen Mussmann. Abstract: Decision Tree is a classic formulation of active learning: given $n$ hypotheses with nonnegative weights summing to 1 and a set of tests that each partition the hypotheses, output a decision tree using the provided tests that uniquely identifies each hypothesis and has minimum (weighted) average depth. Previous works showed that the greedy algorithm achieves an $O(\log n)$ approximation ratio for this problem and that it is NP-hard to beat an $O(\log n)$ approximation, settling the complexity of the problem. However, for Uniform Decision Tree, i.e. Decision Tree with uniform weights, the story is more subtle. The greedy algorithm's $O(\log n)$ approximation ratio was the best known, but the largest approximation ratio known to be NP-hard is $4-\varepsilon$. We prove that the greedy algorithm gives an $O(\frac{\log n}{\log C_{OPT}})$ approximation for Uniform Decision Tree, where $C_{OPT}$ is the cost of the optimal tree, and show this is best possible for the greedy algorithm. As a corollary, we resolve a conjecture of Kosaraju, Przytycka, and Borgstrom. Leveraging this result, for all $\alpha\in(0,1)$, we exhibit a $\frac{9.01}{\alpha}$ approximation algorithm for Uniform Decision Tree running in subexponential time $2^{\tilde O(n^\alpha)}$. As a corollary, achieving any super-constant approximation ratio on Uniform Decision Tree is not NP-hard, assuming the Exponential Time Hypothesis. This work therefore adds approximating Uniform Decision Tree to a small list of natural problems that have subexponential time algorithms but no known polynomial time algorithms. All our results hold for Decision Tree with weights not too far from uniform. A key technical contribution of our work is showing a connection between greedy algorithms for Uniform Decision Tree and for Min Sum Set Cover.
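The greedy strategy these two abstracts analyze is simple enough to sketch. Below is a minimal Python illustration, assuming uniform weights; the function names (`build_greedy`, `average_depth`) and the tie-breaking rule (pick the test whose largest branch is smallest) are choices of this sketch, not taken from the paper.

```python
def build_greedy(hypotheses, tests):
    """Recursively build a decision tree, greedily choosing the test
    whose worst (largest) branch is smallest."""
    if len(hypotheses) <= 1:
        return {"leaf": hypotheses[0] if hypotheses else None}

    def worst_part(test):
        # Size of the largest part the test induces on the hypotheses.
        parts = {}
        for h in hypotheses:
            parts.setdefault(test(h), []).append(h)
        return max(len(p) for p in parts.values())

    best = min(tests, key=worst_part)
    parts = {}
    for h in hypotheses:
        parts.setdefault(best(h), []).append(h)
    if len(parts) == 1:  # no test distinguishes these hypotheses
        return {"leaf": hypotheses}
    return {"test": best,
            "children": {label: build_greedy(part, tests)
                         for label, part in parts.items()}}

def average_depth(tree, depth=0):
    """Return (total leaf depth, leaf count); their ratio is the
    uniform-weight cost the problem minimizes."""
    if "leaf" in tree:
        return depth, 1
    total, count = 0, 0
    for child in tree["children"].values():
        t, c = average_depth(child, depth + 1)
        total += t
        count += c
    return total, count
```

For example, with 8 hypotheses and the three bit-tests `h -> (h >> i) & 1`, the greedy tree identifies every hypothesis at depth 3, matching the information-theoretic optimum for this instance.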
Harold N Gabow - One of the best experts on this subject based on the ideXlab platform.
-
iterated rounding algorithms for the smallest k edge connected spanning subgraph
SIAM Journal on Computing, 2012. Co-Authors: Harold N Gabow, Suzanne Renick Gallagher. Abstract: We present the best known algorithms for approximating the minimum-size undirected $k$-edge connected spanning subgraph. For simple graphs our approximation ratio is $1+{1}/(2k)+O({1}/{k^2})$. The more precise version of this bound requires $k\ge 7$, and for all such $k$ it improves the long-standing performance ratio of Cheriyan and Thurimella [SIAM J. Comput., 30 (2000), pp. 528-560], $1+2/(k+1)$. The improvement comes in two steps. First we show that for simple $k$-edge connected graphs, any laminar family of degree $k$ sets is smaller than the general bound ($n(1+{3}/{k}+O(1/k\sqrt k))$ versus $2n$). This immediately implies that iterated rounding improves the performance ratio of Cheriyan and Thurimella. The second step carefully chooses good edges for rounding. For multigraphs our approximation ratio is $1+(21/11)k^{-1}$. This improves the previous ratio $1+2/k$ [H. N. Gabow, M. X. Goemans, E. Tardos, and D. P. Williamson, Networks, 53 (2009), pp. 345-357]. It is of interest since it is known that for some constant $c>0$, an approximation ratio $\le 1+c/k$ implies $P=NP$. Our approximation ratio extends to the minimum-size Steiner network problem, where $k$ denotes the average vertex demand. The algorithm exploits rounding properties of the first two linear programs in iterated rounding.
-
iterated rounding algorithms for the smallest k edge connected spanning subgraph
Symposium on Discrete Algorithms, 2008. Co-Authors: Harold N Gabow, Suzanne Renick Gallagher. Abstract: We present the best known algorithms for approximating the minimum cardinality undirected k-edge connected spanning subgraph. For simple graphs our approximation ratio is 1 + 1/(2k) + O(1/k²). The more precise version of our bound requires k ≥ 7, and for all such k it improves the longstanding bound of Cheriyan and Thurimella, 1 + 2/(k + 1) [2]. The improvement comes in two steps: First we show that for simple k-edge connected graphs, any laminar family of degree k sets is smaller than the general bound (n(1 + 3/k + O(1/k√k)) versus 2n). This immediately implies that iterated rounding improves the bound of [2]. Our second step improves iterated rounding by finding good edges for rounding. For multigraphs our approximation ratio is 1 + 21/(11k). It is of interest since it is known that for some constant c > 0, an approximation ratio ≤ 1 + c/k implies P = NP. Our approximation ratio extends to the minimum cardinality Steiner network problem, where k denotes the average vertex demand. The algorithm exploits rounding properties of the first two linear programs in iterated rounding.
-
better performance bounds for finding the smallest k edge connected spanning subgraph of a multigraph
Symposium on Discrete Algorithms, 2003. Co-Authors: Harold N Gabow. Abstract: Khuller and Raghavachari [12] present an approximation algorithm (the KR algorithm) for finding the smallest k-edge connected spanning subgraph (k-ECSS) of an undirected multigraph. They prove the KR algorithm has approximation ratio < 1.85. We prove the KR algorithm has approximation ratio ≤ 1 + √(1/e) < 1.61; for odd k this requires a minor modification of the algorithm. This is the best-known performance bound for the smallest k-ECSS problem for arbitrary k. Our analysis also gives the best-known performance bound for any fixed value of k ≥ 3, e.g., for even k the approximation ratio is ≤ 1 + (1 − 1/k)^(k/2). Our analysis is based on a laminar family of sets (similar to families used in related contexts) which gives a better accounting of edges added in previous iterations of the algorithm. We also present a polynomial time implementation of the KR algorithm on multigraphs, running in the time for O(nm) maximum flow computations, where n (m) is the number of vertices (edges, not counting parallel copies). This complements the implementation of [12], which uses time O((kn)²) and is efficient for small k.
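The implementations discussed in these abstracts are built on maximum flow computations. As a small, self-contained illustration of that subroutine (not the paper's algorithm), the sketch below checks the edge connectivity of an undirected multigraph by running a BFS-based max flow (Edmonds-Karp) from a fixed vertex to every other vertex; all names are illustrative.

```python
from collections import deque

def max_flow(cap, s, t, n):
    """Edmonds-Karp max flow on an n-vertex graph; cap[u][v] = capacity."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n):
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck along the path and push flow.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] += aug  # residual capacity
        flow += aug

def edge_connectivity(n, edges):
    """Global edge connectivity = min over t != 0 of maxflow(0, t).
    Each undirected edge gets capacity 1 in both directions; parallel
    edges accumulate, so multigraphs are handled."""
    best = float("inf")
    for t in range(1, n):
        cap = [[0] * n for _ in range(n)]
        for u, v in edges:
            cap[u][v] += 1
            cap[v][u] += 1
        best = min(best, max_flow(cap, 0, t, n))
    return best
```

A 4-cycle has edge connectivity 2, and K4 has edge connectivity 3; a k-ECSS algorithm would use a check like this to certify feasibility of a candidate subgraph.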
George Karakostas - One of the best experts on this subject based on the ideXlab platform.
-
a better Approximation Ratio for the vertex cover problem
International Colloquium on Automata, Languages and Programming, 2005. Co-Authors: George Karakostas. Abstract: We reduce the approximation factor for Vertex Cover to $2-\Theta(\frac{1}{\sqrt{\log n}})$ (instead of the previous $2-\Theta(\frac{\log\log n}{\log n})$, obtained by Bar-Yehuda and Even [3], and by Monien and Speckenmeyer [11]). The improvement of the vanishing factor comes as an application of the recent results of Arora, Rao, and Vazirani [2] that improved the approximation factor of the sparsest cut and balanced cut problems. In particular, we use the existence of two big and well-separated sets of nodes in the solution of the semidefinite relaxation for balanced cut, proven in [2]. We observe that a solution of the semidefinite relaxation for vertex cover, when strengthened with the triangle inequalities, can be transformed into a solution of a balanced cut problem, and therefore the existence of big well-separated sets in the sense of [2] translates into the existence of a big independent set.
-
a better Approximation Ratio for the vertex cover problem
Electronic Colloquium on Computational Complexity, 2004. Co-Authors: George Karakostas. Abstract: We reduce the approximation factor for the vertex cover to 2 − Θ(1/√log n) (instead of the previous 2 − ln ln n/(2 ln n) obtained by Bar-Yehuda and Even [1985] and Monien and Speckenmeyer [1985]). The improvement of the vanishing factor comes as an application of the recent results of Arora et al. [2004] that improved the approximation factor of the sparsest cut and balanced cut problems. In particular, we use the existence of two big and well-separated sets of nodes in the solution of the semidefinite relaxation for balanced cut, proven by Arora et al. [2004]. We observe that a solution of the semidefinite relaxation for vertex cover, when strengthened with the triangle inequalities, can be transformed into a solution of a balanced cut problem, and therefore the existence of big well-separated sets in the sense of Arora et al. [2004] translates into the existence of a big independent set.
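For context, the classical factor-2 baseline that these results improve below is the maximal-matching heuristic: take any maximal matching and put both endpoints of every matched edge in the cover. An optimal cover must contain at least one endpoint of each matched edge, so the output is at most twice optimal. A minimal sketch (function name illustrative):

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: for each edge not yet covered,
    add BOTH endpoints. Classical 2-approximation for Vertex Cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge still uncovered
            cover.update((u, v))
    return cover
```

On the path 0-1-2-3 it returns all four vertices while the optimum is {1, 2}, so the factor 2 is tight for this heuristic; the semidefinite-programming approach above is what pushes the factor strictly below 2.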
Percy Liang - One of the best experts on this subject based on the ideXlab platform.
-
a tight analysis of greedy yields subexponential time Approximation for uniform decision tree
Symposium on Discrete Algorithms, 2020. Co-Authors: Percy Liang, Stephen Mussmann. Abstract: Decision Tree is a classic formulation of active learning: given n hypotheses with nonnegative weights summing to 1 and a set of tests that each partition the hypotheses, output a decision tree using the provided tests that uniquely identifies each hypothesis and has minimum (weighted) average depth. Previous works showed that the greedy algorithm achieves an O(log n) approximation ratio for this problem and that it is NP-hard to beat an O(log n) approximation, settling the complexity of the problem. However, for Uniform Decision Tree, i.e. Decision Tree with uniform weights, the story is more subtle. The greedy algorithm's O(log n) approximation ratio was the best known, but the largest approximation ratio known to be NP-hard is 4 − ε. We prove that the greedy algorithm gives an O(log n / log C_OPT) approximation for Uniform Decision Tree, where C_OPT is the cost of the optimal tree, and show this is best possible for the greedy algorithm. As a corollary, we resolve a conjecture of Kosaraju, Przytycka, and Borgstrom [20]. Our results also hold for instances of Decision Tree whose weights are not too far from uniform. Leveraging this result, for all α ∈ (0, 1), we exhibit a 9.01/α approximation algorithm for Uniform Decision Tree running in subexponential time 2^Õ(n^α). As a corollary, achieving any super-constant approximation ratio on Uniform Decision Tree is not NP-hard, assuming the Exponential Time Hypothesis. This work therefore adds approximating Uniform Decision Tree to a small list of natural problems that have subexponential time algorithms but no known polynomial time algorithms. Like the analysis of the greedy algorithm, our analysis of the subexponential time algorithm gives similar approximation guarantees even for slightly nonuniform weights. A key technical contribution of our work is showing a connection between greedy algorithms for Uniform Decision Tree and for Min Sum Set Cover.
-
a tight analysis of greedy yields subexponential time Approximation for uniform decision tree
arXiv: Data Structures and Algorithms, 2019. Co-Authors: Percy Liang, Stephen Mussmann. Abstract: Decision Tree is a classic formulation of active learning: given $n$ hypotheses with nonnegative weights summing to 1 and a set of tests that each partition the hypotheses, output a decision tree using the provided tests that uniquely identifies each hypothesis and has minimum (weighted) average depth. Previous works showed that the greedy algorithm achieves an $O(\log n)$ approximation ratio for this problem and that it is NP-hard to beat an $O(\log n)$ approximation, settling the complexity of the problem. However, for Uniform Decision Tree, i.e. Decision Tree with uniform weights, the story is more subtle. The greedy algorithm's $O(\log n)$ approximation ratio was the best known, but the largest approximation ratio known to be NP-hard is $4-\varepsilon$. We prove that the greedy algorithm gives an $O(\frac{\log n}{\log C_{OPT}})$ approximation for Uniform Decision Tree, where $C_{OPT}$ is the cost of the optimal tree, and show this is best possible for the greedy algorithm. As a corollary, we resolve a conjecture of Kosaraju, Przytycka, and Borgstrom. Leveraging this result, for all $\alpha\in(0,1)$, we exhibit a $\frac{9.01}{\alpha}$ approximation algorithm for Uniform Decision Tree running in subexponential time $2^{\tilde O(n^\alpha)}$. As a corollary, achieving any super-constant approximation ratio on Uniform Decision Tree is not NP-hard, assuming the Exponential Time Hypothesis. This work therefore adds approximating Uniform Decision Tree to a small list of natural problems that have subexponential time algorithms but no known polynomial time algorithms. All our results hold for Decision Tree with weights not too far from uniform. A key technical contribution of our work is showing a connection between greedy algorithms for Uniform Decision Tree and for Min Sum Set Cover.
Hougardy Stefan - One of the best experts on this subject based on the ideXlab platform.
-
The Approximation Ratio of the 2-Opt Heuristic for the Euclidean Traveling Salesman Problem
2021. Co-Authors: Brodowsky, Ulrich A., Hougardy Stefan. Abstract: The 2-Opt heuristic is a simple improvement heuristic for the Traveling Salesman Problem. It starts with an arbitrary tour and then repeatedly replaces two edges of the tour by two other edges, as long as this yields a shorter tour. We will prove that for Euclidean Traveling Salesman Problems with $n$ cities the approximation ratio of the 2-Opt heuristic is $\Theta(\log n/\log\log n)$. This improves the upper bound of $O(\log n)$ given by Chandra, Karloff, and Tovey [3] in 1999. Comment: revised version, to appear in: 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021).
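The 2-Opt heuristic described in the abstract can be sketched in a few lines. The following illustrative Python version assumes Euclidean distances; the first-improvement scan order, the starting tour, and the function names are choices of this sketch, not from the paper.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def tour_length(tour, pts):
    """Total length of a closed tour over points pts."""
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts):
    """Start from an arbitrary tour; repeatedly replace two tour edges
    by two other edges while that shortens the tour (2-Opt moves)."""
    n = len(pts)
    tour = list(range(n))  # arbitrary starting tour
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # same edge pair; nothing to exchange
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Replace edges (a,b),(c,d) by (a,c),(b,d) if shorter;
                # the epsilon guards against float-tie infinite loops.
                if dist(pts[a], pts[c]) + dist(pts[b], pts[d]) < \
                   dist(pts[a], pts[b]) + dist(pts[c], pts[d]) - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

On the four corners of a unit square with a crossing start tour, one 2-Opt exchange uncrosses the tour and reaches the optimal length 4; the theorem above bounds how far such a local optimum can be from the global one in the worst case.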
-
The Approximation Ratio of the 2-Opt Heuristic for the Euclidean Traveling Salesman Problem
LIPIcs - Leibniz International Proceedings in Informatics, 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021), 2021. Co-Authors: Brodowsky, Ulrich A., Hougardy Stefan. Abstract: The 2-Opt heuristic is a simple improvement heuristic for the Traveling Salesman Problem. It starts with an arbitrary tour and then repeatedly replaces two edges of the tour by two other edges, as long as this yields a shorter tour. We will prove that for Euclidean Traveling Salesman Problems with n cities the approximation ratio of the 2-Opt heuristic is Θ(log n / log log n). This improves the upper bound of O(log n) given by Chandra, Karloff, and Tovey [Barun Chandra et al., 1999] in 1999.
-
The Approximation Ratio of the 2-Opt Heuristic for the Euclidean Traveling Salesman Problem
2020. Co-Authors: Brodowsky, Ulrich A., Hougardy Stefan. Abstract: The 2-Opt heuristic is a simple improvement heuristic for the Traveling Salesman Problem. It starts with an arbitrary tour and then repeatedly replaces two edges of the tour by two other edges, as long as this yields a shorter tour. We will prove that for Euclidean Traveling Salesman Problems with $n$ cities the approximation ratio of the 2-Opt heuristic is $\Theta(\log n/\log\log n)$.
-
The Approximation Ratio of the 2-Opt Heuristic for the Metric Traveling Salesman Problem
2020. Co-Authors: Hougardy Stefan, Zaiser Fabian, Zhong Xianghui. Abstract: The 2-Opt heuristic is one of the simplest algorithms for finding good solutions to the metric Traveling Salesman Problem. It is the key ingredient of the well-known Lin-Kernighan algorithm and is often used in practice. So far, only upper and lower bounds on the approximation ratio of the 2-Opt heuristic for the metric TSP were known. We prove that for the metric TSP with $n$ cities, the approximation ratio of the 2-Opt heuristic is $\sqrt{n/2}$ and that this bound is tight.