Dense Subset

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 15,198 Experts worldwide, ranked by the ideXlab platform

Philip Wellnitz - One of the best experts on this subject based on the ideXlab platform.

  • On Near-Linear Time Algorithms for Dense Subset Sum
    Symposium on Discrete Algorithms, 2021
    Co-Authors: Karl Bringmann, Philip Wellnitz
    Abstract:

    In the Subset Sum problem we are given a set of $n$ positive integers $X$ and a target $t$ and are asked whether some Subset of $X$ sums to $t$. Natural parameters for this problem that have been studied in the literature are $n$ and $t$ as well as the maximum input number $\mathrm{mx}_X$ and the sum of all input numbers $\Sigma_X$. In this paper we study the Dense case of Subset Sum, where all these parameters are polynomial in $n$. In this regime, standard pseudo-polynomial algorithms solve Subset Sum in polynomial time $n^{O(1)}$. Our main question is: When can Dense Subset Sum be solved in near-linear time $\tilde{O}(n)$? We provide an essentially complete dichotomy by designing improved algorithms and proving conditional lower bounds, thereby determining essentially all settings of the parameters $n, t, \mathrm{mx}_X, \Sigma_X$ for which Dense Subset Sum is in time $\tilde{O}(n)$. For notational convenience we assume without loss of generality that $t \ge \mathrm{mx}_X$ (as larger numbers can be ignored) and $t \le \Sigma_X/2$ (using symmetry). Then our dichotomy reads as follows:

    - By reviving and improving an additive-combinatorics-based approach by Galil and Margalit [SICOMP'91], we show that Subset Sum is in near-linear time $\tilde{O}(n)$ if $t \gg \mathrm{mx}_X \Sigma_X / n^2$.
    - We prove a matching conditional lower bound: if Subset Sum is in near-linear time for any setting with $t \ll \mathrm{mx}_X \Sigma_X / n^2$, then the Strong Exponential Time Hypothesis and the Strong k-Sum Hypothesis fail.

    We also generalize our algorithm from sets to multi-sets, albeit with non-matching upper and lower bounds.
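As background for the abstract above: the "standard pseudo-polynomial algorithms" it mentions are dynamic programs running in $O(n \cdot t)$ time, which is $n^{O(1)}$ exactly when $t$ is polynomial in $n$. A minimal sketch (function name and example values are illustrative, not from the paper):

```python
def subset_sum(X, t):
    """Pseudo-polynomial dynamic program for Subset Sum.

    reachable[s] is True iff some subset of the numbers processed so far
    sums to s. Runs in O(n * t) time and O(t) space -- polynomial in n
    when t is polynomial in n, i.e. in the dense regime discussed above.
    """
    reachable = [False] * (t + 1)
    reachable[0] = True  # the empty subset sums to 0
    for x in X:
        # iterate downwards so each element is used at most once
        for s in range(t, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[t]
```

For example, `subset_sum([3, 34, 4, 12, 5, 2], 9)` returns `True` (4 + 5 = 9). The near-linear-time algorithms in the paper improve on this baseline via additive combinatorics; the sketch shows only the standard starting point.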

Jesper Nederlof - One of the best experts on this subject based on the ideXlab platform.

  • Dense Subset Sum may be the hardest
    Symposium on Theoretical Aspects of Computer Science, 2016
    Co-Authors: Per Austrin, Petteri Kaski, Mikko Koivisto, Jesper Nederlof
    Abstract:

    The Subset SUM problem asks whether a given set of n positive integers contains a Subset of elements that sum up to a given target t. It is an outstanding open question whether the O^*(2^{n/2})-time algorithm for Subset SUM by Horowitz and Sahni [J. ACM 1974] can be beaten in the worst-case setting by a "truly faster", O^*(2^{(0.5-delta)*n})-time algorithm, with some constant delta > 0. Continuing an earlier work [STACS 2015], we study Subset SUM parameterized by the maximum bin size beta, defined as the largest number of Subsets of the n input integers that yield the same sum. For every epsilon > 0 we give a truly faster algorithm for instances with beta <= 2^{(0.5-epsilon)*n}, as well as instances with beta >= 2^{0.661n}. Consequently, we also obtain a characterization in terms of the popular density parameter n/log_2(t): if all instances of density at least 1.003 admit a truly faster algorithm, then so does every instance. This goes against the current intuition that instances of density 1 are the hardest, and therefore is a step toward answering the open question in the affirmative. Our results stem from a novel combinatorial analysis of mixings of earlier algorithms for Subset SUM and a study of an extremal question in additive combinatorics connected to the problem of Uniquely Decodable Code Pairs in information theory.
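The O^*(2^{n/2}) Horowitz-Sahni baseline referenced above is a meet-in-the-middle search. A minimal sketch of that classical idea (names and structure are illustrative, not the authors' implementation):

```python
def subset_sum_mitm(X, t):
    """Horowitz-Sahni meet-in-the-middle for Subset Sum.

    Enumerates all subset sums of each half of X (at most 2^{n/2} each)
    and checks whether one sum from each half adds up to t, which yields
    the O*(2^{n/2}) running time the abstract refers to.
    """
    half = len(X) // 2

    def all_sums(items):
        # all achievable subset sums of `items`
        sums = {0}
        for x in items:
            sums |= {s + x for s in sums}
        return sums

    right_sums = all_sums(X[half:])
    return any(t - s in right_sums for s in all_sums(X[:half]))
```

The open question above asks whether the exponent 0.5 in this approach can be beaten in the worst case; the paper's contribution is to show it can be beaten whenever the bin size beta is small or large enough.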

  • Dense Subset Sum may be the hardest
    arXiv: Data Structures and Algorithms, 2015
    Co-Authors: Per Austrin, Petteri Kaski, Mikko Koivisto, Jesper Nederlof
    Abstract:

    The Subset Sum problem asks whether a given set of $n$ positive integers contains a Subset of elements that sum up to a given target $t$. It is an outstanding open question whether the $O^*(2^{n/2})$-time algorithm for Subset Sum by Horowitz and Sahni [J. ACM 1974] can be beaten in the worst-case setting by a "truly faster", $O^*(2^{(0.5-\delta)n})$-time algorithm, with some constant $\delta > 0$. Continuing an earlier work [STACS 2015], we study Subset Sum parameterized by the maximum bin size $\beta$, defined as the largest number of Subsets of the $n$ input integers that yield the same sum. For every $\epsilon > 0$ we give a truly faster algorithm for instances with $\beta \leq 2^{(0.5-\epsilon)n}$, as well as instances with $\beta \geq 2^{0.661n}$. Consequently, we also obtain a characterization in terms of the popular density parameter $n/\log_2 t$: if all instances of density at least $1.003$ admit a truly faster algorithm, then so does every instance. This goes against the current intuition that instances of density 1 are the hardest, and therefore is a step toward answering the open question in the affirmative. Our results stem from novel combinations of earlier algorithms for Subset Sum and a study of an extremal question in additive combinatorics connected to the problem of Uniquely Decodable Code Pairs in information theory.
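The maximum bin size $\beta$ defined in the abstract can be computed directly for small instances. A brute-force sketch for intuition only (exponential time, so usable only for tiny inputs; names are illustrative):

```python
from collections import Counter
from itertools import chain, combinations

def max_bin_size(X):
    """Return beta: the largest number of subsets of X that share a sum.

    Enumerates all 2^n subsets, so this is only a didactic illustration
    of the parameter, not an efficient algorithm.
    """
    counts = Counter(
        sum(subset)
        for subset in chain.from_iterable(
            combinations(X, r) for r in range(len(X) + 1)
        )
    )
    return max(counts.values())
```

For instance, `max_bin_size([1, 2, 3])` is 2 (both {3} and {1, 2} sum to 3), while `max_bin_size([1, 2, 4])` is 1, since distinct powers of two give all-distinct subset sums.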

Constantin Tudor - One of the best experts on this subject based on the ideXlab platform.

  • On the Wiener Integral with Respect to a Sub-Fractional Brownian Motion on an Interval
    Journal of Mathematical Analysis and Applications, 2009
    Co-Authors: Constantin Tudor
    Abstract:

    The domain $\Lambda_{k,T}^{sf}$ of the Wiener integral with respect to a sub-fractional Brownian motion $(S_t^k)_{t \in [0,T]}$, $k \in (-\frac{1}{2}, \frac{1}{2})$, $k \neq 0$, is characterized. The set $\Lambda_{k,T}^{sf}$ is a Hilbert space which contains the class of elementary functions as a Dense Subset. If $k \in (-\frac{1}{2}, 0)$, any element of $\Lambda_{k,T}^{sf}$ is a function, and if $k \in (0, \frac{1}{2})$, the domain $\Lambda_{k,T}^{sf}$ is a space of distributions.

  • Inner Product Spaces of Integrands Associated to Subfractional Brownian Motion
    Statistics & Probability Letters, 2008
    Co-Authors: Constantin Tudor
    Abstract:

    We characterize the domain of the Wiener integral with respect to a subfractional Brownian motion $\{S^H(t)\}_{t \ge 0}$, $H \in (0,1)$, $H \neq \frac{1}{2}$. The domain is a Hilbert space which contains the class of elementary functions as a Dense Subset. If $0 < H < \frac{1}{2}$, any element of the domain is a function, and if $\frac{1}{2} < H < 1$, the domain is a space of distributions. The RKHS of $S^H$ is also determined.
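For background on the process studied in the two abstracts above: subfractional Brownian motion $S^H$ is a centered Gaussian process commonly defined (normalizations vary across the literature, so this exact form is an assumption about the convention used) by the covariance

```latex
\mathbb{E}\bigl[S^H(s)\, S^H(t)\bigr]
  = s^{2H} + t^{2H}
  - \tfrac{1}{2}\Bigl[(s+t)^{2H} + |t-s|^{2H}\Bigr],
  \qquad s, t \ge 0.
```

For $H = \frac{1}{2}$ this reduces to $\min(s,t)$, the covariance of standard Brownian motion, which is why that value of the parameter is excluded in both abstracts.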