Unknown Feature Vector

The experts below are selected from a list of 27 experts worldwide, as ranked by the ideXlab platform.

Eren C Kizildag - One of the best experts on this subject based on the ideXlab platform.

  • Inference in High-Dimensional Linear Regression via Lattice Basis Reduction and Integer Relation Detection
    arXiv: Statistics Theory, 2019
    Co-Authors: David Gamarnik, Eren C Kizildag, Ilias Zadik
    Abstract:

    We focus on the high-dimensional linear regression problem, where the algorithmic goal is to efficiently infer an unknown feature vector $\beta^*\in\mathbb{R}^p$ from its linear measurements, using a small number $n$ of samples. Unlike most of the literature, we make no sparsity assumption on $\beta^*$; instead, we adopt a different regularization: in the noiseless setting, we assume the entries of $\beta^*$ are either rational numbers with a common denominator $Q\in\mathbb{Z}^+$ (referred to as $Q$-rationality), or irrational numbers supported on a rationally independent set of bounded cardinality that is known to the learner; collectively, we call this the mixed-support assumption. Using a novel combination of the PSLQ integer relation detection and LLL lattice basis reduction algorithms, we propose a polynomial-time algorithm which provably recovers a $\beta^*\in\mathbb{R}^p$ satisfying the mixed-support assumption from its linear measurements $Y=X\beta^*\in\mathbb{R}^n$, for a large class of distributions of the random entries of $X$, even with one measurement $(n=1)$. In the noisy setting, we propose a polynomial-time, lattice-based algorithm which recovers a $\beta^*\in\mathbb{R}^p$ satisfying $Q$-rationality from its noisy measurements $Y=X\beta^*+W\in\mathbb{R}^n$, even with a single sample $(n=1)$. We further establish that, for large $Q$ and normal noise, this algorithm tolerates an information-theoretically optimal level of noise. We then apply these ideas to develop a polynomial-time, single-sample algorithm for the phase retrieval problem. Our methods address the single-sample $(n=1)$ regime, where sparsity-based methods such as LASSO and Basis Pursuit are known to fail. Furthermore, our results reveal an algorithmic connection between the high-dimensional linear regression problem and the integer relation detection, randomized subset-sum, and shortest vector problems.
    Comment: 56 pages. Parts of this material were presented at NeurIPS 2018 and at ISIT 2019. This submission subsumes the content of arXiv:1803.0671.
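
    A minimal sketch of the noiseless, single-measurement idea (n = 1), using the PSLQ implementation in Python's mpmath library: with $\beta^* = b/Q$ for an integer vector $b$, one exact measurement satisfies $\sum_i b_i x_i - QY = 0$, so $(b, -Q)$ is an integer relation among the reals $(x_1,\dots,x_p,Y)$, which PSLQ can detect. This only illustrates the mechanism, not the paper's full algorithm (which combines PSLQ with LLL and carries formal guarantees); all numeric choices below (Q, b, precision, coefficient bounds) are illustrative.

        # Recover a Q-rational beta* from one exact measurement Y = <x, beta*>.
        from mpmath import mp, mpf, pslq
        import random

        mp.dps = 60                    # PSLQ needs high working precision
        random.seed(0)

        Q = 7                          # common denominator (Q-rationality)
        b = [3, -2, 5, 1, -4]          # integer numerators: beta*_i = b_i / Q
        x = [mpf(random.random()) for _ in b]          # one measurement row
        Y = sum(xi * bi for xi, bi in zip(x, b)) / Q   # Y = <x, beta*>, n = 1

        # (b, -Q) is an integer relation among (x_1, ..., x_p, Y).
        rel = pslq(x + [Y], maxcoeff=10**4, maxsteps=10**4)
        Q_hat = -rel[-1]               # correct up to a global sign flip
        beta_hat = [mpf(c) / Q_hat for c in rel[:-1]]
        print(rel)                     # [3, -2, 5, 1, -4, -7] or its negation
        print(beta_hat)                # equals beta* in either case

    In the noisy setting no exact relation exists, which is why the paper's noisy-setting algorithm is lattice-based instead.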

  • High-Dimensional Linear Regression and Phase Retrieval via PSLQ Integer Relation Algorithm
    IEEE International Symposium on Information Theory (ISIT), 2019
    Co-Authors: David Gamarnik, Eren C Kizildag
    Abstract:

    We study the high-dimensional linear regression problem without a sparsity assumption and address the question of efficient recovery with a small number of measurements. We propose an algorithm which efficiently recovers an unknown feature vector β∗ ∈ ℝ^p from its linear measurements Y = Xβ∗ in polynomially many steps, with high probability (as p → ∞), even with a single measurement, provided the entries of β∗ are supported on a rationally independent set of size at most polynomial in p that is known to the learner. We use a combination of the PSLQ integer relation and LLL lattice basis reduction algorithms to achieve our goal. We then apply our ideas to develop an efficient, single-sample algorithm for the phase retrieval problem, where β∗ ∈ ℂ^p is to be recovered from magnitude-only observations Y = |⟨X, β∗⟩|.
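
    A hedged toy for the phase retrieval extension, in the simpler real-valued case: squaring the single magnitude removes the unknown sign and leaves the integer relation Q²Y² = Σᵢ bᵢ²xᵢ² + Σ_{i<j} 2bᵢbⱼxᵢxⱼ among the pairwise products xᵢxⱼ and Y², from which b is recovered up to the global sign that is inherently unidentifiable from magnitudes. This is an illustration only (the paper treats complex β∗ and proves guarantees); the assumption b₁ ≠ 0 and all constants are ours.

        # Real-valued toy: recover beta* = b/Q (up to global sign) from one
        # magnitude Y = |<x, beta*>| via PSLQ on pairwise products.
        from math import isqrt
        from mpmath import mp, mpf, pslq
        import random

        mp.dps = 60
        random.seed(1)

        Q, b = 7, [3, -2, 5]           # beta*_i = b_i / Q; assumes b_1 != 0
        p = len(b)
        x = [mpf(random.random()) for _ in range(p)]
        Y = abs(sum(xi * bi for xi, bi in zip(x, b))) / Q   # magnitude only

        # Q^2 Y^2 = sum_i b_i^2 x_i^2 + sum_{i<j} 2 b_i b_j x_i x_j
        terms = [x[i] * x[j] for i in range(p) for j in range(i, p)]
        rel = pslq(terms + [Y**2], maxcoeff=10**6, maxsteps=10**5)
        if rel[-1] > 0:                # normalize so the last entry is -Q^2
            rel = [-c for c in rel]

        # Unpack c_ij: b_i^2 on the diagonal, 2 b_i b_j off the diagonal.
        c, k = {}, 0
        for i in range(p):
            for j in range(i, p):
                c[(i, j)] = rel[k] if i == j else rel[k] // 2
                k += 1

        Q_hat = isqrt(-rel[-1])
        b_hat = [isqrt(c[(0, 0)])]     # pin the global sign via b_1 > 0
        b_hat += [c[(0, j)] // b_hat[0] for j in range(1, p)]
        print(b_hat, Q_hat)            # [3, -2, 5] and 7, up to global sign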

David Gamarnik - One of the best experts on this subject based on the ideXlab platform.

  • Inference in High-Dimensional Linear Regression via Lattice Basis Reduction and Integer Relation Detection
    arXiv: Statistics Theory, 2019
    Co-Authors: David Gamarnik, Eren C Kizildag, Ilias Zadik
    Abstract: see the identical abstract under Eren C Kizildag above.

  • High-Dimensional Linear Regression and Phase Retrieval via PSLQ Integer Relation Algorithm
    IEEE International Symposium on Information Theory (ISIT), 2019
    Co-Authors: David Gamarnik, Eren C Kizildag
    Abstract: see the identical abstract under Eren C Kizildag above.

Ilias Zadik - One of the best experts on this subject based on the ideXlab platform.

  • Inference in High-Dimensional Linear Regression via Lattice Basis Reduction and Integer Relation Detection
    arXiv: Statistics Theory, 2019
    Co-Authors: David Gamarnik, Eren C Kizildag, Ilias Zadik
    Abstract: see the identical abstract under Eren C Kizildag above.

Carl G. Looney - One of the best experts on this subject based on the ideXlab platform.

  • Fuzzy connectivity clustering with radial basis kernel functions
    Fuzzy Sets and Systems, 2009
    Co-Authors: Carl G. Looney
    Abstract:

    This method clusters data when the number of classes is unknown. We partition a data set by applying a Gaussian radial basis kernel function to pairs of feature vectors from a reduced sample, obtaining a fuzzy connectivity matrix whose entries are the fuzzy truths that the row-column vector pairs belong to the same class. To keep the matrix small when the data set is large, we first group the feature vectors into many small pre-clusters using a new robust similarity measure and take the pre-cluster centers as the reduced sample. We then map pairs of centers through the kernel function to form the fuzzy-valued entries of the connectivity matrix, from which we determine the classes and the number of classes. Afterward, when an unknown feature vector is input for recognition, we find its nearest pre-cluster center and assign that center's class to the unknown vector. We demonstrate the method first on a simple set of linearly nonseparable synthetic data to show how it works, then apply it to the well-known and difficult iris data, and also to the more substantial and noisier Wisconsin breast cancer data.
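
    A minimal sketch of this pipeline, under simplifying assumptions: the pre-clustering step and the robust similarity measure are omitted, and classes are read off by thresholding the connectivity matrix at an illustrative cut-off (0.5) and taking connected components, which is one plausible reading of determining the classes rather than Looney's exact procedure.

        # Fuzzy connectivity clustering with a Gaussian RBF kernel (toy).
        import numpy as np

        def fuzzy_connectivity_clusters(centers, sigma=1.0, cut=0.5):
            """Cluster pre-cluster centers via a thresholded fuzzy matrix."""
            d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            conn = np.exp(-d2 / (2.0 * sigma ** 2))  # fuzzy truths in (0, 1]
            adj = conn >= cut                        # crisp same-class relation
            labels = -np.ones(len(centers), dtype=int)
            k = 0
            for i in range(len(centers)):            # connected components
                if labels[i] < 0:
                    stack = [i]
                    while stack:
                        j = stack.pop()
                        if labels[j] < 0:
                            labels[j] = k
                            stack.extend(np.nonzero(adj[j])[0])
                    k += 1
            return labels

        def classify(v, centers, labels):
            """Assign an unknown vector the class of its nearest center."""
            return labels[np.argmin(((centers - v) ** 2).sum(-1))]

        centers = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
        labels = fuzzy_connectivity_clusters(centers)
        print(labels)                                # [0 0 1 1]: two classes
        print(classify(np.array([4.8, 5.2]), centers, labels))   # -> 1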

  • CAINE - A Simple Fuzzy Neural Network
    2007
    Co-Authors: Carl G. Looney, Sergiu M. Dascalu
    Abstract:

    Our simple fuzzy neural network first thins the set of exemplar input feature vectors, then centers a Gaussian function on each remaining one and saves its associated output label (target). Any unknown feature vector to be classified is passed through each Gaussian to obtain the fuzzy truth that it belongs to that center; the fuzzy truths over all Gaussian centers are then maximized, and the label of the winner is the class of the input feature vector. We use the knowledge in the exemplar-label pairs directly, with no training, no weights, no local minima, no epochs, no defuzzification, no overtraining, and no experience needed to use the network. It sets itself up automatically and then classifies all input feature vectors drawn from the same population as the exemplars. We compare our results on well-known data with those of several other fuzzy neural networks, which themselves compared favorably with other neural networks.
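
    Because there is no training loop, the whole classifier fits in a few lines. The sketch below omits the exemplar-thinning step and uses a single illustrative spread sigma for every Gaussian; both are our simplifications:

        # Training-free fuzzy classifier: one Gaussian per exemplar; an
        # unknown vector takes the label of the maximal fuzzy truth.
        import numpy as np

        def classify(v, exemplars, labels, sigma=1.0):
            truths = np.exp(-((exemplars - v) ** 2).sum(-1) / (2.0 * sigma ** 2))
            return labels[np.argmax(truths)]

        exemplars = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
        labels = np.array([0, 0, 1])
        print(classify(np.array([0.4, 0.2]), exemplars, labels))   # -> 0
        print(classify(np.array([4.6, 5.3]), exemplars, labels))   # -> 1

    With a common sigma, the Gaussian of maximal truth is simply the nearest exemplar, so classification matches nearest-neighbor; the fuzzy truths additionally provide a graded confidence for the winner.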