Additive Error

The Experts below are selected from a list of 32,955 Experts worldwide, ranked by the ideXlab platform

Dror Baron - One of the best experts on this subject based on the ideXlab platform.

  • Performance Limits With Additive Error Metrics in Noisy Multimeasurement Vector Problems
    IEEE Transactions on Signal Processing, 2018
    Co-Authors: Junan Zhu, Dror Baron
    Abstract:

    Real-world applications such as magnetic resonance imaging with multiple coils, multiuser communication, and diffuse optical tomography often assume a linear model, where several sparse signals sharing a common sparse support are acquired by several measurement matrices and then contaminated by noise. Multimeasurement vector (MMV) problems consider the estimation or reconstruction of such signals. In different applications, the estimation Error that we want to minimize could be the mean squared Error or another metric, such as the mean absolute Error or the support set Error. Because minimizing different Error metrics is useful in MMV problems, we study information-theoretic performance limits for MMV signal estimation with arbitrary Additive Error metrics. We also propose a message passing algorithmic framework that achieves the optimal performance, and rigorously prove the optimality of our algorithm for a special case. We further conjecture the optimality of our algorithm for some general cases and back the conjecture up with numerical examples. As an application of our MMV algorithm, we propose a novel setup for active user detection in multiuser communication and demonstrate the promise of this setup.
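
    To make the measurement model concrete, here is a minimal sketch of the MMV setup in Python. The dimensions, the shared Bernoulli-Gaussian signal model, and the i.i.d. Gaussian matrices are illustrative assumptions, not the paper's exact specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, J, k = 200, 80, 3, 10   # signal length, measurements per channel, channels, sparsity

    # J sparse signals sharing one common support (the defining MMV assumption)
    support = rng.choice(n, size=k, replace=False)
    X = np.zeros((n, J))
    X[support, :] = rng.standard_normal((k, J))

    # each signal is acquired by its own measurement matrix, then corrupted by noise
    A = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(J)]
    sigma = 0.05
    Y = [A[j] @ X[:, j] + sigma * rng.standard_normal(m) for j in range(J)]
    ```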

  • Signal Estimation with Additive Error Metrics in Compressed Sensing
    IEEE Transactions on Information Theory, 2014
    Co-Authors: Jin Tan, Danielle Carmon, Dror Baron
    Abstract:

    Compressed sensing typically deals with the estimation of a system input from its noise-corrupted linear measurements, where the number of measurements is smaller than the number of input components. The performance of the estimation process is usually quantified by some standard Error metric such as squared Error or support set Error. In this correspondence, we consider a noisy compressed sensing problem with any Additive Error metric. Under the assumption that the relaxed belief propagation method matches Tanaka's fixed point equation, we propose a general algorithm that estimates the original signal by minimizing the Additive Error metric defined by the user. The algorithm is a pointwise estimation process, and thus simple and fast. We verify that our algorithm is asymptotically optimal, and we describe a general method to compute the fundamental information-theoretic performance limit for any Additive Error metric. We provide several example metrics, and give the theoretical performance limits for these cases. Experimental results show that our algorithm outperforms methods such as relaxed belief propagation (relaxed BP) and compressive sampling matching pursuit (CoSaMP), and reaches the suggested theoretical limits for our example metrics.
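
    The pointwise character of the algorithm is easy to illustrate: once relaxed BP decouples the measurements, each component is estimated from an equivalent scalar Gaussian channel q = x + N(0, Δ), and the user's Additive Error metric is minimized one component at a time. The sketch below assumes a Bernoulli-Gaussian prior and a discretized posterior; these choices are illustrative, not the paper's.

    ```python
    import numpy as np

    # Scalar channel q = x + N(0, delta) that relaxed BP decouples the problem into.
    # Bernoulli-Gaussian prior, discretized on a grid (illustrative assumptions).
    eps, delta = 0.1, 0.05                # sparsity rate, effective noise variance
    grid = np.linspace(-5, 5, 2001)
    dx = grid[1] - grid[0]
    prior = eps * np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi) * dx   # N(0,1) slab
    prior[len(grid) // 2] += 1 - eps                               # point mass at x = 0
    prior /= prior.sum()

    def estimate(q, metric="squared"):
        post = prior * np.exp(-(q - grid)**2 / (2 * delta))        # unnormalized posterior
        post /= post.sum()
        if metric == "squared":           # posterior mean minimizes squared Error
            return float(np.sum(post * grid))
        if metric == "absolute":          # posterior median minimizes absolute Error
            return float(grid[np.searchsorted(np.cumsum(post), 0.5)])
        raise ValueError(metric)

    print(estimate(0.8, "squared"), estimate(0.8, "absolute"))
    ```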

  • Signal Estimation with Additive Error Metrics in Compressed Sensing
    IEEE Transactions on Information Theory, 2014
    Co-Authors: Jin Tan, Danielle Carmon, Dror Baron
    Abstract:

    Compressed sensing typically deals with the estimation of a system input from its noise-corrupted linear measurements, where the number of measurements is smaller than the number of input components. The performance of the estimation process is usually quantified by some standard Error metric such as squared Error or support set Error. In this correspondence, we consider a noisy compressed sensing problem with an arbitrary Error metric. We propose a simple, fast, and highly general algorithm that estimates the original signal by minimizing the Error metric defined by the user. We verify that our algorithm is optimal owing to the decoupling principle, and we describe a general method to compute the fundamental information-theoretic performance limit for any Error metric. We provide two example metrics, minimum mean absolute Error and minimum mean support Error, and give the theoretical performance limits for these two cases. Experimental results show that our algorithm outperforms methods such as relaxed belief propagation (relaxed BP) and compressive sampling matching pursuit (CoSaMP), and reaches the suggested theoretical limits for our two example metrics.
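
    For the support Error metric specifically, the decoupled pointwise rule has a closed form under a spike-and-slab prior: declare a component nonzero exactly when its posterior probability of being nonzero exceeds 1/2. A hedged sketch (the prior and channel parameters are assumptions, not the paper's):

    ```python
    import numpy as np

    # Pointwise support decision for the scalar channel q = x + N(0, delta):
    # under the slab x ~ N(0,1), q ~ N(0, 1 + delta); under the spike x = 0, q ~ N(0, delta).
    eps, delta = 0.1, 0.05            # illustrative sparsity rate and noise variance

    def in_support(q):
        slab = eps * np.exp(-q**2 / (2 * (1 + delta))) / np.sqrt(2 * np.pi * (1 + delta))
        spike = (1 - eps) * np.exp(-q**2 / (2 * delta)) / np.sqrt(2 * np.pi * delta)
        return slab / (slab + spike) > 0.5   # minimizes expected support set Error

    print(in_support(0.1), in_support(1.5))  # False, True
    ```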

Jiapeng Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Improved Noisy Population Recovery, and Reverse Bonami-Beckner Inequality for Sparse Functions
    Symposium on the Theory of Computing, 2015
    Co-Authors: Shachar Lovett, Jiapeng Zhang
    Abstract:

    The noisy population recovery problem is a basic statistical inference problem. Given an unknown distribution on {0,1}^n with support of size k, and given access only to noisy samples from it, where each bit is flipped independently with probability (1-μ)/2, estimate the original probability up to an Additive Error of ε. We give an algorithm which solves this problem in time polynomial in (k^{log log k}, n, 1/ε). This improves on the previous algorithm of Wigderson and Yehudayoff [FOCS 2012], which solves the problem in time polynomial in (k^{log k}, n, 1/ε). Our main technical contribution, which facilitates the algorithm, is a new reverse Bonami-Beckner inequality for the L1 norm of sparse functions.
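
    The noise model, and why the problem is hard, can be seen from the naive unbiased estimator, which inverts the bit-flip channel one coordinate at a time. The sketch below is illustrative only: its variance grows like (1/μ)^{2n}, which is exactly the blow-up the paper's k^{log log k}-time algorithm avoids; none of the constants or names come from the paper.

    ```python
    import numpy as np

    # Each observed bit equals the true bit w.p. (1+mu)/2. The weights below invert
    # the bit-flip channel, so the product over bits is an unbiased estimate of Pr[x].
    rng = np.random.default_rng(1)
    n, mu = 8, 0.6
    truth = np.array([1, 0, 1, 1, 0, 0, 0, 1])   # support of size 1 for simplicity

    samples = np.where(rng.random((200000, n)) < (1 + mu) / 2, truth, 1 - truth)

    def estimate_prob(x, samples, mu):
        agree = samples == x                       # per-bit agreement with query x
        w = np.where(agree, (1 + 1/mu) / 2, (1 - 1/mu) / 2)
        return w.prod(axis=1).mean()

    print(estimate_prob(truth, samples, mu))       # ~1.0 up to sampling error
    ```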

Tapati Basak - One of the best experts on this subject based on the ideXlab platform.

  • An Application of Non-Linear Cobb-Douglas Production Function to Selected Manufacturing Industries in Bangladesh
    Open Journal of Statistics, 2012
    Co-Authors: Md Moyazzem Hossain, Ajit Kumar Majumder, Tapati Basak
    Abstract:

    Recently, businessmen as well as industrialists have become very concerned with the theory of the firm, in order to make correct decisions about what items to produce, how much to produce, and how to produce them. All of these decisions are directly related to cost considerations and the market situation in which the firm operates. In this regard, this paper should be helpful in suggesting the most suitable functional form of the production process for the major manufacturing industries in Bangladesh. This paper considers the Cobb-Douglas (C-D) production function with Additive Error and multiplicative Error terms. The main purpose of this paper is to select the appropriate Cobb-Douglas production model for measuring the production process of some selected manufacturing industries in Bangladesh. We use different model selection criteria to compare the Cobb-Douglas production function with an Additive Error term to the Cobb-Douglas production function with a multiplicative Error term. Finally, we estimate the parameters of the production function by using an optimization subroutine.
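
    A hedged sketch of the comparison the paper performs: fit the multiplicative-Error specification by OLS on logs and the Additive-Error specification by nonlinear least squares, then compare fits. The data are synthetic, the residual-sum-of-squares comparison stands in for the paper's model selection criteria, and all names and values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Two Error specifications of the Cobb-Douglas model Y = A * K^a * L^b:
    #   multiplicative: Y = A K^a L^b * exp(u)  -> linear in logs, fit by OLS
    #   additive:       Y = A K^a L^b + u       -> fit by nonlinear least squares
    rng = np.random.default_rng(0)
    K = rng.uniform(50, 500, 100)
    L = rng.uniform(20, 200, 100)
    Y = 2.0 * K**0.4 * L**0.5 + rng.normal(0, 5, 100)   # generated with Additive Error

    # multiplicative-Error fit: regress log Y on log K and log L
    Xlog = np.column_stack([np.ones_like(K), np.log(K), np.log(L)])
    (logA, a_mult, b_mult), *_ = np.linalg.lstsq(Xlog, np.log(Y), rcond=None)

    # additive-Error fit: nonlinear least squares on the level equation
    f = lambda X, A, a, b: A * X[0]**a * X[1]**b
    (A_add, a_add, b_add), _ = curve_fit(f, (K, L), Y, p0=[1.0, 0.5, 0.5])

    # compare by residual sum of squares on the level scale
    rss_mult = np.sum((Y - np.exp(logA) * K**a_mult * L**b_mult)**2)
    rss_add = np.sum((Y - f((K, L), A_add, a_add, b_add))**2)
    print(rss_mult, rss_add)
    ```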

Rocco A Servedio - One of the best experts on this subject based on the ideXlab platform.

  • A Robust Khintchine Inequality and Algorithms for Computing Optimal Constants in Fourier Analysis and High-Dimensional Geometry
    International Colloquium on Automata Languages and Programming, 2013
    Co-Authors: Ilias Diakonikolas, Rocco A Servedio
    Abstract:

    This paper makes two contributions towards determining some well-studied optimal constants in Fourier analysis of Boolean functions and high-dimensional geometry. (1) It has been known since 1994 [GL94] that every linear threshold function has squared Fourier mass at least 1/2 on its degree-0 and degree-1 coefficients. Denote the minimum such Fourier mass by W^{≤1}[LTF], where the minimum is taken over all n-variable linear threshold functions and all n ≥ 0. Benjamini, Kalai and Schramm [BKS99] have conjectured that the true value of W^{≤1}[LTF] is 2/π. We make progress on this conjecture by proving that W^{≤1}[LTF] ≥ 1/2 + c for some absolute constant c > 0. The key ingredient in our proof is a "robust" version of the well-known Khintchine inequality in functional analysis, which we believe may be of independent interest. (2) We give an algorithm with the following property: given any η > 0, the algorithm runs in time 2^{poly(1/η)} and determines the value of W^{≤1}[LTF] up to an Additive Error of ±η. We give a similar 2^{poly(1/η)}-time algorithm to determine Tomaszewski's constant to within an Additive Error of ±η; this is the minimum (over all origin-centered hyperplanes H) fraction of points in {−1,1}^n that lie within Euclidean distance 1 of H. Tomaszewski's constant is conjectured to be 1/2; lower bounds on it have been given by Holzman and Kleitman [HK92] and independently by Ben-Tal, Nemirovski and Roos [BTNR02]. Our algorithms combine tools from anti-concentration of sums of independent random variables, Fourier analysis, and Hermite analysis of linear threshold functions.
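
    For intuition about the quantity W^{≤1}[LTF], here is a brute-force computation of the degree-≤1 Fourier mass of one small linear threshold function. This is a minimal sketch, not the authors' method: the paper's 2^{poly(1/η)} algorithm avoids exactly this exponential enumeration.

    ```python
    import numpy as np
    from itertools import product

    # Degree-<=1 Fourier mass of f(x) = sign(w.x - theta) over {-1,1}^n.
    # For majority on 3 bits the mass is 3/4, above the conjectured minimum 2/pi.
    def degree_le1_mass(w, theta=0.0):
        n = len(w)
        pts = np.array(list(product([-1, 1], repeat=n)))
        f = np.sign(pts @ np.asarray(w) - theta)
        assert np.all(f != 0), "choose w, theta so no point lies on the hyperplane"
        coeffs = [f.mean()] + [np.mean(f * pts[:, i]) for i in range(n)]
        return sum(c**2 for c in coeffs)

    print(degree_le1_mass([1, 1, 1]))   # majority of 3 -> 0.75
    ```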

  • A Robust Khintchine Inequality and Algorithms for Computing Optimal Constants in Fourier Analysis and High-Dimensional Geometry
    arXiv: Computational Complexity, 2012
    Co-Authors: Ilias Diakonikolas, Rocco A Servedio
    Abstract:

    This paper makes two contributions towards determining some well-studied optimal constants in Fourier analysis of Boolean functions and high-dimensional geometry. (1) It has been known since 1994 [GL94] that every linear threshold function has squared Fourier mass at least 1/2 on its degree-0 and degree-1 coefficients. Denote the minimum such Fourier mass by W^{≤1}[LTF], where the minimum is taken over all n-variable linear threshold functions and all n ≥ 0. Benjamini, Kalai and Schramm [BKS99] have conjectured that the true value of W^{≤1}[LTF] is 2/π. We make progress on this conjecture by proving that W^{≤1}[LTF] ≥ 1/2 + c for some absolute constant c > 0. The key ingredient in our proof is a "robust" version of the well-known Khintchine inequality in functional analysis, which we believe may be of independent interest. (2) We give an algorithm with the following property: given any η > 0, the algorithm runs in time 2^{poly(1/η)} and determines the value of W^{≤1}[LTF] up to an Additive Error of ±η. We give a similar 2^{poly(1/η)}-time algorithm to determine Tomaszewski's constant to within an Additive Error of ±η; this is the minimum (over all origin-centered hyperplanes H) fraction of points in {−1,1}^n that lie within Euclidean distance 1 of H. Tomaszewski's constant is conjectured to be 1/2; lower bounds on it have been given by Holzman and Kleitman [HK92] and independently by Ben-Tal, Nemirovski and Roos [BNR02]. Our algorithms combine tools from anti-concentration of sums of independent random variables, Fourier analysis, and Hermite analysis of linear threshold functions.
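
    Similarly, the quantity behind Tomaszewski's constant can be evaluated by brute force for one fixed direction w. The constant itself is the minimum over all directions, which this illustrative sketch does not compute:

    ```python
    import numpy as np
    from itertools import product

    # Fraction of hypercube points within Euclidean distance 1 of the origin-centered
    # hyperplane w.x = 0, i.e. points with |w.x| <= 1 after normalizing w.
    def near_fraction(w):
        w = np.asarray(w, dtype=float)
        w /= np.linalg.norm(w)
        pts = np.array(list(product([-1, 1], repeat=len(w))))
        return np.mean(np.abs(pts @ w) <= 1.0)

    print(near_fraction([1, 1]))   # w = (1,1)/sqrt(2) attains exactly 1/2
    ```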

Ely Porat - One of the best experts on this subject based on the ideXlab platform.

  • Improved Algorithms for Polynomial-Time Decay and Time-Decay with Additive Error
    Theory of Computing Systems, 2007
    Co-Authors: Tsvi Kopelowitz, Ely Porat
    Abstract:

    We consider the problem of maintaining polynomial and exponential decay aggregates of a data stream, where the weight of values seen from the stream diminishes as time elapses. These types of aggregation were discussed by Cohen and Strauss (J. Algorithms 59(1), 2006), and can be used in many applications in which the relative value of streaming data decreases since the time the data was seen. Recent work has developed space-efficient algorithms for time-decaying aggregations, in particular polynomial and exponential decaying aggregations. All of the work done so far has maintained multiplicative approximations of the aggregates. In this paper we present the first O(log N) space algorithm for polynomial decay under a multiplicative approximation, matching a lower bound. In addition, we explore and develop algorithms and lower bounds for approximations that allow an Additive Error in addition to the multiplicative Error. We show that in some cases, allowing an Additive Error can decrease the amount of space required, while in other cases we cannot do any better than a solution without Additive Error.
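
    The structural difference between the two decay classes is what drives the space bounds: exponential decay factorizes across time steps, so it can be maintained exactly in O(1) space, while polynomial decay does not factorize and must either store the stream or be approximated. A minimal sketch of that contrast (class names and the decay parameterization are illustrative assumptions):

    ```python
    class ExpDecaySum:
        """Exactly maintained exponentially decayed sum, O(1) space."""
        def __init__(self, gamma):
            self.gamma, self.total = gamma, 0.0
        def add(self, value):            # call once per time step
            self.total = self.total * self.gamma + value

    class NaivePolyDecaySum:
        """Polynomially decayed sum, weight (age + 1)^-alpha; O(N) space for contrast."""
        def __init__(self, alpha):
            self.alpha, self.items = alpha, []
        def add(self, value):
            self.items.append(value)
        def query(self):
            n = len(self.items)
            return sum(v / (n - i + 1) ** self.alpha
                       for i, v in enumerate(self.items, start=1))

    e, p = ExpDecaySum(gamma=0.9), NaivePolyDecaySum(alpha=1.0)
    for v in [5.0, 1.0, 3.0]:
        e.add(v); p.add(v)
    print(e.total, p.query())
    ```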

  • Improved Algorithms for Polynomial-Time Decay and Time-Decay with Additive Error
    Italian Conference on Theoretical Computer Science, 2005
    Co-Authors: Tsvi Kopelowitz, Ely Porat
    Abstract:

    We consider the problem of maintaining polynomial and exponential decay aggregates of a data stream, where the weight of values seen from the stream diminishes as time elapses. This type of aggregation was first introduced by Cohen and Strauss in [4]. These decay functions on streams are used in many applications in which the relative value of streaming data decreases since the time the data was seen. Recent work has developed space-efficient algorithms for time-decaying aggregations, in particular polynomial and exponential decaying aggregations. All of the work done so far has maintained multiplicative approximations of the aggregates. In this paper we present the first O(log N) space algorithm for polynomial decay under a multiplicative approximation, matching a lower bound. In addition, we explore and develop algorithms and lower bounds for approximations that allow an Additive Error in addition to the multiplicative Error. We show that in some cases, allowing an Additive Error can decrease the amount of space required, while in other cases we cannot do any better than a solution without Additive Error.
