Nonlinear Approximation


The Experts below are selected from a list of 324 Experts worldwide ranked by ideXlab platform

Licheng Jiao - One of the best experts on this subject based on the ideXlab platform.

Shan Tan - One of the best experts on this subject based on the ideXlab platform.

Reinhard Hochmuth - One of the best experts on this subject based on the ideXlab platform.

Ronald A Devore - One of the best experts on this subject based on the ideXlab platform.

  • Optimal Stable Nonlinear Approximation.
    arXiv: Numerical Analysis, 2020
    Co-Authors: Albert Cohen, Ronald A Devore, Guergana Petrova, Przemysław Wojtaszczyk
    Abstract:

    While it is well known that Nonlinear methods of Approximation can often perform dramatically better than linear methods, there are still questions on how to measure the optimal performance possible for such methods. This paper studies Nonlinear methods of Approximation that are compatible with numerical implementation in that they are required to be numerically stable. A measure of optimal performance, called {\em stable manifold widths}, for approximating a model class $K$ in a Banach space $X$ by stable manifold methods is introduced. Fundamental inequalities between these stable manifold widths and the entropy of $K$ are established. The effects of requiring stability in the settings of deep learning and compressed sensing are discussed.

  • Nonlinear Approximation and (Deep) ReLU Networks.
    arXiv: Learning, 2019
    Co-Authors: Ingrid Daubechies, Ronald A Devore, Simon Foucart, Boris Hanin, Guergana Petrova
    Abstract:

    This article is concerned with the Approximation and expressive powers of deep neural networks. This is an active research area currently producing many interesting papers. The results most commonly found in the literature prove that neural networks approximate functions with classical smoothness to the same accuracy as classical linear methods of Approximation, e.g. Approximation by polynomials or by piecewise polynomials on prescribed partitions. However, Approximation by neural networks depending on n parameters is a form of Nonlinear Approximation and as such should be compared with other Nonlinear methods such as variable-knot splines or n-term Approximation from dictionaries. The performance of neural networks in targeted applications such as machine learning indicates that they actually possess even greater Approximation power than these traditional methods of Nonlinear Approximation. The main results of this article prove that this is indeed the case. This is done by exhibiting large classes of functions which can be efficiently captured by neural networks where classical Nonlinear methods fall short of the task. The present article purposefully limits itself to studying the Approximation of univariate functions by ReLU networks. Many generalizations to functions of several variables and other activation functions can be envisioned. However, even in the simplest of settings considered here, a theory that completely quantifies the Approximation power of neural networks is still lacking.
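One concrete point behind the comparison in this abstract is that a one-hidden-layer ReLU network with n units can exactly reproduce any continuous piecewise-linear function with n free knots, i.e. a variable-knot linear spline. The sketch below illustrates this correspondence; the function names and the weight construction are illustrative, not taken from the paper:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_net_from_spline(knots, values):
    """Encode a continuous piecewise-linear function, given by its knots
    t_0 < ... < t_m and values v_0, ..., v_m, as a one-hidden-layer ReLU
    network f(x) = b + sum_i c_i * relu(x - t_i), valid for x >= t_0.
    Each hidden unit 'turns on' at one knot and contributes the change
    in slope there."""
    t = np.asarray(knots, dtype=float)
    v = np.asarray(values, dtype=float)
    s = np.diff(v) / np.diff(t)                  # slope on each interval
    c = np.concatenate([[s[0]], np.diff(s)])     # slope changes at the knots
    b = v[0]                                     # value at the left endpoint

    def net(x):
        # relu activations of (x - t_i) for every knot except the last
        return b + relu(np.subtract.outer(x, t[:-1])) @ c

    return net
```

Evaluating the network at the knots returns the prescribed values, which is exactly why width-n ReLU networks are at least as expressive as n-knot free-knot splines.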

  • Nonlinear Approximation and its applications
    Multiscale Nonlinear and Adaptive Approximation, 2009
    Co-Authors: Ronald A Devore
    Abstract:

    I first met Wolfgang Dahmen in 1974 in Oberwolfach. He looked like a high school student to me but he impressed everyone with his talk on whether polynomial operators could produce both polynomial and spectral orders of Approximation. We became the best of friends and frequent collaborators. While Wolfgang’s mathematical contributions spread across many disciplines, a major thread in his work has been the exploitation of Nonlinear Approximation. This article will reflect on Wolfgang’s pervasive contributions to the development of Nonlinear Approximation and its applications. Since many of the contributions in this volume will address specific application areas in some detail, my thoughts on these will be to a large extent anecdotal.

  • Restricted Nonlinear Approximation
    Constructive Approximation, 2000
    Co-Authors: Albert Cohen, Ronald A Devore, R. Hochmuth
    Abstract:

    We introduce a new form of Nonlinear Approximation called restricted Approximation. It is a generalization of n-term wavelet Approximation in which a weight function is used to control the terms in the wavelet expansion of the approximant. This form of Approximation occurs in statistical estimation and in the characterization of interpolation spaces for certain pairs of $L_p$ and Besov spaces. We characterize, both in terms of their wavelet coefficients and also in terms of their smoothness, the functions which are approximated with a specified rate by restricted Approximation. We also show the relation of this form of Approximation with certain types of thresholding of wavelet coefficients.
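The weight-controlled selection of terms described in this abstract can be sketched as a weighted thresholding rule: a coefficient is retained only when its weighted size clears a threshold. This is a minimal illustration under assumed conventions; the weight normalization and the names below are not the paper's:

```python
import numpy as np

def restricted_threshold(coeffs, weights, tau):
    """Weighted (restricted) hard thresholding: keep coefficient c_j of
    the wavelet expansion only when w_j * |c_j| > tau, so the weight
    function w controls which terms may enter the approximant.
    Returns the thresholded coefficients and the retention mask."""
    c = np.asarray(coeffs, dtype=float)
    w = np.asarray(weights, dtype=float)
    keep = w * np.abs(c) > tau
    return np.where(keep, c, 0.0), keep
```

With constant weights this reduces to ordinary n-term (hard) thresholding; a nonconstant weight biases the selection toward or away from particular scales or locations.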

  • Nonlinear Approximation and the Space BV(R2)
    American Journal of Mathematics, 1999
    Co-Authors: Albert Cohen, Ronald A Devore, Pencho Petrushev
    Abstract:

    Given a function $f \in L_2(I)$, $I := (0,1)^2$, and a real number $t > 0$, let $U(f,t) := \inf_{g \in \mathrm{BV}(I)} \|f-g\|^2_{L_2(I)} + t\,V_I(g)$, where the infimum is taken over all functions $g \in \mathrm{BV}$ of bounded variation on $I$. This and related extremal problems arise in several areas of mathematics such as interpolation of operators and statistical estimation, as well as in digital image processing. Techniques for finding minimizers $g$ for $U(f,t)$ based on variational calculus and Nonlinear partial differential equations have been put forward by several authors [DMS], [RO], [MS], [CL]. The main disadvantage of these approaches is that they are numerically intensive. On the other hand, it is well known that more elementary methods based on wavelet shrinkage solve related extremal problems, for example, the above problem with BV replaced by the Besov space $B^1_1(L_1(I))$ (see e.g. [CDLL]). However, since BV has no simple description in terms of wavelet coefficients, it is not clear that minimizers for $U(f,t)$ can be realized in this way. We shall show in this paper that simple methods based on Haar thresholding provide near minimizers for $U(f,t)$. Our analysis of this extremal problem brings forward many interesting relations between Haar decompositions and the space BV.

    1. Introduction. Nonlinear Approximation has recently played an important role in several problems of image processing including compression, noise removal, and feature extraction. We have in mind techniques such as wavelet compression [DJL], wavelet shrinkage or thresholding [DJKP1], wavelet packets [CW], and greedy algorithms [MZ], [DT]. There has also been an impressive contribution of techniques based on variational calculus and Nonlinear partial differential equations (see e.g. [DMS], [RO], [MS], [CL]), especially to the problems of noise removal and image segmentation. The common point between these two approaches is their ability to adapt to the composite nature of images: edges, textures, and smooth regions should be treated adaptively, a requirement which is certainly not fulfilled by the classical linear filtering techniques.
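The Haar-thresholding procedure that this abstract says yields near minimizers can be illustrated in one dimension (the paper itself works on the square and with the BV seminorm there); the sketch below is a hedged one-dimensional illustration, not the paper's algorithm, and all names are illustrative:

```python
import numpy as np

def haar(x):
    """Multilevel orthonormal Haar transform of a dyadic-length signal.
    Returns the coarsest scaling coefficient and the detail arrays,
    ordered coarse to fine."""
    x = np.asarray(x, dtype=float)
    details = []
    while x.size > 1:
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # local averages
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # local differences
        details.append(d)
        x = a
    return x[0], details[::-1]

def ihaar(a0, details):
    """Inverse of haar(): rebuild the signal from coarse to fine."""
    x = np.array([a0])
    for d in details:
        y = np.empty(2 * x.size)
        y[0::2] = (x + d) / np.sqrt(2.0)
        y[1::2] = (x - d) / np.sqrt(2.0)
        x = y
    return x

def haar_threshold(f, lam):
    """Hard-threshold the Haar detail coefficients at level lam and
    reconstruct; this kind of simple thresholding is what the paper
    shows gives near minimizers of the BV-penalized functional."""
    a0, details = haar(f)
    kept = [np.where(np.abs(d) > lam, d, 0.0) for d in details]
    return ihaar(a0, kept)
```

With a large threshold everything but the coarse average is discarded, while a threshold of zero reproduces the signal exactly, so the parameter plays the role of the penalty weight $t$ in the extremal problem.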

Xiangrong Zhang - One of the best experts on this subject based on the ideXlab platform.