
Band-Limited Function

The experts below are selected from a list of 96 experts worldwide, ranked by the ideXlab platform

Jianfeng Weng – 1st expert on this subject based on the ideXlab platform

  • A New One-Step Band-Limited Extrapolation Procedure Using Empirical Orthogonal Functions
    2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, 2006
    Co-Authors: Jianfeng Weng

    Abstract:

    A one-step band-limited extrapolation procedure is systematically developed under an a priori assumption of bandwidth. The rationale of the proposed scheme is to expand the known signal segment over a band-limited basis function set and then to generate a set of empirical orthogonal functions (EOFs) adaptively from the sample values of the band-limited function set. Simulation results indicate that, in addition to the attractive adaptive feature, this scheme also appears to guarantee a smooth result for inexact data, suggesting the robustness of the proposed procedure.
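    As a rough illustration of the general idea (not the authors' code), the sketch below builds a sinc basis for an assumed bandwidth, derives empirical orthogonal functions from the basis samples on the known segment via an SVD, and extrapolates by expanding the known samples in that basis. The bandwidth, grid, and all variable names are illustrative assumptions:

    ```python
    import numpy as np

    W = 0.1                      # assumed one-sided bandwidth (cycles/sample)
    t_full = np.arange(-50, 51)  # grid on which we want the extrapolated signal
    t_known = t_full[40:61]      # known finite segment (21 samples)

    true = np.sinc(2 * W * t_full)   # a band-limited test signal
    x_known = true[40:61]

    # Band-limited basis: sinc functions centred on the full grid, sampled
    # on the known segment and on the full grid respectively
    B_known = np.sinc(2 * W * (t_known[:, None] - t_full[None, :]))
    B_full = np.sinc(2 * W * (t_full[:, None] - t_full[None, :]))

    # Empirical orthogonal functions: SVD of the basis restricted to the
    # known segment; keep only numerically significant components
    U, s, Vt = np.linalg.svd(B_known, full_matrices=False)
    k = int(np.sum(s > 1e-6 * s[0]))
    coef = Vt[:k].T @ ((U[:, :k].T @ x_known) / s[:k])

    x_ext = B_full @ coef        # extrapolated signal on the full grid
    ```

    The SVD truncation is what makes the expansion adaptive: directions with negligible singular values carry almost no information about the segment and are dropped rather than inverted.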

  • Reconstructing a Band-Limited Function Using Empirical Orthogonal Functions
    2006 IEEE Instrumentation and Measurement Technology Conference Proceedings, 2006
    Co-Authors: Jianfeng Weng

    Abstract:

    A reconstruction procedure for a band-limited function from its samples taken in a finite segment is systematically developed under an a priori assumption of bandwidth. The rationale of the proposed scheme is to expand the known signal segment over a band-limited basis function set and then to generate a set of empirical orthogonal functions (EOFs) adaptively from the sample values of the band-limited function set. Simulation results indicate that, in addition to the attractive adaptive feature, this scheme also appears to guarantee a smooth result for inexact data, suggesting the robustness of the proposed procedure.
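    The "smooth result for inexact data" claim can be illustrated with a small sketch (again an assumption-laden illustration, not the paper's method): noisy samples on a finite segment are fit in a sinc basis, and a hard singular-value truncation keeps the ill-conditioned directions from amplifying the noise:

    ```python
    import numpy as np

    W = 0.1                      # assumed bandwidth (cycles/sample)
    t_full = np.arange(-50, 51)
    t_known = t_full[40:61]      # finite observation segment

    clean = np.sinc(2 * W * t_known)
    noisy = clean + 0.01 * np.random.default_rng(1).standard_normal(21)

    B = np.sinc(2 * W * (t_known[:, None] - t_full[None, :]))
    U, s, Vt = np.linalg.svd(B, full_matrices=False)

    # Discard directions with tiny singular values: inverting them would
    # amplify the noise and produce a wildly oscillating reconstruction
    k = int(np.sum(s > 1e-3 * s[0]))
    coef = Vt[:k].T @ ((U[:, :k].T @ noisy) / s[:k])
    recon = np.sinc(2 * W * (t_full[:, None] - t_full[None, :])) @ coef
    ```

    The truncation threshold trades fidelity on the segment against smoothness of the reconstruction; a looser threshold fits the noise, a tighter one over-smooths.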

Jesse M. Zhang – 2nd expert on this subject based on the ideXlab platform

  • A Fourier-Based Approach to Generalization and Optimization in Deep Learning
    IEEE Journal on Selected Areas in Information Theory, 2020
    Co-Authors: Farzan Farnia, Jesse M. Zhang

    Abstract:

    The success of deep neural networks stems from their ability to generalize well on real data; however, prior work has observed that neural networks can easily overfit randomly generated labels. This observation raises the following question: why do gradient methods succeed in finding generalizable solutions for neural networks when solutions with poor generalization behavior also exist? In this work, we use a Fourier-based approach to study the generalization properties of gradient-based methods over two-layer neural networks with band-limited activation functions. Our results indicate that in such settings, if the underlying data distribution enjoys nice Fourier properties, including band-limitedness and bounded Fourier norm, then gradient descent can converge to local minima with good generalization behavior. We also establish a Fourier-based generalization error bound for band-limited function spaces, applicable to two-layer neural networks with general activation functions. This generalization bound motivates a grouped version of path norms for measuring the complexity of two-layer neural networks with ReLU-type activation functions. We empirically demonstrate that regularizing the group path norms yields neural network solutions that can fit true labels without losing test accuracy while not overfitting random labels.
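    Path norms are easy to state concretely for a two-layer ReLU network f(x) = v · relu(Wx). The sketch below shows a standard l1 path norm and one plausible per-hidden-unit grouping; this illustrates the general idea of grouping paths, and is not necessarily the exact definition used in the paper:

    ```python
    import numpy as np

    # W has shape (hidden, inputs); v has shape (hidden,)
    def path_norm(W, v):
        # l1 path norm: sum over all input->hidden->output paths of |w_ji * v_j|
        return np.sum(np.abs(v[:, None] * W))

    def group_path_norm(W, v):
        # group the paths through each hidden unit j: |v_j| * ||w_j||_2
        return np.sum(np.abs(v) * np.linalg.norm(W, axis=1))

    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 3))
    v = rng.standard_normal(8)

    # Per group, ||w_j||_2 <= ||w_j||_1, so the grouped norm lower-bounds
    # the l1 path norm
    assert group_path_norm(W, v) <= path_norm(W, v)
    ```

    As a regularizer, either quantity can simply be added to the training loss; both are invariant to the layer-wise rescaling (W, v) -> (cW, v/c) that leaves the network function unchanged.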

Farzan Farnia – 3rd expert on this subject based on the ideXlab platform

  • A Fourier-Based Approach to Generalization and Optimization in Deep Learning
    IEEE Journal on Selected Areas in Information Theory, 2020
    Co-Authors: Farzan Farnia, Jesse M. Zhang

    Abstract:

    The success of deep neural networks stems from their ability to generalize well on real data; however, prior work has observed that neural networks can easily overfit randomly generated labels. This observation raises the following question: why do gradient methods succeed in finding generalizable solutions for neural networks when solutions with poor generalization behavior also exist? In this work, we use a Fourier-based approach to study the generalization properties of gradient-based methods over two-layer neural networks with band-limited activation functions. Our results indicate that in such settings, if the underlying data distribution enjoys nice Fourier properties, including band-limitedness and bounded Fourier norm, then gradient descent can converge to local minima with good generalization behavior. We also establish a Fourier-based generalization error bound for band-limited function spaces, applicable to two-layer neural networks with general activation functions. This generalization bound motivates a grouped version of path norms for measuring the complexity of two-layer neural networks with ReLU-type activation functions. We empirically demonstrate that regularizing the group path norms yields neural network solutions that can fit true labels without losing test accuracy while not overfitting random labels.