Bias-Variance Dilemma

The experts below were selected from a list of 192 experts worldwide, ranked by the ideXlab platform.

R. Manduchi - One of the best experts on this subject based on the ideXlab platform.

  • Invariant operators, small samples, and the Bias-Variance Dilemma
    Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2004. CVPR 2004., 2004
    Co-Authors: R. Manduchi
    Abstract:

    Invariant features or operators are often used to shield the recognition process from the effect of "nuisance" parameters, such as rotations, foreshortening, or illumination changes. From an information-theoretic point of view, imposing invariance results in reduced (rather than improved) system performance. In the case of small training samples, however, the situation is reversed, and invariant operators may reduce the misclassification rate. We propose an analysis of this behavior based on the Bias-Variance Dilemma and present experimental results confirming our theoretical expectations. In addition, we introduce the concept of "randomized invariants" for training, which can be used to mitigate the effect of small sample size.
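
    The small-sample effect described above lends itself to a quick numerical illustration. Below is a minimal Python sketch (not code from the paper) contrasting three treatments of a rotation nuisance when only a handful of training samples are available: the raw coordinates, a rotation-invariant feature (the radius), and "randomized invariants" in the sense of augmenting the training set with randomly rotated copies. The toy data model, the 1-nearest-neighbour classifier, and all parameter values are illustrative assumptions, not the paper's experimental setup.

    # Illustrative sketch only (not from the paper); data model, classifier,
    # and parameters are assumptions made for this example.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample(n, cls):
        # Classes differ in radius; each observation carries a random rotation (the nuisance).
        r = 1.0 + 0.5 * cls + 0.2 * rng.standard_normal(n)
        theta = rng.uniform(0.0, 2.0 * np.pi, n)
        return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

    def one_nn_error(train_x, train_y, test_x, test_y):
        # 1-nearest-neighbour classification with squared Euclidean distance.
        d2 = ((test_x[:, None, :] - train_x[None, :, :]) ** 2).sum(-1)
        pred = train_y[np.argmin(d2, axis=1)]
        return float((pred != test_y).mean())

    n_train, n_test, n_aug = 5, 500, 20
    errors = {"raw coordinates": [], "invariant (radius)": [], "randomized invariants": []}
    for _ in range(100):
        train_x = np.vstack([sample(n_train, c) for c in (0, 1)])
        train_y = np.repeat([0, 1], n_train)
        test_x = np.vstack([sample(n_test, c) for c in (0, 1)])
        test_y = np.repeat([0, 1], n_test)

        # Raw features: no bias from the representation, but high variance with 5 samples per class.
        errors["raw coordinates"].append(one_nn_error(train_x, train_y, test_x, test_y))

        # Invariant operator: the radius removes the rotation nuisance entirely.
        radius = lambda z: np.linalg.norm(z, axis=1, keepdims=True)
        errors["invariant (radius)"].append(
            one_nn_error(radius(train_x), train_y, radius(test_x), test_y))

        # Randomized invariants: augment the small training set with randomly rotated copies.
        phi = rng.uniform(0.0, 2.0 * np.pi, n_aug)
        rots = np.stack([[[np.cos(p), -np.sin(p)], [np.sin(p), np.cos(p)]] for p in phi])
        aug_x = (rots[None] @ train_x[:, None, :, None]).squeeze(-1).reshape(-1, 2)
        aug_y = np.repeat(train_y, n_aug)
        errors["randomized invariants"].append(one_nn_error(aug_x, aug_y, test_x, test_y))

    for name, errs in errors.items():
        print(f"{name:22s} mean misclassification rate: {np.mean(errs):.3f}")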

René Doursat - One of the best experts on this subject based on the ideXlab platform.

  • Neural Networks and the Bias/Variance Dilemma
    Neural Computation, 1992
    Co-Authors: Stuart Geman, Elie Bienenstock, René Doursat
    Abstract:

    Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. By way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals.
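
    The decomposition behind the dilemma can be estimated numerically by refitting an estimator on many independent training sets and comparing the average prediction to the true regression function. The sketch below (Python, not code from the paper) uses polynomial least squares as a stand-in for the nonparametric regression estimators discussed in the abstract; the target function, noise level, sample size, and degrees are illustrative assumptions. Low-degree fits show high bias and low variance, and more flexible fits show the reverse.

    # Illustrative sketch only (not from the paper): Monte Carlo estimate of the
    # bias/variance decomposition for regression estimators of varying flexibility.
    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: np.sin(2 * np.pi * x)        # true regression function (an assumption)
    x_grid = np.linspace(0.0, 1.0, 200)        # points at which the estimator is evaluated
    n, sigma, n_runs = 25, 0.3, 500            # sample size, noise std, Monte Carlo runs

    for degree in (1, 3, 9):
        preds = np.empty((n_runs, x_grid.size))
        for r in range(n_runs):
            x = rng.uniform(0.0, 1.0, n)
            y = f(x) + sigma * rng.standard_normal(n)
            preds[r] = np.polyval(np.polyfit(x, y, degree), x_grid)   # one fit per training set
        mean_pred = preds.mean(axis=0)
        bias2 = ((mean_pred - f(x_grid)) ** 2).mean()   # squared bias, averaged over x
        variance = preds.var(axis=0).mean()             # variance, averaged over x
        print(f"degree {degree}: bias^2 = {bias2:.4f}, variance = {variance:.4f}, "
              f"sum = {bias2 + variance:.4f}")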

Dan Schonfeld - One of the best experts on this subject based on the ideXlab platform.

  • Space Kernel Analysis
    2009 IEEE International Conference on Acoustics Speech and Signal Processing, 2009
    Co-Authors: Liuling Gong, Dan Schonfeld
    Abstract:

    In this paper, we propose a novel nonparametric modeling technique, Space Kernel Analysis (SKA), which follows from the definition of the space kernel. We analyze the uncertainty of SKA and show that SKA is subject to the Bias/Variance Dilemma. Nevertheless, we demonstrate that, by a proper choice of the space kernel matrix, SKA is able to balance robustness and accuracy and hence outperforms other kernel-based learning methods. The cost function of SKA is derived, and it is shown that SKA minimizes a Weighted Least Squares cost function whose weight matrix is diagonal and determined by the space kernel matrix. The parallels between SKA and several other nonparametric modeling techniques are examined. This analysis shows that traditional Kernel Regression, the General Regression Neural Network, Similarity Based Modeling, and the Radial Basis Function Network are all examples of SKA with specific space kernel matrices.
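
    The weighted-least-squares interpretation is easiest to see in the classical special case mentioned at the end of the abstract. The sketch below (Python, not the paper's formulation) writes Nadaraya-Watson kernel regression as a locally constant weighted least squares fit whose diagonal weights come from a kernel evaluated against the query point, mirroring the role the abstract assigns to the space kernel matrix; the Gaussian kernel, bandwidth, and toy data are illustrative assumptions.

    # Illustrative sketch only (not from the paper): kernel regression as
    # weighted least squares with a diagonal, kernel-defined weight matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, 60)
    y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(60)

    def predict(x_query, bandwidth=0.1):
        # Diagonal weights: one kernel evaluation per training point.
        w = np.exp(-0.5 * ((x - x_query) / bandwidth) ** 2)
        # Weighted least squares for a locally constant model m(x_query) = c:
        #   minimize sum_i w_i * (y_i - c)^2   =>   c = (w @ y) / w.sum()
        return (w @ y) / w.sum()

    for xq in np.linspace(0.0, 1.0, 9):
        print(f"x = {xq:.2f}   m(x) = {predict(xq): .3f}   true = {np.sin(2 * np.pi * xq): .3f}")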

Ricardo H. C. Takahashi - One of the best experts on this subject based on the ideXlab platform.

  • LMI formulation for multiobjective learning in Radial Basis Function neural networks
    The 2010 International Joint Conference on Neural Networks (IJCNN), 2010
    Co-Authors: Gladston J. P. Moreira, Elizabeth F. Wanner, Frederico G. Guimarães, Luiz H. Duczmal, Ricardo H. C. Takahashi
    Abstract:

    This work presents a Linear Matrix Inequality (LMI) formulation for training Radial Basis Function (RBF) neural networks in the context of multiobjective learning. The multiobjective learning approach treats the Bias-Variance Dilemma in neural network modeling as a bi-objective optimization problem: minimization of the empirical risk, measured by the sum of squared errors over the training data, and minimization of the structural complexity, measured by the norm of the weight vector. We transform the multiobjective problem into a constrained mono-objective one using the ϵ-constraint method. This mono-objective problem can be efficiently solved using an LMI formulation. A procedure for choosing the width parameter of the radial basis functions is also presented. The results show that the proposed methodology provides generalization control and high-quality solutions.
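
    The ϵ-constraint transformation used above can be illustrated without an LMI solver. The sketch below (Python, not the paper's method) minimizes the training sum of squared errors of an RBF model subject to a bound ϵ on the squared weight norm, and solves the constrained problem by bisection on a ridge penalty rather than by an LMI program; the centers, width, toy data, and ϵ values are illustrative assumptions.

    # Illustrative sketch only (not the paper's LMI formulation): the
    # epsilon-constraint view of bi-objective RBF training.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 40)
    y = np.sinc(3 * x) + 0.1 * rng.standard_normal(40)

    centers, width = np.linspace(-1.0, 1.0, 15), 0.2
    Phi = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)   # RBF design matrix

    def ridge_weights(lam):
        # Minimizer of ||Phi w - y||^2 + lam * ||w||^2 (normal equations).
        return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

    def solve_eps_constraint(eps, iters=60):
        # ||w(lam)||^2 decreases as lam grows, so bisect lam (on a log scale)
        # until the constraint ||w||^2 <= eps is met as tightly as possible.
        lo, hi = 1e-10, 1e6
        w = ridge_weights(lo)
        if w @ w <= eps:
            return w                        # constraint inactive: near-unconstrained fit
        for _ in range(iters):
            mid = np.sqrt(lo * hi)
            lo, hi = (mid, hi) if ridge_weights(mid) @ ridge_weights(mid) > eps else (lo, mid)
        return ridge_weights(hi)

    for eps in (0.1, 1.0, 10.0):
        w = solve_eps_constraint(eps)
        print(f"eps = {eps:5.1f}   ||w||^2 = {w @ w:7.3f}   training SSE = {np.sum((Phi @ w - y) ** 2):.4f}")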

Stuart Geman - One of the best experts on this subject based on the ideXlab platform.

  • Neural Networks and the Bias/Variance Dilemma
    Neural Computation, 1992
    Co-Authors: Stuart Geman, Elie Bienenstock, René Doursat
    Abstract:

    Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. By way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals.
