Bias Variance Dilemma

The experts below are selected from a list of 192 experts worldwide, ranked by the ideXlab platform.

R. Manduchi – 1st expert on this subject based on the ideXlab platform

  • Invariant operators, small samples, and the Bias/Variance Dilemma
    Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), 2004
    Co-Authors: R. Manduchi

    Abstract:

    Invariant features or operators are often used to shield the recognition process from the effect of “nuisance” parameters, such as rotations, foreshortening, or illumination changes. From an information-theoretic point of view, imposing invariance results in reduced (rather than improved) system performance. However, in the case of small training samples, the situation is reversed, and invariant operators may reduce the misclassification rate. We propose an analysis of this interesting behavior based on the Bias/Variance Dilemma, and present experimental results confirming our theoretical expectations. In addition, we introduce the concept of “randomized invariants” for training, which can be used to mitigate the effect of small sample size.
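
The abstract does not spell out the experimental setup, but the “randomized invariants” idea can be sketched on synthetic data: when labels are invariant to a nuisance transform (here, planar rotation), replicating a small training sample under random instances of that transform acts like extra data and tends to lower estimator variance. Everything below (the radial two-class problem, the k-NN classifier, the sample sizes and rotation counts) is an illustrative assumption, not the paper's protocol.

```python
# Illustrative sketch only: "randomized invariants" as rotation augmentation on a
# synthetic, rotation-invariant two-class problem. The task, classifier, and
# sample sizes are assumptions for illustration, not the paper's experiments.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def sample(n):
    # Two classes defined by radius, so the label is invariant to rotation.
    r = np.where(rng.random(n) < 0.5, 1.0, 2.0)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    X = np.c_[r * np.cos(theta), r * np.sin(theta)] + 0.25 * rng.normal(size=(n, 2))
    y = (r > 1.5).astype(int)
    return X, y

def rotate(X, angles):
    # Rotate each 2-D point by its own random angle.
    c, s = np.cos(angles), np.sin(angles)
    return np.stack([c * X[:, 0] - s * X[:, 1], s * X[:, 0] + c * X[:, 1]], axis=1)

X_train, y_train = sample(16)      # deliberately small training sample
X_test, y_test = sample(2000)

# Baseline: fit directly on the small sample.
plain = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# Randomized invariants: replicate the sample under random rotations.
reps = 25
X_aug = np.concatenate(
    [rotate(X_train, rng.uniform(0.0, 2.0 * np.pi, len(X_train))) for _ in range(reps)]
)
y_aug = np.tile(y_train, reps)
augmented = KNeighborsClassifier(n_neighbors=3).fit(X_aug, y_aug)

print("plain small-sample classifier:", plain.score(X_test, y_test))
print("rotation-augmented classifier:", augmented.score(X_test, y_test))
```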

René Doursat – 2nd expert on this subject based on the ideXlab platform

  • Neural Networks and the Bias/Variance Dilemma
    Neural Computation, 1992
    Co-Authors: Stuart Geman, Elie Bienenstock, René Doursat

    Abstract:

    Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. By way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals.
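
For reference, the decomposition that gives the dilemma its name can be written out for squared-error regression. The notation below (a training set D and an estimator f_D fit on it) is a standard choice, not necessarily the paper's own.

```latex
% Expected squared error of an estimator f_D (fit on training set D) at input x,
% averaged over training sets, split into squared bias plus variance;
% E[y|x] is the regression function being estimated.
\mathbb{E}_{\mathcal D}\!\left[\big(f_{\mathcal D}(x) - \mathbb{E}[y \mid x]\big)^2\right]
  = \underbrace{\big(\mathbb{E}_{\mathcal D}[f_{\mathcal D}(x)] - \mathbb{E}[y \mid x]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_{\mathcal D}\!\left[\big(f_{\mathcal D}(x) - \mathbb{E}_{\mathcal D}[f_{\mathcal D}(x)]\big)^2\right]}_{\text{variance}}
```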

Dan Schonfeld – 3rd expert on this subject based on the ideXlab platform

  • Space Kernel Analysis
    2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2009
    Co-Authors: Liuling Gong, Dan Schonfeld

    Abstract:

    In this paper, we propose a novel nonparametric modeling technique, Space Kernel Analysis (SKA), built on the definition of the space kernel. We analyze the uncertainty of SKA and show that SKA is subject to the Bias/Variance Dilemma. Nevertheless, we demonstrate that, by a proper choice of the space kernel matrix, SKA can balance robustness and accuracy and hence outperform other kernel-based learning methods. We derive the cost function of SKA and prove that SKA minimizes a weighted least-squares cost function whose weight matrix is diagonal and determined by the space kernel matrix. We also examine the parallels between SKA and several other nonparametric modeling techniques, showing that traditional Kernel Regression, the General Regression Neural Network, Similarity-Based Modeling, and the Radial Basis Function Network are all instances of SKA with particular space kernel matrices.
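
The space kernel matrix itself is not given in this abstract, but the weighted-least-squares view it describes can be sketched: with a diagonal weight matrix generated by a kernel and a constant local model, the minimizer is the kernel-weighted mean, i.e. Nadaraya-Watson kernel regression, which the abstract lists as a special case of SKA. The Gaussian kernel, bandwidths, and test function below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: a diagonal, kernel-generated weight matrix plugged into a
# weighted least-squares fit of a local constant. This reproduces kernel
# regression (a stated special case of SKA); the bandwidth controls the
# bias/variance trade-off the abstract refers to.
import numpy as np

def gaussian_weights(x0, X, bandwidth):
    # Diagonal of the weight matrix W(x0): one kernel weight per training point.
    return np.exp(-0.5 * ((X - x0) / bandwidth) ** 2)

def kernel_regression(x0, X, y, bandwidth=0.3):
    # Minimize sum_i w_i (y_i - c)^2 over a constant c; the minimizer is the
    # kernel-weighted mean, i.e. the Nadaraya-Watson estimate at x0.
    w = gaussian_weights(x0, X, bandwidth)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200))
y = np.sin(X) + 0.2 * rng.normal(size=X.shape)

grid = np.linspace(0.0, 2.0 * np.pi, 5)
for h in (0.05, 0.3, 1.5):   # small h: low bias / high variance; large h: the reverse
    fit = [kernel_regression(x0, X, y, h) for x0 in grid]
    print(f"bandwidth={h}:", np.round(fit, 2))
```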
