Structural Risk Minimization


The Experts below are selected from a list of 2346 Experts worldwide, ranked by the ideXlab platform

Sandro Ridella - One of the best experts on this subject based on the ideXlab platform.

  • Rademacher Complexity and Structural Risk Minimization: an application to human gene expression datasets
    International Conference on Artificial Neural Networks, 2012
    Co-Authors: Luca Oneto, Davide Anguita, Alessandro Ghio, Sandro Ridella
    Abstract:

    In this paper, we target the problem of model selection for Support Vector Classifiers through in-sample methods, which are particularly appealing in the small-sample regime, i.e. when few high-dimensional patterns are available. In particular, we describe the application of a trimmed hinge loss function to Rademacher Complexity and Maximal Discrepancy based in-sample approaches. We also show that the selected classifiers outperform the ones obtained with other state-of-the-art in-sample and out-of-sample model selection techniques in classifying Human Gene Expression datasets.
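The Rademacher-complexity machinery such in-sample approaches build on can be sketched in a few lines. The following Monte Carlo estimator is our own illustration (not the paper's trimmed-hinge-loss procedure): it estimates the empirical Rademacher complexity of a finite hypothesis set from the hypotheses' predictions on the sample.

```python
import numpy as np

def empirical_rademacher(predictions, n_draws=1000, rng=None):
    """Monte Carlo estimate of the empirical Rademacher complexity of a
    finite hypothesis set, given its predictions on the sample.

    predictions: (n_hypotheses, n_samples) array with values in [-1, +1].
    """
    rng = np.random.default_rng(rng)
    n_hyp, n = predictions.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
        # supremum over the hypothesis set of the signed empirical mean
        total += np.max(predictions @ sigma) / n
    return total / n_draws

# Toy usage: two constant hypotheses (+1 and -1) on a sample of size 100.
preds = np.vstack([np.ones(100), -np.ones(100)])
r = empirical_rademacher(preds, n_draws=2000, rng=0)
```

For this two-hypothesis toy set the estimate approaches E|n^{-1} sum_i sigma_i|, roughly sqrt(2/(pi n)), which shrinks as the sample grows; richer hypothesis sets yield larger values, and that size is exactly what the complexity penalty charges for.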

  • Structural Risk Minimization and Rademacher Complexity for regression
    The European Symposium on Artificial Neural Networks, 2012
    Co-Authors: Davide Anguita, Alessandro Ghio, Luca Oneto, Sandro Ridella
    Abstract:

    The Structural Risk Minimization principle allows estimating the generalization ability of a learned hypothesis by measuring the complexity of the entire hypothesis class. Two of the most recent and effective complexity measures are the Rademacher Complexity and the Maximal Discrepancy, which have been applied to the derivation of generalization bounds for kernel classifiers. In this work, we extend their application to the regression framework.
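In generic form (notation ours, for a loss bounded in [0, 1], not the paper's specific regression bound), the empirical-Rademacher bound underlying such SRM schemes states that, with probability at least 1 - δ, for every h in the class H:

```latex
L(h) \;\le\; \hat{L}_n(h) \;+\; 2\,\hat{\mathfrak{R}}_n(\mathcal{H}) \;+\; 3\sqrt{\frac{\ln(2/\delta)}{2n}}
```

SRM then selects, across a nested sequence of classes, the hypothesis minimizing the right-hand side: empirical risk plus a data-dependent complexity penalty.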

Bilge Karacali - One of the best experts on this subject based on the ideXlab platform.

  • a comparative analysis of Structural Risk Minimization by support vector machines and nearest neighbor rule
    Pattern Recognition Letters, 2004
    Co-Authors: Bilge Karacali, Rajeev Ramanath, Wesley E Snyder
    Abstract:

    Support vector machines (SVMs) are by far the most sophisticated and powerful classifiers available today. However, this robustness and novelty in approach come at a large computational cost. On the other hand, nearest neighbor (NN) classifiers provide a simple yet robust approach that is guaranteed to converge to a result. In this paper, we present a technique that combines these two classifiers by adopting a NN rule-based Structural Risk Minimization classifier. Using synthetic and real data, the classification technique is shown to be more robust to kernel conditions with a significantly lower computational cost than conventional SVMs. Consequently, the proposed method provides a powerful alternative to SVMs in applications where computation time and accuracy are of prime importance. Experimental results indicate that the NNSRM formulation is not only computationally less expensive, but also much more robust to varying data representations than SVMs.
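The reference-set reduction idea at the heart of NNSRM can be illustrated with Hart's classic condensed nearest neighbor rule, used here as a stand-in (the papers' own thinning algorithm differs): keep only the training points the 1-NN rule actually needs to classify the rest of the training set.

```python
import numpy as np

def condensed_nn(X, y, rng=None):
    """Hart-style condensed nearest neighbor thinning: retain only the
    points required for the 1-NN rule to classify the training set.
    Illustrative stand-in for the papers' reference-set thinning."""
    rng = np.random.default_rng(rng)
    order = rng.permutation(len(X))
    keep = [order[0]]                      # seed the reference set
    changed = True
    while changed:
        changed = False
        for i in order:
            ref = np.asarray(keep)
            # 1-NN prediction of point i from the current reference set
            d = np.linalg.norm(X[ref] - X[i], axis=1)
            if y[ref[np.argmin(d)]] != y[i]:
                keep.append(i)             # misclassified: add it
                changed = True
    return np.array(sorted(set(keep)))

# Two well-separated clusters: the reduced set is far smaller than the data.
X = np.vstack([np.random.default_rng(0).normal(0, 0.3, (50, 2)),
               np.random.default_rng(1).normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
idx = condensed_nn(X, y, rng=0)
```

At termination the 1-NN rule on the reduced set reproduces the labels of the full training set, which is what makes subsequent classification cheap compared with training an SVM.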

  • Structural Risk Minimization using nearest neighbor rule
    International Conference on Acoustics Speech and Signal Processing, 2003
    Co-Authors: A B Hamza, Hamid Krim, Bilge Karacali
    Abstract:

    We present a novel nearest neighbor rule-based implementation of the Structural Risk Minimization principle to address a generic classification problem. We propose a fast reference set thinning algorithm on the training data set, similar to a support vector machine approach. We then show that the nearest neighbor rule based on the reduced set implements the Structural Risk Minimization principle in a manner that does not involve selection of a convenient feature space. Simulation results on real data indicate that this method significantly reduces the computational cost of conventional support vector machines and achieves nearly comparable test error performance.

  • Fast Minimization of Structural Risk by nearest neighbor rule
    IEEE transactions on neural networks, 2003
    Co-Authors: Bilge Karacali, Hamid Krim
    Abstract:

    In this paper, we present a novel nearest neighbor rule-based implementation of the Structural Risk Minimization principle to address a generic classification problem. We propose a fast reference set thinning algorithm on the training data set, similar to a support vector machine (SVM) approach. We then show that the nearest neighbor rule based on the reduced set implements the Structural Risk Minimization principle in a manner that does not involve selection of a convenient feature space. Simulation results on real data indicate that this method significantly reduces the computational cost of conventional SVMs and achieves nearly comparable test error performance.

Marino Gatto - One of the best experts on this subject based on the ideXlab platform.

  • Structural Risk Minimization: a robust method for density-dependence detection and model selection
    Ecography, 2007
    Co-Authors: Giorgio Corani, Marino Gatto
    Abstract:

    Statistically distinguishing density-dependent from density-independent populations and selecting the best demographic model for a given population are problems of primary importance. Traditional approaches are PBLR (parametric bootstrapping of likelihood ratios) and information criteria (IC), such as the Schwarz information criterion (SIC), the Akaike information criterion (AIC) or the final prediction error (FPE). While PBLR is suitable for choosing between a pair of models, ICs select the best model from among a set of candidates. In this paper, we use the Structural Risk Minimization (SRM) approach. SRM is the model selection criterion developed within Statistical Learning Theory (SLT), a theory of great generality for modelling and learning with finite samples. SRM is almost unknown in the ecological literature and has never been used to analyze time series. First, we compare SRM with PBLR in terms of their ability to discriminate between the Malthusian and the density-dependent Ricker model. We rigorously repeat the experiments described in a previous study and find that SRM is equally powerful in detecting density-independence and much more powerful in detecting density-dependence. Then, we compare SRM against ICs in terms of their ability to select one of several candidate models; we generate, via stochastic simulation, a large number of artificial time series, both density-independent and density-dependent, with and without exogenous covariates, using different dataset sizes, noise levels and parameter values. Our findings show that SRM outperforms traditional ICs because, generally, a) it recognizes the model underlying the data with higher frequency, and b) it leads to lower errors in out-of-sample predictions. SRM's superiority is especially apparent with short time series.
    We finally apply SRM to the population records of Alpine ibex Capra ibex living in the Gran Paradiso National Park (Italy), already investigated by other authors via traditional statistical methods; we both analyze their models and introduce some novel ones. We show that the models selected by SRM also exhibit the lowest leave-one-out cross-validation error.
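The kind of experiment described here can be sketched minimally as follows. The sketch uses SIC (one of the information criteria the paper compares against) rather than the full SRM machinery, and all parameter values are our own illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a noisy Ricker (density-dependent) series:
#   N_{t+1} = N_t * exp(a + b*N_t + eps),  with b < 0
a, b, T = 1.0, -0.01, 60
N = np.empty(T)
N[0] = 10.0
for t in range(T - 1):
    N[t + 1] = N[t] * np.exp(a + b * N[t] + rng.normal(0, 0.1))

# Per-step growth rates: r_t = log(N_{t+1} / N_t)
r = np.log(N[1:] / N[:-1])
n = len(r)

def sic(residuals, k):
    """Schwarz information criterion for a Gaussian regression, k parameters."""
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + k * np.log(n)

# Malthusian model: r_t = a            (density-independent)
res_m = r - r.mean()
# Ricker model:     r_t = a + b * N_t  (density-dependent), fit by OLS
Xd = np.column_stack([np.ones(n), N[:-1]])
coef, *_ = np.linalg.lstsq(Xd, r, rcond=None)
res_r = r - Xd @ coef

best = "Ricker" if sic(res_r, 2) < sic(res_m, 1) else "Malthusian"
```

Because the simulated growth rate declines systematically with abundance, the density-dependent model cuts the residual sum of squares enough to pay the extra-parameter penalty, and the criterion recovers the generating model.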

  • VC dimension and Structural Risk Minimization for the analysis of nonlinear ecological models
    Applied Mathematics and Computation, 2006
    Co-Authors: Giorgio Corani, Marino Gatto
    Abstract:

    The problem of distinguishing density-independent (DI) from density-dependent (DD) demographic time series is important for understanding the mechanisms that regulate populations of animals and plants. We address this problem in a novel way by means of Statistical Learning Theory. First, we estimate the VC-dimensions of the best known nonlinear ecological models through the methodology proposed by Vapnik et al. [V. Vapnik, E. Levin, Y. Le Cun, Measuring the VC-dimension of a learning machine, Neural Comput. 6 (1994) 851–876]. Then, we generate noisy artificial time series, both DI and DD, and use Structural Risk Minimization (SRM) to recognize the model underlying the data from among a suite of alternative candidates. The method shows an encouraging ability to distinguish between DI and DD time series.
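Once VC-dimensions are estimated, the SRM selection rule trades empirical fit against a VC confidence term. The sketch below uses Vapnik's classical penalty for 0/1 loss; the candidate VC-dimensions and empirical risks are illustrative numbers of our own, not values from the paper.

```python
import numpy as np

def vc_penalty(h, n, delta=0.05):
    """Vapnik's VC confidence term for 0/1 loss:
    sqrt((h * (ln(2n/h) + 1) + ln(4/delta)) / n)."""
    return np.sqrt((h * (np.log(2 * n / h) + 1) + np.log(4 / delta)) / n)

# SRM: among nested candidate models, pick the one minimizing
# empirical risk + VC penalty (hypothetical numbers for illustration).
n = 200
candidates = [   # (estimated VC-dimension, empirical risk)
    (2, 0.30),
    (5, 0.18),
    (12, 0.15),
    (40, 0.05),
]
bounds = [emp + vc_penalty(h, n) for h, emp in candidates]
best_h, best_emp = candidates[int(np.argmin(bounds))]
```

Note how the rule rejects both the crudest model (high empirical risk) and the richest one (its tiny training error is swamped by the penalty), landing on an intermediate complexity.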

  • An application of Structural Risk Minimization to the selection of ecological models
    IFAC Proceedings Volumes, 2005
    Co-Authors: Giorgio Corani, Marino Gatto
    Abstract:

    The problem of distinguishing density-independent (DI) from density-dependent (DD) demographic time series has been addressed in the past via hypothesis testing based on parametric bootstrapping (PBLR) and, in later works, by information criteria such as FPE or SIC. Here, we address the problem in a novel way using Structural Risk Minimization (SRM). DI and DD time series corrupted with noise are extensively simulated using a drift (DI) and a Ricker (DD) model; on each generated time series, both models are identified, and then one is selected by FPE, SIC and SRM. The probability of density-[in]dependence recognition is statistically assessed and compared with the results obtained via PBLR in a previous work.
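The two information criteria named above are easy to state; the definitions below are the commonly used forms (the paper's exact variants may differ slightly). Both charge a richer model for its extra parameters, so it must reduce the residual sum of squares enough to pay the penalty.

```python
import numpy as np

def fpe(rss, n, k):
    """Final prediction error: residual variance inflated by (n + k)/(n - k)."""
    return (rss / n) * (n + k) / (n - k)

def sic(rss, n, k):
    """Schwarz information criterion (up to an additive constant)."""
    return n * np.log(rss / n) + k * np.log(n)

# Hypothetical comparison: a small RSS improvement does not justify
# two extra parameters under either criterion.
n = 50
simple = (fpe(4.0, n, 1), sic(4.0, n, 1))   # RSS 4.0 with 1 parameter
richer = (fpe(3.9, n, 3), sic(3.9, n, 3))   # RSS 3.9 with 3 parameters
```

Here both criteria prefer the simpler model; SRM plays the same role but derives its penalty from the capacity (VC-dimension) of the model class rather than the parameter count.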
