Independent Random Sample

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 43,152 Experts worldwide ranked by the ideXlab platform

Segers Johan - One of the best experts on this subject based on the ideXlab platform.

  • Risk bounds when learning infinitely many response functions by ordinary linear regression
    2020
    Co-Authors: Plassier Vincent, Portier François, Segers Johan
    Abstract:

    Consider the problem of learning a large number of response functions simultaneously based on the same input variables. The training data consist of a single independent random sample of the input variables drawn from a common distribution, together with the associated responses. The input variables are mapped into a high-dimensional linear space, called the feature space, and the response functions are modelled as linear functionals of the mapped features, with coefficients calibrated via ordinary least squares. We provide convergence guarantees on the worst-case excess prediction risk by controlling the convergence rate of the excess risk uniformly in the response function. The dimension of the feature map is allowed to tend to infinity with the sample size. The collection of response functions, although potentially infinite, is supposed to have a finite Vapnik–Chervonenkis dimension. The bound derived can be applied when building multiple surrogate models in a reasonable computing time. Comment: 19 pages.
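The calibration step described in the abstract, fitting many response functions at once by ordinary least squares on a shared feature map, can be illustrated with a minimal NumPy sketch. All names, dimensions, and the polynomial feature map below are illustrative assumptions, not taken from the paper; the point is that one least-squares factorisation of the feature matrix yields the coefficients for every response simultaneously.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n inputs, a degree-d polynomial feature map,
# and m response functions observed on the same sample.
n, d, m = 500, 5, 1000
x = rng.uniform(-1.0, 1.0, size=n)
features = np.vander(x, d + 1, increasing=True)   # feature matrix, shape (n, d+1)
true_coefs = rng.normal(size=(d + 1, m))          # one coefficient column per response
responses = features @ true_coefs + 0.1 * rng.normal(size=(n, m))

# lstsq accepts a matrix right-hand side, so a single solve calibrates
# all m responses at once: coef_hat has shape (d+1, m).
coef_hat, *_ = np.linalg.lstsq(features, responses, rcond=None)

# Crude in-sample proxy for the worst-case (sup over responses) risk:
# the largest mean squared residual across the m response columns.
residual = features @ coef_hat - responses
worst_mse = np.max(np.mean(residual**2, axis=0))
```

This batching is what makes building a large collection of surrogate models cheap: the cost of the factorisation is paid once, regardless of how many responses share the feature map.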

Plassier Vincent - One of the best experts on this subject based on the ideXlab platform.

  • Risk bounds when learning infinitely many response functions by ordinary linear regression
    2020
    Co-Authors: Plassier Vincent, Portier François, Segers Johan
    Abstract:

    Consider the problem of learning a large number of response functions simultaneously based on the same input variables. The training data consist of a single independent random sample of the input variables drawn from a common distribution, together with the associated responses. The input variables are mapped into a high-dimensional linear space, called the feature space, and the response functions are modelled as linear functionals of the mapped features, with coefficients calibrated via ordinary least squares. We provide convergence guarantees on the worst-case excess prediction risk by controlling the convergence rate of the excess risk uniformly in the response function. The dimension of the feature map is allowed to tend to infinity with the sample size. The collection of response functions, although potentially infinite, is supposed to have a finite Vapnik–Chervonenkis dimension. The bound derived can be applied when building multiple surrogate models in a reasonable computing time. Comment: 19 pages.

Portier François - One of the best experts on this subject based on the ideXlab platform.

  • Risk bounds when learning infinitely many response functions by ordinary linear regression
    2020
    Co-Authors: Plassier Vincent, Portier François, Segers Johan
    Abstract:

    Consider the problem of learning a large number of response functions simultaneously based on the same input variables. The training data consist of a single independent random sample of the input variables drawn from a common distribution, together with the associated responses. The input variables are mapped into a high-dimensional linear space, called the feature space, and the response functions are modelled as linear functionals of the mapped features, with coefficients calibrated via ordinary least squares. We provide convergence guarantees on the worst-case excess prediction risk by controlling the convergence rate of the excess risk uniformly in the response function. The dimension of the feature map is allowed to tend to infinity with the sample size. The collection of response functions, although potentially infinite, is supposed to have a finite Vapnik–Chervonenkis dimension. The bound derived can be applied when building multiple surrogate models in a reasonable computing time. Comment: 19 pages.

Johan Segers - One of the best experts on this subject based on the ideXlab platform.

  • extreme value copula estimation based on block maxima of a multivariate stationary time series
    arXiv: Statistics Theory, 2013
    Co-Authors: Axel Bucher, Johan Segers
    Abstract:

    The core of the classical block maxima method consists of fitting an extreme value distribution to a sample of maxima over blocks extracted from an underlying series. In asymptotic theory, it is usually postulated that the block maxima are an independent random sample of an extreme value distribution. In practice, however, block sizes are finite, so that the extreme value postulate will only hold approximately. A more accurate asymptotic framework is that of a triangular array of block maxima, the block size depending on the size of the underlying sample in such a way that both the block size and the number of blocks within that sample tend to infinity. The copula of the vector of componentwise maxima in a block is assumed to converge to a limit, which, under mild conditions, is then necessarily an extreme value copula. Under this setting and for absolutely regular stationary sequences, the empirical copula of the sample of vectors of block maxima is shown to be a consistent and asymptotically normal estimator for the limiting extreme value copula. Moreover, the empirical copula serves as a basis for rank-based, nonparametric estimation of the Pickands dependence function of the extreme value copula. The results are illustrated by theoretical examples and a Monte Carlo simulation study.
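The two ingredients of the estimator in the abstract, componentwise block maxima and their empirical copula, can be sketched in a few lines of NumPy. The series, block size, and correlation below are illustrative assumptions (an i.i.d. Gaussian series rather than the absolutely regular stationary sequences treated in the paper), so this is a toy version of the construction, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bivariate series with positive cross-sectional dependence.
n, block_size = 10_000, 50
series = rng.normal(size=(n, 2))
series[:, 1] = 0.5 * series[:, 0] + np.sqrt(0.75) * series[:, 1]

# Componentwise maxima over disjoint blocks: k = n // block_size vectors.
k = n // block_size
block_max = series[: k * block_size].reshape(k, block_size, 2).max(axis=1)

def _rank(a):
    """Ranks 1..len(a) of the entries of a (no ties for continuous data)."""
    r = np.empty(len(a), dtype=int)
    r[a.argsort()] = np.arange(1, len(a) + 1)
    return r

# Rank-transform each coordinate to (0, 1); dividing by k + 1 keeps the
# pseudo-observations strictly inside the unit square.
u = _rank(block_max[:, 0]) / (k + 1)
v = _rank(block_max[:, 1]) / (k + 1)

def empirical_copula(s, t):
    """Proportion of block maxima whose rank-transformed coordinates
    fall in the rectangle [0, s] x [0, t]."""
    return np.mean((u <= s) & (v <= t))
```

Under positive dependence, `empirical_copula(0.5, 0.5)` exceeds the independence value 1/4, and as the block size and the number of blocks both grow, the function estimates the limiting extreme value copula of the series.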

Ayman Alzaatreh - One of the best experts on this subject based on the ideXlab platform.