Unbiased Estimate

The experts below are selected from a list of 23,703 experts worldwide, as ranked by the ideXlab platform.

T Aaltonen - One of the best experts on this subject based on the ideXlab platform.

  • Measurement of the ratio σ(tt̄)/σ(Z/γ* → ℓℓ) and precise extraction of the tt̄ cross section
    Physical Review Letters, 2010
    Co-Authors: T Aaltonen, P Mehtala, R Orava, K Osterberg, H Saarikko, N Van Remortel, J Adelman, E Brubaker, W T Fedorko, C Grosso-Pilcher
    Abstract:

    We report a measurement of the ratio of the tt̄ to Z/γ* production cross sections in √s = 1.96 TeV pp̄ collisions using data corresponding to an integrated luminosity of up to 4.6 fb⁻¹, collected by the CDF II detector. The tt̄ cross section ratio is measured using two complementary methods, a b-jet tagging measurement and a topological approach. By multiplying the ratios by the well-known theoretical Z/γ* → ℓℓ cross section predicted by the standard model, the extracted tt̄ cross sections are effectively insensitive to the uncertainty on luminosity. A best linear unbiased estimate is used to combine both measurements, with the result σ(tt̄) = 7.70 ± 0.52 pb for a top-quark mass of 172.5 GeV/c².
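    The "best linear unbiased estimate" (BLUE) used here to combine the two measurements is a standard technique: the minimum-variance linear combination of correlated measurements whose weights sum to one. Below is a minimal sketch of the weighting formula; the measurement values and covariance matrix are illustrative placeholders, not the paper's actual inputs.

    ```python
    import numpy as np

    def blue_combine(measurements, covariance):
        """Best linear unbiased estimate (BLUE) of correlated measurements.

        The weights w = C^{-1} 1 / (1^T C^{-1} 1) minimize Var(w . x)
        subject to sum(w) = 1, which keeps the combination unbiased.
        """
        x = np.asarray(measurements, dtype=float)
        cov = np.asarray(covariance, dtype=float)
        ones = np.ones_like(x)
        w = np.linalg.solve(cov, ones)  # C^{-1} 1
        w /= ones @ w                   # normalize so the weights sum to 1
        return w @ x, float(np.sqrt(w @ cov @ w))

    # Illustrative inputs only -- not the paper's numbers: two cross-section
    # values (pb) with partially correlated uncertainties.
    value, error = blue_combine([7.8, 7.6], [[0.36, 0.10],
                                             [0.10, 0.49]])
    print(f"BLUE combination: {value:.2f} +/- {error:.2f} pb")
    ```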

Hidemitsu Ogawa - One of the best experts on this subject based on the ideXlab platform.

  • Theoretical and experimental evaluation of the subspace information criterion
    Machine Learning, 2002
    Co-Authors: Masashi Sugiyama, Hidemitsu Ogawa
    Abstract:

    Recently, a new model selection criterion called the subspace information criterion (SIC) was proposed. SIC works well with small samples since it gives an unbiased estimate of the generalization error with finite samples. In this paper, we theoretically and experimentally evaluate the effectiveness of SIC in comparison with existing model selection techniques, including the traditional leave-one-out cross-validation (CV), Mallows's C_P, Akaike's information criterion (AIC), Sugiura's corrected AIC (cAIC), Schwarz's Bayesian information criterion (BIC), Rissanen's minimum description length criterion (MDL), and Vapnik's measure (VM). The theoretical evaluation covers the generalization measure, the approximation method, and the restrictions on model candidates and learning methods. Experimentally, the performance of SIC is investigated in various situations. The simulations show that SIC outperforms the existing techniques, especially when the number of training examples is small and the noise variance is large.
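    As a point of reference for the baselines named above, the sketch below (a toy example, not from the paper) shows how two of them, AIC and BIC, trade goodness of fit against parameter count when selecting a polynomial degree on a small noisy sample:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 30
    x = np.linspace(-1.0, 1.0, n)
    y = np.sin(np.pi * x) + rng.normal(scale=0.3, size=n)  # small, noisy sample

    for degree in range(1, 8):
        # Least-squares polynomial fit with k = degree + 1 parameters.
        coeffs = np.polyfit(x, y, degree)
        rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
        k = degree + 1
        # Gaussian-likelihood forms, up to an additive constant:
        aic = n * np.log(rss / n) + 2 * k          # Akaike's criterion
        bic = n * np.log(rss / n) + k * np.log(n)  # Schwarz's criterion
        print(f"degree {degree}: AIC = {aic:6.1f}, BIC = {bic:6.1f}")
    ```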

  • Subspace information criterion for model selection
    Neural Computation, 2001
    Co-Authors: Masashi Sugiyama, Hidemitsu Ogawa
    Abstract:

    The problem of model selection is of considerable importance for acquiring higher levels of generalization capability in supervised learning. In this article, we propose a new criterion for model selection, the subspace information criterion (SIC), which is a generalization of Mallows's C_L. It is assumed that the learning target function belongs to a specified functional Hilbert space and that the generalization error is defined as the Hilbert-space squared norm of the difference between the learning result function and the target function. SIC gives an unbiased estimate of the generalization error so defined. SIC assumes the availability of an unbiased estimate of the target function and of the noise covariance matrix, which are generally unknown. A practical calculation method of SIC for least-mean-squares learning is provided under the assumption that the dimension of the Hilbert space is less than the number of training examples. Finally, computer simulations on two examples show that SIC works well even when the number of training examples is small.
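    The paper derives SIC itself; to give the flavor of the Mallows's C_L-type construction it generalizes, here is a minimal sketch of a C_L-style unbiased risk estimate for a linear smoother f̂ = Hy with known noise variance. The setup and names are assumed for illustration, not the authors' code.

    ```python
    import numpy as np

    def cl_risk_estimate(y, hat_matrix, noise_var):
        """C_L-type unbiased estimate of E||H y - f||^2 for a linear
        smoother f_hat = H y, where y = f + noise and Cov(noise) = s^2 I.

        Since E||Hy - y||^2 = E||Hy - f||^2 + n*s^2 - 2*s^2*tr(H),
        correcting the residual norm yields an unbiased risk estimate.
        """
        n = len(y)
        residual = hat_matrix @ y - y
        return (float(residual @ residual) - n * noise_var
                + 2 * noise_var * np.trace(hat_matrix))

    # Toy check on made-up data: ridge-regression hat matrix.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 5))
    f = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])      # noiseless target
    y = f + rng.normal(scale=0.5, size=50)
    H = X @ np.linalg.solve(X.T @ X + 0.1 * np.eye(5), X.T)
    print("estimated risk:", cl_risk_estimate(y, H, noise_var=0.25))
    print("actual squared error:", float((H @ y - f) @ (H @ y - f)))
    ```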

J Antos - One of the best experts on this subject based on the ideXlab platform.

  • Measurement of the ratio σ(tt̄)/σ(Z/γ* → ℓℓ) and precise extraction of the tt̄ cross section
    Physical Review Letters, 2010
    Co-Authors: T Aaltonen, J Adelman, B Alvarez Gonzalez, S Amerio, D Amidei, A Anastassov, A Annovi, J Antos
    Abstract:

    We report a measurement of the ratio of the tt̄ to Z/γ* production cross sections in √s = 1.96 TeV pp̄ collisions using data corresponding to an integrated luminosity of up to 4.6 fb⁻¹, collected by the CDF II detector. The tt̄ cross section ratio is measured using two complementary methods, a b-jet tagging measurement and a topological approach. By multiplying the ratios by the well-known theoretical Z/γ* → ℓℓ cross section predicted by the standard model, the extracted tt̄ cross sections are effectively insensitive to the uncertainty on luminosity. A best linear unbiased estimate is used to combine both measurements, with the result σ(tt̄) = 7.70 ± 0.52 pb for a top-quark mass of 172.5 GeV/c².

Peter Richtarik - One of the best experts on this subject based on the ideXlab platform.

  • SEGA: variance reduction via gradient sketching
    Neural Information Processing Systems, 2018
    Co-Authors: Filip Hanzely, Konstantin Mishchenko, Peter Richtarik
    Abstract:

    We propose a novel randomized first-order optimization method, SEGA (SkEtched GrAdient method), which progressively builds, throughout its iterations, a variance-reduced estimate of the gradient from random linear measurements (sketches) of the gradient provided at each iteration by an oracle. In each iteration, SEGA updates the current estimate of the gradient through a sketch-and-project operation using the information provided by the latest sketch, and this is subsequently used to compute an unbiased estimate of the true gradient through a random relaxation procedure. This unbiased estimate is then used to perform a gradient step. Unlike standard subspace descent methods, such as coordinate descent, SEGA can be used for optimization problems with a non-separable proximal term. We provide a general convergence analysis and prove linear convergence for strongly convex objectives. In the special case of coordinate sketches, SEGA can be enhanced with various techniques such as importance sampling, minibatching, and acceleration, and its rate is, up to a small constant factor, identical to the best-known rate of coordinate descent.
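    For the coordinate-sketch special case mentioned at the end of the abstract, the update is compact enough to sketch. The following is a minimal illustration under uniform coordinate sampling on a made-up quadratic; it is a sketch of the idea, not the authors' reference implementation.

    ```python
    import numpy as np

    def sega_coordinate(grad_coord, x0, n, step, iters, seed=0):
        """SEGA with coordinate sketches: each iteration observes only one
        partial derivative grad_coord(x, i) and still converges.

        h is the sketch-and-project running estimate of the gradient;
        g = h + n * e_i * (grad_i - h_i) is an unbiased estimate, since
        the correction is scaled by 1/p_i = n under uniform sampling.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float).copy()
        h = np.zeros(n)
        for _ in range(iters):
            i = rng.integers(n)
            gi = grad_coord(x, i)
            g = h.copy()
            g[i] += n * (gi - h[i])  # unbiased gradient estimate
            h[i] = gi                # sketch-and-project update of h
            x -= step * g            # gradient step
        return x

    # Toy problem: f(x) = 0.5 * ||x - b||^2, so grad_i f(x) = x[i] - b[i].
    b = np.array([1.0, -2.0, 3.0, 0.5])
    x_out = sega_coordinate(lambda x, i: x[i] - b[i], np.zeros(4), 4,
                            step=0.05, iters=5000)
    print("recovered:", np.round(x_out, 3), "target:", b)
    ```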

Masashi Sugiyama - One of the best experts on this subject based on the ideXlab platform.

  • Theoretical and experimental evaluation of the subspace information criterion
    Machine Learning, 2002
    Co-Authors: Masashi Sugiyama, Hidemitsu Ogawa
    Abstract:

    Recently, a new model selection criterion called the subspace information criterion (SIC) was proposed. SIC works well with small samples since it gives an unbiased estimate of the generalization error with finite samples. In this paper, we theoretically and experimentally evaluate the effectiveness of SIC in comparison with existing model selection techniques, including the traditional leave-one-out cross-validation (CV), Mallows's C_P, Akaike's information criterion (AIC), Sugiura's corrected AIC (cAIC), Schwarz's Bayesian information criterion (BIC), Rissanen's minimum description length criterion (MDL), and Vapnik's measure (VM). The theoretical evaluation covers the generalization measure, the approximation method, and the restrictions on model candidates and learning methods. Experimentally, the performance of SIC is investigated in various situations. The simulations show that SIC outperforms the existing techniques, especially when the number of training examples is small and the noise variance is large.

  • Subspace information criterion for model selection
    Neural Computation, 2001
    Co-Authors: Masashi Sugiyama, Hidemitsu Ogawa
    Abstract:

    The problem of model selection is of considerable importance for acquiring higher levels of generalization capability in supervised learning. In this article, we propose a new criterion for model selection, the subspace information criterion (SIC), which is a generalization of Mallows's C_L. It is assumed that the learning target function belongs to a specified functional Hilbert space and that the generalization error is defined as the Hilbert-space squared norm of the difference between the learning result function and the target function. SIC gives an unbiased estimate of the generalization error so defined. SIC assumes the availability of an unbiased estimate of the target function and of the noise covariance matrix, which are generally unknown. A practical calculation method of SIC for least-mean-squares learning is provided under the assumption that the dimension of the Hilbert space is less than the number of training examples. Finally, computer simulations on two examples show that SIC works well even when the number of training examples is small.