F Test

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 279 experts worldwide, ranked by the ideXlab platform.

Ethem Alpaydin - One of the best experts on this subject based on the ideXlab platform.

  • Combined 5×2 cv F-Test for Comparing Supervised Classification Learning Algorithms
    Neural Computation, 1999
    Co-Authors: Ethem Alpaydin
    Abstract:

    Dietterich (1998) reviews five statistical tests and proposes the 5×2 cv t test for determining whether there is a significant difference between the error rates of two classifiers. In our experiments, we noticed that the 5×2 cv t test result may vary depending on factors that should not affect the test, and we propose a variant, the combined 5×2 cv F-test, that combines multiple statistics to get a more robust test. Simulation results show that this combined version of the test has lower type I error and higher power than the 5×2 cv t test proper.
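A minimal sketch of the combined statistic the abstract describes (illustrative code; the function name and example values are my own, not from the paper): the five replications of 2-fold cross-validation yield ten per-fold error-rate differences, and the combined F statistic divides the sum of their squares by twice the sum of the per-replication variance estimates, referred to an F(10, 5) distribution.

```python
# Sketch of the combined 5x2 cv F-test statistic, as described in the
# abstract above. diffs[i] holds the pair of observed error-rate
# differences between the two classifiers on the two folds of
# replication i (5 replications, 2 folds each).

def combined_5x2cv_f(diffs):
    """Return the combined F statistic; compare against F(10, 5)."""
    assert len(diffs) == 5 and all(len(rep) == 2 for rep in diffs)
    numer = sum(p ** 2 for rep in diffs for p in rep)  # sum of 10 squared differences
    denom = 0.0
    for p1, p2 in diffs:
        pbar = (p1 + p2) / 2.0                         # mean difference in this replication
        denom += (p1 - pbar) ** 2 + (p2 - pbar) ** 2   # per-replication variance estimate
    return numer / (2.0 * denom)

# Example with made-up difference values:
diffs = [(0.02, 0.03), (0.01, 0.04), (0.03, 0.02), (0.02, 0.01), (0.04, 0.03)]
f_stat = combined_5x2cv_f(diffs)
# Reject H0 (equal error rates) at the 5% level if f_stat exceeds the
# F(10, 5) critical value, approximately 4.74.
```

In practice each difference would come from training both classifiers on one fold and measuring their test-error gap on the other.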

Yanfang Li - One of the best experts on this subject based on the ideXlab platform.

  • Choosing Between Two Classification Learning Algorithms Based on a Calibrated Balanced 5×2 Cross-Validated F-Test
    Neural Processing Letters, 2017
    Co-Authors: Yu Wang, Jihong Li, Yanfang Li
    Abstract:

    The 5×2 cross-validated F-test, based on five independent replications of 2-fold cross-validation, is recommended for choosing between two classification learning algorithms. However, the reuse of the same data within a 5×2 cross-validation causes the real degrees of freedom (DOF) of the test to be lower than those of the F(10, 5) distribution given by Alpaydin (Neural Comput 11:1885–1892, [1]). This easily leads the test to suffer from high type I and type II errors. Random partitioning in 5×2 cross-validation makes the DOF of the test difficult to analyze. Wang et al. (Neural Comput 26(1):208–235, [2]) proposed a new blocked 3×2 cross-validation that accounts for the correlation between any two 2-fold cross-validations. Building on this, a calibrated balanced 5×2 cross-validated F-test following an F(7, 5) distribution is put forward in this study by calibrating the DOF of the F(10, 5) distribution. Simulated and real-data studies demonstrate that the calibrated balanced 5×2 cross-validated F-test has lower type I and type II errors than the 5×2 cross-validated F-test following F(10, 5) in most cases.
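One concrete consequence of the recalibration described above is the shift of the rejection threshold: moving the reference distribution from F(10, 5) to F(7, 5) raises the 5% critical value. The sketch below (assuming SciPy is available; it shows only the threshold shift, not the authors' full calibration procedure) compares the two critical values.

```python
# Compare the 5% critical values of the two reference distributions named
# in the abstract: F(10, 5) for the original 5x2 cv F-test and F(7, 5)
# for the calibrated balanced version. Illustration only; assumes SciPy.
from scipy.stats import f

crit_f10_5 = f.ppf(0.95, dfn=10, dfd=5)  # original test's threshold
crit_f7_5 = f.ppf(0.95, dfn=7, dfd=5)    # calibrated test's threshold

print(round(crit_f10_5, 2), round(crit_f7_5, 2))  # roughly 4.74 and 4.88
```

The higher threshold makes the calibrated test reject less readily, which is consistent with the claim that referring the statistic to F(10, 5) understates the effect of the data reuse and inflates type I error.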

  • Choosing Between Two Classification Learning Algorithms Based on a Calibrated Balanced 5×2 Cross-Validated F-Test
    Neural Processing Letters, 2016
    Co-Authors: Yu Wang, Jihong Li, Yanfang Li
    Abstract: identical to the 2017 entry above.

Siyang Wang - One of the best experts on this subject based on the ideXlab platform.

Yu Wang - One of the best experts on this subject based on the ideXlab platform.

  • Choosing Between Two Classification Learning Algorithms Based on a Calibrated Balanced 5×2 Cross-Validated F-Test
    Neural Processing Letters, 2017
    Co-Authors: Yu Wang, Jihong Li, Yanfang Li
    Abstract: identical to the entry listed under Yanfang Li above.

  • Choosing Between Two Classification Learning Algorithms Based on a Calibrated Balanced 5×2 Cross-Validated F-Test
    Neural Processing Letters, 2016
    Co-Authors: Yu Wang, Jihong Li, Yanfang Li
    Abstract: identical to the 2017 entry above.