The Experts below are selected from a list of 279 Experts worldwide ranked by the ideXlab platform
Ethem Alpaydin - One of the best experts on this subject based on the ideXlab platform.
-
Combined 5 × 2 cv F Test for Comparing Supervised Classification Learning Algorithms
Neural Computation, 1999. Co-Authors: Ethem Alpaydin. Abstract: Dietterich (1998) reviews five statistical tests and proposes the 5 × 2 cv t test for determining whether there is a significant difference between the error rates of two classifiers. In our experiments, we noticed that the 5 × 2 cv t test result may vary depending on factors that should not affect the test, and we propose a variant, the combined 5 × 2 cv F test, that combines multiple statistics to get a more robust test. Simulation results show that this combined version of the test has lower type I error and higher power than 5 × 2 cv proper.
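The combined statistic pools all ten fold-level differences: with p_i^(j) the difference in error rates of the two classifiers on fold j of replication i, the statistic is F = sum over i,j of (p_i^(j))^2 divided by 2 * sum over i of s_i^2, referred to an F(10, 5) distribution under the null. A minimal sketch (function name and input layout are illustrative, not from the paper):

```python
import numpy as np

def combined_5x2cv_f(diffs):
    """Combined 5x2 cv F statistic (Alpaydin, 1999).

    diffs: array-like of shape (5, 2); diffs[i, j] is the difference in
    error rates between the two classifiers on fold j of replication i.
    Under H0, the statistic follows an F(10, 5) distribution.
    """
    diffs = np.asarray(diffs, dtype=float)
    assert diffs.shape == (5, 2), "expects 5 replications of 2-fold CV"
    p_bar = diffs.mean(axis=1)                          # mean per replication
    s2 = ((diffs - p_bar[:, None]) ** 2).sum(axis=1)    # variance estimate s_i^2
    return (diffs ** 2).sum() / (2.0 * s2.sum())
```

A large value of the statistic (relative to the F(10, 5) critical value at the chosen significance level) indicates a significant difference between the two classifiers.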
Yanfang Li - One of the best experts on this subject based on the ideXlab platform.
-
Choosing Between Two Classification Learning Algorithms Based on Calibrated Balanced $$5\times 2$$ Cross-Validated F Test
Neural Processing Letters, 2017. Co-Authors: Yu Wang, Jihong Li, Yanfang Li. Abstract: The $$5\times 2$$ cross-validated F test based on five independent replications of 2-fold cross-validation is recommended for choosing between two classification learning algorithms. However, reusing the same data within a $$5\times 2$$ cross-validation causes the real degrees of freedom (DOF) of the test to be lower than those of the F(10, 5) distribution given by (Neural Comput 11:1885–1892, [1]). This easily leads the test to suffer from high type I and type II errors. Random partitions for $$5\times 2$$ cross-validation make the DOF of the test difficult to analyze. Notably, Wang et al. (Neural Comput 26(1):208–235, [2]) proposed a new blocked $$3 \times 2$$ cross-validation that considers the correlation between any two 2-fold cross-validations. Building on this, a calibrated balanced $$5\times 2$$ cross-validated F test following the F(7, 5) distribution is put forward in this study by calibrating the DOF of the F(10, 5) distribution. Simulated and real data studies demonstrate that the calibrated balanced $$5\times 2$$ cross-validated F test has lower type I and type II errors than the $$5\times 2$$ cross-validated F test following F(10, 5) in most cases.
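The calibration changes only the reference distribution: the statistic is computed as in the combined 5 × 2 cv F test, but compared against an F(7, 5) critical value instead of F(10, 5). A minimal sketch of the resulting decision rule at the 5% level; the critical values below are assumed readings from standard F tables (roughly F_0.05(10, 5) ≈ 4.74 and F_0.05(7, 5) ≈ 4.88), so verify them against your own table or software before relying on them:

```python
def calibrated_decision(f_stat):
    """Reject/accept decision for a combined 5x2 cv F statistic at the
    5% level, under both reference distributions discussed in the paper.
    Critical values are assumed from standard F tables (see lead-in)."""
    CRIT_F10_5 = 4.74   # F_0.05(10, 5): the uncalibrated reference
    CRIT_F7_5 = 4.88    # F_0.05(7, 5): the calibrated DOF of this paper
    return {
        "reject_uncalibrated": f_stat > CRIT_F10_5,
        "reject_calibrated": f_stat > CRIT_F7_5,
    }
```

Because F(7, 5) has a heavier upper tail than F(10, 5), the calibrated test is slightly more conservative: statistics falling between the two critical values are rejected only by the uncalibrated rule.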
Siyang Wang - One of the best experts on this subject based on the ideXlab platform.
-
Generalized F Test for High Dimensional Linear Regression Coefficients
Journal of Multivariate Analysis, 2013. Co-Authors: Siyang Wang. Abstract: To test the regression coefficients of linear models, the conventional F test is commonly used. This paper investigates the performance of the generalized F test for testing regression coefficients in high dimensional linear regression in the regime p/n → ρ ∈ (0, 1). The asymptotic normality of the generalized F statistic is obtained under some regularity conditions, and the power of the test is then derived. Some comparisons and an illustrative example are also presented.
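For context, the conventional F test the abstract starts from compares the residual sum of squares of the full model with that of the intercept-only model. A baseline sketch of that classical statistic (illustrative only; this is the fixed-p starting point, not the paper's high-dimensional generalization):

```python
import numpy as np

def f_statistic_all_coeffs(X, y):
    """Classical overall F statistic for H0: all slope coefficients are 0
    in y = b0 + X b + e, with n > p + 1.  Under H0 and Gaussian errors it
    follows F(p, n - p - 1)."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])           # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)   # OLS fit
    rss_full = np.sum((y - Xd @ beta) ** 2)         # full-model RSS
    rss_null = np.sum((y - y.mean()) ** 2)          # intercept-only RSS
    return ((rss_null - rss_full) / p) / (rss_full / (n - p - 1))
```

The high-dimensional setting breaks this statistic's F(p, n - p - 1) null distribution when p grows proportionally with n, which is what motivates the generalized test and its asymptotic normality result.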
Yu Wang - One of the best experts on this subject based on the ideXlab platform.
-
Choosing Between Two Classification Learning Algorithms Based on Calibrated Balanced $$5\times 2$$ Cross-Validated F Test
Neural Processing Letters, 2017. Co-Authors: Yu Wang, Jihong Li, Yanfang Li. Abstract: The $$5\times 2$$ cross-validated F test based on five independent replications of 2-fold cross-validation is recommended for choosing between two classification learning algorithms. However, reusing the same data within a $$5\times 2$$ cross-validation causes the real degrees of freedom (DOF) of the test to be lower than those of the F(10, 5) distribution given by (Neural Comput 11:1885–1892, [1]). This easily leads the test to suffer from high type I and type II errors. Random partitions for $$5\times 2$$ cross-validation make the DOF of the test difficult to analyze. Notably, Wang et al. (Neural Comput 26(1):208–235, [2]) proposed a new blocked $$3 \times 2$$ cross-validation that considers the correlation between any two 2-fold cross-validations. Building on this, a calibrated balanced $$5\times 2$$ cross-validated F test following the F(7, 5) distribution is put forward in this study by calibrating the DOF of the F(10, 5) distribution. Simulated and real data studies demonstrate that the calibrated balanced $$5\times 2$$ cross-validated F test has lower type I and type II errors than the $$5\times 2$$ cross-validated F test following F(10, 5) in most cases.