Root-Mean-Square Error

The experts below are selected from a list of 524,778 experts worldwide, ranked by the ideXlab platform.

Edward R Dougherty - One of the best experts on this subject based on the ideXlab platform.

  • Moments and root-mean-square error of the Bayesian MMSE estimator of classification error in the Gaussian model
    Pattern Recognition, 2014
    Co-Authors: Amin Zollanvari, Edward R Dougherty
    Abstract:

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate, completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore the root-mean-square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the sample size necessary to achieve a desired RMS. The Supplementary Material contains derivations for some equations and additional figures.
    Highlights:
    - The Bayesian Minimum Mean-Square-Error (BMMSE) error estimator of LDA is studied.
    - The first, second, and cross moments of the BMMSE estimator with the true error are considered.
    - Both conditional and unconditional performance metrics are analyzed.
    - For the first time, the Kolmogorov double-asymptotic is used in a Bayesian setting.
    - Asymptotically exact finite-sample approximations of the performance metrics are derived.
    (A short numerical sketch of how these moments combine into the RMS follows this expert's entries.)

  • Moments and root-mean-square error of the Bayesian MMSE estimator of classification error in the Gaussian model
    arXiv: Machine Learning, 2013
    Co-Authors: Amin Zollanvari, Edward R Dougherty
    Abstract: identical to the Pattern Recognition (2014) entry above.
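The moments listed above determine the RMS directly: RMS² = E[(ε̂ − ε)²] = E[ε̂²] − 2·E[ε̂·ε] + E[ε²], so the second moment of the estimator, the second moment of the true error, and their cross moment are enough. The sketch below is a minimal illustration of that identity, not the paper's analytic approximations: eps and eps_hat are synthetic, correlated stand-ins for the true and estimated classification errors, with entirely hypothetical distributions and parameters.

```python
import numpy as np

# Synthetic stand-ins for the true LDA error (eps) and its Bayesian MMSE estimate (eps_hat).
# The paper derives these moments analytically under a Gaussian model; here they are
# simply estimated from simulated pairs to check the RMS-from-moments identity.
rng = np.random.default_rng(0)
n = 200_000
eps = np.clip(rng.normal(0.20, 0.03, n), 0.0, 1.0)             # "true" error
eps_hat = np.clip(eps + rng.normal(0.01, 0.04, n), 0.0, 1.0)   # "estimated" error

m2_est = np.mean(eps_hat ** 2)      # second moment of the estimator
m2_true = np.mean(eps ** 2)         # second moment of the true error
cross = np.mean(eps_hat * eps)      # cross moment of estimator and true error

rms_from_moments = np.sqrt(m2_est - 2.0 * cross + m2_true)
rms_direct = np.sqrt(np.mean((eps_hat - eps) ** 2))
print(f"RMS from moments:      {rms_from_moments:.5f}")
print(f"RMS computed directly: {rms_direct:.5f}")   # agrees up to floating-point error
```

This is also why the sample-size question in the abstract reduces to inverting an RMS expression: once the moments are available as functions of the training-sample size, one can increase that sample size until the implied RMS drops below the desired target.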

Tianfeng Chai - One of the best experts on this subject based on the ideXlab platform.

  • Root mean square error (RMSE) or mean absolute error (MAE)? Arguments against avoiding RMSE in the literature
    Geoscientific Model Development, 2014
    Co-Authors: Tianfeng Chai, Roland R. Draxler
    Abstract:

    Both the root mean square error (RMSE) and the mean absolute error (MAE) are regularly employed in model evaluation studies. Willmott and Matsuura (2005) have suggested that the RMSE is not a good indicator of average model performance and might be a misleading indicator of average error, and thus that the MAE would be a better metric for that purpose. While some concerns over using RMSE raised by Willmott and Matsuura (2005) and Willmott et al. (2009) are valid, the proposed avoidance of RMSE in favor of MAE is not the solution. Citing the aforementioned papers, many researchers have chosen MAE over RMSE when presenting their model evaluation statistics, even in cases where presenting or adding the RMSE measures could be more beneficial. In this technical note, we demonstrate that the RMSE is not ambiguous in its meaning, contrary to what was claimed by Willmott et al. (2009). The RMSE is more appropriate than the MAE for representing model performance when the error distribution is expected to be Gaussian. In addition, we show that the RMSE satisfies the triangle inequality required of a distance metric, whereas Willmott et al. (2009) indicated that sums-of-squares-based statistics do not satisfy this rule. Finally, we discuss some circumstances in which using the RMSE is more beneficial. We do not contend, however, that the RMSE is superior to the MAE; rather, a combination of metrics, including but certainly not limited to RMSE and MAE, is often required to assess model performance.
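A minimal sketch of the two properties invoked above, on invented data rather than anything from the paper: for the same residuals the MAE never exceeds the RMSE, and the RMSE, being a scaled Euclidean distance, satisfies the triangle inequality.

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error: square root of the mean squared difference."""
    d = np.asarray(pred) - np.asarray(obs)
    return float(np.sqrt(np.mean(d ** 2)))

def mae(pred, obs):
    """Mean absolute error: mean of the absolute differences."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(obs))))

rng = np.random.default_rng(42)
obs = rng.normal(size=1_000)
pred = obs + rng.normal(scale=0.5, size=1_000)   # Gaussian errors, where RMSE is the natural summary

print(f"MAE  = {mae(pred, obs):.4f}")
print(f"RMSE = {rmse(pred, obs):.4f}")           # always >= MAE (Cauchy-Schwarz)

# RMSE is the Euclidean distance divided by sqrt(n), so it inherits the triangle inequality:
a, b, c = rng.normal(size=(3, 1_000))
assert rmse(a, c) <= rmse(a, b) + rmse(b, c) + 1e-12
```

With zero-mean Gaussian errors, the sample RMSE also estimates the error standard deviation directly, which is one reason the note argues against discarding it wholesale in favor of the MAE.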

Amin Zollanvari - One of the best experts on this subject based on the ideXlab platform.

  • Moments and root-mean-square error of the Bayesian MMSE estimator of classification error in the Gaussian model
    Pattern Recognition, 2014
    Co-Authors: Amin Zollanvari, Edward R Dougherty
    Abstract: identical to the entry listed under Edward R Dougherty above.

  • Moments and root-mean-square error of the Bayesian MMSE estimator of classification error in the Gaussian model
    arXiv: Machine Learning, 2013
    Co-Authors: Amin Zollanvari, Edward R Dougherty
    Abstract: identical to the entry listed under Edward R Dougherty above.

Takehito Yoshiki - One of the best experts on this subject based on the ideXlab platform.

  • The mean square quasi-Monte Carlo error for digitally shifted digital nets
    Springer Proceedings in Mathematics and Statistics, 2016
    Co-Authors: Takashi Goda, Ryuichi Ohori, Kosuke Suzuki, Takehito Yoshiki
    Abstract:

    In this paper, we study randomized quasi-Monte Carlo (QMC) integration using digitally shifted digital nets. We express the mean square QMC error of the $n$-th discrete approximation $f_n$ of a function $f\colon [0,1)^s \to \mathbb{R}$ for digitally shifted digital nets in terms of the Walsh coefficients of $f$. We then apply a bound on the Walsh coefficients for sufficiently smooth integrands to obtain a quality measure, called the Walsh figure of merit for the root mean square error, which satisfies a Koksma-Hlawka type inequality on the root mean square error. Through two types of experiments, we confirm that our quality measure is useful for finding digital nets that show good convergence behavior of the root mean square error for smooth integrands. (A short numerical sketch of a random digital shift follows this expert's entries.)

  • The mean square quasi-Monte Carlo error for digitally shifted digital nets
    arXiv: Numerical Analysis, 2014
    Co-Authors: Takashi Goda, Ryuichi Ohori, Kosuke Suzuki, Takehito Yoshiki
    Abstract: identical, up to notation, to the 2016 proceedings entry above.
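A minimal sketch of the randomization analyzed above: a random digital shift XORs the base-2 digits of every point of a digital net with one random shift per dimension, and the root mean square QMC error is then taken over independent shifts. The snippet uses scipy's unscrambled Sobol' points as the digital net and a smooth product integrand with a known integral; both choices are illustrative assumptions, and nothing here reproduces the paper's Walsh-coefficient analysis or its figure of merit.

```python
import numpy as np
from scipy.stats import qmc

def digital_shift(points, rng, bits=32):
    """Digitally shift a base-2 net: XOR the first `bits` binary digits of each
    coordinate with one random shift per dimension, then map back to [0, 1)."""
    scale = 2 ** bits
    ints = (points * scale).astype(np.uint64)
    shift = rng.integers(0, scale, size=points.shape[1], dtype=np.uint64)
    return (ints ^ shift) / scale

s, m, n_shifts = 4, 10, 64                  # dimension, 2^m net points, number of random shifts
exact = (np.e - 1.0) ** s                   # integral of prod_j exp(x_j) over [0, 1)^s
net = qmc.Sobol(d=s, scramble=False).random_base2(m=m)   # a base-2 digital net

rng = np.random.default_rng(1)
estimates = [np.mean(np.prod(np.exp(digital_shift(net, rng)), axis=1)) for _ in range(n_shifts)]

rms_error = np.sqrt(np.mean((np.asarray(estimates) - exact) ** 2))
print(f"root mean square QMC error over {n_shifts} digital shifts: {rms_error:.2e}")
```

Because the shift acts digit-wise, each randomized point set remains a digital net with the same equidistribution properties, while the integral estimate becomes unbiased over the random shift; this is what makes the mean square error over shifts a meaningful quality criterion.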

Roland R. Draxler - One of the best experts on this subject based on the ideXlab platform.

  • Root mean square error (RMSE) or mean absolute error (MAE)? Arguments against avoiding RMSE in the literature
    Geoscientific Model Development, 2014
    Co-Authors: Tianfeng Chai, Roland R. Draxler
    Abstract: identical to the entry listed under Tianfeng Chai above.