Statistical Hypothesis

The Experts below are selected from a list of 165783 Experts worldwide ranked by ideXlab platform

Fuminori Kanaya and Kenji Nakagawa - Two of the best experts on this subject based on the ideXlab platform.

  • on the converse theorem in Statistical Hypothesis testing
    IEEE Transactions on Information Theory, 1993
    Co-Authors: Kenji Nakagawa, Fuminori Kanaya
    Abstract:

    Simple Statistical Hypothesis testing is investigated by means of the divergence geometric method. The quantity studied is the asymptotic behavior of the minimum error probability of the second kind under the constraint that the error probability of the first kind is bounded above by exp(-rn), where r is a given positive number. If r is greater than the divergence of the two probability measures, the so-called converse theorem holds. It is shown that the condition under which the converse theorem holds can be divided into two separate cases by analyzing the geodesic connecting the two probability measures, and, as a result, an explanation is given for the Han-Kobayashi linear function f_T(X).

  • on the converse theorem in Statistical Hypothesis testing for markov chains
    IEEE Transactions on Information Theory, 1993
    Co-Authors: Kenji Nakagawa, Fuminori Kanaya
    Abstract:

    Hypothesis testing for two Markov chains is considered. Under the constraint that the error probability of the first kind is less than or equal to exp(-rn), the error probability of the second kind is minimized. The geodesic that connects the two Markov chains is defined. By analyzing the geodesic, the power exponents are calculated and then represented in terms of Kullback-Leibler divergence.
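
  As background for the two abstracts above (in standard textbook notation rather than the papers' own, with P the null and Q the alternative distribution), the quantity of interest is the minimum error probability of the second kind under an exponential bound on the error of the first kind, and Hoeffding's classical result identifies its exponent for r below the divergence D(Q||P):

    \[
      \beta_n^*(r) \;=\; \min\bigl\{\, Q^n(A) \;:\; P^n(A^c) \le e^{-rn} \,\bigr\},
      \qquad
      -\lim_{n\to\infty} \tfrac{1}{n}\log \beta_n^*(r)
      \;=\; \min_{V:\, D(V\|P)\le r} D(V\|Q)
      \quad (0 < r < D(Q\|P)).
    \]

  For r above the divergence the optimal error of the second kind no longer decays to zero, which is the converse regime the two papers analyze; in the Markov-chain paper the role of the divergence is played by the corresponding divergence rate between the two chains.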

Joel R. Levin - One of the best experts on this subject based on the ideXlab platform.

  • research news and comment: rejoinder: Statistical Hypothesis testing, effect size estimation, and the conclusion coherence of primary research studies
    Educational Researcher, 2000
    Co-Authors: Joel R. Levin, Daniel H. Robinson
    Abstract:

    Support, in the form of both a fable and a framework, is provided for a two-step approach to the estimation and discussion of effect sizes. A distinction is made between single-study decision-oriented research and multiple-study syntheses. The concept of “conclusion coherence” (the consistency between Statistical and verbal inference) is introduced and illustrated within each investigative context.

  • overcoming feelings of powerlessness in aging researchers: a primer on Statistical power in analysis of variance designs
    Psychology and Aging, 1997
    Co-Authors: Joel R. Levin
    Abstract:

    A general rationale and specific procedures for examining the Statistical power characteristics of psychology-of-aging empirical studies are provided. First, 4 basic ingredients of Statistical Hypothesis testing are reviewed. Then, 2 measures of effect size are introduced (standardized mean differences and the proportion of variation accounted for by the effect of interest), and methods are given for estimating these measures from already-completed studies. Power and sample size formulas, examples, and discussion are provided for common comparison-of-means designs, including independent-samples 1-factor and factorial analysis of variance (ANOVA) designs, analysis of covariance designs, repeated measures (correlated samples) ANOVA designs, and split-plot (combined between- and within-subjects) ANOVA designs. Because of past conceptual differences, special attention is given to the power associated with Statistical interactions, and cautions about applying the various procedures are indicated. Illustrative power estimations also are applied to a published study from the literature. It is argued that psychology-of-aging researchers will be both better informed consumers of what they read and more "empowered" with respect to what they research by understanding the important roles played by power and sample size in Statistical Hypothesis testing.
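
  To make the power calculations reviewed in the abstract above concrete, the sketch below (not code from the article) computes the power of the simplest design it covers, an independent-samples comparison of two means, from a standardized mean difference d, a per-group sample size, and a significance level; the numerical inputs are purely illustrative.

    import numpy as np
    from scipy import stats

    def two_sample_power(d, n_per_group, alpha=0.05):
        """Power of a two-sided, independent-samples t test.

        d            -- standardized mean difference (Cohen's d), assumed known
        n_per_group  -- sample size in each of the two groups
        alpha        -- type I error rate
        """
        df = 2 * n_per_group - 2                 # degrees of freedom
        nc = d * np.sqrt(n_per_group / 2.0)      # noncentrality parameter
        t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
        # Power = P(|T| > t_crit) when T follows a noncentral t distribution.
        return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

    # A medium effect (d = 0.5) with 64 participants per group gives
    # roughly 80% power at alpha = .05.
    print(round(two_sample_power(0.5, 64), 3))

  Solving the same relation for the sample size at a target power (e.g., .80) gives the sample-size-planning use of the formulas the article discusses.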

Michael Lindenbaum - One of the best experts on this subject based on the ideXlab platform.

  • Local Variation as a Statistical Hypothesis Test
    International Journal of Computer Vision, 2016
    Co-Authors: Michael Baltaxe, Peter Meer, Michael Lindenbaum
    Abstract:

    The goal of image oversegmentation is to divide an image into several pieces, each of which should ideally be part of an object. One of the simplest and yet most effective oversegmentation algorithms is known as local variation (LV) (Felzenszwalb and Huttenlocher, “Efficient graph-based image segmentation,” IJCV 59(2):167–181, 2004). In this work, we study this algorithm and show that algorithms similar to LV can be devised by applying different Statistical models and decisions, thus providing further theoretical justification and a well-founded explanation for the unexpectedly high performance of the LV approach. Some of these algorithms are based on statistics of natural images and on a Hypothesis testing decision; we denote these algorithms probabilistic local variation (pLV). The best pLV algorithm, which relies on censored estimation, presents state-of-the-art results while keeping the same computational complexity as the LV algorithm.

  • local variation as a Statistical Hypothesis test
    arXiv: Computer Vision and Pattern Recognition, 2015
    Co-Authors: Michael Baltaxe, Peter Meer, Michael Lindenbaum
    Abstract:

    The goal of image oversegmentation is to divide an image into several pieces, each of which should ideally be part of an object. One of the simplest and yet most effective oversegmentation algorithms is known as local variation (LV) (Felzenszwalb and Huttenlocher 2004). In this work, we study this algorithm and show that algorithms similar to LV can be devised by applying different Statistical models and decisions, thus providing further theoretical justification and a well-founded explanation for the unexpectedly high performance of the LV approach. Some of these algorithms are based on statistics of natural images and on a Hypothesis testing decision; we denote these algorithms probabilistic local variation (pLV). The best pLV algorithm, which relies on censored estimation, presents state-of-the-art results while keeping the same computational complexity as the LV algorithm.
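
  For readers unfamiliar with the LV algorithm analyzed in the two papers above, the sketch below shows the greedy merge criterion of the original Felzenszwalb-Huttenlocher (2004) graph-based segmentation algorithm that the pLV variants reinterpret statistically. It is an illustrative simplification rather than the authors' code; the scale parameter k and the edge-weight construction are placeholders.

    class UnionFind:
        """Disjoint sets tracking component size and internal variation Int(C)."""
        def __init__(self, n):
            self.parent = list(range(n))
            self.size = [1] * n
            self.internal = [0.0] * n  # largest merge-edge weight inside each component

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b, w):
            a, b = self.find(a), self.find(b)
            if self.size[a] < self.size[b]:
                a, b = b, a
            self.parent[b] = a
            self.size[a] += self.size[b]
            self.internal[a] = w  # edges arrive sorted, so w is the largest so far

    def local_variation_segmentation(n_nodes, edges, k=300.0):
        """Greedy LV-style merging over a weighted graph.

        edges -- iterable of (weight, u, v) tuples, e.g. color differences
                 between neighboring pixels; k is the scale parameter.
        """
        uf = UnionFind(n_nodes)
        for w, u, v in sorted(edges):
            ru, rv = uf.find(u), uf.find(v)
            if ru == rv:
                continue
            # Merge only if the connecting edge is no heavier than either
            # component's internal variation plus its size-dependent slack.
            if w <= min(uf.internal[ru] + k / uf.size[ru],
                        uf.internal[rv] + k / uf.size[rv]):
                uf.union(ru, rv, w)
        return [uf.find(i) for i in range(n_nodes)]  # component label per node

  According to the abstracts, the pLV variants keep this greedy merging structure but replace the fixed threshold comparison with a Hypothesis testing decision driven by natural-image statistics.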

Yolanda Vidal - One of the best experts on this subject based on the ideXlab platform.

  • wind turbine fault detection through principal component analysis and Statistical Hypothesis testing
    Advances in Science and Technology, 2016
    Co-Authors: Francesc Pozo, Yolanda Vidal
    Abstract:

    This work addresses the problem of online fault detection of an advanced wind turbine benchmark under actuator (pitch and torque) and sensor (pitch angle measurement) faults of different types. The fault detection scheme starts by computing the baseline principal component analysis (PCA) model from the healthy wind turbine. Subsequently, when the structure is inspected or supervised, new measurements are obtained and projected into the baseline PCA model. When both sets of data are compared, Statistical Hypothesis testing is used to decide whether or not the wind turbine presents a fault. The effectiveness of the proposed fault-detection scheme is illustrated by numerical simulations on a well-known large wind turbine in the presence of wind turbulence and realistic fault scenarios.

  • wind turbine fault detection through principal component analysis and Statistical Hypothesis testing
    Energies, 2015
    Co-Authors: Francesc Pozo, Yolanda Vidal
    Abstract:

    This paper addresses the problem of online fault detection of an advanced wind turbine benchmark under actuator (pitch and torque) and sensor (pitch angle measurement) faults of different types: fixed value, gain factor, offset and changed dynamics. The fault detection scheme starts by computing the baseline principal component analysis (PCA) model from the healthy or undamaged wind turbine. Subsequently, when the structure is inspected or supervised, new measurements are obtained and projected into the baseline PCA model. When both sets of data (the baseline and the data from the current wind turbine) are compared, Statistical Hypothesis testing is used to decide whether or not the wind turbine presents some damage, fault or misbehavior. The effectiveness of the proposed fault-detection scheme is illustrated by numerical simulations on a well-known large offshore wind turbine in the presence of wind turbulence and realistic fault scenarios. The obtained results demonstrate that the proposed strategy provides early fault identification, thereby giving operators sufficient time to make more informed decisions regarding the maintenance of their machines.
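
  The PCA-based scheme described in the two abstracts above can be sketched as follows. This is a simplified illustration rather than the authors' implementation: the synthetic data stand in for the benchmark signals, and the two-sample Welch t tests on the Hotelling T^2 and squared-prediction-error (SPE) monitoring statistics are stand-in decision rules, since the published test statistic may differ.

    import numpy as np
    from scipy import stats
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def fit_baseline(healthy, n_components=3):
        """Fit the baseline model (scaling + PCA) on healthy-turbine measurements."""
        scaler = StandardScaler().fit(healthy)
        pca = PCA(n_components=n_components).fit(scaler.transform(healthy))
        return scaler, pca

    def monitoring_statistics(scaler, pca, data):
        """Per-sample Hotelling T^2 (score space) and SPE (residual space)."""
        z = scaler.transform(data)
        scores = pca.transform(z)
        t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)
        residual = z - pca.inverse_transform(scores)
        spe = np.sum(residual ** 2, axis=1)
        return t2, spe

    def detect_fault(scaler, pca, healthy, new, alpha=0.01):
        """Compare projections of baseline and new data; True means 'fault suspected'."""
        t2_base, spe_base = monitoring_statistics(scaler, pca, healthy)
        t2_new, spe_new = monitoring_statistics(scaler, pca, new)
        _, p_t2 = stats.ttest_ind(t2_base, t2_new, equal_var=False)
        _, p_spe = stats.ttest_ind(spe_base, spe_new, equal_var=False)
        return min(p_t2, p_spe) < alpha / 2  # Bonferroni over the two tests

    # Toy usage with synthetic "sensor" data standing in for the benchmark signals.
    rng = np.random.default_rng(0)
    healthy = rng.normal(size=(500, 8))
    faulty = rng.normal(size=(200, 8))
    faulty[:, 2] += 2.0                                # offset fault on the third sensor
    scaler, pca = fit_baseline(healthy)
    print(detect_fault(scaler, pca, healthy, faulty))  # expected: True (fault flagged)
    print(detect_fault(scaler, pca, healthy,
                       rng.normal(size=(200, 8))))     # healthy batch, typically False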