Statistical Power

The experts below are selected from a list of 294 experts worldwide, ranked by the ideXlab platform.

Herman Aguinis - One of the best experts on this subject based on the ideXlab platform.

  • Statistical power problems with moderated multiple regression in management research
    Journal of Management, 1995
    Co-Authors: Herman Aguinis
    Abstract:

    Due to the increasing importance of moderating (i.e., interaction) effects, the use of moderated multiple regression (MMR) has become pervasive in numerous management specialties such as organizational behavior, human resources management, and strategy, to name a few. Despite its popularity, recent research on the MMR approach to moderator variable detection has identified several factors that reduce statistical power below acceptable levels and, consequently, lead researchers to erroneously dismiss theoretical models that include moderated relationships. The present article (1) briefly describes MMR, (2) reviews factors that affect the statistical power of hypothesis tests conducted using this technique, (3) proposes solutions to low-power situations, and (4) discusses areas and problems related to MMR that are in need of further investigation. "If we want to know how well we are doing in the biological, psychological, and social sciences, an index that will serve us well is how far we have advanced in our understanding of the moderator variables of our field" (Hall & Rosenthal, 1991, p. 447).
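
To make the low-power problem concrete, the following minimal Monte Carlo sketch (Python with numpy and statsmodels) estimates the power of the MMR interaction test for an assumed small interaction effect. The coefficients, sample sizes, and alpha level are illustrative assumptions, not values taken from Aguinis (1995).

```python
# Minimal Monte Carlo sketch of statistical power for the MMR interaction test.
# All effect sizes, sample sizes, and alpha are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

def mmr_interaction_power(n=100, b_x=0.3, b_z=0.3, b_xz=0.1,
                          alpha=0.05, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.standard_normal(n)          # predictor
        z = rng.standard_normal(n)          # continuous moderator
        y = b_x * x + b_z * z + b_xz * x * z + rng.standard_normal(n)
        design = sm.add_constant(np.column_stack([x, z, x * z]))
        fit = sm.OLS(y, design).fit()
        if fit.pvalues[3] < alpha:          # column 3 is the interaction term
            rejections += 1
    return rejections / reps

# A small interaction effect at n = 100 is typically detected far less than
# 80% of the time; a much larger sample is needed to reach conventional power.
print(mmr_interaction_power(n=100))
print(mmr_interaction_power(n=400))
```

Adding measurement error to x and z, or dichotomizing the moderator, lowers the estimated power further; these are among the kinds of design factors such reviews identify.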

Patrizio E. Tressoldi - One of the best experts on this subject based on the ideXlab platform.

  • Replication unreliability in psychology: elusive phenomena or "elusive" statistical power?
    Frontiers in psychology, 2012
    Co-Authors: Patrizio E. Tressoldi
    Abstract:

    The focus of this paper is to analyse whether the unreliability of results related to certain controversial psychological phenomena may be a consequence of their low statistical power. Under null hypothesis statistical testing (NHST), still the most widely used statistical approach, unreliability derives from the failure to refute the null hypothesis, in particular when exact or quasi-exact replications of experiments are carried out. Taking as examples the results of meta-analyses of four controversial phenomena (subliminal semantic priming, the incubation effect in problem solving, unconscious thought theory, and non-local perception), it was found that, except for semantic priming on categorization, the statistical power to detect the expected effect size of the typical study is low or very low. The low power of most studies undermines the use of NHST to study phenomena with moderate or low effect sizes. We conclude by providing some suggestions on how to increase statistical power or use different statistical approaches to help discriminate whether the results obtained may or may not be used to support or refute the reality of a phenomenon with a small effect size.
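
The kind of calculation the paper turns on, the power of a "typical" study to detect a given effect size under NHST, can be sketched as follows. The effect size and per-group sample size are illustrative assumptions, not the meta-analytic values reported in the article.

```python
# Sketch: power of a typical two-group study to detect a small effect,
# and the per-group n needed for 80% power. Numbers are illustrative
# assumptions, not the meta-analytic values from the paper.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

d = 0.2            # assumed small standardized effect size (Cohen's d)
n_per_group = 30   # assumed size of a "typical" study

power = analysis.power(effect_size=d, nobs1=n_per_group, alpha=0.05)
needed_n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)

print(f"Power with n={n_per_group} per group: {power:.2f}")   # well below 0.80
print(f"n per group needed for 80% power: {needed_n:.0f}")    # several hundred
```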

Shaun Mcquitty - One of the best experts on this subject based on the ideXlab platform.

  • Statistical power and structural equation models in business research
    Journal of Business Research, 2004
    Co-Authors: Shaun Mcquitty
    Abstract:

    It has long been recognized that statistical power is important for structural equation models, but only recently has it become possible to estimate the power associated with the test of an entire model. This article discusses the relevance of power for structural equation models and measurement validation, then examines the degree of power associated with models published in business journals. Addressing this matter is essential, because statistical power directly affects the confidence with which test results can be interpreted. The issue is particularly relevant in light of the increased use of structural equation models in business research. Using articles from some leading business journals as examples, a survey finds that power tends to be either very low, implying that too many false models will not be rejected (Type II errors), or extremely high, causing over-rejection of tenable models (Type I errors). The implications of this finding are explored, and recommendations that should improve the validity and application of structural equation modeling in business research are offered.
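
One widely used way to estimate the power of the test of an entire structural equation model is the RMSEA-based approach of MacCallum, Browne, and Sugawara (1996), which compares null and alternative RMSEA values through noncentral chi-square distributions. The sketch below assumes illustrative RMSEA values, degrees of freedom, and sample sizes; none of the figures come from the surveyed articles.

```python
# Sketch of RMSEA-based power for the test of close fit in SEM
# (MacCallum, Browne, & Sugawara, 1996). All inputs are illustrative.
from scipy.stats import ncx2

def sem_close_fit_power(n, df, rmsea0=0.05, rmsea_a=0.08, alpha=0.05):
    """Power of the test of close fit: H0: RMSEA <= rmsea0 vs Ha: RMSEA = rmsea_a."""
    lam0 = (n - 1) * df * rmsea0 ** 2    # noncentrality under H0
    lam_a = (n - 1) * df * rmsea_a ** 2  # noncentrality under Ha
    crit = ncx2.ppf(1 - alpha, df, lam0) # critical chi-square value
    return ncx2.sf(crit, df, lam_a)      # probability of exceeding it under Ha

print(sem_close_fit_power(n=100, df=20))   # small sample, modest df: low power
print(sem_close_fit_power(n=500, df=100))  # large sample, many df: power near 1
```

In this framework power grows with both sample size and model degrees of freedom, which is why small, simple models estimated on small samples tend toward the low-power extreme while large, complex models on large samples approach power of 1.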

Julia Gaeckler - One of the best experts on this subject based on the ideXlab platform.

  • Statistical power of structural equation models in SCM research
    Journal of Purchasing and Supply Management, 2014
    Co-Authors: Dominik F. Riedl, Lutz Kaufmann, Julia Gaeckler
    Abstract:

    Prior research has emphasized the relevance of adequate statistical power for covariance-based structural equation modeling (CSEM). Nevertheless, reviews in domains other than supply chain management (SCM) have found that the magnitude of power tends to be inadequate. This finding is worrisome because statistical power directly affects the meaningfulness of conclusions based on CSEM. The issue is particularly relevant for the field of SCM in light of the increasing use of CSEM. An investigation of the statistical power of CSEM applications published in seven major SCM journals since 1999 confirms this criticism. Specifically, an analysis of 988 applications of CSEM indicates that 32% of all applications have too little power, increasing the probability of Type II errors, and that another 43% exhibit excessive power, increasing the probability of Type I errors. This paper emphasizes the importance of adequate statistical power for CSEM in SCM.
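
The two failure modes reported above, inadequate power in small samples and excessive power in very large ones, can be illustrated by sweeping the same RMSEA-based close-fit calculation over a range of sample sizes. The degrees of freedom, RMSEA values, and classification thresholds below are illustrative assumptions only.

```python
# Self-contained sweep showing too-low vs. excessive power for a model with
# an assumed 50 degrees of freedom. Thresholds and inputs are illustrative.
from scipy.stats import ncx2

def close_fit_power(n, df, rmsea0=0.05, rmsea_a=0.08, alpha=0.05):
    lam0 = (n - 1) * df * rmsea0 ** 2
    lam_a = (n - 1) * df * rmsea_a ** 2
    crit = ncx2.ppf(1 - alpha, df, lam0)
    return ncx2.sf(crit, df, lam_a)

for n in (100, 200, 400, 800, 1600):
    p = close_fit_power(n, df=50)
    label = "too low" if p < 0.80 else ("excessive" if p > 0.999 else "adequate")
    print(f"N={n:5d}  power={p:.3f}  ({label})")
```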

Ron Thompson - One of the best experts on this subject based on the ideXlab platform.

  • PLS, small sample size, and statistical power in MIS research
    Hawaii International Conference on System Sciences, 2006
    Co-Authors: Dale L Goodhue, William Lewis, Ron Thompson
    Abstract:

    There is a pervasive belief in the Management Information Systems (MIS) field that Partial Least Squares (PLS) has special abilities that make it more appropriate than other techniques, such as multiple regression and LISREL, for analyzing small sample sizes. We conducted a Monte Carlo simulation study to compare these three relatively popular techniques for modeling relationships among variables under varying sample sizes (N = 40, 90, 150, and 200) and varying effect sizes (large, medium, small, and no effect). The analysis focused on comparing the path estimates and the statistical power for each combination of technique, sample size, and effect size. The results suggest that PLS with bootstrapping does not have special abilities with respect to statistical power at small sample sizes. In fact, for simple models with normally distributed data and relatively reliable measures, none of the three techniques has adequate power to detect small or medium effects at small sample sizes. These findings run counter to extant suggestions in the MIS literature.
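
A simplified version of such a Monte Carlo comparison, restricted to ordinary least-squares regression (the PLS and LISREL arms are not reproduced here), is sketched below. The path coefficients standing in for small, medium, and large effects are illustrative assumptions, not the values used by the authors.

```python
# Monte Carlo sketch: power of an OLS regression path test at the sample
# sizes studied above. Effect-size coefficients are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

def regression_power(n, beta, alpha=0.05, reps=2000, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        y = beta * x + rng.standard_normal(n)
        fit = sm.OLS(y, sm.add_constant(x)).fit()
        if fit.pvalues[1] < alpha:   # test of the x coefficient
            hits += 1
    return hits / reps

for n in (40, 90, 150, 200):
    for label, beta in [("small", 0.1), ("medium", 0.3), ("large", 0.5)]:
        print(f"N={n:3d}  {label:6s} effect: power = {regression_power(n, beta):.2f}")
```

Consistent with the abstract, small effects remain hard to detect at N = 40; the sketch varies only sample size and effect size, not the analysis technique.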