Standardized Mean Difference

The experts below are selected from a list of 21,489 experts worldwide, ranked by the ideXlab platform.

James E. Pustejovsky - One of the best experts on this subject based on the ideXlab platform.

  • Procedural sensitivities of effect sizes for single-case designs with directly observed behavioral outcome measures
    Psychological Methods, 2019
    Co-Authors: James E. Pustejovsky
    Abstract:

    A wide variety of effect size indices have been proposed for quantifying the magnitude of treatment effects in single-case designs. Commonly used measures include parametric indices such as the Standardized Mean Difference as well as nonoverlap measures such as the percentage of nonoverlapping data, improvement rate difference, and nonoverlap of all pairs. Currently, little is known about the properties of these indices when applied to behavioral data collected by systematic direct observation, even though systematic direct observation is the most common method for outcome measurement in single-case research. This study uses Monte Carlo simulation to investigate the properties of several widely used single-case effect size measures when applied to systematic direct observation data. Results indicate that the magnitude of the nonoverlap measures and of the Standardized Mean Difference can be strongly influenced by procedural details of the study's design, which is a significant limitation to using these indices as effect sizes for meta-analysis of single-case designs. A less widely used parametric index, the log response ratio, has the advantage of being insensitive to sample size and observation session length, although its magnitude is influenced by the use of partial interval recording. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
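
    For readers who want the log response ratio in computable form, a minimal Python sketch follows (the function name and data are illustrative, not from the paper; it omits the small-sample bias corrections and the partial-interval-recording adjustments the paper analyzes):

        import numpy as np

        def log_response_ratio(baseline, treatment):
            # LRR = log of the ratio of phase means, with a delta-method
            # standard error; suitable for behavioral rate/count outcomes.
            a = np.asarray(baseline, float)
            b = np.asarray(treatment, float)
            lrr = np.log(b.mean() / a.mean())
            var = (a.var(ddof=1) / (len(a) * a.mean() ** 2)
                   + b.var(ddof=1) / (len(b) * b.mean() ** 2))
            return lrr, np.sqrt(var)

        # Session-level response rates from a baseline and a treatment phase
        est, se = log_response_ratio([7, 5, 8, 6, 7], [14, 12, 15, 13, 16])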

  • Testing for funnel plot asymmetry of Standardized Mean Differences
    Research Synthesis Methods, 2019
    Co-Authors: James E. Pustejovsky, Melissa A Rodgers
    Abstract:

    Publication bias and other forms of outcome reporting bias are critical threats to the validity of findings from research syntheses. A variety of methods have been proposed for detecting selective outcome reporting in a collection of effect size estimates, including several methods based on assessment of asymmetry of funnel plots, such as Egger's regression test, the rank correlation test, and the trim-and-fill test. Previous research has demonstrated that Egger's regression test is miscalibrated when applied to log-odds-ratio effect size estimates, because of artifactual correlation between the effect size estimate and its standard error. This study examines similar problems that occur in meta-analyses of the Standardized Mean Difference, a ubiquitous effect size measure in educational and psychological research. In a simulation study of Standardized Mean Difference effect sizes, we assess the Type I error rates of conventional tests of funnel plot asymmetry, as well as the likelihood ratio test from a three-parameter selection model. Results demonstrate that the conventional tests have inflated Type I error due to the correlation between the effect size estimate and its standard error, while tests based on either a simple modification to the conventional standard error formula or a variance-stabilizing transformation both maintain close-to-nominal Type I error.
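
    A sketch of the kind of test examined: an Egger-type weighted regression of the effect estimates on a predictor built only from sample sizes (one plausible version of the "simple modification" to the standard error; the paper's exact specification may differ, and all names here are illustrative):

        import numpy as np
        from scipy import stats

        def egger_modified(d, n1, n2):
            # Regress d on a sample-size-only standard error, weighting by
            # the inverse of the conventional variance; a nonzero slope
            # signals funnel plot asymmetry (small-study effects).
            d, n1, n2 = (np.asarray(a, float) for a in (d, n1, n2))
            v_conv = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
            se_mod = np.sqrt((n1 + n2) / (n1 * n2))   # drops the d**2 term
            X = np.column_stack([np.ones_like(d), se_mod])
            W = 1 / v_conv
            XtWX = X.T @ (X * W[:, None])
            beta = np.linalg.solve(XtWX, X.T @ (W * d))
            resid = d - X @ beta
            s2 = (W * resid**2).sum() / (len(d) - 2)
            se_slope = np.sqrt(s2 * np.linalg.inv(XtWX)[1, 1])
            t = beta[1] / se_slope
            return 2 * stats.t.sf(abs(t), len(d) - 2)  # two-sided p-value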

  • Between-case Standardized Mean Difference effect sizes for single-case designs: a primer and tutorial using the scdhlm web application
    Campbell Systematic Reviews, 2016
    Co-Authors: Jeffrey C. Valentine, James E. Pustejovsky, Emily E. Tanner-Smith, Timothy Lau
    Abstract:

    Single‐case research designs are critically important for understanding the effectiveness of interventions that target individuals with low incidence disabilities (e.g., physical disabilities, autism spectrum disorders). These designs comprise an important part of the evidence base in fields such as special education and school psychology, and can provide credible and persuasive evidence for guiding practice and policy decisions. In this paper we discuss the development and use of between‐case Standardized Mean Difference effect sizes for two popular single‐case research designs (the treatment reversal design and the multiple baseline design), and discuss how they might be used in meta‐analyses either with other single‐case research designs or in conjunction with between‐group research designs. Effect size computation is carried out using a user‐friendly web application, scdhlm, powered by the free statistical program R; no knowledge of R programming is needed to use this web application.
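
    As a conceptual illustration only, the estimand behind the between-case d can be written down in a few lines: a mean baseline-to-treatment change divided by a standard deviation that pools between-case and within-case variation. The Python sketch below ignores time trends, autocorrelation, and the REML small-sample corrections that the scdhlm application handles, and the data are invented:

        import numpy as np

        def naive_bc_smd(cases):
            # cases: list of (baseline, treatment) observation sequences,
            # one pair per case. Moment-based and deliberately naive.
            base = [np.asarray(b, float) for b, _ in cases]
            trt = [np.asarray(t, float) for _, t in cases]
            change = np.mean([t.mean() - b.mean() for b, t in zip(base, trt)])
            within = np.mean([b.var(ddof=1) for b in base])
            between = np.var([b.mean() for b in base], ddof=1)  # crude: includes sampling error
            return change / np.sqrt(between + within)

        naive_bc_smd([([3, 4, 3, 5], [8, 9, 7, 9]),
                      ([5, 6, 5, 4], [10, 9, 11, 10]),
                      ([2, 3, 2, 3], [6, 7, 7, 6])])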

  • Analysis and meta-analysis of single-case designs with a Standardized Mean Difference statistic: A primer and applications
    Journal of school psychology, 2013
    Co-Authors: William R. Shadish, Larry V. Hedges, James E. Pustejovsky
    Abstract:

    This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs.
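
    The meta-analytic machinery the article automates in R can be illustrated with a generic inverse-variance average; a minimal Python sketch follows (the numbers are made up, and the article's own d for single-case designs involves corrections, e.g. for autocorrelation, that are not reproduced here):

        import numpy as np

        # Small-sample correction J(m) = 1 - 3/(4m - 1), applied to a
        # d-type estimate with m degrees of freedom (Hedges-style), then a
        # fixed-effect inverse-variance average of three hypothetical studies.
        def hedges_correction(d, df):
            return (1 - 3 / (4 * df - 1)) * d

        d = np.array([0.52, 0.80, 0.31])   # per-study effect estimates (made up)
        v = np.array([0.05, 0.08, 0.04])   # their sampling variances (made up)
        w = 1 / v                          # inverse-variance weights
        avg = (w * d).sum() / w.sum()      # fixed-effect average
        se = np.sqrt(1 / w.sum())          # its standard error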

William R. Shadish - One of the best experts on this subject based on the ideXlab platform.

  • Analysis and meta-analysis of single-case designs with a Standardized Mean Difference statistic: A primer and applications
    Journal of school psychology, 2013
    Co-Authors: William R. Shadish, Larry V. Hedges, James E. Pustejovsky
    Abstract:

    This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs.

  • A Standardized Mean Difference effect size for multiple baseline designs across individuals.
    Research synthesis methods, 2013
    Co-Authors: Larry V. Hedges, James E. Pustejovsky, William R. Shadish
    Abstract:

    Single-case designs are a class of research methods for evaluating treatment effects by measuring outcomes repeatedly over time while systematically introducing different conditions (e.g., treatment and control) to the same individual. The designs are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single-case designs have focused attention on methods for summarizing and meta-analyzing findings and on the need for effect size indices that are comparable to those used in between-subjects designs. In previous work, we discussed how to define and estimate an effect size that is directly comparable to the Standardized Mean Difference often used in between-subjects research, based on data from a particular type of single-case design, the treatment reversal or (AB)^k design. This paper extends the effect size measure to another type of single-case study, the multiple baseline design. We propose estimation methods for the effect size and its variance, study the estimators using simulation, and demonstrate the approach in two applications. Copyright © 2013 John Wiley & Sons, Ltd.

  • A Standardized Mean Difference effect size for single case designs.
    Research synthesis methods, 2012
    Co-Authors: Larry V. Hedges, James E. Pustejovsky, William R. Shadish
    Abstract:

    Single case designs are a set of research methods for evaluating treatment effects by assigning different treatments to the same individual and measuring outcomes over time and are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single case designs have focused attention on the need for effect sizes for summarizing and meta-analyzing findings from the designs; although many effect size measures have been proposed, there is little consensus regarding their use. This article proposes a new effect size measure for single case research that is directly comparable with the Standardized Mean Difference (Cohen's d) often used in between-subjects designs. Techniques are provided for estimating the new effect size, as well as its variance, from balanced or unbalanced treatment reversal designs. The proposed estimation methods are further evaluated using a simulation study and then demonstrated in two applications. Copyright © 2012 John Wiley & Sons, Ltd.
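
    To make the design concrete, here is a small Python simulation of a balanced (AB)^k reversal design with autocorrelated errors, ending in a naive within-case d. All model settings are illustrative; the paper's actual estimator, which corrects for autocorrelation and the number of cases, is not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_abk(phase_len=10, k=2, delta=1.0, rho=0.4):
            # One case of a balanced (AB)^k design: k repetitions of a
            # baseline (A) phase followed by a treatment (B) phase, with
            # AR(1) errors of unit variance (settings are illustrative).
            n = 2 * k * phase_len
            e = np.empty(n)
            e[0] = rng.normal()
            for t in range(1, n):
                e[t] = rho * e[t - 1] + rng.normal(scale=np.sqrt(1 - rho**2))
            treat = np.tile(np.repeat([0, 1], phase_len), k)  # A B A B ...
            return treat, delta * treat + e

        treat, y = simulate_abk()
        naive_d = (y[treat == 1].mean() - y[treat == 0].mean()) / y[treat == 0].std(ddof=1)
        # The naive contrast above ignores the autocorrelation and the small
        # number of cases; the paper's estimator adjusts for both.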

Ken Kelley - One of the best experts on this subject based on the ideXlab platform.

  • Estimating the Standardized Mean Difference With Minimum Risk: Maximizing Accuracy and Minimizing Cost With Sequential Estimation.
    Psychological methods, 2016
    Co-Authors: Bhargab Chattopadhyay, Ken Kelley
    Abstract:

    The Standardized Mean Difference is a widely used effect size measure. In this article, we develop a general theory for estimating the population Standardized Mean Difference by minimizing both the Mean square error of the estimator and the total sampling cost. Fixed sample size methods, when sample size is planned before the start of a study, cannot simultaneously minimize both the Mean square error of the estimator and the total sampling cost. To overcome this limitation, this article develops a purely sequential sampling procedure, which provides an estimate of the sample size required to achieve a sufficiently accurate estimate with minimum expected sampling cost. Performance of the purely sequential procedure is examined via a simulation study to show that our analytic developments are highly accurate. Additionally, we provide freely available functions in R to implement the algorithm of the purely sequential procedure. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
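
    The flavor of a purely sequential procedure can be sketched in a few lines of Python. The stopping criterion below (a target standard error for the estimate) is only an illustrative stand-in for the paper's minimum-risk criterion, which balances mean square error against sampling cost; all function names and settings are hypothetical.

        import numpy as np

        rng = np.random.default_rng(7)

        def sequential_smd(draw_pair, pilot=10, target_se=0.10):
            # Purely sequential shape: start with a pilot sample, then keep
            # drawing one observation per group until the current estimate
            # is judged precise enough by the (illustrative) criterion.
            x, y = [], []
            for _ in range(pilot):
                a, b = draw_pair()
                x.append(a)
                y.append(b)
            while True:
                n = len(x)
                sp2 = (np.var(x, ddof=1) + np.var(y, ddof=1)) / 2  # pooled variance
                d = (np.mean(y) - np.mean(x)) / np.sqrt(sp2)
                if 2 / n + d**2 / (4 * n) <= target_se**2:  # large-sample var(d)
                    return d, n
                a, b = draw_pair()
                x.append(a)
                y.append(b)

        d, n = sequential_smd(lambda: (rng.normal(0, 1), rng.normal(0.5, 1)))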

  • Sample size planning for the Standardized Mean Difference: accuracy in parameter estimation via narrow confidence intervals.
    Psychological methods, 2006
    Co-Authors: Ken Kelley, Joseph R. Rausch
    Abstract:

    Methods are developed for planning sample size (SS) for the Standardized Mean Difference so that a narrow confidence interval (CI) can be obtained via the accuracy in parameter estimation (AIPE) approach. One method plans SS so that the expected width of the CI is sufficiently narrow. A modification adjusts the SS so that the obtained CI is no wider than desired with some specified degree of certainty (e.g., 99% certain the 95% CI will be no wider than omega). The rationale of the AIPE approach to SS planning is given, as is a discussion of the analytic approach to CI formation for the population Standardized Mean Difference. Tables with values of necessary SS are provided. The freely available MBESS (Methods for the Behavioral, Educational, and Social Sciences; K. Kelley, 2006a) package for R (R Development Core Team, 2006) implements the methods discussed.
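
    The AIPE logic (choose n so the interval is expected to be narrow enough) can be sketched as below. This uses the large-sample normal approximation to the CI width rather than the exact noncentral-t machinery the article develops, so it illustrates the approach without reproducing the published tables:

        import numpy as np
        from scipy import stats

        def n_per_group(delta, omega, conf=0.95):
            # Smallest per-group n with expected CI width for the SMD at
            # most omega, under the approximation var(d) ~ 2/n + delta^2/(4n).
            z = stats.norm.ppf(1 - (1 - conf) / 2)
            n = 2
            while 2 * z * np.sqrt(2 / n + delta**2 / (4 * n)) > omega:
                n += 1
            return n

        n_per_group(delta=0.50, omega=0.25)  # e.g., 95% CI no wider than 0.25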

  • Corrected Figure 3 from "Sample size planning for the Standardized Mean Difference: Accuracy in parameter estimation via narrow confidence intervals"
    2006
    Co-Authors: Ken Kelley, Joseph R. Rausch
    Abstract:

    This is the correct version of Figure 3 from the cited Kelley and Rausch (2006) article that appeared in Psychological Methods (Volume 11, pp. 363-385).

  • The effects of nonnormal distributions on confidence intervals around the Standardized Mean Difference: bootstrap and parametric confidence intervals
    Educational and Psychological Measurement, 2005
    Co-Authors: Ken Kelley
    Abstract:

    The Standardized group Mean Difference, Cohen’s d, is among the most commonly used and intuitively appealing effect sizes for group comparisons. However, reporting this point estimate alone does not convey the extent to which sampling error may have influenced the obtained value. A confidence interval expresses the uncertainty in d as an estimate of the population value, δ, that it represents. A set of Monte Carlo simulations was conducted to examine the performance of a noncentral approach analogous to that given by Steiger and Fouladi, as well as two bootstrap approaches, in situations in which the normality assumption is violated. Because d is positively biased, a procedure given by Hedges and Olkin is outlined, such that an unbiased estimate of δ can be obtained. The bias-corrected and accelerated bootstrap confidence interval using the unbiased estimate of δ is proposed and recommended for general use, especially in cases in which the assumption of normality may be violated.
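
    Recent SciPy exposes the bias-corrected and accelerated (BCa) bootstrap directly (multi-sample BCa requires SciPy >= 1.11, an assumption here), so the recommended interval can be sketched as follows; the bias correction applied to d is the usual Hedges-and-Olkin-style factor:

        import numpy as np
        from scipy import stats

        def unbiased_d(x, y):
            # Pooled-SD standardized mean difference with the usual
            # small-sample bias correction (an unbiased estimate of delta).
            nx, ny = len(x), len(y)
            sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                         / (nx + ny - 2))
            d = (np.mean(x) - np.mean(y)) / sp
            return d * (1 - 3 / (4 * (nx + ny) - 9))

        rng = np.random.default_rng(42)
        x = rng.exponential(1.0, 40) + 0.5  # deliberately nonnormal groups
        y = rng.exponential(1.0, 40)
        res = stats.bootstrap((x, y), unbiased_d, vectorized=False,
                              n_resamples=9999, method='BCa', random_state=rng)
        print(res.confidence_interval)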

Elena Kulinskaya - One of the best experts on this subject based on the ideXlab platform.

  • Estimation in meta-analyses of Mean Difference and Standardized Mean Difference
    Statistics in Medicine, 2020
    Co-Authors: Ilyas Bakbergenuly, David C. Hoaglin, Elena Kulinskaya
    Abstract:

    Methods for random-effects meta-analysis require an estimate of the between-study variance, τ². The performance of estimators of τ² (measured by bias and coverage) affects their usefulness in assessing heterogeneity of study-level effects and also the performance of related estimators of the overall effect. However, as we show, the performance of the methods varies widely among effect measures. For the effect measures Mean Difference (MD) and Standardized Mean Difference (SMD), we use improved effect-measure-specific approximations to the expected value of Q to introduce two new methods of point estimation of τ² for MD (Welch-type and corrected DerSimonian-Laird) and one Welch-type interval method. We also introduce one point estimator and one interval estimator for τ² in SMD. Extensive simulations compare our methods with four point estimators of τ² (the popular methods of DerSimonian-Laird, restricted maximum likelihood, and Mandel and Paule, and the less-familiar method of Jackson) and four interval estimators for τ² (profile likelihood, Q-profile, Biggerstaff and Jackson, and Jackson). We also study related point and interval estimators of the overall effect, including an estimator whose weights use only study-level sample sizes. We provide measure-specific recommendations from our comprehensive simulation study and discuss an example.
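
    For reference, the DerSimonian-Laird baseline that the paper's corrected estimators modify is a short method-of-moments computation; the sketch below gives it in Python (the paper's improved approximations to the expected value of Q are not reproduced):

        import numpy as np

        def dersimonian_laird(y, v):
            # Method-of-moments estimate of the between-study variance τ².
            # y: study-level effect estimates; v: their within-study variances.
            y, v = np.asarray(y, float), np.asarray(v, float)
            w = 1 / v
            mu = (w * y).sum() / w.sum()            # fixed-effect average
            Q = (w * (y - mu) ** 2).sum()           # Cochran's Q
            k = len(y)
            tau2 = (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum())
            return max(0.0, tau2)                   # truncate at zero

        dersimonian_laird([0.2, 0.5, 0.8, 0.1], [0.04, 0.06, 0.05, 0.03])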

  • Simulation study of estimating between-study variance and overall effect in meta-analysis of Standardized Mean Difference
    arXiv: Methodology, 2019
    Co-Authors: Ilyas Bakbergenuly, David C. Hoaglin, Elena Kulinskaya
    Abstract:

    Methods for random-effects meta-analysis require an estimate of the between-study variance, τ². The performance of estimators of τ² (measured by bias and coverage) affects their usefulness in assessing heterogeneity of study-level effects, and also the performance of related estimators of the overall effect. For the effect measure Standardized Mean Difference (SMD), we provide the results from extensive simulations on five point estimators of τ² (the popular methods of DerSimonian-Laird, restricted maximum likelihood, and Mandel and Paule (MP); the less-familiar method of Jackson; the new method (KDB) based on the improved approximation to the distribution of the Q statistic by Kulinskaya, Dollinger and Bjørkestøl (2011)), five interval estimators for τ² (profile likelihood, Q-profile, Biggerstaff and Jackson, Jackson, and the new KDB method), six point estimators of the overall effect (the five related to the point estimators of τ² and an estimator whose weights use only study-level sample sizes), and eight interval estimators for the overall effect (five based on the point estimators for τ²; the Hartung-Knapp-Sidik-Jonkman (HKSJ) interval; a modification of HKSJ; and an interval based on the sample-size-weighted estimator).
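
    Of the interval estimators listed, the Hartung-Knapp-Sidik-Jonkman (HKSJ) interval is easy to state: re-estimate the variance of the weighted mean from the weighted squared residuals and use a t critical value with k - 1 degrees of freedom. A Python sketch under those standard formulas (data are illustrative):

        import numpy as np
        from scipy import stats

        def hksj_interval(y, v, tau2, conf=0.95):
            # HKSJ interval for the overall effect, given any τ² estimate.
            y, v = np.asarray(y, float), np.asarray(v, float)
            w = 1 / (v + tau2)
            mu = (w * y).sum() / w.sum()            # random-effects average
            k = len(y)
            var_mu = (w * (y - mu) ** 2).sum() / ((k - 1) * w.sum())
            half = stats.t.ppf(1 - (1 - conf) / 2, k - 1) * np.sqrt(var_mu)
            return mu - half, mu + half

        hksj_interval([0.2, 0.5, 0.8, 0.1], [0.04, 0.06, 0.05, 0.03], tau2=0.02)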

  • Testing for Homogeneity in Meta-Analysis I. The One-Parameter Case: Standardized Mean Difference
    Biometrics, 2011
    Co-Authors: Elena Kulinskaya, Michael B Dollinger, Kirsten Bjorkestol
    Abstract:

    Meta-analysis seeks to combine the results of several experiments in order to improve the accuracy of decisions. It is common to use a test for homogeneity to determine if the results of the several experiments are sufficiently similar to warrant their combination into an overall result. Cochran's Q statistic is frequently used for this homogeneity test. It is often assumed that Q follows a chi-square distribution under the null hypothesis of homogeneity, but it has long been known that this asymptotic distribution for Q is not accurate for moderate sample sizes. Here, we present an expansion for the mean of Q under the null hypothesis that is valid when the effect and the weight for each study depend on a single parameter, but for which neither normality nor independence of the effect and weight estimators is needed. This expansion represents an order O(1/n) correction to the usual chi-square moment in the one-parameter case. We apply the result to the homogeneity test for meta-analyses in which the effects are measured by the Standardized Mean Difference (Cohen's d-statistic). In this situation, we recommend approximating the null distribution of Q by a chi-square distribution with fractional degrees of freedom that are estimated from the data using our expansion for the mean of Q. The resulting homogeneity test is substantially more accurate than the currently used test. We provide a program, available at the Paper Information link at the Biometrics website, for making the necessary calculations.
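
    The recommendation at the end is mechanically simple once the expansion supplies an estimate of the null mean of Q: because a chi-square distribution's mean equals its degrees of freedom, matching the first moment gives fractional df. A sketch (the expansion for the mean of Q itself is paper-specific and not reproduced; the numbers are illustrative):

        from scipy import stats

        def homogeneity_p(Q, EQ):
            # Approximate p-value for Cochran's Q using a chi-square with
            # fractional degrees of freedom df = EQ, where EQ estimates the
            # null mean of Q (e.g., from the paper's O(1/n) expansion).
            return stats.chi2.sf(Q, df=EQ)

        homogeneity_p(Q=18.3, EQ=11.6)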

Larry V. Hedges - One of the best experts on this subject based on the ideXlab platform.

  • Analysis and meta-analysis of single-case designs with a Standardized Mean Difference statistic: A primer and applications
    Journal of school psychology, 2013
    Co-Authors: William R. Shadish, Larry V. Hedges, James E. Pustejovsky
    Abstract:

    This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs.

  • A Standardized Mean Difference effect size for multiple baseline designs across individuals.
    Research synthesis methods, 2013
    Co-Authors: Larry V. Hedges, James E. Pustejovsky, William R. Shadish
    Abstract:

    Single-case designs are a class of research methods for evaluating treatment effects by measuring outcomes repeatedly over time while systematically introducing different conditions (e.g., treatment and control) to the same individual. The designs are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single-case designs have focused attention on methods for summarizing and meta-analyzing findings and on the need for effect size indices that are comparable to those used in between-subjects designs. In previous work, we discussed how to define and estimate an effect size that is directly comparable to the Standardized Mean Difference often used in between-subjects research, based on data from a particular type of single-case design, the treatment reversal or (AB)^k design. This paper extends the effect size measure to another type of single-case study, the multiple baseline design. We propose estimation methods for the effect size and its variance, study the estimators using simulation, and demonstrate the approach in two applications. Copyright © 2013 John Wiley & Sons, Ltd.

  • A Standardized Mean Difference effect size for single case designs.
    Research synthesis methods, 2012
    Co-Authors: Larry V. Hedges, James E. Pustejovsky, William R. Shadish
    Abstract:

    Single case designs are a set of research methods for evaluating treatment effects by assigning different treatments to the same individual and measuring outcomes over time and are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single case designs have focused attention on the need for effect sizes for summarizing and meta-analyzing findings from the designs; although many effect size measures have been proposed, there is little consensus regarding their use. This article proposes a new effect size measure for single case research that is directly comparable with the Standardized Mean Difference (Cohen's d) often used in between-subjects designs. Techniques are provided for estimating the new effect size, as well as its variance, from balanced or unbalanced treatment reversal designs. The proposed estimation methods are further evaluated using a simulation study and then demonstrated in two applications. Copyright © 2012 John Wiley & Sons, Ltd.