Predictive Validity

The Experts below are selected from a list of 53,364 Experts worldwide ranked by the ideXlab platform

Judith A. Hall - One of the best experts on this subject based on the ideXlab platform.

  • Predictive Validity of Thin Slices of Verbal and Nonverbal Behaviors: Comparison of Slice Lengths and Rating Methodologies
    Journal of Nonverbal Behavior, 2020
    Co-Authors: Michael Z. Wang, Katrina Chen, Judith A. Hall
    Abstract:

    Thin slices, or excerpts of behavior, are commonly used by researchers to stand in for behavior in the full stimulus. The present study asked how slices of different lengths and locations, as well as different measurement methodologies, influence correlations between the measured behavior and different variables (Predictive Validity). We collected self-rated, perceiver-rated, and objectively measured data on 60 participants, each video-recorded during a 5-min interaction with a confederate. These videos were split into five 1-min slices and rated for verbal and nonverbal behaviors via global impressions, once using the same rater for all five slices and once using a different rater for each slice. For single slices, results indicated no clear pattern for optimal slice locations. In general, single slices had weaker Predictive Validity than the total. Slices of 2 or 3 min were, in general, equal to the 5-min total in Predictive Validity. The magnitude of correlations was similar whether the same or different coders were used, and the Predictive Validity correlations of the two methods covaried strongly across behavior-outcome variable combinations.
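
As a rough illustration of the analysis this abstract describes, the sketch below simulates per-minute slice ratings for 60 participants and compares the Predictive Validity (the correlation with an outcome) of single slices, shorter composites, and the full 5-min average. All data and parameters are invented for illustration and are not the study's; the point is only the aggregation logic, in which averaging more slices cancels rating noise.

```python
# Hypothetical sketch of the slice-length comparison: simulated data,
# not the study's. Predictive Validity here = correlation of a slice
# composite with an outcome variable.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 60                                    # participants, as in the study
true_behavior = rng.normal(size=n)        # latent level of the rated behavior
# Five 1-min slice ratings = latent level + slice-specific rating noise
slices = true_behavior[:, None] + rng.normal(scale=1.0, size=(n, 5))
outcome = 0.5 * true_behavior + rng.normal(scale=1.0, size=n)

for k in range(1, 6):                     # composite of the first k minutes
    composite = slices[:, :k].mean(axis=1)
    r, p = pearsonr(composite, outcome)
    print(f"first {k} min: r = {r:.2f} (p = {p:.3f})")
```

Because averaging reduces slice-specific noise, composites of 2-3 slices typically approach the validity of the full 5-min average, mirroring the pattern the abstract reports.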

  • Predictive Validity of Thin-Slice Nonverbal Behavior from Social Interactions
    Personality and Social Psychology Bulletin, 2018
    Co-Authors: Nora A. Murphy, Judith A. Hall, Mollie A. Ruben, Denise Frauendorfer, Marianne Schmid Mast, Kirsten E. Johnson, Laurent Son Nguyen
    Abstract:

    We present five studies investigating the Predictive Validity of thin slices of nonverbal behavior (NVB). Predictive Validity of thin slices refers to how well behavior slices excerpted from longer...

Michael Z. Wang - One of the best experts on this subject based on the ideXlab platform.

  • Predictive Validity of Thin Slices of Verbal and Nonverbal Behaviors: Comparison of Slice Lengths and Rating Methodologies
    Journal of Nonverbal Behavior, 2020
    Co-Authors: Michael Z. Wang, Katrina Chen, Judith A. Hall
    Abstract: see the entry of the same title under Judith A. Hall above.

Jay P Singh - One of the best experts on this subject based on the ideXlab platform.

  • Predictive Validity performance indicators in violence risk assessment: a methodological primer
    Behavioral Sciences & The Law, 2013
    Co-Authors: Jay P Singh
    Abstract:

    The Predictive Validity of violence risk assessments can be divided into two components: calibration and discrimination. The most common performance indicator used to measure the Predictive Validity of structured risk assessments, the area under the receiver operating characteristic curve (AUC), measures the latter component but not the former. As it does not capture how well a risk assessment tool’s predictions of risk agree with actual observed risk, the AUC provides an incomplete portrayal of Predictive Validity. This primer provides an overview of calibration and discrimination performance indicators that measure global performance, performance in identifying higher-risk groups, and performance in identifying lower-risk groups. It is recommended that future research into the Predictive Validity of violence risk assessment tools include a range of performance indicators that measure different facets of Predictive Validity, and that the limitations of reported indicators be routinely explicated.
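
To make the calibration/discrimination distinction concrete, the hedged sketch below (simulated data, not any real risk-assessment instrument) shows that a strictly monotone inflation of predicted risks leaves the AUC unchanged, because the AUC depends only on the rank ordering of scores, while a simple calibration summary, here the expected/observed (E/O) event ratio, exposes the miscalibration.

```python
# Simulated illustration: AUC measures discrimination (ranking) only,
# so two score sets with the same ordering get the same AUC even when
# one badly overstates absolute risk. Calibration must be checked
# separately, e.g., with the expected/observed (E/O) event ratio.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
p_true = rng.uniform(0.05, 0.60, size=n)   # each case's true risk
y = rng.binomial(1, p_true)                # observed outcomes (0/1)

scores = {
    "well-calibrated": p_true,             # predictions equal true risk
    "inflated":        np.sqrt(p_true),    # same ranking, risk overstated
}
for name, p in scores.items():
    auc = roc_auc_score(y, p)              # rank-based: identical for both
    eo = p.mean() / y.mean()               # ~1.0 when calibrated in the large
    print(f"{name}: AUC = {auc:.3f}, E/O = {eo:.2f}")
```

The E/O ratio is only one of the calibration indicators the primer surveys, but it already shows why reporting the AUC alone gives an incomplete picture of Predictive Validity.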

Sebastian Kaiser - One of the best experts on this subject based on the ideXlab platform.

  • Guidelines for choosing between multi-item and single-item scales for construct measurement: a Predictive Validity perspective
    Journal of the Academy of Marketing Science, 2012
    Co-Authors: Adamantios Diamantopoulos, Marko Sarstedt, Christoph Fuchs, Petra Wilczynski, Sebastian Kaiser
    Abstract:

    Establishing the Predictive Validity of measures is a major concern in marketing research. This paper investigates the conditions favoring the use of single items versus multi-item scales in terms of Predictive Validity. A series of complementary studies reveals that the Predictive Validity of single items varies considerably across different (concrete) constructs and stimulus objects. In an attempt to explain the observed instability, a comprehensive simulation study is conducted, aimed at identifying the influence of different factors on the Predictive Validity of single versus multi-item measures. These include the average inter-item correlations in the predictor and criterion constructs, the number of items measuring these constructs, and the correlation patterns of multiple and single items between the predictor and criterion constructs. The simulation results show that, under most conditions typically encountered in practical applications, multi-item scales clearly outperform single items in terms of Predictive Validity. Only under very specific conditions do single items perform as well as multi-item scales. Therefore, the use of single-item measures in empirical research should be approached with caution and limited to special circumstances.
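
A minimal sketch of the simulation logic, under an assumed parallel-items model with invented parameters: items load equally on a latent predictor, the criterion depends on that latent, and Predictive Validity is the correlation between a measure and the criterion. Averaging items cancels item-specific error, which is the mechanism behind the multi-item advantage the paper reports.

```python
# Hypothetical simulation sketch: compare the Predictive Validity of a
# single item against a multi-item composite under a parallel-items
# model. Parameters (loading, item count, criterion noise) are made up.
import numpy as np

rng = np.random.default_rng(2)
n, m = 5000, 4                           # respondents, items in the scale
inter_item_r = 0.5                       # assumed average inter-item correlation
loading = np.sqrt(inter_item_r)          # equal loadings: r_items = loading**2

latent = rng.normal(size=n)              # latent predictor construct
noise = rng.normal(size=(n, m))          # item-specific measurement error
items = loading * latent[:, None] + np.sqrt(1 - loading**2) * noise
criterion = 0.6 * latent + rng.normal(scale=0.8, size=n)

def validity(x, y):
    """Predictive Validity as the Pearson correlation with the criterion."""
    return np.corrcoef(x, y)[0, 1]

print(f"single item:       r = {validity(items[:, 0], criterion):.2f}")
print(f"{m}-item composite: r = {validity(items.mean(axis=1), criterion):.2f}")
```

Raising the assumed inter-item correlation narrows the gap between the two measures, consistent with the paper's finding that single items suffice only under specific conditions.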

Marko Sarstedt - One of the best experts on this subject based on the ideXlab platform.

  • Guidelines for choosing between multi-item and single-item scales for construct measurement: a Predictive Validity perspective
    Journal of the Academy of Marketing Science, 2012
    Co-Authors: Adamantios Diamantopoulos, Marko Sarstedt, Christoph Fuchs, Petra Wilczynski, Sebastian Kaiser
    Abstract: see the entry of the same title under Sebastian Kaiser above.