All-or-None Thinking


The Experts below are selected from a list of 36 Experts worldwide, ranked by the ideXlab platform.

Paul Maruff - One of the best experts on this subject based on the ideXlab platform.

  • The use of effect sizes to characterize the nature of cognitive change in psychopharmacological studies : an example with scopolamine
    Human psychopharmacology, 2008
    Co-Authors: Amy Fredrickson, Peter J. Snyder, Jennifer R. Cromer, Elizabeth Thomas, Matthew Lewis, Paul Maruff
    Abstract:

    Drug-induced cognitive change is generally investigated using small sample sizes. Under null hypothesis significance testing (NHST), this can render a meaningful change non-significant as a result of insufficient statistical power. NHST encourages 'all-or-none' thinking, in which a non-significant result is interpreted as an absence of change. An effect size calculation indicates the magnitude of change that has occurred post-intervention, and therefore whether a statistically significant result is also a meaningful one. We used a scopolamine challenge to demonstrate the usefulness of effect sizes. The aim of the study was to determine how effect sizes could describe the cognitive changes that occur following administration of subcutaneous (s.c.) scopolamine. Twenty-four healthy young males (M = 32.6, SD = 4.5 years) were administered placebo and 0.2 mg, 0.4 mg, and 0.6 mg of s.c. scopolamine in a four-way crossover design. Memory, learning, psychomotor function, attention, and executive function were assessed. Scopolamine significantly impaired performance on all tasks in a dose- and time-related manner. These results demonstrate the utility of effect-size change scores for drawing comparisons between different times and doses. This methodology overcomes the limitations of comparing studies that use different tasks, doses, and times at which cognitive functions are measured.
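    The abstract's central point, that an underpowered NHST comparison can report "no change" while an effect size still reveals a meaningful one, can be sketched numerically. The change scores, sample size, and critical value below are invented for illustration and are not data from the scopolamine study:

    ```python
    from math import sqrt
    from statistics import mean, stdev

    # Hypothetical paired change scores (post - pre) for n = 10 participants.
    # These numbers are illustrative only, not the study's data.
    diffs = [0.75, -0.45, 0.45, 1.05, -0.65, 0.65, 0.05, 0.95, -0.35, 0.55]

    n = len(diffs)
    d = mean(diffs) / stdev(diffs)  # Cohen's d for paired (change-score) data
    t = d * sqrt(n)                 # equivalent paired-sample t statistic
    t_crit = 2.262                  # two-tailed critical t, alpha = .05, df = 9

    print(f"d = {d:.2f}, t = {t:.2f}")  # → d = 0.49, t = 1.56
    # NHST verdict: non-significant (t < t_crit), yet d is close to a
    # conventional "medium" effect — the 'all-or-none' reading would
    # wrongly conclude no change occurred.
    print("significant" if abs(t) > t_crit else "non-significant")
    ```

    With a larger sample the same effect size would cross the significance threshold, which is exactly why reporting the magnitude separately from the verdict matters.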

Kelli Huber - One of the best experts on this subject based on the ideXlab platform.

  • Measuring Individual Differences in the Perfect Automation Schema.
    Human factors, 2015
    Co-Authors: Stephanie M. Merritt, Jennifer L. Unnerstall, Deborah Lee, Kelli Huber
    Abstract:

    OBJECTIVE: A self-report measure of the perfect automation schema (PAS) is developed and tested. BACKGROUND: Researchers have hypothesized that the extent to which users possess a PAS is associated with greater decreases in trust after users encounter automation errors. However, no measure of the PAS currently exists. We developed a self-report measure assessing two proposed PAS factors: high expectations and all-or-none thinking about automation performance. METHOD: In two studies, participants responded to our PAS measure, interacted with imperfect automated aids, and reported trust. RESULTS: Each of the two PAS measure factors demonstrated fit to the hypothesized factor structure, as well as convergent and discriminant validity when compared with propensity to trust machines and trust in a specific aid. However, the high-expectations and all-or-none-thinking scales showed low intercorrelations and differential relationships with outcomes, suggesting that they might best be considered two separate constructs rather than two subfactors of the PAS. All-or-none thinking had significant associations with decreases in trust following aid errors, whereas high expectations did not. These results suggest that the all-or-none-thinking scale may best represent the PAS construct. CONCLUSION: Our PAS measure (specifically, the all-or-none-thinking scale) significantly predicted the severe trust decreases thought to be associated with a high PAS. Further, it demonstrated acceptable psychometric properties across two samples. APPLICATION: This measure may be used in future work to assess levels of the PAS in users of automated systems in either research or applied settings.
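    The "low intercorrelation" finding can be made concrete with a short sketch of how a subscale intercorrelation is computed. The subscale scores below are invented for illustration and are not Merritt et al.'s data:

    ```python
    from math import sqrt
    from statistics import mean

    def pearson(x, y):
        """Pearson correlation between two equal-length score lists."""
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
        return cov / den

    # Hypothetical mean subscale scores (1-5 Likert) for 8 respondents.
    high_expectations = [4.2, 3.8, 4.5, 3.1, 4.0, 3.6, 4.4, 3.3]
    all_or_none      = [3.0, 2.4, 3.6, 2.8, 3.4, 2.6, 2.9, 3.3]

    r = pearson(high_expectations, all_or_none)
    print(f"r = {r:.2f}")  # → r = 0.35
    # Subscales of a single construct typically correlate much more strongly;
    # a modest r like this is the pattern that led the authors to treat the
    # two scales as separate constructs.
    ```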

  • Continuous Calibration of Trust in Automated Systems
    2014
    Co-Authors: Stephanie M. Merritt, Kelli Huber, Jennifer Lachapell-unnerstall, Deborah Lee
    Abstract:

    This report details three studies conducted to explore user calibration of trust in automation. In the first, we find that all-or-none thinking about automation reliability was associated with severe decreases in trust following an aid error, but high expectations for automation performance were not. In the second study, we examine predictors and outcomes of trust calibration, measuring calibration in three different ways. We found that awareness of the aid's accuracy trajectory (whether it was becoming more or less reliable over time) was a significant predictor of calibration. However, none of the three measures of calibration had strong associations with task performance or the ability to identify aid errors. We also describe the conceptual premise and design of our third and final study, which examines the development, loss, and recovery of trust in a route-planning aid in a military simulation context. The results of that study will be presented in our final report.
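    The abstract does not spell out the report's three calibration measures. One common operationalization of trust calibration, sketched below with invented numbers, is the correlation between a user's trust ratings and the aid's observed reliability across trial blocks:

    ```python
    from math import sqrt
    from statistics import mean

    # Hypothetical per-block data: the aid's observed accuracy and one user's
    # trust rating (1-7) over six trial blocks. Values are illustrative only
    # and do not come from the report's studies.
    aid_accuracy = [0.90, 0.85, 0.70, 0.60, 0.75, 0.88]
    trust_rating = [6.0, 5.5, 4.0, 3.5, 4.5, 6.5]

    def pearson(x, y):
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
        return cov / den

    calibration = pearson(aid_accuracy, trust_rating)
    print(f"calibration r = {calibration:.2f}")  # → calibration r = 0.96
    # An r near +1 means trust rises and falls with the aid's actual
    # reliability — well-calibrated trust. An r near 0 would indicate trust
    # that is insensitive to the aid's accuracy trajectory.
    ```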

Stephanie M. Merritt - One of the best experts on this subject based on the ideXlab platform.

  • Measuring Individual Differences in the Perfect Automation Schema.
    Human factors, 2015
    Co-Authors: Stephanie M. Merritt, Jennifer L. Unnerstall, Deborah Lee, Kelli Huber
    Abstract: (duplicate of the entry under Kelli Huber above)

  • Continuous Calibration of Trust in Automated Systems
    2014
    Co-Authors: Stephanie M. Merritt, Kelli Huber, Jennifer Lachapell-unnerstall, Deborah Lee
    Abstract: (duplicate of the entry under Kelli Huber above)

Amy Fredrickson - One of the best experts on this subject based on the ideXlab platform.

  • The use of effect sizes to characterize the nature of cognitive change in psychopharmacological studies : an example with scopolamine
    Human psychopharmacology, 2008
    Co-Authors: Amy Fredrickson, Peter J. Snyder, Jennifer R. Cromer, Elizabeth Thomas, Matthew Lewis, Paul Maruff
    Abstract: (duplicate of the entry under Paul Maruff above)

Deborah Lee - One of the best experts on this subject based on the ideXlab platform.

  • Measuring Individual Differences in the Perfect Automation Schema.
    Human factors, 2015
    Co-Authors: Stephanie M. Merritt, Jennifer L. Unnerstall, Deborah Lee, Kelli Huber
    Abstract: (duplicate of the entry under Kelli Huber above)

  • Continuous Calibration of Trust in Automated Systems
    2014
    Co-Authors: Stephanie M. Merritt, Kelli Huber, Jennifer Lachapell-unnerstall, Deborah Lee
    Abstract: (duplicate of the entry under Kelli Huber above)