Unequal Variance

The Experts below are selected from a list of 25392 Experts worldwide ranked by ideXlab platform

Graeme D. Ruxton - One of the best experts on this subject based on the ideXlab platform.

  • The Unequal Variance t-test is an underused alternative to Student's t-test and the Mann–Whitney U test
    Behavioral Ecology, 2006
    Co-Authors: Graeme D. Ruxton
    Abstract:

    Often in the study of behavioral ecology, and more widely in science, we need to test statistically whether the central tendencies (mean or median) of 2 groups differ from each other on the basis of samples of the 2 groups. In surveying recent issues of Behavioral Ecology (Volume 16, issues 1–5), I found that, of the 130 papers, 33 (25%) used at least one statistical comparison of this sort. Three different tests were used to make this comparison: Student’s t-test (67 occasions; 26 papers), Mann–Whitney U test (43 occasions; 21 papers), and the t-test for Unequal Variances (9 occasions; 4 papers). My aim in this forum article is to argue for the greater use of the last of these tests. The numbers just related suggest that this test is not commonly used. In my survey, I was able to identify with confidence tests described simply as “t-tests” as either a Student’s t-test or an Unequal Variance t-test, because the calculation of degrees of freedom from the 2 sample sizes differs between the 2 tests (see below). Hence, the neglect of the Unequal Variance t-test illustrated above is a real phenomenon and can be explained in several (nonexclusive) ways: 1. Authors are unaware that Student’s t-test is unreliable

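The abstract's point that the two tests can be told apart by their degrees of freedom is easy to make concrete. A minimal sketch (arbitrary simulated samples, not Ruxton's survey data) comparing Student's pooled df with the Welch–Satterthwaite approximation, using SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=30)   # group 1: small variance
b = rng.normal(0.5, 3.0, size=20)   # group 2: much larger variance

# Student's t-test pools the variances; df = n1 + n2 - 2 = 48
t_student, p_student = stats.ttest_ind(a, b, equal_var=True)

# Unequal variance (Welch's) t-test; df from the Welch-Satterthwaite formula
t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)

# The Welch-Satterthwaite degrees of freedom by hand
v1 = a.var(ddof=1) / len(a)
v2 = b.var(ddof=1) / len(b)
df_welch = (v1 + v2) ** 2 / (v1**2 / (len(a) - 1) + v2**2 / (len(b) - 1))
# df_welch is non-integer and smaller than 48, which is how the two tests
# can be distinguished in a published Methods section
```

Reporting the fractional df is also what makes the choice of test auditable by later readers, as the survey above relied on.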

Julie Brimblecombe - One of the best experts on this subject based on the ideXlab platform.

  • A comparison of dietary estimates from the National Aboriginal and Torres Strait Islander Health Survey to food and beverage purchase data
    Australian and New Zealand Journal of Public Health, 2017
    Co-Authors: Emma Mcmahon, Thomas P Wycherley, Kerin Odea, Julie Brimblecombe
    Abstract:

    Objective: We compared self-reported dietary intake from the very remote sample of the National Aboriginal and Torres Strait Islander Nutrition and Physical Activity Survey (VR-NATSINPAS; n=1,363) to one year of food and beverage purchases from 20 very remote Indigenous Australian communities (servicing ∼8,500 individuals). Methods: Differences in food (% energy from food groups) and nutrients were analysed using the t-test with Unequal Variance. Results: Per-capita energy estimates were not significantly different between the surveys (899 MJ/person/day [95% confidence interval −152 to 1,950], p=0.094). Self-reported intakes of sugar, cereal products/dishes, beverages, fats/oils, milk products/dishes and confectionery were significantly lower than those purchased, while intakes of meat, vegetables, cereal-based dishes, fish, fruit and eggs were significantly higher (p<0.05). Conclusion: Differences between methods are consistent with differential reporting bias seen in self-reported dietary data. Implications for public health: The NATSINPAS provides valuable, much-needed information about dietary intake; however, self-reported data are prone to energy under-reporting and reporting bias. Purchase data can be used to track population-level food and nutrient availability in this population longitudinally; however, further evidence is needed on approaches to estimate wastage and foods sourced outside the store. There is potential for these data to complement each other to inform nutrition policies and programs in this population.

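The confidence interval reported above comes from the same unequal-variance machinery as the test itself. A sketch of how such an interval is formed (the samples below are hypothetical stand-ins, not the survey or purchase data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical per-person daily energy estimates (MJ) from two methods
survey = rng.normal(9.0, 3.0, size=200)
purchase = rng.normal(9.5, 4.0, size=150)

diff = purchase.mean() - survey.mean()
v1 = purchase.var(ddof=1) / len(purchase)
v2 = survey.var(ddof=1) / len(survey)
se = np.sqrt(v1 + v2)                     # standard error of the difference

# Welch-Satterthwaite degrees of freedom for the unequal variance interval
df = (v1 + v2) ** 2 / (v1**2 / (len(purchase) - 1) + v2**2 / (len(survey) - 1))
lo, hi = diff + np.array([-1.0, 1.0]) * stats.t.ppf(0.975, df) * se
```

An interval like [−152, 1950] that straddles zero corresponds to the non-significant p=0.094 reported for the energy comparison.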

Lawrence T. Decarlo - One of the best experts on this subject based on the ideXlab platform.

  • On the statistical and theoretical basis of signal detection theory and extensions: Unequal Variance, random coefficient, and mixture models
    Journal of Mathematical Psychology, 2010
    Co-Authors: Lawrence T. Decarlo
    Abstract:

    Basic results for conditional means and variances, as well as distributional results, are used to clarify the similarities and differences between various extensions of signal detection theory (SDT). It is shown that a previously presented motivation for the Unequal Variance SDT model (varying strength) actually leads to a related, yet distinct, model. The distinction has implications for other extensions of SDT, such as models with criteria that vary over trials. It is shown that a mixture extension of SDT is also consistent with Unequal Variances, but provides a different interpretation of the results; mixture SDT also offers a way to unify results found across several types of studies.
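The unequal-variance SDT model discussed here has a simple diagnostic signature: with lures distributed as N(0, 1) and targets as N(mu, sigma), the z-transformed ROC is a straight line with slope 1/sigma. A numeric sketch (parameter values are illustrative):

```python
import numpy as np
from scipy.stats import norm

# Unequal-variance SDT: lures ~ N(0, 1), targets ~ N(mu, sigma), sigma > 1
mu, sigma = 1.5, 1.25
criteria = np.linspace(-2.0, 3.0, 50)

far = norm.sf(criteria, loc=0.0, scale=1.0)   # false-alarm rate at each criterion
hr = norm.sf(criteria, loc=mu, scale=sigma)   # hit rate at each criterion

# zROC: z(HR) against z(FAR) is linear, slope 1/sigma, intercept mu/sigma
z_far, z_hr = norm.ppf(far), norm.ppf(hr)
slope = np.polyfit(z_far, z_hr, 1)[0]
```

A fitted zROC slope below 1 is therefore read as evidence that target (signal) variance exceeds lure (noise) variance.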

  • Using the PLUM procedure of SPSS to fit Unequal Variance and generalized signal detection models.
    Behavior Research Methods, Instruments, & Computers, 2003
    Co-Authors: Lawrence T. Decarlo
    Abstract:

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the Unequal Variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the Unequal Variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
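Outside SPSS, the same unequal-variance model that the PLUM procedure fits can be estimated by direct maximum likelihood. A sketch in Python with made-up confidence-rating counts (SciPy optimization here stands in for the article's SPSS procedure):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Hypothetical 4-point rating counts (rows: lure, target; cols: "sure new".."sure old")
counts = np.array([[40, 30, 20, 10],
                   [10, 15, 25, 50]])

def neg_log_lik(params):
    mu, log_sigma = params[:2]
    c = np.sort(params[2:])                  # 3 ordered decision criteria
    sigma = np.exp(log_sigma)                # keep target SD positive
    edges = np.concatenate(([-np.inf], c, [np.inf]))
    p_lure = np.diff(norm.cdf(edges, loc=0.0, scale=1.0))    # N(0, 1) lures
    p_tgt = np.diff(norm.cdf(edges, loc=mu, scale=sigma))    # N(mu, sigma) targets
    return -(counts[0] * np.log(p_lure) + counts[1] * np.log(p_tgt)).sum()

x0 = np.array([1.0, 0.0, -0.5, 0.0, 0.5])    # mu, log(sigma), criteria
fit = minimize(neg_log_lik, x0, method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```

DeCarlo's insight is that this likelihood is exactly that of an ordinal probit regression with a scale (heteroscedasticity) term, which is why a general-purpose ordinal routine can fit it.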

  • Signal detection theory with finite mixture distributions: theoretical developments with applications to recognition memory
    Psychological Review, 2002
    Co-Authors: Lawrence T. Decarlo
    Abstract:

    An extension of signal detection theory (SDT) that incorporates mixtures of the underlying distributions is presented. The mixtures can be motivated by the idea that a presentation of a signal shifts the location of an underlying distribution only if the observer is attending to the signal; otherwise, the distribution is not shifted or is only partially shifted. Thus, trials with a signal presentation consist of a mixture of 2 (or more) latent classes of trials. Mixture SDT provides a general theoretical framework that offers a new perspective on a number of findings. For example, mixture SDT offers an alternative to the Unequal Variance signal detection model; it can also account for nonlinear normal receiver operating characteristic curves, as found in recent research.

    Signal detection theory (SDT) provides a theoretical framework that has been quite useful in psychology and other fields (see Gescheider, 1997; Macmillan & Creelman, 1991; Swets, 1996). A basic idea of SDT is that decisions about the presence or absence of an event are based on decision criteria and on perceptions of the event or nonevent, with the perceptions being represented by probability distributions on an underlying continuum. Thus, in its simplest form, the theory considers two basic aspects of detection—the underlying representations, which are interpreted as psychological distributions of some sort (e.g., of perception or familiarity), and a decision aspect, which involves the use of decision criteria to arrive at a response. The present article extends SDT by viewing detection as consisting of an additional process. The result is a simple and psychologically meaningful extension of SDT that can be applied to any area of research where SDT has been applied. The approach is illustrated with applications to research on recognition memory, where the additional process can be interpreted as attention.
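The mixture idea can be checked numerically: if only a fraction of signal trials shift the target distribution, the composite target distribution is overdispersed, so the zROC slope drops below 1 even though both component distributions have unit variance. A sketch with illustrative parameters:

```python
import numpy as np
from scipy.stats import norm

d, lam = 2.0, 0.7          # shift when attending; probability of attending
criteria = np.linspace(-1.5, 2.5, 40)

far = norm.sf(criteria)                                            # lures: N(0, 1)
hr = lam * norm.sf(criteria - d) + (1 - lam) * norm.sf(criteria)   # mixture targets

# Both mixture components are unit variance, yet the composite target
# distribution has variance 1 + lam*(1-lam)*d**2 > 1, pulling the slope below 1
z_far, z_hr = norm.ppf(far), norm.ppf(hr)
slope = np.polyfit(z_far, z_hr, 1)[0]
```

This is why mixture SDT can mimic the unequal-variance model's slope result while telling a different psychological story (attention lapses rather than extra target variance).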

Roger Ratcliff - One of the best experts on this subject based on the ideXlab platform.

  • Validating the Unequal-Variance assumption in recognition memory using response time distributions instead of ROC functions: A diffusion model analysis.
    Journal of Memory and Language, 2014
    Co-Authors: Jeffrey J. Starns, Roger Ratcliff
    Abstract:

    Recognition memory z-transformed Receiver Operating Characteristic (zROC) functions have a slope less than 1. One way to accommodate this finding is to assume that memory evidence is more variable for studied (old) items than non-studied (new) items. This assumption has been implemented in signal detection models, but this approach cannot accommodate the time course of decision making. We tested the Unequal-Variance assumption by fitting the diffusion model to accuracy and response time (RT) distributions from nine old/new recognition data sets comprising previously-published data from 376 participants. The η parameter in the diffusion model measures between-trial variability in evidence based on accuracy and the RT distributions for correct and error responses. In fits to nine data sets, η estimates were higher for targets than lures in all cases, and fitting results rejected an equal-Variance version of the model in favor of an Unequal-Variance version. Parameter recovery simulations showed that the variability differences were not produced by biased estimation of the η parameter. Estimates of the other model parameters were largely consistent between the equal- and Unequal-Variance versions of the model. Our results provide independent support for the Unequal-Variance assumption without using zROC data.
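The role of the η parameter can be illustrated with a bare-bones simulation of the diffusion process, drawing a fresh drift rate on every trial. Parameter values below are illustrative, not the fitted values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(v_mean, eta, a=0.1, s=0.1, dt=0.002, n=500):
    """Two-boundary diffusion; drift on each trial is drawn from N(v_mean, eta)."""
    rts = np.empty(n)
    hit_top = np.empty(n, dtype=bool)
    for i in range(n):
        v = rng.normal(v_mean, eta)   # between-trial drift variability (eta)
        x, t = a / 2.0, 0.0           # start midway between the boundaries
        while 0.0 < x < a:            # accumulate until a boundary is crossed
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t
        hit_top[i] = x >= a
    return rts, hit_top

# The fits described above assign targets a larger eta than lures
rt, correct = simulate(v_mean=0.25, eta=0.15)
```

Fitting η separately for targets and lures is what lets RT distributions, rather than zROC functions, arbitrate the unequal-variance question.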

  • mixing strong and weak targets provides no evidence against the Unequal Variance explanation of zroc slope a comment on koen and yonelinas 2010
    Journal of Experimental Psychology: Learning Memory and Cognition, 2012
    Co-Authors: Jeffrey J. Starns, Caren M. Rotello, Roger Ratcliff
    Abstract:

    Koen and Yonelinas (2010; K&Y) reported that mixing classes of targets that had short (weak) or long (strong) study times had no impact on zROC slope, contradicting the predictions of the encoding variability hypothesis. We show that they actually derived their predictions from a mixture Unequal-Variance signal detection (UVSD) model, which assumes 2 discrete levels of strength instead of the continuous variation in learning effectiveness proposed by the encoding variability hypothesis. We demonstrated that the mixture UVSD model predicts an effect of strength mixing only when there is a large performance difference between strong and weak targets, and the strength effect observed by K&Y was too small to produce a mixing effect. Moreover, we re-analyzed their experiment along with another experiment that manipulated the strength of target items. The mixture UVSD model closely predicted the empirical mixed slopes from both experiments. The apparent misfits reported by K&Y arose because they calculated the observed slopes using the actual range of z-transformed false-alarm rates in the data, but they computed the predicted slopes using an extended range from −5 to 5. Because the mixed predictions follow a slightly curved zROC function, different ranges of scores have different linear slopes. We used the actual range in the data to compute both the observed and predicted slopes, and this eliminated the apparent deviation between them.
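The range dependence is easy to demonstrate: any curved zROC has a best-fitting linear slope that changes with the interval over which it is fit. A sketch using a mixture model of the kind discussed (parameters are hypothetical, not the fitted values from either paper):

```python
import numpy as np
from scipy.stats import norm

def z_hr(z_far, d=1.5, lam=0.6, sigma=1.3):
    """zROC of a mixture unequal-variance model: slightly curved, not a line."""
    c = -z_far                                        # criterion implied by z(FAR)
    hr = lam * norm.sf(c, loc=d, scale=sigma) + (1 - lam) * norm.sf(c)
    return norm.ppf(hr)

narrow = np.linspace(-1.5, 0.0, 50)   # a typical empirical range of z(FAR)
wide = np.linspace(-5.0, 5.0, 50)     # an extended range like the one criticized above

slope_narrow = np.polyfit(narrow, z_hr(narrow), 1)[0]
slope_wide = np.polyfit(wide, z_hr(wide), 1)[0]
```

Because the two linear slopes differ, observed and predicted slopes must be computed over the same range before any misfit can be claimed.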

  • Evaluating the Unequal-Variance and dual-process explanations of zROC slopes with response time data and the diffusion model.
    Cognitive Psychology, 2012
    Co-Authors: Jeffrey J. Starns, Roger Ratcliff, Gail Mckoon
    Abstract:

    We tested two explanations for why the slope of the z-transformed receiver operating characteristic (zROC) is less than 1 in recognition memory: the Unequal-Variance account (target evidence is more variable than lure evidence) and the dual-process account (responding reflects both a continuous familiarity process and a threshold recollection process). These accounts are typically implemented in signal detection models that do not make predictions for response time (RT) data. We tested them using RT data and the diffusion model. Participants completed multiple study/test blocks of an "old"/"new" recognition task with the proportion of targets on the test varying from block to block (.21, .32, .50, .68, or .79 targets). The same participants completed sessions with both speed-emphasis and accuracy-emphasis instructions. zROC slopes were below 1 for both speed and accuracy sessions, and they were slightly lower for speed. The extremely fast pace of the speed sessions (mean RT = 526 ms) should have severely limited the role of the slower recollection process relative to the fast familiarity process. Thus, the slope results are not consistent with the idea that recollection is responsible for slopes below 1. The diffusion model was able to match the empirical zROC slopes and RT distributions when between-trial variability in memory evidence was greater for targets than for lures, but missed the zROC slopes when target and lure variability were constrained to be equal. Therefore, unequal variability in continuous evidence is supported by RT modeling in addition to signal detection modeling. Finally, we found that a two-choice version of the RTCON model could not accommodate the RT distributions as successfully as the diffusion model.

R M Rainsbury - One of the best experts on this subject based on the ideXlab platform.

  • Environmental and social benefits of the targeted intraoperative radiotherapy for breast cancer: data from UK TARGIT-A trial centres and two UK NHS hospitals offering TARGIT IORT
    BMJ Open, 2016
    Co-Authors: Nathan J Coombs, Joel M Coombs, Uma J Vaidya, Julian Singer, Max Bulsara, J S Tobias, Frederik Wenz, David Joseph, Douglas Brown, R M Rainsbury
    Abstract:

    Objective To quantify the journeys and CO2 emissions if women with breast cancer are treated with risk-adapted single-dose targeted intraoperative radiotherapy (TARGIT) rather than a several-week course of external beam whole breast radiotherapy (EBRT). Setting (1) the TARGIT-A randomised clinical trial (ISRCTN34086741), which compared TARGIT with traditional EBRT and found similar breast cancer control, particularly when TARGIT was given simultaneously with lumpectomy; (2) 2 additional UK centres offering TARGIT. Participants 485 UK patients (249 TARGIT, 236 EBRT) in the prepathology stratum of the TARGIT-A trial (where randomisation occurred before lumpectomy and TARGIT was delivered simultaneously with lumpectomy) for whom geographical data were available, plus 22 patients treated with TARGIT after completion of the TARGIT-A trial in 2 additional UK breast centres. Outcome measures The shortest total journey distance, time and CO2 emissions from home to hospital to receive all the fractions of radiotherapy. Methods Distances, time and CO2 emissions were calculated using Google Maps, assuming a fuel efficiency of 40 mpg. The groups were compared using the Student t test with Unequal Variance and the non-parametric Wilcoxon rank-sum (Mann-Whitney) test. Results TARGIT patients travelled significantly fewer miles: TARGIT total 21,681 (mean 87.1, SE 19.1) versus EBRT total 92,591 (mean 392.3, SE 30.2); had lower CO2 emissions, 24.7 kg (SE 5.4) vs 111 kg (SE 8.6); and spent less time travelling: 3 h (SE 0.53) vs 14 h (SE 0.76), all p<0.0001. Patients treated with TARGIT in 2 hospitals in semirural locations were spared much longer journeys (753 miles, 30 h, 215 kg CO2 per patient). Conclusions The use of TARGIT intraoperative radiotherapy for eligible patients with breast cancer significantly reduces their journeys for treatment and has environmental benefits. If widely available, 5 million miles (8,000,000 km) of travel, 170,000 woman-hours and 1,200 tonnes of CO2 (a forest of 100 hectares) will be saved annually in the UK. Trial registration number ISRCTN34086741; Post-results.
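The per-patient CO2 figures follow from the journey distances by simple arithmetic. A back-of-envelope sketch: the 40 mpg figure comes from the abstract, but the UK-gallon size and the petrol emission factor (~2.31 kg CO2 per litre) are assumptions made here, so the results only approximate the published values:

```python
# Assumed conversion constants (not stated in the abstract)
LITRES_PER_UK_GALLON = 4.546
KG_CO2_PER_LITRE_PETROL = 2.31

def co2_kg(miles, mpg=40.0):
    """CO2 emitted driving the given miles at the stated fuel efficiency."""
    gallons = miles / mpg
    return gallons * LITRES_PER_UK_GALLON * KG_CO2_PER_LITRE_PETROL

mean_targit = co2_kg(87.1)    # mean TARGIT journey miles (from the abstract)
mean_ebrt = co2_kg(392.3)     # mean EBRT journey miles (from the abstract)
```

Under these assumptions the sketch lands in the vicinity of the reported 24.7 kg and 111 kg per-patient means; the residual gap plausibly reflects a different emission factor or rounding in the published analysis.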