Nonsampling Error

The experts below are selected from a list of 48 experts worldwide, ranked by the ideXlab platform.

Stephanie Denton - One of the best experts on this subject based on the ideXlab platform.

  • Cell Phones and Nonsampling Error in the American Time Use Survey
    2012
    Co-Authors: Brian Meekins, Stephanie Denton
    Abstract:

    Recent research on the impact of cell phones has largely focused on coverage and nonresponse error, with few exceptions (Kennedy et al. 2009; Brick et al. 2011). In this work the authors focus on nonsampling error in the American Time Use Survey (ATUS). This nationally representative survey is conducted by the U.S. Census Bureau on behalf of the Bureau of Labor Statistics. The sample for the ATUS is derived from households that have completed Wave 8 of the Current Population Survey (CPS). Households that volunteer a phone number for that survey are then called for the ATUS using that number; those who do not volunteer a phone number are mailed an invitation to participate and an incentive. The vast majority of CPS respondents provide the Census Bureau with a phone number. The ATUS further selects a sample member from within the household to answer relatively detailed questions, including a 24-hour time-use diary. In this work we examine the impact of calling cell phone numbers on nonresponse and measurement error in the ATUS. Because the sample is derived from completed CPS interviews, we are able to model nonresponse using CPS data. Almost 40% of the ATUS telephone sample volunteered a cell phone number for contact in the CPS. Those who volunteer a cell phone number for survey contact in the CPS are just as likely to say that a phone interview is acceptable. Cell phone volunteers are less likely to complete ATUS interviews due to noncontact, while their refusal rate is similar to that of landline volunteers. Differences in measurement error appear to be negligible. There are some differences in the estimates of time use, but these are largely due to demographic differences.
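
    The abstract above notes that nonresponse can be modeled from covariates already collected in the completed CPS interview. A minimal sketch of that idea in Python, using synthetic data and hypothetical variable names (the paper's actual covariates and model specification are not given in the abstract):

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 5000

      # Synthetic stand-ins for CPS covariates; all names and effects are invented.
      df = pd.DataFrame({
          "cell_volunteer": rng.integers(0, 2, n),   # 1 = volunteered a cell number
          "age": rng.integers(18, 85, n),
          "hh_size": rng.integers(1, 7, n),
      })

      # Simulate a response indicator in which cell volunteers are harder to contact.
      logit = -0.2 - 0.4 * df["cell_volunteer"] + 0.01 * (df["age"] - 45)
      df["responded"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      # Logistic regression of survey response on CPS covariates.
      X = sm.add_constant(df[["cell_volunteer", "age", "hh_size"]])
      print(sm.Logit(df["responded"], X).fit(disp=0).summary())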

Howard Wainer - One of the best experts on this subject based on the ideXlab platform.

  • The Most Dangerous Profession: A Note on Nonsampling Error
    Psychological Methods, 1999
    Co-Authors: Howard Wainer
    Abstract:

    Nonsampling errors are subtle, and strategies for dealing with them are not particularly well known within psychology. This article provides a compelling example of an incorrect conclusion drawn from a nonrandom sample: H. C. Lombard's (1835) mortality data. This example is augmented by a second example (A. Wald, 1980) that shows how modeling the selection mechanism can correct for the bias introduced by nonsampling errors. These two examples are then connected to modern statistical methods that, through the method of multiple imputation, allow researchers to assess uncertainty in observational studies. The APA's Task Force on Statistical Inference has received comments and suggestions from interested parties throughout the entire time I have served on it. These comments have always been treated by the task force with careful attention. In the most recent batch was a one-page missive from John Tukey containing seven suggestions. In the course of my professional life I have made many errors, but happily, ignoring statistical advice from John Tukey is not one of them. Tukey's fifth suggestion, in its entirety, is, "Nonsampling errors deserve greater attention, especially when randomization is absent. The formal statistical analysis treats only some of the uncertainties" (J. W. Tukey, personal communication, June 16, 1997). Indeed, but nonsampling errors are subtle, and strategies for dealing with them are not particularly well known within psychology. Thus, I think it would be worthwhile to provide a particularly interesting illustration of one and point the way toward alternative methodologies for interested readers.
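
    Wald's example cited above is the canonical case of modeling the selection mechanism: only surviving aircraft are observed, so damage rates computed from the observed sample are biased. A toy Python simulation of that logic, with every number invented for illustration; the correction shown is simple inverse-probability weighting, one way of modeling the selection mechanism:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000

      # Whether each plane is hit in a vulnerable area; such hits are often fatal.
      hit_vuln = rng.binomial(1, 0.30, n)
      p_survive = np.where(hit_vuln == 1, 0.20, 0.95)
      survives = rng.binomial(1, p_survive)

      # Naive estimate from the nonrandom (surviving) sample understates the hit rate.
      print("true hit rate:                ", round(hit_vuln.mean(), 3))
      print("rate among survivors (biased):", round(hit_vuln[survives == 1].mean(), 3))

      # Modeling selection: weight each survivor by the inverse of its survival
      # probability to recover an approximately unbiased estimate.
      w = 1.0 / p_survive[survives == 1]
      corrected = np.average(hit_vuln[survives == 1], weights=w)
      print("selection-weighted estimate:  ", round(corrected, 3))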

Brian Meekins - One of the best experts on this subject based on the ideXlab platform.

  • Cell Phones and Nonsampling Error in the American Time Use Survey
    2012
    Co-Authors: Brian Meekins, Stephanie Denton
    Abstract:

    Recent research on the impact of cell phones has largely focused on coverage and nonresponse error, with few exceptions (Kennedy et al. 2009; Brick et al. 2011). In this work the authors focus on nonsampling error in the American Time Use Survey (ATUS). This nationally representative survey is conducted by the U.S. Census Bureau on behalf of the Bureau of Labor Statistics. The sample for the ATUS is derived from households that have completed Wave 8 of the Current Population Survey (CPS). Households that volunteer a phone number for that survey are then called for the ATUS using that number; those who do not volunteer a phone number are mailed an invitation to participate and an incentive. The vast majority of CPS respondents provide the Census Bureau with a phone number. The ATUS further selects a sample member from within the household to answer relatively detailed questions, including a 24-hour time-use diary. In this work we examine the impact of calling cell phone numbers on nonresponse and measurement error in the ATUS. Because the sample is derived from completed CPS interviews, we are able to model nonresponse using CPS data. Almost 40% of the ATUS telephone sample volunteered a cell phone number for contact in the CPS. Those who volunteer a cell phone number for survey contact in the CPS are just as likely to say that a phone interview is acceptable. Cell phone volunteers are less likely to complete ATUS interviews due to noncontact, while their refusal rate is similar to that of landline volunteers. Differences in measurement error appear to be negligible. There are some differences in the estimates of time use, but these are largely due to demographic differences.
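
    The key contrast above is between outcome rates for cell and landline volunteers: noncontact differs while refusal does not. A minimal sketch of how such a two-group comparison might be run in Python, with invented counts (the paper's actual figures are not reproduced in the abstract):

      from statsmodels.stats.proportion import proportions_ztest

      n_cases = [2000, 3000]     # hypothetical sample sizes: [cell, landline]
      noncontact = [420, 310]    # cases never contacted
      refusals = [180, 175]      # cases contacted but refusing

      for label, counts in [("noncontact", noncontact), ("refusal", refusals)]:
          stat, pval = proportions_ztest(counts, n_cases)
          print(f"{label}: z = {stat:.2f}, p = {pval:.4f}")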

Belen Garcia Carceles - One of the best experts on this subject based on the ideXlab platform.

  • Spanish Exit Polls: Sampling Error or Nonresponse Bias?
    Revista Internacional De Sociologia, 2016
    Co-Authors: Jose Manuel Pavia Miralles, Elena Badal Valero, Belen Garcia Carceles
    Abstract:

    Countless examples of misleading forecasts from both pre-election and exit polls can be found all over the world. Non-representative samples due to differential nonresponse have been claimed as the main reason for inaccurate exit-poll projections. In real inference problems it is seldom possible to compare estimates with true values; electoral forecasts are an exception, since estimates can be compared with final outcomes once votes have been tallied. In this paper, we examine the raw data collected in seven exit polls conducted in Spain and test the hypothesis that the data collected at each sampled voting location can be considered a random sample of the results actually recorded at the corresponding polling station. Knowing the answer to this question is relevant for both electoral analysts and forecasters: if the hypothesis is rejected, the shortcomings of the collected data would need amending, and analysts could improve the quality of their estimates by implementing local correction strategies. We find strong evidence of nonsampling error in Spanish exit polls and evidence that the political context matters. Nonresponse bias is larger in polarized elections and in a climate of fear or pressure.
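
    The hypothesis test described above can be framed, station by station, as a goodness-of-fit test of the exit-poll interviews against the official tally. A minimal sketch in Python with invented counts (the paper's data and its exact test are not reproduced here):

      from scipy.stats import chisquare

      # One hypothetical station: interviews by party vs. official vote shares.
      poll_counts = [52, 88, 35, 25]
      official_shares = [0.30, 0.38, 0.20, 0.12]

      expected = [s * sum(poll_counts) for s in official_shares]
      stat, pval = chisquare(poll_counts, f_exp=expected)
      print(f"chi-square = {stat:.2f}, p = {pval:.4f}")
      # A small p-value rejects the hypothesis that the interviews are a random
      # sample of the station's recorded votes, i.e., evidence of nonsampling error.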

Bruce D Spencer - One of the best experts on this subject based on the ideXlab platform.

  • Developing an Error Structure in Components of Census Coverage Error
    Joint Statistical Meetings, 2010
    Co-Authors: Mary H Mulry, Bruce D Spencer
    Abstract:

    The 2010 Census Coverage Measurement Program (CCM) will evaluate the coverage of the 2010 U.S. Census. The 2010 CCM will provide estimates of the components of census coverage error (erroneous enumerations and omissions) separately, in addition to estimates of net coverage error. Evaluation studies are underway to examine the quality of the 2010 CCM estimates and provide information for improving census coverage measurement methodology. Synthesizing the results of all the CCM evaluations will aid in forecasting and optimizing tradeoffs among costs and errors for the 2020 Census. The current plan is to use a simulation approach in constructing the synthesis and to provide estimates of nonsampling bias in the estimated components of coverage error. This paper explores the use of the evaluation studies to yield estimates of nonsampling error for use in the simulation. The U.S. decennial census counts of population are subject to errors known as the components of census coverage error: omissions and erroneous enumerations. The net error is equal to the true population size minus the census count. Estimates of components of coverage error and net error for the 2010 Census are based on data and analysis from the 2010 CCM. The number of erroneous enumerations is estimated from validation of a sample of census enumerations, called the E sample. The net error is estimated by the difference between the census count and a dual system estimate (DSE) based on data from both the E sample and the P sample, a survey of the household population designed to ascertain inclusion in the census. The E sample and the P sample use the same stratified sample of block clusters. All census enumerations geographically coded to the sample block clusters, or a subsample of them (in large blocks), are in the E sample. For the P sample, U.S. Census Bureau staff independently construct a listing of the housing units in the sample block clusters without relying on any of the census addresses. A subsample of the listed addresses may be selected in the large blocks. This paper describes a plan to synthesize the results of the CCM evaluation studies, assessments, and other studies to develop a better understanding of the error structure in estimates of the components of census coverage error (erroneous enumerations and omissions) and estimates of net census coverage error. There are several goals for the study. One is to assess the combined effect of all the sources of error that can be estimated on the estimates of net census coverage error, erroneous enumerations, and omissions. (This report is released to inform interested parties and encourage discussion of work in progress. The views expressed on statistical, methodological, and operational issues are those of the authors and not necessarily those of the U.S. Census Bureau.)
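
    The dual system estimate mentioned above follows capture-recapture logic: the census is the first capture, the P sample the second, and the matched cases link them. A simplified Python sketch with invented numbers; the production CCM estimator adds refinements (weighting, imputation, and finer poststratification) not shown here:

      # Hypothetical totals for one poststratum; all numbers are invented.
      census_count = 10_000   # census enumerations
      ce_rate = 0.96          # correct-enumeration rate, estimated from the E sample
      p_total = 9_800         # weighted P-sample population estimate
      p_matched = 9_100       # P-sample persons matched to a census enumeration

      # Dual system estimate: correct enumerations divided by the match rate.
      dse = census_count * ce_rate / (p_matched / p_total)

      # Net coverage error is true population minus census count; it is estimated
      # by the DSE minus the census count (positive = undercount).
      print(f"DSE = {dse:,.0f}, estimated net error = {dse - census_count:+,.0f}")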

  • Loss Function Analysis for A.C.E. Revision II Estimates of Census 2000 Coverage Error
    Joint Statistical Meetings, 2003
    Co-Authors: Mary H Mulry, Randal S Zuwallack, Bruce D Spencer
    Abstract:

    This paper discusses the use of confidence intervals and loss function analyses to evaluate the Census Bureau's revised estimates of coverage error in Census 2000 from the Accuracy and Coverage Evaluation Survey, or A.C.E. (U.S. Census Bureau 2003). The original A.C.E. estimates in March 2001 indicated a 1.18 percent undercount in the Census 2000 population size of 281,421,906. The Census Bureau discovered that undetected duplicate enumerations in the census were a major source of error in the A.C.E. estimates, and in October 2001 it produced the A.C.E. Revision Preliminary estimates, which indicated the net undercount was 0.06 percent (Thompson, Waite, and Fay 2001; Mule 2002). The latter estimates included adjustments to account for duplicate census enumerations and other enumeration-sample (E-sample) measurement errors detected by the Measurement Error Reinterview (Raglin and Krejsa 2001) and the Matching Error Study (Bean 2001). Subsequently, the Census Bureau developed the A.C.E. Revision II estimates, which included an adjustment for correlation bias and improved adjustments for measurement error in the E-sample and in the population sample (P-sample), and produced a revised estimate of a -0.49 percent undercount (a net overcount). The A.C.E. Revision II estimates are subject to both nonsampling error and sampling error. Two methods of summarizing the relative accuracy of the census and the A.C.E. Revision II are confidence intervals for the net undercount rate and loss function analyses that estimate the overall difference in accuracy between the A.C.E. estimates and the unadjusted census estimates of population size (or level) and population shares. We form the confidence intervals for the net undercount rate using estimates of variance and net bias for the census coverage correction factors. In the loss function analysis, we estimate loss by the weighted mean squared error (MSE), with weights equal to the reciprocal of the census count for levels and the reciprocal of the census share for shares. We estimate the aggregate loss for levels and shares for states, counties, and places across the nation, and for counties and places within state. These methods for evaluating the accuracy of the census and an adjustment of the census have been used previously (Mulry and Spencer 1993, 2001; CAPE 1992).
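
    The loss criterion described above is a weighted sum of squared errors, with weight 1/(census count) for levels and 1/(census share) for shares. A minimal Python sketch of the levels case with invented figures (in practice the target values are themselves estimated, not known):

      # Hypothetical levels for three areas; all numbers are invented.
      census = [1_200_000, 800_000, 500_000]      # unadjusted census counts
      ace = [1_215_000, 804_000, 497_000]         # A.C.E.-adjusted estimates
      target = [1_210_000, 806_000, 499_000]      # assumed "true" values

      def weighted_mse(est, truth, weights):
          """Aggregate loss: sum of w_i * (est_i - true_i) ** 2."""
          return sum(w * (e - t) ** 2 for w, e, t in zip(weights, est, truth))

      w_levels = [1.0 / c for c in census]        # reciprocal-census-count weights
      print("loss(census):", round(weighted_mse(census, target, w_levels), 1))
      print("loss(A.C.E.):", round(weighted_mse(ace, target, w_levels), 1))
      # The adjusted figures are preferred when their aggregate loss is smaller.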