Nonresponse Error

The Experts below are selected from a list of 285 Experts worldwide, ranked by the ideXlab platform.

Roger Tourangeau - One of the best experts on this subject based on the ideXlab platform.

  • How Errors Cumulate: Two Examples
    Journal of Survey Statistics and Methodology, 2019
    Co-Authors: Roger Tourangeau
    Abstract:

    This article examines the relationship among different types of nonobservation Errors (all of which affect estimates from nonprobability internet samples) and between Nonresponse and measurement Errors. Both are examples of how different Error sources can interact. Estimates from nonprobability samples seem to have more total Error than estimates from probability samples, even ones with very low response rates. This finding suggests that the combination of coverage, selection, and Nonresponse Errors has greater cumulative effects than Nonresponse Error alone. The probabilities of having internet access, joining an internet panel, and responding to a particular survey request are probably correlated and, as a result, may lead to greater covariances with survey variables than response propensities alone; the biases accentuate one another. With Nonresponse and measurement Error, the two sources seem more or less uncorrelated, with one exception—those most prone to social desirability bias (those in the undesirable categories) are also less likely to respond. In addition, the propensity for unit Nonresponse seems to be related to item Nonresponse.
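    A common way to formalize the cumulation argument above is the standard expression for the Nonresponse bias of a respondent mean (the notation below is a generic sketch, not taken from the article):

        \[
        \mathrm{Bias}(\bar{y}_r) \;\approx\; \frac{\mathrm{Cov}(p_i, y_i)}{\bar{p}}
        \]

    where p_i is case i's propensity to end up in the responding sample, y_i is the survey variable, and p̄ is the mean propensity. For a nonprobability internet panel, p_i is effectively the product of the propensities to have internet access, to join the panel, and to respond to the particular request, so a correlation between any of these components and y_i feeds the covariance term, and the component biases can compound rather than cancel.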

  • Nonresponse Error, Measurement Error, and Mode of Data Collection: Tradeoffs in a Multi-Mode Survey of Sensitive and Non-Sensitive Items
    Public Opinion Quarterly, 2010
    Co-Authors: Joseph W Sakshaug, Ting Yan, Roger Tourangeau
    Abstract:

    Although some researchers have suggested that a tradeoff exists between Nonresponse and measurement Error, to date, the evidence for this connection has been relatively sparse. We examine data from an alumni survey to explore potential links between Nonresponse and measurement Error. Records data were available for some of the survey items, allowing us to check the accuracy of the answers. The survey included relatively sensitive questions about the respondent's academic performance and compared three methods of data collection—computer-assisted telephone interviewing (CATI), interactive voice response (IVR), and an Internet survey. We test the hypothesis that the two modes of computerized self-administration reduce measurement Error but increase Nonresponse Error, in particular the Nonresponse Error associated with dropping out of the survey during the switch from the initial telephone contact to the IVR or Internet mode. We find evidence for relatively large Errors due to the mode switch; in some cases, these mode switch biases offset the advantages of self-administration for reducing measurement Error. We find less evidence for a possible second link between Nonresponse and measurement Error, based on a relationship between the level of effort needed to obtain the data and the accuracy of the data that are ultimately obtained. We also compare Nonresponse and measurement Errors across different types of sensitive items; in general, measurement Error tended to be the largest source of Error for estimates of socially undesirable behaviors.
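    The records-check logic described here can be illustrated with a short sketch: given a frame file that carries the records (true) value for every sampled case and the survey report for respondents, the Nonresponse and measurement components of the Error in a mean can be separated by mode. This is a simplified illustration with hypothetical file and column names, not the authors' code.

        import pandas as pd

        # Hypothetical frame: one row per sampled alumnus/alumna.
        #   true_gpa     - value taken from university records (known for everyone)
        #   reported_gpa - survey answer (NaN for nonrespondents and dropouts)
        #   mode         - CATI, IVR, or Web
        df = pd.read_csv("alumni_frame.csv")

        for mode, grp in df.groupby("mode"):
            resp = grp.dropna(subset=["reported_gpa"])
            full_true_mean = grp["true_gpa"].mean()           # target quantity
            resp_true_mean = resp["true_gpa"].mean()          # who responded
            resp_reported_mean = resp["reported_gpa"].mean()  # what they said

            nonresponse_error = resp_true_mean - full_true_mean
            measurement_error = resp_reported_mean - resp_true_mean
            total_error = resp_reported_mean - full_true_mean  # sum of the two
            print(mode, round(nonresponse_error, 3),
                  round(measurement_error, 3), round(total_error, 3))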

  • Cognitive Aspects of Survey Measurement and Mismeasurement
    International Journal of Public Opinion Research, 2003
    Co-Authors: Roger Tourangeau
    Abstract:

    (This article is based on the speech the author gave on receiving the Helen Dinerman Award at the WAPOR Annual Conference in St Petersburg Beach, Florida.) During the past 20 years, survey methodology has undergone a paradigm shift. The old paradigm was based on a statistical model that focused on the effects of survey Errors on the estimates derived from survey data. The new paradigm is based on a social scientific model that focuses on the causes of survey Errors. Several developments have helped bring about this shift—the application of methods and concepts from cognitive psychology to the reduction of survey measurement Error, the development of new computerized methods of data collection, and the increase in concern about measurement and Nonresponse as sources of Error in survey estimates. The new paradigm has little to say about the topics, such as sampling Error, that were central to the old one; similarly, the old paradigm had little to say about how to reduce or prevent Errors, a major concern for the new one. Thus, the two paradigms do not clash so much as complement each other. One of the deepest changes in survey research over the last 20 years has been the shift in the reigning paradigm guiding methodological research on surveys. The earlier paradigm, which was completely dominant when I joined the field, was based on a statistical model of survey Error. Perhaps the fullest expression of that model is found in the work of Hansen, Hurwitz, and Madow (1953) and Hansen, Hurwitz, and Bershad (1961). This has been a very successful paradigm, and the fact that it's still useful, even vital, can be seen in more recent work, such as Lessler and Kalsbeek's (1992) Nonsampling Error in Surveys or Groves's (1989) Survey Costs and Survey Errors, where the outlines of the old paradigm can still be discerned just under the surface. Like any paradigm, the statistical paradigm has its limitations and, from our current vantage point, one of its key limitations is its focus on the consequences rather than the causes of survey Errors. The central concepts in the statistical paradigm—variance and bias—refer to the different effects that survey Errors can have on survey estimates. These are extraordinarily important and useful concepts, but they don't tell us much about how Errors arise or about how to prevent (or at least reduce) the Errors in the first place. To address those issues requires well-developed theories about the sources of survey Errors: theories about how people decide whether to take part in surveys and theories about how they come up with answers to the questions. The new paradigm—which I'll call the scientific paradigm for surveys—attempts to provide such theories. One manifestation of the new paradigm has been the attempt by Groves, Couper, and their colleagues to develop detailed theories about the sources of Nonresponse (e.g., Groves, Cialdini, & Couper, 1992; Groves & Couper, 1998; Groves, Singer, & Corning, 2000). These theories aren't statistical models about the impact of Nonresponse on survey estimates, although they do have implications for the question of when Nonresponse is likely to bias survey results.
Instead, the theories focus on why Nonresponse occurs—on who is likely to be hard to reach, on how interviewers try to extend the interaction with potential respondents and how they tailor what they say to fend off objections, and on how interest in the topic affects willingness to take part in a survey. This work is beginning to have real impact on survey practice; for example, it has led to the development of new training procedures for interviewers to reduce the refusal rates in surveys. It's hard to imagine how something similar would have come out of the older statistical paradigm. But an even more striking manifestation of the conceptual shift from a statistical to a scientific paradigm for surveys has been the movement to apply concepts and methods from cognitive psychology and related disciplines to reducing measurement Error. This movement is sometimes referred to as the Cognitive Aspects of Survey Methodology movement, or CASM for short. It's hard to provide an exact date for the beginning of the paradigm shift—I suspect that's true for most fields—but it's clear that CASM has been gathering steam for about 20 years or so. My own involvement in the movement began in 1983, when I attended a seminar sponsored by the Committee on National Statistics here in the United States that brought together survey methodologists and cognitive scientists to look at the underlying problems that produced Errors in surveys (see Jabine, Straf, Tanur, & Tourangeau, 1984, for a summary of the CNSTAT seminar). Similar conferences were held about the same time in the United Kingdom and in what was then West Germany (both Sudman, Bradburn, & Schwarz, 1996, and Tourangeau, Rips, & Rasinski, 2000, provide brief histories of the CASM movement). My own work in this area culminated in a book, The Psychology of Survey Response (2000), with Lance Rips and Ken Rasinski. This book tries to summarize the theoretical and empirical work carried out under the CASM banner. I'm a little embarrassed to admit it, but this work can be accurately summed up in a single sentence: reporting Errors in surveys arise from problems in the underlying cognitive processes through which respondents generate their answers to survey questions. To unpack this just a little, the basic insight of the CASM movement is that respondents give inaccurate or unreliable answers because they don't really understand the questions, can't remember the relevant information, use flawed judgment or estimation strategies, have trouble mapping their internal judgments onto one of the response options, or edit their answers in a misleading way before they report them. These problems reflect both the lifetime of experience in answering questions in everyday life that we bring to surveys (and the habits we've built up to get us through everyday conversations smoothly) and the shortcuts we take to reduce the cognitive burden imposed by interviews and similar tasks.
Different researchers have given more or less emphasis to different portions of this picture, but whether they have emphasized the conversational roots of the cognitive processes in surveys (e.g., Schober; Schober & Conrad; Schwarz), the cognitive processes themselves (e.g., Burton & Blair; Conrad, Brown, & Cashman), or motivational and ability factors affecting how the cognitive processes are carried out (e.g., Krosnick; Krosnick & Alwin), a lot of very prominent survey researchers have jumped aboard the CASM bandwagon. This new paradigm has both encouraged and reflected a shift in concern about the most important sources of Error in surveys. The old paradigm mostly focused on various forms of sampling Error; the new paradigm emphasizes measurement Error and, to a lesser degree, Nonresponse Error instead. Of course, the old paradigm simply didn't have that much to say about measurement and Nonresponse Error, particularly about how they arise. Likewise, the new paradigm is unlikely to have much impact on our understanding of sampling Error or on the practice of sampling. This nonoverlap between the two paradigms is probably a good thing—the new paradigm complements the old one rather than replacing it. Similarly, I think that a shift in emphasis from sampling to nonsampling Errors will also prove to be a useful development. My own guess is that, until we get to very low levels of aggregation, measurement Error is a far more important contributor to total survey Error than sampling Error is; sampling Errors still get the lion's share of the attention and resources because we know how to measure them and how to reduce them, but things are gradually changing. If anything, this shift in emphasis is likely to accelerate in the coming years as we continue to invent new tools for collecting survey data. The new computer-assisted methods of data collection—including audio computer-assisted self-interviewing (or audio-CASI), its telephone counterpart interactive voice response, and Web and e-mail surveys—raise a host of new measurement issues. We are only just beginning to have a sense of when different methods of data collection yield similar results and when they diverge, and of the key variables that determine whether there is agreement or disagreement across modes of data collection. The issues raised by Web surveys are particularly hot right now, partly because Web surveys are primarily visual and use a much wider range of visual material (photographs, drawings, video clips) than has been true of surveys in the past. The issues raised by the new methods are likely to remain central to survey methodologists for some time to come. I personally look forward to continuing to work on them.

Robert M. Bossarte - One of the best experts on this subject based on the ideXlab platform.

  • Nonresponse Error in injury-risk surveys
    American Journal of Preventive Medicine, 2006
    Co-Authors: Timothy P. Johnson, Allyson L. Holbrook, Young Ik Cho, Robert M. Bossarte
    Abstract:

    Background: Nonresponse is a potentially serious source of Error in epidemiologic surveys concerned with injury control and risk. This study presents the findings of a records-matching approach to investigating the degree to which survey Nonresponse may bias indicators of violence-related and unintentional injuries in a random-digit-dialed (RDD) telephone survey. Methods: Data from a statewide RDD survey of 4155 individuals aged 16 years and older conducted in Illinois in 2003 were merged with ZIP code–level data from the 2000 Census. Using hierarchical linear models, ZIP code–level indicators were used to predict survey response propensity at the individual level. Additional models used the same ZIP code measures to predict a set of injury-risk indicators. Results: Several ZIP code measures were found to be predictive of both response propensity and the likelihood of reporting partner violence. For example, people residing in high-income areas were less likely to participate in the survey and less likely to report forced sex by partner, processes that suggest an over-estimation of this form of violence. In contrast, estimates of partner isolation may be under-estimated, as those residing in geographic areas with smaller-sized housing were less likely to participate in the survey but more likely to report partner isolation. No ZIP code–level correlates of survey response propensity, however, were found also to be associated with driving-under-the-influence (DUI) indicators. Conclusions: There is evidence of a linkage between survey response propensity and one variety of injury prevention measure (partner violence) but not another (DUI). The approach described in this paper provides an effective and inexpensive tool for evaluating Nonresponse Error in surveys of injury prevention and other health-related conditions.
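    A plausible form of the two-level response-propensity model described above (the specific covariates are illustrative, not the authors' exact specification) is:

        \[
        \operatorname{logit}\Pr(R_{ij}=1) \;=\; \beta_0 + \beta_1\,\mathrm{Income}_j + \beta_2\,\mathrm{SmallHousing}_j + u_j,
        \qquad u_j \sim N(0, \tau^2)
        \]

    where R_ij indicates whether sampled person i in ZIP code j responded, the covariates are ZIP code–level Census measures, and u_j is a ZIP code random effect. Fitting parallel models with the same ZIP code covariates for each injury-risk indicator and comparing the signs of the coefficients is what allows the likely direction of the Nonresponse bias (over- or under-estimation) to be inferred.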

Don A. Dillman - One of the best experts on this subject based on the ideXlab platform.

  • Quantifying the Influence of Incentives on Mail Survey Response Rates and Their Effects on Nonresponse Error
    2001
    Co-Authors: Virginia M Lesser, Don A. Dillman, Frederick O Lorenz, Robert T Mason
    Abstract:

    Over the past fifty years, an accumulation of research has shown that financial incentives improve response rates. Our objective is to determine whether this still holds true and to assess the impact of incentives on Nonresponse bias. A series of eight studies on both student and general populations was conducted to address these topics. The experiments were also designed to investigate how the delivery of the incentive may affect response rates. Financial incentives combined with multiple mailings continue to improve response rates. In most studies, the demographic characteristics of the incentive groups were more similar to those of the selected sample than were those of the control group. This suggests that estimates produced from studies using financial incentives may have lower mean square Error than estimates from studies offering no financial incentives.
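    The mean square Error claim can be made concrete with the usual decomposition (standard notation, not taken from the paper):

        \[
        \mathrm{MSE}(\hat{\theta}) \;=\; \mathrm{Var}(\hat{\theta}) + \mathrm{Bias}(\hat{\theta})^2
        \]

    An incentive that raises the response rate can lower the MSE of an estimate in two ways: by shrinking the bias term, if the added respondents make the responding sample resemble the selected sample more closely, and by shrinking the variance term through the larger number of completed questionnaires.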

  • The Role of Behavioral Survey Methodologists in National Statistical Agencies
    International Statistical Review, 2000
    Co-Authors: Don A. Dillman
    Abstract:

    The expertise of behavioral survey methodologists is needed in national statistical agencies because of the necessity of using theory and research from the social sciences to reduce survey Error. In this paper, various social science-based explanations for measurement Error and Nonresponse Error are described in order to illustrate the conceptual foundations of such Error reduction efforts. Three roles for behavioral survey methodologists in national statistical agencies are then discussed: 1) bringing an Error reduction perspective to bear, in an influential way, on all aspects of designing and implementing agency surveys; 2) bringing theoretical efficiency and effectiveness to experimental tests of alternative questionnaire designs and implementation procedures through the use of theories, concepts, pretests, and findings of past behavioral science research; and 3) contributing to the expanding science of survey methodology.

  • The Design and Administration of Mail Surveys
    Annual Review of Sociology, 1991
    Co-Authors: Don A. Dillman
    Abstract:

    For reasons of cost and ease of implementation, mail surveys are more frequently used for social research than are either telephone or face-to-face interviews. In this chapter, the last two decades of research aimed at improving mail survey methods are examined. Discussion of this research is organized around progress made in overcoming four important sources of Error: sampling, noncoverage, measurement, and Nonresponse. Progress has been especially great in improving response rates as a means of reducing Nonresponse Error. Significant progress has also been made in finding means of overcoming measurement Error. Because mail surveys generally present few, if any, special sampling Error problems, little research in this area has been conducted. The lack of research on noncoverage issues is a major deficiency in research to date, and noncoverage Error presents the most significant impediment to the increased use of mail surveys. The 1990s are likely to see increased research on mail surveys, as efforts are...

Brady T. West - One of the best experts on this subject based on the ideXlab platform.

  • Estimation of Underreporting in Diary Surveys: An Application using the National Household Food Acquisition and Purchase Survey
    Journal of Survey Statistics and Methodology, 2019
    Co-Authors: John A. Kirlin, Brady T. West, Ai Rene Ong, Shiyu Zhang, Xingyou Zhang
    Abstract:

    Diary surveys are used to collect data on a variety of topics, including health, time use, nutrition, and expenditures. The US National Household Food Acquisition and Purchase Survey (FoodAPS) is a nationally representative diary survey, providing an important data source for decision-makers to design policies and programs for promoting healthy lifestyles. Unfortunately, a multiday diary survey like the FoodAPS can be subject to various survey Errors, especially item Nonresponse Error occurring at the day level. The FoodAPS public-use data set provides survey weights that adjust only for unit Nonresponse. Due to the lack of day-level weights (which could possibly adjust for the item Nonresponse that arises from refusals on particular days), the adjustments for unit Nonresponse are unlikely to correct any bias in estimates arising from households that initially agree to participate in FoodAPS but then fail to report on particular days. This article develops a general methodology for estimating the extent of underreporting due to this type of item Nonresponse Error in diary surveys, using FoodAPS as a case study. We describe a methodology combining bootstrap replicate sampling for complex samples and imputation based on a Heckman selection model to predict food expenditures for person-days with missing expenditures. We estimated the item Nonresponse Error by comparing weighted estimates according to only reported expenditures and both reported expenditures and predictions for missing values. Results indicate that ignoring the missing data would lead to consistent overestimation of the mean expenditures and events per person per day and underestimation of the total expenditures and events. Our study suggests that the household-level weights, which generally account for unit Nonresponse, may not be entirely sufficient for addressing the Nonresponse occurring at the day level in diary surveys, and proper imputation methods will be important for estimating the size of the underreporting.
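    The selection-model imputation step can be sketched as a classic two-step Heckman procedure; the variable names below are hypothetical, and the paper's actual model and bootstrap replicate-weight machinery are more elaborate than this minimal version.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from scipy.stats import norm

        # Hypothetical person-day file:
        #   reported   - 1 if the person reported food acquisitions that day
        #   log_expend - log food expenditure for the day (NaN when not reported)
        df = pd.read_csv("person_days.csv")

        # Step 1: probit selection equation for whether the day was reported.
        Z = sm.add_constant(df[["diary_day", "hh_size", "weekend"]])
        probit = sm.Probit(df["reported"], Z).fit(disp=0)
        xb = Z @ probit.params                      # linear predictor

        # Step 2: outcome equation on reported days, augmented with the
        # inverse Mills ratio for the selected cases.
        obs_mask = df["reported"] == 1
        df.loc[obs_mask, "lam"] = norm.pdf(xb[obs_mask]) / norm.cdf(xb[obs_mask])
        obs = df[obs_mask]
        X_obs = sm.add_constant(obs[["hh_size", "weekend"]])
        X_obs["lam"] = obs["lam"]
        ols = sm.OLS(obs["log_expend"], X_obs).fit()

        # For unreported days, the conditional expectation uses the hazard for
        # the non-selected cases: -phi(xb) / (1 - Phi(xb)).
        mis_mask = ~obs_mask
        lam_mis = -norm.pdf(xb[mis_mask]) / (1.0 - norm.cdf(xb[mis_mask]))
        X_mis = sm.add_constant(df.loc[mis_mask, ["hh_size", "weekend"]])
        X_mis["lam"] = lam_mis
        df.loc[mis_mask, "log_expend"] = ols.predict(X_mis)

        print("reported-only mean:     ", obs["log_expend"].mean())
        print("reported + imputed mean:", df["log_expend"].mean())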

  • Nonresponse and measurement Error variance among interviewers in standardized and conversational interviewing
    Journal of Survey Statistics and Methodology, 2018
    Co-Authors: Brady T. West, Frederick G Conrad, Frauke Kreuter, Felicitas Mittereder
    Abstract:

    Recent methodological studies have attempted to decompose the interviewer variance introduced in interviewer-administered surveys into its potential sources, using the Total Survey Error framework. These studies have informed the literature on interviewer effects by acknowledging interviewers’ dual roles as recruiters and data collectors, thus examining the relative contributions of Nonresponse Error variance and measurement Error variance among interviewers to total interviewer variance. However, this breakdown may depend on the interviewing technique: some techniques emphasize behaviors designed to reduce variation in the answers collected by interviewers more so than other techniques. The question of whether the contributions of these Error sources to total interviewer variance change for different interviewing techniques remains unanswered. Addressing this gap in knowledge has important implications for interviewing practice because the technique used could alter the relative contributions of variance in these Error sources to total interviewer variance. This article presents results from an experimental study mounted in Germany that was designed to answer this question about two specific interviewing techniques. A national sample of employed individuals was first selected from a database of official administrative records, then randomly assigned to interviewers who themselves were randomized to conduct either conversational interviewing (CI) or standardized interviewing (SI), and finally measured face-to-face on a variety of cognitively challenging survey questions with official values also available for verifying the accuracy of responses. We find that although Nonresponse Error variance does exist among interviewers for selected measures (especially respondent age in the CI group), measurement Error variance tends to be the more important source of total interviewer variance, regardless of whether interviewers are using CI or SI.
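    A schematic version of the decomposition these studies rely on (the notation is mine, not the authors') writes interviewer j's mean report as the sum of the true mean for the respondents that interviewer recruited and the mean response deviation among them:

        \[
        \bar{y}_j = \bar{Y}_j + \bar{e}_j,
        \qquad
        \mathrm{Var}_{\mathrm{int}}(\bar{y}_j) = \underbrace{\mathrm{Var}(\bar{Y}_j)}_{\text{Nonresponse Error variance}} + \underbrace{\mathrm{Var}(\bar{e}_j)}_{\text{measurement Error variance}} + 2\,\mathrm{Cov}(\bar{Y}_j, \bar{e}_j)
        \]

    Verifying reports against the administrative records is what makes the two components separately observable, and randomizing interviewers to CI or SI lets the question of whether the technique shifts their relative size be answered experimentally.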

  • Total Survey Error in Practice
    2017
    Co-Authors: Paul P. Biemer, Frauke Kreuter, Lars E. Lyberg, Brad Edwards, Edith D. De Leeuw, Stephanie Eckman, N. Clyde Tucker, Brady T. West
    Abstract:

    This book provides an overview of the TSE framework and current TSE research as related to survey design, data collection, estimation, and analysis. It recognizes that survey data affects many public policy and business decisions and thus focuses on the framework for understanding and improving survey data quality. The book also addresses issues with data quality in official statistics and in social, opinion, and market research as these fields continue to evolve, leading to larger and messier data sets. This perspective challenges survey organizations to find ways to collect and process data more efficiently without sacrificing quality. The volume consists of the most up-to-date research and reporting from over 70 contributors representing the best academics and researchers from a range of fields. The chapters are broken out into five main sections: The Concept of TSE and the TSE Paradigm, Implications for Survey Design, Data Collection and Data Processing Applications, Evaluation and Improvement, and Estimation and Analysis. Each chapter introduces and examines multiple Error sources, such as sampling Error, measurement Error, and Nonresponse Error, which often offer the greatest risks to data quality, while also encouraging readers not to lose sight of the less commonly studied Error sources, such as coverage Error, processing Error, and specification Error. The book also notes the relationships between Errors and the ways in which efforts to reduce one type can increase another, resulting in an estimate with larger total Error.

  • “Interviewer” Effects in Face-to-Face Surveys: A Function of Sampling, Measurement Error, or Nonresponse?
    Journal of Official Statistics, 2013
    Co-Authors: Brady T. West, Frauke Kreuter, Ursula Jaenichen
    Abstract:

    Recent research has attempted to examine the proportion of interviewer variance that is due to interviewers systematically varying in their success in obtaining cooperation from respondents with varying characteristics (i.e., Nonresponse Error variance), rather than variance among interviewers in systematic measurement difficulties (i.e., measurement Error variance) - that is, whether correlated responses within interviewers arise due to variance among interviewers in the pools of respondents recruited, or variance in interviewer-specific mean response biases. Unfortunately, work to date has only considered data from a CATI survey, and thus suffers from two limitations: Interviewer effects are commonly much smaller in CATI surveys, and, more importantly, sample units are often contacted by several CATI interviewers before a final outcome (response or final refusal) is achieved. The latter introduces difficulties in assigning nonrespondents to interviewers, and thus interviewer variance components are only estimable under strong assumptions. This study aims to replicate this initial work, analyzing data from a national CAPI survey in Germany where CAPI interviewers were responsible for working a fixed subset of cases.

  • How Much of Interviewer Variance Is Really Nonresponse Error Variance?
    Public Opinion Quarterly, 2010
    Co-Authors: Brady T. West, Kristen Olson
    Abstract:

    Kish's (1962) classical intra-interviewer correlation (ρ_int) provides survey researchers with an estimate of the effect of interviewers on variation in measurements of a survey variable of interest. This correlation is an undesirable product of the data collection process that can arise when answers from respondents interviewed by the same interviewer are more similar to each other than answers from other respondents, decreasing the precision of survey estimates. Estimation of this parameter, however, uses only respondent data. The potential contribution of variance in Nonresponse Errors between interviewers to the estimation of ρ_int has been largely ignored. Responses within interviewers may appear correlated because the interviewers successfully obtain cooperation from different pools of respondents, not because of systematic response deviations. This study takes a first step in filling this gap in the literature on interviewer effects by analyzing a unique survey data set, collected using computer-assisted telephone interviewing (CATI) from a sample of divorce records. This data set, which includes both true values and reported values for respondents and a CATI sample assignment that approximates interpenetrated assignment of subsamples to interviewers, enables the decomposition of interviewer variance in means of respondent reports into Nonresponse Error variance and measurement Error variance across interviewers. We show that in cases where there is substantial interviewer variance in reported values, the interviewer variance may arise from Nonresponse Error variance across interviewers.
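    A rough sketch of this decomposition, assuming an approximately interpenetrated assignment and using hypothetical column names (not the authors' code), estimates the between-interviewer variance component separately for the reported values, the record (true) values, and the response deviations:

        import pandas as pd

        # Hypothetical respondent file from the divorce-records survey:
        #   interviewer - interviewer id
        #   reported    - respondent's answer
        #   true_value  - value from the matched record
        df = pd.read_csv("respondents.csv")
        df["deviation"] = df["reported"] - df["true_value"]

        def between_within(values, groups):
            """One-way ANOVA variance components (equal-workload approximation)."""
            d = pd.DataFrame({"v": values, "g": groups})
            means = d.groupby("g")["v"].mean()
            n_bar = d.groupby("g").size().mean()
            msb = n_bar * ((means - d["v"].mean()) ** 2).sum() / (len(means) - 1)
            msw = d.groupby("g")["v"].var(ddof=1).mean()
            return max((msb - msw) / n_bar, 0.0), msw  # sigma2_between, sigma2_within

        # Kish's rho_int for the reported values.
        s2b_rep, s2w_rep = between_within(df["reported"], df["interviewer"])
        rho_int = s2b_rep / (s2b_rep + s2w_rep)

        # Split the between-interviewer variance into its two sources.
        s2b_true, _ = between_within(df["true_value"], df["interviewer"])  # Nonresponse Error variance
        s2b_dev, _ = between_within(df["deviation"], df["interviewer"])    # measurement Error variance
        print(round(rho_int, 3), round(s2b_true, 3), round(s2b_dev, 3))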

Frauke Kreuter - One of the best experts on this subject based on the ideXlab platform.

  • The Effect of Survey Mode on Data Quality: Disentangling Nonresponse and Measurement Error Bias
    Journal of Official Statistics, 2019
    Co-Authors: Barbara Felderer, Antje Kirchner, Frauke Kreuter
    Abstract:

    More and more surveys are conducted online. While web surveys are generally cheaper and tend to have lower measurement Error in comparison to other survey modes, especially for sensitive questions, potential advantages might be offset by larger Nonresponse bias. This article compares the data quality in a web survey administration to another common mode of survey administration, the telephone. The unique feature of this study is the availability of administrative records for all sampled individuals in combination with a random assignment of survey mode. This specific design allows us to investigate and compare potential bias in survey statistics due to 1) Nonresponse Error, 2) measurement Error, and 3) combined bias of these two Error sources and hence, an overall assessment of data quality for two common modes of survey administration, telephone and web. Our results show that overall mean estimates on the web are more biased compared to the telephone mode. Nonresponse and measurement bias tend to reinforce each other in both modes, with Nonresponse bias being somewhat more pronounced in the web mode. While measurement Error bias tends to be smaller in the web survey implementation, interestingly, our results also show that the web does not consistently outperform the telephone mode for sensitive questions.
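    The design described above supports a simple additive decomposition of the total bias within each mode (the notation is mine, not the article's):

        \[
        \underbrace{\bar{y}^{\,m}_{r} - \bar{Y}}_{\text{total bias in mode } m}
        \;=\; \underbrace{\bar{Y}^{\,m}_{r} - \bar{Y}}_{\text{Nonresponse bias}}
        \;+\; \underbrace{\bar{y}^{\,m}_{r} - \bar{Y}^{\,m}_{r}}_{\text{measurement Error bias}}
        \]

    where \bar{Y} is the mean of the administrative-record values over all sampled cases, \bar{Y}^m_r is the record mean among the respondents to mode m, and \bar{y}^m_r is the mean of those respondents' survey reports. Because mode is randomly assigned, the same benchmark \bar{Y} applies to both the web and telephone arms, so the two bias components can be compared directly across modes.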

  • Nonresponse and measurement Error variance among interviewers in standardized and conversational interviewing
    Journal of Survey Statistics and Methodology, 2018
    Co-Authors: Brady T. West, Frederick G Conrad, Frauke Kreuter, Felicitas Mittereder
    Abstract:

    Recent methodological studies have attempted to decompose the interviewer variance introduced in interviewer-administered surveys into its potential sources, using the Total Survey Error framework. These studies have informed the literature on interviewer effects by acknowledging interviewers’ dual roles as recruiters and data collectors, thus examining the relative contributions of Nonresponse Error variance and measurement Error variance among interviewers to total interviewer variance. However, this breakdown may depend on the interviewing technique: some techniques emphasize behaviors designed to reduce variation in the answers collected by interviewers more so than other techniques. The question of whether the contributions of these Error sources to total interviewer variance change for different interviewing techniques remains unanswered. Addressing this gap in knowledge has important implications for interviewing practice because the technique used could alter the relative contributions of variance in these Error sources to total interviewer variance. This article presents results from an experimental study mounted in Germany that was designed to answer this question about two specific interviewing techniques. A national sample of employed individuals was first selected from a database of official administrative records, then randomly assigned to interviewers who themselves were randomized to conduct either conversational interviewing (CI) or standardized interviewing (SI), and finally measured face-to-face on a variety of cognitively challenging survey questions with official values also available for verifying the accuracy of responses. We find that although Nonresponse Error variance does exist among interviewers for selected measures (especially respondent age in the CI group), measurement Error variance tends to be the more important source of total interviewer variance, regardless of whether interviewers are using CI or SI.

  • Total Survey Error in Practice - Undercoverage-Nonresponse trade-off
    Total Survey Error in Practice, 2017
    Co-Authors: Stephanie Eckman, Frauke Kreuter
    Abstract:

    Featuring a timely presentation of total survey Error (TSE), this edited volume introduces valuable tools for understanding and improving survey data quality in the context of evolving large-scale data sets. This book provides an overview of the TSE framework and current TSE research as related to survey design, data collection, estimation, and analysis. It recognizes that survey data affects many public policy and business decisions and thus focuses on the framework for understanding and improving survey data quality. The book also addresses issues with data quality in official statistics and in social, opinion, and market research as these fields continue to evolve, leading to larger and messier data sets. This perspective challenges survey organizations to find ways to collect and process data more efficiently without sacrificing quality. The volume consists of the most up-to-date research and reporting from over 70 contributors representing the best academics and researchers from a range of fields. The chapters are broken out into five main sections: The Concept of TSE and the TSE Paradigm, Implications for Survey Design, Data Collection and Data Processing Applications, Evaluation and Improvement, and Estimation and Analysis. Each chapter introduces and examines multiple Error sources, such as sampling Error, measurement Error, and Nonresponse Error, which often offer the greatest risks to data quality, while also encouraging readers not to lose sight of the less commonly studied Error sources, such as coverage Error, processing Error, and specification Error. The book also notes the relationships between Errors and the ways in which efforts to reduce one type can increase another, resulting in an estimate with larger total Error.

  • Total Survey Error in Practice
    2017
    Co-Authors: Paul P. Biemer, Frauke Kreuter, Lars E. Lyberg, Brad Edwards, Edith D. De Leeuw, Stephanie Eckman, N. Clyde Tucker, Brady T. West
    Abstract:

    This book provides an overview of the TSE framework and current TSE research as related to survey design, data collection, estimation, and analysis. It recognizes that survey data affects many public policy and business decisions and thus focuses on the framework for understanding and improving survey data quality. The book also addresses issues with data quality in official statistics and in social, opinion, and market research as these fields continue to evolve, leading to larger and messier data sets. This perspective challenges survey organizations to find ways to collect and process data more efficiently without sacrificing quality. The volume consists of the most up-to-date research and reporting from over 70 contributors representing the best academics and researchers from a range of fields. The chapters are broken out into five main sections: The Concept of TSE and the TSE Paradigm, Implications for Survey Design, Data Collection and Data Processing Applications, Evaluation and Improvement, and Estimation and Analysis. Each chapter introduces and examines multiple Error sources, such as sampling Error, measurement Error, and Nonresponse Error, which often offer the greatest risks to data quality, while also encouraging readers not to lose sight of the less commonly studied Error sources, such as coverage Error, processing Error, and specification Error. The book also notes the relationships between Errors and the ways in which efforts to reduce one type can increase another, resulting in an estimate with larger total Error.

  • “Interviewer” Effects in Face-to-Face Surveys: A Function of Sampling, Measurement Error, or Nonresponse?
    Journal of Official Statistics, 2013
    Co-Authors: Brady T. West, Frauke Kreuter, Ursula Jaenichen
    Abstract:

    Recent research has attempted to examine the proportion of interviewer variance that is due to interviewers systematically varying in their success in obtaining cooperation from respondents with varying characteristics (i.e., Nonresponse Error variance), rather than variance among interviewers in systematic measurement difficulties (i.e., measurement Error variance) - that is, whether correlated responses within interviewers arise due to variance among interviewers in the pools of respondents recruited, or variance in interviewer-specific mean response biases. Unfortunately, work to date has only considered data from a CATI survey, and thus suffers from two limitations: Interviewer effects are commonly much smaller in CATI surveys, and, more importantly, sample units are often contacted by several CATI interviewers before a final outcome (response or final refusal) is achieved. The latter introduces difficulties in assigning nonrespondents to interviewers, and thus interviewer variance components are only estimable under strong assumptions. This study aims to replicate this initial work, analyzing data from a national CAPI survey in Germany where CAPI interviewers were responsible for working a fixed subset of cases.