Nonresponse Bias

The Experts below are selected from a list of 321 Experts worldwide, ranked by the ideXlab platform.

Andy Peytchev - One of the best experts on this subject based on the ideXlab platform.

  • Responsive and Adaptive Survey Design: Use of Bias Propensity During Data Collection to Reduce Nonresponse Bias
    Journal of Survey Statistics and Methodology, 2020
    Co-Authors: Andy Peytchev, Daniel Pratt, Michael A. Duprey
    Abstract:

    Reduction in Nonresponse Bias has been a key focus in responsive and adaptive survey designs, through multiple phases of data collection, each defined by a different protocol, and targeting interventions to a subset of sample elements. Key in this approach is the identification of nonrespondents who, if interviewed, can reduce Nonresponse Bias in survey estimates. From a design perspective, we need to identify an appropriate model to select targeted cases, in addition to an effective intervention (change in protocol). From an evaluation perspective, we need to compare estimates to a control condition that is often omitted from study designs, in addition to the need for benchmark estimates for key survey measures to provide estimates of Nonresponse Bias. We introduced a Bias propensity approach for the selection of sample members to reduce Nonresponse Bias. Unlike a response propensity approach, in which the objective is to maximize the prediction of Nonresponse, this new approach deliberately excludes strong predictors of Nonresponse that are uncorrelated with survey measures and uses covariates that are of substantive interest to the study. We also devised an analytic approach to simulate which sample members would have responded in a control condition. This study also provided a rare opportunity to estimate Nonresponse Bias, using rich sampling frame information, prior-round survey data, and data from extensive Nonresponse follow-up. The Bias propensity model yielded reasonable fit despite the exclusion of the strongest predictors of Nonresponse. The intervention was found to be effective in increasing participation among identified sample members. On average, the responsive and adaptive survey design reduced Nonresponse Bias by more than one-quarter, almost one percentage point, regardless of the choice of benchmark estimates. Effort under the control condition did not reduce Nonresponse Bias. While the results are strongly encouraging, we argue for replication with varied populations and methods.
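
    The Bias propensity selection step described in the abstract above can be illustrated with a short sketch. This is an editorial illustration, not the authors' code: the column names, the simulated data, and the 30 percent targeting cutoff are all assumptions. The move it demonstrates is fitting the propensity model on substantively relevant covariates while deliberately leaving out a strong Nonresponse-only predictor.

        # Sketch of a Bias propensity selection step (illustrative only).
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(42)
        n = 1000
        frame = pd.DataFrame({
            # Substantive covariates correlated with key survey measures.
            "prior_wave_outcome": rng.normal(size=n),
            "frame_income_band": rng.integers(1, 6, size=n),
            # Strong Nonresponse-only predictor; a *response* propensity model
            # would include it, a *Bias* propensity model leaves it out.
            "ever_contacted": rng.integers(0, 2, size=n),
            "responded": rng.integers(0, 2, size=n),
        })

        bias_covariates = ["prior_wave_outcome", "frame_income_band"]
        model = LogisticRegression().fit(frame[bias_covariates], frame["responded"])
        frame["bias_propensity"] = model.predict_proba(frame[bias_covariates])[:, 1]

        # Flag the nonrespondents least likely to respond for the protocol change.
        cutoff = frame["bias_propensity"].quantile(0.30)
        targeted = frame[(frame["responded"] == 0) & (frame["bias_propensity"] <= cutoff)]
        print(f"{len(targeted)} cases flagged for the targeted intervention")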

  • Prioritizing Low Propensity Sample Members in a Survey: Implications for Nonresponse Bias
    Survey Practice, 2014
    Co-Authors: Jeffrey Rosen, Andy Peytchev, Joseph Murphy, Tommy Holder, Jill A. Dever, Debbie R. Herget, Daniel J. Pratt
    Abstract:

    Many survey methodologists now agree that simply striving to increase the response rate is not an optimal approach for reducing Nonresponse Bias in the final survey estimates. Targeting sample cases that are underrepresented can help reduce Nonresponse Bias. The challenge, however, lies in deciding which cases to prioritize when resources are finite and reducing the risk of Nonresponse Bias is the goal. We present an approach that identifies, prioritizes, and intervenes on low-propensity-to-respond cases during Nonresponse follow-up. Targeted cases were assigned to in-person interviewing. Our results suggest that in-person interviewing can be an effective approach for gaining participation from low-propensity cases. We also find that targeting low-propensity cases could improve representation and therefore should be considered by survey practitioners as a tool for Nonresponse Bias reduction.

  • Reduction of Nonresponse Bias through Case Prioritization
    Survey Research Methods, 2010
    Co-Authors: Andy Peytchev, Jeffrey Rosen, Joseph Murphy, Sarah Riley, Mark R. Lindblad
    Abstract:

    How response rates are increased can determine the remaining Nonresponse Bias in estimates. Studies often target sample members who are most likely to be interviewed in order to maximize response rates. Instead, we suggest targeting likely nonrespondents from the outset of a study with a different protocol to minimize Nonresponse Bias. To inform the targeting of sample members, various sources of information can be utilized: paradata collected by interviewers, demographic and substantive survey data from prior waves, and administrative data. Using these data, the likelihood of any sample member becoming a nonrespondent is estimated, and a more effective, often more costly, survey protocol can be employed for those sample cases least likely to respond, to gain respondent cooperation. This paper describes the two components of this approach to reducing Nonresponse Bias. We demonstrate assignment of case priority based on response propensity models and present empirical results from the use of a different protocol for prioritized cases. In a field data collection, a random half of cases with low response propensity received higher priority and increased resources. Resources for high-priority cases were allocated as interviewer incentives. We find that we were relatively successful in predicting response outcomes prior to the survey and stress the need to test interventions in order to benefit from case prioritization.
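
    The randomized half-sample design in this abstract can be sketched as follows. The propensity cutoff, column names, and the simulated cooperation boost are assumptions for illustration, not the study's actual values; the point is the mechanic of splitting low-propensity cases at random between a prioritized protocol and the standard one.

        # Sketch: randomize half of the low-propensity cases to a prioritized
        # protocol and compare response rates. All values are simulated.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(7)
        cases = pd.DataFrame({"propensity": rng.uniform(size=500)})

        low = cases["propensity"] < 0.25  # assumed low-propensity cutoff
        cases["prioritized"] = False
        low_idx = cases.index[low]
        treated = rng.choice(low_idx, size=len(low_idx) // 2, replace=False)
        cases.loc[treated, "prioritized"] = True  # e.g., interviewer incentives

        # Simulated outcomes: prioritized cases get a hypothetical boost.
        p = cases["propensity"] + np.where(cases["prioritized"], 0.10, 0.0)
        cases["responded"] = rng.uniform(size=len(cases)) < p

        print(cases[low].groupby("prioritized")["responded"].mean())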

  • Not All Survey Effort Is Equal: Reduction of Nonresponse Bias and Nonresponse Error
    2009
    Co-Authors: Andy Peytchev, Rodney K. Baxter, Lisa R. Carley-Baxter
    Abstract:

    Nonexperimental and experimental studies have shown a lack of association between survey effort and Nonresponse Bias. This does not necessarily mean, however, that additional effort could not reduce Nonresponse Bias. Theories on Nonresponse suggest using different recruiting methods for additional survey effort in order to address Nonresponse Bias. This study looks at changes in survey estimates as a function of making additional calls under the same protocol and additional calls under a different protocol. Respondents who were interviewed as a result of more than five call attempts were not significantly different on any of the key survey variables from those interviewed with fewer than five calls. Those interviewed under a different survey protocol, however, differed on 5 of 12 measures. Additional interviews under both the same and different protocols contributed to the reduction of total Nonresponse error. In sum, the use of multiple protocols for part of the survey effort increased the response rate, changed point estimates, and achieved lower total Nonresponse error. Future work is needed on optimizing survey designs that implement multiple survey protocols.
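
    The comparison this study runs, estimates by level of effort under the same protocol versus estimates under a switched protocol, amounts to two-group tests on key measures. A toy sketch with simulated data and assumed variable names (the five-call threshold mirrors the abstract):

        # Sketch: compare a key survey estimate by effort level and by protocol.
        import numpy as np
        import pandas as pd
        from scipy import stats

        rng = np.random.default_rng(3)
        resp = pd.DataFrame({
            "call_attempts": rng.integers(1, 12, size=800),
            "protocol": rng.choice(["original", "alternate"], size=800),
            "key_measure": rng.normal(size=800),
        })

        early = resp.loc[resp["call_attempts"] <= 5, "key_measure"]
        late = resp.loc[resp["call_attempts"] > 5, "key_measure"]
        print("same-protocol effort:", stats.ttest_ind(early, late, equal_var=False))

        orig = resp.loc[resp["protocol"] == "original", "key_measure"]
        alt = resp.loc[resp["protocol"] == "alternate", "key_measure"]
        print("protocol switch:", stats.ttest_ind(orig, alt, equal_var=False))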

  • Combining Information from Multiple Modes to Reduce Nonresponse Bias
    2005
    Co-Authors: Mick P. Couper, Andy Peytchev, Roderick J. A. Little, Victor J. Strecher, Kendra Rothert
    Abstract:

    Over 3,000 subjects were recruited in three U.S. regions for a randomized experiment of an online weight management intervention. Participants were sent invitations to web survey reassessments after 3, 6, and 12 months. High and increasing Nonresponse to the three follow-up surveys created the potential for Nonresponse Bias in key program outcomes. A subsample of the nonrespondents at the one-year follow-up was selected for a Nonresponse study and randomly assigned to a short telephone or mail survey, in order to evaluate cost efficiency, the differential effectiveness of mode combinations in reducing Nonresponse Bias, and measurement differences by mode. The responses from the Nonresponse study were then to be added to the baseline measures and used in an imputation model. Differences between the telephone and mail survey reports posed an added methodological problem, allowing further exploration of the sensitivity of the results not just to Nonresponse but also to the mode used in the second stage, through comparison of different imputation models. Implications are discussed for cost, Nonresponse Bias, measurement differences, and post-imputation variance estimates.

Joseph Murphy - One of the best experts on this subject based on the ideXlab platform.

  • Prioritizing Low Propensity Sample Members in a Survey: Implications for Nonresponse Bias
    Survey Practice, 2014
    Co-Authors: Jeffrey Rosen, Andy Peytchev, Joseph Murphy, Tommy Holder, Jill A. Dever, Debbie R. Herget, Daniel J. Pratt
    Abstract:

    Many survey methodologists now agree that simply striving to increase the response rate is not an optimal approach for reducing Nonresponse Bias in the final survey estimates. Targeting sample cases that are underrepresented can help reduce Nonresponse Bias. The challenge, however, lies in deciding which cases to prioritize when resources are finite and reducing the risk of Nonresponse Bias is the goal. We present an approach that identifies, prioritizes, and intervenes on low-propensity-to-respond cases during Nonresponse follow-up. Targeted cases were assigned to in-person interviewing. Our results suggest that in-person interviewing can be an effective approach for gaining participation from low-propensity cases. We also find that targeting low-propensity cases could improve representation and therefore should be considered by survey practitioners as a tool for Nonresponse Bias reduction.

  • Reduction of Nonresponse Bias through Case Prioritization
    Survey Research Methods, 2010
    Co-Authors: Andy Peytchev, Jeffrey Rosen, Joseph Murphy, Sarah Riley, Mark R. Lindblad
    Abstract:

    How response rates are increased can determine the remaining Nonresponse Bias in estimates. Studies often target sample members who are most likely to be interviewed in order to maximize response rates. Instead, we suggest targeting likely nonrespondents from the outset of a study with a different protocol to minimize Nonresponse Bias. To inform the targeting of sample members, various sources of information can be utilized: paradata collected by interviewers, demographic and substantive survey data from prior waves, and administrative data. Using these data, the likelihood of any sample member becoming a nonrespondent is estimated, and a more effective, often more costly, survey protocol can be employed for those sample cases least likely to respond, to gain respondent cooperation. This paper describes the two components of this approach to reducing Nonresponse Bias. We demonstrate assignment of case priority based on response propensity models and present empirical results from the use of a different protocol for prioritized cases. In a field data collection, a random half of cases with low response propensity received higher priority and increased resources. Resources for high-priority cases were allocated as interviewer incentives. We find that we were relatively successful in predicting response outcomes prior to the survey and stress the need to test interventions in order to benefit from case prioritization.

  • Nonresponse Bias in a mail survey of physicians.
    Evaluation & the Health Professions, 2007
    Co-Authors: Emily Mcfarlane, Joseph Murphy, Murrey Olmsted, Craig A. Hill
    Abstract:

    With the increased pressure on survey researchers to achieve high response rates, it is critical to explore issues related to Nonresponse. In this study, the authors examined the effects of Nonresponse Bias in a mail survey of physicians (N = 3,400). Because slightly more than one half of the sample did not respond to the survey, there was potential for Bias if nonresponders differed significantly from responders with respect to key demographic and practice variables. They analyzed response status and timing of response with respect to five variables: gender, region, specialty, urbanicity, and survey length. The potential consequences of Nonresponse Bias on the survey estimates were then analyzed. Men were more likely to respond, as were physicians receiving a shorter questionnaire. Repeated follow-up attempts reduced gender response Bias because male physicians were more likely to be early responders. Overall, higher response rates were not associated with lower Nonresponse Bias.
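
    The response-status analysis described here, responders versus nonresponders compared on frame variables such as gender or specialty, is conventionally a contingency-table test. A minimal sketch with simulated data standing in for the physician frame:

        # Sketch: test whether response status is associated with a frame variable.
        import numpy as np
        import pandas as pd
        from scipy.stats import chi2_contingency

        rng = np.random.default_rng(11)
        sample = pd.DataFrame({
            "responded": rng.integers(0, 2, size=3400),
            "gender": rng.choice(["female", "male"], size=3400),
        })

        table = pd.crosstab(sample["gender"], sample["responded"])
        chi2, pval, dof, _ = chi2_contingency(table)
        print(table)
        print(f"chi2={chi2:.2f}, p={pval:.3f}")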

Beverly Snaith - One of the best experts on this subject based on the ideXlab platform.

  • Estimating the effect of Nonresponse Bias in a survey of hospital organizations.
    Evaluation & the Health Professions, 2013
    Co-Authors: Emily F. Lewis, Maryann Hardy, Beverly Snaith
    Abstract:

    Nonresponse Bias in survey research can result in misleading or inaccurate findings, and assessment of Nonresponse Bias is advocated to determine response sample representativeness. Four methods of assessing Nonresponse Bias (analysis of known characteristics of a population, subsampling of nonresponders, wave analysis, and linear extrapolation) were applied to the results of a postal survey of U.K. hospital organizations. The purpose was to establish whether validated methods for assessing Nonresponse Bias at the individual level can be successfully applied to an organizational-level survey. The aim of the initial survey was to investigate trends in the implementation of radiographer abnormality detection schemes, and a response rate of 63.7% (325/510) was achieved. This study identified conflicting trends in the outcomes of Nonresponse Bias analysis between the different methods applied, and we were unable to validate the continuum of resistance theory as applied to organizational survey data. Further work is required to ensure established Nonresponse Bias analysis approaches can be successfully applied to organizational survey data. Until then, it is suggested that a combination of methods be used to enhance the rigor of survey analysis.
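
    Two of the four assessment methods named above lend themselves to a small sketch: under the continuum-of-resistance assumption, later response waves stand in for nonresponders, and a trend fitted across waves is extrapolated one step past the last wave. The wave means below are invented for illustration.

        # Sketch: wave analysis with linear extrapolation (continuum of resistance).
        import numpy as np

        waves = np.array([1, 2, 3])                # first mailing, two reminders
        wave_means = np.array([0.62, 0.58, 0.55])  # estimate of interest per wave (toy)

        slope, intercept = np.polyfit(waves, wave_means, deg=1)
        nonresponder_proxy = slope * 4 + intercept  # one wave past the last
        print(f"extrapolated nonresponder estimate: {nonresponder_proxy:.3f}")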

Robert M. Groves - One of the best experts on this subject based on the ideXlab platform.

  • Support for the Survey Sponsor and Nonresponse Bias
    Public Opinion Quarterly, 2012
    Co-Authors: Robert M. Groves, Stanley Presser, Roger Tourangeau, Mick P. Couper, Eleanor Singer, Brady T. West, Christopher Toppe
    Abstract:

    In an experiment designed to examine Nonresponse Bias, either the March of Dimes or the University of Michigan was identified as the sponsor of a survey mailed to individuals whose level of support for the March of Dimes was known. The response rate was higher to the university survey, but support for the March of Dimes increased survey participation to the same extent in both conditions. As a result of the overrepresentation of supporters of the organization, both surveys showed Nonresponse Bias for variables linked to support. The Bias was greater, however, when the sponsor was identified as the March of Dimes. Thus, the university sponsor brought in not only more of the …

  • The Impact of Nonresponse Rates on Nonresponse Bias A Meta-Analysis
    Public Opinion Quarterly, 2008
    Co-Authors: Robert M. Groves, Emilia Peytcheva
    Abstract:

    Fifty-nine methodological studies were designed to estimate the magnitude of Nonresponse Bias in statistics of interest. These studies use a variety of designs: sampling frames with rich variables, data from administrative records matched to sample cases, use of screening-interview data to describe nonrespondents to main interviews, follow-up of nonrespondents to initial phases of field effort, and measures of behavioral intentions to respond to a survey. This permits exploration of which circumstances produce a relationship between Nonresponse rates and Nonresponse Bias and which do not. The predictors are design features of the surveys, characteristics of the sample, and attributes of the survey statistics computed in the surveys.
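
    The quantity these studies estimate is conventionally written in the deterministic form below (standard survey-methodology notation, added editorially rather than quoted from the paper): the Bias of the respondent mean is the Nonresponse rate times the respondent-nonrespondent gap.

        % n sample cases, m nonrespondents; \bar{y}_r, \bar{y}_m, \bar{y}_n are the
        % respondent, nonrespondent, and full-sample means.
        \[
          \operatorname{bias}(\bar{y}_r) \;=\; \bar{y}_r - \bar{y}_n
          \;=\; \frac{m}{n}\,\bigl(\bar{y}_r - \bar{y}_m\bigr)
        \]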

  • Experiments in Producing Nonresponse Bias
    Public Opinion Quarterly, 2006
    Co-Authors: Robert M. Groves, Stanley Presser, Roger Tourangeau, Mick P. Couper, Eleanor Singer, Giorgina Piani Acosta, Lindsay D. Nelson
    Abstract:

    While Nonresponse rates in household surveys are increasing in most industrialized nations, the increasing rates do not always produce Nonresponse Bias in survey estimates. The linkage between Nonresponse rates and Nonresponse Bias arises from the presence of a covariance between response propensity and the survey variables of interest. To understand the covariance term, researchers must think about the common influences on response propensity and the survey variable. Three variables appear to be especially relevant in this regard: interest in the survey topic, reactions to the survey sponsor, and the use of incentives. A set of randomized experiments tests whether those likely to be interested in the stated survey topic participate at higher rates and whether Nonresponse Bias on estimates involving variables central to the survey topic is affected by this. The experiments also test whether incentives disproportionately increase the participation of those less interested in the topic. The experiments show mixed results in support of these key hypotheses.
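
    The covariance linkage this abstract describes is the stochastic form of the Nonresponse Bias expression (again standard notation, an editorial addition): the Bias of the respondent mean is approximately the covariance between response propensity and the survey variable, divided by the mean propensity.

        % \rho_i is case i's response propensity, y_i the survey variable,
        % \bar{\rho} the mean propensity over the sample.
        \[
          \operatorname{bias}(\bar{y}_r) \;\approx\; \frac{\operatorname{Cov}(\rho, y)}{\bar{\rho}}
        \]
        % If Cov(rho, y) = 0, even a low response rate produces no Bias in the
        % respondent mean, which is why rates alone do not determine the Bias.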

  • Nonresponse Rates and Nonresponse Bias in Household Surveys
    Public Opinion Quarterly, 2006
    Co-Authors: Robert M. Groves
    Abstract:

    Many surveys of the U.S. household population are experiencing higher refusal rates. Nonresponse can, but need not, induce Nonresponse Bias in survey estimates. Recent empirical findings illustrate cases where the linkage between Nonresponse rates and Nonresponse Biases is absent. Despite this, professional standards continue to urge high response rates. Statistical expressions of Nonresponse Bias can be translated into causal models to guide hypotheses about when Nonresponse causes Bias. Alternative designs to measure Nonresponse Bias exist, providing different but incomplete information about the nature of the Bias. A synthesis of research studies estimating Nonresponse Bias shows that the Bias is often present. A logical question at this moment in history is what advantage probability sample surveys have if they suffer from high Nonresponse rates. Since postsurvey adjustment for Nonresponse requires auxiliary variables, the answer depends on the nature of the design and the quality of the auxiliary variables.

Kristen Olson - One of the best experts on this subject based on the ideXlab platform.

  • Survey Participation, Nonresponse Bias, Measurement Error Bias, and Total Bias
    Public Opinion Quarterly, 2006
    Co-Authors: Kristen Olson
    Abstract:

    A common hypothesis about practices to reduce survey Nonresponse is that those persons brought into the respondent pool through persuasive efforts may provide data filled with measurement error. Two questions flow from this hypothesis. First, does the mean square error of a statistic increase when sample persons who are less likely to be contacted or cooperate are incorporated into the respondent pool? Second, do Nonresponse Bias estimates made on the respondents, using survey reports instead of records, provide accurate information about Nonresponse Bias? Using a unique data set, the Wisconsin Divorce Study, with divorce records as the frame and questions about the frame information included in the questionnaire, this article takes a first look into these two issues. We find that the relationship between Nonresponse Bias, measurement error Bias, and response propensity is statistic-specific and specific to the type of Nonresponse. Total Bias tends to be lower on estimates calculated using all respondents, compared with those with only the highest contact and cooperation propensities, and Nonresponse Bias analyses based on respondents yield conclusions similar to those based on records. Finally, we find that error properties of statistics may differ from error properties of the individual variables used to calculate the statistics.
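
    Olson's total-Bias framing can be summarized with the usual decomposition (standard notation, an editorial addition rather than the article's own formula): the total Bias of a respondent mean splits into a Nonresponse component and a measurement error component, and mean square error adds variance on top of the squared Bias.

        % \mu is the true mean, \bar{Y}_r the error-free respondent mean, and
        % \bar{y}_r the respondent mean as measured.
        \[
          \underbrace{\bar{y}_r - \mu}_{\text{total Bias}}
          \;=\; \underbrace{(\bar{Y}_r - \mu)}_{\text{Nonresponse Bias}}
          \;+\; \underbrace{(\bar{y}_r - \bar{Y}_r)}_{\text{measurement error Bias}}
        \]
        \[
          \operatorname{MSE}(\bar{y}_r) \;=\; \operatorname{Var}(\bar{y}_r) + \operatorname{bias}(\bar{y}_r)^2
        \]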