The experts below are selected from a list of 12,879 experts worldwide, ranked by the ideXlab platform.
Annelies G. Blom - One of the best experts on this subject based on the ideXlab platform.
- Response Quality in Nonprobability and Probability-based Online Panels
Sociological Methods & Research, 2020. Co-Authors: Carina Cornesse, Annelies G. Blom. Abstract: Recent years have seen a growing number of studies investigating the accuracy of nonprobability online panels; however, response quality in nonprobability online panels has not yet received much attention...
- Integrating probability and nonprobability samples for survey inference
Journal of Survey Statistics and Methodology, 2020. Co-Authors: Arkadiusz Wiśniowski, Diego Andres Perez Ruiz, Joseph W. Sakshaug, Annelies G. Blom. Abstract: Survey data collection costs have risen to a point where many survey researchers and polling companies are abandoning large, expensive probability-based samples in favor of less expensive nonprobability samples. The empirical literature suggests this strategy may be suboptimal for multiple reasons, among them that probability samples tend to outperform nonprobability samples on accuracy when assessed against population benchmarks. However, nonprobability samples are often preferred due to convenience and costs. Instead of forgoing probability sampling entirely, we propose a method of combining both probability and nonprobability samples in a way that exploits their strengths to overcome their weaknesses within a Bayesian inferential framework. Using simulated data, we evaluate supplementing inferences based on small probability samples with prior distributions derived from nonprobability data. We demonstrate that informative priors based on nonprobability data can lead to reductions in variances and mean squared errors for linear model coefficients. The method is also illustrated with actual probability and nonprobability survey data. A discussion of these findings, their implications for survey practice, and possible research extensions is provided in conclusion.
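The combining strategy the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' exact model: it assumes a known residual variance, a simple conjugate normal prior centered at the nonprobability estimate, and simulated data; the variable names and the prior spread `tau2` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# True model: y = 2.0 + 1.5 * x + noise
beta_true = np.array([2.0, 1.5])
sigma2 = 1.0  # residual variance, assumed known for this sketch

def draw_sample(n, bias=0.0):
    """Simulate a survey sample; `bias` shifts the intercept to mimic selection bias."""
    x = rng.normal(size=n)
    y = beta_true[0] + bias + beta_true[1] * x + rng.normal(scale=np.sqrt(sigma2), size=n)
    return np.column_stack([np.ones(n), x]), y

# Large, cheap nonprobability sample (slightly biased) and a small probability sample
X_np, y_np = draw_sample(5000, bias=0.3)
X_p, y_p = draw_sample(100)

# Prior mean: ML coefficients from the nonprobability sample
beta_np = np.linalg.lstsq(X_np, y_np, rcond=None)[0]

# Flat-prior (OLS) benchmark from the probability sample alone
XtX = X_p.T @ X_p
cov_ols = sigma2 * np.linalg.inv(XtX)

# Conjugate normal prior N(beta_np, tau2 * I); tau2 encodes trust in the prior
tau2 = 0.05
prior_prec = np.eye(2) / tau2
post_cov = np.linalg.inv(prior_prec + XtX / sigma2)
beta_post = post_cov @ (prior_prec @ beta_np + X_p.T @ y_p / sigma2)

# Posterior variances are necessarily smaller than the probability-only variances
print(np.diag(cov_ols), np.diag(post_cov))
```

The variance reduction is mechanical here (posterior precision adds prior precision to likelihood precision); whether the MSE also falls depends on how biased the nonprobability prior is, which is exactly the trade-off the paper studies.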
- A review of conceptual approaches and empirical evidence on probability and nonprobability sample survey research
Journal of Survey Statistics and Methodology, 2020. Co-Authors: Carina Cornesse, Annelies G. Blom, David Dutwin, Jon A. Krosnick, Edith D. De Leeuw, Stéphane Legleye, Josh Pasek, Darren Pennay, Benjamin Phillips, Joseph W. Sakshaug. Abstract: There is an ongoing debate in the survey research literature about whether and when probability and nonprobability sample surveys produce accurate estimates of a larger population. Statistical theory provides a justification for confidence in probability sampling as a function of the survey design, whereas inferences based on nonprobability sampling are entirely dependent on models for validity. This article reviews the current debate about probability and nonprobability sample surveys. We describe the conditions under which nonprobability sample surveys may provide accurate results in theory and discuss empirical evidence on which types of samples produce the highest accuracy in practice. From these theoretical and empirical considerations, we derive best-practice recommendations and outline paths for future research.
- Supplementing Small Probability Samples with Nonprobability Samples: A Bayesian Approach
Journal of Official Statistics, 2019. Co-Authors: Joseph W. Sakshaug, Diego Andres Perez Ruiz, Arkadiusz Wiśniowski, Annelies G. Blom. Abstract: Carefully designed probability-based sample surveys can be prohibitively expensive to conduct. As such, many survey organizations have shifted away from using expensive probability samples in favor of less expensive, but possibly less accurate, nonprobability web samples. However, their lower costs and abundant availability make them a potentially useful supplement to traditional probability-based samples. We examine this notion by proposing a method of supplementing small probability samples with nonprobability samples using Bayesian inference. We consider two semi-conjugate informative prior distributions for linear regression coefficients based on nonprobability samples: one accounting for the distance between maximum likelihood coefficients derived from parallel probability and nonprobability samples, and the second depending on the variability and size of the nonprobability sample. The method is evaluated in comparison with a reference prior through simulations and a real-data application involving multiple probability and nonprobability surveys fielded simultaneously using the same questionnaire. We show that the method reduces the variance and mean squared error (MSE) of coefficient estimates and model-based predictions relative to probability-only samples. Using actual and assumed cost data, we also show that the method can yield substantial cost savings (up to 55%) for a fixed MSE.
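One of the two priors described above accounts for the distance between coefficients estimated from parallel samples. The sketch below shows one plausible way such a distance-adjusted prior could behave; the functional form (`base_var` plus the squared gap), the known residual variance, and the simulated data are assumptions for illustration, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def sim(n, bias=0.0):
    """Simulate y = 1.0 + 0.8 * x + noise; `bias` mimics selection bias."""
    x = rng.normal(size=n)
    y = 1.0 + bias + 0.8 * x + rng.normal(size=n)
    return np.column_stack([np.ones(n), x]), y

X_np, y_np = sim(4000, bias=0.4)  # large nonprobability sample, biased intercept
X_p, y_p = sim(150)               # small parallel probability sample

beta_np = ols(X_np, y_np)
beta_p = ols(X_p, y_p)

# Distance-adjusted prior (illustrative form): centre the prior at the
# nonprobability estimate, but widen its variance by the squared gap to the
# probability estimate, so a badly biased nonprobability sample is
# automatically down-weighted.
base_var = 0.01
prior_var = base_var + (beta_np - beta_p) ** 2
prior_prec = np.diag(1.0 / prior_var)

sigma2 = 1.0  # residual variance, assumed known for the sketch
XtX = X_p.T @ X_p
post_cov = np.linalg.inv(prior_prec + XtX / sigma2)
beta_post = post_cov @ (prior_prec @ beta_np + X_p.T @ y_p / sigma2)

print(beta_np.round(3), beta_p.round(3), beta_post.round(3))
```

The intercept, whose parallel estimates disagree because of the injected bias, gets a wide prior and is pulled only weakly toward the nonprobability value, while the slope, where the estimates agree, is shrunk strongly.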
Carina Cornesse - One of the best experts on this subject based on the ideXlab platform.
- Response Quality in Nonprobability and Probability-based Online Panels
Sociological Methods & Research, 2020. Co-Authors: Carina Cornesse, Annelies G. Blom. Abstract: Recent years have seen a growing number of studies investigating the accuracy of nonprobability online panels; however, response quality in nonprobability online panels has not yet received much attention...
- A review of conceptual approaches and empirical evidence on probability and nonprobability sample survey research
Journal of Survey Statistics and Methodology, 2020. Co-Authors: Carina Cornesse, Annelies G. Blom, David Dutwin, Jon A. Krosnick, Edith D. De Leeuw, Stéphane Legleye, Josh Pasek, Darren Pennay, Benjamin Phillips, Joseph W. Sakshaug. Abstract: There is an ongoing debate in the survey research literature about whether and when probability and nonprobability sample surveys produce accurate estimates of a larger population. Statistical theory provides a justification for confidence in probability sampling as a function of the survey design, whereas inferences based on nonprobability sampling are entirely dependent on models for validity. This article reviews the current debate about probability and nonprobability sample surveys. We describe the conditions under which nonprobability sample surveys may provide accurate results in theory and discuss empirical evidence on which types of samples produce the highest accuracy in practice. From these theoretical and empirical considerations, we derive best-practice recommendations and outline paths for future research.
- Is there an association between survey characteristics and representativeness? A meta-analysis
Survey Research Methods, 2018. Co-Authors: Carina Cornesse, Michael Bosnjak. Abstract: How to achieve survey representativeness is a controversially debated issue in the field of survey methodology. Common questions include whether probability-based samples produce more representative data than nonprobability samples, whether the response rate determines the overall degree of survey representativeness, and which survey modes are effective in generating highly representative data. This meta-analysis contributes to this debate by synthesizing and analyzing the literature on two common measures of survey representativeness (R-indicators and descriptive benchmark comparisons). Our findings indicate that probability-based samples (compared to nonprobability samples), mixed-mode surveys (compared to single-mode surveys), and other-than-Web modes (compared to Web surveys) are more representative, respectively. In addition, we find a positive association between representativeness and both the response rate and the number of auxiliary variables used in representativeness assessments. Furthermore, we identify significant gaps in the research literature that we hope might encourage further research in this area.
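R-indicators, one of the two representativeness measures synthesized above, equal one minus twice the standard deviation of estimated response propensities. A minimal sketch, assuming a single categorical auxiliary variable and propensities estimated as within-group response rates (real applications typically fit a logistic model over several auxiliaries); the group labels and propensity values are made up:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated gross sample: one auxiliary variable (age group) known for
# everyone, with response propensity varying by group.
n = 10_000
age_group = rng.integers(0, 3, size=n)             # 0=young, 1=middle, 2=old
true_prop = np.array([0.2, 0.5, 0.7])[age_group]   # younger people respond less
responded = rng.random(n) < true_prop

# Estimate each person's response propensity from the auxiliary variable
# (here simply the observed response rate within their age group).
est_prop = np.empty(n)
for g in range(3):
    mask = age_group == g
    est_prop[mask] = responded[mask].mean()

# R-indicator: R = 1 - 2 * SD of estimated propensities. R = 1 means the
# response is fully representative with respect to the auxiliary
# information; lower values mean more selective nonresponse.
r_indicator = 1 - 2 * est_prop.std(ddof=1)
print(round(r_indicator, 3))
```

With propensities spread from 0.2 to 0.7 the indicator lands well below 1, flagging selective nonresponse; note the measure can only detect selectivity on variables observed for respondents and nonrespondents alike.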
Jon A. Krosnick - One of the best experts on this subject based on the ideXlab platform.
- A review of conceptual approaches and empirical evidence on probability and nonprobability sample survey research
Journal of Survey Statistics and Methodology, 2020. Co-Authors: Carina Cornesse, Annelies G. Blom, David Dutwin, Jon A. Krosnick, Edith D. De Leeuw, Stéphane Legleye, Josh Pasek, Darren Pennay, Benjamin Phillips, Joseph W. Sakshaug. Abstract: There is an ongoing debate in the survey research literature about whether and when probability and nonprobability sample surveys produce accurate estimates of a larger population. Statistical theory provides a justification for confidence in probability sampling as a function of the survey design, whereas inferences based on nonprobability sampling are entirely dependent on models for validity. This article reviews the current debate about probability and nonprobability sample surveys. We describe the conditions under which nonprobability sample surveys may provide accurate results in theory and discuss empirical evidence on which types of samples produce the highest accuracy in practice. From these theoretical and empirical considerations, we derive best-practice recommendations and outline paths for future research.
- National Surveys via RDD Telephone Interviewing Versus the Internet: Comparing Sample Representativeness and Response Quality
Public Opinion Quarterly, 2009. Co-Authors: LinChiat Chang, Jon A. Krosnick. Abstract: In a national field experiment, the same questionnaires were administered simultaneously by RDD telephone interviewing, by the Internet with a probability sample, and by the Internet with a nonprobability sample of people who volunteered to do surveys for money. The probability samples were more representative of the nation than the nonprobability sample in terms of demographics and electoral participation, even after weighting. The nonprobability sample was biased toward being highly engaged in and knowledgeable about the survey's topic (politics). The telephone data manifested more random measurement error, more survey satisficing, and more social desirability response bias than did the Internet data, and the probability Internet sample manifested more random error and satisficing than did the volunteer Internet sample. Practice at completing surveys increased reporting accuracy among the probability Internet sample, and deciding only to do surveys on topics of personal interest enhanced reporting accuracy in the nonprobability Internet sample. Thus, the nonprobability Internet method yielded the most accurate self-reports from the most biased sample, while the probability Internet sample manifested the optimal combination of sample composition accuracy and self-report accuracy. These results suggest that Internet data collection from a probability sample yields more accurate results than do...
Mick P. Couper - One of the best experts on this subject based on the ideXlab platform.
- Options for Conducting Web Surveys
Statistical Science, 2017. Co-Authors: Matthias Schonlau, Mick P. Couper. Abstract: Web surveys can be conducted relatively fast and at relatively low cost. However, Web surveys are often conducted with nonprobability samples and, therefore, a major concern is generalizability. There are two main approaches to address this concern: one, find a way to conduct Web surveys on probability samples without losing most of the cost and speed advantages (e.g., by using mixed-mode approaches or probability-based panel surveys); two, make adjustments (e.g., propensity scoring, post-stratification, GREG) to nonprobability samples using auxiliary variables. We review both of these approaches as well as lesser-known ones such as respondent-driven sampling. There are many different ways Web surveys can solve the challenge of generalizability. Rather than adopting a one-size-fits-all approach, we conclude that the choice of approach should be commensurate with the purpose of the study.
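The second approach above, adjusting a nonprobability sample with auxiliary variables, can be illustrated with the simplest such adjustment, post-stratification. The population shares, group labels, and outcome model below are made-up values for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Population benchmark: share of each education group (known, e.g. from a census)
pop_share = {"low": 0.4, "mid": 0.4, "high": 0.2}

# Nonprobability web sample: highly educated people are overrepresented
n = 2000
groups = rng.choice(["low", "mid", "high"], size=n, p=[0.2, 0.3, 0.5])
# Hypothetical survey outcome, correlated with education
outcome = (groups == "high") * 0.4 + (groups == "mid") * 0.2 + rng.normal(0, 0.1, n)

# Post-stratification: weight each respondent by population share / sample share
sample_share = {g: np.mean(groups == g) for g in pop_share}
weights = np.array([pop_share[g] / sample_share[g] for g in groups])

unweighted = outcome.mean()
weighted = np.average(outcome, weights=weights)

# Population target here is 0.4*0.0 + 0.4*0.2 + 0.2*0.4 = 0.16
print(round(unweighted, 3), round(weighted, 3))
```

The weighted mean recovers the population target because the outcome depends only on the weighting variable; when selection also depends on unobserved factors, post-stratification (like any auxiliary-variable adjustment) can leave residual bias, which is why the review stresses matching the approach to the study's purpose.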
J. Scott Long - One of the best experts on this subject based on the ideXlab platform.
- Can Nonprobability Samples Be Used for Social Science Research? A Cautionary Tale
Survey Research Methods, 2019. Co-Authors: Elizabeth S. Zack, John M. Kennedy, J. Scott Long. Abstract: Survey researchers and social scientists are trying to understand the appropriate use of nonprobability samples as substitutes for probability samples in social science research. While cognizant of the challenges presented by nonprobability samples, scholars increasingly rely on these samples due to their low cost and speed of data collection. This paper contributes to the growing literature on the appropriate use of nonprobability samples by comparing two online nonprobability samples, Amazon's Mechanical Turk (MTurk) and a Qualtrics Panel, with a gold-standard nationally representative probability sample, the GSS. Most research in this area focuses on determining the best techniques to improve point estimates from nonprobability samples, often using gold-standard surveys or census data to determine the accuracy of the point estimates. This paper differs from that line of research in that we examine how probability and nonprobability samples differ when used in multivariate analysis, the research technique used by many social scientists. Additionally, we examine whether restricting each sample to a population well represented in MTurk (Americans age 45 and under) improves MTurk's estimates. We find that, while Qualtrics and MTurk differ somewhat from the GSS, Qualtrics outperforms MTurk in both univariate and multivariate analysis. Further, restricting the samples substantially improves MTurk's estimates, almost closing the gap with Qualtrics. With both Qualtrics and MTurk, we find a risk of false positives. Our findings suggest that these online nonprobability samples may sometimes be 'fit for purpose,' but should be used with caution.