The experts below are selected from a list of 99 experts worldwide ranked by the ideXlab platform.
Gerald Albaum - One of the best experts on this subject based on the ideXlab platform.
-
A multi-group analysis of online survey respondent data quality: comparing a regular USA consumer panel to MTurk samples
Journal of Business Research, 2016. Co-authors: Scott M Smith, Catherine A Roster, Linda L Golden, Gerald Albaum. Abstract: With the exploding use of Internet surveys, research efforts and data quality are increasingly subject to the effects of respondents who do not give the required attention to survey questions, who speed through the survey, or who intentionally cheat with their answers. We investigate respondent integrity and data quality for samples drawn from a "Regular" online panel and from Amazon's MTurk. New metrics for assessing sample integrity and online data quality are introduced. Overall, MTurk respondents in both respondent groups took less time to answer questions. The non-USA MTurk group deviated most from correct answers in attention-filter questions and had more duplicate IP addresses. In addition, the results from the three Internet sample sources are substantively different. The choice of an Internet survey sample vendor is critical, as it can impact sample composition, respondent integrity, data quality, data structure, and substantive results.
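The screening signals this abstract names (speeding through the survey, failing attention-filter questions, duplicate IP addresses) can be combined into a simple pre-analysis filter. The sketch below is illustrative only, not the paper's actual metrics; the record fields and the 120-second speed threshold are hypothetical assumptions.

```python
from collections import Counter

def flag_suspect_respondents(respondents, min_seconds=120):
    """Return the set of respondent ids that trip any quality flag.

    Each respondent is a dict with hypothetical fields:
    "id", "ip", "seconds" (completion time), "passed_attention_check".
    """
    ip_counts = Counter(r["ip"] for r in respondents)
    flagged = set()
    for r in respondents:
        speeding = r["seconds"] < min_seconds              # sped through the survey
        failed_filter = not r["passed_attention_check"]    # wrong attention-filter answer
        shared_ip = ip_counts[r["ip"]] > 1                 # duplicate IP address
        if speeding or failed_filter or shared_ip:
            flagged.add(r["id"])
    return flagged

sample = [
    {"id": 1, "ip": "10.0.0.1", "seconds": 300, "passed_attention_check": True},
    {"id": 2, "ip": "10.0.0.2", "seconds": 45,  "passed_attention_check": True},
    {"id": 3, "ip": "10.0.0.2", "seconds": 280, "passed_attention_check": False},
]
print(sorted(flag_suspect_respondents(sample)))  # [2, 3]
```

Respondent 2 trips the speeding and shared-IP flags, respondent 3 the attention-filter and shared-IP flags; respondent 1 passes all three checks.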
Scott M Smith - One of the best experts on this subject based on the ideXlab platform. Co-author of the Journal of Business Research (2016) study listed above.
Catherine A Roster - One of the best experts on this subject based on the ideXlab platform. Co-author of the Journal of Business Research (2016) study listed above.
Linda L Golden - One of the best experts on this subject based on the ideXlab platform. Co-author of the Journal of Business Research (2016) study listed above.
Jun Liang - One of the best experts on this subject based on the ideXlab platform.
-
On cost-aware biased respondent group selection for minority opinion survey
Discrete Mathematics, Algorithms and Applications, 2016. Co-authors: Wei Wang, Matthew Tetteh, Jun Liang. Abstract: This paper discusses a new approach that uses a specially constructed social relation graph with high homophily to select a survey respondent group under a limited budget such that the result of the survey is biased toward minority opinions. This approach has a wide range of potential applications, e.g., collecting diversified complaints from customers while most of them are satisfied, but it has hardly been investigated. We formulate the problem of computing such a group as the p-biased-representative selection problem (p-BRSP), where p represents the size of the group, constrained by the available budget. This problem has two independent optimization goals and is therefore difficult to deal with. We introduce two polynomial-time algorithms for the problem, each of which has an approximation ratio with respect to one of the objectives when the other optimization objective is substituted with a constraint. Under the substituted constraint, we prove that the first algorithm is an O(ln Δ)-approximation (which i...
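The abstract does not spell out the two algorithms, but the selection idea can be illustrated with a toy greedy heuristic: under high homophily, minority opinions tend to concentrate in small clusters, so node degree is a cheap proxy for cluster size. Everything below (the graph encoding, the degree-based ranking, the cost model) is an assumption for illustration, not the paper's O(ln Δ)-approximation algorithm.

```python
def greedy_biased_selection(graph, cost, p, budget):
    """Greedily pick up to p respondents without exceeding the budget.

    graph : dict mapping each node to the set of its neighbours
    cost  : dict mapping each node to the cost of surveying it

    Candidates are ranked by degree (low degree ~ small, likely-minority
    cluster under high homophily), with ties broken by cost, and taken
    greedily while they fit the remaining budget.
    """
    selected, spent = [], 0
    for v in sorted(graph, key=lambda v: (len(graph[v]), cost[v])):
        if len(selected) == p:
            break
        if spent + cost[v] <= budget:
            selected.append(v)
            spent += cost[v]
    return selected

# Toy graph: {a, b} is a small (minority) cluster, {c, d, e} a larger one.
graph = {"a": {"b"}, "b": {"a"},
         "c": {"d", "e"}, "d": {"c", "e"}, "e": {"c", "d"}}
cost = {"a": 2, "b": 2, "c": 1, "d": 1, "e": 1}
print(greedy_biased_selection(graph, cost, p=2, budget=4))  # ['a', 'b']
```

With p = 2 and budget 4, the heuristic picks both members of the small cluster, biasing the sample toward the minority group as the problem statement intends.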
-
Biased respondent group selection under limited budget for minority opinion survey
4th International Conference on Computational Social Networks (CSoNet 2015), 2015. Co-authors: Wei Wang, Matthew Tetteh, Jun Liang, Soyoon Park. Abstract: This paper discusses a new approach that uses the information from a special social network with high homophily to select a survey respondent group under a limited budget such that the result of the survey is biased toward minority opinions. This approach has a wide range of potential applications, e.g., collecting complaints from the customers of a new product while most of them are satisfied. We formally define the problem of computing such a group with better utilization as the p-biased-representative selection problem (p-BRSP). This problem has two separate objectives and is difficult to deal with. Thus, we also propose a new unified objective, which is a function of the two optimization objectives. Most importantly, we introduce two polynomial-time heuristic algorithms for the problem, each of which has an approximation ratio with respect to one of the objectives.