
The Generalizability of Survey Experiments*

Published online by Cambridge University Press: 12 January 2016

Kevin J. Mullinix
Department of Government and Justice Studies, Appalachian State University, Boone, NC 28608, USA, e-mail: kevin.mullinix@gmail.com
Thomas J. Leeper
Department of Government, London School of Economics and Political Science, London, UK, e-mail: thosjleeper@gmail.com
James N. Druckman
Department of Political Science, Northwestern University, Scott Hall, 601 University Place, Evanston, IL 60208, USA, e-mail: druckman@northwestern.edu
Jeremy Freese
Department of Sociology, Northwestern University, 1810 Chicago Avenue, Evanston, IL 60208, USA, e-mail: jfreese@northwestern.edu

Abstract

Survey experiments have become a central methodology across the social sciences. Researchers can combine experiments' causal power with the generalizability of population-based samples. Yet, because population-based samples are expensive, much research relies on convenience samples (e.g., students, online opt-in samples). The emergence of affordable but non-representative online samples has reinvigorated debates about the external validity of experiments. We conduct two studies of how experimental treatment effects obtained from convenience samples compare to effects produced by population samples. In Study 1, we compare effect estimates from four different types of convenience samples and a population-based sample. In Study 2, we analyze treatment effects obtained from 20 experiments implemented on a population-based sample and on Amazon's Mechanical Turk (MTurk). The results reveal considerable similarity between many treatment effects obtained from convenience samples and those obtained from nationally representative population-based samples. While the results thus bolster confidence in the utility of convenience samples, we conclude with guidance on how a multitude of samples can be used to advance scientific knowledge.
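The comparison at the heart of both studies can be illustrated with a short sketch. The code below is not the authors' replication code; the sample sizes, treatment effects, and data are simulated purely for illustration. It estimates a difference-in-means average treatment effect (ATE) in a hypothetical convenience sample and a hypothetical population-based sample, and then tests the difference between the two estimates directly rather than comparing their separate significance levels.

import numpy as np

rng = np.random.default_rng(0)

def ate_and_se(outcome, treated):
    # Difference-in-means ATE and its standard error (unequal-variance formula).
    y1 = outcome[treated == 1]
    y0 = outcome[treated == 0]
    ate = y1.mean() - y0.mean()
    se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
    return ate, se

# Simulated stand-ins for a convenience (e.g., MTurk) sample and a
# population-based sample; sizes and true effects are hypothetical.
n_conv, n_pop = 600, 1200
t_conv = rng.integers(0, 2, n_conv)   # random assignment, convenience sample
t_pop = rng.integers(0, 2, n_pop)     # random assignment, population sample
y_conv = 0.5 * t_conv + rng.normal(size=n_conv)
y_pop = 0.4 * t_pop + rng.normal(size=n_pop)

ate_conv, se_conv = ate_and_se(y_conv, t_conv)
ate_pop, se_pop = ate_and_se(y_pop, t_pop)

# Test the difference between the two ATE estimates directly, rather than
# comparing their separate significance levels.
diff = ate_conv - ate_pop
se_diff = np.sqrt(se_conv**2 + se_pop**2)
print(f"Convenience-sample ATE: {ate_conv:.2f} (SE {se_conv:.2f})")
print(f"Population-sample ATE:  {ate_pop:.2f} (SE {se_pop:.2f})")
print(f"Difference: {diff:.2f} (SE {se_diff:.2f}, z = {diff / se_diff:.2f})")

Testing the difference directly, with a pooled standard error, avoids declaring two samples inconsistent merely because one estimate crosses a significance threshold and the other does not.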

Type: Research Article
Copyright © The Experimental Research Section of the American Political Science Association 2016

Footnotes

* The authors acknowledge support from a National Science Foundation grant for Time-sharing Experiments for the Social Sciences (SES-1227179). Druckman and Freese are co-Principal Investigators of TESS, and Study 2 was designed and funded as a methodological component of their TESS grant. Study 1 includes data funded in part by an NSF Doctoral Dissertation Improvement Grant to Leeper (SES-1160156) and in part collected via a successful proposal to TESS by Mullinix and Leeper. Druckman and Freese were involved neither in Study 1 nor in any part of the review or approval of Mullinix and Leeper's TESS proposal (they recused themselves, given other existing collaborations). Only after data from both studies were collected did the authors determine that the two studies were so complementary that it would be better to publish them together. The authors thank Lene Aarøe, Kevin Arceneaux, Christoph Arndt, Adam Berinsky, Emily Cochran Bech, Scott Clifford, Adrienne Hosek, Cindy Kam, Lasse Laustsen, Diana Mutz, Helene Helboe Pedersen, Richard Shafranek, Flori So, Rune Slothuus, Rune Stubager, Magdalena Wojcieszak, workshop participants at the University of Southern Denmark, and participants at The American Panel Survey Workshop at Washington University in St. Louis.

Supplementary material
Mullinix supplementary material 1 (file, 102.5 KB)