Taking Fact-Checks Literally But Not Seriously? The Effects of Journalistic Fact-Checking on Factual Beliefs and Candidate Favorability

Abstract

Are citizens willing to accept journalistic fact-checks of misleading claims from candidates they support and to update their attitudes about those candidates? Previous studies have reached conflicting conclusions about the effects of exposure to counter-attitudinal information. As fact-checking has become more prominent, it is therefore worth examining how citizens respond to fact-checks of politicians—a question with important implications for understanding the effects of this journalistic format on elections. We present results from two experiments conducted during the 2016 campaign that test the effects of exposure to realistic journalistic fact-checks of claims made by Donald Trump during his convention speech and a general election debate. These messages improved the accuracy of respondents’ factual beliefs, even among his supporters, but had no measurable effect on attitudes toward Trump. These results suggest that journalistic fact-checks can reduce misperceptions but often have minimal effects on candidate evaluations or vote choice.

Notes

  1. Wintersieck (2017) is a notable exception. There are crucial differences between our study and hers, however. First, whereas Wintersieck looks at candidates deemed “honest” by fact-checkers, our studies examine statements flagged by fact-checkers for being false. Second, while Wintersieck focuses on a statewide election and recruits student subjects at a university, we enroll broader pools of participants in two experiments about a national election. This sampling distinction is particularly relevant here given that students might be more disposed to engage in the effortful cognition required to counterargue unwelcome information such as fact-checks (Krupnikov and Levine 2014).

  2. Our preregistration for Study 1 documents our hypotheses and analysis plan (http://www.egap.org/registration/2194). Unless otherwise noted, all Study 1 analyses are consistent with this document. Study 2 was conducted too rapidly to be preregistered (it was fielded immediately after the debate), but our analysis follows Study 1 to the greatest extent possible.

  3. As discussed above, previous findings are mixed on both hypotheses. For H1, see Nyhan and Reifler (2010) (backfire on two of five studies) versus Wood and Porter (2018) (no cases of backfire). For H2, compare Wood and Porter (2018), which finds a consistent pattern of ideological differentials in belief updating, with Nyhan and Reifler (N.d.), which finds no evidence of differential acceptance when fact-checks are pro-attitudinal.

  4. Findings for two other preregistered research questions are described below and in the online appendix.

  5. As we describe below, we also seek to maximize the realism of the treatments we use to test the effects of elite messages denigrating a fact-check. Study 1 tests the effects of exposure to actual statements made by Paul Manafort, Trump’s campaign chairman at the time, challenging the fact-checking of Trump’s convention speech.

  6. It is important to note that journalistic fact-checks do not always logically contradict a speaker (e.g., Marietta et al. 2015; Uscinski and Butler 2013). Fact-checkers often seek instead to address possible inferences that listeners might draw from a candidate’s statement. For instance, Trump’s nomination speech described an “epidemic” of violent crime. He did not directly state that crime has increased, but a listener might infer as much (indeed, Trump made clear statements about increasing crime rates at other times). Like other journalistic fact-checks, our treatment thus cites FBI data on the long-term decline in violent crime. Similarly, Trump’s debate statement emphasized factory jobs leaving Ohio and Michigan. While he did not directly say that employment in Michigan and Ohio is suffering because of trade policy, he implied that widespread job loss was taking place. Consequently, our fact-check, like several in the media, provided data on changes in jobs and unemployment in those states.

  7. The full instrument is in Online Appendix A.

  8. Per our preregistration, respondents who indicated crime was up due to inequality or unemployment were coded as -1 (liberal), those who said crime was up due to moral decline or down due to tougher policing were coded as 1 (conservative), and other responses were coded as 0 (an illustrative recoding sketch appears after these notes).

  9. Demographic and balance data for both samples are provided in Online Appendix C.

  10. All analyses in this section are consistent with our preregistration unless otherwise indicated. OLS models are replicated using ordered probit where applicable in Online Appendix C.

  11. Mean scores on two attention checks were 1.62 and 1.92 for controls and 1.59 and 1.87 for the treatment groups on Morning Consult and Mechanical Turk, respectively. (See Online Appendix A for wording.) We therefore deviate slightly from our preregistration to omit consideration of response time as a measure of attention.

  12. We report equivalent but more complex models estimated on the full sample in Online Appendix C.

  13. These quantities are estimated with respect to the control condition. These differences are not significant relative to the uncorrected condition.

  14. Findings are similar for perceived article bias (see Online Appendix C).

  15. In Table C19 in Online Appendix C, we show that the manipulation had no effect on favorability toward Clinton or Barack Obama either.

  16. Because the broader experiment in which Study 2 was embedded was designed to examine how post-debate news coverage affected debate perceptions, participants were assigned to one of five content consumption conditions that were orthogonal to the randomization we examine here (C-SPAN with no post-debate coverage, Fox News with or without post-debate coverage, or MSNBC with or without post-debate coverage). We excluded subjects who did not have access to cable and block-randomized by party and preferred cable channel. For additional details, see Gross et al. (2018).

  17. The instrument was prepared before transcripts were available, so the statement in our study differs slightly from the official transcript.

  18. See Online Appendix C for details on participant demographics and experimental balance. Though we cannot rule out the possibility of post-treatment bias (Montgomery et al. 2018), we find no significant effect of treatment assignment at wave 2 on wave 3 participation in a simple OLS model (\(\beta = 0.05\), \(p>.10\)); a minimal sketch of this attrition check appears after these notes.

  19. Such fact-checks are common. For instance, more than 60% of the claims rated by PolitiFact and the Washington Post Fact Checker were found to be mostly or totally false by both fact-checkers (Lim 2018). Moreover, fact-checkers consider it part of their mission to check claims against official data sources and frequently do so. Graves (2016, p. 85) writes, for instance, that “Fact-checkers always seek official data and often point to examples like this [a fact-check assessing claims about government spending and job growth using federal data] to explain what they do.”

  20. The design does not include a control condition or fact-check denial and denial/source derogation conditions. The omitted category is an uncorrected statement.
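
Two of the footnoted procedures lend themselves to short code illustrations: the response recoding in note 8 and the attrition check in note 18. The sketches below are minimal Python illustrations under stated assumptions; they are not the authors' analysis code.

First, the -1/0/1 recoding rule from note 8. The response labels used here are hypothetical stand-ins; the actual item wording is in Online Appendix A.

```python
import pandas as pd

# Hypothetical response labels standing in for the survey item's options;
# the real wording appears in Online Appendix A.
responses = pd.Series([
    "crime up: inequality",
    "crime up: moral decline",
    "crime down: tougher policing",
    "crime up: unemployment",
    "don't know",
])

def code_ideological_direction(resp: str) -> int:
    """Apply the note 8 rule: -1 liberal, 1 conservative, 0 otherwise."""
    liberal = {"crime up: inequality", "crime up: unemployment"}
    conservative = {"crime up: moral decline", "crime down: tougher policing"}
    if resp in liberal:
        return -1
    if resp in conservative:
        return 1
    return 0

coded = responses.map(code_ideological_direction)
print(coded.tolist())  # [-1, 1, 1, -1, 0]
```

Second, the attrition check from note 18 amounts to a linear probability model of wave-3 participation on wave-2 treatment assignment. The column names and toy data below are assumptions (the toy data are constructed so the coefficient is roughly zero); a near-zero coefficient with p > .10 would match the pattern the authors report.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data with hypothetical column names: 'treated' is wave-2 assignment,
# 'returned_w3' indicates whether the respondent completed wave 3.
df = pd.DataFrame({
    "treated":     [0, 1, 1, 0, 1, 0, 1, 0],
    "returned_w3": [1, 1, 0, 1, 1, 0, 1, 1],
})

# OLS (linear probability model) of wave-3 participation on treatment assignment.
fit = smf.ols("returned_w3 ~ treated", data=df).fit()
print(fit.params["treated"], fit.pvalues["treated"])
```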

References

  • BBC. (2016). ‘Post-truth’ declared word of the year by Oxford Dictionaries. November 16, 2016. Retrieved February 6, 2017, from http://www.bbc.com/news/uk-37995600.

  • Bolsen, T., Druckman, J. N., & Cook, F. L. (2014). The influence of partisan motivated reasoning on public opinion. Political Behavior, 36(2), 235–262.

  • Chan, M. P. S., Jones, C. R., Jamieson, K. H., & Albarracín, D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28(11), 1531–1546.

  • Flynn, D. J. (2016). The scope and correlates of political misperceptions in the mass public. Unpublished paper, Dartmouth College.

  • Funk, C. L. (1999). Bringing the candidate into models of candidate evaluation. The Journal of Politics, 61(3), 700–720.

  • Gaines, B. J., Kuklinski, J. H., Quirk, P. J., Peyton, B., & Verkuilen, J. (2007). Same facts, different interpretations: Partisan motivation and opinion on Iraq. Journal of Politics, 69(4), 957–974.

  • Garrett, R. K., Nisbet, E. C., & Lynch, E. K. (2013). Undermining the corrective effects of media-based political fact checking? The role of contextual cues and naïve theory. Journal of Communication, 63(4), 617–637.

  • Graves, L. (2016). Deciding what’s true: The rise of political fact-checking in American journalism. New York: Columbia University Press.

  • Gross, K., Porter, E., & Wood, T. J. (2018). Identifying media effects through low-cost, multiwave field experiments. Political Communication. https://doi.org/10.1080/10584609.2018.1514447.

  • Guess, A., & Coppock, A. (2018). Does counter-attitudinal information cause backlash? Results from three large survey experiments. British Journal of Political Science. https://doi.org/10.1017/S0007123418000327

  • Hill, S. J. (2017). Learning together slowly: Bayesian learning about political facts. The Journal of Politics, 79(4), 1403–1418.

  • Hochschild, J. L., & Einstein, K. L. (2015). Do facts matter? Information and misinformation in American politics. Norman, OK: University of Oklahoma Press.

  • Jamieson, K. H. (2015). Implications of the demise of ‘Fact’ in political discourse. Proceedings of the American Philosophical Society, 159(1), 66–84.

  • Jarman, J. W. (2016). Motivated to ignore the facts: The inability of fact-checking to promote truth in the public sphere. In J. Hannan (Ed.), Truth in the public sphere. London: Lexington Books.

  • Khanna, K., & Sood, G. (2018). Motivated responding in studies of factual learning. Political Behavior, 40(1), 79–101.

  • Krupnikov, Y., & Levine, A. S. (2014). Cross-sample comparisons and external validity. Journal of Experimental Political Science, 1(1), 59–80.

  • Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.

  • Lenz, G. S. (2012). Follow the leader? How voters respond to politicians’ performance and policies. Chicago, IL: University of Chicago Press.

  • Lim, C. (2018). Checking how fact-checkers check. Research & Politics, 5(3), 2053168018786848.

  • Marietta, M., Barker, D. C., & Bowser, T. (2015). Fact-checking polarized politics: Does the fact-check industry provide consistent guidance on disputed realities? The Forum: A Journal of Applied Research in Contemporary Politics, 13(4), 577–596.

  • Molden, D. C., & Higgins, E. T. (2005). Motivated thinking. In K. J. Holyoak & R. G. Morrison (Eds.), The Cambridge handbook of thinking and reasoning. Cambridge: Cambridge University Press.

  • Montgomery, J. M., Nyhan, B., & Torres, M. (2018). How conditioning on posttreatment variables can ruin your experiment and what to do about it. American Journal of Political Science, 62(3), 760–775.

  • Mummolo, J., & Peterson, E. (2018). Demand effects in survey experiments: An empirical assessment. American Political Science Review. https://doi.org/10.2139/ssrn.2956147.

  • National Public Radio. (2016). Fact check: Trump and Clinton debate for the first time. September 26, 2016. Retrieved February 15, 2017, from http://www.npr.org/2016/09/26/495115346/fact-check-first-presidential-debate.

  • New York Times. (2016). Our fact checks of the first debate. September 26, 2016. Retrieved July 27, 2018, from https://www.nytimes.com/2016/09/27/us/politics/fact-check-debate.html.

  • Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.

  • Nyhan, B., & Reifler, J. (2015). The effect of fact-checking on elites: A field experiment on US state legislators. American Journal of Political Science, 59(3), 628–640.

  • Nyhan, B., & Reifler, J. (N.d.). Do people actually learn from fact-checking? Evidence from a longitudinal study during the 2014 campaign. Unpublished manuscript. Retrieved June 28, 2017, from http://www.dartmouth.edu/~nyhan/fact-checking-effects.pdf.

  • Nyhan, B., Reifler, J., & Ubel, P. A. (2013). The hazards of correcting myths about health care reform. Medical Care, 51(2), 127–132.

  • Pierce, P. A. (1993). Political sophistication and the use of candidate traits in candidate evaluation.

  • Politico. (2016). Trump wrong on Michigan job losses. September 26, 2016. Retrieved November 11, 2017, from https://www.politico.com/blogs/2016-presidential-debate-fact-check/2016/09/trump-wrong-on-michigan-job-losses-228707.

  • Porter, E., Wood, T. J., & Kirby, D. (2018). Sex trafficking, Russian infiltration, birth certificates, and pedophilia: A survey experiment correcting fake news. Journal of Experimental Political Science, 2(5), 304–331.

  • Rahn, W. M., Aldrich, J. H., Borgida, E., & Sullivan, J. L. (1990). A social cognitive model of candidate appraisal. In J. A. Ferejohn & J. H. Kuklinski (Eds.), Information and democratic processes. Champaign: University of Illinois Press.

  • Schleifer, T. (2016). Paul Manafort doubts FBI statistics after agency spared Hillary. CNN, July 12, 2016. Retrieved February 13, 2017, from http://www.cnn.com/2016/07/21/politics/paul-manafort-fbi-statistics-hillary-clinton/.

  • Spivak, C. (2011). The fact-checking explosion. American Journalism Review, 32, 38–43.

  • Sullivan, E., & Day, C. (2016). AP FACT CHECK: Crime stats don’t back Trump’s dire view. Associated Press, July 13, 2016. Retrieved October 22, 2018, from https://apnews.com/3e132f145e0c44cf96cb7f4fd448b34a.

  • Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769.

  • Uscinski, J., & Butler, R. (2013). The epistemology of fact checking. Critical Review, 25(2), 162–180.

  • Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. Journal of Communication, 65(4), 699–719.

  • Wintersieck, A. L. (2017). Debating the truth: The impact of fact-checking during electoral debates. American Politics Research, 45(2), 304–331.

  • Wood, T., & Porter, E. (2018). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior.

  • Zaller, J. (1992). The nature and origins of mass opinion. Cambridge: Cambridge University Press.

Author information

Correspondence to Thomas J. Wood.

We thank Kim Gross, John Pfaff, and D.J. Flynn for comments and Kyle Dropp for fielding Study 1. This research received funding support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement No. 682758). We also received support from the School of Media and Public Affairs at George Washington University. All errors are our own.

About this article

Cite this article

Nyhan, B., Porter, E., Reifler, J. et al. Taking Fact-Checks Literally But Not Seriously? The Effects of Journalistic Fact-Checking on Factual Beliefs and Candidate Favorability. Polit Behav 42, 939–960 (2020). https://doi.org/10.1007/s11109-019-09528-x
