Abstract
A crisis in psychology has provoked researchers to seek remedies for bad practices that might damage the integrity of the discipline as a whole. The ardor for wholesale reform has led to a suite of proposed technical solutions, some of which are considered in the context of computational modeling by the target article. Any technical solution, however, must be placed within a larger cultural and scientific context to be effective (or, indeed, meaningful at all). Many of the suggestions presented in the target article represent good practice in computational cognitive modeling but, even then, still require some amount of nuance in the consideration of the relationship between practice and theory. We consider two examples—model preregistration and bookending—as a means of examining the limits of any proposed technical solution.
Notes
Ironically, Lehrer himself later resigned from the New Yorker after fabricating Bob Dylan quotes to support the argument of his book Imagine, lending some support to the idea that, whatever cultural issue causes novelty to be preferred over evidence, it is not solely located within science, let alone psychology.
We think postregistration is a useful idea. Many modelers will have been frustrated by editors asking them to remove material detailing the full range of model variants they considered because it is seen as dry and indigestible. But we worry that the effort required to do justice to postregistration as envisaged in the target article means it is an idea that will be honored mainly in the breach. Postregistration is tantamount to requiring that modeling studies be accompanied by substantial supplementary materials sections. Laboratory notebooks are usually aides-mémoire for researchers rather than public records intended to communicate to others. The effort required of authors, reviewers, and editors to turn them into truly useful adjuncts to scientific practice should not be underestimated.
This is a point made beautifully by Navarro (2019) within this journal.
In making our counterargument, we do not seek to provide cover for bad actors; rather, we think it just as likely that those acting in bad faith will find a way to game the preregistration system, perhaps by making incomplete or late preregistrations, just as they have gamed the null hypothesis significance testing one. The solution is as it has always been: skepticism, peer review, and due diligence in examining published claims. Modeling helps in this endeavor because, in most cases, the outcome of a model is easily reproduced by pushing a button on a computer. This is also why we are in full agreement with the authors’ desire to promote openness and sharing of materials and code.
This is also one reason why we cannot, despite all of the limitations of null hypothesis significance testing and its applications in the wild, bring ourselves to endorse abandoning statistical significance entirely. Although no doubt not intended as such, the wholesale abolition of a type of statistical inference, whether offered as advice for the field or imposed as a journal-level directive, tends in the direction of the same all-or-none thinking as individual scientists misusing statistical significance testing in the first place: the kind of cargo cult science that takes a statistical procedure to license inferential statements without understanding the mechanism by which it does so. At worst, it leads people away from an opportunity to gain a nuanced understanding of the limitations of any particular approach to statistical inference and back toward a regime in which understanding is unnecessary and only a set of imperatives need be followed; simply put, it replaces the cookbook with the rulebook.
References
Anderson, J.R. (1990). The adaptive character of thought. Hillsdale: L. Erlbaum Associates.
Bem, D.J. (2011). Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407–425. https://doi.org/10.1037/a0021524.
Benjamin, D.J., Berger, J.O., Johannesson, M., Nosek, B.A., Wagenmakers, E.J., Berk, R., Bollen, K.A., Brembs, B., Brown, L., Camerer, C., Cesarini, D., Chambers, C.D., Clyde, M., Cook, T.D., De Boeck, P., Dienes, Z., Dreber, A., Easwaran, K., Efferson, C., Fehr, E., Fidler, F., Field, A.P., Forster, M., George, E.I., Gonzalez, R., Goodman, S., Green, E., Green, D.P., Greenwald, A.G., Hadfield, J.D., Hedges, L.V., Held, L., Ho, T.H., Hoijtink, H., Hruschka, D.J., Imai, K., Imbens, G., Ioannidis, J.P.A., Jeon, M., Jones, J.H., Kirchler, M., Laibson, D., List, J., Little, R., Lupia, A., Machery, E., Maxwell, S.E., McCarthy, M., Moore, D.A., Morgan, S.L., Munafò, M., Nakagawa, S., Nyhan, B., Parker, T.H., Pericchi, L., Perugini, M., Rouder, J., Rousseau, J., Savalei, V., Schönbrodt, F.D., Sellke, T., Sinclair, B., Tingley, D., Van Zandt, T., Vazire, S., Watts, D.J., Winship, C., Wolpert, R.L., Xie, Y., Young, C., Zinman, J., Johnson, V.E. (2018). Redefine statistical significance. Nature Human Behaviour, 2(1), 6. https://doi.org/10.1038/s41562-017-0189-z.
Bernardo, J.M., & Smith, A.F.M. (1994). Bayesian theory. Chichester: Wiley.
Harris, C.R., Coburn, N., Rohrer, D., Pashler, H. (2013). Two failures to replicate high-performance-goal priming effects. PLOS ONE, 8(8), e72467. https://doi.org/10.1371/journal.pone.0072467.
Hawkins, G.E., Forstmann, B.U., Wagenmakers, E.J., Ratcliff, R., Brown, S.D. (2015). Revisiting the evidence for collapsing boundaries and urgency signals in perceptual decision-making. Journal of Neuroscience, 35 (6), 2476–2484. https://doi.org/10.1523/JNEUROSCI.2410-14.2015.
Heathcote, A., Brown, S.D., Wagenmakers, E.J. (2015). An introduction to good practices in cognitive modeling. In An introduction to model-based cognitive neuroscience (pp. 25–48). Springer.
Kaplan, R.M., & Irvin, V.L. (2015). Likelihood of null effects of large NHLBI clinical trials has increased over time. PLOS ONE, 10(8), e0132382. https://doi.org/10.1371/journal.pone.0132382.
Kidwell, M.C., Lazarević, L.B., Baranski, E., Hardwicke, T.E., Piechowski, S., Falkenberg, L.S., Kennett, C., Slowik, A., Sonnleitner, C., Hess-Holden, C., Errington, T.M., Fiedler, S., Nosek, B.A. (2016). Badges to acknowledge open practices: a simple, low-cost, effective method for increasing transparency. PLOS Biology, 14(5), e1002456. https://doi.org/10.1371/journal.pbio.1002456.
Klein, N. (2007). The shock doctrine: the rise of disaster capitalism (1st ed.). New York: Metropolitan Books/Henry Holt.
Lee, M.D., Criss, A., Devezer, B., Donkin, C., Etz, A., Leite, F.P., Matzke, D., Rouder, J.N., Trueblood, J.S., Vandekerckhove, J. (in this issue). Robust modeling in cognitive science. Computational Brain and Behavior.
Lehrer, J. (2010). The Truth Wears Off. The New Yorker, http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off.
Lykken, D.T. (1968). Statistical significance in psychological research. Psychological Bulletin, 70(3), 151–159.
Marr, D. (1982). Vision: a computational investigation into the human representation and processing of visual information. Cambridge: The MIT Press.
Meehl, P.E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66(1), 195–244. https://doi.org/10.2466/pr0.1990.66.1.195.
Meehl, P.E. (1997). The problem is epistemology, not statistics: replace significance tests by confidence intervals and quantify accuracy of risky numerical prediction. In L.L. Harlow, S.A. Mulaik, & J.H. Steiger (Eds.), What if there were no statistical tests? (pp. 393–425). Mahwah, NJ: Erlbaum.
Navarro, D.J. (2019). Between the devil and the deep blue sea: tensions between scientific judgement and statistical model selection. Computational Brain & Behavior, 2(1), 28–34. https://doi.org/10.1007/s42113-018-0019-z.
Nosek, B.A., Ebersole, C.R., DeHaven, A.C., Mellor, D.T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/pnas.1708274114.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716.
Shanks, D.R., Newell, B.R., Lee, E.H., Balakrishnan, D., Ekelund, L., Cenac, Z., Kavvadia, F., Moore, C. (2013). Priming intelligent behavior: an elusive phenomenon. PLOS ONE, 8(4), e56515. https://doi.org/10.1371/journal.pone.0056515.
Smith, P.L., & Little, D.R. (2018). Small is beautiful: in defense of the small-N design. Psychonomic Bulletin & Review, pp. 1–19, https://doi.org/10.3758/s13423-018-1451-8.
The Levelt, Noort, and Drenth Committees. (2012). Flawed science: the fraudulent research practices of social psychologist Diederik Stapel. Tech. rep. Tilburg University, the University of Groningen, and the University of Amsterdam.
van den Berg, R., Awh, E., Ma, W.J. (2014). Factorial comparison of working memory models. Psychological Review, 121(1), 124–149. https://doi.org/10.1037/a0035234.
Vehtari, A., & Ojanen, J. (2012). A survey of Bayesian predictive methods for model assessment, selection and comparison. Statistics Surveys, 6, 142–228. https://doi.org/10.1214/12-SS102.
Wagenmakers, E.J., Wetzels, R., Borsboom, D., van der Maas, H.L.J., Kievit, R.A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7(6), 632–638. https://doi.org/10.1177/1745691612463078.
Acknowledgments
Thank you to David Wakeham, Christina van Heer, David Sewell, Jason Zhou, and Elle Pattenden for their thoughtful comments and questions.
Author note
This research was supported by Australian Research Council Discovery Grant DP160102360 to Daniel R. Little, Australian Research Council Discovery Early Career Researcher Award DE170100106 to Adam F. Osth, and Australian Research Council Discovery Grant DP180101686 to Philip L. Smith.
Cite this article
Lilburn, S.D., Little, D.R., Osth, A.F. et al. Cultural Problems Cannot Be Solved with Technical Solutions Alone. Comput Brain Behav 2, 170–175 (2019). https://doi.org/10.1007/s42113-019-00036-z