Being able to replicate and build upon the work of other scientists, or to adjust one's ideas when the data suggest otherwise, is part of the bedrock supporting the scientific process. But given the number of variables, many of them unknown, that can affect a study, reproducibility is not a trivial goal. Awareness of the problem has been growing, and some fear that standards have slipped. Recent reports of failures to reproduce published research (http://www.nature.com/nature/focus/reproducibility/) have rightly been taken seriously by research communities, funders and journals.

At Nature Methods, we think a great deal about how to probe whether a new method or tool is likely to perform well for scientists other than its developers. When possible, comparison with orthogonal methods is an obvious way to test whether a method reports accurately on a biological phenomenon. Assessing the performance of an approach in more than one setting is another way to increase the chance that the method will be useful for many.

But a clear-eyed view of what a method can and cannot do, and how robust its performance is to changes in the technical or biological context, is not relevant merely for a method's general utility. The quality of a tool and the skill and care with which it is wielded are inextricably linked to the reproducibility of the resulting data. And performance assessment does not apply only to newly developed methods: generating reliable data, as every scientist knows, requires that even workhorse methods be tested again, every time, in the form of experimental controls.

This is stating the obvious, perhaps. But insufficient attention to methodology—to how transparently methods are reported, how well their limits are appreciated and how carefully they are applied—is clearly part of the problem with reproducibility.

Anne Plant and coauthors at the US National Institute of Standards and Technology provide their perspective on the matter in a Commentary (p. 895). Taking a position close to a methods journal's heart, they argue that a productive way to think about reproducibility is to focus on the confidence that can be placed in measurements, and thus in methods, in biological research. They point out that making reliable measurements in systems as complex as biological ones is far from trivial. It requires vigilance and effort, and developing ways to assess and improve measurement reliability is a legitimate research area in itself.

In a second piece, related only in spirit, Guillaume Baffou and colleagues also argue that the complexity of biological systems can confound measurements (p. 899). They make a theoretical case questioning previously published measurements of temperature in individual cultured cells, including studies in our own pages. If they are correct, tools that were thought to report on cellular temperature changes may not in fact do so.

In a world increasingly awash in facts but also pseudo-facts, comment but also cant, the relative reliability of scientific data has perhaps never been more worth guarding. Paying close attention to the methods used in research is a good place to start.