Science moves forward by corroboration: researchers verifying one another's results. It advances faster when people waste less time pursuing false leads. No research paper can ever be considered the final word, but too many do not stand up to further study.
There is growing alarm about results that cannot be reproduced. Explanations include increased levels of scrutiny, the complexity of experiments and statistics, and pressures on researchers. Journals, scientists, institutions and funders all have a part to play in tackling reproducibility. Nature has taken substantive steps to improve the transparency and robustness of what we publish, and to promote awareness within the scientific community. We hope the articles in this collection will help.
The solutions adopted by the high-energy physics community to foster reproducible research are examples of best practice that could be embraced more widely. This early experience suggests that reproducibility requires going beyond openness.
Replicating our work took four years and 100,000 worms but brought surprising discoveries, explain Gordon J. Lithgow, Monica Driscoll and Patrick Phillips.
A survey of Nature readers revealed a high level of concern about the problem of irreproducible results. Researchers, funders and journals need to work together to make research more reliable.
Humans are remarkably good at self-deception. But growing concern about reproducibility is driving many researchers to seek ways to fight their own worst instincts.
A multi-laboratory study finds that single-molecule FRET is a reproducible and reliable approach for determining accurate distances in dye-labeled DNA duplexes.
Many factors can skew the results of a widely used amplification technique for microbiome analysis, but researchers are finding strategies for getting at the truth.
An extensive evaluation of differential expression methods applied to single-cell expression data, using uniformly processed public data in the new conquer resource.
The publishing system builds in resistance to replication. Paul Gertler, Sebastian Galiani and Mauricio Romero surveyed economics journals to find out how to fix it.
Using the origin of replication (ORI) of the plasmids used in enhancer assays as the sole core promoter, and inhibiting the type I interferon response triggered by plasmid transfection, greatly reduces false-positive and false-negative results in single-candidate and massively parallel enhancer assays and enables genome-wide enhancer screens.
As debate rumbles on about how much poor statistics is to blame for poor reproducibility, Nature asked influential statisticians to recommend one change to improve science. The common theme? The problem is not our maths, but ourselves.
Antibodies are the workhorses of biological experiments, but they are littering the field with false findings. A few evangelists are pushing for change.
Start-up firms say robotics and software that autonomously record every detail of an experiment can transform the efficiency and reliability of research.
A new mechanism for independently replicating research findings is one of several changes required to improve the quality of the biomedical literature.
Experimental biologists, their reviewers and their publishers must grasp basic statistics, urges David L. Vaux, or sloppy science will continue to grow.
Bayer halts nearly two-thirds of its target-validation projects because in-house experimental findings fail to match published claims, finds a first-of-its-kind analysis of data irreproducibility.
Camerer et al. carried out replications of 21 social-science experiments published in Science and Nature, successfully replicating 13 of them (62%). Effect sizes in the replications were about half those of the originals.
The results of in vitro and in vivo screens to identify genes that are essential for the survival of a type of brain cancer show almost no overlap, underlining the need for caution when interpreting in vitro studies. See Letter p.355.
The finding that acute and chronic manipulations of the same neural circuit can produce different behavioural outcomes poses new questions about how best to analyse these circuits. See Article p.358.
Quality control of cell lines used in biomedical research is essential to ensure reproducibility. Although cell line authentication has been widely recommended for many years, misidentification, including cross-contamination, remains a serious problem. We outline a multi-stakeholder, incremental approach and policy-related recommendations to facilitate change in the culture of cell line authentication.
The reliability and reproducibility of science are under scrutiny. However, a major cause of this lack of repeatability is not being considered: the wide sample-to-sample variability of the P value. We explain why P is fickle, to discourage the ill-informed practice of interpreting analyses predominantly on the basis of this statistic.
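To make the point concrete, here is a minimal simulation (not from the paper; the effect size, sample size and choice of test are illustrative assumptions) showing how widely P varies when the same two-group experiment is repeated on fresh samples from the same populations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two populations with a modest true difference in means.
true_effect = 0.5   # difference in means, in standard-deviation units
n = 20              # per-group sample size, chosen for illustration

# Repeat the "same" experiment many times; record the P value each time.
p_values = []
for _ in range(1000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    _, p = stats.ttest_ind(a, b)
    p_values.append(p)

p_values = np.array(p_values)
print(f"median P: {np.median(p_values):.3f}")
print(f"range of P: {p_values.min():.4f} to {p_values.max():.3f}")
print(f"fraction with P < 0.05: {(p_values < 0.05).mean():.2f}")
```

Even with a real underlying effect, identical repetitions scatter P across several orders of magnitude, which is why a single P value is a poor summary of the evidence.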
'Irreproducibility' is symptomatic of a broader challenge in measurement in biomedical research. From the US National Institute of Standards and Technology (NIST) perspective of rigorous metrology, reproducibility is only one aspect of establishing confidence in measurements. Appropriate controls, reference materials, statistics and informatics are required for a robust measurement process. Research is required to establish these tools for biological measurements, which will lead to greater confidence in research results.
Low-powered studies lead to overestimates of effect size and low reproducibility of results. In this Analysis article, Munafò and colleagues show that the average statistical power of studies in the neurosciences is very low, discuss ethical implications of low-powered studies and provide recommendations to improve research practices.
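The mechanism behind the overestimation can be shown in a few lines: when only "significant" results are reported, the effects that clear the threshold in an underpowered design are necessarily inflated. A minimal sketch (the sample size, effect size and threshold are illustrative assumptions, not values from the Analysis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_effect = 0.3   # small true standardized effect (illustrative)
n = 15              # per-group sample size, giving low statistical power

observed, significant = [], []
for _ in range(5000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    # Observed standardized effect (Cohen's d with pooled SD).
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    observed.append((b.mean() - a.mean()) / pooled_sd)
    _, p = stats.ttest_ind(a, b)
    significant.append(p < 0.05)

observed = np.array(observed)
significant = np.array(significant)
print(f"power (fraction significant):  {significant.mean():.2f}")
print(f"mean effect, all runs:         {observed.mean():.2f}")
print(f"mean effect, significant only: {observed[significant].mean():.2f}")
# The significant-only mean clearly exceeds the true effect of 0.3:
# filtering on significance in low-powered studies inflates effect sizes.
```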
A wealth of microarray gene expression data and a growing volume of RNA sequencing data are now available in public databases. The authors look at how these data are being used, and discuss how such data should be analysed and deposited and how data reuse could be improved.
Deficiencies in methods reporting in animal experimentation lead to difficulties in reproducing experiments; the authors propose a set of reporting standards to improve scientific communication and study design.
There are many different methods and tools available for the analysis of next-generation sequencing data. The challenges of applying these tools in a transparent and reproducible manner are presented, and a way forward for analysing such data in life-sciences research is discussed.
Scientific reproducibility now often depends on the computational methods behind a result being available to rerun, so it is argued here that all source code should be freely available.
Batch effects can lead to incorrect biological conclusions but are not widely considered. The authors show that batch effects are relevant to a range of high-throughput 'omics' data sets and are crucial to address. They also explain how batch effects can be mitigated.
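As a toy illustration of the kind of mitigation the authors discuss, the sketch below simulates an additive batch effect in an 'omics matrix and removes it by mean-centring each feature within each batch. This is a deliberately simplified assumption, not the authors' method: real analyses typically include batch as a model covariate or use empirical-Bayes approaches such as ComBat, and per-batch centring is only safe when biological groups are balanced across batches.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 'omics matrix: 100 features x 12 samples, two batches of 6.
n_features, n_per_batch = 100, 6
batch = np.array([0] * n_per_batch + [1] * n_per_batch)
data = rng.normal(0.0, 1.0, (n_features, 2 * n_per_batch))

# Simulate a batch effect: an additive per-feature shift in batch 1.
shift = rng.normal(1.0, 0.5, (n_features, 1))
data[:, batch == 1] += shift

# Simplest mitigation: centre each feature within each batch.
corrected = data.copy()
for b in np.unique(batch):
    cols = batch == b
    corrected[:, cols] -= corrected[:, cols].mean(axis=1, keepdims=True)

print("mean gap between batches before:",
      abs(data[:, batch == 0].mean() - data[:, batch == 1].mean()).round(2))
print("mean gap between batches after: ",
      abs(corrected[:, batch == 0].mean() - corrected[:, batch == 1].mean()).round(2))
```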