
Reproducible research philosophy

Peer review is the backbone of scientific progress. Science has come a long way from the ancient alchemists, who worked in secret on magical solutions to unsolvable problems, to become a social enterprise in which hypotheses, theories, and experimental results are openly published and verified by the community. By reproducing and verifying previously published research, a researcher can build on it and advance science further.

Traditionally, scientific disciplines are divided into theoretical and experimental studies. Reproduction and verification of theoretical results usually require only imagination (apart from pencil and paper); experimental results are verified in laboratories using equipment and materials similar to those described in the publication.

During the last century, computational studies emerged as a new scientific discipline. Computational experiments are carried out on a computer by applying numerical algorithms to digital data. How reproducible are such experiments? On the one hand, reproducing the result of a numerical experiment is a difficult undertaking: the reader needs access to precisely the same input data, software, and hardware as the author in order to reproduce the published result, and it is often difficult or impossible to provide detailed specifications for all of these components. On the other hand, basic computational system components such as operating systems and file formats are becoming increasingly standardized, and new components can in principle be shared, because they are simply digital information transferable over the Internet.

The practice of software sharing has fueled the remarkably efficient development of Linux, Apache, and many other open-source software projects. Proponents of open source often describe this ideology as an analog of the scientific tradition of peer review. Eric Raymond, a well-known open-source advocate, writes (Raymond, 2004):

Abandoning the habit of secrecy in favor of process transparency and peer review was the crucial step by which alchemy became chemistry. In the same way, it is beginning to appear that open-source development may signal the long-awaited maturation of software development as a discipline.
While software development is trying to imitate science, computational science needs to borrow from the open-source model in order to sustain itself as a fully scientific discipline. In the words of Randy LeVeque, a prominent mathematician (LeVeque, 2006):
Within the world of science, computation is now rightly seen as a third vertex of a triangle complementing experiment and theory. However, as it is now often practiced, one can make a good case that computing is the last refuge of the scientific scoundrel [...] Where else in science can one get away with publishing observations that are claimed to prove a theory or illustrate the success of a technique without having to give a careful description of the methods used, in sufficient detail that others can attempt to repeat the experiment? [...] Scientific and mathematical journals are filled with pretty pictures these days of computational experiments that the reader has no hope of repeating. Even brilliant and well intentioned computational scientists often do a poor job of presenting their work in a reproducible manner. The methods are often very vaguely defined, and even if they are carefully defined, they would normally have to be implemented from scratch by the reader in order to test them.

In computer science, the concept of publishing and explaining computer programs goes back to the idea of literate programming promoted by Knuth (1984) and extended by many other researchers (Thimbleby, 2003); a brief sketch of the literate style appears after the quotation below. In his 2004 lecture on ``better programming'', Harold Thimbleby notes:

We want ideas, and in particular programs, that work in one place to work elsewhere. One form of objectivity is that published science must work elsewhere than just in the author's laboratory or even just in the author's imagination; this requirement is called reproducibility.
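
In a literate program, the explanation leads and the executable code follows from it. The fragment below gives only a rough flavor of that style in ordinary commented Python; Knuth's actual WEB system wove TeX prose and Pascal code together, and the example itself is hypothetical:

    import random

    # "To estimate pi, sample n points uniformly in the unit square and
    #  count the fraction that lands inside the quarter circle of radius 1.
    #  That fraction approaches pi/4, so multiplying by 4 recovers pi."
    def estimate_pi(n=100_000):
        inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                     for _ in range(n))
        return 4.0 * inside / n

    print(estimate_pi())  # prints a value near 3.14159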

Nearly ten years ago, the technology of reproducible research in geophysics was pioneered by Jon Claerbout and his students at the Stanford Exploration Project (SEP). SEP's system of reproducible research requires the author of a publication to document the creation of numerical results from the input data and software sources, so that others can test and verify the reproducibility of the results (Schwab et al., 2000; Claerbout, 1992a). The discipline of reproducible research was also adopted and popularized in the statistics and wavelet-theory community by Buckheit and Donoho (1995), and it is referenced in several popular wavelet-theory books (Mallat, 1999; Hubbard, 1998). Pledges for reproducible research appear nowadays in fields as diverse as bioinformatics (Gentleman et al., 2004), geoinformatics (Bivand, 2006), and computational wave propagation (LeVeque, 2006). However, the adoption of reproducible research practices by computational scientists has been slow, in part because of difficult and inadequate tools.
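
The core of such a system is mechanical: every published number and figure should be regenerated by a recorded chain of commands from the original inputs. The following is a minimal sketch of that idea, not SEP's actual system; it uses SCons, whose build files are ordinary Python, and the data file and helper scripts named below are hypothetical placeholders:

    # SConstruct -- a minimal sketch of a documented result-building chain.
    # Each Command(target, source, action) call records how a result is
    # derived; raw_data.txt, process.py, and plot.py are hypothetical.
    env = Environment()

    # Derive the numerical result from the raw input data.
    env.Command('result.txt', 'raw_data.txt',
                'python process.py $SOURCE $TARGET')

    # Derive the published figure from the numerical result.
    env.Command('figure.pdf', 'result.txt',
                'python plot.py $SOURCE $TARGET')

Running scons rebuilds the figure from the raw data, and scons -c removes the derived files, so a reader with the same inputs and scripts can verify the entire chain from scratch.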

