Month: June 2010

Reproducibility of speedup tests

June 10, 2010 Links

Most of the time, when we talk about reproducibility in computational sciences, we mean the reproducibility of numerical results: we expect computational experiments to produce identical results in different execution environments for the same input data.
But this is not always the case. Quite often, the goal of a research endeavor is to design a faster algorithm. Then the result of the experiment is performance information: a demonstration of a speedup over existing algorithms for solving the same problem. Speedup, just like numerical results, should be reproducible in different execution environments.
Sid-Ahmed-Ali Touati, Julien Worms, and Sebastien Briais of INRIA published an excellent paper on the methodology of reproducible speedup tests.
A part of the introduction from their paper is worth quoting on this blog:

Known hints for making a research result non reproducible
Hard natural sciences such as physics, chemistry and biology impose strict experimental methodologies and rigorous statistical measures in order to guarantee the reproducibility of the results with a measured confidence (probability of error/success). The reproducibility of the experimental results in our community of program optimisation is a weak point. Given a research article, it is in practice impossible or too difficult to reproduce the published performance. If the results are not reproducible, the benefit of publishing becomes limited. We note below some hints that make a research article non-reproducible:

  • Non using of precise scientific languages such as mathematics. Ideally, mathematics must always be preferred to describe ideas, if possible, with an accessible difficulty.
  • Non available software, non released software, non communicated precise data.
  • Not providing formal algorithms or protocols make impossible to reproduce exactly the ideas.
  • Hide many experimental details.
  • Usage of deprecated machines, deprecated OS, exotic environment, etc.
  • Doing wrong statistics with the collected data.

Part of the non-reproducibility (and not all) of the published experiments is explained by the fact that the observed speedups are sometimes rare events. It means that they are far from what we could observe if we redo the experiments multiple times. Even if we take an ideal situation where we use exactly the original experimental machines and software, it is sometimes difficult to reproduce exactly the same performance numbers again and again, experience after experience. Since some published performances numbers represent exceptional events, we believe that if a computer scientist succeeds in reproducing the performance numbers of his colleagues (with a reasonable error ratio), it would be equivalent to what rigorous probabilists and statisticians call a surprise. We argue that it is better to have a lower speedup that can be reproduced in practice, than a rare speedup that can be remarked by accident.

Read the full document for a thorough explanation of how to avoid creating non-reproducible and erroneous speedup tests by using proper scientific techniques.
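The last pitfall on the list above, doing wrong statistics with the collected data, is worth a concrete illustration. Below is a minimal Python sketch, not taken from the paper itself: the function names and the bootstrap approach are my own choices for illustration. Instead of reporting a single ratio of two lone measurements, it reports a median speedup together with a bootstrap confidence interval over repeated runs:

```python
import random
import statistics
import timeit

def measure(stmt, repeats=30, number=1000):
    """Return a list of per-run timings (seconds) for a code snippet.

    Repeated runs expose the run-to-run variance that a single
    measurement would hide.
    """
    timer = timeit.Timer(stmt)
    return [timer.timeit(number=number) for _ in range(repeats)]

def bootstrap_speedup(old, new, resamples=2000, seed=0):
    """Median speedup of `new` over `old` with a 95% bootstrap CI.

    The median is used rather than the minimum or the mean because
    it is robust to outliers caused by OS noise.
    """
    rng = random.Random(seed)
    ratios = []
    for _ in range(resamples):
        # Resample the timings with replacement and compare medians.
        o = statistics.median(rng.choices(old, k=len(old)))
        n = statistics.median(rng.choices(new, k=len(new)))
        ratios.append(o / n)
    ratios.sort()
    lo = ratios[int(0.025 * resamples)]
    hi = ratios[int(0.975 * resamples)]
    return statistics.median(ratios), (lo, hi)

if __name__ == "__main__":
    # Two hypothetical implementations of the same task.
    old_times = measure("sorted(range(1000, 0, -1))")
    new_times = measure("list(range(1, 1001))")
    speedup, (lo, hi) = bootstrap_speedup(old_times, new_times)
    print(f"speedup: {speedup:.2f}x, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting the interval alongside the median makes it visible when an observed speedup is a rare event: if the interval is wide, or crosses 1.0, the speedup is unlikely to be reproducible by others.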

Event of the year

June 9, 2010 Celebration

Please reserve the date for the Madagascar “event of the year”: the School and Workshop in Houston on July 23-24, 2010. The program details and registration information will follow soon.

Raising scientific standards

June 6, 2010 Links

After returning from the NSF Workshop Archiving Experiments to Raise Scientific Standards, here are some thoughts on reproducible research. Thanks to Dan Gezelter, Dennis Shasha, and others for inspiring discussions.

First of all, it is important to point out that reproducibility is not a goal in itself. There are many situations in which strict computational reproducibility is not achievable. The goal is to expose the scientific communication to skeptical inquiry. A mathematical proof is an example of a scientific communication constructed as a dialogue with a skeptic, someone who might ask, “What if your conclusions are not true?” Step by step, the proof is designed to convince the skeptic that the conclusion (a theorem) must be true. As for computational results, even the simplest skeptical question, “What if there is a bug in your code?”, cannot be answered unless the software code and every computational step that led to the published result are open for inspection.

If you attend a mathematical conference, you will notice that mathematicians do not usually go through every step of a proof when presenting a theorem; it is enough to sketch the main idea. However, the audience understands that the detailed proof must be available in the published work, otherwise the theorem cannot be accepted. Similarly, in a presentation of computational work, one can simply show the results of a computational experiment. However, such results cannot be accepted as scientific unless the full computation is disclosed for skeptical inquiry. As stated by Dave Donoho (paraphrasing Jon Claerbout),

An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures.

If you don’t want to disclose the details of your computation, then the work that you do is not science. As for reproducibility, there seem to be different degrees of it:

  1. Replicability: the ability to reproduce the computation as published
  2. Reusability: the ability to apply a similar algorithm to a different problem or different input data
  3. Reconfigurability: the ability to obtain a similar result when the parameters of the experiment are perturbed deliberately

Some algorithms are perfectly replicable but of limited use, because they are too sensitive to the choice of parameters to be reusable or reconfigurable. Nevertheless, such algorithms deserve a place in the scientific body of knowledge, because they may lead to the discovery or invention of more robust algorithms.
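Reconfigurability, in the sense above, can be checked mechanically: rerun the experiment with deliberately perturbed parameters and test whether the result stays close to the baseline. Here is a minimal Python sketch of such a check; the iterative algorithm, parameter names, and tolerances are hypothetical placeholders, not from any of the works cited here:

```python
import random

def solve(data, damping=0.9):
    """Hypothetical iterative algorithm whose result depends on a
    tuning parameter (here, a damping factor)."""
    x = 0.0
    for value in data:
        x = damping * x + (1.0 - damping) * value
    return x

def reconfigurability_check(data, base=0.9, perturbation=0.05,
                            tolerance=0.2, trials=20, seed=0):
    """Perturb the parameter randomly around its published value and
    report whether the result stays within a relative tolerance of
    the baseline result."""
    rng = random.Random(seed)
    reference = solve(data, base)
    for _ in range(trials):
        p = base + rng.uniform(-perturbation, perturbation)
        result = solve(data, p)
        if abs(result - reference) > tolerance * abs(reference):
            return False  # the result is too sensitive to this parameter
    return True

if __name__ == "__main__":
    print(reconfigurability_check([1.0] * 100))
```

A published experiment that passes such a check gives the skeptic some assurance that the reported result is a property of the algorithm rather than of a lucky parameter setting.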

Those who read Italian may enjoy the philosophical article on open software and reproducible research by Alessandro Frigeri and Gisella Speranza: “Eppur si muove”: Software libero e ricerca riproducibile. Eppur si muove (“and yet it moves”) are the words attributed to Galileo Galilei, the father of modern science.