Today the software and code that drive many scientific studies (and ultimately their results) are far from standardized. Often seen as a means to an end, this code may not be made available to readers of scientific journals or even to reviewers. Can projects like Open ABM effect positive change in the peer review process, making it easier to verify code that is often hidden down in the weeds of a scientific study?
Scientists working with computational models typically report on the validity of a particular model when they present their results (i.e., how well it fits the data and how useful it is as a predictor for the phenomenon under consideration). But what happens when there is a moth in the tubes, a bug in the code? A recent Nature news article by Erika Check Hayden addresses this verification issue and some of its implications while highlighting a Mozilla project aimed at putting such code through a review process similar to the one used for commercial software products.
Mozilla plan seeks to debug scientific code
Software experiment raises prospect of extra peer review.
Erika Check Hayden
24 September 2013
Note that in the comments section on the Nature site, the author points to Open ABM as a model for getting source code out into the world:
“Erika Check • 2013-09-25 08:16 PM
These didn’t fit in the story, but two other projects trying to encourage peer review of code are CoMSES Net (http://www.openabm.org/) in ecology and social sciences and…”