In 2010, two renowned economists, Carmen Reinhart and Kenneth Rogoff, released research confirming what many fiscally conservative politicians had long suspected: that a country's economic growth declines if public debt rises above a certain percentage of GDP. The paper found a receptive audience in George Osborne, the soon-to-be UK Chancellor, who cited it several times in a speech outlining what would become the UK's political playbook for the austerity era: cutting public services in order to pay down the national debt.
There was only one problem with Reinhart and Rogoff's research. They had inadvertently left five countries out of their analysis: they ran the numbers on only 15 countries instead of the 20 they thought they had selected in their spreadsheet. When other, lesser-known economists corrected this error and a few others, the most striking part of the results disappeared. The relationship between debt and GDP was still there, but the effects of high debt were far more subtle than the cliff-edge Osborne had alluded to in his speech.
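A spreadsheet range that silently stops short is exactly this kind of bug. The sketch below uses made-up growth figures (not Reinhart and Rogoff's actual data) to show how averaging 15 rows instead of the intended 20 shifts a headline number:

```python
# Illustrative only: hypothetical growth figures for 20 countries,
# not Reinhart and Rogoff's real data.
growth = [3.0, 2.5, 2.0, 1.5, 1.0, 2.2, 1.8, 2.4, 1.6, 2.1,
          1.9, 2.3, 1.7, 2.6, 1.4,   # first 15 rows
          4.5, 3.5, 4.0, 4.2, 3.8]   # last 5 rows, accidentally excluded

def mean(values):
    return sum(values) / len(values)

# Intended calculation: average across all 20 countries.
correct = mean(growth)        # 2.5

# Spreadsheet-style bug: the averaging range stops five rows short.
buggy = mean(growth[:15])     # 2.0

print(f"correct: {correct:.2f}, buggy: {buggy:.2f}")
```

Here the omitted rows happen to hold the highest-growth countries, so the buggy average understates growth by half a percentage point, enough to change the story a policymaker takes away.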
Scientists – like all of us – are not immune to mistakes. "It is clear that errors are everywhere, and a small fraction of these errors will change research conclusions," says Malte Elson, a professor at the University of Bern in Switzerland who studies research methods, among other things. The problem is that not many people are looking for these errors. Reinhart and Rogoff's mistakes were only discovered in 2013, by an economics student whose professor had asked students to try to replicate the findings of prominent economics research.
In collaboration with fellow meta-science researchers Ruben Arslan and Ian Hussey, Elson created a method for systematically finding errors in scientific research. The project, called ERROR, is modelled on bug bounties in the software industry, where hackers are rewarded for finding flaws in code. In Elson's project, researchers are paid to look for potential errors in published papers and receive a bounty for each verified error they discover.
The idea came from a discussion between Elson and Arslan, who encourages scientists to spot errors in his work by offering to buy them a beer for each typo they find (up to a maximum of three per paper) and €400 ($430) for any error that changes a paper's main conclusion. "We were aware of papers in our fields that were completely flawed because of demonstrable errors, but it was very difficult to correct the record," Elson says. He believes that mistakes like these, left undetected, add up to a big problem. If a PhD student spends her degree pursuing a result that turns out to be wrong, that could amount to tens of thousands of wasted dollars.
Error-checking is not a formal part of publishing scientific papers, says Hussey, a meta-science researcher in Elson's lab in Bern. When a research paper is submitted to a scientific journal – such as Nature or Science – it is sent to a small number of experts in the field, who give their opinion on whether the paper is of high quality, logically sound, and a valuable contribution to the field. However, these reviewers typically do not check for errors, and in most cases they do not have access to the raw data or code they would need to root errors out.