"Journals, like magazines, ought to be held to a higher standard for the material they publish. Just as they are ranked on how often their articles are cited (the so-called impact factor), they ought to be rated on how often they retract papers and how forthcoming those notices are. They should also be graded on how many of the findings they publish are reproduced by later studies – what we've called a 'reproducibility index,'" says Ivan Oransky, the global editorial director of MedPage Today.
Rolling Stone’s retraction of an incendiary article about an alleged gang rape on the campus of the University of Virginia certainly deserves a place in the pantheon of legendary journalism screw-ups. It is highly unusual – although not unprecedented – for a news organization to air its dirty laundry so publicly.
One meme that’s emerged from the wreckage is that journalism ought to be more like science, which, it’s thought, is the epitome of a self-correcting system. In a story about the Rolling Stone retraction, for example, the New York Times reported that Nicholas Lemann, former dean of the Columbia Journalism School, teaches his students that the “Journalistic Method” is much like the scientific method:
> It’s all about very rigorous hypothesis testing: What is my hypothesis and how would I disprove it? That’s what the journalist didn’t do in this case.
That’s a pretty analogy – but even in science, the reality doesn’t live up to the ideal.
![](https://62e528761d0685343e1c-f3d1b99a743ffa4142d9d7f1978d9686.ssl.cf2.rackcdn.com/files/77399/width668/image-20150408-18044-1azvrzc.jpg)
What’s certainly true, as a definitive 12,700-word report by Lemann’s successor at Columbia, Steve Coll, and colleagues points out, is that Sabrina Rubin Erdely, who wrote the magazine article, did not attempt to disprove her hypothesis by interviewing the alleged perpetrators of the rape she described. Nor did Rolling Stone’s editors require her to go back and do such reporting before publishing the article.
We’d consider that akin to a failure of peer review, the process by which experts look for problems in methodology that might undermine a scientist’s conclusions. When you’re not pushing yourself – or someone else – to look for those problems, confirmation bias can win. That natural tendency to seek out evidence supporting a narrative or theory we already believe to be true is very powerful.
It’s also true that two of the three broad categories for why media outlets retract articles, as described in the New York Times, are roughly the same in science: outright fabrication and plagiarism. (The third category, and the one that pertains in the Rolling Stone debacle, relates to lack of skepticism. We’ll get back to that in a moment.)
Ideals of scientific publishing are a standard to emulate
But the similarities end there.
Science, and scientific publishing, rarely tells the story of a single event. Published papers, particularly in the world of biomedicine, typically relate what happened in experiments involving multiple tests. What Lemann is in fact describing is just one small, although essential, aspect of the scientific method – the effort to identify and eliminate bias in one’s thinking or testing of a hypothesis.
When science works as designed, subsequent findings augment, alter or completely undermine earlier research. When something new emerges that revises the prevailing wisdom, scientists can, and often do, correct the record by retracting their earlier work.
![](https://62e528761d0685343e1c-f3d1b99a743ffa4142d9d7f1978d9686.ssl.cf2.rackcdn.com/files/77400/width668/image-20150408-18086-w8xy3u.jpg)
Reality falls short
The problem is that in science – or, more accurately, scientific publishing – this process seldom works as designed.
![](https://62e528761d0685343e1c-f3d1b99a743ffa4142d9d7f1978d9686.ssl.cf2.rackcdn.com/files/77522/width237/image-20150409-15231-5rvd8w.jpg)
Through our work on Retraction Watch, we have found that journals – even when they do retract, which is not as often as they should – rarely give a full and clear picture of how and why a paper went off the rails. Retraction notices in science typically do not resemble the explanations newspapers publish when an article is pulled, and they never involve a report as detailed as Coll’s overview of the admittedly unique Rolling Stone case. Some journals have even advised readers to contact the authors of the original papers for more information – which somehow strikes them as a reasonable course of action – rather than publishing for all to see the issues that led to the retraction.