Fraud Squad

Following the Wakefield fraud story, I’ve seen several blogs and blog commenters suggesting that The Lancet was at fault for failing to catch the fraud in the peer review process.  I (and, I think, most practicing scientists) don’t agree with that condemnation, which I think shows that many people outside the field don’t really understand what peer review is supposed to do.  I’m just going to pull together a couple of comments I’ve made on other blogs …

At Hoyden About Town, I said (in part):

There’s been lots of criticism of The Lancet for publishing this study (they’ve since caused it to be retracted) — but I don’t agree with most of that criticism. Scientific journals should try to catch fraud, but this kind of fraud, wholesale fakery, is ironically harder to catch than the simpler forms where, say, a single part of one figure is changed. It’s not possible for a journal to go back and re-scrutinize all the primary data in all the papers it publishes, for example. It’s necessary to rely on the scientific process as a self-correcting mechanism. Of course, that’s pretty much what happened in this case — Wakefield’s work was rapidly and thoroughly refuted in the scientific literature — but the mainstream press has lagged far behind the scientific consensus. If it weren’t for ambulance chasers, scandal-seeking newspapers, ignorant and naive reporters, and greedy lawyers, this would have disappeared within a year of Wakefield’s first article, as happens with almost all the mistaken, careless, and misinterpreted scientific papers that are published by the dozen every day.

One sentence from that comment is worth re-emphasizing: This was fraud, but it didn’t directly set science back very far, because good science refuted it quickly and thoroughly.  It was the non-science world that set back research on autism, by accepting what scientists already knew was wrong.

Effect Measure made a similar point about peer review:

Wakefield provided the case summaries (which we now know were doctored) and a reviewer would not have had access to or had the time to look at the original medical records. The same is true for the journal. Accurate representation of raw data is taken on trust. I don’t think The Lancet can be taken to task for not catching this. This kind of scientific misconduct is only found after the fact.

In the comments on the Effect Measure article, Sam C made an interesting suggestion:

Review is fine for good science and for research where errors will not have far-reaching impact (so the normal process of correction and extension by future workers is appropriate).

But cases like this need audit. An audit cannot always detect deliberate fraud (just as in financial auditing), but it might pick up errors of protocol (like the O’Leary lab’s inadequate controls in their DNA/RNA work) or substandard or imperfect work (inappropriate statistical techniques, equipment whose limitations are not understood, results transferred incorrectly, etc.).

Engineering organisations use ISO standard QA systems, but these only work if applied correctly and conscientiously.

Perhaps any grant award should require that some percentage of the award be allocated to an independent audit of techniques, results and conclusions?

I replied to his comment as follows:

Sam, this is an interesting idea I haven’t heard floated before. I think it’s not doable as a portion of every grant, but I wonder if there could be a separate fund set aside specifically for audits. I’m not sure how it could work, and it would be a real problem to get it balanced properly, though — if there were a standing committee or organization, I could see it getting bogged down in bureaucracy, dinging every paper they come across for trivial procedural errors (“Patient #214 signed the form but failed to initial the 17th page”) so that genuine problems would be hidden anyway.

I don’t know much about formal audit procedures.  Is there a precedent for a useful type of audit that would focus on fraud detection and completely broken research, without getting bogged down in trivia?

I want to mention the Journal of Experimental Medicine and the Journal of Cell Biology, which seem to me to be taking a much more proactive attitude toward fraud than most other scientific journals:

“The issue of data integrity should not be left to chance and probability. This is scholarly publishing, not blackjack.”

M. Rossner (2008). A false sense of security. The Journal of Cell Biology, 183(4), 573–574. DOI: 10.1083/jcb.200810172