Wednesday 3 October 2012

Goldacre mine

During a quick break between meetings, I've watched Ben Goldacre's new TED lecture. I met Ben when he gave a talk at the final conference of the Evidence project (in which I was involved), back in 2007. I like him, particularly for his wild hair, a feature I hold dear, and I quite often agree with him on issues related to the use of evidence in medical studies. In this presentation, he talks about publication bias, which I think is one of his pièces de résistance. In particular, he discusses the severe bias in the publication of clinical trial results, where unfavourable outcomes tend to be swept under the rug.

Of course, this is a very relevant issue; to some extent (and I gather this is what Ben argues as well), only clear and very strict regulation could possibly solve it. A typical example people consider is to establish a register of all authorised trials: the idea is to record all ongoing studies on an official registry, so that people can expect some results from each of them. Failure to publish the results, whether positive or negative, would then be a strong hint that the findings were unfavourable. [I'm not sure whether something like this actually exists. But even if it does, I don't think it is enforced - and Ben seems to suggest this too, in the talk].

The problem is that, probably, this would not be enough to magically solve the problem - I can definitely see people trying hard to find loopholes in the regulation, e.g. classifying a trial as "preliminary" (or something similar) and thereby retaining the right not to publish the results, if they so wish...

Maybe this is an indirect argument for being more openly Bayesian in designing and analysing clinical trials. For example, if some information is available on the amount of ongoing research in a given area, maybe one could try and formalise some measure of "strength" of the published evidence. Or, even more simply, if a sceptical prior is used to account for the information provided by the literature, the inference would be less affected by publication bias (I think this is a point that Spiegelhalter et al. discuss a lot in their book).
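To make the sceptical-prior point a bit more concrete, here is a minimal sketch using a conjugate normal-normal model. The effect size, standard error and prior settings are all made-up numbers, purely for illustration: the point is just that a prior tightly concentrated around "no effect" shrinks an (possibly selectively published) optimistic estimate towards zero, while a vague prior essentially reproduces it.

```python
# Sketch: how a sceptical prior tempers a published (possibly biased) trial result.
# All numbers are hypothetical, for illustration only.
# Conjugate normal-normal model: theta ~ N(m0, s0^2), observed ybar ~ N(theta, s^2).

def posterior(m0, s0, ybar, s):
    """Posterior mean and sd of theta for a normal prior and normal likelihood."""
    w_prior = 1 / s0**2               # prior precision
    w_data = 1 / s**2                 # data precision
    post_var = 1 / (w_prior + w_data)
    post_mean = post_var * (w_prior * m0 + w_data * ybar)
    return post_mean, post_var**0.5

# A published trial reports a treatment effect of 0.5 (say, on the log-odds
# scale) with standard error 0.2 - plausibly an optimistic, selected result.
ybar, se = 0.5, 0.2

# Vague prior: the posterior essentially reproduces the published estimate.
print(posterior(0.0, 10.0, ybar, se))   # mean close to 0.5

# Sceptical prior centred on "no effect", with small variance: the posterior
# mean is pulled well towards zero, discounting the selected positive result.
print(posterior(0.0, 0.15, ybar, se))   # mean of 0.18
```

Nothing deep here, of course - just the usual precision-weighted average - but it shows how the sceptical prior acts as a formal discount on evidence that may have survived a publication filter.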
