Why are these simple measures so rarely used in preclinical animal studies? And do animal studies show exaggerated effects as a consequence of poor methodology?
The March 2008 issue of Stroke reports a "meta-meta-analysis" pooling 13 meta-analyses that together covered over fifteen thousand animals. Perhaps surprisingly, the analysis found no relationship between the size of the treatment effect and the use of either randomization or masked outcome assessment. It did, however, find that studies failing to mask investigators during treatment allocation reported larger treatment effects.
This is probably the largest analysis of its kind. It isn't perfect: publication bias could well skew the results. The size of the treatment effect, for example, probably has a strong influence on whether a study gets published at all. If so, the effects of methodological bias would be obscured: preclinical researchers might simply be stuffing their methodologically rigorous studies into their filing cabinets because no effect was observed.
The conclusion I draw? Preclinical researchers should randomize and mask anyway. There is some evidence that it matters, the logical rationale is overwhelming, and the inconvenience to investigators seems more than manageable. (photo credit: Chiara Marra, 2007)