In past entries, I have argued for extending such methodological rigor to preclinical research. This position has three defenses. First, phase 1 human trials predicated on weak preclinical evidence are insufficiently valuable to justify their execution. Second, methodologically weak preclinical research is an abuse of animals. Third, publication of methodologically weak studies is a form of "publication pollution."
Two recent publications underscore the need for greater rigor in preclinical studies. The first is a paper in the journal Stroke (published online August 14, 2008; also reprinted in Journal of Cerebral Blood Flow and Metabolism). Many of the paper's authors have doggedly pursued the cause of preclinical methodological rigor in stroke research, publishing a series of meta-analyses of preclinical stroke studies. In this article, Malcolm Macleod and co-authors outline eight practices that journal editors and referees should look for when reviewing preclinical studies. Many are also urged by STAIR (Stroke Therapy Academic Industry Roundtable), a consortium organized in 1999 to strengthen the quality of stroke research.
Their recommendations are:
1- Animals (precise species, strain, and other details should be provided)
2- Sample-size calculation
3- Inclusion and exclusion criteria for animals
4- Randomization of animals
5- Allocation concealment
6- Reporting of animals excluded from analysis
7- Masked outcome assessment
8- Reporting conflicts of interest and study funding
There is an interesting, implicit claim in this paper: journal editors and referees bear part of the blame for poor methodological quality in preclinical research. In my next post, I will turn to a related news article about preclinical studies in Amyotrophic Lateral Sclerosis. (photo credit: 4BlueEyes, 2006)