
Friday, October 10, 2008

The Problem with Models

Chicago in plastic and balsa. If only animal models were as convincing as the one pictured above from the Museum of Science and Industry. 

The August 7 issue of Nature ran a fascinating feature on how many scientists are reassessing the value of animal models used in neurodegenerative preclinical research ("Standard Model," by Jim Schnabel).

The story centers on the striking failure to translate promising preclinical findings into treatments for various neurodegenerative diseases. In one instance, a highly promising drug, minocycline, actually worsened symptoms in patients with ALS. In others, impressive results in mice have not been reproducible. According to the article, a cluster of patient advocacy groups, including Prize4Life and the non-profit biotechnology company ALS TDI, is spearheading a critical look at standard preclinical models and methodologies.

Much of the report concerns the limitations of mouse models. Scientists from the Jackson Laboratory (perhaps the world's largest supplier of research mice) warn that many mouse strains are genetically heterogeneous; others develop new mutations on breeding. Other problems described in the article include infections that spread in mouse colonies, failures to match sex or litter membership between experimental and control groups, and small sample sizes. The result is Metallica-like levels of noise in preclinical studies. Combined with the nonpublication of negative studies, this noise yields many false positives.
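The arithmetic behind that last point is easy to demonstrate. The simulation below is my own illustration, not anything from the article: it runs many small, noisy two-group studies of a therapy with no true effect and counts how many come out "positive." If only those positives are published, the literature fills with false leads.

```python
# Hypothetical illustration (not from the Nature article): small, noisy
# null studies plus nonpublication of negative results leave the published
# record dominated by false positives.
import random
import statistics

random.seed(42)

def run_study(n_per_group, true_effect=0.0, noise_sd=1.0):
    """Simulate one two-group study; return True if it looks 'positive'."""
    control = [random.gauss(0.0, noise_sd) for _ in range(n_per_group)]
    treated = [random.gauss(true_effect, noise_sd) for _ in range(n_per_group)]
    # Crude two-sample t-like statistic
    diff = statistics.mean(treated) - statistics.mean(control)
    pooled_sd = statistics.stdev(control + treated)
    se = pooled_sd * (2.0 / n_per_group) ** 0.5
    return abs(diff / se) > 2.0  # roughly the conventional p < 0.05 cutoff

# 1000 studies of a therapy with NO true effect, tiny samples (n = 5/group).
# Every one of these "positives" is a false positive, yet if negative
# studies go unpublished, these are the only results anyone sees.
published = sum(run_study(5) for _ in range(1000))
print(f"'Positive' results among 1000 null studies: {published}")
```

With samples this small, dozens of chance "successes" emerge from a therapy that does nothing at all.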

The article bristles with interesting tidbits. One that struck me is the organizational challenge of changing the culture of model system use. According to the article, many academic researchers and grant referees have yet to warm to criticisms of the models, and some scientists and advocates are asking for leadership from the NIH. Another striking point, alluded to in the article's closing, is a fragmentation of animal models that mirrors personalized medicine.

"Drugs into bodies." That's the mantra of translational research. It is an understandable sentiment, but also a pernicious one if it means more poorly conceived experiments on dying patients. What is needed is a way to make animal models (and the guidelines pertaining to them) as alluring as supermodels. (photo credit: Celikens 2008)

Monday, October 6, 2008

STAIRing at Method in Preclinical Studies

Medical research, we all know, is highly prone to bias. Researchers are, after all, human in their tendency to mix desire with assessment. So too are trial participants. Since the late 1950s, epidemiologists have introduced into clinical research a number of practices designed to reduce or eliminate sources of bias, including randomization of patients, masking (or "blinding") of volunteers and physician-investigators, and statistical analysis.

In past entries, I have argued for extending such methodological rigor to preclinical research. This position has three defenses. First, phase 1 human trials predicated on weak preclinical evidence are insufficiently valuable to justify their execution. Second, methodologically weak preclinical research is an abuse of animals. Third, publication of methodologically weak studies is a form of "publication pollution."

Two recent publications underscore the need for greater rigor in preclinical studies. The first is a paper in the journal Stroke (published online August 14, 2008; also reprinted in the Journal of Cerebral Blood Flow and Metabolism). Many of the paper's authors have doggedly pursued the cause of preclinical methodological rigor by publishing a series of meta-analyses of preclinical stroke studies. In this article, Malcolm Macleod and co-authors outline eight practices that journal editors and referees should look for when reviewing preclinical studies. Many are urged by STAIR (the Stroke Therapy Academic Industry Roundtable), a consortium organized in 1999 to strengthen the quality of stroke research.

Their recommendations are:

1- The animals used (precise species, strain, and source details should be provided)
2- Sample-size calculation
3- Inclusion and exclusion criteria for animals
4- Randomization of animals to treatment groups
5- Allocation concealment
6- Reporting of animals excluded from analysis
7- Masked ("blinded") outcome assessment
8- Reporting of conflicts of interest and funding
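To make a couple of these recommendations concrete, here is a minimal sketch of my own (not from the Stroke paper) of an a priori sample-size calculation and randomized allocation of animals. The normal-approximation formula and the function names are illustrative assumptions, not the authors' methods.

```python
# Illustrative sketch of two STAIR-style practices: sample-size calculation
# and randomization of animals. Function names and formulas are the blogger's
# assumptions for illustration, not taken from the Stroke paper.
import math
import random

def sample_size_per_group(effect_size, z_alpha=1.96, z_beta=0.84):
    """Approximate n per group for a two-sample comparison, using the
    standard normal approximation n = 2 * ((z_alpha + z_beta) / d)^2.
    Defaults: two-sided alpha = 0.05 (z = 1.96), power = 0.80 (z = 0.84)."""
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

def randomize(animal_ids, seed=None):
    """Shuffle animals and split them into treatment and control groups.
    In practice the allocation list would be generated and held by a third
    party (allocation concealment), so experimenters cannot foresee or
    influence which animal lands in which group."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

n = sample_size_per_group(effect_size=1.0)  # d = 1.0, a large effect
print(f"Animals needed per group: {n}")     # -> 16 per group
groups = randomize(range(1, 2 * n + 1), seed=2008)
print(groups)
```

Note that even a large assumed effect (a full standard deviation) calls for 16 animals per group, well above the sample sizes that the meta-analyses criticize.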

There's an interesting, implicit claim in this paper: journal editors and referees partly bear the blame for poor methodological quality in preclinical research. In my next post, I will turn to a related news article about preclinical studies in Amyotrophic Lateral Sclerosis. (photo credit: 4BlueEyes, 2006)