
Friday, October 10, 2008

The Problem with Models

Chicago in plastic and balsa. If only animal models were as convincing as the one pictured above from the Museum of Science and Industry. 

The August 7 issue of Nature ran a fascinating feature on how many scientists are reassessing the value of animal models used in neurodegenerative preclinical research ("Standard Model," by Jim Schnabel).

The story centers on the striking failure to translate promising preclinical findings into treatments for various neurodegenerative diseases. In one instance, a highly promising drug, minocycline, actually worsened symptoms in patients with ALS. In others, impressive results in mice have not been reproducible. According to the article, a cluster of patient advocacy groups, including Prize4Life and the nonprofit biotechnology company ALS TDI, is spearheading a critical look at standard preclinical models and methodologies.

Much of the report concerns the limitations of mouse models. Scientists from the Jackson Laboratories (perhaps the world's largest supplier of research mice) warn that many mouse strains are genetically heterogeneous; others develop new mutations on breeding. Other problems described in the article include infections that spread through mouse colonies, failures to match sex or litter membership between experimental and control groups, and small sample sizes. The result is Metallica-like levels of noise in preclinical studies. Combine this with the nonpublication of negative studies, and the result is many false positives.
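The arithmetic behind that last point is easy to demonstrate. Here is a minimal simulation sketch (my own illustration, not from the article; every parameter is an assumption chosen for clarity): small, noisy two-group studies with no true treatment effect still cross the significance bar about 5% of the time, and if only those "positive" studies get published, the literature consists entirely of false positives with impressively large apparent effects.

```python
# Illustrative sketch: small noisy studies + nonpublication of negative
# results = a published literature of false positives.
# All parameters (group size, noise, significance rule) are assumptions.
import random
import statistics

random.seed(42)

def run_study(n_per_group=8, true_effect=0.0, noise_sd=1.0):
    """Simulate one two-group mouse study; return (effect estimate, significant?)."""
    control = [random.gauss(0.0, noise_sd) for _ in range(n_per_group)]
    treated = [random.gauss(true_effect, noise_sd) for _ in range(n_per_group)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # Crude significance rule: estimate more than 2 standard errors from zero.
    pooled_sd = statistics.stdev(control + treated)
    se = pooled_sd * (2 / n_per_group) ** 0.5
    return diff, abs(diff) > 2 * se

studies = [run_study() for _ in range(10_000)]   # true effect is exactly zero
published = [d for d, sig in studies if sig]     # only "positive" studies appear

print(f"Fraction of studies that look positive: {len(published) / len(studies):.1%}")
print(f"Mean |effect| among published studies:  "
      f"{statistics.mean(abs(d) for d in published):.2f}")
```

Despite a true effect of zero, every published effect clears the significance threshold, so the published mean effect is far from zero.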

The article bristles with interesting tidbits. One that struck me concerns the organizational challenge of changing the culture of model system use. According to the article, many academic researchers and grant referees have yet to warm to criticisms of the models, and some scientists and advocates are calling for leadership from the NIH. Another striking point, alluded to in the article's closing, is a fragmentation of animal models that mirrors personalized medicine.

"Drugs into bodies." That's the mantra of translational research. It is an understandable sentiment, but also pernicious if it means more poorly conceived experiments on dying patients. What is needed is a way to make animal models- and guidelines pertaining to them- as alluring as supermodels. (photo credit: Celikens 2008)

Thursday, February 28, 2008

Masks and Random Thoughts on Preclinical Research Validity

Epidemiologists and biostatisticians have evolved numerous ways of reducing bias in clinical trials. Randomizing patients and masking them to their treatment allocation are two; masking the clinicians who assess their outcomes is a third.

Why are these simple measures so rarely used in preclinical animal studies? And do animal studies show exaggerated effects as a consequence of poor methodology?

The March 2008 issue of Stroke reports a "meta-meta-analysis" of 13 studies comprising over fifteen thousand animals. Perhaps surprisingly, the study did not show a relationship between the use of randomization or masked outcome assessment and the size of treatment effect. It did, however, show a positive relationship between size of treatment effect and failure to mask investigators during treatment allocation.

This is probably the largest analysis of its kind. It isn't perfect: publication bias is very likely to skew the analysis. For example, size of treatment effect is likely to strongly influence whether a study gets published. If so, effects of methodological bias could be obscured; preclinical researchers might simply be stuffing their methodologically rigorous studies in their filing cabinets because no effect was observed.
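To see how that filing-cabinet effect could hide a real methodological difference, here is a minimal simulation sketch (again my own illustration, with assumed parameters, not data from the Stroke analysis). Suppose unmasked allocation inflates the apparent effect by half a unit while rigorous studies see the true effect of zero. Across all studies the gap is obvious; among published (significant, positive) studies alone, it nearly vanishes.

```python
# Illustrative sketch: publication of only "positive" results shrinks the
# apparent gap between rigorous and unrigorous studies.
# The 0.5-unit inflation for unmasked allocation is an assumption.
import random
import statistics

random.seed(7)

def study(true_effect, n=8, sd=1.0):
    """One two-group study; return (effect estimate, significant?)."""
    control = [random.gauss(0.0, sd) for _ in range(n)]
    treated = [random.gauss(true_effect, sd) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = statistics.stdev(control + treated) * (2 / n) ** 0.5
    return diff, abs(diff) > 2 * se

unrigorous = [study(true_effect=0.5) for _ in range(5_000)]  # unmasked: inflated
rigorous = [study(true_effect=0.0) for _ in range(5_000)]    # masked: true null

def published_mean(results):
    """Mean effect among studies that are significant and positive."""
    return statistics.mean(d for d, sig in results if sig and d > 0)

gap_all = (statistics.mean(d for d, _ in unrigorous)
           - statistics.mean(d for d, _ in rigorous))
gap_published = published_mean(unrigorous) - published_mean(rigorous)

print(f"Rigorous-vs-unrigorous gap, all studies:       {gap_all:.2f}")
print(f"Rigorous-vs-unrigorous gap, published studies: {gap_published:.2f}")
```

The rigorous null studies that would have exposed the difference never appear in print, so a meta-analysis of the published record understates the cost of poor methodology.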

The conclusion I draw? Preclinical researchers should randomize and mask anyway. There is some evidence it matters. Moreover, the logical rationale is overwhelming, and the inconvenience to investigators seems more than manageable. (photo credit: Chiara Marra 2007)