Monday, September 20, 2010

Are Trials Necessary?

Today's New York Times ran a heartbreaking story by Amy Harmon about two cousins who developed melanoma. One was entered into a cancer clinical trial and received the investigational drug PLX4032. The other was ineligible for the trial, and therefore unable to access the experimental drug. Guess which cousin died?

The article is one in a series of Harmon articles that seem to raise questions about whether the rules governing drug testing and research are depriving desperately ill patients of timely access to curative therapy. In this article, the narrative takes aim at two practices: 1- including control groups within trials, and randomly determining that some patients will receive standard care that is widely regarded as inadequate; 2- denying patients access to drugs that have not yet demonstrated an unequivocal therapeutic advantage.

As with many of my blog entries, I preface this one by saying that I am not a cancer doc, and therefore not in a position to evaluate whether PLX4032 is the wonder drug this story makes it out to be. I also preface my comment on this article by acknowledging the incredible pain and anxiety that patients suffer when denied access to a trial, or when denied access to a preferred drug within a trial. These disclaimers aside, I found the tenor of this article very problematic.

First, the reason investigators randomly determine treatment choice in trials is that, at the outset of a well designed trial, there is genuine uncertainty about whether the new drug is better than, the same as, or worse than the (inadequate) standard treatment. Many doctors participate in trials because they fervently believe the new regimen is better than the standard one. But the evidence shows, again and again, that on average, new drugs outperform old ones in only a small portion of instances (maybe around 15-20%). It is just as likely that new drugs will underperform standard treatments- making patients sicker perhaps, or failing to deliver as much punch. So one concern about the article is the premise that doctors' personal beliefs about which cancer drug will perform better in a randomized controlled trial carry some moral weight. The evidence shows doctors in the aggregate haven't a clue- which is why functional healthcare systems run trials.

A second troubling premise here is that there is no harm in allowing public consumption of drugs that have not yet been validated in rigorous clinical trials. The CEOs of many pharmaceutical companies may well share this view. But the historical record shows otherwise: in fact, many patients are severely harmed when drugs are introduced into clinical use before they have been established as safe and effective. Perhaps a few readers out there are familiar with thalidomide? Or autologous bone marrow transplantation for breast cancer? And have you considered the price tag on these new cancer drugs- do you want your government or insurance company purchasing a potentially useless drug?

Still, the article zeroes in on an ethical tension that is very difficult to eradicate from clinical research. Patients want- and are entitled- to be treated as individuals. Physicians also prefer to treat patients as individuals. Clinical trials, however, require that patients be treated as tokens of larger populations- that they be treated, in a sense, as "stand-ins" for future patients. Randomization has not been shown to deprive patients of access to life preserving drugs. However, it does rob patients of fulfilling their desire to be treated as individuals and to exercise personal choice. And this is one of the reasons why the field of research ethics is endlessly fascinating, important, and nettlesome. (photo credit: travelingMango 2008).