A new study in JAMA this week shows that about 70 percent of newly marketed drugs in the last decade came to market with comparative effectiveness data – research done against at least one existing similar therapy. (Authors Nikolas Goldberg et al. excluded orphan treatments for which no alternatives exist.) Whether that number is good news or bad depends on whom you talk to. The FDA Center for Drug Evaluation and Research’s Dr. Robert Temple told Reuters the number was “pretty impressive,” considering the FDA doesn’t require head-to-head trials for approval as the European Medicines Agency does. But other physicians warned that having no data against anything but a placebo for a third of all new drugs in the U.S. is nothing to write home about.

The bigger question, the authors suggest, should be: Is this data actually being used by prescribers, and if not, how can the system change to make data beyond placebo controls useful in the exam room? Comparing drugs to existing therapies – sometimes called comparative effectiveness research – is an important tool for judging a drug’s safety and efficacy before large randomized controlled trials and longitudinal population studies are complete. “Making the data more accessible to clinicians and payers can help maximize their utility in prescribing and coverage decisions,” they write.

But there is little evidence, they say, that the comparative information included in approval literature to date is getting factored in when scrips get written. They suggest possible ways to disseminate the data, including independent drug information providers like RxFacts.org, and the paper provides good policy food for thought as the U.S. develops its own comparative effectiveness research system through the Patient-Centered Outcomes Research Institute (PCORI).

For such a system to work right, determining where the rubber meets the road – i.e., what it’s going to take, statistically and pragmatically, for CER data to be applied in a clinical setting – should be at the center of CER planning and design, says Leonard Zwelling in a recent piece on the Health Affairs blog. Research that has no chance of changing clinical practice for the better ought not to be done. Using certain methods associated with RCTs, Zwelling writes, could make CER – well – more effective, and therefore worth our time and money. He looks at the 2009 U.S. Preventive Services Task Force recommendations on mammograms as a case study in ineffective CER. Because the study didn’t determine at the outset what level of statistical significance would be needed to change practice, the medical community and the public didn’t know how to weigh the data and balked. (It’s more complicated than that, of course: how risk gets communicated plays a big role, as does the classic conflict of weighing the aggregate against the anecdote.)

As PCORI gears up, Zwelling lands in much the same place as the JAMA authors: figure out how to get comparative efficacy findings into the clinical setting in a way that makes for better medicine. Zwelling says PCORI would have a better chance of doing CER right by infusing a little RCT thinking: “establishing parameters that measure significant improvements in outcomes, reductions in costs, or increases in value, and the degree to which a study must show this benefit to be considered positive and thus alter current behavior by caregivers, patients and payers.” Otherwise, he asks, “why spend the money to even start the research?”

–Kate Petersen, PostScript blogger