I think the elevation of “crowdsourcing” into a legitimate method for determining drug benefits or risks in the real world is a Luddite-like, anti-science trend that must be counteracted.

The latest example of this was just published on the Dose of Digital blog in the post “The Best Pharma Products According to Patients”. In that post, Jonathan Richman reports drug ratings from iGuard.org and says:

“…which are the top-rated products? Forget about all those head-to-head trials that payors want, but most companies are hesitant to conduct (for many reasons). If you want to know which treatment is best, why not check out its ratings? How far away is a future where patients select which products they want to take by using reviews such as those found on iGuard? I’m sure some of you are scoffing at this idea because you think physicians should be recommending treatments, not iGuard. Two questions for those of you thinking this: aren’t objective ratings guiding treatment requests better than DTC TV ads that also aim to get people to ask for a specific treatment? And if these ratings are available, why would physicians ignore them? How long before they too use these types of reviews to decide which treatments to prescribe?”

Heaven forbid that physicians use these ratings to decide on treatment options for their patients!!! If Jonathan really believes this, then he is promoting the most irresponsible and anti-scientific methodology I have ever come across!

According to the person who coined the term crowdsourcing, “A central principle animating crowdsourcing is that the group contains more knowledge than individuals.”

Therefore, the argument goes, a group of patients who take a specific drug has more knowledge than a single individual. OK, maybe a group of patients has more knowledge than a single patient, but does it have more knowledge than a single physician who has treated a large number of patients?

But more than that, what is this “group” that rated drugs like Viagra on iGuard.org?

More precisely, how MANY people are in this group and who are they?

Richman presents a table of drug ratings that includes a column labeled “Number of Patients.” The number of patients in the Viagra row is 21,500, which implies that Viagra’s “Patient Effectiveness Score” of 6.7 (vs. 7.4 for Cialis) is based on 21,500 ratings.

This is NOT the case. iGuard says it “tracks” 21,500 (now 22,000) Viagra patients, but it does not say how many of those patients RESPONDED by submitting ratings. That is, we do not know what N is in this dataset. If it is the same order of magnitude as the number of comments received (i.e., 36), then this is NOT good science, nor is it even good “crowdsourcing.”
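To see why N matters so much here, consider a rough back-of-the-envelope sketch. None of these inputs come from iGuard; the standard deviation of 2.0 on a 10-point scale is purely an assumption for illustration. The point is how much the uncertainty around a 6.7 average shrinks as the number of actual respondents grows:

```python
import math

def rating_confidence_interval(mean, sd, n, z=1.96):
    """Approximate 95% confidence interval for a mean rating,
    assuming ratings are roughly normally distributed."""
    margin = z * sd / math.sqrt(n)
    return (mean - margin, mean + margin)

# Hypothetical numbers: a 6.7 average rating with an ASSUMED
# standard deviation of 2.0 on a 1-10 scale.
for n in (36, 21_500):
    low, high = rating_confidence_interval(6.7, 2.0, n)
    print(f"n = {n:>6}: 95% CI is roughly ({low:.2f}, {high:.2f})")
```

Under these assumed numbers, an N of 36 gives an interval of roughly 6.0 to 7.4, wide enough that the gap between Viagra’s 6.7 and Cialis’s 7.4 could be nothing but noise; only with something like 21,500 actual respondents does the interval become tight enough to take that difference seriously.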

I am currently reading a small book that I recommend. It is entitled “The Numbers Game: The Commonsense Guide to Understanding Numbers in the News, in Politics, and in Life” by Michael Blastland and Andrew Dilnot. The title should have included blogs as well as “News,” because more and more people are reading blogs that have even less fact-checking than your average newspaper!