
Monday, July 23, 2012

Lies and statistics

There’s one complaint I make about the newsmedia over and over again, and for good reason: They never get any better. That complaint is about their habit of routinely publishing stories about a “survey” or a “study” without providing any information about the methodology, which would enable us to decide whether there’s any validity to the information being provided.

Sometimes this is about important stuff—electoral races, public attitudes about an issue, or even just information that helps us to better understand the people around us.

And then sometimes the studies/surveys are about things of no real importance whatsoever: Today Stuff reported on the “most trusted” brands in New Zealand, as ranked by Reader’s Digest. Said Stuff: “Over 1200 people voted in the online survey.”

That “online survey” part should raise red flags, since such surveys are so often—or even almost always—utter rubbish. In this case, we’re told nothing further.

Comment number 20 on the story, from someone named Duncan Stuart, hits all the main points I make whenever I encounter a survey that makes me raise an eyebrow, and it’s nice to read someone else calling for transparency:
What was the methodology of this survey? Were these 1200 randomly selected [respondents]? Did they rate each and every brand on the list? What was the rating scale and what did it take to be a winner? Sorry to be pedantic, but a survey like this could be a rather meaningless beauty pageant—for example in a field of 15 names the winner might be the one that gets a paltry 10% of the vote—slightly more than the runner up perhaps, but hardly a ringing endorsement.

This might be a cracker survey, but who can really tell? Readers Digest needs to be more transparent about method. After all, the winner in 2009 was Cadbury and we all saw how truly fragile their leadership really was. So it begs the question: how meaningful is this kind of survey?

As a market researcher, I'm expected to explain methodology as a routine part of delivering any data. This is so you can judge not just the raw results, but the fairness of the method.
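To make the commenter’s “beauty pageant” arithmetic concrete, here’s a minimal sketch (invented placeholder brand names, and an assumption of roughly even preferences, since we don’t know the real method) of how a 15-way poll of 1200 people can crown a “winner” with only a single-digit share of the vote:

```python
import random
from collections import Counter

# Hypothetical illustration only: 1200 respondents each pick one favourite
# from a field of 15 similarly appealing brands.
random.seed(1)
brands = [f"Brand {i}" for i in range(1, 16)]  # invented placeholder names
votes = Counter(random.choice(brands) for _ in range(1200))

(winner, winner_votes), (runner_up, runner_up_votes) = votes.most_common(2)

# The "most trusted" brand tops the table with a small plurality,
# nowhere near a majority, and barely ahead of the runner-up.
print(f"{winner}: {winner_votes / 1200:.1%} of the vote")
print(f"{runner_up}: {runner_up_votes / 1200:.1%} of the vote")
```

With even a modest spread of preferences, the gap between first and second place can be a handful of votes, which is exactly why the rating scale and selection method matter.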
Where I disagree with the commenter is that it isn’t the job of Reader’s Digest alone to be more transparent. Rather, the newsmedia also has an obligation to take seriously only those surveys that do supply complete methodology and to ignore those that don’t. No advertiser would get away with publishing an ad with “survey” results that were false or misleading, so why should the news pages get away with it? It’s a journalist’s job to verify, not just report.

So, without further information, I take “the nuclear option” and dismiss this survey as utter rubbish. Nevertheless, I’d probably put Whittaker’s at the top, too.
