Monday, May 19, 2014

And the Survey Says....!

This Graphic Is a Rough Guide to Bad (Or Badly Reported) Science

I've dealt with this topic before, but there's been another flood of stuff that made my hair frizz, so I'm going to run through this again.  There's a lot to be said for research, surveys, and the folks who report on both.  A lot of what's to be said can't be said here without someone blocking me, so we'll just get to the grit of it all.

Thanks to the 24-hour news cycle, which was invented the day someone figured out that that big test pattern on the TV screen after 10 PM wasn't making anyone richer, we are flooded with information all the time.  Much of it is of only vague interest, and there's research (ha!) that indicates we're developing stress from Information Overload.  It's not that we know too much.  It's that there's too much being thrown at us all at once, and we're not catching even a smidge of the best of it.  Then we feel guilty because we don't know what anyone is talking about in the lunch room or at the barn or wherever we surround ourselves with chattering humans.  It's painful to be so ignorant and so full of information at the same time.

"Wait!  HOW many Paints does it take to screw in light bulb?"

The problem lies in several areas.

One flawed spot is the place where the information is created.  As the great chart linked above indicates, not all research is created equal.  Some of it isn't even worth noticing.  Any poll you've never heard of probably isn't a great resource.  My first thought when I read that 73% of Americans believe or don't believe or buy or don't buy something I mostly don't care about is, "Why didn't they ask me?"  No one called me to find out how I feel about anything at all.

They used to.  I answered the phone once and submitted to a survey about how I felt about something having to do with DEP rules, and for the next few months the phone rang constantly.  That's how it works.  Most surveys are by no means random.  That group got my unlisted number from a Dept. of Ag form, so already I was in a select group of folks who live on agricultural land and actually bother to fill out forms.  If that's random, I'm a flounder.

Then from that list of the hundred or so they probably called, I was one of the ones willing to talk on the phone to a stranger (things get a little dull around here during the winter) for 20 minutes.  That brings the "N" (statistics talk for "number of subjects") down to probably three.  Of the three, maybe two of us had the same answers.  The resulting report would say 66% of farmers think the DEP is run by a gay Chinese conglomerate or whatever.
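For anyone who wants to see the arithmetic, here's a back-of-the-envelope sketch in Python.  The survey and every number in it are invented, just like the one above, and the formula is the standard rough margin-of-error approximation for a proportion; at N = 3 it barely applies at all, which is rather the point.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Rough 95% margin of error for a sample proportion.

    p -- the observed proportion (2 of 3 farmers = 0.667)
    n -- the number of respondents
    z -- z-score for the confidence level (1.96 is roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# The imaginary DEP survey: 2 of 3 farmers agree, so the "finding" is 66%.
p = 2 / 3
for n in (3, 100, 1000):
    print(f"N = {n:4d}: {p:.0%} +/- {margin_of_error(p, n):.0%}")

# N =    3: 67% +/- 53%   (the true figure could be almost anything)
# N =  100: 67% +/- 9%
# N = 1000: 67% +/- 3%
```

A plus-or-minus of 53 points is why "66% of farmers" built on three phone calls is pure noise, and why the triple-digit N I'll get to below actually matters.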

The same goes for any kind of research.  If you're going to read the news, you're going to have to learn to suss out the factoids amid the crapola.  The headline may scream that "X% of Humans are Living on the Edge!"  The fact may be that somewhere someone talked to a small group of people, found a consensus, and extrapolated from that to the world at large.

Nothing says "nonsense" like a headline that's designed to be scary.  Scary headlines exist to draw readers, not to deliver real, useful information.  That's the second flaw in this program.  But we all know how to change channels or websites or RSS feeds, so learn to be discriminating and that flaw will eventually vanish.

I'm not about to put down the hard work done by researchers in the field of equine science and veterinary medicine.  But before I fall prostrate at the thought that the feed I'm using is killing my animals, I first want to see the actual research study.  I want to see the "N".  If it's not at least triple-digit strong, I'm going to add a handful of salt.

Next I want to know how those N horses were chosen.  Were they all from one farm?  One county?  One vet practice?  One university?  If not, where did they come from?  It's obvious that a vet practice that specializes in a particular breed and sees a preponderance of one specific illness or injury isn't going to have an "N" that represents the entirety of the horse world.
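To make that concrete, here's a little Python sketch of what happens when all N horses come from one unrepresentative place.  Every farm, rate, and horse in it is invented for illustration; no real practice is being accused of anything.

```python
import random

random.seed(42)

# A made-up population: 10 farms of 200 horses each.  Nine farms have a
# 5% rate of some condition; one farm (a specialty practice's client
# base, say) runs at 60%.
farm_rates = [0.05] * 9 + [0.60]
horses = [(farm, random.random() < rate)
          for farm, rate in enumerate(farm_rates)
          for _ in range(200)]

true_rate = sum(sick for _, sick in horses) / len(horses)

# Sample A: 100 horses, all drawn from the one unusual farm.
one_farm = [sick for farm, sick in horses if farm == 9][:100]

# Sample B: 100 horses drawn at random from the whole population.
random_pick = random.sample([sick for _, sick in horses], 100)

print(f"whole population: {true_rate:.0%}")
print(f"one-farm 'study': {sum(one_farm) / len(one_farm):.0%}")
print(f"random sample:    {sum(random_pick) / len(random_pick):.0%}")
```

Same N in both samples, wildly different answers.  The "study" built on one farm's clientele reports a rate several times the true one, and nothing in the N itself warns you.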

And I want to know how the study was constructed.  Was there a control group of horses just standing around swatting flies, or were all the subjects part of the test group?  How did the researchers control for individual differences?  And most importantly, when they reported the results, did they indicate the possible degree of variation from the norm (that shows up as a plus-or-minus number, usually in decimals, and is called the "standard deviation")?  Or did they just throw the whole thing together like a sophomore's midterm paper and hope no one would ask questions?
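Here's what that kind of reporting looks like in a few lines of Python, using the standard library's statistics module and weight-gain figures I made up on the spot:

```python
from statistics import mean, stdev

# Invented weight-gain figures (kg over a month), for illustration only.
control   = [4.1, 3.8, 5.0, 4.4, 3.9, 4.6, 4.2, 4.8]  # swatting flies
test_feed = [4.9, 5.3, 4.1, 5.8, 4.4, 5.1, 5.6, 4.7]  # on the scary new feed

for label, data in (("control", control), ("test feed", test_feed)):
    print(f"{label:9s}: {mean(data):.2f} +/- {stdev(data):.2f} kg")

# control  : 4.35 +/- 0.43 kg
# test feed: 4.99 +/- 0.58 kg
# The two plus-or-minus ranges overlap, so the headline writer
# should calm down before declaring the feed a miracle or a menace.
```

A paper that reports only the two averages, with no plus-or-minus attached to either, is hiding exactly the thing you need in order to judge it.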

It's up to the reader and consumer to determine for himself the validity of the reports.  If no one else has tried to repeat the research, it's hard to say how valid it might or might not be.  If you can, before you leap, look for studies that have been replicated ad nauseam.  The more times the same results are found, the more likely they're real.

Be a knowledgeable consumer.  You are your only hope.
