Monday, May 17, 2010

Poll Dancing

by Distributorcap

You cannot watch any "news" program on cable without having some poll quoted at least 36 times in a span of 9 seconds. And if you watch the next day, a new set of polls is quoted, and the ones from the previous day are treated as if they never existed.

Call it Journalism by Numbers. Call it Journalism by misguided and misleading information. Or just call it a pole dance to tease and titillate.

This was written just before the 2008 election, but it is as apropos now as it was then. Call it distributorcap by numbers. (It has been edited to make it more current.)

For the past few weeks, polls have been nothing short of an incredible roller coaster ride of results. Obama’s popular, Obama sucks. Republicans will kick ass in November, now the Democrats are coming back. People love the Arizona immigration law, people love Sarah Palin (well anyone who loves Sarah Palin is not a people and definitely not human). Volcanoes, Greece, the dog ate my homework, etc. You name it, there have been a million reasons given for such dramatic rises and falls in the polls.

But what makes the reporting of poll results so dangerous is the irresponsible, shoddy and pedestrian way the pundits and media use the results of polls to represent news. There is no doubt in my mind that imprudent use of polls – especially fashioning them into news as opposed to data – directly affects the tenor of the campaign.

A bit about polls

We have heard it a million times – polls are snapshots in time. Nothing more. They do not predict the future. They can help people make informed decisions about how they will act in the future – but they do not tell you what will happen.

Where the media falls apart in reporting polling data is that they almost always ONLY give the results. They rarely (if ever) tell you what is behind those results. The data, which ultimately gets wrapped up in the reporter's own opinion of the results, becomes the story. There is no context, there is no explanation, and there is very little history.

The first thing a consumer of this “polling data” needs to know is whether the poll is scientific or unscientific. If you are looking for real national/state trends on any issue (politics, consumer preferences, issues etc.), you absolutely must look at scientific (or random sample) polls. If a poll is stated as unscientific (which in a nutshell means the people in the poll volunteer to be interviewed), the data may be good for a read and a laugh – but it is most likely garbage for results and decision making.

The media tends to report a lot of data from unscientific polls, and then mention only as an aside, or at the tail end of the story, that the poll was unscientific.
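The difference is easy to see with a toy simulation (every number here is invented for illustration): give a population a true 50/50 split, then compare a random sample against an opt-in "poll" where one side is three times as likely to volunteer.

```python
import random

random.seed(1)

# Hypothetical population: exactly 50% support the candidate.
population = [1] * 50_000 + [0] * 50_000
random.shuffle(population)

# Scientific poll: a simple random sample of 1,000 people.
random_sample = random.sample(population, 1000)
print(f"random sample: {100 * sum(random_sample) / len(random_sample):.1f}% support")

# Unscientific poll: respondents volunteer themselves, and supporters
# are three times as likely to bother calling in (made-up rates).
volunteers = [p for p in population if random.random() < (0.30 if p else 0.10)]
print(f"opt-in 'poll': {100 * sum(volunteers) / len(volunteers):.1f}% support")
```

The random sample lands near the true 50%; the opt-in "poll" reports roughly 75% support, even though nothing about the population changed. Self-selection, not opinion, produced the headline number.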

Scientific polls

Polls should be based on representative random samples using statistical analysis. Anything else is like throwing darts, using a Ouija board or turning over a toy 8-ball. Just because a poll is scientific doesn’t mean it is a good poll. There are so many variables and rules that go into valid polling – no media outlet would ever have the time, nor the basic understanding, to explain it to their viewers. Besides, it would blow all the emotional impact out of their reporting if they had to explain exactly how the data was obtained.

Let’s stick to political polling – since this area tends to be the most abused for misleading use of poll data. I would imagine current political reporting of poll data gives real pollsters and mathematicians the heebie jeebies.

First of all, national polls regarding the Presidential election are MEANINGLESS (just ask Samuel Tilden and Al Gore). It is the state polls that matter – and they are, dreadfully, even more unreliable. But let's just stick to polls in general.

Some factors that are almost never discussed when reporting the “polls” are (in no particular order):

* Is the sample balanced to “fixed” characteristics? The sample should reflect characteristics of the population being surveyed that actually affect voting – primarily (but not limited to) age, gender, income, race, religion, education level and geography. If a national poll has too many people over 50, not enough low income, or too few Hispanics in it – is it truly reflecting the population at large? Probably not. And if the sample falls short or has too many of one characteristic – how does it compensate for that? You never ever hear a pundit say what the sample is comprised of or if it is balanced.

* Registered versus likely voters? A real tough one. What is a “likely voter?” Every polling organization has a different definition. Likely voters are more a function of historical trends and polling techniques than of reality. And just because someone is likely today doesn’t mean they are likely on November 4th. I tend to think (as do many polling organizations) that polls using registered voters are more reflective of reality than likely-voter polls. But likely voters play so much better as a sound bite.

* Then there is party identification – Democrat, Independent or Republican. This is not a fixed characteristic, but an attitude. And attitudes can and do change over time – shifting with the political winds. While the fixed characteristics (like age etc.) tend to remain relatively stable over the short term (like a campaign), party affiliation is likely to be much more ephemeral over the short term – for a variety of reasons. It is constantly changing. And during a survey, this question in particular is notorious for deceitful (and downright dishonest) answers.

* How random is the sample? Randomness slowly leaks away as people refuse to answer, are not home to answer, or don’t have land-line phones. Do you ever hear a pundit talk about how the pollster ensures randomness and what steps they take to minimize the impact? Of course not. Due to changing lifestyles and technologies, it is becoming increasingly difficult to get a true (or close to true) random sample using telephones. Also, polling organizations, in an effort to save money and time, might start excluding certain exchanges that are heavily business oriented – even if they are not exclusively businesses. This of course affects randomness.
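When a sample misses those fixed-characteristic targets, pollsters typically compensate by weighting: each respondent counts as (population share / sample share) of a person, so the sample's demographics are forced back in line with the population's. A toy post-stratification sketch, with every number invented for illustration:

```python
# Hypothetical population shares by age group (invented numbers).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Raw sample of 1,000 (group, supports-candidate) pairs: it badly
# under-represents the young and over-represents the old.
sample = (
    [("18-34", 1)] * 80 + [("18-34", 0)] * 70      # 150 respondents
    + [("35-54", 1)] * 150 + [("35-54", 0)] * 200  # 350 respondents
    + [("55+", 1)] * 175 + [("55+", 0)] * 325      # 500 respondents
)

n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in population_share}
# Weight = how much each respondent in a group should really count.
weight = {g: population_share[g] / sample_share[g] for g in population_share}

raw = 100 * sum(v for _, v in sample) / n
weighted = (100 * sum(weight[g] * v for g, v in sample)
            / sum(weight[g] for g, _ in sample))
print(f"raw support:      {raw:.1f}%")       # 40.5%
print(f"weighted support: {weighted:.1f}%")  # about 43.2%
```

Note what the pundit never tells you: the headline number moved by almost three points purely from a bookkeeping decision about weights, before a single opinion changed.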

Some other factors which are all critical in ensuring good data:

1. Are the polls and/or pundits using data from subsamples of a poll? This can terribly misrepresent the results. If a poll’s target was national registered voters and they start quoting “in this poll, women over 50” or “people with a college education” – be very wary. The poll was NOT designed for that; subsample data is very suspect.
2. How are the questions asked? (the wording matters)
3. What order are the questions asked? – Asking for party identification at the beginning is very different than asking at the end.
4. Are they weighting the sample to normalize? (to bring characteristics into line) This can have dramatic effects on the outcome.
5. Is the interviewer injecting bias? (not something that can be easily measured or observed, but if the poll taker just hates Hillary, it will affect the way he asks questions)
6. Is this an assembly-line poll, which often lacks the rigorous standards to make the data viable? (The daily Gallup tracking poll tends to come to mind.) Automated polls, with a machine calling and asking the questions, are VERY suspect.
7. Who paid for or commissioned the poll? (Partisan polls will obviously tend to overstate support for their candidate. That is why polls commissioned by RJ Reynolds on smoking are useless.)
8. What size sample are they using? (Larger samples will tend to have smaller margins of error).
9. Do they offer disclosure of their methodology, questions and results?
10. What is the media outlet/pundit trying to do with the poll data? How are they reporting the data? The reporting of poll data should be used to inform, not sway. But the fervor attached to this kind of reporting is so blatant and so obvious that it cannot help but be influential in unintended (or maybe intended) ways. You can bet that when Fox News talks polls it is trying to demoralize Democrats and energize Republicans.
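On points 1 and 8 above, the textbook 95% margin of error shows why larger samples are tighter and why a subsample (say, women over 50 pulled out of a 1,000-person poll) is so shaky. A back-of-the-envelope sketch, assuming a simple random sample, which real polls only approximate:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Textbook 95% margin of error for a simple random sample of size n.

    p=0.5 is the worst case; z=1.96 is the 95% confidence multiplier.
    Real polls also carry design effects this formula ignores.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Full sample vs. progressively smaller subsamples: the margin balloons.
for n in (1000, 400, 150):
    print(f"n={n:5d}: +/- {100 * margin_of_error(n):.1f} points")
# prints roughly 3.1, 4.9 and 8.0 points
```

So a 150-person subsample quoted from a "3-point margin" poll actually carries roughly an 8-point margin – a fact that almost never makes it on air.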

Conclusion

Pundits have become addicted to polls. Their entire schtick is often based on the polls of the day. Each absolute movement has become the story, rather than the trend of the movement. News organizations will also talk about “poll averages.” Poll averages are garbage. Like a drug, they give an immediate high – but in reality they have little value. Averaging polls with different methodologies, across different time periods, with different samples, different questions, different ways of asking the questions and different assumptions produces meaningless and misleading results. The reporting of polling averages is reckless at best, dangerous at worst.

It is important to remember that no matter how good the poll, no matter how wide the margin, a poll in September will in no way show that one candidate has locked up the election. Things change – and they change often and dramatically in politics. Back in 1980, Ronald Reagan was the risky choice against an unpopular incumbent. Reagan's move in the polls came very late, but it was decisive. In 1988 Michael Dukakis led by 18 points in the summer. By November the lead had vanished – and election day wasn't even close for Dukakis.

You have to be careful about what you read, hear and watch. Are news outlets and pundits cherry-picking the polls that sell their narrative and pump up the horse-race aspect? The real picture requires looking at all the polls, not just the ones you see from the people you watch.

Some places to look at political polls are:
Pollster.com
Electoral-vote.com
Electionprojection.com
Fivethirtyeight.com
Realclearpolitics.com

The National Council on Public Polls has the 20 questions any journalist should ask about polls; it was the basis for much of this post.



2 Comments:

  • USA Today started this back in the 80s. I remember living in San Fran in 85 when a terror attack on US military happened. The Chronicle headlined that State dept suspected Libya. The Examiner headline said US suspected Gaddafi. USA Today headline was something like "87% of USA suspects Libya"

    By Anonymous joe in oklahoma, at 10:38 AM  

  • And at least 90% don't seem to have any concept of what constitutes a scientific study and are too happy to buy into any fraud that confirms their pet belief.

    By Blogger Capt. Fogg, at 11:07 AM  
