Rasmussen Reports

The end of the Tea Party?

There is something hugely ironic in the fact that it is Rasmussen that delivers the bad news about the demise of the Tea Party. The very polling company that Tea Party fanatics said was the most accurate, the one true indicator of actual voting in the election, has some very bad news for them…and it must be right…it is Rasmussen:

Views of the Tea Party movement are at their lowest point ever, with voters for the first time evenly divided when asked to match the views of the average Tea Party member against those of the average member of Congress. Only eight percent (8%) now say they are members of the Tea Party, down from a high of 24% in April 2010 just after passage of the national health care law.

Nate Silver rates the polls

Nate Silver has rated the various polling companies, many of which are included in his algorithms. When you look at their results you can see why Romney conservatives were so far off: they were relying on Gallup and Rasmussen, which were so dreadfully wonky it is lucky they have any credibility at all.

There were roughly two dozen polling firms that issued at least five surveys in the final three weeks of the campaign, counting both state and national polls. (Multiple instances of a tracking poll are counted as separate surveys in my analysis, and only likely voter polls are used.)

For each of these polling firms, I have calculated the average error and the average statistical bias in the margin it reported between President Obama and Mitt Romney, as compared against the actual results nationally or in that state.

For instance, a polling firm that had Mr. Obama ahead by two points in Colorado — a state that Mr. Obama actually won by about five points — would have had a three-point error for that state. It also would have had a three-point statistical bias toward Republicans there.

The bias calculation measures in which direction, Republican or Democratic, a firm’s polls tended to miss. If a firm’s polls overestimated Mr. Obama’s performance in some states, and Mr. Romney’s in others, it could have little overall statistical bias, since the misses came in different directions. In contrast, the estimate of the average error in the firm’s polls measures how far off the firm’s polls were in either direction, on average.
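Silver's distinction between error and bias can be sketched in a few lines of Python. The Colorado numbers match his example in the text; the Ohio and Florida figures are invented purely for illustration.

```python
# Average error vs. average statistical bias, per Silver's definitions.
# Margins are (Obama minus Romney) in percentage points; positive = Obama ahead.
# Colorado matches the example in the text; the other rows are hypothetical.
polls = [
    # (state, polled margin, actual margin)
    ("Colorado", 2.0, 5.0),   # polled Obama +2, actual +5: 3-point error, 3-point Republican bias
    ("Ohio",     4.0, 3.0),   # hypothetical: overestimated Obama by 1 point
    ("Florida", -1.0, 0.9),   # hypothetical
]

# Error is the absolute miss; bias keeps the sign, so misses in opposite
# directions cancel out of the bias but not out of the error.
errors = [abs(polled - actual) for _, polled, actual in polls]
biases = [polled - actual for _, polled, actual in polls]  # negative = miss toward Republicans

avg_error = sum(errors) / len(errors)
avg_bias = sum(biases) / len(biases)

print(f"average error: {avg_error:.2f} points")
print(f"average bias:  {avg_bias:+.2f} points "
      f"({'Democratic' if avg_bias > 0 else 'Republican'} lean)")
```

Note how a firm could show a large average error but near-zero average bias if its misses fell evenly on both sides, which is exactly the point Silver is making.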

 

Obama wins

Obama has won the US Presidential election. The results are pretty much in line with what Nate Silver predicted (chart below). In the end it was not neck and neck, and people really need to understand that it is the Electoral College that matters, not Rasmussen polls.

Leighton Smith graciously rang to concede our bet just after 6pm. We have agreed on the location for the lunch, which we will both enjoy. We are yet to agree on a date due to overlapping commitments, but it will be in the next two weeks.

For those rabid frothing weirdos who dissed me for my assessment of the election, you can use the comments to eat humble pie.

In the meantime I have found a video of the 100 best maniacal laughs to entertain you.

Is it a tossup?

Not according to Nate Silver, and here is the math behind that:

There were 22 polls of swing states published Friday. Of these, Mr. Obama led in 19 polls, and two showed a tie. Mitt Romney led in just one of the surveys, a Mason-Dixon poll of Florida.

Although the fact that Mr. Obama held the lead in so many polls is partly coincidental — there weren’t any polls of North Carolina on Friday, for instance, which is Mr. Romney’s strongest battleground state — they nevertheless represent powerful evidence against the idea that the race is a “tossup.” A tossup race isn’t likely to produce 19 leads for one candidate and one for the other — any more than a fair coin is likely to come up heads 19 times and tails just once in 20 tosses. (The probability of a fair coin doing so is about 1 chance in 50,000.)

Instead, Mr. Romney will have to hope that the coin isn’t fair, and instead has been weighted to Mr. Obama’s advantage. In other words, he’ll have to hope that the polls have been biased in Mr. Obama’s favor.

1 in 50,000 is not good odds. Nate then discusses possible polling errors caused by statistical sampling and the timing of snapshot polls, and explains how his model and the polling companies account for those errors. Then he discusses the biggest allegation about polling, that of bias:
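Silver's coin-flip figure is easy to verify: the chance that a fair coin comes up heads on exactly 19 of 20 tosses works out to roughly 1 in 52,000, which he rounds to "about 1 chance in 50,000".

```python
# Probability that a fair coin lands heads on exactly 19 of 20 tosses.
from math import comb

# 20 ways to place the single tail, out of 2^20 equally likely sequences.
p = comb(20, 19) / 2**20

print(f"probability: {p:.2e}  (about 1 in {round(1 / p):,})")
```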

This introduces the possibility that most of the pollsters could err on one or another side — whether in Mr. Obama’s direction, or Mr. Romney’s. In a statistical sense, we would call this bias: that the polls are not taking an accurate sample of the voter population. If there is such a bias, furthermore, it is likely to be correlated across different states, especially if they are demographically similar. If either of the candidates beats his polls in Wisconsin, he is also likely to do so in Minnesota.

The FiveThirtyEight forecast accounts for this possibility. Its estimates of the uncertainty in the race are based on how accurate the polls have been under real-world conditions since 1968, and not the idealized assumption that random sampling error alone accounts for the entire reason for doubt.

To be exceptionally clear: I do not mean to imply that the polls are biased in Mr. Obama’s favor. But there is the chance that they could be biased in either direction. If they are biased in Mr. Obama’s favor, then Mr. Romney could still win; the race is close enough. If they are biased in Mr. Romney’s favor, then Mr. Obama will win by a wider-than-expected margin, but since Mr. Obama is the favorite anyway, this will not change who sleeps in the White House on Jan. 20.

My argument, rather, is this: we’ve about reached the point where if Mr. Romney wins, it can only be because the polls have been biased against him. Almost all of the chance that Mr. Romney has in the FiveThirtyEight forecast, about 16 percent to win the Electoral College, reflects this possibility.

Yes, of course: most of the arguments that the polls are necessarily biased against Mr. Romney reflect little more than wishful thinking.

So what about the claim that it is a toss-up:

Nevertheless, these arguments are potentially more intellectually coherent than the ones that propose that the race is “too close to call.” It isn’t. If the state polls are right, then Mr. Obama will win the Electoral College. If you can’t acknowledge that after a day when Mr. Obama leads 19 out of 20 swing-state polls, then you should abandon the pretense that your goal is to inform rather than entertain the public.

But the state polls may not be right. They could be biased. Based on the historical reliability of polls, we put the chance that they will be biased enough to elect Mr. Romney at 16 percent.

 

Debunking Oversampling

Rabid Republican supporters have constantly attacked me for not following their particular party line in predicting a Romney win. I even have a bet with Leighton Smith that Obama will win, not that I want him to win but simply because the math, the polls and the facts do not support any other contention.

In effect it is me with the safe bet. Nonetheless I get constant emails pointing out the error of my ways, constant comments doing the same, and people pointing me to discredited loons who have come up with a conspiracy theory that all polling companies, with the exception of Rasmussen, are colluding with their liberal media paymasters to keep Obama president.

Me? I prefer facts. Nate Silver looks at the current conspiracy of the day…oversampling:

In 2004, Democratic Web sites were convinced that the polls were biased toward George W. Bush, asserting that they showed an implausible gain in the number of voters identifying as Republicans. But in fact, the polls were very near the actual result. Mr. Bush defeated John Kerry by 2.5 percentage points, close to (in fact just slightly better than) the 1- or 2-point lead that he had on average in the final polls. Exit polls that year found an equal number of voters describing themselves as Democrats and Republicans, also close to what the polls had predicted.

Since President Obama gained ground in the polls after the Democrats’ convention, it has been the Republicans’ turn to make the same accusations. Some have said that the polls are “oversampling” Democrats and producing results that are biased in Mr. Obama’s favor. One Web site, unskewedpolls.com, contends that even Fox News is part of the racket in what it says is a “trend of skewed polls that oversample Democratic voters to produce results favorable for the president.”

People forget that these conspiracy theories follow the election cycles. They also forget that they are wrong.

The criticisms are largely unsound, especially when couched in terms like “oversampling,” which implies that pollsters are deliberately rigging their samples.

But pollsters, at least if they are following the industry’s standard guidelines, do not choose how many Democrats, Republicans or independent voters to put into their samples — any more than they choose the number of voters for Mr. Obama or Mitt Romney. Instead, this is determined by the responses of the voters that they reach after calling random numbers from telephone directories or registered voter lists.

Pollsters will re-weight their numbers if the demographics of their sample diverge from Census Bureau data. For instance, it is typically more challenging to get younger voters on the phone, so most pollsters weight their samples by age to remedy this problem.

So what about the charge of partisan bias:

If the focus on “oversampling” and party identification is misplaced, however, FiveThirtyEight does encourage a healthy skepticism toward polling. Polling is difficult, after all, in an era in which even the best pollsters struggle to get 10 percent of households to return their calls — and then have to hope that the people who do answer the surveys are representative of those who do not.

So perhaps we should ask a more fundamental question: Do the polls have a history of being biased toward one party or the other?

The polls have no such history of partisan bias, at least not on a consistent basis. There have been years, like 1980 and 1994, when the polls did underestimate the standing of Republicans. But there have been others, like 2000 and 2006, when they underestimated the standing of Democrats.

What does Nate Silver do that is different?

We have an extensive database of thousands of polls of presidential and United States Senate elections. For the presidency, I will be using all polls since 1972, which is the point at which state-by-state surveys became more common and our database coverage becomes more comprehensive. For the Senate, I will be using all polls since 1990.

That is a pretty impressive amount of data, and one of the reasons why Nate Silver is the most accurate political statistician and why I follow his predictions. What does the data actually say, rather than the partisan hackery:

In the 10 presidential elections since 1972, there have been five years (1976, 1980, 1992, 1996 and 2004) in which the national presidential polls overestimated the standing of the Democratic candidate. However, there were also four years (1972, 1984, 1988 and 2000) in which they overestimated the standing of the Republican. Finally, there was 2008, when the average of likely voter polls showed Mr. Obama winning by 7.3 percentage points, his exact margin of victory over John McCain, to the decimal place.

In all but three years, the partisan bias in the polls was small, with the polling average coming within 1.5 percentage points of the actual result. (I use the term “bias” in a statistical sense, meaning simply that the results tended to miss toward one direction.)

Nate Silver also looked at state polls, which showed similar patterns. His overall conclusion is:

On the whole, it is reasonably impressive how unbiased the polls have been. In both presidential and Senate races, the bias has been less than a full percentage point over the long run, and it has run in opposite directions.

That does not mean the pollsters will necessarily get this particular election right. Years like 1980 suggest that there are sometimes errors in the polls that are much larger than can be explained through sampling error alone. The probability estimates you see attached to the FiveThirtyEight forecasts are based on how the polls have performed historically in practice, and not how well they claim to do in theory.

But if there is such an error, the historical evidence suggests that it is about equally likely to run in either direction.

Nor is there any suggestion that polls have become more biased toward Democratic candidates over time. Out of the past seven election cycles, the polls had a very slight Republican bias in 2010, and a more noticeable Republican bias in 1998, 2000 and 2006.

They had a Democratic bias only in 2004, and it was very modest.

Still, 2004 went to show that accusations of skewed polling are often rooted in wishful thinking.

I can almost taste my lunch now.