Given that exit polling has historically overestimated the Democratic vote, and given how much the final regular polling in the 1980 race understated Ronald Reagan’s support against Jimmy Carter, it is worth looking at what the final poll results said in other presidential election years.
The record shows a nearly uniform tilt in the pro-Democratic direction. Historically speaking, pollsters have underestimated how many people would vote for the Republican presidential candidate:
Writing at National Review, reporter Jim Geraghty quotes an anonymous pollster who provides a helpful review of past polling data:
In 1996, some reputable pollsters had Clinton winning by 18 percentage points late, and Pew had Clinton up by 19 in November; on Election Day, he won by 8.5 percentage points… In 2004, pollsters were spread out, but most underestimated Bush’s margin. (2000 may have been a unique set of circumstances with the last-minute DUI revelation dropping Bush’s performance lower than his standing in the final polls; alternatively, some may argue that the Osama bin Laden tape the Friday before the election in 2004 altered the dynamic in those final days.) In 2008, Marist had Obama up 9, as did CBS/New York Times and Washington Post/ABC News, while Reuters and Gallup both had Obama up 11.
Now, if this were just random error, you would see pollsters missing in both directions at roughly the same rate and by roughly the same margin, overestimating the Democrats in some years and the Republicans in others. But the problem appears to be systematic: the polls underestimate the GOP sometimes by a little, sometimes by a lot.
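To make the point concrete, here is a quick illustrative simulation (the numbers are invented, not real polls): purely random polling error should miss in both directions about equally often, so a long one-sided run of pro-Democratic misses is evidence of systematic bias rather than chance.

```python
# Simulate 1,000 unbiased polling errors (mean 0, s.d. 3 points).
# With no systematic bias, roughly half should favor each party.
import random

random.seed(0)
errors = [random.gauss(0, 3) for _ in range(1000)]
pro_dem = sum(1 for e in errors if e > 0)

print(f"{pro_dem} of 1000 simulated errors favored one side")
```

If real-world misses clustered on one side far more lopsidedly than this, chance alone would be an implausible explanation.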
In 2004, the final telephone surveys mostly favored George W. Bush over John Kerry, but the exit polls clearly did not. As usual, they overstated the Democratic vote (see our earlier report on the reasons for this), which led many Democrats to expect that Kerry would win the popular vote and the presidency. When that did not happen, it triggered a widespread belief among hardcore Democrats that Republicans had somehow managed to “steal” the election in several states, particularly Ohio.
Of course, such Democrats never bothered to note that the organization that produced the exit polls, the National Election Pool, believed its own surveys were tilted toward Democrats. Here is what the NEP told its subscriber news organizations after the 2004 election:
“Our investigation of the differences between the exit poll estimates and the actual vote count point to one primary reason: in a number of precincts a higher than average Within Precinct Error most likely due to Kerry voters participating in the exit polls at a higher rate than Bush voters. There has been partisan overstatements in previous elections, more often overstating the Democrat, but occasionally overstating the Republican.”
Having said all that, it is important to keep in mind that if you see a poll predicting a massive vote increase for President Obama over his 2008 totals in a state or nationwide, the fault probably does not lie with an oversampling of Democrats. More likely, the culprit is the assumptions the pollsters make about who constitutes a “likely voter,” something they determine not via party ID but via characteristics like race, age, and income. A polling company’s likely voter (LV) model is critical because the electorate in any jurisdiction is always a non-random subset of the larger population. Some groups are more likely to vote than others, but sifting out who actually will vote from those who merely say they will is a difficult, non-scientific task. People with a background in statistical survey methods but little experience in politics often miss this point.
I touched on this issue earlier this week, but a more thorough explanation can be found in a blog post from the polling company Voter Survey Service (ht: James Taranto) defending its recent Pennsylvania poll, which showed a much closer race than one commissioned for the Philadelphia Inquirer newspaper. In the post, VSS gives a look inside its model for determining whom it should talk to, something pollsters rarely do:
Our vote model for gauging the number of interviews conducted with voters of different demographic groups (things like party affiliation, racial background and age range, etc.) is a blend of turnout models from both the 2008 and 2004 presidential elections, but leans more towards 2004 VTO and is predicated on the belief that turnout this November will not be anywhere near ’08 levels when 5.9 million votes were cast.
First, our ratio of interviews conducted with Republicans and Democrats in our recent polls (49D – 43R) gives Democrats a 6-point advantage based on the fact that Democrats outnumber Republicans in actual registration. However, this ratio is slightly more Republican based on both national and state polling showing that Republicans are more likely to vote than Democrats this year given high intensity among Republicans who strongly disapprove of the President’s job performance. Nonetheless, this +6 Democratic advantage is only one point less Democrat than the 7-point advantage these same exit polls gave Democrats in the 2008 presidential election. Besides, simply conducting more surveys with Democratic voters (as some have suggested) doesn’t necessarily translate into more votes for President Obama when you consider that Mitt Romney is winning Democratic-leaning counties in Western Pennsylvania by ten or more percentage points. Nonetheless, it is entirely appropriate to sample Republicans one or two points higher than in 2008 if you believe as we do that voter turnout this November will have little resemblance to the last presidential election.
Second, our ratio of younger to older voters reflects turnout that is likely to be slightly higher with older voters given the lack of enthusiasm from younger voters. In our surveys, 18-44 yr. olds make up 30% of all interviews and voters 45 years of age and older represent the remaining seventy percent. For instance, according to 2008 exit polls voter turnout among 18-29 year olds peaked at 18%, but national and state polling proves interest among younger voters down sharply this year due to higher unemployment with younger voters and college graduates in particular. So conducting approximately ten percent of surveys with 18-29 year olds is a reflection of this lower anticipated turnout among these less-enthusiastic voters. Besides, the fact that Obama backers have suggested that over sampling older voters skews results in favor of Mitt Romney is a striking revelation in a state like Pennsylvania known for having the 5th largest population of senior citizens in the country.
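The blended-turnout approach VSS describes can be sketched in a few lines of code. This is a toy illustration, not VSS's actual methodology, and every number below is an invented placeholder, not real exit-poll data: the idea is simply to average each demographic group's share of the electorate across two past elections, leaning toward the year the pollster believes this November will more closely resemble.

```python
# Toy sketch of blending two past turnout models to set demographic
# interview quotas. Weights and shares are illustrative assumptions.

def blend_turnout(model_a, model_b, weight_a=0.6):
    """Weighted average of two turnout models; leans toward model_a."""
    return {group: weight_a * model_a[group] + (1 - weight_a) * model_b[group]
            for group in model_a}

# Hypothetical shares of the electorate by age group in each year.
turnout_2004 = {"18-29": 0.15, "30-44": 0.25, "45-64": 0.38, "65+": 0.22}
turnout_2008 = {"18-29": 0.18, "30-44": 0.24, "45-64": 0.37, "65+": 0.21}

# Lean 60/40 toward 2004, on the assumption that 2012 turnout will
# look more like 2004 than the unusually high 2008 levels.
quotas = blend_turnout(turnout_2004, turnout_2008)
for group, share in quotas.items():
    print(f"{group}: {share:.1%} of interviews")
```

The key design choice is the blend weight: a pollster expecting 2008-style enthusiasm would push it the other way, which is exactly the judgment call that makes LV models differ so much from firm to firm.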
Ultimately, however, regardless of whether an organization decides to share details about its LV model, the best way to get a “sanity check” on any poll is to compare it against recent actual election results. This is the advice of Gallup’s Frank Newport, who wrote a column yesterday obliquely criticizing his competitors for releasing polls showing ridiculous Obama leads in various states:
Basically, if an observer is concerned about a poll’s results, that observer should skip over the party identification question and just look at the ballot directly. In other words, cut to the chase. Don’t bother with party identification sample numbers. Look directly at the ballot.
For example, we know that in Ohio:
- Obama won by 5 points in 2008
- Bush won by 2 points in 2004
- Bush won by 3 points in 2000
Now if a given poll in Ohio in this election shows Obama with a 10-percentage-point lead, one should just ask, “How likely is it that Obama would be ahead by 10 points if he won by five points in 2008?” -- forgetting party identification, which we assume is going to be higher for the Democratic Party if Obama is ahead, anyway. The discussion of the ballot in the context of previous ballots is, in fact, a reasonable discussion. It may be unlikely that Obama will double his margin in 2012 from what occurred in Ohio in 2008. Or maybe not. But the focus should be directly on the ballot, and discussions of reasons why it might be different than one expects should not involve an attempt to explain the results by focusing on changes in party identification -- which is basically a tautological argument.
And we know that in Florida:

- Obama won by 3 points in 2008
- Bush won by 5 points in 2004
- Bush won by [much] less than one point in 2000.
So, if one sees a poll saying that Obama is leading Romney by nine points in Florida, then one should ask how likely it is that Obama will exceed his 2008 margin by six points. That is a reasonable discussion. But one need not attempt to say that the nine-point lead in the poll is suspect because there were too many Democrats and not enough Republicans in the sample compared to 2008. The finding of differences in party identification is, instead, simply reflecting what one sees on the ballot.
Essentially, it is much more direct to just focus on the trends and comparisons of the ballot question than it is to introduce an extraneous look at trends in party identification.
I’ve been analyzing election surveys at Gallup since the 1992 presidential election, and I don’t personally put a great deal of stock in survey-to-survey variations in party identification. All of our weighting focus is on the effort to bring more solid demographic variables into alignment with census figures -- including in recent years cell phone and landline phone use. We don't find that party identification is stable enough to be of much use when it comes to comparing sample-to-sample variations, or sample to exit poll differences.
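Newport's sanity check is just arithmetic, and it can be sketched as such. The margins below are the (rounded) actual results he cites, with positive numbers meaning the Democratic candidate won; the poll numbers are his hypothetical examples, not real surveys.

```python
# Back-of-the-envelope "sanity check": how big a swing does a poll's
# margin imply relative to the state's last actual result?
# Positive margin = Democratic candidate ahead.

past_margins_ohio = {2000: -3, 2004: -2, 2008: 5}      # Bush +3, Bush +2, Obama +5
past_margins_florida = {2000: -0.01, 2004: -5, 2008: 3}  # 2000: Bush by 537 votes

def implied_swing(poll_margin, state_margins, baseline_year=2008):
    """How far the poll sits from the state's most recent actual margin."""
    return poll_margin - state_margins[baseline_year]

# A hypothetical Obama +10 poll in Ohio implies a 5-point swing beyond
# his 2008 margin; Obama +9 in Florida implies a 6-point swing.
print(implied_swing(10, past_margins_ohio))    # 5
print(implied_swing(9, past_margins_florida))  # 6
```

Whether a 5- or 6-point additional swing is plausible is the question Newport says to ask directly, instead of arguing about party ID in the sample.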
One final thing to keep in mind: while polls have a tendency to oversample Democrats relative to Republicans, that does not mean pollsters are deliberately trying to demoralize conservative voters. Certain pollsters are partisan, but the vast majority of polling companies solicit business from both sides of the political spectrum; as such, it is not in their interest to consistently produce results that would make Republicans unlikely to purchase their services.
In this regard, polling companies are quite different from media companies. While the latter style themselves as “mainstream,” they are overwhelmingly dominated by liberal Democrats, many of whom have no problem gleefully touting inaccurate poll results that favor their own political affiliation. Deliberately skewing the news certainly can shrink an audience, but because media audiences fluctuate for a variety of other reasons, media companies, unlike polling firms, cannot see the effects of partisanship in their own revenues.