November 3, 1948. “Dewey Defeats Truman!” reads the bold front page headline of the Chicago Daily Tribune. But, of course, he had not. The polls had gotten it wrong. And if you keep up with the media, you might think polls continue getting it wrong. But do they? Are polls scientific? Are polls reliable?
Guests
- Darren Shaw, Professor of Government at the University of Texas at Austin
- Robert Luskin, Associate Professor of Government at the University of Texas at Austin
- Christopher Wlezien, Professor of Government at the University of Texas at Austin
Hosts
- Stuart Tendler, Former Administrative Assistant at the University of Texas at Austin
[0:00:01 Speaker 0] November 3, 1948. "Dewey Defeats Truman," reads the bold front-page headline of the Chicago Daily Tribune. But, of course, he had not. The polls had gotten it wrong. And if you keep up with the media, you might think polls continue getting it wrong. But do they? Are polls scientific? Are polls reliable? Welcome back to The Connector, where we bring together innovative, groundbreaking, and collaborative research inside the UT Austin political science universe. I'm your host, Stuart Tendler. In this episode, we talk with three political scientists who know quite a lot about polling. We asked Robert Luskin, Darren Shaw, and Chris Wlezien whether complaints about polling methods are valid. Bucking popular perception, our guests suggest that such complaints are misleading, often missing the point or ignoring inconvenient truths. In fact, in important ways, polls are becoming more accurate. But why do the media pay so much attention to polls? Is reporting on polls the equivalent of a news-media junk-food binge, or a serious component feeding their storytelling needs? We close this episode with an in-depth statement about the margin of error in polling and an explanation as to why the media often misread this statistic. Why don't we just start with everybody briefly introducing themselves, letting us know who's here?
[0:01:31 Speaker 3] Bob Luskin. I'm a professor here at UT and have co-written an article about pre-election media polls with Darren and Marc Hetherington of Vanderbilt. I'm also advising the French National Election Study.
[0:01:48 Speaker 2] I'm Darren Shaw. I'm a professor here at the University of Texas at Austin, co-director of the Fox News Poll along with Chris Anderson of Anderson Robbins Research, one of the co-PIs, principal investigators, for the University of Texas/Texas Tribune Poll, a regular statewide poll of Texas, and a member of the Fox News decision team.
[0:02:10 Speaker 1] I'm Chris Wlezien, also a professor of government here at the University of Texas at Austin. I do work on polling of all sorts, really, from vote intentions over the course of campaigns to public preferences for policy and the relationship with what policymakers actually do.
[0:02:28 Speaker 0] Great. Well, thanks to each of you for being here. I figured, since it's a presidential election year, it would make sense to talk to three people who know a lot more than most people about polls. I wanted to start off with this article by Jill Lepore from the November 2015 issue of The New Yorker. The subtitle says polling may never have been less reliable, or more influential, than it is now. And if I fast-forward to the very end of the article, quoting Donald Trump, she writes: "Do we love these polls?" he called out to a crowd in Iowa. "Somebody said, 'You love polls.' I said, 'That's only because I've been winning every single one of them, right? Right? Every single poll.'" Two days later, when he lost his lead in Iowa to Ben Carson, he'd grown doubtful: "I honestly think those polls were wrong." By the week of the third GOP debate, he'd fallen behind in a national CBS/New York Times poll. "The thing with these polls, they're all so different," Trump said mournfully. "It's not very scientific." So that would be the first question I would pose: Are polls scientific? Are polls reliable?
[0:03:39 Speaker 2] Well, I mean, that's the question, isn't it? I'm amused by this story, just because my experience in polling has been that when someone doesn't like the result of a particular poll, they immediately become an expert on methodology. In fact, with deference to Bob and Chris, I'm thinking that my middle name ought to be changed to Darren "Flawed Methodology" Shaw. So I always take these complaints about methodology with a large grain of salt. Now, having said that, the tough thing when I hear those sorts of complaints and arguments and observations on polls is that there are a lot of issues in contemporary polling, important issues, both about the methodology and about the underlying populations that we're trying to survey and get insight into. But the ones that tend to be identified by candidates or advocates of particular policy positions tend not to be the ones that I think really are of interest to those of us who do this stuff for a living. And we could elaborate on this quite a bit, I'm sure. But questions about the advent of online polling, about the need for cellphone supplements to traditional polling, about likely-voter samples, about list-based samples, these are really important issues in the industry, and they do lead to problems that can bias polls and produce results that don't square very well with election results. But these things are treated in a not very detailed manner, for obvious reasons I think, when it comes to the public presentation of polls.
[0:05:09 Speaker 3] The context of that article, I take it, was the nomination campaign. And it is true that polls of candidate preferences in nomination campaigns are less reliable than corresponding polls in the general election, where party identification ties people down and makes their choices much more predictable. You don't have that, of course, to distinguish between the candidates in a nomination campaign, and that means that people's preferences are much more labile. There's a greater element of unpredictability.
[0:05:48 Speaker 1] Yeah. Reflecting on what both Darren and Bob have said: polling used to be really, really simple. Everybody used one single methodology and one pretty similar question. They would get their samples in a pretty similar way, and they would administer the survey in essentially the same way, face to face; then we all went to phone. And we've now gone from having one standard, or perhaps two, to having many, many different standards. That's not to say that these are not scientific polls. Maybe there are different sciences, and these sciences are being practiced to varying degrees. Which is all to say, and this is the lesson I try to emphasize with people, and I think Darren was reflecting on this, and Bob has in the past with me as well, that it's harder and harder for us to rely on the results of any single poll or the polls of particular survey houses. Which leads us to rely on aggregations, what we sometimes refer to as the "poll of polls." That way we're not beholden to the decisions of a particular organization or a particular type of poll. There is an assumption there that the errors in these polls, and there is always error in polling, kind of cancel out, which may not be the case; there may be a systematic bias.
[0:07:06 Speaker 2] There are some really interesting examples like that which get played up a lot. Sometimes they're right, sometimes they're not. So you'll hear a lot of people comment on the Scottish independence vote, the last elections in Great Britain, the Israeli elections, as well as some high-profile elections in the United States, the Kentucky race from last cycle, the Virginia race from 2014, where the conventional wisdom is that the polls were massively off. I think that's overstated. They actually weren't as far off as a lot of people think. The Virginia case may be an exception, the one where the Republican was significantly understated, but the polls actually weren't that far off in Britain, in the Scottish referendum, or in Israel. And there's also the question of aggregating: a question that basically asks what we call a generic ballot test, who you're going to vote for for Congress or the legislature, and translating that into seats, as you do in Great Britain or in Israel. So there are some of these questions of aggregation, but more fundamentally, they weren't that far off. And yet there's been this tale told about how inaccurate the polls are. Some of the work that Bob and I have done with Marc Hetherington has suggested a couple of things. One, even today, and we're still collecting data from the last couple of cycles, they're not that far off. I think the latest estimate we have, Bob, is a point and a half or something like that. And maybe even more importantly, for consumers out there, they're not consistently off with respect to direction. There's a slight tendency over time, and Bob, you may remember which way it leans.
[0:08:35 Speaker 3] There's more than one branch to this research. The one we've been focusing on has to do with the signed error, that is, taking into account not just how far the poll is off but which way it errs, toward the Democrat or the Republican. But another branch, which we're not actively working on at the moment but have looked at, has to do with the absolute magnitude of the error. And at least in our data, which do not include the last few cycles (we're working on that), there's been a very noticeable downward trend in the error, even the absolute error, in pre-election polls. At least up to the point where our data stop, polls have been getting more accurate, despite the handicaps that pollsters face. I wanted to pick up on something that Chris said, and Darren just addressed it as well, having to do with the treatment of polls in the media. I think one reason for the overstatement of the degree of error that both Darren and Chris pointed to is that the media do not generally focus on the average of polls, which, I agree with Chris, is what they should be doing. Instead, they take the polls one by one, and the latest poll, whatever it is, is gospel. That's the way things are, even if it's an outlier. And of course it's the most vivid examples, especially in retrospect, looking at the election results, that remain in the media's memory. They remember the cases that were far off, forgetting the ones that were close or off in the other direction. The average is still pretty close, but what they remember are the ones that were way off. In general, I could not agree more that what the media should be doing is not focusing on each poll as it comes out but reporting a running average. The polls done in an appropriate period, taken as an average, will be more accurate than any single poll.
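To make the running-average idea concrete, here is a minimal Python sketch of the kind of rolling poll average Bob describes; the poll numbers, window length, and function name are hypothetical illustrations, not anything the guests computed.

```python
# A minimal sketch of the "running average" Bob describes: average every
# poll fielded within a fixed window instead of reporting each poll alone.

def rolling_poll_average(polls, window_days, as_of_day):
    """Average a candidate's share across all polls fielded within
    `window_days` of `as_of_day`."""
    recent = [share for day, share in polls
              if 0 <= as_of_day - day <= window_days]
    return sum(recent) / len(recent) if recent else None

# (day fielded, candidate's share in %); the numbers are hypothetical.
polls = [(1, 47.0), (3, 51.0), (5, 48.5), (6, 49.5), (9, 48.0)]
print(rolling_poll_average(polls, window_days=7, as_of_day=9))  # 49.25
```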
[0:10:34 Speaker 2] Yeah, I think that's right. In fact, consider the aggregate statistics from 2012 and 2008, and I'm referring here in part to some of the work that Nate Silver has done on fivethirtyeight.com, where he predicts, basically, using these averages. There's some question about how he weights them. But even if you had just used, say, the RealClearPolitics average of polls, I believe you would have gotten 49 out of 50 states in one election and 50 out of 50 in the other, and that's with no complicated weighting or anything like that. So I agree with Chris and Bob: there are, of course, errors in these polls, but they're not that far off, and certainly, collectively, they give you a pretty instructive picture of what's going on.
[0:11:13 Speaker 1] But most of what we're talking about here, I think, pertains to general elections.
[0:11:17 Speaker 2] That's true. That's right.
[0:11:19 Speaker 1] Certainly.
[0:11:20 Speaker 3] Our research is strictly general election, right?
[0:11:22 Speaker 1] Right, and mine is mostly focused on that as well. It's a lot trickier when we're trying to predict primaries.
[0:11:30 Speaker 2] And then there's the question of using national poll results versus state-by-state results, and some of these states that become very important every four years in the presidential cycle tend to be under-polled. I don't know what a good sample of Vermont looks like, because it's never competitive at the senatorial level and it's never competitive at the presidential level. But a couple of weeks ago we were all very interested in the Vermont primary and whether Kasich had closed on Trump, and it turns out he had, although I don't know that any polls necessarily got that right. I'm fairly impressed by polls of Ohio or Florida, because there are so many institutions and organizations that poll those states, and you can kind of judge whether one's an outlier. But if, say, North Dakota ever becomes a battleground state again, I have very little confidence that we'll have the expertise or background, or that there will be the number of polls necessary, to get it right.
[0:12:19 Speaker 1] I think that difficulty also applies to the Republican caucuses.
[0:12:23 Speaker 3] That's right. Polls anticipating caucus votes are even more apt to be off than ones devoted to primaries.
[0:12:31 Speaker 2] Yeah. I mean, look, if you identify 400 Republicans from some registered voter list, and then you figure, okay, 15 percent of them are going to vote, how you get to that 15 percent is one issue. And then another is: if I've got 15 percent of 400, I don't have a very large sample from which to get a very accurate picture of what the distribution of the vote is going to be.
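As a rough illustration of the arithmetic Darren is gesturing at, here is a short Python sketch; the screening rate and list size come from his example, while the function itself is just the textbook margin-of-error formula, not anything from the guests' research.

```python
import math

# The arithmetic behind Darren's worry: screening 400 listed Republicans
# down to the ~15% likely caucus-goers leaves only 60 interviews, and the
# usual 95% margin of error for a share p from n respondents is about
# 1.96 * sqrt(p * (1 - p) / n).

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = int(400 * 0.15)                                # 60 likely caucus-goers
print(n, round(100 * margin_of_error(0.5, n), 1))  # 60, +/-12.7 points
```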
[0:12:56 Speaker 3] You know, there's a general point that, at least in my view, ought to be aired, since we're talking about polls and to some degree about the media's use of them. And that is to what extent, normatively, the media should be so obsessed with polling, or indeed should be reporting polls at all. In my view, this is the junk food of political discourse. It's free of nutritive value. What people should be talking about, and what the media should be letting the candidates convey to us, is their policy positions. That's what people should be voting on. As for focusing so much on polling, I understand why they do it. It's a way of avoiding any risk of bias: this is science, we can point to the numbers. And polling makes reporting easier. But it's not useful to voters, and it may be affirmatively un-useful to voters. It may lead people to vote for the candidate who looks as though he or she is going to win, and that's not the grounds on which people in a democracy should be voting.
[0:13:59 Speaker 1] Well, I completely agree, but we do know there's work showing that people love these polls. Give readers of the newspaper the opportunity to choose between the latest horse-race coverage and the next installment on some policy issue, and they tend to gravitate to the former.
[0:14:16 Speaker 2] But one thing you do realize when you see the other side of the picture, that is, from the media's point of view (and I underappreciated this earlier in my career, but I have a pretty decent appreciation for it now), is their need to tell stories. So you conduct a poll and you give the top-line results: you say Trump is winning by 10 points. And then you start telling stories about angry voters, about disaffected voters, about uneducated voters. And the more content there is out there, with the Web and with 24-hour cable, the more need there is to tell stories. I mean, when I started doing decision-team and exit-polling work, I thought the top-of-the-hour call was the big deal, you know, where we would say whether a candidate won a state or not. And it turned out, no: the top-of-the-hour call lasts about one minute, and then they've got another 59 minutes, minus the minutes for commercials, to talk about something. And so what the exit polls do, certainly on election night, and I think probably these other polls too, is allow you to not just say Trump's winning but to tell some sort of story. And as we've suggested, sometimes that story's not terribly correct in some sense, but sometimes it is. They need to tell a story.
[0:15:25 Speaker 3] Well, the story also influences the way they present the polling numbers. I remember noticing, I don't remember exactly which candidates were involved, but earlier in this present nomination campaign there was a poll on both Republicans and Democrats, and the reporting was: Candidate X is ahead of Candidate Y on the Republican side by nine points, exclamation point. And then there was a nine-point gap between Candidates Z and W on the Democratic side, and the way that was phrased was: well, Candidate Z is only ahead of Candidate W by nine points. It was the same numerical difference, but that's what fit their narrative.
[0:16:14 Speaker 1] Coming back to the point you made about the trial heats, the focus on the horse race and all that, and how that might not be the most important thing for them to be focusing on: I don't mean to shill for the organization, but there's a big, well-known survey organization by the name of Gallup which agrees with you. In fact, they've abandoned that part of their polling and actually increased their polling of substantive opinion. A striking change.
[0:16:38 Speaker 3] there. Also, several countries that have unsuccessfully band either pre election polls or the publication of them, especially because of this.
[0:16:46 Speaker 2] I'm all in favor of a black market for polls. I'd like to fill that niche. There's a piracy kind of thing about it; I think that's kind of cool, actually, renegade pollsters. But I don't want to go to prison for polling. The one thing I'll say, and I don't disagree with what Bob said about the junk-food quality of the ballot tests: there's a literature with respect to nomination politics, presidential primaries, that talks about how people use information about viability, viability meaning the chances that a candidate actually has a shot at winning the nomination. And this is what the early contests do; this is what polling has kind of done. It allows Republicans, for instance, to discriminate between Carly Fiorina and Jeb Bush and Ted Cruz and Marco Rubio and others. Now, I don't know that using that information is necessarily a good thing, but I'm not sure it's not a good thing for those voters. Is it useful for me to look at a Texas poll and find that a candidate I might have supported has zero support, but someone who is reasonably close to my preferences has a chance to win? This is a fairly unique condition of presidential politics in the United States, but I'm trying to think of exceptions, maybe, to the junk-food rule.
[0:18:04 Speaker 1] It applies in some multiparty systems, doesn't it?
[0:18:08 Speaker 3] Yeah, so I do think that's a benefit of polling information, assuming a reasonable degree of accuracy. But it doesn't negate or, in my view, fully counterbalance the problematic side of things. Sure, it makes it easier for you to judge which of two candidates you largely agree with you should vote for, on the basis of which has the greater chance of winning the election or getting the nomination. But it makes it much harder for you to judge who you genuinely agree with, if voters are not getting policy information. Can I change the subject? Still staying with polling, of course, but I didn't want our time to elapse without getting to mention probably my greatest dissatisfaction with the reporting of polls, which is the use of the margin of error to dismiss many results as a, quote, statistical tie, when there is no such thing. There's a kind of twisty logic behind this, from classical statistical inference: if you assume that two candidates are actually tied, and there's a margin of error of, say, 4 percent, that means that the probability of seeing an observed difference between them of greater than 4 percent is less than .05, a one-in-20 chance. If the difference between them is less than 4 percent, then there is a greater than .05 chance of that occurring by chance. Since any sample is a random draw, ideally a random draw, from the population, the results will always differ from what's true of the population to some degree. Occasionally they will be very different, just by chance. So the question is: what's the probability of seeing what you see if the candidates are really tied? But if the margin of error is 4 percent and the candidates are two or three percent apart, it is still quite unlikely, not .05, maybe .2 or something like that, but still quite unlikely, that you'd see those results by chance. The candidate who is, say, two points ahead with a four-point margin of error is very likely ahead, and to dismiss that as a statistical tie really is wrong.
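To put a rough number on Bob's example, here is a minimal Python sketch under a normal approximation; the shares and sample size are hypothetical (chosen so each share carries roughly a four-point margin of error), and the function name is ours, not anything from the guests' work.

```python
import math

# A sketch of Bob's point under a normal approximation: given observed
# shares for two candidates in one poll, estimate the probability that
# the leader is genuinely ahead. The variance of the gap between two
# shares from the same sample is (p_a + p_b - (p_a - p_b)**2) / n.

def prob_leader_ahead(p_a, p_b, n):
    """Approximate P(candidate A is truly ahead) given observed shares."""
    gap = p_a - p_b
    se_gap = math.sqrt((p_a + p_b - gap ** 2) / n)
    z = gap / se_gap
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# A 2-point lead in a 600-person poll (about a 4-point margin of error on
# each share) leaves the leader ahead roughly 69% of the time, hardly a "tie."
print(round(prob_leader_ahead(0.48, 0.46, 600), 2))  # 0.69
```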
[0:20:19 Speaker 1] It's another argument for pooling the polls as well. You've noticed the tendency: one poll comes out, 500 respondents or so, and there's a 3 percent difference, but it's not statistically significant. We have another one with a 4 percent difference, not significant. Another one with 3. If we add them together, we easily find significance.
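Here is a small Python sketch of Chris's pooling point, using the rule of thumb reporters tend to apply (a lead "counts" only if it exceeds the poll's reported margin of error); the leads and sample sizes are hypothetical, loosely matching his example.

```python
import math

# Chris's pooling point in miniature: three polls of 500 respondents, each
# with a lead smaller than its own reported margin of error, pooled into a
# single sample of 1,500 where the same average lead clears the margin.

def reported_moe(n, p=0.5, z=1.96):
    """Reported 95% margin of error on a single share, in points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

leads = [3.0, 4.0, 3.0]                      # leads in points, n = 500 each
for lead in leads:
    print(lead, "<", round(reported_moe(500), 1))    # each lead < 4.4 points

pooled = round(sum(leads) / len(leads), 2)   # 3.33-point lead on n = 1,500
print(pooled, ">", round(reported_moe(1500), 1))     # 3.33 > 2.5 points
```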
[0:20:38 Speaker 2] Yeah, this is my Ohio example from 2012, where President Obama was up in, I want to say, 10 consecutive polls, all within the margin of error. And so you had people saying, well, this is within the margin of error. But if you calculate the chances that Obama is behind when you have seen 10 consecutive surveys showing him ahead, by one point even, it's infinitesimal. Think about it logically, right? If we are actually tied, or if Romney is ahead, what are the chances that I'm going to draw 10 straight polls showing the opposite, that he's ahead? Yeah, exactly.
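Darren's back-of-the-envelope logic can be written out in a couple of lines of Python; the tied-race assumption and the independence of the polls are simplifications for illustration.

```python
# If the race were genuinely tied, each independent poll would show a given
# candidate ahead with probability about one half, so ten straight polls
# with the same leader occur with probability (1/2)**10.
print(0.5 ** 10)  # 0.0009765625, roughly 1 in 1,000
```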
[0:21:14 Speaker 3] And that is a common scenario. Frequently, in a close election, you will find a series of polls, all with results within the margin of error, but all pointing in the same direction. Each of them individually, the result showing Candidate X with a slim lead, should probably be taken seriously, and the evidence for that is that Candidate X has a slim lead in, you know, five other polls.
[0:21:39 Speaker 1] And try as we might, there's a real incentive for a lot of reporters not to reflect that. It makes the race a lot less interesting to say that Obama has Ohio in the bag.
[0:21:49 Speaker 0] So, are you convinced about the margin of error? Thanks for listening to this episode of The Connector, where we dove into the world of polling. If you are anything like me, you'll be thinking about polls just a little bit differently from now on. Stay tuned for part two of this episode, when we turn our attention toward November 2016. If you have any questions or comments, you can find me at gov.utexas.edu.