
PoliGAF 2015 |OT2| Pls print


Deleted member 231381

Unconfirmed Member
Find a new profession, then. There is no reason to poll 1,500 people all in the same day, because presidential preferences do not change that fast outside of major telegraphed events like presidential debates. And even THOSE are unlikely to change things in a single day, since many viewers don't watch debates live; they watch post-debate analysis or time-shift via YouTube. Whether you poll them on the same day or over three days is completely irrelevant.

You're being deliberately obtuse now. Let's examine a real-life example. Firstly, we'll suppose the polling average for the 2012 US Presidential election was accurate and that people intended to vote 48.8% Obama and 48.1% Romney in the three days prior to the election. Then suppose that 0.4% of the electorate switched from Obama to Romney over the course of the final three days. Even though the US is highly polarized, it isn't so polarized that 99.6% of people never sway or decide at the last minute; that would be absurd. Suddenly, Romney wins (the popular vote, at least) 48.5% to 48.4%, despite the fact that a three-day polling spread (correctly) put Obama ahead.
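A minimal sketch of that arithmetic, with the swing treated as an adjustable assumption (the figures are the hypothetical ones from the example, not real data):

```python
# Hypothetical three-day polling average (percentage points)
obama, romney = 48.8, 48.1

# Share of the electorate switching Obama -> Romney at the last minute
swing = 0.4

obama_final = obama - swing    # ~48.4
romney_final = romney + swing  # ~48.5

# A swing this small is invisible to a poll whose sample predates it,
# yet it flips the winner of the (hypothetical) popular vote.
print(romney_final > obama_final)
```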

In the last day of the race? Implying that this is even possible demonstrates a laughable level of comprehension on your part regarding US politics. The US electorate is EXTREMELY polarized and very few voters are truly up for grabs. Unless Obama got caught with a dead boy or a live girl on live television, there is no reason to think daily sampling is preferable to a three-day poll.

See above. Besides, equally as important as deciding who to vote for is deciding *whether* you vote. A lot of turnout models at least partially incorporate self-reported likelihood to vote. If, in the last few days, a lot of people decide they actually do intend to vote when previously they thought they'd stay at home (or the reverse: they decide they don't want to vote after all, which sadly is more common), then, again, you have problems detecting last-minute trends that can throw you badly off the result.

You'll also note that Gallup, which had a 2,700-voter poll (VERY close to your 1K a day), was the farthest off base at Romney +1.

You do not understand polling. The reason why Pew was accurate and Gallup wasn't, despite sample sizes, is that all of these polls WEIGHT THEIR RESULTS to make a prediction about who is more likely to turn out on election day. Pew was more accurate because their weighting was more accurate. Gallup was off base by a mile because they made bad assumptions that overrepresented Republicans, regardless of the number of people in their poll. Gallup came out and made a public statement admitting as much.

No shit. You're setting up some strawman where I'm arguing only sample size matters. If you actually read my original post about why US polling is unexpectedly poor, I specifically mention a tonne of different issues, all of which contribute to polling accuracy/inaccuracy. However, on the basis of polling performance, the US doesn't appear to be any better at demographic sampling than the UK is, with a similar degree of inaccuracy between 2012 and 2015. I'm just saying that small sample size is one of *a number of issues* alongside *several other things*, *all of which I listed*. I'm emphasizing those three things because I'm beginning to doubt you actually read posts properly.
 
You're being deliberately obtuse now. Let's examine a real-life example. Firstly, we'll suppose the polling average for the 2012 US Presidential election was accurate and that people intended to vote 48.8% Obama and 48.1% Romney in the three days prior to the election. Then suppose that 0.4% of the electorate switched from Obama to Romney in the final three days. Even though the US is highly polarized, it isn't so polarized that 99.6% of people never sway or decide at the last minute; that would be absurd. Suddenly, Romney wins (the popular vote, at least) 48.5% to 48.4%.

I'm not sure how much clearer I can make it. You don't understand the numbers you're looking at. It's already been pointed out to you that RCP's average uses a small number of polls and doesn't make allowances for certain pollsters being less accurate. The rolling average wasn't 48.8% Obama to 48.1% Romney. At all.

The HuffPost average found here

put the election at a wider 48.2% Obama to 46.7% Romney, using 589 polls from 62 pollsters, which was closer to the true result because, once again, RCP's average is not good.

ON TOP OF THAT, you're ignoring weighting. You cannot take a straight average of a dozen polls and just assume that's the way the electorate is. All of those polls are making different assumptions about who will actually show up. If you took Pew and Gallup and averaged them together, you would find that Obama was expected to lead by 2% - but that isn't accurate, because Gallup's weighting was so poor. The reason Nate Silver is considered accurate is that his model takes weighting and accuracy into account in his aggregate. There were known issues with Gallup long before election day, and most people following polls realized this and adjusted their models accordingly - RCP did not.

FINALLY, these polls are all likely voters. Not registered voters, not all adults. These are people who routinely vote in elections, and no, they don't swing from one candidate to the other without significant events. Just about everyone switches over to LV polls by the final month because of this. Again, you don't understand the US electorate or how these polls work.

No shit. You're setting up some strawman where I'm arguing only sample size matters. If you actually read my original post about why US polling is unexpectedly poor, I specifically mention how sampling is stratified as well. However, on the basis of polling performance, the US doesn't appear to be any better at demographic sampling than the UK is, with a similar degree of inaccuracy between 2012 and 2015. I'm just saying that small sample size is one of *a number of issues* alongside *several other things*, *all of which I listed*.

I've already pointed out how "US polling" isn't poor at all- your failure to understand is the issue.
 

Deleted member 231381

Unconfirmed Member
I'm not sure how much clearer I can make it. You don't understand the numbers you're looking at. It's already been pointed out to you that RCP's average uses a small number of polls and doesn't make allowances for certain pollsters being less accurate. The rolling average wasn't 48.8% Obama to 48.1% Romney. At all.

The HuffPost average found here

put the election at a wider 48.2% Obama to 46.7% Romney, using 589 polls from 62 pollsters, which was closer to the true result because, once again, RCP's average is not good.

Two points: firstly, I am comparing RCP to the UK's poll average because they work the same way: they literally just take a bunch of polls and average them. If you compare the HuffPost *weighted* average (and this weighting is not being done by the pollsters themselves but by an independent authority) to an *unweighted* UK average, you are not making a like-for-like comparison and you are biasing your test. Of course a carefully weighted average will do better than an unweighted one.

Secondly, this is still not significantly more accurate than the UK election. The actual US result was 51.1% (+2.9) to 47.2% (+0.5), compared to the HuffPost weighted average. The UK's general election result was 36.9% (+1.9) to 30.1% (-3.5), compared to the UK **unweighted** polling average - and this is ignoring the fact that the UK has multiple large parties, so you naturally expect slightly more error. If you include the Lib Dems at 7.9% (-1.4) and UKIP at 12.6% (+1.3) and work out the squared errors, then the UK election polls were almost exactly as accurate (square error 2.18%) as the American polls (square error 2.08%) - and this is comparing a weighted to a non-weighted prediction!
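Reading "square error" as the root-mean-square of the per-party deviations quoted above, the comparison can be checked in a few lines (this reading reproduces the US figure exactly; the UK figure comes out at 2.21 rather than 2.18, presumably from rounding in the inputs):

```python
import math

def rms_error(deviations):
    """Root-mean-square of per-party polling errors, in points."""
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

us_2012 = rms_error([2.9, 0.5])            # Obama, Romney deviations
uk_2015 = rms_error([1.9, 3.5, 1.4, 1.3])  # Con, Lab, Lib Dem, UKIP deviations

print(round(us_2012, 2), round(uk_2015, 2))
```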

ON TOP OF THAT, you're ignoring weighting. You cannot take a straight average of a dozen polls and just assume that's the way the electorate is. All of those polls are making different assumptions about who will actually show up. If you took Pew and Gallup and averaged them together, you would find that Obama was expected to lead by 2% - but that isn't accurate, because Gallup's weighting was so poor. The reason Nate Silver is considered accurate is that his model takes weighting and accuracy into account in his aggregate. There were known issues with Gallup long before election day, and most people following polls realized this and adjusted their models accordingly - RCP did not.

First, see above for why comparing a weighted average like HuffPost to the unweighted UK average is not a fair comparison. Second, I fully understand the process of weighting averages of polls. However, you are innately cheating the comparison by bringing in Nate Silver's predictions/independent weighting not conducted by the polling companies themselves. What I am effectively saying is 'American polls, taken on their own merits, are not very good.' What you are saying is 'American polls, once you have applied Nate Silver's independent adjustments that were not part of the original polling set-up, are good.'

No shit polls have more value when someone who can spot how badly the original polling company weighted the different samples internal to its own poll sorts them out. I'm not arguing against that. The UK has no real equivalent to Nate Silver. However, *setting Nate Silver aside for the moment* and judging the quality of the polls purely on their own merits, before applying *external adjustments*, American polls do not perform well. The average American poll taken on its own merits performed more poorly than the average British poll taken on its own merits in their respective elections.

I have highlighted several reasons for this. One, and by no means the most important, was sample size. Another, and probably the most important, is demography. I explicitly listed in my original post the differences between UK and US demography, and how the weighting of different US demographics *internally, within a given poll* is poorer in comparison.

FINALLY, these polls are all likely voters. Not registered voters, not all adults. These are people who routinely vote in elections, and no, they don't swing from one candidate to the other without significant events. Just about everyone switches over to LV polls by the final month because of this. Again, you don't understand the US electorate or how these polls work.

Likely voters are not some easily ascertainable group that remains exactly the same over time. The way likely voters are found is *usually* that a specific sample is phoned and asked how likely they are to vote, respondents are discarded if they answer below a certain level, and the remaining responses are further weighted - i.e., people who said 10/10 likely are weighted higher than others, while people who are black or poor are weighted lower, because even at identical rates of self-reported likelihood to vote, poor people and black people statistically turn up less.

That means that if people change their mind about how likely they are to vote in the final few days (which is absolutely not uncommon, I can link you a shit-ton of papers if you're really interested), then the sample of voters *who are considered likely voters in the first place* changes, which obviously has an impact on accuracy.
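As a toy illustration (the respondents, cutoff, and numbers are all invented), a change in self-reported likelihood alone can move the likely-voter topline even when nobody changes candidate:

```python
# Each respondent: (self-reported likelihood to vote on a 1-10 scale, candidate)
early_sample = [(10, "A"), (9, "A"), (3, "B"), (10, "B"), (2, "B")]
# The same five people days later: two "B" supporters now intend to vote.
late_sample  = [(10, "A"), (9, "A"), (8, "B"), (10, "B"), (7, "B")]

def likely_voter_share(sample, candidate, cutoff=7):
    """Candidate's share among respondents passing the likelihood screen."""
    likely = [c for score, c in sample if score >= cutoff]
    return likely.count(candidate) / len(likely)

print(likely_voter_share(early_sample, "A"))  # A leads 2-1 among likely voters
print(likely_voter_share(late_sample, "A"))   # A now trails 2-3
```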

Honestly, declaring I have no idea how polls work again and again does not help you. It is okay to admit when you are wrong.
 

Deleted member 231381

Unconfirmed Member
Hey guess what. You guys are never gonna agree!

We partly disagree because (I think) we're comparing different things. manmade is essentially saying "if you know how to discard or weight different American polls (not subsamples of the polls, the actual polls themselves) based on a prior study of accuracy and bias, you can make a good prediction". I'm saying "if you pick an American poll at random, on average the poll itself makes a poor prediction". So I think (I may be wrong) we're talking at cross purposes, because I agree with manmade's argument if it is as I've stated it above - that's why people like Nate Silver and Sam Wang can do a good job. If he genuinely is disputing my second point (which is about how good American polls are before you apply independent analysis), though, he's just wrong. It's not a matter of opinion; there is literal data to prove otherwise.
 
Two points: firstly, I am comparing RCP to the UK's poll average because they work the same way: they literally just take a bunch of polls and average them

And as I've tried to explain, why you would choose RCP of all people to do this is just bizarre, since RCP takes only a small subset of polls they happen to prefer, assumes they are all equally valid, and bases the race off of them. As we can see with the primary election, RCP's selection process is flawed: there is a GIANT gap between Carson and Trump that is not reflected at RCP but is immediately apparent elsewhere. RCP's poll selection portrays the race as closer than it actually is, due to editorial bias. This is not a polling issue; this is an issue with RCP, which is not a pollster.

Secondly, this is still not significantly more accurate than the UK election. The actual US result was 51.1% (+2.9) to 47.2% (+0.5), compared to the HuffPost weighted average. The UK's general election result was 36.9% (+1.9) to 30.1% (-3.5), compared to the UK **unweighted** polling average - and this is ignoring the fact that the UK has multiple large parties, so you naturally expect slightly more error. If you include the Lib Dems at 7.9% (-1.4) and UKIP at 12.6% (+1.3) and work out the squared errors, then the UK election polls were almost exactly as accurate (square error 2.18%) as the American polls (square error 2.08%) - and this is comparing a weighted to a non-weighted prediction!

The UK also has significantly less demographic diversity, so taking a straight unweighted poll doesn't work. The US is at least 37% nonwhite, with far more of the population being recent immigrants. You can bring up political parties all you want, but racial demographics being as skewed as they are in the US vs. the UK makes it FAR more difficult to poll - especially since about 19 million people in the US speak English "poorly" or "not at all."

And as I've tried to explain, there is no such thing as an unweighted poll, since what a likely voter IS varies from pollster to pollster as they attempt to predict which demographics are likely to turn out. What motivates white voters to turn out will NOT motivate minorities, and vice versa - in fact, a lot of the weighting error in 2012 revolved around white voter "anger" which failed to appear.

First, see above for why comparing a weighted average like HuffPost to the unweighted UK average is not a fair comparison. Second, I fully understand the process of weighting averages of polls. However, you are innately cheating the comparison by bringing in Nate Silver's predictions/independent weighting not conducted by the polling companies themselves. What I am effectively saying is 'American polls, taken on their own merits, are not very good.' What you are saying is 'American polls, once you have applied Nate Silver's independent adjustments that were not part of the original polling set-up, are good.'

It's not innately "cheating" to make the assertion that you cannot simply grab polls at random without considering whether or not the pollster is any good. There are dozens of fly-by-night outfits that few people take seriously, with crazy weighting and high margins of error. Putting these on an equal footing with PPP, Pew, or even Rasmussen is idiocy.

Understanding the methodology behind each particular pollster and why they weight the way they do (Rasmussen was notorious for having a Republican "house effect" in past races) is essential to being able to criticize a poll as "good" or "bad." If you cannot do this (and it appears that you can't), then your opinion on US polling isn't worth the time it takes to type it out.

Likely voters are not some easily ascertainable group that remains exactly the same over time. The way likely voters are found is *usually* that a specific sample is phoned and asked how likely they are to vote, respondents are discarded if they answer below a certain level, and the remaining responses are further weighted - i.e., people who said 10/10 likely are weighted higher than others, while people who are black or poor are weighted lower, because even at identical rates of self-reported likelihood to vote, poor people and black people statistically turn up less.

And as I've pointed out REPEATEDLY, you do not understand what you're talking about. "Likely voter models" vary from pollster to pollster. A likely voter at Gallup is not a likely voter at Pew, is not a likely voter at PPP. Racial demographics, age, and location all play into this in a way that does not occur in the UK. Young voters or black voters might be less likely to vote, but exactly HOW much less likely is going to be up to the pollster. Again, you seem to be completely unaware of this, and assume you can simply average likely voters together because everyone is using the same model to determine what a likely voter is. This. Is. Wrong. And I already linked you the Gallup article that explains this:

Gallup’s seven-question model to determine likely voters is famous in the polling world, but may have contributed to errors in 2012. While most likely voter models improved Romney’s 2012 standing, Gallup’s resulted in a larger-than-average four-point shift. In particular, the finding mirrors problems in the 2008 New Hampshire primary, when Gallup’s likely voter model produced larger errors than un-adjusted data, according to a report by the American Association for Public Opinion Research.

Gallup said the 2012 election data were not sufficient to diagnose what was wrong with its likely voter model, but plans to test the accuracy of the model in gubernatorial elections this year in New Jersey and Virginia by comparing survey results with records of whether respondents actually voted.

Are we clear?
 

Deleted member 231381

Unconfirmed Member
And as I've tried to explain, why you would choose RCP of all people to do this is just bizarre, since RCP takes only a small subset of polls they happen to prefer and bases the race off of them. As we can see with the primary election, RCP's selection process is flawed: there is a GIANT gap between Carson and Trump that is not reflected at RCP but is immediately apparent elsewhere. RCP's poll selection portrays the race as closer than it actually is, due to editorial bias.

This doesn't make much difference. You can use the Huffington Post's example (as I did), and the data still supports my argument. I used RCP because (to my knowledge) it simply takes an average of all significant pollsters equally.

The UK also has significantly less demographic diversity, so taking a straight unweighted poll doesn't work. The US is at least 37% nonwhite, with far more of the population being recent immigrants. You can bring up political parties all you want, but racial demographics being as skewed as they are in the US vs. the UK makes it FAR more difficult to poll - especially since about 19 million people in the US speak English "poorly" or "not at all."

And as I've tried to explain, there is no such thing as an unweighted poll, since what a likely voter IS varies from pollster to pollster as they attempt to predict which demographics are likely to turn out. What motivates white voters to turn out will NOT motivate minorities, and vice versa - in fact, a lot of the weighting error in 2012 revolved around white voter "anger" which failed to appear.

You and I are talking about different things entirely. Let's talk about weighting. There are two types of weighting that are important when looking at an average of polls. The first is internal weighting. Say a pollster conducts a poll, gets through to 1,000 people, and 900 of those people are white and 100 black. This does not accurately reflect the American demographics, so they weight each white person at 15/18ths of a vote and each black person at 2.5 votes.

The second is how you weight the polls themselves when producing your average (external weighting). Let's say we have two polls. One says the result will be 50 D / 50 R. The other says the result will be 54 D / 46 R. We know from the past X elections that the second polling company has typically performed better. So, when we produce an average of these polls, we weight the second poll more heavily than the first, such that our weighted average concludes the current state of the race is probably 53 D / 47 R, rather than the 52 D / 48 R unweighted average.
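Both toy examples above can be sketched in a few lines (the 3:1 external weight is just one choice that reproduces the 53 D / 47 R figure; it is not from any real model):

```python
# Internal weighting: 900 white / 100 black respondents reweighted with the
# 15/18 and 2.5 weights from the example, preserving the 1,000-vote total.
white_weight, black_weight = 15 / 18, 2.5
weighted_total = 900 * white_weight + 100 * black_weight

# External weighting: the historically better poll counts three times as much.
polls = [(50.0, 1.0), (54.0, 3.0)]  # (Dem share, weight)
dem_average = sum(share * w for share, w in polls) / sum(w for _, w in polls)

print(round(weighted_total), dem_average)  # 1000 and 53.0
```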

When I say 'unweighted', what I have been referring to is the second, NOT the first. When I say 'unweighted', I do not mean failing to properly stratify the different demographic samples in any given poll. This would be absurd. Any given poll must have its subsamples weighted properly to give an accurate result. In fact, I would argue that British polls do this *better* than American polls.

When I say 'unweighted', what I mean is that looking at the average American poll and comparing it to the average British poll, without giving particular polls more importance according to accuracy in prior elections, British polls perform better.

However, I will go one step further. As far as I know, there were no major British pundits providing weighted averages (in the second sense) of UK polls - in other words, there was no Nate Silver. There were a number of people who made election predictions on the basis of polling data, but they did so directly from the data the polls gave them, rather than considering the prior record of those polls (mostly because the prior record of those polls was very good, as in 2010).

However, if they had, then British polls would have performed even better than they did and significantly better than American ones. This is because of weighting *in the first sense*. UK polls do better demographic sampling than American ones, simply because they stratify for more things.

It's not innately "cheating" to make the assertion that you cannot simply grab polls at random without considering whether or not that pollster is any good. There are dozens of fly by night outfits that few people take seriously with crazy weighting and high margin of error. Putting these on an equal footing with PPP, Pew, or even Rasmussen is idiocy.

This is my point, though. On average, American polling is bad. Yes, it improves when you say "well, we know X, Y and Z were bad in the past, so we'll discard them, and we know A, B, and C were inaccurate for this reason, so we will correct them ourselves". But this doesn't say anything meaningful. Literally any developed Western country with some basic level of polling will produce better results when you carefully handpick and moderate the polls you look at. You're effectively saying "Something is good when you take away the bad things!"

Here's a thought experiment. The year is 20XX. Britain and America are having an election. You have no prior knowledge of the success rate of any polling company, and cannot make adjustments to your predictions based on the past success of any poll, and you can only take the poll at face value. Which will allow you to make more accurate predictions about the popular vote: American or British polls?

The answer is factually British ones.

Understanding the methodology behind each particular pollster and why they weight the way they do (Rasmussen was notorious for having a republican "house effect" in past races) is essential to being able to criticize a poll as "good" or "bad." if you cannot do this (and it appears that you can't) then your opinion on US polling isn't worth the time it takes to type it out.

This is what I mean by us talking at cross purposes. I do not disagree with what you typed here. If you know the prior accuracy and bias of a particular polling organization, you can assign different importance to or even outright correct particular polls to end up with a better prediction. This does not mean that the *polls* are good. It means that the analysts are good at knowing how to tease out meaningful information from what are at face value inaccurate polls.

And as I've pointed out REPEATEDLY, you do not understand what you're talking about. "Likely voter models" vary from pollster to pollster. A likely voter at Gallup is not a likely voter at Pew, is not a likely voter at PPP. Racial demographics, age, and location all play into this in a way that does not occur in the UK. Again, you seem to be completely unaware of this, and assume you can simply average likely voters together because everyone is using the same model to determine what a likely voter is. This. Is. Wrong.

Are we clear?

I was pointing to the typical example of how likely voters are determined. I am not saying they are the same across polls. Different pollsters obviously decide who is a likely voter differently. What I am saying, and what is factually true, is that the sample of likely voters changes over time as people reconsider their likelihood to vote. Almost all pollsters put *at least some weight* on self-reported likelihood to vote in their likely voter models (although there are some exceptions). In the final few days before an election, people can often reconsider their likelihood to vote quite rapidly. There is a lot of data to suggest this is the case, which I can link you to. Therefore, polls which take their samples over three separate days can often miss important developments in the result. This is true even when companies use different likely voter turnout models, something I do not deny.

You are wrong.

EDIT: Also, just so we're clear, this isn't US vs. UK dick-waving; I have no interest in that. It is simply an observation that US polling is a lot less well regulated than British polling, which is subject to a number of stringent laws. That produces a lot more cowboy pollster companies that drag the standard of the industry down, meaning that you have to rely more and more on a small number of experts to describe the race, rather than being able to interpret it directly from polling data yourself. In many other respects, the US has better political punditry than the UK.
 

Wilsongt

Member
It would be incredibly tacky if the GOP starts using the Paris attacks to ridicule Hillary to try to woo the electorate their way out of fear.

They're going to do just that, aren't they?

Edit:

Yep.
 
It would be incredibly tacky if the GOP starts using the Paris attacks to ridicule Hillary to try to woo the electorate their way out of fear.

They're going to do just that, aren't they?

Is that even possible? Hillary is a hawk, it's going to be ridiculous if Rubio is on the debate stage going "I'll be much tougher on terror" while Hillary is calling for over-the-top intervention.
 

Wilsongt

Member
Is that even possible? Hillary is a hawk, it's going to be ridiculous if Rubio is on the debate stage going "I'll be much tougher on terror" while Hillary is calling for over-the-top intervention.

Donald J. Trump
@realDonaldTrump

President Obama said "ISIL continues to shrink" in an interview just hours before the horrible attack in Paris. He is just so bad! CHANGE.
8:39 AM - 14 Nov 2015

Zeke Miller
@ZekeJMiller

Kasich: "Just as France did for us in aftermath of the infamous 9/11 attacks, we should invoke Article 5"

Jake Tapper
@jaketapper

Santorum: US "sitting around nibbling at" ISIS; "they're fighting the United States and they're winning"
12:00 PM - 14 Nov 2015
 
This doesn't make much difference. You can use the Huffington Post's example (as I did), and the data still supports my argument. I used RCP because (to my knowledge) it simply takes an average of all significant pollsters equally.

And here again I have to point out that what counts as "significant" is skewed by RCP's editorial bias. They ignore relevant pollsters and include flawed ones, using a far smaller sample than someone like HuffPost, who took just about everyone into account. HuffPost's average, while not perfect, illustrates that Obama was ahead of Romney; RCP set them at a tie. One of these was wrong, and it was RCP.

You and I are talking about different things entirely. Let's talk about weighting. There are two types of weighting that are important when looking at an average of polls. The first is internal weighting. Say a pollster conducts a poll, gets through to 1,000 people, and 900 of those people are white and 100 black. This does not accurately reflect the American demographics, so they weight each white person at 15/18ths of a vote and each black person at 2.5 votes.

The second is how you weight the polls themselves when producing your average (external weighting). Let's say we have two polls. One says the result will be 50 D / 50 R. The other says the result will be 54 D / 46 R. We know from the past X elections that the second polling company has typically performed better. So, when we produce an average of these polls, we weight the second poll better than the first, such that our weighted average concludes the current state of the race is probably 53 D / 47 R, rather than 52 D / 48 R which is the unweighted average.

Yes, let's talk about weighting, since your understanding of how polls are weighted isn't just poor, it's flat-out wrong.

In your example, all the pollster is doing is correcting for a lack of black participants in their poll results, weighting black voters higher to account for their percentage of the population, then going about their way.

WEIGHTING DOES NOT WORK THIS WAY.

Now – theoretically speaking – with every additional person who is polled, the margin of error of that poll decreases accordingly. But those models assume that your random sample is also a representative sample. What happens when that isn’t the case? For instance, let’s say we have a population of about 1,000 people, and prior studies and voter registration records and such show that the population is split 50/50 between Democrats and Republicans. If we take a poll of 100 people out of this population, it is within the realm of possibility that we end up with responses, for example, from 80 Democrats and 20 Republicans.

If we did, we could go ahead and publish the poll results – and they would be wildly off and our reputation as a pollster would be damaged. Instead, knowing what we know, we can go ahead and make those results look more like what they ought to look like, based on our population data. We can take the results of those 20 Republicans and multiply it by 2.5, and take the result of the 80 Democrats and divide it by 1.6 – making it look like our final sample was indeed 50 Democrats and 50 Republicans when it actually wasn’t.
Now while some people and would-be pundits decry companies engaging in such “tinkering” with their polls, given the inherent nature of random samples I don’t think any serious person would say pollsters should not make adjustments to their numbers from time to time – with the end goal being achieving a representative sample as well. Here’s where it gets complicated, though: the example given above is an incredibly basic, elementary version of weighting a poll.

This is what you are doing: extremely basic, elementary weighting of polls.

Major pollsters like Gallup use multi-demographic weighting to break the responses down further. This is literally polling 101 - in fact, this is where I went to explain this to you, a place called "Polling 101."
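The correction in the quoted 80/20 example generalizes: each group's weight is its target share divided by its share of the actual sample. A quick sketch with the quote's toy numbers:

```python
def reweight(sample_counts, target_shares):
    """Per-respondent weight that makes the weighted sample match the targets."""
    total = sum(sample_counts.values())
    return {g: target_shares[g] * total / n for g, n in sample_counts.items()}

# 80 Democrats and 20 Republicans polled from a known 50/50 population.
weights = reweight({"D": 80, "R": 20}, {"D": 0.5, "R": 0.5})
print(weights)  # D: 0.625 (i.e. divide by 1.6), R: 2.5
```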

At the most basic (and least useful) level, a pollster might simply ask a question to the effect of, “Are you planning to vote in the upcoming election?” One small step beyond that would be a question such as, “On a scale of 1-5 (or 1-10, or 1-7, or whatever), how likely are you to vote in November?” Then the pollster will only continue the poll with those who answered with a “5” (or maybe a 4 or a 5, etc.). These two questions represent a quick and easy LV screen that allows the pollster to technically report their results as being from the population of “likely voters;” however, self-reported motivation is a tenuous proposition, especially when it comes to voting. The social stigma surrounding voting is such that the vast majority of people will answer “Yes!” when you ask them if they are planning to vote.

In fact, a study done by Pew Research showed that when given the chance to rate how likely they were to vote on a scale of 1-10, 77% of American adults said 10. An additional 13% gave an 8 or a 9, giving a total of 90% of the population who were very likely to vote. Odd, then, how our election turnout in a Presidential election year is usually just over 50% (or, put another way, about 75% of the 70%, using the numbers at the beginning of this post).

To solve this problem, most pollsters have developed what they refer to as an index. An index is nothing more than a series of questions, the answers to which are scored and then used as a likely voter screen. Some indexes are short, maybe two or three questions, while some are quite extensive, with 10 questions or more. The goal, of course, is utilizing the data to determine likelihood of voting.

For instance, most likely voter screens will include a question about your past election history: how many elections you’ve voted in in the past x number of years, or if you voted in the previous Presidential or primary election – something to that effect. People who have voted in the past are rated more highly on the index than those who have not. Others will ask a respondent’s interest in politics, or how closely they follow the news about the election, and those who follow it closely or say they are “very interested” get rated higher. In the end, the scores are tallied, a cutoff point is determined, and the likely voter sample is achieved.

The most famous index in the polling world is Gallup’s, because they have made it the most accessible to the public. Gallup utilizes a series of seven questions, each of which they score with a 0 or a 1 – and then only respondents who scored a 7 out of 7 (or sometimes a 6 out of 7, depending on sample size) are used in their likely voter sample.
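A hypothetical sketch of how a Gallup-style cutoff index works mechanically (the actual seven questions aren't listed here, so the answers below are stand-in booleans, not Gallup's real questionnaire):

```python
# Gallup-style likely-voter index: each of 7 questions scores 0 or 1,
# and only respondents at the cutoff (7/7, or sometimes 6/7 depending
# on sample size) make it into the likely-voter pool.

def index_score(answers):
    """answers: list of 7 booleans (True = the 'likely voter' answer)."""
    return sum(1 for a in answers if a)

def likely_voters(respondents, cutoff=7):
    return [r for r in respondents if index_score(r["answers"]) >= cutoff]

respondents = [
    {"id": 1, "answers": [True] * 7},                  # scores 7 -> in
    {"id": 2, "answers": [True] * 6 + [False]},        # scores 6 -> out at cutoff 7
    {"id": 3, "answers": [True, False] * 3 + [True]},  # scores 4 -> out either way
]

print([r["id"] for r in likely_voters(respondents)])            # [1]
print([r["id"] for r in likely_voters(respondents, cutoff=6)])  # [1, 2]
```

Note how respondent 2 (the new-to-the-precinct case described below) is the one who flips in or out depending on where the cutoff is set.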

Now, there are some obvious problems with this index. Take me, for example: I have voted in every single election and nearly every single primary since I turned 18. I am the epitome of a likely voter. However, if Gallup had polled me using this index during the 2008 election, I would have flunked the test. Why? Because my family had just moved into a new house in a new voting precinct. I would have had to answer questions 2 and 3 with a no, and Gallup would have cut me out of their sample. But I suppose examples such as mine are rather statistically insignificant, and Gallup seems to be doing all right with their index – being the most recognized pollster in the world and all.

CBS utilizes a six-question index, but rather than setting a threshold like Gallup or ABC (i.e., to qualify as a “likely voter” you have to score a 6 or a 7), CBS actually weights their polls based on their likely voter screen. I know – let’s see how complicated we can make this process, right? CBS assigns weights to each of the six index questions and, based on the combination of the six answers, assigns an overall weight (ranging from 0.05 to 1.0) to each respondent. Thus, for instance, someone who answers a CBS poll could be determined to have a 75% chance of voting in November, giving their responses a .75 weight in the poll. In this way, likely voters’ responses are given full weight while people who may or may not vote are still given a voice in the topline results. (And, it must be noted, the accuracy this screening/weighting method produces is questionable.)
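A minimal sketch of that CBS-style approach: instead of a hard in/out cutoff, each respondent carries a turnout weight and the topline is a weighted average. Only the 0.05-1.0 weight range comes from the description above; the respondents and choices here are invented for illustration:

```python
# Fractional turnout weighting: every respondent stays in the topline,
# but their voice is scaled by their estimated probability of voting.

respondents = [
    {"choice": "A", "turnout_weight": 1.00},  # near-certain voter
    {"choice": "A", "turnout_weight": 0.75},  # 75% chance of voting
    {"choice": "B", "turnout_weight": 0.75},
    {"choice": "B", "turnout_weight": 0.05},  # very unlikely voter, tiny voice
]

total_weight = sum(r["turnout_weight"] for r in respondents)
topline = {}
for r in respondents:
    topline[r["choice"]] = topline.get(r["choice"], 0.0) + r["turnout_weight"]
topline = {c: round(100 * w / total_weight, 1) for c, w in topline.items()}

# The raw sample is tied 2-2, but candidate A leads the weighted topline
# because A's supporters are likelier to vote.
print(topline)
```

This is the design trade-off the paragraph above describes: nobody is thrown away, but low-probability voters barely move the numbers.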

So why does this matter to us, as political pundits and poll consumers? It gives us another rather easy way to judge the value of a particular poll. If it is a poll of simply adults, we know that half of the people represented by the survey probably aren’t even going to vote. If it’s a poll using a registered voter model, we know around 25% of the voters represented by the results aren’t going to vote. And if it’s a likely voter model, we know that some level of subjectivity went into selecting which responses were reported and which ones were not.

And therein lies the rub: many people (myself included) place a higher value on likely voter models because we believe they will produce more accurate outcomes, since they more accurately reflect (to some degree or another) who will actually show up in the voting booth. However, we must temper that with the understanding that not all likely voter models are created equal. And many pollsters will not release their LV screens to the public, fearing that other companies may steal them or that people will use them to delegitimize their polls.
Sometimes, likely voter models can be off. Generally and historically speaking, people who are registered Republicans vote in higher percentages than those who are registered Democrats (which is one of the reasons ABC uses party ID in their likely voter screen). Whatever the reason is for this (registered Republicans tend to be older, tend to be more responsible, tend not to be smoking pot in their parents’ basement on election day, take your pick), it has been borne out in election after election. Therefore, as we eliminate respondents from our poll results by shifting from adults to RVs to LVs, the results tend to become more favorable to Republican candidates. (This is not always the case, and this historical assumption was strained during the 2004 election when many LV model polls produced results more favorable to John Kerry — but as an oversimplified rule of thumb Republicans do better in LV polls.)
However, sometimes LV models drill too far down and exclude too many people. The 2000 election is a prime example of this. The final polls in 2000 were, quite interestingly, split: those which utilized RV models mostly predicted a Gore victory by a few points, and those which used LV models mostly predicted a Bush victory by a couple points. Of course, the final result was a tiny 0.5% margin in favor of Gore – which meant the RV models most likely did not exclude enough people while the LV models excluded too many. So just because a pollster filters responses through a likely voter screen does not make a poll automatically more accurate, it just gives it the potential to be more accurate.

http://race42016.com/2012/04/30/polling-101-voter-models/

So when I say "you don't understand" here, I'm trying to hammer in that your understanding of weighting by pollsters is EXTREMELY basic, and is completely ignoring the process of likely voter screening entirely.

You are clueless, I award you no points, and may god have mercy on your soul.

You are wrong.

NO U.

EDIT: Also, just so we're clear, this isn't US vs. UK dick-waving; I have no interest in that. It is simply an observation that US polling is a lot less well-regulated than British polling, which is subject to a number of stringent laws. That produces a lot more cowboy pollster companies dragging the standard of the industry down, which means you have to rely more and more on a small number of experts to describe the race, rather than being able to interpret it directly from the polling data yourself. In many other respects, the US has better political punditry than the UK.

Less well regulated does not mean "less accurate," and it's foolish to make that assertion. Do "cowboy pollsters" exist? Of course they do, but there is enough available data that few if any media outlets take such pollsters seriously. Those that make a business of aggregating polls (with the exception of RCP) take the track record of such pollsters into account and grant them little to no weight, if they bother to report on their results at all.
 
You may have noticed that I gave Obama a nod over his handling of Israel, despite my usual critical stance. Unlike Hillary and the Republicans, who personally roll out the red carpet for Netanyahu, he has at least been critical where warranted (on settlements, etc.). But do you know what his greatest achievement is? And no, I'm not talking about Obamacare.

It's Net Neutrality, because it means that any Joe Schmo with half a brain can continue to go on the Internet and learn about practically anything. Most importantly, it acts as a real equalizer on the political front: with a little effort, you can come to understand the political system and untangle the political spin put out by the establishment media.
 
D

Deleted member 231381

Unconfirmed Member
And here again I have to point out that what defines "significant" is skewed by RCP editorial bias. They ignore relevant pollsters and include flawed ones, using a far smaller sample than someone like HuffPost, who took just about everyone into account. HuffPost's average, while not perfect, illustrates that Obama was ahead of Romney; RCP had them at a tie. One of these was wrong, and it was RCP.

This makes 0 difference to what I'm saying when, purely for you, I used the HuffPost for comparison as well.

Yes, let's talk about weighting, since your understanding of how polls are weighted isn't just poor, it's flat out wrong.

In your example, all the pollster is doing is correcting for a lack of black participants in their poll results, weighting black voters higher to account for their percentage of the population, then going about their way.

WEIGHTING DOES NOT WORK THIS WAY.

Holy shit you are a moron. NO SHIT THAT IS NOT THE ONLY WAY IT WORKS. I was not saying a sample reweighting for racial identity is the only fucking way that people weight a sample, I was giving it as a very simple example of A weighting process (not ALL OF WEIGHTING EVER) so that we could stop talking about different things with you thinking they were the same.

I KNOW you weight for race, you weight for age, gender, socioeconomic bracket, party identification, likelihood to vote, and frankly as many other factors as you have the money and time to figure out a weighting for. I SPECIFICALLY POINTED THIS OUT IN THE VERY FIRST FUCKING POST I MADE THAT STARTED THIS ARGUMENT.

Crab said:
UK polls are typically stratified by gender, age group (typically between 5 different age demographics), region, newspaper readership, party self-identification, and ethnicity; US polls are typically stratified by gender, age group (usually between 3 different age demographics) and self-identified race.

NOT ONLY have I already pointed this out, my argument is that US POLLS TYPICALLY DO THEIR DEMOGRAPHIC WEIGHTING VERY POORLY, by pointing out how badly the average US poll performs - worse than the British ones which were widely condemned for their poor 2010 performance. You have literally had NO COHERENT RESPONSE to this other than saying 'well Nate Silver can figure it out' - yes, thanks, no shit that someone specifically paid to work out how polls have gone wrong can make a decent prediction out of them. That is my ENTIRE point - you can't take American polls at face value as well as you can polls from other countries because they are so often wrong you need special analysis to begin with.

You're not even arguing my point, you're arguing with some weird... well, I'd call it a strawman, but that implies you made a conscious effort. You're arguing with a complete and total misunderstanding of what I'm saying. I hate to come across all patronizing with the capitalized emphasis, but Christ, is nothing getting through to you. I spent time working at YouGov; believe me, I know how polls work.
 

SL128

Member
Crab, Man, skimming your conversation I think it would be productive for you both to take a break and calm down before the discussion deteriorates further.

It would be incredibly tacky if the GOP starts using the Paris attacks to ridicule Hillary to try to woo the electorate their way out of fear.

They're going to do just that, aren't they?

Edit:

Yep.
They're a party that had 9/11 happen on their watch but managed to blame it on the Democrats; this is nowhere near as tacky as they've been.
 
NOT ONLY have I already pointed this out, my argument is that US POLLS TYPICALLY DO THEIR DEMOGRAPHIC WEIGHTING VERY POORLY, by pointing out how badly the average US poll performs - worse than the British ones which were widely condemned for their poor 2010 performance. You have literally had NO COHERENT RESPONSE to this other than saying 'well Nate Silver can figure it out' - yes, thanks, no shit that someone specifically paid to work out how polls have gone wrong can make a decent prediction out of them. That is my ENTIRE point - you can't take American polls at face value as well as you can polls from other countries.

You're not even arguing my point, you're arguing with some weird... well, I'd call it a strawman, but that implies you made a conscious effort. You're arguing with a complete and total misunderstanding of what I'm saying. I hate to come across all patronizing with the capitalized emphasis, but Christ, is nothing getting through to you. I spent time working at YouGov; believe me, I know how polls work.

And I don't think you have. You've made some extremely basic and poorly sourced assertions about US polls, with little more than "But the british are better, thanks to these random polls I looked at", along with leaning on RCP's average for bizarre reasons you've yet to explain other than "it's a straight average, why shouldn't it work?"

You also seem to keep going back to the well of a daily poll being preferable to a three day, based on completely fictional movements in likely voters that have no basis in reality.

Elections aren't conducted in a vacuum. In a hypothetical fantasy world where we wake up and have to grab polls at random knowing nothing, perhaps the British model would be better! Fortunately we live in a world where data is easily accessible and finding reputable pollsters as well as their methodology is trivial...except perhaps in your case.

That being said- I'm done for the day. Amusing as this was I have things to do
 

Tarkus

Member
200w.gif
 
D

Deleted member 231381

Unconfirmed Member
And I don't think you have. You've made some extremely basic and poorly sourced assertions about US polls, with little more than "But the british are better, thanks to these random polls I looked at", along with leaning on RCP's average for bizarre reasons you've yet to explain other than "it's a straight average, why shouldn't it work?"

My 'assertion' is, and always was, this: if you took all the polls released in the three-day run-up to the United States presidential election, and selected one randomly with all those polls given an equal chance of being selected, then the poll you found would, on average, be less accurate than if you did the same with all the polls released in the final three-day run-up to the British general election. In other words, when taken at face value, something I have explicitly stated multiple times, US polls are quite poor. My evidence for this? The fact that it is actually *true*. The squared error for British polls was 2.18%. For American polls, it was 2.32% (for reference, I used the top five from here, as they all ran till the 4th; if you have any other polls that ran till the 4th, I will happily include them).
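For what it's worth, that comparison is purely mechanical. A minimal Python sketch of a squared-error calculation of this kind (the vote shares and poll numbers below are placeholders, not the actual British or American polls being compared):

```python
# Squared-error scoring of final polls against the actual result:
# for each pollster, square the miss on each party's final vote share,
# average over parties, then average across pollsters.

def poll_error(poll, result):
    """Mean squared error of one poll against the actual result, in points."""
    return sum((poll[p] - result[p]) ** 2 for p in result) / len(result)

result = {"X": 48.8, "Y": 48.1}   # hypothetical actual vote shares
polls = [
    {"X": 47.0, "Y": 49.0},       # placeholder final polls
    {"X": 50.0, "Y": 47.0},
]

avg_error = sum(poll_error(p, result) for p in polls) / len(polls)
print(round(avg_error, 3))
```

A perfect poll scores 0, and a lower average across a country's pollsters is what "more accurate at face value" means in this argument.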

Elections aren't conducted in a vacuum. In a hypothetical fantasy world where we wake up and have to grab polls at random knowing nothing, perhaps the British model would be better! Fortunately we live in a world where data is easily accessible and finding reputable pollsters as well as their methodology is trivial...except perhaps in your case.

This is literally irrelevant to what I've been saying. At no point have I said "the polls are flawed, therefore trust nothing, listen to nobody". I have been saying "American polls are not very good when taken at face value". The point of saying this is to point out *just how important it is* to apply very careful consideration to them in the way that Nate Silver does. I don't even think you actually disagree with me; you just misunderstood what I posted, got into an argument you didn't want to back out of, and then couldn't admit you were wrong when you realized.
 
I don't think I'm going to make the debate tonight. My fever's 104 and I have to sit here and wait for my mom's physical therapist to come to the house.

I want to crawl in a hole and die. I love debates. : sigh :
 

kingkitty

Member
The People have spoken. I must make all future threads.

I call dibs on the next next next debate thread (maybe the next Dem thread? or the one after?). I've lost track of who has dibs on what. Hopefully that next debate thread won't follow an immense tragedy that will cause me to delete 90 percent of my OP.
 

Makai

Member
I call dibs on the next next next debate thread (maybe the next Dem thread? or the one after?). I've lost track of who has dibs on what. Hopefully that next debate thread won't follow an immense tragedy that will cause me to delete 90 percent of my OP.
Put it back. I missed it.
 

dramatis

Member
Theme the next Republican debate thread on Disney Princesses.

Although there might actually be more Republican candidates than Disney Princesses
 

Makai

Member
Theme the next Republican debate thread on Disney Princesses.

Although there might actually be more Republican candidates than Disney Princesses
The People are demanding Star Wars.

So of course I'm thinking about Star Trek. I'm gonna try to resist my troll urge
 

Makai

Member
Just skimmed that discussion, but Crab, you can't compare head to head election polls over a year out. Isn't UK's election cycle 6 weeks with no primaries? You bet American polls are accurate 6 weeks out from election day.
 

User 406

Banned
In unrelated revelations, the vast majority of people really don't get how politics work, do they?

Seriously, I've heard the same damn misplaced melodramatic disappointment in Obama from other kids who voted for him as well. Not even the slightest interest in understanding why he couldn't just snap his fingers and get all his policies through. Would be worth Bernie actually winning so they could keep the same whine going when he inevitably "let us down" through no fault of his own.
 

B-Dubs

No Scrubs
But my op is already perfect (and more appropriate). I'm keeping these deleted scenes for the next opportunity.

Well actually, a good chunk of it was just a lazy, more detailed retread of this: http://www.neogaf.com/forum/showpost.php?p=181550658&postcount=256 But then I snipped snipped. But maybe next time.

I liked the ham sandwich joke.

These things can be tricky, especially when something like Paris happens the day before and you've got to do a total rewrite as a result.
 
Seriously, I've heard the same damn misplaced melodramatic disappointment in Obama from other kids who voted for him as well. Not even the slightest interest in understanding why he couldn't just snap his fingers and get all his policies through. Would be worth Bernie actually winning so they could keep the same whine going when he inevitably "let us down" through no fault of his own.

The moment he doesn't give them everything they want, they'll tune out in 2018. Then they'll moan how...well, it will be someone's fault, I'm sure.

Hehe



104 is pretty damn high. Get well, adam.

Thanks. I think I have the flu. Everything hurts, and nothing is good. I want to pass out but I can't until my mom's physical therapist comes and leaves.

Anyone who says they do is a liar. Ain't an exact science for a reason.

There are, however, vastly varying degrees of ignorance.

Fair point. I just get entirely frustrated with this retconning of our entire political system. I like realism in my politics, but maybe that's just me being a pragmatic, neo-liberal something or other.
 
Fair point. I just get entirely frustrated with this retconning of our entire political system. I like realism in my politics, but maybe that's just me being a pragmatic, neo-liberal something or other.

And that's understandable, but even then, it gets messy very quickly. For example, what would be a realistic approach to securing more dem victories in local elections and midterms?
 
And that's understandable, but even then, it gets messy very quickly. For example, what would be a realistic approach to securing more dem victories in local elections and midterms?

I'm talking more along the lines of not understanding how basic things work. The idea that a President, for instance, can wave a magic wand and make all these things happen. Like, that's just not how this works. Or the idea that if the Supremes don't do what we want, we'll protest them and then they will. Or that massive shifts to the structure of our healthcare and education systems can just happen overnight.

That's the stuff that gets me. Not the idealism, because I think idealism channeled is an awesome thing. It's the "We're going to do this because we're right and right always wins!" that gets me. I've volunteered in quite a few campaigns that, while we were right, we weren't going to win. I still gave it my all, but I was realistic. I wasn't trying to find every loophole I could find to prove how everyone else was wrong.
 