And as I've tried to explain, why you would choose RCP of all outlets to do this is just bizarre, since RCP only takes a small subset of polls it happens to prefer and bases the race off of them. As we can see with the primary election, RCP's selection process is flawed: there is a GIANT gap between Carson and Trump that is not reflected at RCP, but is immediately apparent elsewhere. RCP's poll selection portrays the race as closer than it actually is, due to editorial bias.
This doesn't make much difference. You can use the Huffington Post's example (as I did), and the data still supports my argument. I used RCP because (to my knowledge) it simply takes an average of all significant pollsters equally.
The UK also has significantly less demographic diversity, so taking a straight unweighted poll doesn't work. The US is at least 37% nonwhite, with far more of the population being recent immigrants. You can bring up political parties all you want, but racial demographics being as skewed as they are in the US vs. the UK makes it FAR more difficult to poll, especially since about 19 million people in the US speak English "poorly" or "not at all."
And as I've tried to explain, there is no such thing as an unweighted poll, since what a likely voter IS varies from pollster to pollster as they attempt to predict which demographics are likely to turn out. What motivates white voters to turn out will NOT motivate minorities, and vice versa; in fact, a lot of the error in weighting in 2012 revolved around white voter "anger" which failed to appear.
You and I are talking about different things entirely. Let's talk about weighting. There are two types of weighting that are important when looking at an average of polls. The first is *internal weighting*. Say a pollster conducts a poll, gets through to 1,000 people, and 900 of those people are white and 100 black. This does not accurately reflect American demographics, so they weight each white person at 15/18ths of a vote and each black person at 2.5 votes.
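To make the arithmetic concrete, here is a minimal sketch of that internal weighting in Python, using the hypothetical numbers above (the 75% / 25% target shares are an assumption chosen so the weights come out to 15/18ths and 2.5; they are not real US demographics):

```python
# Minimal sketch of internal (post-stratification) weighting, assuming the
# hypothetical numbers above: 1,000 respondents, 900 white and 100 black,
# reweighted to an assumed target population of 75% white / 25% black.

sample = {"white": 900, "black": 100}          # raw respondent counts
target_share = {"white": 0.75, "black": 0.25}  # assumed population shares

total = sum(sample.values())

# Each respondent's weight = (target share) / (sample share), so the
# weighted sample matches the target demographics.
weights = {group: target_share[group] / (sample[group] / total) for group in sample}

for group, weight in weights.items():
    print(f"{group}: weight per respondent = {weight:.3f}, "
          f"weighted count = {weight * sample[group]:.0f}")
# white: weight per respondent = 0.833 (i.e. 15/18ths), weighted count = 750
# black: weight per respondent = 2.500, weighted count = 250
```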
The second is how you weight the polls themselves when producing your average (*external weighting*). Let's say we have two polls. One says the result will be 50 D / 50 R. The other says the result will be 54 D / 46 R. We know from the past X elections that the second polling company has typically performed better. So, when we produce an average of these polls, we weight the second poll more heavily than the first, such that our weighted average concludes the current state of the race is probably 53 D / 47 R, rather than the 52 D / 48 R unweighted average.
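Here is the same external weighting as a minimal sketch, using those two hypothetical polls; the 1:3 weights are an assumption chosen to reproduce the 53 D / 47 R figure, not anyone's actual methodology:

```python
# Minimal sketch of external weighting: averaging two hypothetical polls and
# giving more weight to the pollster with the better track record.
# The 1:3 weights are assumptions chosen to reproduce the 53 D / 47 R figure.

polls = [
    {"dem": 50, "rep": 50, "weight": 1},  # weaker prior track record
    {"dem": 54, "rep": 46, "weight": 3},  # has typically performed better
]

total_weight = sum(p["weight"] for p in polls)
dem_avg = sum(p["dem"] * p["weight"] for p in polls) / total_weight
rep_avg = sum(p["rep"] * p["weight"] for p in polls) / total_weight

print(f"Weighted average:   {dem_avg:.0f} D / {rep_avg:.0f} R")              # 53 D / 47 R
print(f"Unweighted average: {(50 + 54) / 2:.0f} D / {(50 + 46) / 2:.0f} R")  # 52 D / 48 R
```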
When I say 'unweighted', what I have been referring to is the second, NOT the first. When I say 'unweighted', I do not mean failing to properly stratify the different demographic samples in any given poll. This would be absurd. Any given poll must have its subsamples weighted properly to give an accurate result. In fact, I would argue that British polls do this *better* than American polls.
When I say 'unweighted', what I mean is this: if you compare a simple average of American polls to a simple average of British polls, without giving particular polls more importance according to their accuracy in prior elections, the British polls perform better.
However, I will go one step further. As far as I know, there were no major British pundits who were providing weighted averages (in the second sense) of UK polls - in other words, there was no Nate Silver. There were a number of people who made election predictions on the basis of polling data, but they did so directly from the data the polls gave them, rather than considering the prior record of those polls (mostly because the prior record of those polls was very good, as in 2010).
However, if they had, then British polls would have performed even better than they did and significantly better than American ones. This is because of weighting *in the first sense*. UK polls do better demographic sampling than American ones, simply because they stratify for more things.
It's not innately "cheating" to assert that you cannot simply grab polls at random without considering whether or not the pollster is any good. There are dozens of fly-by-night outfits that few people take seriously, with crazy weighting and high margins of error. Putting these on an equal footing with PPP, Pew, or even Rasmussen is idiocy.
This is my point, though. On average, American polling is bad. Yes, it improves when you say "well, we know X, Y and Z were bad in the past, so we'll discard them, and we know A, B, and C were inaccurate for this reason, so we will correct them ourselves".
But this doesn't say anything meaningful. Literally any developed Western country that has some basic level of polling will produce better results when you carefully handpick and moderate the polls you look at. You're effectively saying "Something is good when you take away the bad things!"
Here's a thought experiment. The year is 20XX. Britain and America are having an election. You have no prior knowledge of the success rate of any polling company, cannot adjust your predictions based on the past success of any poll, and can only take each poll at face value. Which will allow you to make more accurate predictions about the popular vote: American or British polls?
The answer is factually British ones.
Understanding the methodology behind each particular pollster and why they weight the way they do (Rasmussen was notorious for having a Republican "house effect" in past races) is essential to being able to criticize a poll as "good" or "bad." If you cannot do this (and it appears that you can't) then your opinion on US polling isn't worth the time it takes to type it out.
This is what I mean by us talking at cross purposes. I do not disagree with what you typed here. If you know the prior accuracy and bias of a particular polling organization, you can assign different importance to or even outright correct particular polls to end up with a better prediction. This does not mean that the *polls* are good. It means that the analysts are good at knowing how to tease out meaningful information from what are at face value inaccurate polls.
And as I've pointed out REPEATEDLY, you do not understand what you're talking about. "Likely voter models" vary from pollster to pollster. A likely voter at Gallup is not a likely voter at Pew, is not a likely voter at PPP. Racial demographics, age, and location all play into this in a way that does not occur in the UK. Again, you seem to be completely unaware of this, and assume you can simply average likely voters together because everyone is using the same model to determine what a likely voter is. This. Is. Wrong.
Are we clear?
I was pointing to the typical example of how likely voters are determined. I am not saying they are the same across polls; different pollsters obviously consider who is a likely voter differently. What I am saying, and what is factually true, is that the sample of likely voters changes over time as people reconsider their likelihood to vote. Almost all pollsters put *at least some weight* on self-reported likelihood to vote in their likely voter models (although there are some exceptions). In the final few days before an election, people can reconsider their likelihood to vote quite rapidly. There is a lot of data to suggest this is the case, which I can link you to. Therefore, polls that combine samples from three separate days can often miss important late developments in the race. This is true even when companies use different likely voter turnout models, something I do not deny.
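To illustrate the mechanics, here is a toy sketch of a likely-voter screen that leans on self-reported likelihood; the 1-10 scale, the cutoff, and the respondents are all invented for illustration, and every real pollster uses its own, more elaborate model (which is exactly why those models differ):

```python
# Toy sketch of a likely-voter screen based partly on self-reported likelihood
# to vote (invented 1-10 scale). The cutoff and respondents are hypothetical;
# real pollsters each use their own, more elaborate models.

respondents = [
    {"choice": "D", "self_reported_likelihood": 9},
    {"choice": "R", "self_reported_likelihood": 10},
    {"choice": "D", "self_reported_likelihood": 4},   # screened out as unlikely
    {"choice": "R", "self_reported_likelihood": 7},
]

LIKELY_CUTOFF = 7  # one pollster's assumed cutoff; another would choose differently

likely_voters = [r for r in respondents
                 if r["self_reported_likelihood"] >= LIKELY_CUTOFF]

dem = sum(r["choice"] == "D" for r in likely_voters)
rep = sum(r["choice"] == "R" for r in likely_voters)
print(f"Likely-voter sample: {dem} D / {rep} R out of {len(likely_voters)}")
```

Because self-reported likelihood can shift in the final days, the same respondents re-interviewed later could move in or out of that screened sample, which is the point about multi-day samples missing late movement.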
You are wrong.
EDIT: Also, just so we're clear, this isn't US vs. UK dick-waving; I have no interest in that. It is simply an observation that US polling is a lot less well-regulated than British polling, which is subject to a number of stringent laws. That produces a lot more cowboy pollster companies that drag the standard of the industry down, meaning that you have to rely more and more on a small number of experts to describe the race, rather than being able to interpret it directly from the polling data yourself. In many other respects, the US has better political punditry than the UK.