Electoral-Vote.com - FiveThirtyEight: An Assessment

I don't have a hate boner, but I'm REALLY curious if overrating a poll that caused a monstrous tightening was done just to drive clicks closer to election day. As I said on Twitter, they're either ahead of the curve, or such a tremendous outlier that it'll ruin their credibility going forward.

538 doesn't need clicks, with the way staffers cite them as being as accurate as internal polling and how several media outlets regularly work with and promote them. This "clicks" talk is just one of a handful of weak arguments presented because nobody can completely discredit him.
 
As a math nerd, I find it strange that people are dumping on Nate's model before we have a result. Why do we want all of the models to herd together? If Nate gets 49 or 50 states right again then this thread will be pretty funny.

The big thing that Silver has been talking about is how there are a lot more undecided voters this year compared to 2012 or 2008. 49-46 is a lot more certain than 43-40. His model weighs that gap more heavily than others do, and we get to see the results in 5 days. Nate is a man with a model, not some political prophet. His numbers don't move on a whim. His model isn't being tweaked. He just has less confidence than others, and that's okay. Maybe he's right, maybe he's wrong, but it's hard to assign blame when we don't even have a result yet, especially with some of the mediocre-to-bad polls on the Dem side that have been coming out these last few days.
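To make the 49-46 vs 43-40 point concrete, here's a minimal toy sketch (it assumes undecideds break uniformly at random, which is my own simplification, not 538's actual model):

```python
import random

def win_prob(lead_pct, trail_pct, n_sims=100_000):
    """Chance the leader holds on if the undecideds break at random."""
    undecided = 100 - lead_pct - trail_pct
    wins = 0
    for _ in range(n_sims):
        split = random.random()  # assumed: leader's share of undecideds is uniform on [0, 1]
        margin = (lead_pct + undecided * split) - (trail_pct + undecided * (1 - split))
        wins += margin > 0
    return wins / n_sims

print(win_prob(49, 46))  # ~0.80: few undecideds, a 3-point lead is fairly safe
print(win_prob(43, 40))  # ~0.59: same 3-point lead, but 17% undecided makes it far shakier
```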

This gets at one of the bigger issues here: people have problems understanding modelling and uncertainty at a fundamental level. So much of this is really just luck. We know very well that none of these polls are without bias, so we're trying to synthesize and predict from polls that may not be fully representative of the population and may carry further bias on top of that.

For predicting weather, we have multiple models that forecasters weave together to create the single, often deterministic forecast they give you. There's a lot of value in having these multiple discrete models, because they're all biased in different ways.

Even if we have the result, it doesn't really mean much about him. His model may be better next time than someone else's that was better this time. There's a common saying in some of these fields, "all models are wrong, but some are useful", and that's probably always going to be true. Models are inherently simplifications of real-world phenomena that involve chaotic processes and change over time.
 
538 doesn't need clicks, with the way staffers cite them as being as accurate as internal polling and how several media outlets regularly work with and promote them. This "clicks" talk is just one of a handful of weak arguments presented because nobody can completely discredit him.

Every commercial website needs clicks, especially ones owned by Disney/ESPN following the Grantland model. Regardless of whether 538 is engaging in clickbait, they are certainly under pressure to drive traffic to their site.
 
So the claim is that since 538 benefits from polls changing, it's in their personal interest to skew their numbers toward massive poll swings to get more clicks? Sounds very conspiracy-like. 538 seems to be very math-driven and not politically biased. I would be surprised if 538 were making up numbers to get more clicks.

I was a huge 538 fanboy, and still kind of want to be, but seeing how drastically the forecast changes based on each random-ass survey that may or may not even be valid is pretty disconcerting.

Their model is the most pessimistic of them all, and there doesn't really seem to be a good reason for it. In fact it damn near seems like bad polls (or outliers) are weighted more heavily, which goes against basic aggregation statistics.
 
Is "538 is doing it for the clicks for ESPN" the new Polygon gave that game an 8 for teh clicks maannn. It just seems like people are rejecting 538 because it's given them more anxiety about the election and they would rather find data that conforms to what they want to see. It's a more conservative model and maybe that's ok.

It's perfectly fine to criticize the model or criticize bad Twitter punditry, but it seems like much of this is a knee-jerk reaction to something that slightly threatens your emotional investment in the election. Kinda like the many review threads where people spend pages attacking the one or two outlier reviews because IGN gave that game you preordered and were super hyped about a 7. So now you have to nitpick the shit out of the review and join others in relentlessly mocking the reviewer.

Maybe some of y'all need to stop nervously refreshing the forecast.

The McMullin article wasn't clickbait?
 
The McMullin article wasn't clickbait?

...no?

The article title was:

How Evan McMullin Could Win Utah And The Presidency

The article was about... how Evan McMullin could win Utah and the presidency.

It's an unlikely, implausible edge case. But the article doesn't really ever imply it's anything but that, and it's not clickbait in any traditional sense, which normally implies the title is misleading or omits essential information in some way.
 
What worries me at this point is that there doesn't seem to be a point to actually looking at 538 until a few days before the election if their model is so prone to massive changes in a short period of time.

Anything they write before then is basically clickbait if it's practically guaranteed that five days later we'll see a radically different probability.
You can't evaluate a forecaster based on a single event.
A single event out of three in the website's history. Yes, you can and absolutely should evaluate 538 if it ends up massively off the mark for this Presidential Election.
 
A single event out of three in the website's history. Yes, you can and absolutely should evaluate 538 if it ends up massively off the mark for this Presidential Election.

What worries me at this point is that there doesn't seem to be a point to actually looking at 538 until a week before the election if their model is so prone to massive changes in a short period of time.
Look at it Nov 7 at night. Right now the 3 most likely outcomes based on their model are a narrow Clinton win, a 320 Clinton win, and a 360 Clinton win. Last week a 360 Clinton win was leading.

The same uncertainty that gives Trump a better chance in their model also gives Clinton the best chance for a blowout.
 
Look at it Nov 7 at night. Right now the 3 most likely outcomes based on their model are a narrow Clinton win, a 320 Clinton win, and a 360 Clinton win. Last week a 360 Clinton win was leading.

The same uncertainty that gives Trump a better chance in their model also gives Clinton the best chance for a blowout.
I'm not concerned about their likelihood of Clinton winning. I'm concerned about the worth of a forecaster that can only predict rain once dark clouds have already appeared.
 
That's basically all of them.
Then all of them should stop acting like their election predictions for 364 days of the year mean jack shit. Though I guess that doesn't pay the bills. Instead, people I know are constantly checking 538. News outlets are constantly citing them.
 
If 538's model is static and the same all election long, then you can't accuse them of making the race close for clicks, right?

You can only accuse their model of being shitty in the first place, but then you should have been saying that the whole cycle, not just now.
 
If 538's model is static and the same all election long, then you can't accuse them of making the race close for clicks, right?

You can only accuse their model of being shitty in the first place, but then you should have been saying that the whole cycle, not just now.

Plenty of people have been doing just that. There were threads on this very board discussing how over-reactive the model was in July. Nothing being said about 538's model now is new. Silver has had a target on his back for over a year; this is not a new thing! And criticism of 538 for not living up to its premise has been there since its very launch.
 
Then all of them should stop acting like their election predictions for 364 days of the year mean jack shit. Though I guess that doesn't pay the bills. Instead, people I know are constantly checking 538. News outlets are constantly citing them.

But that's sort of the central question. Is that their job, or is it the job of the people to understand that projections have uncertainty in them? People understand with the weather that the further out you are, the more uncertainty there is. So why do people fail to translate that knowledge to statistics?

A huge part of this comes from people just not understanding randomness, plus we know pretty well that people's intuitive ideas about randomness aren't actually that random. So when people see that a model was correct or mostly correct X out of X times, they put more faith in it, but that's such a shitty metric.
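For what it's worth, the standard alternative to "called X of X races right" is a proper scoring rule like the Brier score, which grades the probabilities themselves. A minimal sketch with made-up forecast numbers:

```python
def brier(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; a constant 50/50 forecast scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical forecasters, both "right" in all four races (the favorite won each time):
confident = [0.95, 0.95, 0.95, 0.95]
hedged    = [0.60, 0.60, 0.60, 0.60]
outcomes  = [1, 1, 1, 1]
print(brier(confident, outcomes))  # 0.0025 -- confidence is rewarded...
print(brier(hedged, outcomes))     # 0.16   -- ...but only if it keeps being right
```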
 
If 538's model is static and the same all election long, then you can't accuse them of making the race close for clicks, right?

You can only accuse their model of being shitty in the first place, but then you should have been saying that the whole cycle, not just now.
At least a less swingy forecast is actually making a prediction.

We saw 538 swing massively in the last week and we have 5 days until the election. Their model might as well be "Who the fuck knows" at this point.
 
But that's sort of the central question. Is that their job, or is it the job of the people to understand that projections have uncertainty in them? People understand with the weather that the further out you are, the more uncertainty there is. So why do people fail to translate that knowledge to statistics?

A huge part of this comes from people just not understanding randomness, plus we know pretty well that people's intuitive ideas about randomness aren't actually that random. So when people see that a model was correct or mostly correct X out of X times, they put more faith in it, but that's such a shitty metric.
That's the metric 538 constantly touts for its reputation though.
 
At least a less swingy forecast is actually making a prediction.

We saw 538 swing massively in the last week and we have 5 days until the election. Their model might as well be "Who the fuck knows" at this point.

You are not understanding forecasting. A "swingy" model isn't a problem, if it's reacting to actual changes in the state of the race. Voters do change their minds during these things. A model sure as heck better have adjusted its prediction in wake of the "grab 'em by the pussy" tape.
 
But that's sort of the central question. Is that their job, or is it the job of the people to understand that projections have uncertainty in them? People understand with the weather that the further out you are, the more uncertainty there is. So why do people fail to translate that knowledge to statistics?

A huge part of this comes from people just not understanding randomness, plus we know pretty well that people's intuitive ideas about randomness aren't actually that random. So when people see that a model was correct or mostly correct X out of X times, they put more faith in it, but that's such a shitty metric.

Let me ask you this (and I don't mean this as a deflection): is the unusual percentage of undecideds truly the reason why the model has been swinging wildly? When Clinton was predicted with a nearly 90% chance of winning a couple of weeks ago, how were the undecideds being accounted for? Is the proportion of undecideds really so high that they're essentially the deciding factor in the election?

Unless I'm misunderstanding, the main argument seems to be that the closer the candidates are to 50/50, the more uncertainty there is. And the high uncertainty is due to a high proportion of undecideds. But then what explains the weeks when Clinton had an extremely high chance? That kind of argues the opposite, doesn't it? That there is little uncertainty and the undecideds will go for Clinton? Then why count them as undecideds in the first place?

If it's a model that adheres that strongly to the uncertainty factor, I don't know how we were getting anything other than a 60-70% Clinton win chance in the first place. It seems the model itself is not always actually accounting for uncertainty, given the times when the prediction extremely favored her.
 
That's the metric 538 constantly touts for its reputation though.

That's how most people do it, because it's what sticks with people, but how much should people just "understand"? This is a theoretical question about what people understand about uncertainty.
 
I think you can make an educated assessment of 538's model and say that it adjusts too drastically to polling in a way that other forecasts do not. Maybe that's a critique of the efficiency of the model.

I don't think anybody can judge (right now) if 538's model is flat out wrong. It still says that Clinton is favored over Trump to win the race in 5 days. We'll only be able to dig into it more once the election is over, and we have actual numbers to back up theoreticals.
 
You are not understanding forecasting. A "swingy" model isn't a problem, if it's reacting to actual changes in the state of the race. Voters do change their minds during these things. A model sure as heck better have adjusted its prediction in wake of the "grab 'em by the pussy" tape.
They also heavily adjusted their prediction in response to convention bumps in July. We've seen the post-pussy-tape and post-debate numbers drop massively despite the lack of any large scandal.

If uncertainty matters so much, then stop making 90% predictions, because it's not like the uncertainty just disappeared for those times.
 
At least a less swingy forecast is actually making a prediction.

We saw 538 swing massively in the last week and we have 5 days until the election. Their model might as well be "Who the fuck knows" at this point.
You literally don't know what a model is.
 
...no?

The article title was:

How Evan McMullin Could Win Utah And The Presidency

The article was about... how Evan McMullin could win Utah and the presidency.

It's an unlikely, implausible edge case. But the article doesn't really ever imply it's anything but that, and it's not clickbait in any traditional sense, which normally implies the title is misleading or omits essential information in some way.

I disagree that the title of the article is not clickbait, but we'd be arguing opinion and semantics at this point. It's an interesting article, but there's really no practical way he "COULD" win the Presidency, so I find it a little misleading. I guess I have a higher standard for 538 than most sites; that title wouldn't seem out of place on HuffPo or something.
 
I like having one model that focuses on being ahead of any movement by emphasising trends at the expense of consistency, and another that focuses on consistency at the expense of being slow to react to changes in the election. You just need the context to get the right information from each.

I won't defend the bad punditry of Nate Silver, but I do think the model has its uses.
 
Let me ask you this (and I don't mean this as a deflection): is the unusual percentage of undecideds truly the reason why the model has been swinging wildly? When Clinton was predicted with a nearly 90% chance of winning a couple of weeks ago, how were the undecideds being accounted for? Is the proportion of undecideds really so high that they're essentially the deciding factor in the election?

Unless I'm misunderstanding, the main argument seems to be that the closer the candidates are to 50/50, the more uncertainty there is. And the high uncertainty is due to a high proportion of undecideds. But then what explains the weeks when Clinton had an extremely high chance? That kind of argues the opposite, doesn't it? That there is little uncertainty and the undecideds will go for Clinton? Then why count them as undecideds in the first place?

If it's a model that adheres that strongly to the uncertainty factor, I don't know how we were getting anything other than a 60-70% Clinton win chance in the first place. It seems the model itself is not always actually accounting for uncertainty, given the times when the prediction extremely favored her.

What has happened in the past couple of weeks is that the voters who were declaring themselves undecided before are declaring themselves Republicans now. In many/most polls, Hillary's numbers themselves have not gone down. But the undecideds have gone down, and Trump's numbers have gone up (and Johnson's have gone down some). Basically, Republicans are bringing back into the fold some of the "never Trump" people who were thinking they might skip this election, cross the line and vote for Hillary (hence, they were undecided), or vote for Johnson. That's why her predictive numbers are going down: the number of undecideds is not only shrinking, it's swinging Republican.
 
As a math nerd, I find it strange that people are dumping on Nate's model before we have a result. Why do we want all of the models to herd together? If Nate gets 49 or 50 states right again then this thread will be pretty funny.

The big thing that Silver has been talking about is how there are a lot more undecided voters this year compared to 2012 or 2008. 49-46 is a lot more certain than 43-40. His model weighs that gap more heavily than others do, and we get to see the results in 5 days. Nate is a man with a model, not some political prophet. His numbers don't move on a whim. His model isn't being tweaked. He just has less confidence than others, and that's okay. Maybe he's right, maybe he's wrong, but it's hard to assign blame when we don't even have a result yet, especially with some of the mediocre-to-bad polls on the Dem side that have been coming out these last few days.

while that is a fair point, the problem is that his model is pretty unstable, so how do you count him getting 49 states right?

a predictor that gets 49 states right the day before is cool and all, but it's useless if it only got 40 states right in the months before the election. for example, how many times has Florida flipped? even the supposedly more stable "polls-plus" model has had Florida flip wildly this year

I think we need a new metric that also charts how long before the election each system got each state right. on Florida alone 538 has flipped four times, while the Upshot has had it on Clinton since July. if Clinton wins Florida, the Upshot should get way more points in this metric, and obviously, if Trump wins Florida, the Upshot should get 0 points, but 538 should get 0 points on either result. or fractional points depending on how much time it predicted the winner correctly minus the time it predicted the loser, dunno

yes, 538 will probably predict the election perfectly the night before the actual election, but that's not good enough anymore, and it is not enough to measure its success that way
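The metric proposed above is easy to prototype. Here's a rough sketch — the scoring rule is my own reading of the proposal, and the Florida timelines are hypothetical, not real 538/Upshot output:

```python
def time_weighted_score(daily_calls, actual_winner):
    """Fractional credit: days calling the eventual winner minus days
    calling the loser, as a share of all days tracked (floored at 0)."""
    right = sum(call == actual_winner for call in daily_calls)
    wrong = len(daily_calls) - right
    return max(0.0, (right - wrong) / len(daily_calls))

# Hypothetical 10-day Florida timelines, NOT real model output:
upshot_fl = ["Clinton"] * 10                  # steady call since July
five38_fl = ["Clinton", "Trump"] * 5          # flip-flopping
print(time_weighted_score(upshot_fl, "Clinton"))  # 1.0
print(time_weighted_score(five38_fl, "Clinton"))  # 0.0
```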
 
I like having one model that focuses on being ahead of any movement by emphasising trends at the expense of consistency, and another that focuses on consistency at the expense of being slow to react to changes in the election. You just need the context to get the right information from each.

I won't defend the bad punditry of Nate Silver, but I do think the model has its uses.
I've been saying this for the past 3 days. Look at all the data available and come to a conclusion or belief from there.

My favorite aggregators are Votamatic, Pollyvote, and Princeton Electoral Consortium.
 
I like having one model that focuses on being ahead of any movement by emphasising trends at the expense of consistency, and another that focuses on consistency at the expense of being slow to react to changes in the election. You just need the context to get the right information from each.

I won't defend the bad punditry of Nate Silver, but I do think the model has its uses.
I guess that is the nice thing about a model like that.

The issue is when a "trend" immediately shifts back after a week. Sure, the model will be ahead of any trends, but that's only because it treats everything as a trend.
 
Let me ask you this (and I don't mean this as a deflection): is the unusual percentage of undecideds truly the reason why the model has been swinging wildly? When Clinton was predicted with a nearly 90% chance of winning a couple of weeks ago, how were the undecideds being accounted for? Is the proportion of undecideds really so high that they're essentially the deciding factor in the election?

Unless I'm misunderstanding, the main argument seems to be that the closer the candidates are to 50/50, the more uncertainty there is. And the high uncertainty is due to a high proportion of undecideds. But then what explains the weeks when Clinton had an extremely high chance? That kind of argues the opposite, doesn't it? That there is little uncertainty and the undecideds will go for Clinton? Then why count them as undecideds in the first place?

If it's a model that adheres that strongly to the uncertainty factor, I don't know how we were getting anything other than a 60-70% Clinton win chance in the first place. It seems the model itself is not always actually accounting for uncertainty, given the times when the prediction extremely favored her.

I can't tell you much about their specific model, but I can tell you that in some domains the model is just the first step in the projections. In some fields, like weather forecasting, humans then interpret, adjust, and communicate the models based on their understanding of each model's biases. I don't know if he's hand-adjusting afterwards, but it's not out of the realm of possibility, nor is it inherently wrong.

We will never have good causal explanations for the shifts and variance. Some of it will be sampling variance, some will be bias from pollsters or the specific wording of a poll, and some will be bias from whatever was in the news that week.

Yes, in general, the closer a state is to 50/50, the more uncertainty, because in all but two states getting the majority means getting all the electoral votes. The more states close to that threshold, the more overall uncertainty there will be (eeh, somewhat true; it depends on which states and how many projected electoral votes someone gets).
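To put a number on the winner-take-all point: treating a state as a Bernoulli trial worth all of its electors, the variance it contributes peaks at 50/50. A toy calculation (Florida's 29 electors are real; the probabilities are made up):

```python
def ev_variance(p_win, electors):
    """Variance of the electoral-vote haul from one winner-take-all state,
    modeled as a Bernoulli trial worth `electors` votes."""
    return electors ** 2 * p_win * (1 - p_win)

print(ev_variance(0.50, 29))  # tossup Florida: 210.25
print(ev_variance(0.95, 29))  # safe state with the same electors: ~40
```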
 
Is there any chance of reversing this freefall in Hillary's polls since the FBI's attempt to throw the election? Pretty scary how effective it's been and how shady it is.
 
I can't tell you much about their specific model, but I can tell you that in some domains the model is just the first step in the projections. In some fields, like weather forecasting, humans then interpret, adjust, and communicate the models based on their understanding of each model's biases. I don't know if he's hand-adjusting afterwards, but it's not out of the realm of possibility, nor is it inherently wrong.

He's not adjusting it, for whatever it's worth. Nate mentioned on the podcast the other day that the final output of the prediction model would be the same even if he were to suddenly die, as the only thing it needs is someone feeding it the poll numbers.
 
I think you can make an educated assessment of 538's model and say that it adjusts too drastically to polling in a way that other forecasts do not. Maybe that's a critique of the efficiency of the model.

I don't think anybody can judge (right now) if 538's model is flat out wrong. It still says that Clinton is favored over Trump to win the race in 5 days. We'll only be able to dig into it more once the election is over, and we have actual numbers to back up theoreticals.

We will not have sufficient data to make any sort of meaningful critique unless it is emphatically wrong. And since 538's model is a lot more conservative than most of the others, it cannot be emphatically wrong without all of the other models being wrong as well.

I guess that is the nice thing about a model like that.

The issue is when a "trend" immediately shifts back after a week. Sure, the model will be ahead of any trends, but that's only because it treats everything as a trend.

The thing is that we have no way of knowing for certain whether the shift back is a reversion to the mean or a random walk, and your criticism is only valid if it's the former.
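A quick illustration of why the two are so hard to distinguish: over a short window, a mean-reverting process and a plain random walk can produce near-identical "dip and snap back" paths. A toy simulation with arbitrary parameters of my own choosing:

```python
import random

def random_walk(n, sigma=0.5):
    x, path = 0.0, []
    for _ in range(n):
        x += random.gauss(0, sigma)               # each shift is permanent
        path.append(x)
    return path

def mean_reverting(n, theta=0.2, sigma=0.5):
    x, path = 0.0, []
    for _ in range(n):
        x += -theta * x + random.gauss(0, sigma)  # pulled back toward 0
        path.append(x)
    return path

# With only ~14 daily poll readings, telling which process generated a
# "shift then shift back" is close to hopeless, by eye or by test:
print([round(v, 2) for v in random_walk(14)])
print([round(v, 2) for v in mean_reverting(14)])
```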
 
I think most people who follow this kinda thing assumed this would be the case when ESPN bought them out. Sam Wang or Nate Cohn are where it's at now.

Wrong.

 
while that is a fair point, the problem is that his model is pretty unstable, so how do you count him getting 49 states right?

a predictor that gets 49 states right the day before is cool and all, but it's useless if it only got 40 states right in the months before the election. for example, how many times has Florida flipped? even the supposedly more stable "polls-plus" model has had Florida flip wildly this year

I think we need a new metric that also charts how long before the election each system got each state right. on Florida alone 538 has flipped four times, while the Upshot has had it on Clinton since July. if Clinton wins Florida, the Upshot should get way more points in this metric, and obviously, if Trump wins Florida, the Upshot should get 0 points, but 538 should get 0 points on either result

yes, 538 will probably predict the election perfectly the night before the actual election, but that's not good enough anymore, and it is not enough to measure its success that way

But of course this is nonsense. Elections aren't like the weather; there's endogeneity here. If a campaign sees that a generally accurate prediction has them losing a state three months out, then they can and *do* divert funds and visits and effort and policy toward courting voters in that state.

People respond to campaigns, models respond to people, campaigns respond to models.
 
The "freefall" is scary and awful but there are built-in floors to how low Hillary can drop and Trump can rise that are about the candidates and not this "scandal" the media are pushing because they want a horse race. Not only that but over 24 million people have voted already. Plus, 538 has been a bit of an outlier on how good Trump's chances really are.
 
Nate Gold has earned his reputation and deserves to be trusted until he gets it wrong.

Why would getting it wrong take away any trust? It's possible that a streak of successes is just randomness. If we abandon models the first time they're wrong, we'll forever be chasing randomness (which is largely the case anyway).

Any good model is going to get things wrong, and hopefully it will be improved.
 
Huh? She's been losing points every day. Look at the projections.

What about the other very reputable projections that have her holding steady? It's really only Nate's aggregate that has her dropping. 538 is subject to variance, while other aggregates are more bias-driven because of their confidence.
 
Why would getting it wrong take away any trust? It's possible that a streak of successes is just randomness. If we abandon models the first time they're wrong, we'll forever be chasing randomness (which is largely the case anyway).

Any good model is going to get things wrong, and hopefully it will be improved.
So why should getting it right build trust?
 
while that is a fair point, the problem is that his model is pretty unstable, so how do you count him getting 49 states right?

a predictor that gets 49 states right the day before is cool and all, but it's useless if it only got 40 states right in the months before the election. for example, how many times has Florida flipped? even the supposedly more stable "polls-plus" model has had Florida flip wildly this year

I think we need a new metric that also charts how long before the election each system got each state right. on Florida alone 538 has flipped four times, while the Upshot has had it on Clinton since July. if Clinton wins Florida, the Upshot should get way more points in this metric, and obviously, if Trump wins Florida, the Upshot should get 0 points, but 538 should get 0 points on either result. or fractional points depending on how much time it predicted the winner correctly minus the time it predicted the loser, dunno

yes, 538 will probably predict the election perfectly the night before the actual election, but that's not good enough anymore, and it is not enough to measure its success that way

This would be fine if people's opinions were static, but they're not. If the issue is that 538 keeps going back and forth on Florida and that's seen as wishy-washy, it's because the model believes the state is a tossup. Now, if 538 predicts Florida at 50/50 and it gets the winner right but the margin is 60-40, then that's an inaccurate model.

If the election had been held two weeks ago, Clinton probably blows him out of the water. Unfortunately, people's opinions are changing like the weather with this election. This election is a lot more unstable for various reasons than previous ones, and I think people are having a hard time dealing with that.

Nate's written about this at length if anyone wants to indulge. Some people just want the numbers, and that's fine too. The standard deviation of this election is much higher than in the past. The Clinton/Trump gap is actually about as large as Obama/Romney, but that race was wayyyyy more stable, so it was easier to have high confidence.
 
What about the other very reputable projections that have her holding steady? It's really only Nate's aggregate that has her dropping. 538 is subject to variance, while other aggregates are more bias-driven because of their confidence.
PEC has her dropping too. I'll agree 538 is freaking me out a bit though. Florida, Iowa, Ohio all going red and North Carolina going white is alarming.
 
Why would getting it wrong take away any trust? It's possible that a streak of successes is just randomness. If we abandon models the first time they're wrong, we'll forever be chasing randomness (which is largely the case anyway).

Any good model is going to get things wrong, and hopefully it will be improved.

But even a Trump victory isn't evidence of the model being wrong. Let's say we play Russian roulette: I give you a six-chamber gun and tell you there's an 83% chance that you'll survive. If you then die, that isn't evidence that my projection was wrong; I was also telling you you had a 17% chance of dying. Unlikely, but totally possible. Unlikely things happen.
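The arithmetic checks out (one round in six chambers is 1/6 ≈ 17%), and a one-off simulation makes the point that unlikely outcomes still happen all the time:

```python
import random

trials = 1_000_000
# One round loaded in a six-chamber cylinder: "death" when chamber 0 comes up.
deaths = sum(random.randrange(6) == 0 for _ in range(trials))
print(deaths / trials)  # ~0.167: the "unlikely" outcome shows up in 1 of every 6 games
```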
 
This election has actually been remarkably stable. Clinton has led the whole way through since the spring, and there hasn't been that much fluctuation by historical standards.
 
But of course this is nonsense. Elections aren't like the weather; there's endogeneity here. If a campaign sees that a generally accurate prediction has them losing a state three months out, then they can and *do* divert funds and visits and effort and policy toward courting voters in that state.

People respond to campaigns, models respond to people, campaigns respond to models.

while this is fair, Nate's model is already trying to compensate for campaign changes during the election; his "polls-plus" model at least adjusts for the bias after either convention (Nate Silver never shuts up about convention bumps), and if he had a way to model and adjust for every press release the FBI has made, he would do so

so I don't see why you couldn't judge the model by how well it did far away from the actual election. after all, there is only one election on one date, so every poll, model, or pundit is trying to predict that one outcome; more capable models and pundits will try to predict the wild swings in the campaigns

if the feedback cycle of model -> campaign -> people is too strong, then the model will become useless at actually predicting, because the campaigns are abusing it. the ideal would be building a model that also considered how the campaigns will divert funds and effort given early results, and thus predicted the election better and responded less wildly to small changes in campaigning and polls. yeah, that sounds pie-in-the-sky, but predicting each state result perfectly the night before also sounded pie-in-the-sky in the 20th century

'cause, what good is a model that cannot tell you who is going to win Florida for months, until the very night before the election? it is ok if the polls swing wildly, but the model should offer some stability; even the polls-plus model, which according to Nate is more conservative, has flipped back and forth

I actually think that both campaigns are following 538, and it has resulted in a weird feedback cycle that somehow hasn't affected the other models, probably because Silver's model includes silly things like convention bumps. maybe we are facing a problem where simpler is better: trying to predict what the campaigns will do is a wild goose chase, and just quietly aggregating polls works better. I mean, the Clinton campaign at one point diverted funds from states the model told them were safe, and a drop in the polls in those states followed. normally that would indicate the model follows reality closely, but maybe it means the polls 538 weights more heavily are more easily influenced by hidden factors in the campaign. maybe the people polled simply lived near campaign centers and 538 gave more weight to those, dunno. we already heard of one poll where a single black Trump supporter was an outlier that threw the numbers off massively

This would be fine if people's opinions were static, but they're not. If the issue is that 538 keeps going back and forth on Florida and that's seen as wishy-washy, it's because the model believes the state is a tossup. Now, if 538 predicts Florida at 50/50 and it gets the winner right but the margin is 60-40, then that's an inaccurate model.

the 538 model already seems to include provisions to control for people changing opinions and for wide opinion trends, but those provisions seem to be making the model worse

yes, opinions change, but if another model can predict 49 states with accuracy similar to 538's and doesn't have Florida and other states flip-flopping wildly, then that other model is clearly superior to 538's. hell, I'd say that a model that gets one more state wrong than 538 but is more stable is still better

only Silver('s model) seems to think the campaign is significantly less stable; other poll aggregators seem to think the result is going to be clear

edit: OF COURSE in the middle of writing this post Florida turned blue on 538, at least on the polls-only model

I guess I helped
 
I guess I don't see instability as a sign of model weakness. Speaking subjectively, this has been a pretty unusual election! Emails, leaks, Russia, taxes, sexual assault, you name it. It's hard to forecast something as unusual as this year when there isn't really precedent for these kinds of things. So, I don't necessarily dismiss model uncertainty as bad math.

There are lots of other measures we can use, e.g. historically high unfavorable numbers for both candidates, which would suggest uncertainty as well. That's not to say that I believe Nate's numbers to be gospel, but I don't think being lower than the rest invalidates his findings.
 
I'm not concerned about their likelihood of Clinton winning. I'm concerned about the worth of a forecaster that can only predict rain once dark clouds have already appeared.

I think I figured out the problem. You want Nate to build a model that tells the future; Nate built a model that takes polls and calculates the chances of who's going to win.

Nate isn't in the business of telling the future, only what the current data says.
 