Electoral-Vote.com - FiveThirtyEight: An Assessment

I just took a look at Ohio, and they have Hillary UP even though Trump won 4 of the last 5 polls (and 6 of the last 10). And out of those 10 polls, Hillary literally only won ONE POLL. How in the hell does that make sense?

This is dangerous confirmation bias... and I've seen it everywhere here. RCP and 538 were the go-to sites when everyone talked about polling aggregates earlier this year. And now that they aren't reflecting the "Hillary will win by a landslide" narrative, people started trashing them.

Always trust the Wang.

I kind of want Ohio to go red. That way when Clinton wins, people can stop saying "As Ohio goes, so does the nation"
 
Is the consensus that, had it been anyone near normal for the Republicans, Clinton would be in all sorts of trouble?

Also, what does that say about her? I know they will claim the victory, but c'mon, her opponent should have been lapped 10 times already, not just behind on the finishing straight.
 
Is the consensus that, had it been anyone near normal for the Republicans, Clinton would be in all sorts of trouble?

Also, what does that say about her? I know they will claim the victory, but c'mon, her opponent should have been lapped 10 times already, not just behind on the finishing straight.


It isn't a Clinton thing, it's a polarization thing. Trump is getting 36-38% just on having an (R) next to his name on the ballot. That's why he can bomb all the debates, pick a fight with a war hero's family, and brag about sexual assault without the bottom falling out.
 
I'm not sure whether it's more due to her weaknesses (which for some include being a woman) or to the fact that there are a lot of people who will vote (R) no matter what.

It's been bonkers to see my very religious extended family tie themselves in knots to justify their choice.
 
Is the consensus that, had it been anyone near normal for the Republicans, Clinton would be in all sorts of trouble?

Also, what does that say about her? I know they will claim the victory, but c'mon, her opponent should have been lapped 10 times already, not just behind on the finishing straight.

well the republican party did get to pit Trump against a person they've been striving to depict as the antichrist for twenty years, so the Republicans had a highly-abnormal advantage to take the edge off of their highly-abnormal disadvantage.
 
I just took a look at Ohio, and they have Hillary UP even though Trump won 4 of the last 5 polls (and 6 of the last 10). And out of those 10 polls, Hillary literally only won ONE POLL. How in the hell does that make sense?

This is dangerous confirmation bias... and I've seen it everywhere here. RCP and 538 were the go-to sites when everyone talked about polling aggregates earlier this year. And now that they aren't reflecting the "Hillary will win by a landslide" narrative, people started trashing them.

That's what all of this sounds like to me too. None of us truly knows what's behind each of these different statistical models, and cherry-picking polls or early-voting results to back up ideas isn't helpful. Perhaps looking at an aggregate of the aggregate models is useful (see the NY Times' model), but otherwise you can't say what's right or wrong until we have the final vote.
 
Here's a good comparison of the most popular models in the tight states.

[Image: comparison chart of the models' projections in the tight states]
 
Is the consensus that, had it been anyone near normal for the Republicans, Clinton would be in all sorts of trouble?

Also, what does that say about her? I know they will claim the victory, but c'mon, her opponent should have been lapped 10 times already, not just behind on the finishing straight.

It would probably be the same. A generic Republican would be doing better with college-educated women, but would have nowhere near the advantage that Trump has amassed with non-college whites.

Trump has a unique appeal to working class whites that would not have been replicated by a Mitt Romney or Paul Ryan type.

Compared to the current projections, against a traditional Republican Hillary would likely win Iowa, Ohio, and Maine D2, but lose North Carolina.
 
It isn't a Clinton thing, it's a polarization thing. Trump is getting 36-38% just on having an (R) next to his name on the ballot. That's why he can bomb all the debates, pick a fight with a war hero's family, and brag about sexual assault without the bottom falling out.

Events may have played out completely differently with a different candidate as well. I think it much less likely that Russia would try to put its finger on the scale for a Neocon like Rubio. There might have been no DNC hack / Wikileaks dogging Clinton in this alternate timeline.
 
Can you provide a technical criticism of his model with specifics on what you think should be different?

It is too sensitive. I remember Nate saying that this lets them detect trends better, but it also makes the model very sensitive to statistical noise. Another issue is that it oscillates wildly. A good prediction model should be stable. The reality is that people's opinions don't change that much. So why are his win probabilities changing so much? Probably because the model is really sensitive to statistical noise.

Here are some good articles about the stability of this election:

http://election.princeton.edu/2016/09/29/the-incredibly-stable-2016-campaign/
http://election.princeton.edu/2016/08/03/why-is-the-pec-polls-only-forecast-so-stable/

I trust Sam Wang and Nate Cohn more, as they have better credentials. They are showing a much higher win probability than Nate's. Can you provide technical criticisms of their models? What is the reason to trust 538 over theirs? It should also be noted that Wang started doing his analysis before Nate did and has been very accurate over the years. I know Nate Silver got very popular (IMO his website and data visualization are way better, which helps communicate the information), but he isn't as good as Sam Wang IMO.

Here are the technical details of Sam Wang's model: http://election.princeton.edu/faq/#metamargin
 
It would probably be the same. A generic Republican would be doing better with college-educated women, but would have nowhere near the advantage that Trump has amassed with non-college whites.

Trump has a unique appeal to working class whites that would not have been replicated by a Mitt Romney or Paul Ryan type.

Compared to the current projections, against a traditional Republican Hillary would likely win Iowa, Ohio, and Maine D2, but lose North Carolina.

The fact that North Carolina's Republican governor is obsessed with his constituents' genitalia is a pretty big factor this year. I think Dems would do well in NC no matter who is at the top of either ticket.
 
It is too sensitive. I remember Nate saying that this lets them detect trends better, but it also makes the model very sensitive to statistical noise. Another issue is that it oscillates wildly. A good prediction model should be stable. The reality is that people's opinions don't change that much. So why are his win probabilities changing so much? Probably because the model is really sensitive to statistical noise.

Silver's response on that critique has been that the polls themselves have been very volatile this election, compared to those in the past, and that if a model is based on polling data, then changes in the data should affect the prediction.

It is really hard to know whether these changes are actually noise as you speculate, or if they represent real movements in people's opinions. Do you actually believe we will see the exact same outcome if the election were held today as we would if it had been held two weeks ago, at the height of Clinton's poll numbers?

I trust Sam Wang and Nate Cohn more, as they have better credentials.

How so?

They are showing a much higher win probability than Nate's. Can you provide technical criticisms of their model?

I'm not really familiar enough with Cohn's model to critique it. There are a few things I don't like about Wang's model, based on the information in the link you provided, although it is not clear to me that it is a complete description of his model. For instance, on his site he talks about how he incorrectly modeled the incumbency factor in 2004, causing him to mispredict the race, but there is no actual information about how this is factored into the model. Anyway, the thing I was going to criticize in his model was the premise that state win probabilities are independent. This is implicit in this formula:

P(EV1 + EV2 electoral votes, i.e. winning both) = P1 × P2; P(EV1 electoral votes only) = P1 × (1 − P2); P(EV2 electoral votes only) = (1 − P1) × P2; P(no electoral votes) = (1 − P1) × (1 − P2).
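If you take that independence assumption at face value, the same multiplication extends beyond two states: fold the states in one at a time and you get an exact electoral-vote distribution. A toy sketch with three invented states (the vote counts and win probabilities are made up for illustration, not Wang's actual inputs):

```python
from collections import defaultdict

# (electoral votes, P(Clinton win)) for three hypothetical states
states = [(16, 0.6), (29, 0.5), (15, 0.3)]

# dist[ev] = probability of winning exactly ev electoral votes so far
dist = {0: 1.0}
for ev, p in states:
    new = defaultdict(float)
    for total, prob in dist.items():
        new[total + ev] += prob * p       # win this state
        new[total] += prob * (1 - p)      # lose this state
    dist = dict(new)

for total in sorted(dist):
    print(f"{total:>2} EV: {dist[total]:.3f}")
```

Note that each state multiplies in with no reference to any other state's outcome; that is exactly where the independence assumption enters.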

This is bad because it cannot account for systemic error, only for random error. However, we have evidence that systemic error has occurred in polls in recent history (e.g. undersampling of young people in 2008 who didn't have landline phones). In contrast, 538 assumes some correlation in error, both nationally and regionally.

To put this into an example, let's say that you somehow knew that Clinton was going to win Georgia, and I asked you to predict North Carolina based on that. The Wang model would say "The outcome of Georgia has no effect on North Carolina", whereas the 538 model would say "If Clinton wins Georgia, it probably means there is great African American turnout and polls are under-rating her, therefore she will almost certainly also win North Carolina".
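To put a rough number on that difference, here is a Monte Carlo sketch of two states where polls show Clinton slightly behind, with the polling error either fully independent or mostly shared between the states. All margins and error sizes are invented for illustration; these are not either model's real parameters.

```python
import random

def p_win_both(rho_shared, n_trials=100_000):
    """Estimate P(Clinton wins both states) when polls show her slightly
    behind in each. rho_shared is the fraction of error variance shared
    between the states (0 = fully independent errors)."""
    margins = (-2.0, -1.0)               # hypothetical poll margins (pts)
    total_sd = 4.0                       # hypothetical total error (pts)
    shared_sd = total_sd * rho_shared ** 0.5
    state_sd = (total_sd ** 2 - shared_sd ** 2) ** 0.5
    both = 0
    for _ in range(n_trials):
        shared = random.gauss(0, shared_sd)   # systemic error, hits both
        if all(m + shared + random.gauss(0, state_sd) > 0 for m in margins):
            both += 1
    return both / n_trials

random.seed(0)
print("independent errors:  ", p_win_both(rho_shared=0.0))
print("mostly shared errors:", p_win_both(rho_shared=0.8))
```

With independent errors, winning both states requires two separate upsets; with a shared error component, one polling miss lifts both states at once, so the joint probability comes out noticeably higher.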

What is the reason to trust 538 over theirs?

I don't have any particular loyalty to 538's model over others. I am just responding to what I view as weak arguments.
 
Huffington Post is getting into things now, too:



He ratcheted the panic up to 11 on Friday with his latest forecast, tweeting out, “Trump is about 3 points behind Clinton ― and 3-point polling errors happen pretty often.”

So who’s right?

The beauty here is that we won’t have to wait long to find out. But let’s lay out now why we think we’re right and 538 is wrong. Or, at least, why they’re doing it wrong.

The short version is that Nate is changing the results of polls to fit where he thinks the polls truly are, rather than simply entering the poll numbers into his model and crunching them.

Silver calls this unskewing a “trend line adjustment.” He compares a poll to previous polls conducted by the same polling firm, makes a series of assumptions, runs a regression analysis, and gets a new poll number. That’s the number he sticks in his model ― not the original number.

He may end up being right, but he’s just guessing. A “trend line adjustment” is merely political punditry dressed up as sophisticated mathematical modeling.

Guess who benefits from the unskewing?

By the time he’s done adjusting the “trend line,” Clinton has lost 0.2 points and Trump has gained 1.7 points. An adjustment of below 2 points may not seem like much, but it’s enough to throw off his entire forecast, taking a comfortable 4.6 point Clinton lead and making it look like a nail-biter.

It’s enough to close the gap between the two candidates to below 3 points, which allows Silver to say that it’s now anybody’s ballgame, because “3-point polling errors happen pretty often.”


...


I get why Silver wants to hedge. It’s not easy to sit here and tell you that Clinton has a 98 percent chance of winning. Everything inside us screams out that life is too full of uncertainty, that being so sure is just a fantasy. But that’s what the numbers say. What is the point of all the data entry, all the math, all the modeling, if when the moment of truth comes we throw our hands up and say, hey, anything can happen. If that’s how we feel, let’s scrap the entire political forecasting industry.


...


So if Nate Cohn and Nate Silver both see a roughly 3-point race, why is one Nate confident in a Clinton win and the other sparking a collective global freakout?

Because Silver is also unskewing state polls, which explains, for instance, why 538 is predicting Trump will win Florida, even as we and others (and the early vote) see it as a comfortable Clinton lead. To see how it works in action, take the Marist College poll conducted Oct. 25-26. Silver rates Marist as an “A” pollster, and they found Clinton with a 1-point lead. Silver then “adjusted” it to make it a 3-point Trump lead. HuffPost Pollster, meanwhile, has near certainty Clinton is leading in Florida.


Silver responded:

This article is so fucking idiotic and irresponsible. https://twitter.com/ryangrim/status/794993465666994180

He goes on in a few other tweets to say that his model is tested and empirical.
 
They have this article explaining why their model shows a higher amount of uncertainty compared to some models that have it at 99%, and they explain why they chose to go this way, so the people acting like this is all punditry really have no leg to stand on. They just haven't done the legwork to look into the methodology.

http://fivethirtyeight.com/features...r-model-is-more-bullish-than-others-on-trump/

Or they have a vested interest in pushing their models and their sites over one of the bigger threats in the neighborhood.

Poll forecasting is far more an art than a science.
 
The irony in HP's argument is that you can look at RealClearPolitics, which is another raw poll-averaging site, and their national average diverges even further from HP's number than 538's does.
 
The irony in HP's argument is that you can look at RealClearPolitics, which is another raw poll-averaging site, and their national average diverges even further from HP's number than 538's does.

RCP is a bad site to get good poll averages though. They're actively biased with what polls they plug into their averages and hence consistently produce Republican leaning numbers. You're better off getting the numbers from just about anywhere else.
 
RCP is a bad site to get good poll averages though. They're actively biased with what polls they plug into their averages and hence consistently produce Republican leaning numbers. You're better off getting the numbers from just about anywhere else.

That is what I was getting at. Just because a site does a raw average doesn't mean their numbers are more accurate.
 
RCP is a bad site to get good poll averages though. They're actively biased with what polls they plug into their averages and hence consistently produce Republican leaning numbers. You're better off getting the numbers from just about anywhere else.

If Hugh Hewitt likes RCP averages, then that means it's crap.
 
They have this article explaining why their model shows a higher amount of uncertainty compared to some models that have it at 99%, and they explain why they chose to go this way, so the people acting like this is all punditry really have no leg to stand on. They just haven't done the legwork to look into the methodology.

http://fivethirtyeight.com/features...r-model-is-more-bullish-than-others-on-trump/

They stand to gain by pushing the close race narrative and being different from others. With the way Silver has reacted, it would seem people are on to something.
 
I just took a look at Ohio, and they have Hillary UP even though Trump won 4 of the last 5 polls (and 6 of the last 10). And out of those 10 polls, Hillary literally only won ONE POLL. How in the hell does that make sense?

This is dangerous confirmation bias... and I've seen it everywhere here. RCP and 538 were the go-to sites when everyone talked about polling aggregates earlier this year. And now that they aren't reflecting the "Hillary will win by a landslide" narrative, people started trashing them.
HuffPo has Hillary up in Ohio because they take a weighted average stretching back a few weeks, at which time Hillary led by a lot. I think you're going to see the average tighten as Tuesday approaches, as Trump is probably going to win Ohio (not that Hillary needs it).

I don't know why you think 538 and RCP are the "go-to" for this election. Both have problems. RCP, for instance, selectively inputs polls instead of inputting all scientific polls. 538's problems have been detailed extensively in this thread. I'm not saying don't look at them; what I am saying is look at all the models and determine who is going to win from that. When you do that, you see Clinton has a 90% chance to win, which is really, really good.

That is what I was getting at. Just because a site does a raw average doesn't mean their numbers are more accurate.
A simple average is usually better, though. There has yet to be a better way to read the data than a weighted average (with polls, weighted for time).
 
It is really hard to know whether these changes are actually noise as you speculate, or if they represent real movements in people's opinions. Do you actually believe we will see the exact same outcome if the election were held today as we would if it had been held two weeks ago, at the height of Clinton's poll numbers?

A lot of pollsters and pundits say yes, same outcome, and predicted the late tightening. The election date forces people who've said they were undecided or even against their party's candidate to choose, and their choice is what they've known all along it would be: voting for the party they historically support.

There's a popular theory going around that the variation in polls is less due to changing opinions and more due to changes in willingness to answer. When Clinton or Trump, either one, had bad headlines, it made their supporters less likely to answer polls, but not actually less likely to vote in the end. Makes sense. These days, almost nothing can change a large majority of people's minds.

How to account for that sort of thing in a model is a tough question, I'm sure. How do you tell a real shift in opinion from unwillingness to answer?
 
A simple average is usually better, though. There has yet to be a better way to read the data than a weighted average (with polls, weighted for time).
There's nothing wrong in principle with a weighted average, but if you choose the weights wrong, then your weighted average will be less accurate than a simple average.

I (and Sam Wang) prefer medians in this case. Medians are resistant to outliers.
 
There's nothing wrong in principle with a weighted average, but if you choose the weights wrong, then your weighted average will be less accurate than a simple average.

I (and Sam Wang) prefer medians in this case. Medians are resistant to outliers.

I should be clear. I'm for a simple average, the only thing I would be in favor of weighting is with time, so as time goes back far enough the weights tend towards zero.

I'm not a fan of any other way of adding analyses to the polls.
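For concreteness, here is a quick sketch of the three estimators being discussed, run on invented poll numbers (one deliberate outlier included); none of these figures are real polling data:

```python
import statistics

# hypothetical Clinton-minus-Trump margins, newest first,
# with one outlier poll (+14) in the mix
polls = [3, 2, 5, 1, 14, 2, 4]
days_old = [0, 1, 2, 4, 5, 8, 12]

simple_avg = statistics.mean(polls)

# weight only by recency: exponential decay with a 7-day half-life
weights = [0.5 ** (d / 7) for d in days_old]
weighted_avg = sum(w * p for w, p in zip(weights, polls)) / sum(weights)

med = statistics.median(polls)

print(f"simple average: {simple_avg:.2f}")
print(f"time-weighted:  {weighted_avg:.2f}")
print(f"median:         {med:.2f}")
```

The outlier drags both averages upward, while the median stays at 3, which is the outlier resistance being described; the time weighting only changes how much each poll counts, not its value.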
 
How do you even use the Princeton site? All I see are the overall % chances at the top, and maps on the right.

Where do you see the chances of each state?

http://election.princeton.edu/

You don't. The PEC is a simple average of the last few weeks of HuffPo averages. Go there to see the percentages for each state.

You could also run his code yourself, if you like. It's open source.
 
You don't. The PEC is a simple average of the last few weeks of HuffPo averages. Go there to see the percentages for each state.

You could also run his code yourself, if you like. It's open source.

This is probably the reason anyone would not trust the PEC.

It's not a well-designed, intuitive website like 538 is, with their nicely summed-up graphs and amazing presentation.

It's just cold, hard, boring facts thrown at your fucking face.
 
This is probably the reason anyone would not trust the PEC.

It's not a well-designed, intuitive website like 538 is, with their nicely summed-up graphs and amazing presentation.

It's just cold, hard, boring facts thrown at your fucking face.

The main reason not to trust PEC is the mid-year change to the model, using this year's polling variance to predict future variance.

Even last night:
https://twitter.com/SamWangPhD/status/795007244756799489
Sam Wang: Hey, maybe I should multiply Trump's odds by 10-20x.

With predictive models, you're not supposed to look at the prediction of a single event and then re-predict that same event by changing the parameters until it looks right. That's the best way to instill user bias. You're supposed to build the model on historical data, see how it runs on new data, and then maybe update it for next time.
 
A simple average is usually better, though. There has yet to be a better way to read the data than a weighted average (with polls, weighted for time).

Do you have quantitative evidence to back up the claim that it's better?

I do get the worry that weighting opens up an avenue for bias to creep in. I also worry about overfitting a model, and that pollster-based adjustments will have a lag (e.g. if they change their methodology or likely voter model, the adjustments will take a while to catch up).

On the other hand, I think there is a pretty straightforward counterargument in favor of weighting and adjusting. As with my complaint about Wang's model, simple averaging works when polling error is random. If all error is random, then the average of multiple samples should converge on the true mean. This is not true when errors are correlated. I would point to the USC/LA Times poll: because they use the same sample in every poll, by definition there is total correlation in the sampling error across all of their poll results.

Furthermore, the existence of likely-voter screens probably means that every poll has some degree of correlated error with its own previous results. So there is a purely statistics-based argument in favor of weighting and adjusting by pollster, although obviously it is only helpful if you get that weighting right.
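The random-vs-correlated distinction is easy to demonstrate in a few lines. In this sketch (all numbers invented), when every poll shares one systemic error, averaging more polls never washes that error out:

```python
import random

random.seed(1)
TRUE_MARGIN = 4.0   # hypothetical true Clinton lead (pts)
N_POLLS = 200

# Case 1: each poll has only its own independent sampling error
independent = [TRUE_MARGIN + random.gauss(0, 3) for _ in range(N_POLLS)]

# Case 2: all polls share one systemic error (e.g. an industry-wide
# likely-voter screen bias), plus their own sampling error
shared_error = random.gauss(0, 3)
correlated = [TRUE_MARGIN + shared_error + random.gauss(0, 3)
              for _ in range(N_POLLS)]

avg_ind = sum(independent) / N_POLLS
avg_cor = sum(correlated) / N_POLLS
print(f"true margin:           {TRUE_MARGIN:.2f}")
print(f"average, independent:  {avg_ind:.2f}")  # lands near the truth
print(f"average, shared error: {avg_cor:.2f}")  # off by roughly shared_error
```

In case 1 the average converges on the true margin as polls accumulate; in case 2 it converges on the truth plus the shared error, no matter how many polls you average.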

A lot of pollsters and pundits say yes, same outcome, and predicted the late tightening. The election date forces people who've said they were undecided or even against their party's candidate to choose, and their choice is what they've known all along it would be: voting for the party they historically support.

There's a popular theory going around that the variation in polls is less due to changing opinions and more due to changes in willingness to answer. When Clinton or Trump, either one, had bad headlines, it made their supporters less likely to answer polls, but not actually less likely to vote in the end. Makes sense. These days, almost nothing can change a large majority of people's minds.

How to account for that sort of thing in a model is a tough question, I'm sure. How do you tell a real shift in opinion from unwillingness to answer?

I'm aware of this theory. It may hold some water, but the big question is whether or not the change in willingness to respond correlates with a change in willingness to vote. It is not unreasonable to think that if people are too discouraged by their candidate to respond to a poll (even if they haven't switched allegiance), they might also be less likely to go out and cast a ballot if the election were held that day. Ultimately, enthusiasm is considered a predictor of whether or not someone is likely to vote in these polls anyway.
 
The main reason not to trust PEC is the mid-year change to the model, using this year's polling variance to predict future variance.

Even last night:
https://twitter.com/SamWangPhD/status/795007244756799489
Sam Wang: Hey, maybe I should multiply Trump's odds by 10-20x.

With predictive models, you're not supposed to look at the prediction of a single event and then re-predict that same event by changing the parameters until it looks right. That's the best way to instill user bias. You're supposed to build the model on historical data, see how it runs on new data, and then maybe update it for next time.

He's not actually changing the model, he was discussing hypotheticals.
 
Silver's response on that critique has been that the polls themselves have been very volatile this election, compared to those in the past, and that if a model is based on polling data, then changes in the data should affect the prediction.

It is really hard to know whether these changes are actually noise as you speculate, or if they represent real movements in people's opinions. Do you actually believe we will see the exact same outcome if the election were held today as we would if it had been held two weeks ago, at the height of Clinton's poll numbers?

And Sam has said the exact opposite is true: this has been one of the most stable elections in a while.

What do you define as outcome? I think Clinton has enough of a lead that she would have won if the election were today or two weeks ago. I really don't think opinion changes that drastically. There is an extreme amount of partisanship going on right now.



Sam is a neuroscientist who uses statistical methodologies in his research, and Nate is a former poker player. I know that doesn't necessarily mean anything, but I am going to trust the guy who has probably been exposed to more statistics theory.



I'm not really familiar enough with Cohn's model to critique it. There are a few things I don't like about Wang's model, based on the information in the link you provided, although it is not clear to me that it is a complete description of his model. For instance, on his site he talks about how he incorrectly modeled the incumbency factor in 2004, causing him to mispredict the race, but there is no actual information about how this is factored into the model. Anyway, the thing I was going to criticize in his model was the premise that state win probabilities are independent. This is implicit in this formula:



This is bad because it cannot account for systemic error, only for random error. However, we have evidence that systemic error has occurred in polls in recent history (e.g. undersampling of young people in 2008 who didn't have landline phones). In contrast, 538 assumes some correlation in error, both nationally and regionally.

To put this into an example, let's say that you somehow knew that Clinton was going to win Georgia, and I asked you to predict North Carolina based on that. The Wang model would say "The outcome of Georgia has no effect on North Carolina", whereas the 538 model would say "If Clinton wins Georgia, it probably means there is great African American turnout and polls are under-rating her, therefore she will almost certainly also win North Carolina".



I don't have any particular loyalty to 538's model over others. I am just responding to what I view as weak arguments.

Given the margin that Clinton has, is there really a good chance of a large enough systemic polling error for Clinton to lose? Polling methodologies have gotten better and there are so many different polls out there.

Like we talked about before, the strong adjustment for national polls just makes it susceptible to statistical noise. I just cannot take a 'prediction' model seriously that oscillates from ~50% to ~90% to ~50% to ~90% and now back to ~60%. To me, that is just an indication that polls will not do a good job of predicting the election. A prediction model cannot be that volatile given how set in their ways people are.

When I see a comparison of all the models (and note: this is NOT the polls-only prediction for 538), it just makes me question the assumptions of the specific mathematical process 538 is using. A bunch of smart statisticians shouldn't be coming up with wildly different answers unless polls aren't a good indicator (which I don't think is the case here).

https://mobile.twitter.com/jshkatz/status/793553664724140032
 
Sam is a neuroscientist who uses statistical methodologies in his research, and Nate is a former poker player. I know that doesn't necessarily mean anything, but I am going to trust the guy who has probably been exposed to more statistics theory.
https://mobile.twitter.com/jshkatz/status/793553664724140032
As much as I've complained about Silver's model, he did get a B.A. in Economics at the University of Chicago, so calling him just a poker player is a bit much.
 
Is the consensus that, had it been anyone near normal for the Republicans, Clinton would be in all sorts of trouble?

Also, what does that say about her? I know they will claim the victory, but c'mon, her opponent should have been lapped 10 times already, not just behind on the finishing straight.
This is some people's analysis.

These people are bad at analysis. The GOP's field had more jobbers than the 40-man Royal Rumble. They were the coffee dregs of the party.
 
Is the consensus that, had it been anyone near normal for the Republicans, Clinton would be in all sorts of trouble?

Also, what does that say about her? I know they will claim the victory, but c'mon, her opponent should have been lapped 10 times already, not just behind on the finishing straight.

No, because nominating a "normal" Republican would have ripped the Republican Party apart before we ever even got to the general. They had Trump forced on them by the wack jobs they let hijack their party.
 
What do you define as outcome? I think Clinton has enough of a lead that she would have won if the election were today or two weeks ago. I really don't think opinion changes that drastically. There is an extreme amount of partisanship going on right now.

I meant Clinton's margin of victory.

Sam is a neuroscientist who uses statistical methodologies in his research, and Nate is a former poker player. I know that doesn't necessarily mean anything, but I am going to trust the guy who has probably been exposed to more statistics theory.

Good news. I have a PhD and my dissertation was in algorithms and information theory, so if those are your standards, you can just trust everything I say ;) Although I would personally advise against that.

Given the margin that Clinton has, is there really a good chance of a large enough systemic polling error for Clinton to lose? Polling methodologies have gotten better and there are so many different polls out there.

I don't really know. It seems plausible to me that likely voter screens may end up overestimating black turnout and underestimating Hispanic turnout in this particular election. If Clinton were to win Nevada with similar margins as Obama did in 2012 (as some analysis of early voting is predicting), then it would be a pretty significant polling miss, although obviously not in Trump's favor.

Like we talked about before, the strong adjustment for national polls just makes it susceptible to statistical noise. I just cannot take a 'prediction' model seriously that oscillates from ~50% to ~90% to ~50% to ~90% and now back to ~60%. To me, that is just an indication that polls will not do a good job of predicting the election. A prediction model cannot be that volatile given how set in their ways people are.

Ultimately, it is primarily a polls-based model. If the polls do not accurately reflect the race, then their model will not either. Or "garbage in, garbage out," as they say.

When I see a comparison of all the models (and note: this is NOT the polls-only prediction for 538), it just makes me question the assumptions of the specific mathematical process 538 is using. A bunch of smart statisticians shouldn't be coming up with wildly different answers unless polls aren't a good indicator (which I don't think is the case here).

https://mobile.twitter.com/jshkatz/status/793553664724140032

These are complex systems. The electorate is a moving target. Polls may have widely varying quality of methodology. There are all sorts of likely voter screens layered on top (which are as much psychology as statistics). Disagreement is healthy, and I would be more worried if everyone was in perfect agreement.

Also, I don't know if I would say they are "wildly" different answers. 538 has Clinton as a 2:1 favorite, Upshot has her as a 5:1 favorite, and PEC has her as a lock. Yes, they are different, but not that different. None of them is predicting that Trump is the strong favorite.
 
Here's an interesting article by George Berry that looks at some of the 538 assumptions and shows how their model could produce some of those results.

Medium link

Most models of the election (e.g. PEC, HuffPo) do not do the type of trendline adjusting that 538 does. Using just a simple median of all the polls, it’s pretty easy to come up with numbers that are relatively in line with the consensus that Hillary has a high probability of winning. However, once we start adjusting for trendlines with certain model parameters, we can convince ourselves that things might actually be looking up for Trump. There actually seems to be an inflection point in the smoothing parameter, indicating that we should have high confidence that our parameter is correct before making predictions based on it.


there's some more info in the comments and such.
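A toy illustration of the difference that excerpt describes, using hypothetical Clinton-minus-Trump margins I made up for the example: a plain median of the polls barely moves, while an aggressive exponentially weighted trendline (one simple stand-in for trendline adjustment, not Berry's or 538's actual method) chases the most recent noisy polls.

```python
def median_of_polls(margins):
    """Plain median, the PEC/HuffPo-style robust summary."""
    s = sorted(margins)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def ewma_trendline(margins, alpha):
    """Exponentially weighted estimate; alpha near 1 heavily
    weights the newest poll (a crude stand-in for an aggressive
    trendline adjustment)."""
    est = margins[0]
    for m in margins[1:]:
        est = alpha * m + (1 - alpha) * est
    return est

polls = [5, 4, 6, 5, 1, 2]   # hypothetical margins, oldest first
median_of_polls(polls)       # 4.5 -- a stable summary
ewma_trendline(polls, 0.6)   # ~2.26 -- dragged down by the last two polls
```

The choice of smoothing parameter `alpha` is exactly the kind of model knob the quoted article says we should be confident about before trusting the adjusted numbers.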
 
Also, I don't know if I would say they are "wildly" different answers. 538 has Clinton as a 2:1 favorite, Upshot has her as a 5:1 favorite, and PEC has her as a lock. Yes, they are different, but not that different. None of them is predicting that Trump is the strong favorite.

I am definitely not saying Nate Silver is an idiot or that there is no value in his model, but I am not a fan of how the swings in his model freak people out. I am also not a fan of how he has basically said that anyone predicting a higher win probability than he is doesn't understand statistics.

Sam just posted another article that describes his model in more detail and makes some comparisons to 538. Thought it was a good read.

http://election.princeton.edu/2016/11/06/is-99-a-reasonable-probability/#more-18522
 
I find it silly how people (and even the model makers themselves) could possibly be having conflicts over this, resorting to ad hominems and whatnot. IMO there's no point in having different models if they don't have different approaches; we'll eventually learn from them all and improve our forecasting abilities. This seems like such a "theoretical" matter that having so many "feelings" involved seems downright wrong. Every model is gonna get something wrong and something right, we'll learn from it and tweak them again, the end. Why make it so personal? It's all algorithms and math. Obviously the political aspect of this forecast in particular is going to bring up some feelings, but in the end how do you attack a model for not producing the readings you want, when that's all it is?

I like Nate Silver and Wang and follow them both, as well as people discussing what they think about the different algorithms and models, but when it gets into "he's an idiot" or whatever it's just weird.
 
I find it silly how people (and even the model makers themselves) could possibly be having conflicts over this, resorting to ad hominems and whatnot. IMO there's no point in having different models if they don't have different approaches; we'll eventually learn from them all and improve our forecasting abilities. This seems like such a "theoretical" matter that having so many "feelings" involved seems downright wrong. Every model is gonna get something wrong and something right, we'll learn from it and tweak them again, the end. Why make it so personal? It's all algorithms and math. Obviously the political aspect of this forecast in particular is going to bring up some feelings, but in the end how do you attack a model for not producing the readings you want, when that's all it is?

I like Nate Silver and Wang and follow them both, as well as people discussing what they think about the different algorithms and models, but when it gets into "he's an idiot" or whatever it's just weird.

Heh, this actually happens all the time in academia, and it's way worse there. Professors will get into heated arguments at conferences, and worse things are said.
 
I find it silly how people (and even the model makers themselves) could possibly be having conflicts over this, resorting to ad hominems and whatnot. IMO there's no point in having different models if they don't have different approaches; we'll eventually learn from them all and improve our forecasting abilities. This seems like such a "theoretical" matter that having so many "feelings" involved seems downright wrong. Every model is gonna get something wrong and something right, we'll learn from it and tweak them again, the end. Why make it so personal? It's all algorithms and math. Obviously the political aspect of this forecast in particular is going to bring up some feelings, but in the end how do you attack a model for not producing the readings you want, when that's all it is?

I like Nate Silver and Wang and follow them both, as well as people discussing what they think about the different algorithms and models, but when it gets into "he's an idiot" or whatever it's just weird.

2016 - when even the arguments among polling and stats nerds got heated and personal.
 
Unlike Monday-morning QBing a term in office, where it is impossible to say who would have handled an event best, we get to see on Wednesday morning, if not Tuesday night, whose forecasts were best.
 
Unlike Monday-morning QBing a term in office, where it is impossible to say who would have handled an event best, we get to see on Wednesday morning, if not Tuesday night, whose forecasts were best.

Not completely, though. For sites that project a margin of victory, we can compare that, and it may give some insight into whether a more or less aggressive trendline model was better in this election.

The question of how much you should allow for systemic error may not be settled, though. If it turns out there is a big polling miss, then 538's conservative model will be vindicated. On the other hand, if the results match the polls exactly, that will not prove the converse (although no doubt people will claim that it does).
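The systemic-error disagreement can be sketched with the same kind of toy normal model (an illustration, not either site's actual method, and all numbers here are assumptions I am choosing): adding an allowance for a correlated polling miss, combined in quadrature with ordinary polling error, pulls a near-lock probability down substantially.

```python
from statistics import NormalDist

def win_probability(margin, poll_sd, systemic_sd=0.0):
    """P(win) for a polled margin, combining ordinary polling error
    with a possible shared systemic miss (added in quadrature).
    All values are illustrative, not parameters from 538 or PEC."""
    total_sd = (poll_sd ** 2 + systemic_sd ** 2) ** 0.5
    return 1 - NormalDist(mu=margin, sigma=total_sd).cdf(0)

win_probability(3.0, 1.5)       # no systemic allowance -> ~0.98, near-lock
win_probability(3.0, 1.5, 3.0)  # allow a 3-pt correlated miss -> ~0.81
```

Roughly speaking, a PEC-style high-confidence number corresponds to the first call, and a 538-style conservative number to the second; which choice was better is exactly what Wednesday morning may or may not settle.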
 