But of course this is nonsense. Elections aren't like the weather; there's an endogeneity here. If a campaign sees that a generally-accurate prediction has them losing a state three months out, then they can and *do* divert funds and visits and effort and policy toward courting voters in that state.
People respond to campaigns, models respond to people, campaigns respond to models.
While this is fair, Nate's model already tries to compensate for campaign changes during the election; his "polls-plus" model at least tries to adjust for predictable distortions like the post-convention bounce for either party (Nate Silver never shuts up about convention bumps), and if he had a way to model and adjust for every FBI press release, he would do it.
So I don't see why you couldn't judge the model by how well it did far away from the actual election date. After all, there is only one election on one date, so every poll, model, or pundit is trying to predict that same outcome; the more capable models and pundits will try to anticipate the wild swings of the campaign.
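One hedged way to make "judge it far from election day" concrete: score the model's state-level win probabilities with a Brier score at several lead times, not just on the final forecast. This is only an illustrative sketch with made-up numbers, not anything 538 publishes.

```python
# Rough sketch: judge a forecaster by its Brier score at different lead times,
# not just the night before. All numbers here are invented for illustration.

def brier(prob_dem_win, dem_actually_won):
    """Squared error between the forecast probability and the 0/1 outcome."""
    outcome = 1.0 if dem_actually_won else 0.0
    return (prob_dem_win - outcome) ** 2

# Hypothetical forecasts for one state at 90, 30 and 1 days out.
forecasts = {90: 0.65, 30: 0.55, 1: 0.51}
dem_won = True

for days_out, p in sorted(forecasts.items(), reverse=True):
    print(f"{days_out:>3} days out: Brier = {brier(p, dem_won):.3f}")

# A model that is already accurate three months out scores well at every
# horizon; one that only converges the night before looks good only at day 1.
```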
If the feedback cycle of model -> campaign -> people is too strong, then the model becomes useless at actually predicting, because the campaigns are gaming it. The ideal would be a model that also accounts for how campaigns will divert funds and effort in response to early forecasts, and so predicts the election better while responding less wildly to small changes in campaigning and polls. Yeah, that sounds pie-in-the-sky, but calling every state correctly the night before also sounded pie-in-the-sky in the 20th century.
Because what good is a model that can't tell you who is going to win Florida for months, until the very night before the election? It's okay if the polls swing wildly, but the model should offer some stability; even the polls-plus model, which according to Nate is the more conservative one, has flipped back and forth.
I actually think both campaigns are following 538, and it has produced a weird feedback cycle that somehow hasn't affected the other models, probably because Silver's model includes silly things like convention bumps. Maybe this is a case where simpler is better: trying to predict what the campaigns will do is a wild goose chase, and just quietly aggregating polls works better. I mean, the Clinton campaign at one point diverted funds away from states the model told them were safe, and a drop in the polls in those states followed. Normally that would indicate the model tracks reality closely, but maybe it means the polls 538 weights most heavily are the ones most easily influenced by hidden factors in the campaigns. Maybe the people polled simply lived near campaign centers and 538 gave more weight to those polls, dunno. We already heard of one poll where a single outlier respondent (one black guy for Trump) threw the whole result off massively because of how heavily he was weighted.
This would be fine if people's opinions were static, but they're not. If the issue is that 538 keeps going back and forth on Florida and that's seen as wishy-washy, that's because the model believes the state is a tossup. Now, if 538 calls Florida basically 50/50 and gets the winner right but the actual result is 60-40, then that's an inaccurate model.
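That last point mixes win probability and vote share a bit; reading it as a vote-share claim, here is a toy check (invented numbers) of how a forecast can call the winner correctly and still be badly off on the margin:

```python
# Sketch: "got the winner right" and "got the result right" are different tests.
predicted_dem_share = 0.51   # model says Florida is basically 50/50
actual_dem_share    = 0.60   # but the real result is 60-40

winner_correct = (predicted_dem_share >= 0.5) == (actual_dem_share >= 0.5)
margin_error   = abs(predicted_dem_share - actual_dem_share)

print(f"winner called correctly: {winner_correct}")  # True
print(f"vote-share error: {margin_error:.0%}")       # 9%, a big miss
```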
The 538 model already seems to include provisions to account for people changing their opinions and for broad opinion trends, but those provisions seem to be making the model worse.
Yes, opinions change, but if another model predicts 49 states with similar accuracy to 538's and doesn't have Florida and other states flip-flopping wildly, then that other model is clearly superior to 538's. Hell, I'd say a model that gets one more state wrong than 538 but is more stable is still better; a rough way to score that trade-off is sketched below.
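A minimal sketch of that trade-off, assuming two hypothetical models and an arbitrary penalty per flip-flop (everything here is invented for illustration, not how any aggregator actually scores itself):

```python
# Toy comparison: state calls gotten right vs. how often the Florida call flipped.

def flips(call_history):
    """Count how many times a model's call for a state changed sides."""
    return sum(1 for a, b in zip(call_history, call_history[1:]) if a != b)

# Each model: (states called correctly, day-by-day calls for Florida)
models = {
    "model_A": (50, ["D", "R", "D", "R", "D", "R", "D"]),  # accurate but jumpy
    "model_B": (49, ["R", "R", "R", "R", "R", "R", "R"]),  # one more miss, stable
}

FLIP_PENALTY = 0.5  # arbitrary: how much one flip-flop "costs" in correct-state terms

for name, (correct, florida_calls) in models.items():
    n_flips = flips(florida_calls)
    score = correct - FLIP_PENALTY * n_flips
    print(f"{name}: {correct} correct, {n_flips} flips, score {score}")
```

Under this (arbitrary) weighting the steadier model_B comes out ahead despite missing one more state, which is the judgment being made above.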
Only Silver (or rather his model) seems to think the race is significantly less settled; the other poll aggregators seem to think the result is going to be clear.
Edit: OF COURSE, in the middle of writing this post, Florida turned blue on 538, at least on the polls-only model.
I guess I helped