The Scorecard: 2024 Presidential Election Forecast
Accuracy, Calibration, and Lessons from 2024
The 2024 Presidential Election was a long time ago. At least one Dodgers World Series and one Pope ago. Even so, with the 2026 midterm elections on the horizon, I find my mind drifting back to the lessons learned from the 2024 election.
That’s why you’re getting The Scorecard: a new article format I’m using to look back at how my models actually performed, focusing on accuracy, calibration, and what the results mean going forward.
Accuracy here is simply a measure of how wrong I was. So that’s where I’ll start:
Across all state and district contests, my model missed the final margin by an average of 2.43 points, with smaller errors in the most competitive states. It favored the eventual winner in 49 of the 50 states, as well as in D.C. and the Nebraska and Maine congressional districts that each award an electoral vote.
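Both of those summary numbers are straightforward to compute. The sketch below shows the two metrics with made-up placeholder margins (not the model's actual figures): average absolute error, and the share of contests where the favored side won.

```python
# Hypothetical illustration of the two accuracy metrics in the text.
# Margins are positive-for-Democrat, negative-for-Republican, in points.
# These numbers are placeholders, not the model's real forecasts.
predicted = {"PA": -0.4, "GA": -1.1, "WI": 0.8, "AZ": -2.0}
actual    = {"PA": -1.7, "GA": -2.2, "WI": -0.9, "AZ": -5.5}

def mean_absolute_error(pred, act):
    """Average absolute gap between predicted and actual margins."""
    return sum(abs(pred[s] - act[s]) for s in pred) / len(pred)

def winner_hit_rate(pred, act):
    """Share of contests where the favored side matched the winner."""
    hits = sum((pred[s] > 0) == (act[s] > 0) for s in pred)
    return hits / len(pred)

print(round(mean_absolute_error(predicted, actual), 2))  # → 1.9
print(winner_hit_rate(predicted, actual))                # → 0.75
```

The real scorecard applies the same arithmetic over all 50 states, D.C., and the electoral-vote-awarding districts.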
A more compelling way to see this is on a map. Below are my final predicted margins from election night, followed by the actual outcome.1
The maps are similar overall, with the most obvious difference being that the actual outcome is noticeably redder. This pattern wasn't unique to my model. Across the field, election models in 2024 tended to miss in the same direction, slightly in Democrats' favor. This kind of shared miss is an expected feature of election modeling, where polling errors are highly correlated across states rather than independent, a point Nate Silver emphasizes repeatedly.
So, if everyone missed together, how did the models compare?2
Given the shared direction of the miss, the distinction becomes magnitude. And by that measure, this model produced the smallest average error across states, especially swing states, and favored the eventual winner in more contests than any other model.
That difference is easier to see here.
Each point represents a state’s predicted margin versus its actual result, with distance from the diagonal reflecting error.
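That diagonal reading has a simple arithmetic counterpart: a point's vertical distance from the 45-degree line is the signed error, actual margin minus predicted margin. A shared polling miss shows up as signed errors that cluster on one side of the line. A minimal sketch, again using placeholder margins rather than real forecast data:

```python
# Hypothetical sketch: signed error as distance from the diagonal.
# Each tuple is (state, predicted margin, actual margin), D-positive.
# These margins are illustrative placeholders only.
points = [("PA", -0.4, -1.7), ("GA", -1.1, -2.2), ("WI", 0.8, -0.9)]

for state, pred, act in points:
    signed_error = act - pred  # negative = actual redder than predicted
    print(f"{state}: signed error {signed_error:+.1f} pts")

# Errors sharing one sign indicate a correlated, field-wide miss
# rather than independent state-level noise.
mean_signed = sum(act - pred for _, pred, act in points) / len(points)
print(f"mean signed error: {mean_signed:+.1f} pts")
```

In 2024, a plot like this put nearly every contest on the red side of the diagonal, which is exactly what a correlated polling miss looks like.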
What Makes This Model Different?
The biggest distinction between this model and the others is not how it interpreted the polls, but how much weight it placed on signals that had nothing to do with them. On the morning of the 2024 election, I noted how predictive fundamentals had been in the previous cycle and briefly explained what they are in my election-day breakdown. I also noted that while many probabilistic models largely phase out fundamental data as election day nears, this model would not. My view is that fundamental data should be used to the extent that it remains predictive, rather than phased out by default. That choice meant the model would look meaningfully different from the field, and could just as easily have hurt performance as helped it.
Ultimately, 2024 became another case study in how predictive fundamentals can be, even late in the cycle. In fact, the fundamentals-only model would have been the strongest performer in 2024.
Remarkably, the fundamentals-only model favored the eventual winner in every state and district without relying on a single state-level poll.
This doesn’t mean future models should be fundamentals-only. The takeaway from 2024 is that when polling errors are shared and highly correlated, incorporating a strong, independent signal like fundamentals can help smooth those errors.
Presidential elections are rare. That makes modeling decisions consequential, because if you miss the mark, you have to wait four more years to correct course. My view is that, over time, fundamentals will improve model performance more often than not. And if I had to make one final forecast, it’s that by 2028, more models will be incorporating fundamental data more fully into their final forecasts, and the novelty of this model may wear off.
1. Official election results sourced from certified state results and compiled by the Associated Press. All margins shown as two-party vote margins.
2. Final forecast margins sourced from each model's published 2024 presidential forecast:
