The 2024-25 NBA Finals ended with the Oklahoma City Thunder beating the Indiana Pacers in a full seven games, completing a West-heavy postseason in which OKC was the clear favorite from the moment the brackets opened.
I wanted to know what our NBA win-probability model would have produced if we had run it on every playoff game before tipoff, using only data available prior to Round 1. No peeking. No post-hoc feature tuning. Just the model we shipped, applied honestly to all 84 playoff games.
This post is the backtest. Every round, every Finals game, and a clear read on what the model is doing right and where the variance crushed it.
The Headline
50 correct out of 84 games. 59.5% accuracy.
Broken out by round:
- East 1st Round (20 games): 13/20 (65.0%)
- West 1st Round (23 games): 17/23 (73.9%)
- East Semifinals (11 games): 3/11 (27.3%)
- West Semifinals (12 games): 6/12 (50.0%)
- East Conference Finals (6 games): 3/6 (50.0%)
- West Conference Finals (5 games): 4/5 (80.0%)
- NBA Finals (7 games): 4/7 (57.1%)
For reference, here are typical accuracy benchmarks for NBA playoff predictions:
| Source | Typical playoff accuracy |
|---|---|
| FiveThirtyEight CARM-Elo | 60-65% |
| Pinnacle closing-line favorite | 62-66% |
| ESPN BPI playoff picks | 58-64% |
| Chalk (always pick higher seed) | 62-68% |
| Public expert average | 55-60% |
Our 59.5% is in the public-expert range overall, but the split is telling: the West bracket hit 17/23 (73.9%) in Round 1 and 4/5 (80.0%) in the Conference Finals. The East bracket is where the model struggled — particularly the East Semifinals at 3/11.
The West Was the Model's Playground
OKC entered the 2024-25 postseason as the clear ELO favorite. Our computed ratings at the end of the regular season had them at 1,782 — more than 50 points clear of the second-best team and nearly 200 points clear of the median playoff team.
This is exactly the scenario calibrated models handle well. When one team's regular-season ELO is that dominant, the probabilities for any given matchup become less noisy, and a 7-game series compounds the edge, turning a strong per-game favorite into an overwhelming series favorite rather than a toss-up.
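For concreteness, here's the standard Elo machinery this describes, a minimal sketch using the K=20 and HFA=80 constants from the footnote at the end of this post. The deployed wp_model_NBA.pkl layers pace/ORTG/DRTG priors on top of this curve, so the function names and the 1,580 opponent rating below are illustrative, not our production code.

```python
def elo_win_prob(home_elo: float, away_elo: float, hfa: float = 80.0) -> float:
    """Pre-game P(home win) from the standard Elo logistic curve."""
    diff = (home_elo + hfa) - away_elo
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

def elo_update(home_elo: float, away_elo: float, home_won: bool,
               k: float = 20.0, hfa: float = 80.0) -> tuple[float, float]:
    """One post-game rating update with the NBA-tuned K=20."""
    expected = elo_win_prob(home_elo, away_elo, hfa)
    delta = k * ((1.0 if home_won else 0.0) - expected)
    return home_elo + delta, away_elo - delta

# OKC's 1,782 at home against a hypothetical 1,580 opponent:
print(round(elo_win_prob(1782, 1580), 3))  # 0.835
```

A rating gap that size makes the favorite an ~83% pick before any feature layer even runs, which is why the West bracket read as low-variance.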
West 1st Round: 17 of 23 (73.9%). The model called the bracket cleanly, with only a handful of close-series misses.
West Conference Finals: 4 of 5 (80.0%). Nearly perfect in the series won by the eventual champion.
The East Was the Model's Weakness
East Semifinals: 3 of 11 (27.3%) — the worst single round of the entire postseason.
This is a specific model weakness we're diagnosing. The East bracket featured several close-in-ELO matchups: teams separated by 20-40 points of ELO, where the model leaned slightly to one side and the actual outcomes went the other way. A 40-point ELO edge translates to roughly a 55% pre-game WP. Over 11 games with that kind of edge, expected wins are ~6. We got 3.
That's within statistical variance for a small sample, but it points to a real issue: the model doesn't currently weight playoff-specific factors strongly enough for the East bracket in years when East teams are close in strength. Injuries, rotation changes, coaching adjustments over a long series — all of these matter more when ELO is close. When ELO is decisive (as in the West with OKC), they matter less.
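To put a number on "within statistical variance": assuming a flat 55% edge in each of the 11 East Semifinal games (the approximate WP implied by a ~40-point ELO gap, not the model's exact per-game outputs), the chance of landing on 3 or fewer correct calls is small but real:

```python
from math import comb

n, p = 11, 0.55  # games, approx per-game WP from a ~40-point ELO edge
tail = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(4))
print(f"P(3 or fewer correct out of {n}) = {tail:.3f}")  # ~0.061
```

A ~6% tail is unlucky rather than impossible, which is why we treat this round as a diagnostic lead instead of proof the model is broken.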
The Finals: 4 of 7 Correct
OKC beat Indiana 4-3. Our per-game predictions:
| Game | Location | Model P(home) | Actual | ✓/✗ |
|---|---|---|---|---|
| Game 1 | @ OKC | 86.8% | IND 111-110 OKC | ✗ |
| Game 2 | @ OKC | 86.8% | OKC 123-107 IND | ✓ |
| Game 3 | @ IND | 45.9% | IND 116-107 OKC | ✗ |
| Game 4 | @ IND | 45.9% | OKC 111-104 IND | ✓ |
| Game 5 | @ OKC | 86.8% | OKC 120-109 IND | ✓ |
| Game 6 | @ IND | 45.9% | IND 108-91 OKC | ✗ |
| Game 7 | @ OKC | 86.8% | OKC 103-91 IND | ✓ |
Three of the four OKC home games went to the favored side exactly as the model predicted (Games 2, 5, and 7); only Game 1 broke the other way. That's the model's confidence largely honored on its most-confident calls.
The misses: Game 1 was the signature upset of the series, with Indiana stealing the opener at OKC on a last-second shot. The model had OKC at 86.8%, a read the series outcome ultimately vindicated, but one game's bounce broke the model's cleanest call. Game 3 was an Indiana home win where the model leaned OKC (it had IND at 45.9% at home, meaning OKC was a slight favorite even on the road). Game 6 was the same pattern: IND at home, model leaning OKC, IND winning convincingly.
The pattern: the model correctly identified OKC as the overall Finals favorite (which they were, and they won), but it under-weighted Indiana's home-court advantage in this specific matchup. Indiana's regular-season home ELO was ~50 points above their away ELO — a real effect that our season-average ELO didn't capture cleanly.
What the Model Got Right Structurally
Two deeper wins that matter beyond raw accuracy:
- The OKC call was correct at the series level. The model had OKC as a >80% favorite to win the Finals, and they won it. Along the way, that strong signal produced three correct very-high-confidence game calls (Games 2, 5, and 7), and the series-level conclusion is exactly what a calibrated model should produce. (A sketch of how per-game probabilities roll up to a series probability follows this list.)
- West Conference Finals 4/5 (80%). This round is usually the best test of a calibrated model because both teams have already survived two rounds and their regular-season ELO is a well-cooked signal. Hitting 80% here means the ELO differential was doing its job.
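As a sanity check on that >80% series figure, the per-game numbers from the Finals table roll up into a series probability directly. This is a simplification that assumes independent games in the 2-2-1-1-1 format; the deployed model re-prices each game, but the table's two numbers are enough for a sketch:

```python
from itertools import product
from math import prod

# P(OKC win) per scheduled game: games 1, 2, 5, 7 at OKC (86.8%),
# games 3, 4, 6 at IND (100% - 45.9% = 54.1% for OKC on the road).
p_game = [0.868, 0.868, 0.541, 0.541, 0.868, 0.541, 0.868]

# Best-of-7 trick: P(winning the series) equals P(winning at least
# 4 of all 7 hypothetical games), since games after a clinch are moot.
series_p = sum(
    prod(p if won else 1 - p for p, won in zip(p_game, outcome))
    for outcome in product([True, False], repeat=7)
    if sum(outcome) >= 4
)
print(f"P(OKC wins series) = {series_p:.3f}")  # ~0.925
```

Under those per-game numbers the roll-up lands around 92-93%, comfortably consistent with the >80% series call.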
What the Model Needs to Improve
Three things this postseason exposed:
- Team-specific home splits. Our ELO uses a uniform HFA (80 points for NBA), but Indiana's actual home advantage was significantly higher in the 2024-25 season. Next season we're adding team-specific HFA calibrated from regular-season home/away win differentials (rough sketch after this list).
- East Semifinals variance. The cluster of close-ELO East teams drove most of the model's miss-rate. We're investigating whether adding matchup-specific features (pace fit, injury-adjusted lineups, head-to-head regular-season record) would break that tie.
- Back-to-back game state. The model doesn't currently know whether a team is on a back-to-back or has had two days of rest. In the playoffs this matters less (schedules are more spread out), but it's a free feature to add for regular-season calibration, and it would carry into playoff priors.
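For the first item, the calibration we have in mind looks roughly like this. It's a sketch under strong simplifying assumptions: win rates only, no opponent-strength adjustment, and a hypothetical games schema rather than our actual tables.

```python
import math
import pandas as pd

def team_hfa_elo(games: pd.DataFrame) -> pd.Series:
    """Crude per-team HFA in Elo points from home/away win-rate splits.

    Expects one row per (team, game) with hypothetical columns:
    team (str), is_home (bool), won (bool). Ignores opponent strength,
    so treat the output as a prior to shrink toward 80, not a final answer.
    """
    def win_rate_to_elo(wr: float) -> float:
        wr = min(max(wr, 0.01), 0.99)  # keep the logit finite
        return -400.0 * math.log10(1.0 / wr - 1.0)

    splits = games.groupby(["team", "is_home"])["won"].mean().unstack()
    # Halve the home-minus-away gap as a crude symmetric split
    # between the home boost and the road penalty.
    return (splits[True].map(win_rate_to_elo)
            - splits[False].map(win_rate_to_elo)) / 2.0
```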
The Takeaway
The model read the West well. It called 4 of the 5 West Conference Finals games. It went 3 of 4 on OKC's home Finals games at 86.8% confidence. It correctly identified OKC as the eventual champion.
Overall 59.5% is in the public-expert range. Not KenPom-level, but also not a model that needs to be scrapped. The NBA model's training ECE of 5.31% is the honest calibration number: when our model says 70% confidence, it hits about 70%. On the most-confident bucket (the 86.8% OKC home Finals calls), we went 3 of 4.
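For readers unfamiliar with ECE: it's the gap between stated confidence and realized frequency, averaged over confidence bins and weighted by bin size. A minimal version for binary home-win probabilities (our production binning scheme may differ):

```python
import numpy as np

def expected_calibration_error(probs: np.ndarray, outcomes: np.ndarray,
                               n_bins: int = 10) -> float:
    """Weighted mean of |empirical win rate - mean predicted prob| per bin."""
    bin_ids = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            gap = abs(outcomes[mask].mean() - probs[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece
```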
Next up: 2025-26 NBA playoffs starting later this month. We'll publish live per-series predictions as the brackets open, and we'll follow up with a retrospective the same way we did this year. If you want to backtest your own NBA strategies against the same snapshot data, you can pull live NBA edges via the API — 7-day free trial, no credit card.
Data sources: ESPN NBA game data (public); ELO computed from game results with K=20 and HFA=80 (NBA-tuned). All 84 postseason games were held out of the ELO training set. Pre-game predictions use the deployed wp_model_NBA.pkl with pace/ORTG/DRTG priors from season-to-date team box scores. The full prediction table is reproducible from the /v1/backtest endpoint.
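A reproduction pull from that endpoint might look something like this; the base URL, query parameters, auth header, and response field are placeholders, so check the API docs for the real schema:

```python
import requests

# Placeholder URL and parameters -- consult the API docs for the
# actual schema of the /v1/backtest endpoint.
resp = requests.get(
    "https://api.example.com/v1/backtest",
    params={"league": "NBA", "season": "2024-25", "phase": "playoffs"},
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=30,
)
resp.raise_for_status()
for game in resp.json()["games"]:  # response field name assumed
    print(game)
```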