Updated 2026-04-24

The Sports Prediction Transparency Index

Most "sports prediction" providers won't tell you their Expected Calibration Error; they keep their methodology secret and hide pricing behind "contact us" forms. We score 21 real sources on five dimensions anyone can verify. This page measures transparency, not predictive edge, ROI, or who has the best model.

What this index measures

Public evidence: calibration disclosure, historical archives, methodology transparency, API access, and visible pricing. Equal weights are intentional. If you disagree with the weighting, compare rows dimension-by-dimension instead of relying on the total.

What this index does not measure
  • Predictive edge or profitability
  • Market-making quality or trading liquidity
  • Customer support, UX polish, or brand size
  • Who would win in a head-to-head model contest

Self-reported ECE roll call

Of the 14 sports forecasters tracked, only one publishes a numeric Expected Calibration Error.

ECE is the single most important honesty metric for a probability product, and most providers won't publish theirs. Buckets:
  • Numeric — an actual ECE figure on a defined holdout.
  • Derivable — raw forecasts and outcomes are public, so ECE can be computed.
  • Partial — publishes accuracy, Brier, or reliability data, but not ECE.
  • Silent — publishes nothing about model accuracy.

Seven sources are excluded from the denominator above because they are venues, aggregators, or closing-line providers — they don't claim to be forecasters.

  • Numeric: 1
  • Derivable: 1
  • Partial: 5
  • Silent: 7
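For sources in the "Derivable" bucket, anyone can compute ECE from the published forecasts and outcomes. A minimal sketch, assuming binary outcomes and ten equal-width probability bins (the bin count is a common convention, not a standard):

```python
def expected_calibration_error(probs, outcomes, n_bins=10):
    """Mean |forecast confidence - observed hit rate|, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        # Equal-width bins; clamp p = 1.0 into the top bin.
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    n = len(probs)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(p for p, _ in b) / len(b)   # mean forecast in bin
        hit_rate = sum(y for _, y in b) / len(b)   # observed frequency in bin
        ece += len(b) / n * abs(avg_conf - hit_rate)
    return ece

# Two forecasts at 70% (one hit) and two at 30% (one hit):
ece = expected_calibration_error([0.7, 0.7, 0.3, 0.3], [1, 0, 1, 0])  # ≈ 0.20
```

In this notation, a 4.39% ECE is a return value of about 0.0439; lower is better, and 0 means forecast probabilities match observed frequencies exactly within every bin.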
ZenHodl disclosure

How we score ourselves

ZenHodl is shown separately because this is our rubric. We include ourselves for disclosure, but we do not want a self-ranked row mixed into the main competitor tables. Our current score is 24/25, based on public evidence we link to below.

Self-reported ECE: 4.39% — verifiable from the linked NCAAMB season report. We commit to publishing this number for every sport, every season, including when it gets worse.

Last verified: 2026-04-24. Notes: Per-sport ECE published. Every trade on /results. API + docs. Tiered pricing.

Dimension scores
ECE █████ | Archive █████ | Method ████· | API █████ | Price █████

Model / Analytics Providers

Model-driven sources that publish projections, ratings, or probabilistic forecasts.

FiveThirtyEight (archived) — 21/25
Archived since 2023. All predictions and data on GitHub. The best methodology docs in the category. Dead product.
ECE ███·· | Archive █████ | Method █████ | API ███·· | Price █████
Reported ECE: Brier published; ECE derivable from raw data ↗
Last verified 2026-04-24

Bart Torvik — 18/25
College basketball. Excellent reliability tables every March. Free. Unstable JSON endpoints.
ECE ████· | Archive ████· | Method ████· | API █···· | Price █████
Reported ECE: reliability tables (no single ECE figure) ↗
Last verified 2026-04-24

MoneyPuck — 17/25
NHL-only, XGBoost-based. Methodology documented. Limited JSON availability.
ECE ███·· | Archive ███·· | Method ████· | API ██··· | Price █████
Reported ECE: log-loss reported, not ECE ↗
Last verified 2026-04-24

KenPom — 13/25
College basketball only. Methodology in FAQ. Clear $20/yr pricing. HTML only, no API.
ECE ███·· | Archive ██··· | Method ███·· | API ····· | Price █████
Last verified 2026-04-24

Massey Ratings — 12/25
Free college ratings. Some methodology detail. HTML only, no API.
ECE █···· | Archive ███·· | Method ███·· | API ····· | Price █████
Reported ECE: accuracy by sport published (not ECE) ↗
Last verified 2026-04-24

Odds / Aggregation Tools

Products built around market comparison, line shopping, or expected-value screens.

OddsJam — 8/25
Aggregator, not a model. EV finder uses market consensus. Published pricing, tiered API.
ECE ····· | Archive ····· | Method █···· | API ████· | Price ███··
Last verified 2026-04-24

Unabated — 7/25
Sharp analytics, line shopping. No published accuracy. Tiered pricing.
ECE ····· | Archive ····· | Method ██··· | API ██··· | Price ███··
Last verified 2026-04-24

Picks / Content Sites

Consumer-facing picks, expert selections, and media-driven betting content.

BetQL — 7/25
"AI-powered" picks. Published accuracy claims, not ECE. Consumer subscription.
ECE ····· | Archive █···· | Method █···· | API █···· | Price ████·
Reported ECE: accuracy% claims (not ECE)
Last verified 2026-04-24

SportsLine — 5/25
CBS property. Expert picks. No ECE. Published subscription pricing.
ECE ····· | Archive ····· | Method █···· | API ····· | Price ████·
Last verified 2026-04-24

Action Network — 4/25
Affiliate-driven content. "Entertainment purposes only." Expert picks, no ECE published.
ECE ····· | Archive ····· | Method ····· | API █···· | Price ███··
Last verified 2026-04-24

Covers — 3/25
Consensus picks aggregator. No methodology, no ECE.
ECE ····· | Archive ····· | Method ····· | API ····· | Price ███··
Last verified 2026-04-24

Prediction Markets / Venues

Trading venues and market platforms. These are included for transparency comparison, not because they are model vendors.

Manifold Markets — 21/25
Play-money only, not real-money trading; real forecasters, real leaderboards. Full API.
ECE ██··· | Archive █████ | Method ████· | API █████ | Price █████
Reported ECE: leaderboard Brier scores (not platform-wide ECE) ↗
Last verified 2026-04-24

PredictIt — 15/25
Academic research venue. Historical data downloadable. $850 position cap limits serious use.
ECE █···· | Archive ████· | Method ███·· | API ███·· | Price ████·
Reported ECE: n/a (venue, not a forecaster)
Last verified 2026-04-24

Polymarket — 15/25
Global prediction market. Public CLOB API. On-chain historical data. Not US-legal for direct trading.
ECE ····· | Archive ███·· | Method ██··· | API █████ | Price █████
Reported ECE: n/a (venue, not a forecaster)
Last verified 2026-04-24

Kalshi — 12/25
CFTC-regulated prediction market. Full API. A venue, not a model.
ECE ····· | Archive ██··· | Method ██··· | API █████ | Price ███··
Reported ECE: n/a (venue, not a forecaster)
Last verified 2026-04-24

Enterprise / Data Vendors

API-first or institutional data providers that sell feeds, odds, or probability products.

The Odds API — 13/25
Aggregated book odds, not a model. Transparent pricing, real API.
ECE ····· | Archive █···· | Method ██··· | API █████ | Price █████
Reported ECE: n/a (odds aggregator, not a model)
Last verified 2026-04-24

SportsDataIO — 11/25
API-first sports data. Some ML projections, but no ECE. Published pricing.
ECE ····· | Archive ····· | Method ██··· | API █████ | Price ████·
Last verified 2026-04-24

Sportradar — 10/25
Enterprise-grade. Occasionally publishes internal ECE in whitepapers. Full API. No public pricing.
ECE ██··· | Archive ····· | Method ███·· | API █████ | Price ·····
Last verified 2026-04-24

Stats Perform — 10/25
Institutional-grade probability feeds. Sparse public benchmarks. Custom contracts.
ECE ██··· | Archive ····· | Method ███·· | API █████ | Price ·····
Last verified 2026-04-24

Pinnacle (closing lines) — 1/25
Gold-standard closing lines. Implicit model via the market. No direct API sales.
ECE ····· | Archive ····· | Method █···· | API ····· | Price ·····
Reported ECE: implicit (closing line is a market, not a model)
Last verified 2026-04-24
1. ECE
Do they publish Expected Calibration Error on a known holdout?
5 = reliability table per sport · 0 = never mentioned
2. Archive
Can you see every past prediction, including misses?
5 = full auditable ledger · 0 = no archive
3. Method
Is the methodology documented and reproducible?
5 = open-source or detailed paper · 0 = "proprietary AI"
4. API
Programmatic access for developers?
5 = full REST + SDK + docs · 0 = HTML scraping
5. Price
Pricing visible before you contact sales?
5 = tiered pricing published · 0 = "contact us"
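Given the equal weights, a total is just the sum of the five 0–5 dimension scores. A tiny sketch (dimension names are from the rubric above; the example row is FiveThirtyEight's):

```python
DIMENSIONS = ("ECE", "Archive", "Method", "API", "Price")

def transparency_total(scores):
    """Sum five 0-5 dimension scores into the /25 total (equal weights)."""
    assert set(scores) == set(DIMENSIONS)
    assert all(0 <= s <= 5 for s in scores.values())
    return sum(scores.values())

# FiveThirtyEight's row: 3 + 5 + 5 + 3 + 5
total = transparency_total(
    {"ECE": 3, "Archive": 5, "Method": 5, "API": 3, "Price": 5})  # 21
```

Disagree with equal weights? Re-weight the same five numbers and re-rank; the per-dimension scores are the durable artifact, not the total.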

Why this index exists

There's no shortage of sites selling sports predictions. There's a chronic shortage of sites willing to tell you how accurate they actually are. We built this scorecard because when we went looking for a benchmark to beat, nothing existed. So we made one.

ZenHodl appears on this page for disclosure, but we keep our own score out of the main grouped tables to avoid the obvious conflict of interest. We publish Expected Calibration Error per sport (4.39% on 5,345 NCAAMB games), keep every trade on /results, and document our methodology on our blog. We don't score ourselves 5/5 on methodology because we haven't open-sourced the training code — an honest marker of where we're not yet fully transparent.

This index is updated monthly. If you think a score is wrong, tell us — we'll re-verify.

Methodology: Scores are manually curated based on publicly visible homepage / methodology / pricing pages as of the last-updated date. Each competitor is re-verified at least monthly. We do not score internal / commercial claims we can't independently verify. Archived projects, picks sites, venues, and data vendors are included for transparency comparison, but they are grouped separately to reduce apples-to-oranges comparisons.
Challenge a score →

If a source changed materially and we missed it, send the evidence and we’ll re-check it.