We run eleven automated trading bots on Polymarket across NBA, NHL, MLB, NCAAMB, NCAAWB, CFB, NFL, soccer, tennis, CS2, and LoL. Every fill is logged on-chain. Every trade settles to public data. There is nowhere to hide.
This post is the honest accounting: which markets the bots make money on, which they lose money on, and which we have killed entirely. The goal is not to brag about the winners — it is to show you the shape of a real automated trading book, including the parts that did not work.
The Aggregate Picture
Across all sports and all market types, the bots are profitable. The headline number is not interesting on its own — what matters is the decomposition. A profitable book usually contains several losing buckets balanced by a few large winners. Knowing which is which is the difference between a bot you can scale and a bot that will silently drift into red.
The decomposition we run weekly is a three-axis cut: sport, market type (moneyline / spread / total), and edge band (5-10c / 10-15c / 15-20c / 20-25c). Eleven sports times three market types times four edge bands is up to 132 cells; in practice it is roughly 100, because not every sport trades every market type. Most are too small to draw conclusions from. About fifteen are statistically meaningful. Of those fifteen, eleven are positive and four are negative. The four negative cells get filters applied or get killed entirely.
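As a sketch of what the weekly cut looks like mechanically, here is the banding-and-grouping step in plain Python. The trade records, field layout, and MIN_TRADES threshold are illustrative, not our production schema:

```python
from collections import defaultdict

# Hypothetical trade records: (sport, market_type, edge_cents, pnl_cents)
trades = [
    ("NBA", "moneyline", 9.0, 3.2),
    ("NBA", "moneyline", 12.5, -1.1),
    ("NHL", "total", 7.0, -5.0),
    ("NBA", "moneyline", 11.0, 4.0),
]

def edge_band(edge):
    """Bucket a raw edge (in cents) into the bands used for the weekly cut."""
    for lo, hi in ((5, 10), (10, 15), (15, 20), (20, 25)):
        if lo <= edge < hi:
            return f"{lo}-{hi}c"
    return None  # outside the traded range

# One cell per (sport, market type, edge band) triple
cells = defaultdict(lambda: {"n": 0, "pnl": 0.0})
for sport, mtype, edge, pnl in trades:
    band = edge_band(edge)
    if band is None:
        continue
    cell = cells[(sport, mtype, band)]
    cell["n"] += 1
    cell["pnl"] += pnl

# Only cells with enough trades are worth reading; the real cutoff is
# much higher than this toy value.
MIN_TRADES = 2
meaningful = {k: v for k, v in cells.items() if v["n"] >= MIN_TRADES}
```

The point of the structure is that decisions (filter, kill, scale) are made per cell, never on the aggregate.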
Moneyline vs Spread vs Total
Moneyline markets are our cleanest category. The contracts pay out cleanly on a single binary outcome — did the team win — and the model has the best calibration there. Across moneyline, our average Closing Line Value is positive 2.8 cents per trade, with the strongest sports being NCAAMB (+4.6c) and NHL (+3.1c). Detailed CLV per sport is on our public CLV dashboard.
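For readers unfamiliar with the metric: CLV compares your entry price to the market's closing price, signed so that positive means the line moved toward you after entry. A minimal sketch, assuming prices quoted in cents; the function name and sign convention are ours for illustration, not the dashboard's:

```python
def clv_cents(entry_price, closing_price, side="buy"):
    """Closing Line Value in cents: the price move between entry and close,
    signed so that positive means the close moved in our favor."""
    move = closing_price - entry_price
    return move if side == "buy" else -move

# Bought YES at 54c, market closed at 57c: clv_cents(54, 57) -> 3
```

Averaging this over trades gives the per-sport figures above.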
Spread markets are profitable but smaller. The fair-probability math on a spread is more complex (you need a point-margin distribution, not just a binary), and the edges decay faster as the game progresses. We run spread bots on NBA and NCAAMB only. Average CLV is positive 1.4 cents. Smaller than moneyline but consistently above the cost line.
Total markets we have largely killed. Our backtest showed positive expected value on NBA totals at the entry point, but the realized P&L on hold-to-settlement was negative 5.2 cents per trade. The discrepancy traces back to closing-line liquidity: total markets close on much thinner books than moneylines, and our model's fair value at close was systematically wrong by a small margin that our backtest's idealized closing price did not capture. NBA TOTAL is excluded by default. NHL totals were similar and also disabled.
The lesson: market type matters more than most strategies admit. Even with the same model output and the same edge filter, moneyline / spread / total can have wildly different realized economics.
In-Play vs Pre-Game
Within moneyline, we split between pre-game entries and in-play entries.
Pre-game entries are taken in the hour before tipoff. The model has the most stable feature set (no live updating), the orderbook is at maximum depth, and slippage is minimal. The downside is that the market has also had hours to converge on a price, so edges are smaller. Average pre-game edge is 8-10 cents and average pre-game CLV is positive 3.2 cents.
In-play entries are taken whenever the model's fair probability diverges from market by more than the threshold. Edges can be much larger (12-20 cents on a momentum swing) but also more often wrong (the in-play model is reacting to fresher data than the pre-game model and has more variance). Average in-play CLV across sports is positive 2.4 cents — meaningfully smaller than pre-game CLV per trade, but the volume is higher, so the dollar contribution is larger.
We run a hard pre-game side filter on NBA and NHL — if the pre-game model says one team is the favorite and the in-play model fires for the other side, we skip the trade. That single filter improved NHL CLV by about 1.5 cents per trade.
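The filter itself is close to a one-liner. A sketch under assumed names (pregame_prob_home and signal_side are illustrative; the real bots' interfaces differ):

```python
def side_agrees(pregame_prob_home, signal_side):
    """Hard pre-game side filter (sketch).

    pregame_prob_home: the pre-game model's P(home team wins).
    signal_side: "home" or "away", the side the in-play model wants to buy.
    Returns False when the in-play signal fires against the pre-game favorite.
    """
    pregame_fav = "home" if pregame_prob_home >= 0.5 else "away"
    return signal_side == pregame_fav

# Pre-game says home is 62% -> skip an in-play signal on the away side
# side_agrees(0.62, "away") -> False
# side_agrees(0.62, "home") -> True
```

The design choice is deliberate crudeness: the filter throws away some genuine live edges, but it cheaply removes the trades where the in-play model is most likely overreacting.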
The Bots We Killed
Three bot configurations have been killed entirely after extended live testing.
The first was an NBA mean-reversion bot that bought dips and sold spikes. Backtest looked promising. Live result was a slow bleed of about 1.2 cents per trade across 400 trades. Diagnosis: the dips and spikes the bot was fading were almost always information-driven (a turnover, a clutch shot, a pace shift) rather than noise. We were systematically betting against new information. Killed in March.
The second was a SPREAD/TOTAL taker that tried mean-reversion exits on score-driven repricing. 10 of 10 stop-losses in shadow mode, zero wins. The reasoning is structural: SPREAD/TOTAL prices reprice permanently when the score changes, because the score change is real. There is no mean reversion to wait for. Killed before any real money was risked.
The third was a CFB moneyline bot that ran during the 2025-26 college football season. Negative across every edge bucket. The CFB model itself is the issue — feature coverage is patchy across teams, and home-field advantage is much larger and more team-specific than in other sports. CFB is currently disabled in dynamic_sizing.BASE_ALLOCATION until the model is rebuilt with better coverage.
What the Decomposition Reveals
Three patterns recur across sports.
The middle of the edge distribution is where money lives. Edges below 5 cents get eaten by costs. Edges above 20 cents are usually model errors masquerading as opportunity. The reliable profit is in the 8-18 cent band. Every sport's profitable bucket sits in roughly that range.
Slippage scales with sport liquidity. NBA has the deepest books and lowest slippage; CS2 has the thinnest and highest. We compute per-sport effective costs and require larger raw edges from the thinner books to get past the cost gate. The cost adjustment is not optional — without it, the thin-book sports look more profitable than they are.
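The band-plus-cost logic in the last two paragraphs can be sketched as a single gate. The cost table and thresholds below are made up for illustration, not our real numbers:

```python
# Illustrative per-sport effective costs in cents (not real figures)
EFFECTIVE_COST_CENTS = {"NBA": 1.0, "NHL": 1.5, "CS2": 3.5}

# Below the band: eaten by costs. Above it: likely a model error.
EDGE_BAND_CENTS = (8.0, 18.0)

def passes_gates(sport, raw_edge_cents):
    """Sketch of the edge-band clamp plus per-sport cost gate."""
    lo, hi = EDGE_BAND_CENTS
    if not (lo <= raw_edge_cents <= hi):
        return False  # outside the reliable band
    cost = EFFECTIVE_COST_CENTS.get(sport)
    if cost is None:
        return False  # unknown sport: no trade
    # Illustrative rule: the edge net of costs must still clear the band
    # floor, so thin-book sports need larger raw edges to trade at all.
    return raw_edge_cents - cost >= lo
```

The same 10-cent raw edge trades on NBA's deep book but not on CS2's thin one, which is exactly the asymmetry the cost gate exists to enforce.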
Market efficiency improves over time. A trade that looked great in 2024 backtests is harder to execute in 2026 because more sharp traders have entered the market. Our walk-forward methodology catches this; static backtests do not.
What We Publish
The full per-sport CLV breakdown is at zenhodl.net/admin/clv. Per-bucket P&L is at zenhodl.net/results. Recent trades are surfaced on zenhodl.net/transparency. The public benchmark of model performance versus held-out games is at zenhodl.net/methodology. Nothing about the bots is hidden — including the parts that did not work.
If you are building your own automated book, the most valuable thing we can offer is not a list of what we made money on. It is the shape of how a real book decomposes when you cut it three ways. The aggregate is profitable; the cells are mixed; the disciplined ones win.
The Bottom Line
Automated trading on prediction markets is profitable when you decompose ruthlessly, kill the losing buckets quickly, and resist the urge to "fix" a bot that is working. The bots we still run are the ones that survived months of cells getting cut. The ones we no longer run are not failures — they are the cost of finding the ones that work.
Live trade ledger and per-sport CLV at zenhodl.net/results and zenhodl.net/clv. The full bot architecture is taught in the bot course included with every API plan.