
Why We Reject 65% of Our Own Trading Signals

2026-04-07 strategy discipline polymarket intermediate

We built a system that detects trading signals on Polymarket. It finds dozens per day across 8 sports. We used to trade all of them. Now we trade 35% of them. And we make more money.

This isn't a paradox. It's the most important lesson in prediction market trading.

The Evidence

We did a deep analysis of our CS2 bot's first 33 live trades. The bot was running with a 4-cent minimum edge — basically, it traded everything that looked like an opportunity.

Result: 45.5% win rate, -67 cents. Losing money.

Then we asked: what if we had only taken trades that passed stricter filters?

With those filters, 11 of the 33 trades would have passed. 65% rejected. The 11 survivors had a 55% win rate and made +40 cents. The same 33 signals: without the filter, they lose money; with the filter, they're profitable.

The 22 rejected trades collectively lost 107 cents. The filter didn't just improve performance at the margins. It flipped the sign from negative to positive.

What Each Filter Catches

Low-Edge Signals (under 10 cents)

Our CS2 data showed that trades with less than 10 cents of detected edge had a 38% win rate. These aren't real edges. They're noise that looks like signal.

The math: Polymarket's taker fee is 2 cents. Average slippage on CS2 is 4.9 cents. Model uncertainty adds another 2-3 cents of effective error. A "10-cent edge" is really a 1-3 cent edge after friction. And a 6-cent edge is negative.

We require 10 cents minimum on CS2, 8 cents on traditional sports (lower slippage), and 15 cents for series-level esports (higher model uncertainty).
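In code, the edge check reduces to a few constants. A sketch using the numbers quoted above; the `MODEL_ERROR` midpoint and the non-CS2 slippage figures are illustrative assumptions, not our production values:

```python
# Edge filter sketch. Friction numbers for CS2 are the ones from this post;
# the traditional-sports slippage figure is an assumed placeholder.

TAKER_FEE = 2.0    # Polymarket taker fee, cents
MODEL_ERROR = 2.5  # effective model uncertainty, cents (midpoint of 2-3)

# Minimum detected edge required per market type, cents
MIN_EDGE = {"cs2": 10.0, "traditional": 8.0, "esports_series": 15.0}

# Average observed slippage per market type, cents (CS2 figure is measured;
# the others are illustrative assumptions)
AVG_SLIPPAGE = {"cs2": 4.9, "traditional": 2.5, "esports_series": 4.9}

def net_edge(detected_edge_cents: float, market: str) -> float:
    """Detected edge minus fees, expected slippage, and model error."""
    return detected_edge_cents - TAKER_FEE - AVG_SLIPPAGE[market] - MODEL_ERROR

def passes_edge_filter(detected_edge_cents: float, market: str) -> bool:
    """Hard minimum on the raw detected edge, per market type."""
    return detected_edge_cents >= MIN_EDGE[market]
```

Note that even a 10-cent CS2 edge nets out to well under a cent after friction, which is why the minimum sits at 10 rather than lower.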

Underdog Bets (entry under 38 cents)

Buying a team priced at 35 cents means you're betting their true win probability is higher than the 35% the market assigns. Our CS2 data: the 0-40 cent entry bucket had a 17% win rate.

Why does the market beat the model on underdogs? Because the market integrates information the model doesn't have: roster changes, injuries, team form, tournament context. When a team is priced at 35 cents, the market is collectively saying "we know something." The model often doesn't.

We set a minimum entry of 38 cents. Below that price, the bot doesn't bet at all: on deep underdogs, the risk that the market knows something the model doesn't outweighs any edge the model detects.
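Plugging the observed numbers into the hold-to-settlement payoff shows just how bad that bucket is. A quick worked calculation (variable names are ours, for illustration):

```python
# Expected value per trade, in cents, for the 0-40 cent entry bucket:
# a representative 35-cent entry with the observed 17% win rate.
# Hold-to-settlement payoff: win -> 100, lose -> 0.
win_rate = 0.17
entry = 35.0

ev = win_rate * (100.0 - entry) - (1.0 - win_rate) * entry
# 0.17 * 65 - 0.83 * 35 = 11.05 - 29.05 = -18.0 cents per trade
```

An 18-cent expected loss per trade, before fees and slippage.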

High-Entry Bets (entry over 70 cents)

This is asymmetric risk. Buying at 75 cents means upside is 25 cents (if the team wins) and downside is 75 cents (if they lose). Even at 60% win rate, this is barely breakeven:

0.60 × 25 - 0.40 × 75 = 15 - 30 = -15 cents per trade

You need a 75% win rate just to break even on a 75-cent entry. Our live data showed only a 55% win rate in the 70-85 cent bucket — deeply negative.
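The breakeven arithmetic generalizes: for a hold-to-settlement contract bought at e cents, breakeven is simply a win rate of e/100. A sketch (function names are ours, for illustration):

```python
def expected_value(entry_cents: float, win_rate: float) -> float:
    """EV per contract in cents for a hold-to-settlement position.

    Win: contract settles at 100, profit = 100 - entry.
    Lose: contract settles at 0, loss = entry.
    """
    upside = 100.0 - entry_cents
    downside = entry_cents
    return win_rate * upside - (1.0 - win_rate) * downside

def breakeven_win_rate(entry_cents: float) -> float:
    """Win rate at which EV is zero: p*(100-e) = (1-p)*e  =>  p = e/100."""
    return entry_cents / 100.0
```

At a 75-cent entry, `expected_value(75.0, 0.60)` comes out to -15 cents per trade, matching the arithmetic above, and `breakeven_win_rate(75.0)` is 0.75.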

We cap entries at 70 cents for CS2, 78 cents for traditional sports (where models are more mature).

High-Slippage Fills (over 8 cents)

Slippage is the difference between the price you targeted and the price you actually filled at. Our CS2 bot had 5 trades with more than 10 cents of slippage. All 5 lost.

High slippage usually means the market moved against you between your signal and your fill. If the market moved that much in 6 seconds (our average latency), it probably had information your model didn't — a round result, an economy shift, another sharp trader's analysis.

We now cap fills at 8 cents above the signal price. If the order can't fill within that range, it's rejected entirely.
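One way to enforce this is to express the cap as a limit price, so the order either fills within tolerance or doesn't fill at all. A minimal sketch (names are illustrative, not our production code):

```python
MAX_SLIPPAGE_CENTS = 8.0  # maximum fill price above the signal price

def fill_ok(signal_price_cents: float, fill_price_cents: float) -> bool:
    """Accept a fill only if it landed within the slippage cap."""
    return fill_price_cents - signal_price_cents <= MAX_SLIPPAGE_CENTS

def limit_price(signal_price_cents: float) -> float:
    """Enforce the cap up front: a limit order at signal + cap either
    fills within tolerance or rests unfilled and gets cancelled."""
    return signal_price_cents + MAX_SLIPPAGE_CENTS
```

Submitting at `limit_price(signal)` instead of taking whatever the book offers is what turns "cap fills at 8 cents" from a post-hoc check into a hard guarantee.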

Quote Freeze Windows

From our microstructure research: 7,889 quote freeze events on Kalshi sports markets. Median freeze duration was 58 seconds, median post-freeze price gap was 5.5 cents, max gap was 93 cents.

70.5% of freezes happen within 30 seconds of a score change — exactly when our bot most wants to trade. We block trading during freeze windows and for 30 seconds after detected score changes. This prevents trades that look profitable but are actually being executed against stale prices.
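A minimal sketch of the blocking logic, assuming a monotonic-seconds clock and a `frozen` flag set elsewhere by the freeze detector (all names are illustrative):

```python
COOLDOWN_S = 30.0  # no trading for 30 s after a detected score change

class TradeGate:
    """Blocks trading during quote freezes and after score changes.

    Timestamps are seconds from a monotonic clock, passed in explicitly
    so the gate is easy to test.
    """

    def __init__(self) -> None:
        self.frozen = False                     # set by the freeze detector
        self.last_score_change = float("-inf")  # no score change seen yet

    def on_score_change(self, now: float) -> None:
        """Record a detected score change; starts the cooldown window."""
        self.last_score_change = now

    def can_trade(self, now: float) -> bool:
        if self.frozen:
            return False  # mid-freeze: quotes may be stale
        return now - self.last_score_change >= COOLDOWN_S
```

The point of the gate is that a signal arriving 10 seconds after a score change — exactly when signals look juiciest — is silently dropped rather than executed against a stale book.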

Why Hold-to-Settlement Amplifies Filter Value

Every rejected trade is one we don't have to manage. We never sell positions — we hold to settlement (0 or 100 cents). This means every filter that blocks a bad trade saves us the entire loss, not just a stop-loss amount.

On an active trading strategy, a filter that blocks a -30 cent trade might only save -15 cents (because you'd exit at some loss). On hold-to-settlement, that filter saves the full -30 cents.

The combination of aggressive filtering and hold-to-settlement is more powerful than either alone. Filters reduce variance. Hold-to-settlement maximizes profit when you do trade. Together they produce consistent positive expectancy.

The Failed Active Strategies

Before arriving at "filter hard and hold," we tried three active approaches:

Mean-reversion taker. Buy dips after score changes, sell on reversion. 0% win rate on 10 live trades. The "dip" was new information, not noise. Spreads reprice permanently after score changes — there's no reversion.

Compression sniper. Enter during tight spreads, exit when spreads widen. Negative in backtests and worse live. Tight spreads precede volatility, not stability.

Double-down martingale. Average down on losing positions. Turned $5 losses into $50 losses. Same failure mode as the mean-reversion taker: the price moving against us was information, and doubling into it compounded the mistake.

All three strategies tried to be clever about entries and exits. All three lost money. The winning strategy was the dumbest possible: buy when the model says yes, hold until the game ends, reject everything that doesn't clear a high bar.

The Rejection Rates by Bot

Average: about 60-65% rejection across all bots. We say yes to roughly 3 out of every 10 detected signals.

The trades that survive are characterized by medium-sized edges (10-20 cents, not too small or suspiciously large), medium entry prices (40-70 cents, avoiding both underdogs and favorites), fresh data (quotes under 45 seconds old), and reasonable slippage (under 8 cents).
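Put together, that surviving-trade profile amounts to a single composite pre-trade check. A sketch using the CS2 thresholds (the `Signal` fields are hypothetical, not our production schema):

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """Illustrative shape of a detected signal at decision time."""
    edge_cents: float           # detected edge vs. the model's fair price
    entry_cents: float          # price we would pay per contract
    quote_age_s: float          # age of the quote the edge was computed from
    est_slippage_cents: float   # expected fill price minus signal price

def accept(sig: Signal) -> bool:
    """Composite filter with the CS2 thresholds from this post."""
    return (
        sig.edge_cents >= 10.0                  # minimum detected edge
        and 38.0 <= sig.entry_cents <= 70.0     # no deep dogs, no heavy favorites
        and sig.quote_age_s < 45.0              # fresh quotes only
        and sig.est_slippage_cents <= 8.0       # slippage cap
    )
```

Every condition is a plain comparison; the discipline is in running all of them on every signal and walking away when any one fails.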

These are boring trades. No dramatic underdogs. No huge edges. No clever timing. Just consistent, filtered, hold-to-settlement positions in games where the model genuinely disagrees with the market by a meaningful amount.

The Counterintuitive Math

Most retail bettors think the path to profit is finding more signals. It's not. The path to profit is having the discipline to skip the bad signals you've already found.

Every signal you take is an opportunity to lose money. The expected value of a bet equals the probability you're right times the upside minus the probability you're wrong times the downside. Bad signals have negative expected value. Taking them — even if you win sometimes — costs you money over a large enough sample.

The CS2 audit proved this empirically. The same bot, the same signals, the same execution: -67 cents without filters, +40 cents with filters. The model didn't get smarter. The bot just learned to say no.

Discipline is the strategy. Rejection is the alpha.


The execution filters described here are taught in Module 5 of our course. Module 4 covers the backtesting methodology that exposes which filters actually matter (spoiler: most of the ones people obsess over don't).
