Live prices move in seconds after a goal, a card, or a break of serve. Do they tell the truth, on average? Are they sharp, or do they drift? This field guide shows how to test in-play odds for efficiency, with clear steps you can repeat.
The match is tense. A team scores from a fast break. Your screen blinks. Odds jump, the market locks, then opens again. The home team, once a small favorite, is now a clear one. It feels right. But is it right in a strict sense?
In-play betting is a small lab. Many events, short cycles, rich price moves. That mix lets us test if live odds are close to fair. We can measure how fast they adjust. We can check if they stay well‑calibrated over time. Below is a hands-on way to do it, with plain tools and open math.
In-play markets are not magic. They are a blend of signal, delay, and risk control. Prices must reflect new info. But they also must manage latency, feed cuts, and trader rules. So “efficiency” here means something narrow and testable: odds should be well‑calibrated and fast to settle near a fair level.
This is not a full debate on the efficient market hypothesis. We keep it simple. We score odds the way weather teams score rain forecasts. If 0.60 odds mean 60% win chance in a set of like spots, we call that good. If not, we call it bias, and we try to find where it comes from.
The in-play market is not one place. It can be an exchange, a book with a spread, or an odds feed that many books use. Markets may pause after goals or key points. Liquidity can change by league, minute, and score. Your study should note which books or feeds you use, and when markets go “suspended.”
Good data is the hard part. You need time‑stamped snapshots of odds, plus the match state. For football, mark minute, score, red cards, and big chances if you have them. For tennis, mark serve, game score, and break points. Save a snapshot each time the price ticks, and also on fixed time steps, like every 5 seconds.
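A minimal sketch of one snapshot record, in Python. The field names below are placeholders for whatever your feed provides, not a fixed standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OddsSnapshot:
    # Field names are illustrative; adapt them to your own feed.
    ts_utc: float                  # snapshot time on your reference clock (epoch seconds)
    match_id: str
    minute: int                    # match minute from the event clock
    score_home: int
    score_away: int
    red_cards_home: int
    red_cards_away: int
    odds_home: Optional[float]     # decimal odds; None while the market is suspended
    odds_draw: Optional[float]
    odds_away: Optional[float]
    suspended: bool
```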
Sync is vital. Odds feeds have delay. TV has delay. Livescore can lag too. If you match odds to events, pick one clock as your truth. Then estimate offsets for each feed. Write them down. If the market locks on goals, note the lock time and the open time. These details explain many “edges” that vanish on clean data.
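One simple way to estimate a feed's offset, assuming you can match a handful of goal timestamps between that feed and your chosen reference clock. The function and its inputs are illustrative, not part of any feed's API.

```python
import statistics

def estimate_feed_offset(reference_goal_times, feed_goal_times):
    """Estimate a feed's clock offset from matched goal timestamps.

    Both lists hold epoch seconds for the same goals, in the same order.
    The median difference is robust to one or two badly matched goals.
    """
    diffs = [feed_t - ref_t for ref_t, feed_t in zip(reference_goal_times, feed_goal_times)]
    return statistics.median(diffs)

# Example: this feed lags the reference clock by roughly 4 seconds.
offset = estimate_feed_offset([100.0, 2300.0, 4100.0], [104.2, 2303.9, 4104.5])
```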
Editor’s note: if you are new to the space and want a simple look at safe sites, payments, and KYC basics, see this guide to online casinos. It is not a betting tip. It helps you judge platforms and read terms with care.
Ethics matter. Use only legal data. Follow local law. If a league bans live bets in your region, do not scrape or place bets. This is a study, not advice. Please play safe and set limits.
First, turn odds into implied chances. Then remove the margin (also called vig). For decimal odds O, the raw implied chance is 1/O. Sum all raw chances in a market (home, draw, away), then scale them so the sum is 1.0. Save both raw and de‑vigged values. Work in log‑odds if you test time series, as it plays nice with jumps.
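A small sketch of that pipeline in Python. The proportional de-vig below is the simple scaling described above; books may shape their margin in other ways.

```python
import math

def devig_1x2(odds_home, odds_draw, odds_away):
    """Turn decimal 1X2 odds into raw and margin-free implied chances."""
    raw = [1.0 / o for o in (odds_home, odds_draw, odds_away)]
    overround = sum(raw)                 # > 1.0 when the book holds a margin
    fair = [p / overround for p in raw]  # proportional de-vig: scale so the sum is 1.0
    return raw, fair, overround

def log_odds(p):
    """Log-odds transform, handy for time-series tests on fair chances."""
    return math.log(p / (1.0 - p))

raw, fair, overround = devig_1x2(2.10, 3.40, 3.80)
# `fair` sums to 1.0; overround - 1.0 is the margin you removed.
```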
Next, score calibration. The Brier score is a simple and strict tool. It punishes both over‑ and underconfidence. For richer tests, see proper scoring rules. Plot a reliability curve: group cases by quoted chance, and compare to real hit rate. A perfect line sits on the 45° diagonal.
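A hedged sketch of both checks with NumPy; the bin edges and counts are choices, not a standard.

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between quoted chances and 0/1 results."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    return float(np.mean((probs - outcomes) ** 2))

def reliability_bins(probs, outcomes, n_bins=10):
    """Group cases by quoted chance and compare to the real hit rate per bin."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            # (mean quoted chance, observed hit rate, count) for this bin
            rows.append((float(probs[mask].mean()), float(outcomes[mask].mean()), int(mask.sum())))
    return rows
```

Plot the quoted chance against the observed hit rate per bin; a well-calibrated market hugs the 45° diagonal.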
You can also test how prices move after events. Measure time to a new level after a goal. Watch for an overshoot, then a slow pull back. Count how often this happens, and how large the swing is. This points to short “windows” where the market is thin.
Beware bad habits. Do not cherry‑pick leagues. Do not drop matches with messy data unless you report how many and why. Avoid data snooping. If you try many tests, hold out some data for a final check.
For plots and tools, a standard guide on a calibration curve can help. It shows how to build bins and fit a curve to measure drift.
The table below is a small, tight plan you can run in any sport with point‑in‑time odds. It covers calibration, margin, event response, martingale tests, and information gain vs a base model.
| Check | Efficiency claim | Metric | Data needed | Red flags |
| --- | --- | --- | --- | --- |
| Calibration (reliability) | Quoted chances match real hit rates. | Brier score; slope of reliability curve. | Odds snapshots, final results, time bins. | Flat over/under‑pricing; drift by minute or score. |
| Vig‑adjusted pricing | After de‑vig, prices are unbiased. | Mean error vs pre‑match prior; log loss delta. | Raw odds, margin model, pre‑match priors. | Bias when leading vs trailing; league tier effects. |
| Event response | Prices jump near‑instant to a new fair level. | Half‑life to new level; overshoot size. | Event times; tick‑by‑tick odds. | Delay on lock; thin books; repeated overshoots. |
| Martingale / runs | No drift you can use after costs. | Runs test; variance tests on log‑odds. | Clean time series of fair odds. | Serial links that hint at naive edges. |
| Information content | Live odds add at least as much info as base model. | KL divergence; log loss delta vs base. | Model probs and market probs, same time stamps. | Where market shines (late game, high leverage). |
Note: “Martingale” here is from probability theory; see a short entry at the Encyclopedia of Mathematics.
Old work on race and sports pools set the base. A clear view comes from Thaler and Ziemba on parimutuel betting markets. It shows how crowd views, rules, and costs shape prices. Not all bias is “free money.” Many gaps pay less than fees.
The favorite–longshot bias is a classic theme. See this brief note at NBER. In live play, form can flip this bias by state. A longshot who leads can turn into a short favorite fast, and some books may shade a bit to protect risk.
Let us sketch a small test in football. Get open event data for match states, such as the StatsBomb Open Data set. For prices, use a legal odds feed or a partner book. Sync all times to the event clock. Work on 1X2 (home/draw/away) odds. Save a snapshot every 5 seconds, plus on every price change, and when goals or red cards land.
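A sketch of the merge step with pandas, assuming both tables carry timestamps on the same reference clock; file and column names are placeholders.

```python
import pandas as pd

# Placeholder file and column names; adapt to your own exports.
odds = pd.read_csv("odds_snapshots.csv")    # ts_utc, match_id, odds_home, odds_draw, odds_away
events = pd.read_csv("match_events.csv")    # ts_utc, match_id, minute, score_home, score_away, event_type

# merge_asof needs both frames sorted on the time key.
odds = odds.sort_values("ts_utc")
events = events.sort_values("ts_utc")

# Attach the latest known match state to each odds snapshot.
panel = pd.merge_asof(
    odds, events,
    on="ts_utc", by="match_id",
    direction="backward",
)
```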
Make a base model for pre‑match edge. A simple one is a team strength model like the one in the FiveThirtyEight soccer methodology. Blend it with a time‑decay rule so live odds do not drift too far from known strength unless events force it.
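A minimal sketch of such a blend. The linear time-decay weight is an assumption for illustration, not a fitted schedule, and `prematch_p` / `live_p` stand in for whatever your strength model and live estimate produce.

```python
def blended_home_chance(prematch_p, live_p, minute, total_minutes=90):
    """Blend a pre-match strength estimate with a live estimate.

    Early on, the pre-match prior carries most of the weight; as the clock
    runs down, the live estimate takes over. The linear decay is a placeholder.
    """
    w_live = min(max(minute / total_minutes, 0.0), 1.0)
    return (1.0 - w_live) * prematch_p + w_live * live_p

# At minute 30, a 0.55 pre-match chance and a 0.70 live read blend to 0.60.
p = blended_home_chance(0.55, 0.70, minute=30)
```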
Now test calibration by minute bins. For each minute, group all cases where the live fair chance for “home win” is in, say, [0.45, 0.55]. Check how often the home team then wins. Plot all bins. Do the same for “draw” and “away.” Repeat by game state: level, leader by 1, leader by 2+. This will show if odds get too bold late in the game, or too shy right after a goal.
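A possible way to run those bins with pandas, assuming a merged panel like the one sketched above; column names and bin edges are placeholders.

```python
import pandas as pd

def calibration_by_minute_and_state(panel, lo=0.45, hi=0.55):
    """Calibration check for one chance band, split by minute bin and game state.

    `panel` is assumed to carry: minute, fair_home, goal_diff (home minus away),
    and home_won (0/1). Column names are placeholders.
    """
    df = panel.copy()
    df["minute_bin"] = pd.cut(df["minute"], bins=range(0, 100, 10), right=False)
    df["state"] = df["goal_diff"].map(
        lambda d: "level" if d == 0 else
                  "leading by 1" if d == 1 else
                  "leading by 2+" if d >= 2 else "trailing")
    band = df[(df["fair_home"] >= lo) & (df["fair_home"] < hi)]
    return band.groupby(["minute_bin", "state"], observed=True).agg(
        quoted=("fair_home", "mean"),
        hit_rate=("home_won", "mean"),
        n=("home_won", "size"),
    )
```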
Then study event response. For each goal, track the live fair odds on the main side (the new leader) from 10s before to 60s after the event. Measure the first stable price after the market unlocks. Time to half of the full jump is your half‑life. Overshoot is peak minus stable value. Summarize by league and by book. Odds that overshoot and then scale back are a sign of thin book depth or fast risk moves.
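A rough sketch of that measurement for a single goal. The “stable” level here is a simple mean of ticks after 30 seconds, which is an assumption for illustration, not a rule.

```python
def event_response(ticks, stable_after=30.0):
    """Summarize how fair odds move around one goal.

    `ticks` is a list of (seconds_from_goal, fair_prob) for the new leader,
    covering roughly -10s to +60s around the goal.
    """
    pre = [p for t, p in ticks if t < 0]
    post = [(t, p) for t, p in ticks if t >= 0]
    tail = [p for t, p in post if t >= stable_after]
    if not pre or not post or not tail:
        return None  # not enough ticks around this goal to score it
    start = pre[-1]                        # last price before the goal
    stable = sum(tail) / len(tail)         # first "settled" level after re-open
    jump = stable - start
    half_life = next((t for t, p in post if abs(p - start) >= abs(jump) / 2), None)
    peak = max(post, key=lambda tp: abs(tp[1] - start))[1]
    overshoot = abs(peak - start) - abs(jump)
    return {"jump": jump, "half_life_s": half_life, "overshoot": overshoot}
```

Aggregate the returned half-lives and overshoots by league and by book to see where the response is slow or jumpy.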
Live odds tend to be sharp in top leagues with strong data, clean feeds, and high volume. In my tests, the best curves came from the last 15 minutes, when the score and time cap the state space. Books also guard these minutes well. Lock times are short. Slippage is low.
Weak spots pop up in low tiers, youth games, or odd cups. Rare events cause trouble: a second red card, a goalie injury, or VAR swings. A study on game flow in hoops, like this work on win probability in basketball, shows how big swings can fool the eye. In live football, the same idea holds: the path is noisy, but the end odds should still tell a fair story on average.
Odds reflect not just skill, but trust. If a match is under a cloud, or if a feed is poor, your study will pick up noise that is not “market error.” Read basic sports betting integrity guidance to frame your checks.
It helps to track alerts and high‑risk spots. Public logs like the IBIA integrity reports can flag leagues or regions with issues. If you see odd bias in such leagues, do not rush to trade it. The cause may be outside your model.
Keep a lab book. Note versions of code and data. Save the list of matches you drop and why. Store the final CSVs and a small “toy” set you can share. Add a short notebook that runs the core tests in minutes. This lets peers check your work.
When you compare market odds to a base model, measure information content two ways. KL divergence, as in scipy.stats.entropy, shows how far the market sits from your model at the same time point. The log loss delta against final results then shows which side carries more signal. Check early, mid, and late game blocks.
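A small sketch of both metrics, using scipy.stats.entropy for the KL term; variable names are placeholders.

```python
import numpy as np
from scipy.stats import entropy

def kl_market_vs_model(market_probs, model_probs):
    """KL divergence between market and model 1X2 distributions at one time point."""
    return float(entropy(market_probs, model_probs))

def log_loss(probs_for_winner):
    """Mean negative log of the chance a source gave the eventual result."""
    p = np.clip(np.asarray(probs_for_winner, float), 1e-12, 1.0)
    return float(-np.mean(np.log(p)))

# market_p and model_p hold the chances each source quoted for the actual outcome.
# A negative delta means the market carried more information than the base model:
#   delta = log_loss(market_p) - log_loss(model_p)
```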
State your guardrails. Say if you skip live shots data. Say if you cap minutes due to few cases. Say how you handle extra time. Note which books or feeds you used. Note your region. Small details block big mistakes.
In-play odds can be sharp, fast, and fair in big, clean markets. They can wobble when data is thin or when rare events hit. With basic tools—de‑vig, Brier score, event response tests, and KL checks—you can measure that line. This is not a hunt for a trick. It is a way to see where the market works, where it lags, and how to build better models with it.
This article is for research. It is not financial advice. If you bet, set limits. Take breaks. If you need help, seek support in your country.
Written by the Editorial Analytics Team. We work with sports data, scoring rules, and model audits. We test markets, share code, and mark our methods. Contact the team for notes on data sources and replications.
Last updated: 16 March 2026