Noisy Traders Are Not Dumb Money

The smart-money-vs-dumb-money framing misreads how prediction markets actually work. Drawing on three pieces of forecasting research (NBER, Wharton/INSEAD BIN model, Kapoor and Wilde on cognitive search), functionSPACE argues noise is a structural requirement, not a bug - noisy traders fund the probability space that informed participants sharpen as evidence arrives - and that continuous probability markets harness noise as the shape of the curve rather than treating it as a cost.

Forecasting

By: Igor (@justigor)

There is an accepted dichotomy between "smart money" and "dumb money" that has persisted largely unchallenged in prediction markets: informed capital on one side, retail exit liquidity on the other. It is a clean narrative. It is also, according to more than a decade of academic research, a fundamental misreading of how traders in these markets actually interrelate.

This short article conveys what three different but complementary strands of forecasting research say about noisy traders, and explores more generally what the term means in the context of prediction markets.

Markets are algorithms. Noise is a feature.

All the way back in 2012, Snowberg, Wolfers, and Zitzewitz published an NBER working paper surveying the state of prediction market research. The paper made an early case that "prediction markets function, at their core, as information aggregation algorithms." If you're reading this, you probably know that three inter-related properties drive their accuracy: the market mechanism aggregates dispersed information; financial incentives encourage truthful revelation; and the existence of a persistent market creates longer-term incentives for participants to specialise in discovering and trading on novel information.

But that mechanism has a nuance that the "smart versus dumb" framing obscures. For informed traders to express their views, they need counterparties. Without participants willing to trade on weaker signals, imperfect models, or simple interest in the question, the informed trader has no one to trade against. The mechanism freezes.

The NBER research is explicit that design flaws in certain markets generally trace back to a lack of noisy traders, which thins the market and reduces the incentive for informed participants to discover and act on superior information.

Bias is predictable. Noise is not.

Moving past the static smart-versus-dumb framing, a more interesting question arises: what actually generates forecasting error?

Research from Wharton and INSEAD offers an interesting answer. Their BIN model separates forecast error into three components:

  1. bias;

  2. information; and

  3. noise.

Bias is systematic: a consistent tendency to over- or under-estimate, e.g. predictable psychological favoritism. Information is the useful variability that correlates with the actual outcome, for instance superior research. Noise is random variability that does not correlate with the outcome at all.

We can think of noise as inconsistency. A random background static in our brain that makes us give two different answers to the exact same question, depending on the day.

The researchers found that when comparing regular forecasters to the top two percent of "superforecasters", the pros are mostly just...less noisy. Roughly 50% of the accuracy improvement can be attributed to noise reduction, not better information or less bias as often touted. The remaining 50% splits roughly evenly between the two.

This tells us that noise can be reduced through better forecasting practices (for humans and agents alike).
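The BIN decomposition can be sketched numerically. The simulation below is a toy illustration with invented parameters, not the Wharton/INSEAD dataset: forecasters' estimates mix a systematic bias, a partial read of the truth (information), and random noise, and we watch what happens to the Brier score when only the noise term shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events, n_forecasters = 2000, 50

# True event probabilities and realised binary outcomes
p_true = rng.uniform(0.05, 0.95, n_events)
outcome = rng.binomial(1, p_true)

def mean_brier(bias, info_weight, noise_sd):
    """Mean Brier score for forecasters whose estimates mix a systematic
    bias, a partial read of the truth (information), and random noise."""
    signal = info_weight * p_true + (1 - info_weight) * 0.5  # shrunk toward 50/50
    est = signal[None, :] + bias + rng.normal(0, noise_sd, (n_forecasters, n_events))
    est = np.clip(est, 0.01, 0.99)
    return float(np.mean((est - outcome[None, :]) ** 2))

regular = mean_brier(bias=0.05, info_weight=0.6, noise_sd=0.20)
quieter = mean_brier(bias=0.05, info_weight=0.6, noise_sd=0.05)
print(regular > quieter)  # True: same bias, same information, less noise, better score
```

Holding bias and information fixed while cutting the noise standard deviation is exactly the "superforecaster" pattern the researchers describe.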

Forecasting as cognitive search

Complementary research from Kapoor and Wilde at the Wharton School adds that forecasting can be framed as a problem of cognitive search, where accuracy also depends on the joint structure and complexity of the question being asked. (Think of problems as prediction questions or events; I'll call them questions.) Crossing the two dimensions, high versus low complexity and well versus ill structure, yields four quadrants of question type.

The analysis, drawn from over fourteen thousand forecasts across year-long tournaments, confirms this hierarchy empirically.

The interaction across these quadrants matters:

  • Low-complexity, well-structured questions (Q3) = most forecastable.

  • High-complexity, ill-structured questions (Q1) = least.

Why question structure generates noise, not just the agent

This reveals something more subtle about where noise actually comes from. When a question is ill-structured, forecasters are forced to rely heavily on individual perception rather than data. Perceptions are prone to cognitive traps, and the resulting forecasts diverge widely. When the question is also complex, the cognitive load is highest.

If you ask 100 people to solve a standard math equation, their "cognitive search" is easy because the rules are clear - you get a consistent answer. But if you ask 100 people to mentally calculate the future of a brand new, highly complex tech market, you get 100 wildly different, scattered answers. Their brains are straining to connect ambiguous dots, and everyone connects them differently.

That massive, unpredictable variation - the scattered inconsistency caused by the brain struggling to make sense of a complex mess - is exactly what noise is. So don't feel so bad the next time you get it completely wrong; maybe pick a better-structured question instead.
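The math-equation versus tech-market contrast can be made concrete with a toy simulation (illustrative numbers only, not from the tournament data): dispersion across answers to the same question is the noise the BIN model isolates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Well-structured question: everyone applies the same clear rule.
well_structured = np.full(n, 7 * 6)  # 100 answers to "what is 7 * 6?"

# Ill-structured question: each forecaster connects ambiguous dots differently.
ill_structured = rng.lognormal(mean=3.0, sigma=1.0, size=n)  # scattered guesses

# Standard deviation across answers measures the noise each question generates.
print(well_structured.std(), ill_structured.std())
```

The agents are identical in both cases; only the structure of the question changes, and so does the noise.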

Some questions just force you into noisy updates; we can all be smart money.

The Bayesian corrective

The only reliable corrective the research identifies is Bayesian belief updating, i.e. adjusting forecasts incrementally as new evidence arrives, without over- or under-weighting that evidence. They find that Bayesian reasoning improves accuracy across all four quadrants of question type. But critically, non-Bayesian updating carries a significant penalty for "Danger Zone" topics - the exact category where noise is already highest and where, arguably, forecasting matters most.
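Bayesian updating itself is mechanically simple; the discipline lies in weighting the evidence honestly. A minimal sketch in odds form (the 40% prior and 3x likelihood ratio are invented for illustration):

```python
def bayes_update(prior, likelihood_ratio):
    """Update a probability via Bayes' rule in odds form, where
    likelihood_ratio = P(evidence | event) / P(evidence | no event)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = bayes_update(0.40, 3.0)  # evidence three times likelier if the event occurs
print(round(p, 3))  # 0.667
```

Over-weighting the evidence means using a likelihood ratio larger than the evidence warrants; under-weighting means shrinking it toward 1, which leaves the prior untouched.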

So, noise = bad?

This matters when trying to understand prediction markets as a commercial surface, not just a forecasting game. Bias is at least partially observable and therefore correctable; noise is not. But that is not the flaw it sounds like: noise is required for a well-functioning market, something to build around rather than away from.

A prediction mechanism that wants to facilitate high forecast quality cannot simply attract smarter participants. It has to be designed to capture the benefits of noise, i.e. perfectly valid but potentially imprecise predictions, while dampening its effects on information output.

The counterparty is the curve

This is where we can map these learnings onto market design. The limit order book is usually the default mechanism at the heart of these discussions, but it should be thought of as an early-stage invention, not the epitome of forecasting mechanisms. The difference becomes apparent when you compare how three market architectures handle noise.

Binary CLOB: noise as adverse selection cost

In the CLOB model used by Polymarket and Kalshi, a noisy trader typically buys or sells into resting limit orders posted by market makers. The market maker earns the spread when trading against uninformed flow, but loses to adverse selection when an informed participant trades aggressively on new information the maker's quotes haven't yet absorbed. So the maker prices the spread wide enough to cover the expected cost of being wrong. Noisy traders, in this framing, are what make market-making viable: their flow is what the spread is designed to capture, subsidising the losses the maker takes against informed participants.

The mechanism works, especially for well-structured, low-dimensional questions approaching resolution. But the binary frame compresses information. A trader who is genuinely confused and a trader who is confidently wrong both hit the same price. The market absorbs them as the same signal, and the noise they generate is indistinguishable from information until after settlement.
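The spread-setting logic can be sketched with a stylised Glosten-Milgrom model (my stylisation, not drawn from the article's sources): informed traders know the outcome, noise traders buy or sell at random, and the maker quotes each side at the outcome probability conditional on seeing that order.

```python
def gm_quotes(p, informed_frac):
    """Stylised Glosten-Milgrom quotes for a binary contract.
    p: maker's prior probability that the event occurs.
    informed_frac: share of order flow that knows the outcome.
    Ask = P(event | buy order), Bid = P(event | sell order)."""
    # Informed traders buy iff the event is true; noise traders buy half the time.
    p_buy = informed_frac * p + (1 - informed_frac) * 0.5
    ask = (informed_frac * p + (1 - informed_frac) * 0.5 * p) / p_buy
    p_sell = informed_frac * (1 - p) + (1 - informed_frac) * 0.5
    bid = ((1 - informed_frac) * 0.5 * p) / p_sell
    return bid, ask

print(gm_quotes(0.5, 0.0))  # (0.5, 0.5): all noise, no spread needed
print(gm_quotes(0.5, 0.3))  # spread opens as the informed share grows
```

The spread is zero when the flow is pure noise and widens with the informed share - a direct illustration of noisy traders subsidising the maker's adverse-selection losses.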

AMM: noise as subsidised cost

AMMs in prediction markets (usually implemented via an LMSR) remove the need for an outside market-maker role. The algorithm itself posts continuous prices and accepts trades on both sides. When a noisy trader moves the price, the AMM absorbs the cost directly. It is designed to lose money to informed participants, with those losses funded either by an explicit subsidy from the market operator or by liquidity providers who accept the risk in exchange for fees.

The result is a market that is always available and always liquid, which tries to solve the thin-market problem the NBER research identifies. The AMM cannot tell whether a trade that shifted the price from 40 to 55 cents was placed by someone who overreacted to a breaking headline or by someone running a more sophisticated model that genuinely warrants the move. Both look the same on the curve. The noise is absorbed, continuously and automatically, but it is not decomposed. The cost of being wrong about which flow is informed and which is noise is borne by the subsidy or the LP pool, not by the mechanism's ability to learn from the error.
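The LMSR mechanics behind this are standard: a cost function C(q) = b * ln(exp(q_yes/b) + exp(q_no/b)) prices every trade, and the operator's worst-case subsidy for a binary market is bounded at b * ln 2. A minimal sketch:

```python
import math

def lmsr_cost(q_yes, q_no, b):
    """LMSR cost function C(q) = b * ln(exp(q_yes/b) + exp(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes, q_no, b):
    """Instantaneous YES price: the probability the AMM currently implies."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

b = 100.0
print(lmsr_price(0, 0, b))  # 0.5 at launch
# Cost of buying 40 YES shares; the AMM cannot tell whether this trade
# is a headline overreaction or a genuinely better model.
cost = lmsr_cost(40, 0, b) - lmsr_cost(0, 0, b)
print(lmsr_price(40, 0, b) > 0.5)  # True: the trade moved the curve either way
```

The liquidity parameter b sets the trade size needed to move the price, and also the size of the subsidy: larger b means deeper liquidity and a larger worst-case loss for whoever funds the pool.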

Continuous probability market: noise as the shape of the curve

Now consider a market where beliefs (predictions) are expressed not as binary positions but as probability distributions across a continuous outcome range. In that architecture, the counterparty is not a person on the other side of a bet. The counterparty is the consensus distribution itself.

[Figure: the broad (blue) consensus curve is the structural noise that pays the sharp (orange) trader to keep the market accurate.]

A participant who believes the market's probability curve is mis-calibrated - e.g. too fat in the tails, too concentrated around a mode, or shifted in the wrong direction - can correct that curve by trading against it. The noise in the system is not something to be exploited through a discrete claim; it is the shape of the information that a trader is refining.
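To make the idea concrete, here is a toy sketch (my construction, not functionSPACE's actual mechanism) of scoring a distributional position against a broad consensus with a log scoring rule over a discretised outcome range; all numbers are invented.

```python
import numpy as np

# Hypothetical continuous outcome in [0, 100], discretised into 50 bins
edges = np.linspace(0, 100, 51)
centers = (edges[:-1] + edges[1:]) / 2

# Broad, "noisy" consensus: it funds the whole probability space
consensus = np.full(50, 1 / 50)

# A sharper trader concentrates mass near 60 (Gaussian shape, made-up params)
sharp = np.exp(-0.5 * ((centers - 60) / 8) ** 2)
sharp /= sharp.sum()

# Outcome resolves at 62; the log score rewards mass placed near the truth
outcome_bin = int(np.argmin(np.abs(centers - 62)))
edge = np.log(sharp[outcome_bin]) - np.log(consensus[outcome_bin])
print(edge > 0)  # True: the calibrated, sharper curve earns a positive edge
```

If the outcome had instead landed far in the tails, the broad consensus would have out-scored the concentrated position - being roughly right across a wider range is a real strategy, not dumb money.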

Funding the probability space

If predictions are distributions, then the unit of trade is information equity, not the outcome. A calibrated position that concentrates probability mass around a tighter range carries a different signal than a noisy one that spreads broadly. A dilution mechanism should dilute predictions that do not survive new evidence, exerting continuous pressure toward calibration.

This is where noisy traders stop looking like exit liquidity. They are funding the probability space. Their capital defines the distribution when information is sparse - broad, uncertain, imprecise - which informed participants then sharpen as evidence arrives. Without that imprecision, there is no surface for calibration to act upon. Noisy traders can still capture value by being early or by being roughly right across a wider range. They might be noisy, but they are not dumb.

Harness the noise

The smart-versus-dumb framing persists because purely binary markets compress all of this into a single directional signal, making every less-informed participant look the same. But the research is clear: prediction markets need noise to function, and noise is largely a product of problem structure, not trader intelligence. Noisy traders provide the counterparty flow without which informed participants have no one to trade against.

A market designed to improve forecast quality rather than simply settle discrete wagers should account for this. It should reward calibration rather than directional conviction alone, and treat noise not as a cost to be absorbed but as the raw material from which richer forecasts are built.

Igor leads research at @functionspaceHQ, an open-source project exploring market-led resolution and novel economic instruments for prediction markets.


More Research

Explore our additional research for more in-depth insights.

Structure

Binary Events: Does Liquidity Trade The Tails?

Which of Polymarket's multi-market pathologies come from discretising a continuous quantity versus the binary architecture itself? By splitting 18,863 events into continuous (price, margin, temperature buckets) and categorical (candidates, teams) slices and re-running the v1 analysis, functionSPACE shows concentration is architecture-wide, ghost markets are largely a categorical phenomenon, and a continuous-distribution primitive is a sharper fix than v1 suggested.

Structure

Binary Events: What Happens When You Split One Market Into Twenty

Let's find out how Polymarket handles complex questions by breaking them into multiple yes/no contracts. By examining metadata from the Gamma API, functionSPACE argues that this "fragmented" approach creates a "resolution gap" where liquidity fails to spread evenly across all outcomes.

Structure

The Yes Bias Might Not Exist

Polymarket traders have an inherent psychological bias toward "Yes" outcomes. By analyzing over 7,000 events, the researchers discovered that the platform's editorial tendency to frame questions around dramatic, unlikely scenarios (e.g., "Will a specific event happen?") naturally makes the "Yes" token a cheap long-shot. Their data reveals that traders don't actually care about the "Yes" label; they simply gravitate toward cheaper tokens regardless of their name. Consequently, what appears to be a behavioral bias is actually a structural illusion created by price sensitivity and the way markets are designed, where the "No" outcome is the default reality for most unlikely events.


Ecosystem

Information as supply

We argue that prediction market TAM should include the supply side: as the cost of producing real-time probability estimates collapses, the addressable market extends beyond trading volume to every decision that benefits from better forecasts.

© 2026 functionSPACE
