Prediction Market Resolution: A State Confirmation Problem
Prediction markets need to agree on what actually happened - a state confirmation problem. functionSPACE compares delegated approaches (Kalshi) against collective ones (UMA, Reality.eth, Chainlink, AI oracles) and argues that a market-led confirmation surface, where capital replaces voting weight and outcomes sit on numerical ranges, is the only mechanism auditable at the incentive layer.
Resolution

By: Igor (@justigor)
Be it society or information, systems need to decide what is true. State confirmation is how a system agrees on shared facts. A helpful way to understand how this works in practice is to view state confirmation through the lens of permissioned/delegated versus permissionless/collective mechanisms.
Permissioned systems primarily employ institutions to do the work of confirmation: courts, banks, regulators and governments. System participants grant these institutions the right to enforce decisions, and people generally accept them because of laws, history or reputation. Their power lies in long-term trust and access to information. They are relatively efficient and can interpret or adapt to complexity. However, they carry core risks: they are prone to over-centralisation and censorship, and their failures can lead to systemic decay.
Permissionless mechanisms, on the other hand, use open networks and rules instead; our key example is blockchains. The rules are written in software and maths, with incentive design driving outcomes. There is no single authority to overrule the algorithm. They gain legitimacy through open code, cryptography and financial incentives. But they bear critical limitations: decision making is slow, wealthy actors can capture control, and they are expensive to secure.
There is no perfect way to decide what is true. All systems involve trade-offs.
Systems run by institutions can adapt and correct mistakes, but they can also be abused. Systems run by code are less opinionated and are hard to manipulate, but they are rigid and difficult to fix when something goes wrong. One side prioritises certainty; the other, neutrality.
To ground this a bit, let's home in on state confirmation as the process of verifying and agreeing on some piece of real-world information (the "state" of an event or data) in a decentralised system.
In a prediction market, this means bringing an external truth on-chain or determining the outcome of an off-chain event in a trust-minimised way. This is essentially the well-known oracle problem: smart contracts need to be told which outcome actually happened in the real world.
When truth is unclear, context matters, and mistakes need to be reversible, institutional systems such as courts work better. When facts are objective and neutrality or finality is crucial, rule-based systems work better.
There is no magic solution to fix the real-world messiness this brings into the relative beauty of information trading.
This dynamic is underpinned by common knowledge of evidence: if the answer can be verified by evidence (public info, APIs, etc.), then everyone knows what's true, and any false claim is low-hanging fruit for a challenger. If the question is subjective or ambiguous, it might ultimately need the arbitrator (which essentially means the game defers to a different mechanism's equilibrium).
The abstract question of "who confirms shared reality?" then turns into a very operational one:
Who is allowed to propose the outcome?
Who is allowed (and incentivised) to challenge it?
What makes "lying" expensive enough relative to "telling the truth"?
What's the path to finality when the world is ambiguous, politicised, or still unfolding?
If confirmation mechanisms should be judged by whether they converge reliably under adversarial pressure and ambiguity, what is actually happening in practice?
Lay of the land
Delegated trust in practice: Kalshi and others
As Messari outlines, many markets still settle via "centralised adjudication": Kalshi resolves by referencing official sources and finalises internally; it's predictable but depends entirely on the central adjudicator and typically takes 24–48 hours.
The "truth service" is a firm-like institution. The hard problems don't disappear; instead, they show up as governance, legal accountability, policy, and internal process. The mechanism risk is concentrated, but user experience and finality can be straightforward. There are some potential pros, chiefly a clearer path for dispute resolution.
The collective approach: Augur, UMA, Reality.eth/Kleros
This deserves a full deep dive of its own as a state confirmation system analysis, but we will give a brief overview.
Augur represents the maximalist collective model, relying largely on the threat of a fork as its key deterrent.
UMA by design concentrates its trust not in a single arbiter, but in an active monitoring layer plus a Schelling-style voting backstop. It works well when markets are economically meaningful, well-watched, and, importantly, uncontentious or obscure in outcome. Unlike Augur's initial design, its security depends on continued community engagement and alertness rather than constant collective participation (its newly released design requires some digging into).
Given its concentration with Polymarket, UMA draws the most criticism; there are plenty of detailed and well-articulated limitations and failure examples in practice.
Reality.eth operates as a public bonding and escalation game. Anyone can answer, and anyone can challenge by posting a larger bond. In most cases, correct answers finalise cheaply because no one rationally disputes truth, while sustained lies require burning capital until arbitration is triggered. When disputes escalate, Kleros can be fitted on as part of a modular stack: it provides a decentralised court layer with randomised jurors and appeal rounds, converting economic disagreement into a structured voting process.
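The bonding-and-escalation dynamic can be illustrated with a toy model. This is a minimal sketch, not Reality.eth's actual contract logic: the class name, the exact doubling rule, and the on-demand finalisation are all simplifying assumptions (the real protocol finalises only after a challenge window elapses).

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    value: str
    bond: int
    answerer: str

@dataclass
class BondedQuestion:
    """Toy model of a public bonding/escalation game (Reality.eth-style).

    Each new answer must at least double the standing bond; when no one
    challenges before the timeout, the last answer finalises and the
    final answerer collects the full pot of posted bonds.
    """
    history: list = field(default_factory=list)

    def submit(self, answerer: str, value: str, bond: int) -> None:
        if bond <= 0:
            raise ValueError("bond must be positive")
        if self.history and bond < 2 * self.history[-1].bond:
            raise ValueError("bond must at least double the previous bond")
        self.history.append(Answer(value, bond, answerer))

    def finalise(self) -> tuple[str, int]:
        # Real protocol: only after an unchallenged timeout; simplified here.
        final = self.history[-1]
        pot = sum(a.bond for a in self.history)
        return final.value, pot

q = BondedQuestion()
q.submit("liar", "NO", 10)      # false answer, cheap to post
q.submit("honest", "YES", 20)   # challenger doubles the bond
q.submit("liar", "NO", 40)      # sustaining the lie costs 4x the original
q.submit("honest", "YES", 80)   # each round, lying gets exponentially dearer
outcome, pot = q.finalise()
```

The point of the doubling rule is that sustaining a lie costs exponentially more each round, while a truthful challenger expects to recover their bond plus the liar's.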
Blockchain oracles, i.e. Chainlink
Chainlink is an infrastructure-led oracle model built around independent node operators who source data from multiple APIs and aggregate it on-chain, typically via medians. Rather than using dispute games or tokenholder votes per query, Chainlink relies on repeated interactions, reputation, staking, and economic incentives to keep operators honest over time.
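The aggregation step described above can be sketched in a few lines. This is an illustrative simplification, not Chainlink's actual aggregation contract; the function name and the `min_responses` threshold are assumptions for the example.

```python
import statistics

def aggregate_reports(reports, min_responses=3):
    """Median aggregation over independent node reports.

    A median is robust to a minority of faulty or malicious reporters:
    an attacker must control more than half of the responding nodes to
    move the aggregated value arbitrarily far.
    """
    if len(reports) < min_responses:
        raise ValueError("not enough responses to aggregate")
    return statistics.median(reports)

# Seven nodes report a price; two are compromised and report garbage,
# yet the median stays anchored to the honest cluster.
reports = [101.2, 100.9, 101.0, 101.1, 0.01, 999.0, 101.3]
aggregated = aggregate_reports(reports)  # -> 101.1
```

This is why per-query dispute games aren't needed for well-covered numeric feeds: outlier reports simply fall outside the median.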
Many existing prediction markets (including Polymarket) already plug in Chainlink for fast price feeds on markets that do not have dispute mechanisms. For Chainlink to become the everything-oracle for prediction markets, we would need to imagine infinite horizontal scaling across information domains and potential topics. This is an impossibility. In addition, while composable, using Chainlink as the sole oracle for a prediction market platform/protocol creates an economic dependency between market hosts, data providers and users.
AI/LLM
While not a mechanism in itself, AI can be seen as an algorithmic predictor. Its soundness is statistical (how often it's correct) rather than economic. It can be combined with games (like multiple AIs voting, or AIs plus human disputes) to enhance game-theoretic soundness. There are also lower-likelihood but potentially catastrophic black swan risks brought about by poisoning, which may undermine the integrity of the underlying trade surface.
Some argue that AI oracles, if combined with robust sourcing, could be less swayed by any one malicious source (the AI could cross-check multiple references). In addition, human parsing of an AI judge's quirks and tendencies could be considered when making a trade. While novel, this still begs the question: where is the incentive, and where is the point of failure?
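The "multiple AIs voting" idea can be sketched as a simple quorum rule. All names and the quorum value here are hypothetical; the sketch only shows why ensemble agreement reduces single-model (or single poisoned source) risk without creating any economic guarantee.

```python
from collections import Counter

def ensemble_resolve(model_answers, quorum=0.6):
    """Majority vote over independent model judgements.

    Soundness here is statistical, not economic: the scheme only reduces
    the chance that any single model decides the outcome. `quorum` is the
    fraction of agreement required to finalise; below it, the question
    defers to another mechanism (human dispute, arbitration, etc.).
    """
    if not model_answers:
        return None
    answer, votes = Counter(model_answers).most_common(1)[0]
    if votes / len(model_answers) >= quorum:
        return answer
    return None  # insufficient agreement; escalate elsewhere

clear = ensemble_resolve(["YES", "YES", "YES", "NO", "YES"])   # -> "YES"
split = ensemble_resolve(["YES", "NO", "YES", "NO"])           # -> None
```

Note what the sketch does not provide: no participant stakes anything, so there is no cost to being wrong, which is exactly the incentive gap the paragraph above points at.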
Oracle problem in prediction markets
The delegated-vs-collective trade-off becomes one of settlement UX versus adversarial robustness versus governance surface:
Delegated: simpler convergence story, concentrated trust and control.
Collective: distributed trust and open challenge, but finality can become a governance and coordination game precisely when stakes are highest.
To compare and contrast some of these existing systems through the lens of the state confirmation idea, the primary values/risks of the various approaches are:
Trust minimisation: The extent to which a system removes the need to rely on a specific human, entity, or technology provider. The ideal is to create a neutral battleground where any agent can participate to correct a lie.
Finality (speed and cost): How quickly a piece of data is considered irreversible and the ongoing operational expense to maintain that feed.
To illustrate the main approaches discussed we can imagine something like:

Illustrative scale
Manipulation resistance: The economic or structural difficulty for an attacker to force a false outcome through the system. Effective resistance turns lying into a "burning money" strategy.
Game theoretic soundness: The theoretical robustness of the incentive model; purely technical means are not always enough. Often we rely on humans or entities to report truth, which introduces the need for incentives and game theory.
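The manipulation-resistance criterion above reduces to a simple expected-value check. A minimal sketch with entirely hypothetical numbers:

```python
def attack_is_rational(profit_if_false_outcome, capital_burned, p_success):
    """Manipulation resistance as an expected-value inequality.

    An attack is only rational when the expected payoff from forcing a
    false outcome exceeds the capital the mechanism forces the attacker
    to burn. Mechanism design aims to make this inequality fail for
    every plausible attacker.
    """
    return p_success * profit_if_false_outcome > capital_burned

# Cheap oracle: burning 10k for a coin-flip at 1m profit pays off.
cheap = attack_is_rational(1_000_000, 10_000, 0.5)    # -> True
# Costly oracle: burning 900k for the same expected 500k does not.
costly = attack_is_rational(1_000_000, 900_000, 0.5)  # -> False
```

"Turning lying into a burning-money strategy" simply means pushing `capital_burned` above the attacker's expected profit.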
The purpose here is not to value these different approaches but to further illustrate the trade-offs.
What's missing here is the practical view: usability, anti-fragility, and sustained trust can only be assessed over a longer period of time. We do, however, know which properties don't work: very long settlement times, opaque post-event clarifications, last-minute category inclusions, and unnecessarily semantic reinterpretations of market questions.
It could even be argued that the last 2–3% of an event contract's probability is purely resolution risk priced in.
What becomes possible with an unbounded market-led mechanism?
We believe the core challenge for a prediction market oracle is to produce a credibly-neutral surface for agents to confirm state. How can a market designed specifically for state confirmation be leveraged?
A permissioned system fails this test because its internal reasoning is opaque and its incentives are external and unknowable. An open, adversarial economic game is the only known mechanism that is auditable at the incentive layer. We may not know why each individual participant acted, but we can precisely calculate the economic stakes and incentives that led to the collective outcome.
A market-led mechanism means financially incentivised participants reach state confirmation around a Schelling point, where that Schelling point is representative of common knowledge. We therefore replace governance-based dispute resolution with capital-equilibrium resolution.
This transforms the oracle into a specialised market surface itself. When that market surface is mathematically linked to its prediction market counterpart, it can enhance its manipulation resistance properties, deliver trust over time and reach Lindy status.
How forcing outcomes onto a numerically defined range makes credible neutrality more likely
Neutral protocols can't adapt well when the truth is unclear. When something weird happens, they break or fork, and a human-in-the-loop is often put in place as a counter-force to deal with this.
By moving prediction outcomes onto a numerical range, you significantly reduce event ambiguity risk. If you combine this with an unbounded market-led state confirmation system, you direct the chaotic energy of fact assertion into a productive output for prediction markets. With adequate dampers in place, equilibrium can be achieved neutrally rather than through a rigid rules-based system.

In PMs, chaos reigns, but it's not a bad thing
As capital is exchanged rather than used to signify voting weight, the oligarchic tendencies of stake-based voting are eliminated. While you still have to be concerned with adequate distribution of the underlying currency, this becomes a protocol-level, incentive-auditable problem, as the ability to profit from the work of state confirmation improves supply-side economics.
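To make the capital-equilibrium idea concrete, here is a toy price-discovery model on a numeric range. Everything about it is a simplifying assumption (the update rule, the liquidity damping, the numbers): it is not functionSPACE's mechanism, only a sketch of how exchanged capital, rather than staked votes, can pull a confirmed value toward a Schelling point.

```python
def confirm_state(trades, prior=0.5, liquidity=100.0):
    """Toy price discovery on a numeric range [0, 1].

    Each trade moves the market price toward the trader's asserted value,
    weighted by capital committed relative to pooled liquidity. Influence
    is paid for by trading against the price, not merely staked, and each
    trade deepens the pool, damping later moves.
    """
    price = prior
    for asserted_value, capital in trades:
        weight = capital / (capital + liquidity)
        price += weight * (asserted_value - price)
        liquidity += capital  # damper: later manipulation costs more
    return price

# Honest participants assert ~0.9; one attacker pushes toward 0.1 but
# must spend capital against the standing consensus to do so.
trades = [(0.9, 50), (0.9, 50), (0.1, 60), (0.9, 80), (0.9, 80)]
confirmed = confirm_state(trades)  # settles near the honest cluster
```

The attacker's trade moves the price temporarily, but holding it there requires continuously feeding capital to honest counterparties, which is the "burning money" property restated as price discovery.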
Modularisation of state confirmation
The many fair and exciting benefits of AI- or API-based truth extraction, including finality, can then be a layer above the resolution mechanism, not THE resolution mechanism itself.
A market-led confirmation surface turns disagreement into price discovery
The practical implication is that oracle design is a form of institutional design. It determines how reality enters a digital system, how errors propagate, and how costly it is to distort shared facts. When that surface is credibly neutral, auditable, and economically legible, participants can reason about it. When it is opaque, they cannot.
Further references of interest:
https://consensuslabs.ch/blog/trustless-oracles-feeding-real-world-data-to-blockchains
https://blog.ethereum.org/2014/03/28/schellingcoin-a-minimal-trust-universal-data-feed
Igor leads research at @functionspaceHQ, an open-source project exploring market-led resolution and novel economic instruments for prediction markets.