📚 Personal bits of knowledge

๐Ÿ“ Add comprehensive toolkit for mechanism design, detailing various evaluation strategies and their applications

Impact Evaluators.md
···

It's hard to do [[Public Goods Funding]], open-source software, research, etc. that don't have a clear, immediate financial return, especially for high-risk/high-reward projects. Traditional funding often fails here. Instead of just giving money upfront (prospectively), Impact Evaluators create systems that look back at what work was actually done and what impact it actually had (retrospectively). It's much easier to judge impact retrospectively!
- The goal is to **create a system with strong [[Incentives]] for people/teams to work on valuable, uncertain things** by distributing rewards according to demonstrable impact.
- Impact Evaluators work well in concrete areas that can be turned into easily measurable metrics. They are powerful and will overfit: when the goal is not well aligned, they can be harmful (e.g: Bitcoin increasing the energy consumption of the planet). **Impact Evaluators can become Externalities Maximizers**.
- **Start local and iterate**.
    - Begin with small communities with their own [[Metrics]] and evaluation criteria.
    - Use rapid [[Feedback Loops]] to learn what works.
    - Each community understands its context better than outsiders ([seeing like a state blinds you to local realities](https://slatestarcodex.com/2017/03/16/book-review-seeing-like-a-state/)).
    - Multiple local experiments surface patterns for higher-level abstractions.
    - Impact evaluation should be done by the community at the local level.
        - E.g: "Developers" in OSO filter for GitHub accounts with more than 5 commits. Communities might or might not align with that metric.
    - Focus on positive-sum games and mechanisms.
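The OSO "developer" filter mentioned above is just a small, auditable function. A minimal sketch with illustrative data (the ">5 commits" threshold comes from the note; the accounts and counts are hypothetical):

```python
# Hypothetical commit counts per GitHub account (illustrative data, not OSO's).
commit_counts = {"alice": 12, "bob": 3, "carol": 7, "dave": 5}

def developers(counts: dict[str, int], min_commits: int = 5) -> set[str]:
    """Community-defined 'developer' metric: accounts with more than
    `min_commits` commits. A community can fork this definition."""
    return {account for account, n in counts.items() if n > min_commits}

print(sorted(developers(commit_counts)))  # → ['alice', 'carol']
```

Because the metric is a plain function, a community that disagrees with the threshold can fork it and publish its own variant.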
- Small groups enable iterated games that reward trust and penalize defection. Reduced size reduces friction.
- Have a deadline or sunset clause so the system fades away if it's not working or actively used.
- [The McNamara Fallacy](https://en.wikipedia.org/wiki/McNamara_fallacy). Never choose metrics on the basis of what is easily measurable over what is meaningful. Data is inherently objectifying and naturally reduces complex conceptions and processes into coarse representations. There's a certain fetish for data that can be quantified.
- Cultivate a culture which welcomes experimentation.
- Ostrom's Law: "A resource arrangement that works in practice can work in theory".
- **Community Feedback Mechanism**.
    - Implement robust feedback systems that allow participants to report and address concerns about the integrity of the metrics or behaviors in the community.
    - Use the feedback to refine and improve the system.
    - Prioritize consent and community feedback.
    - Community should steer the ship.
    - You want a reactive, self-balancing system: loops where one part reacts to the other parts.
    - Feed the errors of the previous round back into the next one.
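The error-feedback idea can be sketched as a simple control loop: each round, move the allocation weights toward the impact share actually observed, scaled by a gain factor. The update rule, gain, and project names are illustrative assumptions, not a prescribed design:

```python
def next_weights(weights: dict[str, float],
                 observed_share: dict[str, float],
                 gain: float = 0.5) -> dict[str, float]:
    """One feedback iteration: nudge each project's weight toward the
    impact share observed last round by `gain` times the error, then
    renormalize so the weights still sum to 1."""
    updated = {p: w + gain * (observed_share[p] - w) for p, w in weights.items()}
    total = sum(updated.values())
    return {p: w / total for p, w in updated.items()}

w = {"proj_a": 0.5, "proj_b": 0.5}
w = next_weights(w, {"proj_a": 0.8, "proj_b": 0.2})  # proj_a over-delivered
# proj_a's weight rises toward 0.8; a gain below 1 damps oscillation and
# keeps some allocation on the weaker project.
```

A gain of 1 jumps straight to last round's measurement (noisy); smaller gains trade responsiveness for stability, which is the Process Control Theory framing listed at the end of the note.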
- Design a democratic control that reacts to feedback.
- Allow people to express themselves as much as they want.
    - E.g: an expert can give very precise feedback/knowledge/weights for a set of projects, while a community member can give more general feedback.
- "Which algorithm is best at assigning weights?" is not the best question. Better questions:
    - What would you change about the algorithm?
    - What would you change about the process?
- **Communities usually lack important information to fund public goods.**
    - [Every community and institution wants to see a better, more responsive and dynamic provision of public goods within them, usually lack information about which goods have the greatest value and know quite a bit about social structure internally which would allow them to police the way GitCoin has in the domains it knows](https://gov.gitcoin.co/t/a-vision-for-a-pluralistic-civilizational-scale-infrastructure-for-funding-public-goods/9503/11).
    - Impact Evaluators act as a framework for information gathering and can help communities make better decisions.
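One hypothetical way to let an expert's precise weights and a community member's coarser signal land in the same aggregate: normalize each contribution and scale it by a declared (or assigned) confidence. The scheme and all names here are illustrative, not a recommendation:

```python
def aggregate(signals: list[tuple[dict[str, float], float]]) -> dict[str, float]:
    """Combine weight vectors of different precision into one normalized
    vector; `confidence` scales how much each contributor moves the result."""
    combined: dict[str, float] = {}
    for weights, confidence in signals:
        total = sum(weights.values()) or 1.0   # normalize each contribution
        for project, w in weights.items():
            combined[project] = combined.get(project, 0.0) + confidence * w / total
    norm = sum(combined.values()) or 1.0
    return {p: v / norm for p, v in combined.items()}

expert = ({"proj_a": 0.7, "proj_b": 0.2, "proj_c": 0.1}, 3.0)  # precise, high context
member = ({"proj_a": 1.0, "proj_b": 1.0, "proj_c": 1.0}, 1.0)  # coarse "all look fine"
print(aggregate([expert, member]))
```

Both kinds of participation flow into the same weight vector, so nobody is forced to express more (or less) precision than they have.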
- [[Open Data]] Platforms for the community to gather better data and make better decisions.
- **Simplicity as a principle**.
    - [The simpler a mechanism, the less space for hidden privilege](https://vitalik.eth.limo/general/2020/09/11/coordination.html).
    - Fewer parameters mean more resistance to corruption and overfitting, and more people engaging.
    - Fix rules to keep things simple and easy to play. Opinionated framework with sane defaults!
    - Demonstrably fair and impartial to all participants (open source and publicly verifiable execution), with no hidden biases or privileged interests.
    - Don't write specific people or outcomes into the mechanism (e.g: using multiple accounts).
- **Build anti-Goodhart resilience**.
    - Any metric used for decisions [becomes subject to gaming pressures](https://en.wikipedia.org/wiki/Campbell%27s_law).
    - Design for evolution:
        - Run multiple evaluation algorithms in parallel and let humans choose.
        - Use exploration/exploitation trade-offs (like multi-armed bandits) to test new metrics.
        - Make the meta-layer for evaluating evaluators explicit.
    - For areas/ecosystems with a continuous and evaluable output (e.g: "better path finding algorithm", "ROC AUC of X", ...), follow the Bittensor model.
    - The easier the solution is to verify (e.g: verifying a program passes tests vs verifying an experiment replicates), the less human judgment is needed and the less Goodhart's Law applies.
    - If the domain of the IE is sortable and differentiable, it can be seen as pure optimization and doesn't require subjective human evaluation.
- **Collusion resistance**.
    - Any mechanism helping under-coordinated parties will also help [over-coordinated parties extract value](https://vitalik.eth.limo/general/2019/04/03/collusion.html). Countermeasures include:
        - Identity-free incentives (like proof-of-work).
        - Fork-and-exit rights for minorities.
        - Privacy pools that exclude provably malicious actors.
        - Multiple independent "dashboard organizations" preventing capture.
    - They should be flexible, as it's hard to predict the ways evaluation metrics will be gamed.
- [Campbell's Law](https://en.wikipedia.org/wiki/Campbell%27s_law).
  The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.
- **Separate data from judgment**. [Impact Evaluators work like data-driven organizations](https://handbook.davidgasquez.com/data/data-culture):
    - Gather objective attestations about work (commits, usage stats, dependencies).
    - Apply multiple "evaluation lenses" to interpret the data.
    - Let funders choose which lenses align with their values.
- When collecting data, [pairwise comparisons and rankings are more reliable than absolute scoring](https://anishathalye.com/designing-a-better-judging-system/).
    - Humans excel at relative judgments, but struggle with absolute judgments.
    - Many algorithms can be used to convert pairwise comparisons into absolute scores.
    - Pairwise shines when all the context is in the UX.
- **Design for composability**. Define clear data structures (graphs, weight vectors) as APIs between layers.
    - Multiple communities could share measurement infrastructure.
    - Different evaluation methods can operate on the same data.
    - Evolution through recombination rather than redesign.
    - To create a permissionless way for projects to participate, staking is a solution.
    - Fix a Data Structure (API) for each layer so they can compose (graph, weight vector).
        - E.g: Deepfunding's problem data structure is a graph; weights are a vector/dict, ...
- **Embrace plurality over perfection**.
    - [No single mechanism can satisfy all desirable properties](https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem) (efficiency, fairness, incentive compatibility, budget balance). Different contexts need different trade-offs.
    - There will be no "stable state". Whenever you fix an evaluation, some group has an incentive to abuse or break it again and feast on the wreckage.
    - There is a formal impossibility theorem: no mechanism can simultaneously achieve four desirable criteria:
        - Pareto Efficiency: the outcome achieved by the mechanism maximizes the overall welfare or some other desirable objective function.
        - Incentive Compatibility: participants are motivated to act truthfully, without gaining by misrepresenting their preferences.
        - Individual Rationality: every participant has a non-negative utility (or is at least no worse off) by participating in the mechanism.
        - Budget Balance: the mechanism generates sufficient revenue to cover its costs or payouts, without running a net deficit.
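One of the algorithms alluded to above for turning pairwise comparisons into absolute scores is Bradley-Terry; a minimal fit using the standard minorization-maximization update (the vote data is illustrative):

```python
from collections import defaultdict
from itertools import chain

def bradley_terry(comparisons: list[tuple[str, str]], iters: int = 200) -> dict[str, float]:
    """Fit Bradley-Terry strengths from (winner, loser) pairs using the
    classic MM update: p_i = wins_i / sum_j (n_ij / (p_i + p_j))."""
    items = set(chain.from_iterable(comparisons))
    wins: dict[str, int] = defaultdict(int)
    pairs: dict[frozenset, int] = defaultdict(int)   # comparisons per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1
    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new_p = {}
        for i in items:
            denom = sum(pairs[frozenset((i, j))] / (p[i] + p[j])
                        for j in items if j != i)
            new_p[i] = wins[i] / denom if denom else p[i]
        total = sum(new_p.values())
        p = {i: v * len(items) / total for i, v in new_p.items()}  # renormalize
    return p

votes = [("a", "b"), ("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
scores = bradley_terry(votes)
# "a" wins most of its comparisons, so it gets the highest strength.
```

Elo is the online/incremental cousin of the same idea; both map "X beat Y" events onto a single comparable scale.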
- **Legible Impact Attribution**. Make contributions and their value visible.
    - [Transform vague notions of "alignment" into measurable criteria](https://vitalik.eth.limo/general/2024/09/28/alignment.html) that projects can compete on.
    - Designing Impact Evaluators has the side effect of making impact more legible, decomposed into specific properties that can be represented by specific metrics.
    - Do more to make different aspects of alignment legible, while not centralizing in one single "watcher" (e.g: l2beat, ...).
    - Let projects compete on measurable criteria rather than connections.
    - Create separation of evaluations through multiple independent "dashboard organizations".
- **Incomplete contracts problem**. [It's expensive to measure what really matters](https://meaningalignment.substack.com/p/market-intermediaries-a-post-agi), so we optimize proxies that drift from true goals.
    - Current markets optimize clicks and engagement over human flourishing.
    - The more powerful the optimization, the more dangerous the misalignment.
    - Four interconnected issues:
        - Incomplete contracts: it's too expensive to measure what really matters (human flourishing), so we contract on proxies (hours worked, subscriptions).
        - Power asymmetries: large suppliers face millions of individual consumers with take-it-or-leave-it contracts.
        - Externalities: individual flourishing depends on community wellbeing, but contracts remain individualized.
        - Information asymmetries: suppliers control the metrics and optimize for growth rather than user outcomes.
- **Information elicitation without verification**.
    - Getting truthful data from subjective evaluation when you can't verify it requires clever [[Mechanism Design]]:
        - [Peer prediction mechanisms](https://jonathanwarden.com/information-elicitation-mechanisms/) that reward agreement with hidden samples.
        - [Bayesian Truth Serum](https://www.science.org/doi/10.1126/science.1102081) that uses both answers and predictions.
        - Coordination games where truth serves as a Schelling point.
- Tradeoffs when jurors vote on public goods funding allocation:
    - Voting directly on projects: halo effect, peanut-butter distributions, heavy operational workload.
    - Voting on models: feels too abstract for voters and doesn't leverage their specific project expertise.
    - Voting on metrics: judges just play with numbers until they get their favored allocation.
- [An allocation mechanism can be seen as a measurement process, with the goal being the reduction of uncertainty concerning present beliefs about the future. An effective process will gather and leverage as much information as possible while maximizing the signal-to-noise ratio of that information, aims which are often at odds](https://blog.zaratan.world/p/quadratic-v-pairwise).
- In the digital world, we can apply several techniques to the same input and evaluate the potential impacts. E.g: simulate different voting systems and see which one best fits current views. This is a case for the system to **have a meta-evaluation mechanism that acts as a layer for humans to express preferences**.
- **Make evaluation infrastructure permissionless**. Just as anyone can fork code, anyone should be able to fork evaluation criteria. This prevents capture and enables innovation.
    - Anyone should be able to [fork the evaluation system with their own criteria](https://vitalik.eth.limo/general/2024/09/28/alignment.html), preventing capture and enabling experimentation.
    - [IEs are the scientific method in disguise, like AI evals](https://eugeneyan.com/writing/eval-process/).
- **Focus on error analysis**. Like in [LLM evaluations](https://hamel.dev/blog/posts/evals-faq/), understanding failure modes matters more than optimizing metrics. Study what breaks and why.
    - IEs will have to do some sort of "error analysis". [It is the most important activity in LLM evals](https://hamel.dev/blog/posts/evals-faq/#q-why-is-error-analysis-so-important-in-llm-evals-and-how-is-it-performed). Error analysis helps you decide what evals to write in the first place, and lets you identify failure modes unique to your application and data.
- **Reduce cognitive load for humans**. Let [algorithms handle scale while humans set direction and audit results](https://vitalik.eth.limo/general/2025/02/28/aihumans.html).
    - Use humans for sensing qualitative properties and machines for bookkeeping, and preserve legitimacy by letting people choose/vote on the preferred evaluation mechanism.
    - Making it so people don't have to do something is cool. Making it so people can't do that thing is bad. E.g: time-saving tools like AI are great, but humans should be able to jump in if they want!
115 + - **Reduce cognitive load for humans**. Let [algorithms handle scale while humans set direction and audit results](https://vitalik.eth.limo/general/2025/02/28/aihumans.html). 116 + - Use humans for sensing qualitative properties, machines for bookkeeping and preserve legitimacy by letting people choose/vote on the prefered evaluation mechanism. 117 + - Making it so people don't have to do something is cool. Making it so people can't do that thing is bad. E.g: time saving tools like AI is great but humans should be able to jump in if they want! 108 118 - If people don't want to have their "time saved" have the freedom to express themselves. E.g: offer pairwise comparisons by default but let people expand on feedback or send large project reviews. 109 119 - Information gathering is messy and noisy. It's hard to get a clear picture of what people think. Let people express themselves as much as they want. 110 - - The more humans gets involved, the messier (papers, ... academia). You cannot get away from humans in most problems. 111 - - In the digital world, we can apply several techniques to the same input and evaluate the potential impacts. E.g: Simulate different voting systems and see which one fits the best with the current views. This is a case for the system to have a final mechanism that acts as a layer for human to express preferences. 112 - - The easier to verify the solution is (e.g: verify a program passes the test vs verify the experiment replicates), the better and faster the IE can be. 113 - - If the domain of the IE is sortable and differentiable, it's easy as it can be seen as pure optimization and doesn't require humans subjective evaluation. 120 + - The more humans get involved, the messier (papers, ... academia). You cannot get away from humans in most problems. 114 121 - **Verify the evaluation is actually better than the baseline**. 
    - Run multiple "aggregation" algorithms and have humans blindly select which one they prefer (blind test).
    - The meta-layer can help compose and evaluate mechanisms. How do we know mechanism B is better than A? Or even better than A + B? How do we evolve things?
    - Is the evaluation/reward better than a centralized/simpler alternative?
        - E.g: on tabular clinical prediction datasets, standard logistic regression was found to be on par with deep recurrent models.
- **Exploration vs Exploitation**. IEs are optimization processes that tend to exploit (more impact, more reward). This ends up in a monopoly (100% exploit). You probably want to always keep some exploration.
- [IEs need to show how the solution is produced by the interactions of people, each of whom possesses only partial knowledge](https://news.ycombinator.com/item?id=44232461).
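The exploration/exploitation bullet maps directly onto a multi-armed bandit: the arms are candidate evaluation mechanisms and the reward is whatever meta-signal the community trusts (e.g. blind-test approval). An epsilon-greedy sketch, where the mechanism names and rewards are illustrative assumptions:

```python
import random

def pick_mechanism(stats: dict[str, tuple[int, float]],
                   epsilon: float = 0.1,
                   rng=random) -> str:
    """Epsilon-greedy: usually exploit the mechanism with the best mean
    reward, but keep exploring alternatives with probability epsilon."""
    if rng.random() < epsilon or not any(n for n, _ in stats.values()):
        return rng.choice(list(stats))               # explore
    return max(stats, key=lambda m: stats[m][1] / stats[m][0] if stats[m][0] else 0.0)

def record(stats: dict[str, tuple[int, float]], mechanism: str, reward: float) -> None:
    n, total = stats[mechanism]
    stats[mechanism] = (n + 1, total + reward)

# (pulls, cumulative reward) per candidate evaluation mechanism.
stats = {"pairwise_voting": (0, 0.0), "quadratic_weights": (0, 0.0), "expert_panel": (0, 0.0)}
for _ in range(100):
    chosen = pick_mechanism(stats)
    reward = 1.0 if chosen == "pairwise_voting" else 0.3   # stand-in for blind-test approval
    record(stats, chosen, reward)
```

Since epsilon never drops to zero, even a dominant mechanism keeps being challenged, which addresses the "100% exploit monopoly" failure mode.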
## Principles

···

- Cybernetics
- Game Design
- Social Choice Theory
- [[Mechanism Design]]
- Computational Social Choice
- Machine Learning
- Voting Theory
- Process Control Theory
- Large Language Models Evaluation
210 - - Funding Pools create dedicated pools of capital specifying which Impact Pods, Evaluation Lenses and Reward Functions 211 155 212 156 ## Resources 213 157
+33
Mechanism Design.md
···
38 38 - [Allocation Mechanisms](https://www.allo.expert/mechanisms)
39 39 - [Generalized Impact Evaluators](https://research.protocol.ai/publications/generalized-impact-evaluators/ngwhitepaper2.pdf) - Framework for retrospective reward mechanisms
40 40 - [Info Finance](https://vitalik.eth.limo/general/2024/11/09/infofinance.html) - Using information aggregation for social decisions
41 +
42 +
43 + ## Toolkit
44 +
45 + - **Staking and Slashing**. Require deposits that get burned for misbehavior. Simple but requires upfront capital.
46 + - **Pairwise Comparison Engines**. Convert human judgments into weights using [Elo ratings or Bradley-Terry models](https://www.keiruaprod.fr/blog/2021/06/02/elo-vs-bradley-terry-model.html).
47 + - **Unprovable Vote Schemes (MACI)**. Use zero-knowledge and key-revocation games so ballots can't be sold or coerced.
48 + - **Collusion-safe games**. Rely on identity-free incentives (PoW-like) or security-deposit futarchy where bad coordination is personally costly.
49 + - **Fork-and-exit**. Make systems easy to split so minority users can counter-coordinate against cartels.
50 + - **Quadratic Mechanisms**. [Funding](https://vitalik.eth.limo/general/2019/12/07/quadratic.html) and voting that make influence proportional to the square root of resources, reducing plutocracy.
51 + - **Prediction and Decision Markets (Futarchy)**. ["Vote values, bet beliefs"](https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c) - conditional markets choose policies that maximize agreed-upon metrics.
52 + - **Distilled-Human-Judgement Markets**. A jury scores a small sample, open AI/human traders supply full answers, and rewards are based on fit with the jury's scores; scales expertise cheaply.
53 + - **Engine-and-steering-wheel pattern**. Open competition of AI "engines" acts under a simple, credibly-neutral rule-set that is set, audited, and reinforced by humans.
54 + - **Research Augmented Bonding Curves (ABCs) / Curation Markets**. Automated market makers that route fees to upstream dependencies based on usage.
55 + - **Information-Elicitation without Verification**. [Peer-prediction mechanisms](https://jonathanwarden.com/information-elicitation-mechanisms/), [Bayesian Truth Serum](https://www.science.org/doi/10.1126/science.1102081), and other techniques to get truthful data from subjective evaluation.
56 + - **Token-Curated Registries (TCRs)**. Stakeholders deposit tokens to curate lists; challengers and voters decide on inclusions, with slashing/redistribution to discourage bad entries.
57 + - **Deliberative protocols**. [Structured discussion processes](https://jonathanwarden.com/deliberative-consensus-protocols/) that surface information before voting.
58 + - **Harberger Taxes/COST (Common Ownership Self-assessed Tax)** - Entities self-assess value and pay tax on it, but must sell at that price if someone wants to buy. Useful for allocating scarce positions/rights in evaluation systems.
59 + - **Dominant Assurance Contracts** - Entrepreneur provides refund + bonus if the funding threshold isn't met, solving the assurance problem in public goods funding more elegantly than traditional crowdfunding.
60 + - **Conviction Voting** - Preferences gain strength over time rather than snapshot voting. Voters continuously express preferences and conviction builds, reducing governance attacks.
61 + - **Retroactive Oracles** - Designated future evaluators whose preferences are predicted by current markets. Separates the "who decides" from "what they'll value" questions.
62 + - **Sortition/Random Selection** - Randomly selected evaluation committees from qualified pools. Reduces corruption and strategic behavior while maintaining statistical representativeness.
63 + - **Optimistic Mechanisms** - Actions are allowed by default but can be challenged within a time window. Reduces friction for honest actors while maintaining security.
64 + - **Vickrey-Clarke-Groves (VCG) Mechanisms** - Generalized truthful mechanisms where participants pay the externality they impose on others. Could be adapted for impact evaluation.
65 + - **Streaming/Continuous Funding** - Instead of discrete rounds, continuous flows based on the current evaluation state. Reduces volatility and gaming of evaluation periods.
66 + - **Liquid Democracy** - Delegation of evaluation power to trusted experts, revocable at any time. Balances expertise with democratic control.
67 + - **Threshold Cryptography/Secret Sharing** - For private evaluation scores that only become public when aggregated. Prevents anchoring and collusion during evaluation.
68 + - **Augmented Bonding Curves with Vesting** - Time-locked rewards that vest based on continued positive evaluation over time, aligning long-term incentives.
69 + - **Multi-armed Bandits** - Adaptive mechanism-selection algorithms that balance exploration and exploitation. Dynamically choose between evaluation mechanisms based on historical performance and context to optimize for both learning and effectiveness.
70 + - **Privacy Pools** - Systems that maintain participant privacy while excluding provably malicious actors. Allow honest participants to prove non-membership in bad actor sets without revealing their identity.
71 + - **Reinforcement Learning for Meta-Evaluation** - Use RL to evolve evaluation mechanisms through trial and error. The system learns which evaluation approaches work best in different contexts by treating mechanism selection as a sequential decision problem.
72 + - **Genetic Algorithms** - Evolution-based optimization for evaluation mechanisms. Breed and mutate successful evaluation strategies, allowing the system to discover novel approaches through recombination and selection pressure.
73 + - **Schelling Point Coordination Games** - Information-elicitation mechanisms where truth naturally emerges as the coordination point. Participants are incentivized to report honestly because they expect others to do the same, making truth the natural focal point.
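Some toolkit entries are concrete enough to sketch. For **Pairwise Comparison Engines**, a minimal Bradley-Terry fit (plain Python, no libraries; the project names and judgments are invented for illustration) turns "A is more impactful than B" judgments into normalized weights:

```python
from collections import defaultdict

def bradley_terry(judgments, iters=200):
    """Fit Bradley-Terry strengths from (winner, loser) pairs.

    Uses the classic minorization-maximization update:
    p_i <- wins_i / sum_j (n_ij / (p_i + p_j)),
    then normalizes so the strengths sum to 1 (usable as weights).
    """
    wins = defaultdict(int)   # times each item was preferred
    pairs = defaultdict(int)  # comparison counts per unordered pair
    items = set()
    for winner, loser in judgments:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1
        items |= {winner, loser}

    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new_p = {}
        for i in items:
            denom = sum(
                pairs[frozenset((i, j))] / (p[i] + p[j])
                for j in items
                if j != i and frozenset((i, j)) in pairs
            )
            new_p[i] = wins[i] / denom if denom else p[i]
        total = sum(new_p.values())
        p = {i: v / total for i, v in new_p.items()}
    return p

# Toy jury judgments: "which project had more impact?"
judgments = [("libA", "libB"), ("libA", "libC"),
             ("libB", "libC"), ("libA", "libB")]
weights = bradley_terry(judgments)
```

The resulting weights order the projects by how often (and against whom) they win comparisons, which is exactly the shape a funding allocator needs.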
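For **Quadratic Mechanisms**, the funding rule from the linked post can be sketched as a toy allocator. Proportionally scaling raw matches down to the pool is one common capping convention, not the only one:

```python
from math import sqrt

def quadratic_match(contributions, matching_pool):
    """Allocate a matching pool across projects via quadratic funding.

    For each project, the 'ideal' total is (sum of sqrt(contribution))^2;
    the raw match is the ideal minus the amount actually raised. Raw
    matches are then scaled so they sum to the available pool.
    """
    raw = {}
    for project, amounts in contributions.items():
        ideal = sum(sqrt(a) for a in amounts) ** 2
        raw[project] = ideal - sum(amounts)
    total_raw = sum(raw.values())
    scale = matching_pool / total_raw if total_raw > 0 else 0.0
    return {p: r * scale for p, r in raw.items()}

# Many small donors beat one whale of equal total:
contribs = {"grassroots": [1.0] * 100, "whale": [100.0]}
match = quadratic_match(contribs, matching_pool=1000)
```

A hundred 1-unit donors capture essentially the whole pool, while a single 100-unit donor gets no match at all, which is the anti-plutocracy property the bullet describes.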
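For **Conviction Voting**, the core is just an exponentially weighted accumulator: support must be sustained to build conviction. A minimal simulation, where the decay constant `alpha` is an arbitrary illustrative choice:

```python
def simulate_conviction(stake_per_step, alpha=0.9, steps=50):
    """Accumulate conviction y_t = alpha * y_{t-1} + stake_t.

    With constant stake s, conviction approaches s / (1 - alpha),
    so sustained support matters more than a momentary snapshot.
    """
    conviction = 0.0
    history = []
    for _ in range(steps):
        conviction = alpha * conviction + stake_per_step
        history.append(conviction)
    return history

steady = simulate_conviction(10.0)          # sustained support
flash = simulate_conviction(10.0, steps=2)  # brief support, then gone
```

A proposal would pass once its conviction crosses a threshold; a brief burst of stake never gets near the ceiling that sustained stake approaches, which is what blunts snapshot-style governance attacks.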
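And for **Multi-armed Bandits** as a meta-mechanism selector, an epsilon-greedy loop over a few hypothetical mechanisms. The mechanism names and quality numbers are invented; a real deployment would define "reward" as some measured evaluation-quality signal:

```python
import random

def epsilon_greedy_select(stats, epsilon=0.1, rng=random):
    """Pick a mechanism: explore with probability epsilon,
    otherwise exploit the best observed mean reward."""
    if rng.random() < epsilon:
        return rng.choice(list(stats))
    return max(stats, key=lambda m: stats[m]["total"] / max(stats[m]["n"], 1))

def update(stats, mechanism, reward):
    stats[mechanism]["n"] += 1
    stats[mechanism]["total"] += reward

stats = {m: {"n": 0, "total": 0.0} for m in ("pairwise", "futarchy", "jury")}
# Hypothetical outcome quality per mechanism for this community:
true_quality = {"pairwise": 0.8, "futarchy": 0.6, "jury": 0.4}
rng = random.Random(0)
for _ in range(500):
    m = epsilon_greedy_select(stats, rng=rng)
    update(stats, m, true_quality[m] + rng.gauss(0, 0.1))
```

Over the 500 rounds the loop converges on the mechanism with the best observed rewards while still occasionally re-testing the others, which is the exploration/exploitation trade-off the bullet refers to.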