
September 23, 2019

Rationality is Self-Defeating in Permissionless Systems

by Bryan Ford and Rainer Böhme

PDF preprint version available

Many blockchain and cryptocurrency fans seem to prefer building and analyzing decentralized systems in a rational or “greedy behavior” failure model, rather than a Byzantine or “arbitrary behavior” failure model. Many of the same blockchain and cryptocurrency fans also like open, permissionless systems like Bitcoin and Ethereum, which anyone can join and participate in using weak identities such as anonymous cryptographic key pairs.

What most of these heavily-overlapping sets of fans do not seem to realize, however, is that rationality assumptions are self-defeating in open permissionless systems with weak identities. A fairly simple metacircular argument – a kind of “Gödel's incompleteness theorem for rationality” – shows that for any system S that makes any behavioral assumption, including but not limited to a rationality assumption, a rational attacker both exists and has an incentive to defeat that behavioral assumption, thereby violating that assumption and exhibiting Byzantine behavior from the perspective of the system.

As a quick summary of the argument we will expand below, suppose a permissionless system like Bitcoin is secure against rational attacks, but has some weakness against irrational Byzantine attacks in which the attacker would lose money. Because the system is open, permissionless, and exists within a larger ecosystem, a rational attacker can find ways to “bet against” Bitcoin's security in other financially-connected systems (e.g., Ethereum), making a profit outside of Bitcoin on this attack against Bitcoin. An attack that appears irrational in the context of Bitcoin may be perfectly rational in the context of the larger ecosystem.

For this reason, an open permissionless system designed to be secure only against rational adversaries is actually just insecure, unless it remains secure even when the “rational” participants become fully Byzantine. Given this, one might as well have designed the permissionless system in a Byzantine model in the first place. The rationality assumption offers no actual benefit; it can merely make an insecure system appear secure under flawed analysis.

This blog post is based partly on ideas in Rainer Böhme's talk at the recent BDLT Summer School in Vienna. While formalizing the argument would require some effort, we thought it would be worth at least sketching the argument intuitively for the public record.

Threat Modeling: Honest, Byzantine, and Rational Participants

In designing or analyzing the security of any decentralized system, we must define the system's threat model, and in particular our assumptions about the behaviors of the participants in the system. An honest, correct, or altruistic participant is one that we assume to follow the system's protocol rules as specified, hence representing a “well-behaved” participant exhibiting no adversarial behavior.

A Byzantine participant, named after the Byzantine Generals Problem, is one we make no assumptions about. A Byzantine participant can behave in arbitrary fashion, without restriction, and hence by definition represents the strongest possible adversary.

We would like to build systems that could withstand all participants being Byzantine, but this appears fundamentally impossible. We therefore in practice have to make threshold security assumptions, such as that over two-thirds of the participants in classic Byzantine consensus protocols are honest, or that the participants controlling over half the hashpower in Bitcoin are well-behaved.
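
To make these threshold assumptions concrete, here is a minimal Python sketch (our illustration; the function names are ours) of the two conditions just mentioned: the classic n ≥ 3f + 1 bound for Byzantine consensus, and the majority-hashpower condition assumed by Bitcoin.

    # Minimal sketch (our illustration) of the threshold assumptions above.

    def bft_tolerates(n: int, f: int) -> bool:
        """Classic Byzantine consensus tolerates f faulty participants
        out of n only if n >= 3f + 1, i.e. more than two-thirds honest."""
        return n >= 3 * f + 1

    def nakamoto_tolerates(honest_hashpower_fraction: float) -> bool:
        """Bitcoin-style security assumes well-behaved participants
        control a strict majority of total hashpower."""
        return honest_hashpower_fraction > 0.5

    print(bft_tolerates(n=4, f=1))   # True: 4 nodes tolerate 1 Byzantine node
    print(bft_tolerates(n=3, f=1))   # False: 3 nodes cannot
    print(nakamoto_tolerates(0.51))  # True: 51% honest hashpower suffices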

Even with threshold assumptions, however, building systems that resist Byzantine behavior is extremely difficult, and the resulting systems are often much more complex and inefficient than systems tolerating weaker adversaries. We may therefore be tempted to improve a design's simplicity or efficiency by making stronger assumptions about the behavior of adversarial participants, effectively weakening the assumed adversary.

[Figure: Types of adversaries]

One such popular assumption, especially in economic circles, is rationality. In essence, we assume that rational participants may deviate from the rules in arbitrary ways but only when doing so is in their economic self-interest, improving their expected rewards – usually but not always financial – in comparison with following the rules honestly.

By assuming that adversarial participants are rational rather than Byzantine, we need not secure the system against all possible participant behaviors, such as against participants who spend money, with no hope of reward, merely to sow chaos and destruction. Instead, we merely need to prove that the system is incentive compatible, for example by showing that its rules represent a Nash equilibrium, in which deviations from the equilibrium will not give participants a greater financial reward.
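
As a toy illustration of what such an incentive-compatibility proof establishes, the following sketch (with entirely made-up payoffs) checks that a strategy profile is a Nash equilibrium: no participant can increase its own payoff by unilaterally deviating.

    # Toy two-player game with hypothetical payoffs: "follow" the protocol
    # or "deviate" from it. payoffs[(row, col)] = (row's reward, col's reward).
    payoffs = {
        ("follow",  "follow"):  (10, 10),
        ("follow",  "deviate"): ( 2,  8),
        ("deviate", "follow"):  ( 8,  2),
        ("deviate", "deviate"): ( 1,  1),
    }

    def is_nash(row: str, col: str) -> bool:
        """A profile is a Nash equilibrium if neither player gains by
        unilaterally switching strategies."""
        row_ok = all(payoffs[(r, col)][0] <= payoffs[(row, col)][0]
                     for r in ("follow", "deviate"))
        col_ok = all(payoffs[(row, c)][1] <= payoffs[(row, col)][1]
                     for c in ("follow", "deviate"))
        return row_ok and col_ok

    print(is_nash("follow", "follow"))  # True: honest play is an equilibrium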

Besides simplicity and efficiency, another appeal of rationality assumptions is the promise of strengthening the system's security by lowering the threshold of participants we assume to be fully honest. To circumvent the classical Byzantine consensus requirement that fewer than one third of participants may be faulty, for example, we might hope to tolerate closer to 50%, or even 100%, of participants being “adversarial” if we assume they are rational and not Byzantine. Work on the BAR model (Byzantine-Altruistic-Rational) and (k,t)-robustness exemplifies this goal, which sometimes appears achievable in closed systems with strong identities. But a direct implication of our metacircular argument is that an open system cannot generally be secure if all participants are either Byzantine or rational.

Assumptions Underlying the Argument

The metacircular argument makes three main assumptions.

First, the system S under consideration is open and permissionless, allowing anyone to join and participate in the system using only weak, anonymous identities such as bare cryptographic key pairs. Identities in S need not even be costless provided their price is modest: the argument still works even if S imposes membership fees or requires new wallet keys to be “mined”, for example. Proof-of-Work cryptocurrencies such as Bitcoin and Ethereum, Proof-of-Stake systems such as Algorand and Ouroboros, and most other permissionless systems seem to satisfy this openness property. Because participation is open to anyone globally and can be anonymous, we cannot reasonably expect police or governments to protect S from attack: even if they wanted to and considered it their job, they would not be able to find or discipline a smart rational attacker who might be attacking from anywhere around the globe, especially from a country with weak international agreements and extradition rules. Thus, S must “stand on its own”, by successfully either withstanding or disincentivizing attacks coming from anywhere. (And it will turn out that merely disincentivizing such attacks is impossible.)

Second, the system S does not control a majority of total economic power or value in the world: i.e., it is not totally economically dominant from a global perspective. Instead, there may be (and probably are) actors outside of S who, if rationally incentivized to do so, can at least temporarily muster an amount of economic power outside of S comparable to or greater than the economic value within or controlled by S. In other words, we assume that S is not the “biggest fish in the ocean.” Given that there can be at most one globally dominant economic system at a time, it seems neither useful nor advisable to design systems that are secure only when they are the biggest fish in the ocean, because almost always they are not.

Third, the system S actually leverages in some fashion the behavioral assumption(s) it makes on participants, such as a rationality assumption. That is, we assume there exist one or more (arbitrary) behavioral strategies that S assumes some participants will not follow, such as economically-losing behaviors in the case of rationality. Further, we assume there exists such an assumption-violating strategy that will cause S to malfunction or otherwise deviate observably from its correct operation. In fact, we need not assume that this deviant behavior will always succeed in breaking S, but only that it will non-negligibly raise the probability of S failing. If this were not the case, and S in fact operates correctly, securely, and indistinguishably from its ideal even if participants do violate their behavioral assumptions, then S is actually Byzantine secure after all. In that case, S is not actually benefiting from its assumptions about participant behavior, which are redundant and thus may be simply discarded.

The Metacircular Argument: Rational Attacks on Rationality

Suppose permissionless system S is launched, and operates smoothly for some time, with all participants conforming to S's assumptions about them. Because S is permissionless (assumption 1) and exists in a larger open world (assumption 2), new rational participants may arrive at any time, attracted by S's success and presumably-growing economic value, provided there is an opportunity to profit from joining.

Consider a particular newly-arriving participant P. P could of course play by the rules S assumes of P, in which case the greatest immediate economic benefit P could derive from participating in S is some fraction of the total economic value currently embodied in S (e.g., its market cap). For most realistic permissionless systems embodying strong founders' or early-adopters' rewards, if P is not one of the original founders of S but arrives substantially after launch, then P's near-term payoff prospects from joining S are likely bounded by a fairly small fraction of S's total value. But what if there were another strategy P could take, for perfectly rational and economically-motivated reasons, by which P could in relatively short order acquire a large fraction of S's total value?
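
To fix ideas, here is a toy computation (all numbers hypothetical) of the honest-participation ceiling just described; the alternative strategy developed below should be measured against it.

    # Hypothetical numbers: the ceiling on a latecomer's honest payoff.
    market_cap_of_S = 100_000_000  # total economic value embodied in S
    latecomer_share = 0.02         # small fraction a non-founder can hope to earn

    honest_ceiling = market_cap_of_S * latecomer_share
    print(f"Honest near-term ceiling: {honest_ceiling:,.0f}")  # 2,000,000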

[Figure: Open world with S and S']

Because S is permissionless and operating in a larger open world, P is not confined to operating exclusively within the boundaries of S. P can also make use of facilities external to S. By assumption 2, P may in particular have access to, or be able to borrow temporarily, financial resources comparable to or larger than the total value of S.

Suppose the facilities external to S include another Ethereum-like cryptocurrency S', which includes a smart contract facility with which decentralized exchanges, futures markets, and the like may be implemented. (This is not really a separate assumption because even if S' did not already exist, P could create and launch it, given sufficient economic resources under assumption 2.) Further, suppose that someone (perhaps P) has created on external system S' a decentralized exchange, futures market, or any other mechanism by which tokens representing shares of the value of S may be traded or speculated upon in the context of S': e.g., a series of tradeable Ethereum tokens pegged to S's cryptocurrency or stake units.
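
A minimal sketch of the kind of external betting mechanism we have in mind (our illustration; any real decentralized exchange or futures market would be more involved) is a cash-settled short position on a token tracking S's value:

    # Sketch of a cash-settled short position on a token pegged to S's
    # value, as it might exist on a smart-contract platform like S'.
    class ShortPosition:
        def __init__(self, notional: float, entry_price: float):
            self.notional = notional        # size of the bet against S
            self.entry_price = entry_price  # token price of S at entry

        def settle(self, exit_price: float) -> float:
            """Payout grows with the relative decline in S's price."""
            return self.notional * (self.entry_price - exit_price) / self.entry_price

    pos = ShortPosition(notional=50_000_000, entry_price=100.0)
    print(pos.settle(exit_price=40.0))  # 30,000,000.0 if S's price drops 60%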

Now suppose participant P finds some behavioral strategy that system S depends on participants not exhibiting, and that will observably break S – or even that just might break S with non-negligible probability. Assumption 3 above guarantees the existence of such a behavioral strategy, unless S's rationality assumptions were in fact redundant and worthless. P must merely be clever enough to find and implement such a strategy. It is possible this strategy might first require P to pretend to be one or more well-behaved participants of S for a while, to build up the necessary reputation or otherwise get correctly positioned in S's state space; a bit of patience and persistence on P's part will satisfy this requirement. P may also have to “buy into” S enough to surmount any entry costs or stake thresholds S might impose; the external funds P can invoke or borrow by assumption 2 can satisfy this requirement, and are bounded by the total value of S. In general, S's openness by assumption 1 and the existence of a correctness-violating strategy by assumption 3 ensure that there exists some course of action and supply of external resources by which P can position itself to violate S's behavioral assumption.

In addition to infiltrating and positioning itself within S, P also invokes or borrows enough external funds and uses them to short-sell (bet against) shares of S's value massively in the context of the external system S', which (unlike S) P trusts will remain operational and hold its value independently of S. Provided P reaches this short-selling position gradually and carefully enough to avoid revealing its strategy early, the funds P must invoke or borrow for this purpose are bounded by some fraction of the total economic value of S. And provided there are at least some participants and/or observers of S who believe that S is secure and will remain operating correctly, and are willing to bet to that effect on S', P will eventually be able to build its short position.

Finally, once P is positioned correctly within both S and S', P then launches its assumption-violating behavior in S that will observably cause S to fail as per assumption 3. This might manifest as a denial-of-service attack, a correctness attack, or in any other fashion. The only requirement is that P's behavior creates an observable failure, which a nontrivial number of the existing participants in S believed would not happen because they believed in S and its threat model. The fact that S is now observed to be broken, and its basic design assumptions manifestly violated, causes the shares of S's value to drop precipitously on external market S', on which P takes a handsome profit. Perhaps S recovers and continues, or perhaps it fails entirely – but either way, P has essentially transferred a significant fraction of system S's economic value from system S itself to P's own short-sold position on external market S'. And to do so, P needed only to find a way – any way – to surprise all those who believed S was secure and that its threat model accurately modeled S's real-world participants.
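
A toy accounting of the whole maneuver (all numbers hypothetical) makes plain why the attack is “irrational” viewed within S alone, yet rational in the larger ecosystem:

    # P's books, viewed inside S versus across the whole ecosystem.
    stake_in_S     = 5_000_000   # P's buy-in to S, devalued by the attack
    attack_cost    = 1_000_000   # direct cost of the assumption-violating behavior
    short_notional = 50_000_000  # P's short position against S, held on S'
    price_drop     = 0.60        # fraction of S's value wiped out by the attack

    loss_inside_S  = stake_in_S + attack_cost     # looks irrational within S
    gain_on_Sprime = short_notional * price_drop  # profit on the short position

    print(f"Loss inside S: {loss_inside_S:,}")                      # 6,000,000
    print(f"Net profit:    {gain_on_Sprime - loss_inside_S:,.0f}")  # 24,000,000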

Even if P's assumption-violating behavioral strategy does not break S with perfect reliability, but only with some probability, P can still create an expectation of positive profit from its attack by hedging its bets appropriately on S'. P does not need a perfect attack, but merely needs to possess the correct knowledge that S's failure probability is much higher than the other participants in S believe it to be – because only P knows that (and precisely when) it will violate S's design assumptions to create that higher failure probability. Furthermore, even if P's attack fails, and the vulnerability it exploits is quickly detected and patched, P may still profit marginally from the market's adjustment to a realization that S's failure probability was (even temporarily) higher than most of S's participants thought it was.
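
The probabilistic case reduces to a simple expected-value computation, sketched below with hypothetical numbers: P profits in expectation because P alone knows the true failure probability its attack induces.

    # Expected profit when the attack succeeds only with some probability.
    p_fail_true    = 0.30  # failure probability once P attacks; known only to P
    short_notional = 50_000_000
    attack_cost    = 1_000_000
    drop_if_fail   = 0.60  # price drop if the attack visibly breaks S
    drop_if_miss   = 0.05  # smaller drop when a failed attack reveals the flaw

    expected_gain = short_notional * (p_fail_true * drop_if_fail
                                      + (1 - p_fail_true) * drop_if_miss)
    print(f"Expected profit: {expected_gain - attack_cost:,.0f}")  # 9,750,000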

Within the context of system S, P's behavior manifests as Byzantine behavior, specifically violating the assumptions S's designers thought participants would not exhibit and thus excluded from S's threat model. Considered in the larger context of the external world in which S is embedded, however, including the external trading system S', P's behavior is perfectly rational and economically-motivated. Thus, the very rationality of P in the larger open world is precisely what motivates P to break, and profit from, S's ill-considered assumption that its participants would behave rationally.

Implications for Practical Systems

This type of financial attack is by no means entirely theoretical or limited to fully-digital systems such as cryptocurrencies. In our scenario, P is essentially playing a game closely analogous to that of the investors in credit default swaps who both contributed to, and profited handsomely from, the 2007-2008 financial crisis, as dramatized in the film The Big Short.

In the cryptocurrency space, some real-world attacks we are seeing – such as increasingly-common 51% attacks – might be viewed as special cases of this metacircular attack on rationality. It is often claimed that large proof-of-work miners (or proof-of-stake holders) will not attempt 51% attacks because doing so would undermine the value of the cryptocurrency in which they by definition hold a large stake, and hence would be “irrational”. But this argument falls apart if the attack allows the large stakeholder to reap rewards outside the attacked system, e.g., by defrauding exchanges or selling S short in other systems.
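
The arithmetic behind this rebuttal is straightforward; with hypothetical numbers:

    # A 51% attack that is "irrational" inside S but rational overall.
    stake_value     = 20_000_000  # attacker's holdings in S
    stake_drop      = 0.50        # devaluation of those holdings post-attack
    external_reward = 25_000_000  # defrauded exchanges, short positions, ...

    internal_loss = stake_value * stake_drop
    print(f"Counting S alone:   {-internal_loss:,.0f}")                   # -10,000,000
    print(f"Counting externals: {external_reward - internal_loss:,.0f}")  # 15,000,000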

Externally-motivated attacks on cryptocurrencies have been predicted before in the form of virtual protest or "Occupy Bitcoin" attacks, Goldfinger attacks, puzzle transaction attacks, merged mining attacks, hostile blockchain takeovers, and out-of-band variants of pay-to-win attacks. All these attacks are specific instances of our argument. They have been presented in the literature as open yet solvable challenges. We are not aware, however, of any prior attempt to summarize the lessons learned and formulate a general impossibility statement.

For most practical systems, we do not even know if they are incentive compatible in the absence of an external system S' – i.e., where assumption 2 is violated – and probably they are not. Almost all game-theoretic treatments of (parts of) the Bitcoin protocol deliver negative results. Many attacks against specific cryptocurrency system designs are known to be profitable in expectation, such as transaction withholding, empty block mining, selfish mining, block withholding, stubborn mining, fork after withholding, and whale attacks. It is likely thanks only to frictions such as risk aversion and other costs that we rarely observe such attacks in large deployed systems. Many specific attacks do not even depend on assumption 1, underlining the fact that rationality is not a silver bullet even where this metacircular argument does not apply. Where it does apply, it is more general and effectively guarantees the existence of attacks against all open systems that assume participants are rational.

Another related observation is that financial markets in derivatives of a system S mature in the external world (e.g., on S') as S grows and becomes more relevant. So in some sense, systems built on the rationality assumption are more secure only temporarily, until they become fat enough targets to be eaten by their own success. We can see this effect, for example, in the growing and increasingly liquid market for hash power, which effectively thwarts Nakamoto’s (or Dwork’s) rule of thumb that the ratio of processors to individuals varies in a small band. Such dynamics happen in the real world, too. But there they have traditionally taken decades or centuries, while in the cryptocurrency space everything happens in time-lapse.

Limitations of the Argument

This argument is of course currently only a rough and informal sketch. An enterprising student might wish to try formalizing it, or maybe someone has already done so but we are unaware of it.

The metacircular argument certainly does not apply to all cryptocurrencies or decentralized systems. In a permissioned system, for example, in which a closed group of participants are strongly-identified and subject to legal and contractual agreements with each other, one can hope that the threat of lawsuits for arbitrarily-large damages will keep rational participants incentivized to behave correctly. Similarly, in a national cryptocurrency, which might be relatively open but only to citizens of a given country, and which requires verified identities with which the police can expect to track down and jail misbehaving participants, this metacircular argument does not necessarily apply.

Apart from police enforcement, rationality assumptions may be weakened in other ways to circumvent the metacircular argument. For example, an open system might be designed according to a “weak rationality” assumption that users need incentives to join the system in the first place (e.g., mining rewards in Bitcoin), but that after having become stakeholders, most will then behave honestly. In this case, rational incentives serve only as a tool for system growth, but become irrelevant and equivalent to a strong honesty assumption in terms of the internal security of the system itself.

Conclusion: Irrationality Can Be Rational

[Figure: Types of adversaries]

What many in the cryptocurrency community seem to want is a system that is both permissionless and tolerant of strongly-rational behavior – either beyond the thresholds a similar Byzantine system would tolerate (such as a rational majority), or by deriving some simplicity or efficiency benefit from assuming rationality. But in an open world in which the permissionless system is not the only game in town, a perfectly rational attacker can always exist, or appear at any time, whose entirely rational behavior is precisely to profit from bringing the system down by violating its assumptions on participant behavior.

So if you think you have designed a permissionless decentralized system that is cleverly secured based on rationality assumptions, you haven't. You have merely obfuscated the rational attacker's motive and opportunity to profit outside your system from breaking your rationality assumptions. The only practical way to eliminate this threat appears to be either to close the system and require strong identities and police protection, or else secure the system against arbitrary Byzantine behavior, thereby rendering rationality assumptions redundant and useless for security.

We wish to thank Jeff Allen, Ittay Eyal, Damir Filipovic, Patrik Keller, Alexander Lipton, Andrew Miller, and Haoqian Zhang for helpful feedback on early drafts of this post.

Updated 27-Oct-2019 with link to PDF preprint version.


