Paralysis Proofs: How to Prevent Your Bitcoin From Vanishing

Hacking Distributed

From the buried gold of Treasure Island to the seven missing Fabergé eggs, lost and stolen treasure has long been the stuff of legend and romance. In Bitcoin, though, there are no princesses, dragons, or (seafaring) pirates, and not much romance. Fortunes are often lost simply because private keys are wiped from laptops, the slips of paper they’re printed on go missing, or they’re stolen by hackers.

Mundane as it may seem, key management is critically important in any cryptographic system. Cryptocurrencies like Bitcoin and Ethereum are no exception. Lost or stolen keys can be catastrophic, but handling keys well is notoriously hard. Users need to protect their keys against theft by wily hackers, without securing them so aggressively that they might be lost. Key management is especially challenging in business settings, where often no one person is trusted with complete control of resources.

A common, powerful approach to key management for cryptocurrencies is multisig transactions, a way to distribute keys across multiple users. Such key distribution is referred to more generally as secret sharing.

We have just released a paper addressing a critical problem with secret sharing in general and in cryptocurrencies in particular. We refer to this problem as access-control paralysis.

How Secret Sharing Can Induce Paralysis
A few months ago, an acquaintance approached us with a simple but intriguing problem, a good example of real-world key distribution challenges.

This person---we’ll call our whale friend Richie---shared ownership of a large stash of Bitcoin with two business partners. Richie and his partners naturally didn’t want any one partner to be able to make off with the BTC. They wanted to ensure that the BTC could only be spent if they all agreed. There’s a simple solution, right? They could use a 3-out-of-3 multisig address, where all three would need to sign any transaction involving the BTC. Problem solved! Or is it?

That’s not the whole story, though. Richie and his partners were also worried, naturally, about what would happen if one of their keys were lost. The device storing a key might fail, a key might be deleted by mistake, or, in some very unfortunate cases, such as a car accident, a shareholder might physically lose the ability to access her key. The result would be a complete loss of all of their BTC.

This isn’t the only bad scenario. It’s also possible that Richie and his partners have different ideas about how the money should be spent, and can’t come to an agreement. Worse still, one malicious or piggish shareholder might blackmail the others by withholding her key share until they pay her. In this case, the BTC could be lost, temporarily or permanently, to indecision or malice.

We use the term paralysis to denote any of these awkward situations where the BTC can’t be spent. Unfortunately, N-out-of-N multisig doesn’t solve the paralysis problem. In fact, it makes the problem worse, as loss of any one key is fatal.

Richie and his business partners.

For this reason, avoiding paralysis while meeting the goal of Richie and his partners---i.e., requiring full agreement to spend the BTC---seems impossible. Actually, it is impossible with secret sharing alone, because of a basic paradox. Suppose we have an N-out-of-N multisig scheme, which we clearly need to enforce full partner agreement for transactions. If N-1 shareholders can somehow gain access to the BTC when one share goes missing, they can simply pretend that one share has gone missing and access the fund on their own. In other words, what we had to begin with was really some kind of (N-1)-out-of-N multisig, which is a contradiction.

Richie’s problem seems to have left us in a state of paralysis...

Resolving the Paradox
Thanks to the advent of two powerful technologies, blockchain and trusted hardware---Intel SGX, in particular---it turns out that we can actually resolve this paradox. We can do so efficiently and in a very general setting---to the best of our knowledge, for the first time. Toward this end, we introduce a novel technique called a Paralysis Proof System.

As you’ll see, fairly general Paralysis Proof Systems can be realized relatively easily in Ethereum using a smart contract---no SGX required. We present an example Ethereum contract in our paper. Scripting constraints in Bitcoin, however, necessitate the use of SGX and also introduce some technical challenges. Prime among these is the fact that without significant bloat in its trusted computing base, an SGX application cannot easily sync securely with the Bitcoin blockchain.

Our approach leverages a novel combination of SGX with a blockchain that avoids the need for an SGX application to have a trustworthy view of the blockchain.

Paralysis Proof Systems: Intuition
The intuition is fairly simple. A trusted third party holds all of the keys in escrow. If one or more parties cannot or will not sign transactions, leading to the paralysis described above, the others generate a Paralysis Proof showing that this is the case. Given this proof, the third party uses the keys it holds to authorize transactions.

If we have a trusted third party, though, we’re clearly not achieving the security goal set forth by Richie and his friends. One party controls all the keys!

This is where SGX comes into play. An SGX application can behave essentially like a trusted third party with predetermined constraints. For example, it can be programmed so that it is only able to sign transactions when presented with a valid proof. (In this sense, SGX applications behave a lot like smart contracts.) Thanks to SGX, we can ensure that the BTC can only be touched by a subset of parties when provable paralysis occurs.

A few technical details
Of course, even given this magic that is SGX, we still need to ensure that Paralysis Proofs can only be generated legitimately. We don’t want Richie’s partners to be able to “accuse” him, falsely claiming that he’s dead by, say, mounting an eclipse attack against the host running the SGX application. Happily, blockchains themselves provide a robust way to transmit messages and for a party to signal that she is alive. To implement a Paralysis Proof System for Bitcoin, we take advantage of this fact, along with a few tricks. For the sake of simplicity, we’ll focus on the problem of inaccessible keys, setting aside other forms of paralysis for the time being.

A Paralysis Proof is constructed by showing that a party P is not responding in a timely way and thus appears to be unable to sign transactions. The system emits a challenge that the “accused” party must respond to with what we call a life signal. If there’s no life signal in response to a challenge for some predetermined period of time (say, 24 hours), this absence constitutes the Paralysis Proof.

For Bitcoin, a life signal for party P can take the form of a UTXO of a negligible amount of Bitcoin (e.g. 0.00001 BTC) that can be spent either by P — thereby signaling her liveness — or by pk_SGX---but only after a delay. Note that sk_SGX is only known to the SGX application.


Let’s return to our example of three shareholders, each of whom possesses a key pair (sk_i, pk_i). First they escrow their BTC fund---let’s suppose it’s 5000 BTC---to UTXO_0, an output spendable by either all of them or pk_SGX. Suppose now that P_2 and P_3 decide to accuse P_1. Upon receiving their request, the SGX application prepares the following two transactions and sends them to P_2 and P_3:
  • t_1 that creates a life signal UTXO_1 of 0.00001 BTC spendable by either pk_1 immediately or by pk_SGX after a timeout (e.g. 144 blocks, 24 hours)
  • t_2 that spends both UTXO_0 and the life signal UTXO_1, to an address spendable by pk_2 and pk_3 (or pk_SGX optionally, if they want to stay in the Paralysis Proof System).
The shareholders that accused P_1 should therefore broadcast t_1 to the Bitcoin network, wait until t_1 is added to the blockchain, then wait for the next 144 blocks, and then broadcast t_2 to the Bitcoin network. There are two possible outcomes:
  • In the case of a legitimate accusation, where P_1 is indeed incapacitated, P_2 and P_3 will obtain access once t_2 is mined. This ensures the availability of the managed BTC fund.
  • In the case of a malicious accusation, however, the above scheme ensures that P_1 has the opportunity to appeal while these 144 blocks are being generated. To do so, P_1 just spends UTXO_1 with the secret key that is known only to her (the script of t_1 does not require the CSV condition for spending with her secret key). Since t_2 takes both UTXO_0 and UTXO_1 as inputs, spending UTXO_1 renders t_2 an invalid transaction.
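To make the timing logic of the two outcomes concrete, here is a minimal Python sketch. It is purely illustrative (it models the validity rules, not real Bitcoin transactions), and all names are our own:

```python
# Toy model of the life-signal timing logic: t_2 spends both UTXO_0 and
# the life signal UTXO_1, so it is valid only if UTXO_1 is still unspent
# AND the CSV relative timeout on UTXO_1's SGX spending path has elapsed.

TIMEOUT = 144  # blocks, roughly 24 hours


def t2_valid(utxo1_spent_by_p1: bool, confs_on_t1: int) -> bool:
    if utxo1_spent_by_p1:
        return False               # P_1 appealed: t_2's second input is gone
    return confs_on_t1 >= TIMEOUT  # SGX path unlocks only after the delay


# Legitimate accusation: P_1 never responds; after 144 blocks t_2 is mineable.
assert t2_valid(utxo1_spent_by_p1=False, confs_on_t1=144)

# Malicious accusation: P_1 spends UTXO_1 within the window, killing t_2.
assert not t2_valid(utxo1_spent_by_p1=True, confs_on_t1=200)

# Too early: even honest accusers must wait out the relative timeout.
assert not t2_valid(utxo1_spent_by_p1=False, confs_on_t1=100)
```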
Security Reasoning
The security of life signals stems from the use of a relative timeout (CheckSequenceVerify) in the fresh t_1, and the atomicity of the signed transaction t_2. To elaborate, t_2 will be valid only if the witness (known as ScriptSig in Bitcoin) of each of its inputs is correct. The witness that the SGX enclave produced for spending the escrow fund is immediately valid, but the witness for spending t_1 becomes valid only after t_1 has been incorporated into a Bitcoin block that has been extended by 144 additional blocks (due to the CSV condition). Thus, setting the timeout parameter to a large value serves two purposes: (1) giving P_1 enough time to respond, and (2) making sure that it is infeasible for an attacker to create a secret chain of 144 blocks faster than the Bitcoin miners, and then broadcast this chain (in which t_2 is valid) to overtake the public blockchain.

Beyond Cryptocurrencies and Paralysis
Although we use Bitcoin as a running example, the power of Paralysis Proof Systems extends beyond cryptocurrencies and the techniques behind them support a range of interesting new access-control policies for problems other than paralysis. Some of these policies are easy to enforce in smart contract systems like Ethereum, but others aren’t because they rely on access-controlled management of private keys, which can’t be maintained on a blockchain.

For example, Paralysis Proof Systems can be applied to credentials for decryption. You can use Paralysis Proofs to create a dead man’s switch for the release of a document, allowing it to be decrypted if a person or group of people disappears. Here are some other examples of access-control policies other than paralysis that can be realized thanks to a combination of blockchains (censorship-resistant channels) plus SGX:
  • Daily spending limits: It is possible to ensure that no more than some pre-agreed-upon amount---say 0.5 BTC---is spent from a common pool within a 24-hour period. (There are some practical limitations discussed in our paper.)
  • Event-driven access control: Using an oracle, such as our Town Crier system (which is actually the first public-facing SGX-backed production application), it is possible to condition access-control policies on real-world events. For example, daily spending limits might be denominated in USD, rather than BTC, by providing a data feed on exchange rates. One could even in principle use natural language processing to respond to real-world events. For example, a document with compromising information could be decryptable by a journalist should its author be prosecuted by a federal government.
  • Upgrading threshold requirements: Given agreement by a predetermined set of players, it is possible to add and/or remove players from an access structure, i.e., set of rules about authorized players. E.g., it is possible to convert a k-out-of-N decryption scheme to a (k+1)-out-of-(N+1) scheme. In a regular secret sharing scheme, upgrading isn’t possible, because a group of authorized players can always reconstruct the key they hold. If an SGX application controls a decryption key, however, it can monitor a blockchain to determine if players have voted for an upgrade. Votes are immune to suppression if they’re recorded on chain.
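To picture the first of these policies, here is a toy Python sketch of a daily spending limit: the enclave co-signs a spend only if the total approved within a sliding 24-hour window stays under the cap. The class and its interface are our own invention, not from the paper, and a real enclave would track block timestamps rather than a caller-supplied clock:

```python
from collections import deque


class SpendingLimitPolicy:
    """Toy model of an enclave-enforced daily spending limit.
    Illustrative only: timestamps are plain integers, not block data."""

    def __init__(self, cap_btc: float, window_secs: int = 86400):
        self.cap = cap_btc
        self.window = window_secs
        self.history = deque()  # (timestamp, amount) of approved spends

    def approve(self, amount: float, now: int) -> bool:
        # Drop spends that have aged out of the 24-hour window.
        while self.history and now - self.history[0][0] >= self.window:
            self.history.popleft()
        spent = sum(a for _, a in self.history)
        if spent + amount > self.cap:
            return False  # enclave refuses to co-sign
        self.history.append((now, amount))
        return True


policy = SpendingLimitPolicy(cap_btc=0.5)
assert policy.approve(0.3, now=0)        # within the cap
assert not policy.approve(0.3, now=100)  # would exceed the 0.5 BTC cap
assert policy.approve(0.2, now=200)      # exactly reaching the cap is fine
assert policy.approve(0.3, now=86500)    # the earliest spend has expired
```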
In general, the combination of SGX and blockchains supports a fundamental revisitation of access-control policies in decentralized systems and introduces powerful new access-control capabilities---capabilities that are otherwise impossible to achieve.

To learn more, read our paper here.

Many interesting extensions of the above are discussed in our paper. Here are a couple.

Paralysis Proofs via Covenants
As mentioned, it is the scripting constraints of Bitcoin that necessitate the use of SGX. In fact, we also present a (somewhat less efficient) approach without trusted hardware that makes use of covenants, a proposed Bitcoin feature. We refer readers to our paper for a covenant-based protocol. The bottom line is that the complexity of the covenants approach is significantly higher than that of an SGX implementation (in terms of conceptual as well as on-chain complexity). As there have been recent proposals to support stateless covenants in Ethereum, the comparative advantages of our SGX-based design may prove useful in other contexts too.

Tolerating broken SGX
In the aforementioned example, the fund can be spent by pk_SGX alone, but it’s important to note that’s not the only option. In fact, one can tune the knob between security and paralysis-tolerance to best fit one’s needs.

For example, if the three shareholders only want to tolerate up to one missing key share, they can move the funds into a 3-out-of-4 address where the 4th player is the SGX enclave. If all of them are alive, they can spend without SGX. If one of them is incapacitated, the enclave will release its share once the remaining two players show a Paralysis Proof. Therefore, even if the enclave’s secret is leaked via a successful side-channel attack, the attacker cannot spend the fund without colluding with two malicious players.

This is an interesting line of future research we intend to pursue.
The Social Workings of Contract


Recently, scholars have begun paying attention to the legal limitations of so-called smart contracts. There are several salient critiques: smart contracts may imperfectly capture obligations; they may not fully account for changed circumstances; they may still require external interpretation. Taken together, these issues may well impede the legal utility of smart contracts “in the wild.” But there’s another set of issues at play, too: in the real world, contracts have social utility, and people use them in complex, strategic ways that often don’t align with their legal rights and obligations. These social functions require flexibility—often, the very flexibility that is intentionally short-circuited by smart contracts.

In my recent paper, “Book-Smart, Not Street-Smart: Blockchain-Based Smart Contracts and The Social Workings of Law,” I describe three common contracting practices that illustrate how contracts actually “work” in the social world. People may include contract terms that they know to be legally unenforceable in order to set behavioral norms for their contracting partners; for instance, “pay-if-you-stray” infidelity clauses in prenuptial agreements are generally unenforceable in court, but still communicate expectations about how the parties will treat each other. Or, people might include contract terms that are purposefully vague; particularly in long-term relations where the parties contract with one another repeatedly, it can promote stability to leave some expectations underspecified. Finally, parties might strategically decide not to formally enforce an enforceable contract. The looming shadow of a lawsuit can be enough to encourage people to bargain between themselves, and the outcomes of these bargains are often mutually preferable to what might result from formal adjudication.

Altogether, these uses of contract suggest that contracts are social resources as much as they are legal mechanisms. But smart contracts focus on the technical form of contract to the exclusion of social context; that’s what they’re designed to do. We might think of smart contracts as book-smart, not street-smart. While they may facilitate technically perfect and seamless implementation of agreements, the social friction required to negotiate and enforce a “dumb” contract can, in some cases, be functional and desirable. This doesn’t imply that there is no role for smart contracts in some social settings, but it does suggest that attention to the setting matters a lot. In the paper, I suggest that as a matter of policy, we ought to carefully consider the social characteristics of contracting environments—the goals of the parties, the longevity of the relationship, the availability of reputational mechanisms, and the like—before deploying smart contracts into them.
Decentralization in Bitcoin and Ethereum



We have been conducting a longitudinal study of the state of cryptocurrency networks, including Bitcoin and Ethereum. We have just made public our results from our study spanning 2015 to 2017, in a peer-reviewed paper about to be presented at the upcoming Financial Cryptography and Data Security conference in February [1].

Here are some highlights from our findings.

Bitcoin Underutilizes Its Network
Bitcoin nodes generally have higher bandwidth allocated to them than Ethereum nodes. Compared to our previous study in 2016, we see that the median bandwidth for a Bitcoin node has increased by a factor of 1.7. The typical Bitcoin node has much more bandwidth available to it than it did before.

Higher allocated bandwidth indicates that the maximum block size can be increased without impacting orphan rates, which in turn affect decentralization. If people were happy about the level of decentralization in 2016, they should be able to increase the block size by 1.7x to clear almost twice as many transactions per second while maintaining the same level of decentralization.

Some people argue that increasing the maximum block size would also prohibitively increase CPU and disk requirements. Yet these costs were trivial in the first place, especially compared to today's transaction fees, and have come down drastically. For instance, a 1TB disk cost $85 on average in 2016 and $70 in 2017 [2].

To date, we have seen no sound, quantitative arguments for any specific value of the maximum block size in Bitcoin. Arguments on this topic have consisted of vague, technical-sounding-yet-technically-unjustified argumentation, bereft of scientific justification. The dissonance between the technical-soundiness of the arguments and the actual technical facts on the ground is disconcerting for a technological endeavor [3].

Ethereum is Better Distributed Than Bitcoin
Compared to Ethereum, Bitcoin nodes tend to be more clustered together, both in terms of network latency as well as geographically. Put another way, there are more Ethereum nodes, and they are better spread out around the world. That indicates that the full node distribution for Ethereum is much more decentralized.

Part of the reason for this is that a much higher percentage of Bitcoin nodes reside in datacenters. Specifically, only 28% of Ethereum nodes can be positively identified to be in datacenters, while the same number for Bitcoin is 56%.

Nodes that reside in datacenters may indicate an increased level of corporatization. They may also be a symptom of nodes deployed to skew node counts for various different implementations (a.k.a. part of Sybil attacks to influence public opinion), a hypothesis that was floated extensively during the course of our study.

In contrast, Ethereum nodes tend to be located on a wider variety of autonomous systems.

Neither Are All That Decentralized
Both Bitcoin and Ethereum mining are very centralized, with the top four miners in Bitcoin and the top three miners in Ethereum controlling more than 50% of the hash rate.

The entire blockchain for both systems is determined by fewer than 20 mining entities [4]. While traditional Byzantine quorum systems operate in a different model than Bitcoin and Ethereum, a Byzantine quorum system with 20 nodes would be more decentralized than Bitcoin or Ethereum with significantly fewer resource costs. Of course, the design of a quorum protocol that provides open participation, while fairly selecting 20 nodes to sequence transactions, is non-trivial.

Thus, we see that more research is needed in this area to develop permissionless consensus protocols that are also energy efficient.

Ethereum Wastes Mining Effort That Can Be Put To Better Use
Ethereum has a much higher uncle rate than Bitcoin's pruned block rate. This is by design, as Ethereum operates its network closer to its physical limits and achieves higher throughput. As a result, however, less of Ethereum's hash power goes towards sequencing transactions than Bitcoin's. Put another way, some hash power is wasted on uncles, which do not help carry out directly useful sequencing work on the chain.

This indicates that Ethereum would greatly benefit from a relay network, such as Falcon or FIBRE for Bitcoin. Relay networks ferry blocks quickly among miners and full nodes, and help reduce wasted effort by reducing uncle and orphan rates.

Ethereum Exhibits Better Variance in Fairness, Favoring Small Miners
Fairness is an important metric: it determines whether a small miner is at a greater disadvantage compared to a larger miner. If a system is perfectly fair, there would be fewer reasons for miners to pool their resources into larger, cooperating pools that operate in unison.

To measure fairness, we compared the proportion of a miner’s blocks that ended up on the main chain with that miner’s proportion of all blocks produced, including those that did not help advance the blockchain, namely, pruned blocks and uncles. In an ideal system, this ratio would be equal to 1.
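As a toy illustration of this metric as we read it (our own sketch; the paper gives the precise definition), fairness compares a miner's share of main-chain blocks against its share of all blocks produced:

```python
def fairness(miner_main: int, miner_total: int,
             all_main: int, all_total: int) -> float:
    """Ratio of a miner's share of main-chain blocks to its share of all
    blocks produced (main chain plus pruned blocks and uncles).
    An ideally fair system yields 1 for every miner."""
    return (miner_main / all_main) / (miner_total / all_total)


# A miner producing 10% of all blocks that also lands 10% of the
# main-chain blocks is treated perfectly fairly:
assert fairness(100, 110, 1000, 1100) == 1.0

# If its blocks are orphaned disproportionately often, fairness drops below 1:
assert fairness(90, 110, 1000, 1100) < 1.0
```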

The level of fairness in both systems is, roughly speaking, comparable. But there is a big difference in variance of fairness, with Bitcoin exhibiting high variance. That is to say, mining rewards are more unpredictable for smaller miners in Bitcoin. This is partly because the high block rate in Ethereum provides many more opportunities for the law of large numbers to apply, while Bitcoin, with its infrequent blocks, can exhibit much more uncertainty from month to month.

The full details of how we measured the data, and what we found in more precise terms, are in our paper.

    [1]    Our study examines solely the networks and the blockchain maintained by those networks. It does not examine development centralization. Balaji Srinivasan and Leland Lee have developed a metric, called the Nakamoto Coefficient, that attempts to capture centralization across different fields.
    [2]    Historical price data is notoriously difficult to find, for some reason. The specific sources we used are PC Part Picker and Camel. Our personal experience was more drastic than the industry average, closer to a 2X drop in price over the same time frame.
    [3]    Concomitantly, Bitcoin Core has adopted a narrative that it is a Store of Value, in effect making it explicit that the token is not a technological artifact meant to facilitate payments, but an investment vehicle where early adopters are compensated by latecomers.
    [4]    Of course, some of these entities are pools. And some people will claim that pools provide decentralization, because they are composed of multiple independent actors. This argument is incorrect for a few reasons: (1) we retrospectively examine the historical record, and at the time of that particular block's commitment to the blockchain, there was a de facto, undeniable agreement among the pool members to act in unison, now recorded on the blockchain, (2) perhaps the pool members would leave if the pool engaged in activities that damage the currency, but this has historically not happened, to the point where a pool exceeded 51% of the hash power, (3) even if pool members were motivated to leave their pool in the presence of unwanted behaviors (e.g. selective transaction censorship by the pool), their ability to do so depends on their ability to detect these behaviors, and most participants are not geared to detect them in the first place. In short, pools providing any level of decentralized decision making is more aspirational talk than a proven reality.
How Not To Run A Blockchain Lottery


Today's post is a cautionary tale on why running a lottery on a blockchain is so incredibly hard to get right [1].


Here's the setting: Eric Lombrozo, a Bitcoin Core developer, was in the Christmas spirit yesterday and decided to give away 1 BTC, split into 10 chunks of 0.1 BTC each, to people who re-tweeted him. This is a gift of approximately $1500 [2] each.

He wanted to make the giveaway provably fair, so he devised the algorithm described in his tweet thread. I won't go into the gory technical details at all, except to note that, in essence, he combines two block hashes at pre-determined block heights and derives a 16-bit number (H) that is coprime to the number of retweets. Coprime, also known as "relatively prime," just means that the winning lottery number H and the number of re-tweets have no common divisor other than 1. He then indexes into the list of re-tweeters 10 times, rewarding every (H)th retweeter, wrapping around as necessary.
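In rough Python terms, the selection step works like this. This is our own reconstruction of the scheme as described in the thread, not Eric's actual code, and it omits how H is derived from the block hashes; the 1-indexed, wrap-around indexing convention is an assumption:

```python
import math


def pick_winners(H: int, num_retweets: int, prizes: int = 10) -> list:
    """Reward every (H)th retweeter, wrapping around the list.
    Retweeter positions are 1-indexed; H must be coprime to the
    number of retweets."""
    assert math.gcd(H, num_retweets) == 1
    return [((k * H - 1) % num_retweets) + 1 for k in range(1, prizes + 1)]


# e.g. H = 7 with 1000 retweeters: positions 7, 14, ..., 70 win.
assert pick_winners(7, 1000) == [7, 14, 21, 28, 35, 42, 49, 56, 63, 70]

# With a stride near the list size, the selection wraps around.
assert pick_winners(3, 10) == [3, 6, 9, 2, 5, 8, 1, 4, 7, 10]
```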

Now, the scheme is quite ornate and complicated. But the key operation that's happening underneath is simple: he is deriving a random number from two block hashes. This is a pattern I've seen in use in at least a dozen buggy Ethereum Dapps, and many of you are going to stop reading at this point thinking you understand the problem. Do read on, because there are multiple problems, and the actual bugs are not the usual, obvious ones, even though it'll seem that way at first.

Let's delve into this scheme and note the interesting observations and thoughts as we go along:

Concerned About Miner Attacks
  • Eric is worried about miner manipulation of his lottery.
Everyone deriving random numbers from block hashes should worry about attacks by miners. Recall that a miner wishing to tilt the lottery can do so by computing a block and seeing if its hash yields a good outcome for the miner. If not, the miner tosses out the block without making it public.
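The grinding attack is easy to picture: the miner simply re-rolls by discarding otherwise-valid blocks until the hash-derived outcome suits them, sacrificing a block reward per re-roll. A toy sketch (illustrative only; real proof-of-work is stubbed out with a bare hash):

```python
import hashlib


def mine_block(nonce: int) -> bytes:
    # Stand-in for real proof-of-work: just hash a nonce.
    return hashlib.sha256(nonce.to_bytes(8, "big")).digest()


def grind_lottery(attempts: int, favorable) -> int:
    """Discard valid blocks until the hash-derived lottery outcome
    satisfies `favorable`. Returns how many blocks (and thus how many
    full block rewards) the miner had to throw away."""
    discarded = 0
    for nonce in range(attempts):
        h = mine_block(nonce)
        outcome = int.from_bytes(h[:2], "big")  # 16-bit lottery number
        if favorable(outcome):
            return discarded
        discarded += 1  # block withheld: its entire reward is sacrificed
    return discarded

# Suppose the miner "wins" only when the outcome is divisible by 100
# (roughly a 1% target): on average ~100 block rewards must be burned.
cost = grind_lottery(10_000, lambda x: x % 100 == 0)
```

This is what makes the attack economically absurd here: each discarded block forfeits the full block reward for, at best, a 0.1 BTC prize.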

But Miner Attacks Are Not A Problem
2. But in this case, this worry about the miners is completely overblown. A miner would have to be insane to discard a perfectly good block to tilt this particular lottery, because they would stand to lose more than 20 BTC (~$300,000) for a gain of 0.1 BTC ($1500).

Failed Defenses Against Mythical Attacking Miners
3. Regardless, some people are overly paranoid about things that will not happen. We have heard quite a bit about the "Chinese miners" [3] and there is a subreddit dedicated to vilifying them. And Core developers keep reminding us how cautious they are as a group. Perhaps the extra paranoia is called for; at least, maybe it's harmless.

Is Paranoia A Virtue?
4. But is extra paranoia really harmless? Or as Ross Anderson has argued relentlessly for a few decades, should one make security decisions based on costs? Is computer security, at its heart, a game of tradeoffs, where quantitative reasoning should keep us from going towards unnecessarily ornate solutions, which, in their effort to guard against things that are not happening now and will not happen in the future, themselves introduce other flaws? Even if they don't introduce problems directly, they take our time and concentration away from other potential issues. While we're trying to focus on the miners, we might totally miss the boat on other, bigger problems, right?

Things get philosophical at this point, so I'll abandon this line of thought and assume that the miners are evil. There is a subreddit full of messages and memes to this effect.

Does This Distrust of the Miners Actually Work?
5. So how good is this approach at keeping the evil miners at bay? Not good at all. This scheme derives a random number from the combination of two hashes, at heights h and h+5. It does not make sure that these blocks came from two separate miners. What's the point of picking two numbers if they are coming from the same entity?

And even if blocks h and h+5 were coming from different miners, how do we know that they are not colluding? This seems like an insurmountable problem.

Well, It's Broken Anyway
6. But no matter! This scheme is broken anyway, in the usual, predictable way. The miner who mines the second block knows the first one, so he is fully and solely in control of the lottery's outcome. So, the two miners don't even have to collude; the second one controls the outcome.

But It Doesn't Matter
7. But wait, per point #2 above, the randomness of the number does not matter, because rational miners won't launch attacks that cost more than the potential winnings, let alone attack a goodwill gesture. Now, if this were a state lottery, it would be a different story, but it isn't, so let's move on. Miner attacks are not going to happen, and all the energy spent worrying about them is wasted.

It's Broken In a Different Way
8. This brings us to the crux of a more fundamental problem. This particular scheme picks numbers that are coprime to the number of re-tweeters, and picks small multiples of that number. Picking coprime integers is a good idea when devising pseudo-random number generators (PRNGs), but this is not a PRNG, and the numbers aren't picked carefully: they are picked from the blockchain and multiplied by a small series of integers.

Signs of Trouble
9. As a result, some numbers are much more likely to be chosen for H than others. A simple example illustrates why. Imagine that Eric's tweet gets 1000 retweets. There are 65534 potential numbers that can be coprime with 1000. For instance, 2, 4, 5, 6, 8, and 10 are not coprime, so they will not be the first picked number. Their multiples are not going to be chosen either.

10. Now, it could be that a smaller number is picked, like 1, and the first 10 numbers would then be winners.

And it could be that a large number is picked, and it wraps around and covers numbers that would otherwise not be covered.

But there may well be numbers that are neither covered by getting picked (i.e. coprime with N) nor covered by a large number wrapping around (i.e. a multiple of a coprime mod N). Let's call these unlucky numbers.

11. Are there such unlucky numbers? Yes, it turns out that, if there are 1000 tweeters, there is absolutely no way that tweeters 20, 25, 40, 50, 60, 75 would ever win!
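A brute-force check over all 16-bit values of H confirms this for 1000 retweeters. This sketch is our own (independent of the simulator mentioned below), and it assumes the 1-indexed, wrap-around indexing described above:

```python
import math

N = 1000          # number of retweeters
H_MAX = 65535     # H is a 16-bit number

# Collect every position that can win for SOME valid H (coprime to N),
# where the winners for a given H are positions H, 2H, ..., 10H (mod N).
winnable = set()
for H in range(1, H_MAX + 1):
    if math.gcd(H, N) == 1:
        for k in range(1, 11):
            winnable.add(((k * H - 1) % N) + 1)

# The positions listed above really can never win...
for unlucky in (20, 25, 40, 50, 60, 75):
    assert unlucky not in winnable

# ...while, say, position 10 can (e.g. H = 1 with k = 10).
assert 10 in winnable
```

Intuitively, a position like 20 shares the factors 4 and 5 with 1000, so reaching it would require a multiplier k divisible by 20, and k only runs from 1 to 10.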

More Trouble
12. We don't know how many retweeters there will be. So perhaps the unlucky numbers for 1000 tweeters are different enough from the unlucky numbers for 1001 tweeters, and the probabilities cancel out?

We could do some number theoretic analysis at this point, but it's Christmas Eve and it's a lot easier to just crunch the numbers and check. What we can do is iterate over the possible numbers of re-tweeters, and see if there are any favored positions and unlucky numbers, and then we can see by how much the favored people are ahead of the unlucky ones.

13. I wrote a little simulator that examines what happens for every number of re-tweets. It's on github here. Everyone can look and see for themselves if I missed something.

Oooh Baby
14. Here are the results in graphical form, for a number of re-tweets in the range 1000 to 4000.


We can immediately see that re-tweeter number 10 is highly favored! He has 641486 different ways he can win. The second most favored position is person 970, who has a respectable 632404 ways to win. Retweeters 2, 8 and 9 are also in the top-25.

Winners              Losers
Position    Ways     Position    Ways
--------  ------     --------  ------
      10  641486          143  475215
     970  632404          336  473269
     890  632321          672  472628
     830  631763          756  470375
     790  631226          528  468487
     730  630904          840  464022
     710  630818          780  461876
     670  630004          660  455016
     610  629438          420  454108
     590  628902          924  440952

Compare this with the losers at the tail. Retweeter 924, for instance, has only 440952 ways to win! Retweeter #10 is almost 1.5 times more likely to be chosen than retweeter #924! His compatriots at 420, 660, 780, 840, and 528 are other lacklusters who are much less likely to be chosen.

Even Bigger Error
15. That all seems very clever. It initially looked like the mechanism was manipulable by miners, but then it turned out that that was totally irrelevant and the scheme was flawed all by itself. The distribution is not uniform; it favors certain people.

But there is a much bigger, even more glaring error. I'll give you a paragraph break to think about it.

16. This lottery is actually tilted in favor of people who run social media manipulation services. If you have a bunch of twitter sockpuppets, you can occupy many more slots than honest people. None of the fancy math above actually matters if you have 1000 sockpuppets and there are only 1000 organic retweeters. The sock puppets' chances of winning will be 50%.

Wrapping Up
So, what did we learn from this exercise?
  • Deriving fair, unmanipulable randomness from a blockchain is difficult.
  • Combining multiple block hashes does not provide any immunity against miner attacks; the last miner is in full control of the outcome.
  • If one fixates on unlikely problems, one can miss much more likely ones right under one's nose.
  • Striding into a big list with a number that is coprime to the size of the list will not yield a uniform selection. In fact, this scheme exhibits incredible skew, and timing your tweet well can give you a huge advantage over others.
  • But the day belongs, as it often does in the short term, to social media trolls and astroturfers.

    [1]    Actually, as with every Bitcoin related tale these days, this discussion contains, fractally, the entirety of the block size debate within it.
    [2]    Specifically, it's $1340 at the time of this writing, minus $40 to receive the funds, minus another $40 to use them.
    [3]    The "Chinese miners" are almost always lumped together by their nationality as if all of them come from the same mold. Yes, it's inappropriate.
Parity Proposals’ Potential Problems

Hacking Distributed
Parity Proposals’ Potential Problems

Be careful when you resurrect the dead.

Since the second Parity Multisig hack that froze a total of 514,774 ETH (242 million USD at the time of writing), there has been an ongoing debate about possible ways to unfreeze the funds and return them to their rightful owners. Yesterday, Parity technologies released a blog post and a draft EIP, consisting of four variants of a proposal that would allow the stuck funds to be unfrozen.

All of the Parity proposals essentially address the same issue. In general, they allow self-destructed contracts to be revived through the re-deployment of a new contract at that address. In the following discussion, we will start out by thinking about the security implications of such contract revival proposals in general terms, and then comment on the specifics of Parity’s four proposals. We conclude the post with a recommendation for how a potential unfreeze of funds should work.

A General Security Problem
The Parity proposals allow the deployer of a contract to deploy new, different contract code in a dead contract’s stead. Any smart contract that contains a self-destruct opcode could now be destroyed by its creator and re-instantiated with code of the creator’s choice, potentially leading to loss of users’ funds.

A concrete example: let’s say you had a contract like 0x’s deposit contract, responsible for holding a number of ERC20 tokens as part of its operation. If such a contract could somehow be self-destructed, the creator can attempt to invoke the self-destruction mechanism and replace the deposit contract with a forwarder to themselves. Unwitting users may then send funds to the creators rather than the intended decentralized exchange. We discuss more concrete examples later in the post.

This example may seem contrived, but with the complex networks of interacting, custodial, and interdependent contracts we are building, any contract that relies on the code in contracts it interacts with not changing can be affected by intentionally or accidentally malicious contracts implementing this mechanism. The important security invariant, that after deployment the code at address A can only change to empty, is broken for any callers.

The Ethereum developers have so far been very conservative with breaking invariants, a strategy that we applaud. For instance, the adoption of EIP-86 was deferred due to concerns about breaking exactly the same invariant that this proposal modifies.

As we were writing this post, Nick Johnson published an excellent analysis that also argues against breaking invariants and looks at the proposed changes from a network-safety point of view. Our posts don’t overlap much, so it’s definitely worth reading his take.

On Semantic Compatibility
The fundamental property violated by Parity’s proposed changes is semantic compatibility. In general, at the time a contract is written, an author expects the contract to operate in a manner determined by the defined and documented semantics of the underlying language and platform (and any core existing contracts). When an auditor audits a contract, the same is true: auditors have to reason quite intimately about very specific edge cases and interactions of EVM constructs, and cannot make assumptions about the behavior of any code in isolation.

The proposed changes would introduce a major change to the semantics and security model of the EVM retroactively, potentially altering the assumptions of programmers, languages, and higher level tools made in the process of building a contract.

A big emphasis in the Ethereum community recently has been placed on high-assurance software development, including rigorous development and testing practices and use of formal models and tools. The proposed changes substantially weaken the security model for any contracts using self-destruct: in high-assurance, safety critical software development, any changes to the underlying platform are rigorously validated, tested, and re-evaluated for effects on higher level software that builds on them. Put another way, no vehicle manufacturer would upgrade its operating system in production without re-testing, re-auditing, and re-evaluating all the software that runs on this OS for bad interactions, side effects, and bugs. And for the proposed change, that includes every smart contract on the network today.

With regards to audits and formal verification, it is important to note that any such work performed on pre-fork contracts may need to be re-done post-fork, specifically if the verification of these contracts used the valid semantic assumptions of unchanging code in other contracts, or if the verified contract contained the self-destruct operation. This could set a precedent that imposes repeated ecosystem costs for an expensive process that ideally should only be required once.

Not all semantics-breaking changes are bad. In some cases, new semantics are introduced to components of the system explicitly specified as changeable (e.g. new opcodes, or new pre-compiles). It is important to not let backwards compatibility become a dogma, but for security, care and minimally invasive procedures are still required.

The Parity Proposals
We noted a number of implementation questions with the concrete code that Parity has proposed, specifically in their Proposals C and D. These are orthogonal to the higher level security issues described above, but nonetheless introduce substantial complexity into the Ethereum platform.
  • Hacky contract proxies: function setupProxyForContract(uint nonce, address destination) public { proxy[address(keccak256(msg.sender, nonce))] = destination; } is used to set the proxy for contracts in Proposal D. This allows the creator of the contract to set the proxy to any code they want at any time in the future (note that the setupProxyForContract guard does not have a single-call check, and can be used to swap out proxy contracts many times and on the fly; this allows contract creators to essentially hot-swap recovery contracts from under their users). Currently, contract creators hold no special power over contracts they created; this proposal would change that.
  • Hacky contract proxies 2: The above code does not handle cases where it was a contract that created a contract. In these cases, this creator contract must explicitly include proxy setup functionality, with old contracts that created contracts not able to participate in this proxy process. For a general solution, all cases should be handled.
  • Asymmetry between ether and tokens: In what is presumably an attempt to reduce the risk of revived contracts stealing funds, Proposals B, C, and D don’t mark the fallback function in the proxy contract as payable. Therefore, revived contracts can receive token payments, but not ether payments. This inconsistency introduces additional complexity in reasoning about the behaviour of revived contracts.
  • Conflicting mention / security properties of tx.origin and msg.sender: The proposal comment // please note the usage of tx.origin does not seem to match the code below it (same as in (1)). Tx.origin is not compatible with contracts created by contracts being revived (e.g. multisigs that generically forward call data). We believe a discussion of this choice should be a part of the proposal.
  • Light clients: In this proposal, users can deploy code generating arbitrary logs at dead contract addresses. Especially in Proposal D, any user can generate any log event for any destructed address enabling proxying. In particular, this can allow them to fool light clients and applications that rely on logging code being consistent with the audited contracts they deploy. This is likely to pose a security threat to a wide range of applications.
  • Inconsistent delays: Delays are used in Proposal B, but not C or D. It is unclear to us whether these were intentionally omitted, because some problems they would help mitigate exist in all three proposals.
  • msg.sender “spoofing”: Revived contracts will carry the same msg.sender as their previously destructed counterparts. Users can no longer rely on the code of a contract staying consistent across calls to external systems that use msg.sender for authentication (like ERC20 functionality). Two examples of this are provided in the next section.
While the proposed mechanisms (“create contract proxy at any nonce for this address”) would allow victims of the second Parity multisig hack to recover their losses, it isn’t clear that they merit adoption in their own right. A proposal for a general mechanism should be convincing on its own, ignoring the amount of funds from a single hack that could be restored if the proposal were adopted.

Proposals C and D in particular would introduce significant additional technical complexity, and make the already difficult problem of reasoning about interactions between smart contracts even harder.

Vulnerable Examples
Here are a few concrete examples of contracts which may be vulnerable to a few of the problems we discussed above.

ERC20 Exchange
Bob’s full-featured exchange contract is responsible for swapping tokens on a blockchain. It’s tied to Bob’s (centralized) web UI, so it has a hard-coded self-destruct function that only Bob can call, recovering funds in case the contract is somehow compromised or needs to be upgraded. The contract was created by a temporary key used by Bob’s employee, Dave, who after all was just deploying a no-owner contract to the blockchain.

Sensing an incoming hack, Bob self-destructs his contract and reclaims all the users’ funds. Unfortunately, some hard-coded client contracts and exchanges still send funds to Bob’s contract, leaving them stuck there for Dave to steal (if he saved the key).

Town Crier
Town Crier is an oracle service in production on Ethereum today that uses Intel's SGX. Like all good smart contract citizens, a Town Crier-like system can include self-destruct capabilities, where IC3 can destruct a contract in the event of an SGX compromise or failure. In the Town Crier security model, you are trusting IC3 for availability but not integrity: when you get answers to your queries, you know they are right. The self destruct provides no security risk in the old EVM model, as you trust IC3 for availability regardless. The current Town Crier contract implements a similar permanent kill switch.

Such a system could also send all its query responses through a proxy contract that checks that the transaction really comes from Town Crier, and charges the user a fee once the query is successfully validated. In fact, this is exactly what Town Crier does (line 182).

With a feature like this, it’s easy to see what can go wrong: IC3 maliciously self-destructs the Town Crier contract, installing its own proxy that sends fake data to Town Crier clients. Because these clients use msg.sender for authentication, they assume the real Town Crier contract is sending them a call (an assumption that, until these proposals, was founded on the EVM semantics).

This proposal makes the Town Crier paradigm impossible unless the Town Crier contract can be validated to not have self-destruct functionality. This defeats the purpose of self-destruct as a cleanup incentive, and forces all security-critical contracts to enforce and audit the absence of a self-destruct, a non-trivial task, especially if delegatecalls are used.

In general, making contract code mutable breaks the use of msg.sender for code authentication and validation, a common use-pattern in both oracles and ERC20 contracts (where, for example, users transfer tokens to an address they know is a multisig to secure them).
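The failure mode can be sketched abstractly. The following toy Python model (our own illustration; the names and structure are hypothetical, not EVM code) shows why authenticating a caller by address is only sound while the code behind an address cannot change:

```python
# Toy model of address-based code authentication. Hypothetical names;
# this illustrates the invariant, not actual EVM semantics.

class Chain:
    def __init__(self):
        self.code = {}                 # address -> "contract code" (a callable)

    def deploy(self, addr, code):
        assert addr not in self.code   # can only deploy to an empty address
        self.code[addr] = code

    def selfdestruct(self, addr):
        del self.code[addr]            # the address becomes empty again

    def call(self, addr):
        return self.code[addr]()

TRUSTED = "0xoracle"

def client_accepts(chain, sender):
    # The client trusts any message whose sender is the well-known address.
    # This is sound only if the code at TRUSTED can never change.
    return chain.call(sender) if sender == TRUSTED else None

chain = Chain()
chain.deploy(TRUSTED, lambda: "correct answer")
print(client_accepts(chain, TRUSTED))   # → correct answer

# Under a revival proposal, the creator can destroy and re-deploy
# different code at the same address; the client's check still passes.
chain.selfdestruct(TRUSTED)
chain.deploy(TRUSTED, lambda: "forged answer")
print(client_accepts(chain, TRUSTED))   # → forged answer
```

Before the proposals, the second `deploy` would be impossible: an address, once emptied, could never hold code again.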

Bad Miner
Alice, Bob, and Carol are starting an e-cats based venture together, and have decided to open a Kitty-Backed-Token (KBT), and collect cats together in their contract. Unfortunately, a bad apple hacks their kitty tokens and begins inflating his own supply, withdrawing a higher and higher share of kitties. Luckily, the users of KittyToken can vote to self-destruct the contract before it is too late, allowing its original deployers Alice, Bob, and Carol to collect the cats and save them from certain death.

The Ethereum network has implemented Parity Proposal D, which allows users to set their own proxies for the contract. As soon as the period of time opens to create forwarders, a miner named Dave creates his own proxy and inserts it first into his block, transferring out all the rare kitties in the contract to himself. Dave need not own any KBT to do this, since any user can set their own resolver and essentially spoof their msg.sender as being the new zombie contract, the DeadKittyToken (DKT).

These kitties sit on external contracts, and again have the flaw of using msg.sender for authentication of transfers. Users assume this is OK, because msg.sender can only be the smart contract they audited, but the sneaky code change allows the evil Dave to send all the kitties to 0xdead.

Our Suggestions
A number of steps should be taken to resolve this anti-pattern for future contracts, as well as to potentially deal with previously lost funds. We recommend an alternative set of actions:

Separate funds-recovery for previously vulnerable contracts from changes that need to happen to both the EVM and its tools to avoid similar losses in future contracts, using a clean-slate approach to design the latter. EIP86 seems like the most studied candidate for introducing a redeployment mechanism to Ethereum. We recommend a modification to Solidity’s Security Considerations to cover self-destruct based antipatterns, and a static warning in the Solidity compiler for use of self-destruct, especially when calling a library containing the operation. In our view, this should be sufficient to ensure funds are not lost going forward, specifically in its ability to allow users to demand redeployable contracts.

Offer the community the choice of a fork for past contracts, which should consist of a set of target contracts curated by a community review process over a significant length of time, remediating losses in previously affected contracts. This fork should be offered in the same manner as the DAO fork, and should not be officially supported by client developers or the Ethereum Foundation (yes, including Parity).

The anti-patterns leading to these funds losses are not a severe and general enough issue to require breaking changes to established EVM behavior, and should be handled through changes to tooling and potential case-by-case remediation for contracts before these changes occurred (we propose this as an option, but do not provide an opinion on whether to recover the funds in this article).

We do not believe the funds loss of recent self-destruct vulnerabilities to be general enough to require a general solution. Such a general solution can be harmful, both in the practical sense of allowing potential future funds loss vectors and in the theoretical sense of setting poor precedents that have the potential to impede the process of reliable software development on Ethereum.

We advocate for separating funds-recovery for past vulnerable contracts, which should be discussed as potential contract-specific forks on their own merits, from changes that need to happen to both the EVM and its tools to avoid the vulnerability of future contracts.

Sincere thanks to Ari Juels and Emin Gün Sirer for reading and providing comments on earlier versions of this post.
Who Has Your Back in Crypto?

Hacking Distributed
Who Has Your Back in Crypto?


I came across a Twitter poll on which entities have their interests and priorities most closely aligned with Bitcoin users. The results, if they are to be believed, indicate an enormous misunderstanding, or else they betray the result of a successful disinformation campaign. To wit, more than 60% of the respondents believe that the devs have users' interests at heart. Only 23% trust businesses, and a mere 15% say the miners.

This is completely wrong.

In general, open-source (OSS) developers, especially second generation developers who were not present at the inception of the project, have skewed interests that are at odds with those of the users. Depending on your investment thesis, either the miners or businesses have their economic interest best aligned with users. Let's discuss why.

It is incredibly common and ordinary for second generation developers to add gratuitous complexity and bloat to a project. There are two reasons that compel them to do this.

Wanna Be Famous

One reason is to simply leave their unique mark. How else would anyone know that the developers are any good if there isn't a unique new vision? What new line do the developers get to put on their resumes, or which files do they point to in github on subsequent job interviews? "Added Schnorr signatures" sounds far more impressive than "I responded to bug reports and refactored a 5000-line main.cpp file" no matter how useless and cumbersome Schnorr signatures may be to use in practice, and how badly a refactoring was needed [1].

This means that projects often evolve by incorporating vanity features, typically by force of pure ego. Your typical primadonna dev will want to add a dessert table to the banquet; good luck getting them to slice the bread. Especially when there is turnover in a team, the next generation often changes the project aesthetic.

A well-known example and ongoing fail-mobile is the systemd project, where a Linux developer named Poettering is trying to import into Linux all of Windows' problems. Like PulseAudio before it, systemd cites legitimate areas where Linux needs improving, but then breaks away from the underlying Unix aesthetic in every possible way. Poettering and his team can't be satisfied with incremental fixes that preserve the original vision -- it is essential that they redo things in their completely unique way, which totally isn't a bug, except I couldn't hear you when we had PulseAudio and I can't parse my logs now, can't install an update without rebooting my machine like a Windows pleb, and sometimes can't start up my system and it's totally not a bug.

Wanna Be Rich

The second reason is to make themselves indispensable, by increasing the complexity of the OSS project. Many open source developers are uncredentialed, young people with few other accomplishments and job prospects, looking to break into a lucrative industry. A complex code base immediately makes them an expert because they know all its warts and they know where all the skeletons are buried in the code. That's because they put them there. This immediately guarantees a consulting stream. You want to add a tweak? Well, you can't, you need to hire someone who understands the mess.

The best example for this is Asterisk [2]. An OSS project so complex, no one can do anything other than what's explicitly in the tutorials, and even then, it requires magic incantations and blood sacrifice. For those who do not know, Asterisk is a system for building phone management systems. You would think that audio management would involve the composition of nice, modular units, with uniform interfaces and clean configuration files. You'd be wrong. When I last looked, there were no abstractions in Asterisk. Things just barely worked for the few use cases, and only if you did everything the way you were prescribed. If you went off script, everything would break down. "If you do A, then B, then X, then S, you'll get the effect you want, because X relies on B's setup and leaves the state ..." you get the point. The people who have mastered this completely useless tangle of constraints ensure a steady income stream and enjoy a standing within their own community of people who do not know better of being an "expert." It's like being an expert at naming things, it sounds like a discipline, almost like science, but it's all just a bunch of does-not-matter in the end.

It is the rare person who can create something, stand behind it, put in the grueling hours to respond to bugs and errors, and then see others get rich off of it, without any trace of wanting to participate in the action. So if the codebase does not provide a direct way of compensating the developers, rest assured that the developers will find a way.

These motivations are short-sighted and self-limiting. They drive away new devs from entering the space and strangle the project. The behaviors above are rewarded in the short-term, but spell the convalescence phase for a project.

What About Cryptocurrencies?

So who has your best interests at heart when it comes to cryptocurrencies? Is it the miners, the businesses, or the developers?

If you hold coins, your interests are aligned with those who hold massive amounts of coins. That's not developers. It's most definitely not second-generation developers who were not there on day one when coins were cheap.

If you believe the currency will gain value by expanding its community, you are aligned with those who desperately want the economy to expand. That's also not developers.

Miners hold massive amounts of coins. Businesses want a bigger economy, more users and fast coin turnover.

In contrast, developers have complex games they play. Sure, they want the coin value to grow, but not drastically more than your cousin Joe that you somehow convinced to go into your favorite coin with you. If they weren't there for the pre-mine, they wouldn't even have that many coins. While we all hate pre-mined coins, they do provide the right incentives for the long-term success of the coin.

Another way to analyze the situation is to look at the potential losses to be incurred by the different entities. A miner has 100s of millions of dollars' worth of hardware they have committed to the future success of the currency. A typical startup will have a few to tens of millions at risk. Collectively, VCs have poured more than a billion into the cryptocurrency space and they want to see it expand tenfold for returns.

In contrast, a typical second-generation dev has invested just their own time, which may have been partially subsidized by a firm, and collected an average number of coins [3]. It's often unclear that they would have had better prospects elsewhere. They rage-quit and walk away from projects all the time. The worst that can happen to many devs, whose visions are flawed and whose bets turn out to completely tank a project, is to have to switch IRC channels, clone a different but similar git repository, and muzzle their toxicity as they integrate themselves into a different social hierarchy.


When evaluating statements from miners, businesses and developers, cast your lot accordingly. No one likes corporations; many large and monopolistic ones tend to eke out profits at the expense of their users. Miners, well, they tend to be predominantly Chinese these days due to the abundance of excess hydro power over there, so, if someone is the sieg-heilin', statue-lovin', hitler-would-have-been-alright-if-he-had-obtained-marching-permits type [4], it's clear what one would think. And developers have the benefit of actually having a face, and for some, a 7x24 social media presence. And the thing about troll-backed deceptive narratives is that it all sounds so good, and so popular -- all those likes and upvotes from those nameless trolls in those censored forums must indicate something, right?

Yet the underlying forces are precisely the opposite of how things might seem on a naive, superficial examination.

Of course, in an ideal world, people would evaluate proposals from a scientific perspective, instead of casting their lot with a particular tribe. And the ecosystem would not have distinct roles where people with different roles have goals in opposition to others. Whoever can invent that coin, whoever can build a community based on reasoned, civil, scientific discussion, is sure to make a killing.
    [1]    Bitcoin's main.cpp file used to be 5000 lines long, and quixotically, remained so for many years. Schnorr signatures are nice, there is nothing wrong with them, and we all need them as much as we need a second prehensile tail. "Wait, we don't even have a first one," you might say. That would be the point.
    [2]    I was an avid Asterisk user for multiple years, thanks to a high pain tolerance that allows me to also work on cryptocurrencies. While it was fun to automatically screen telemarketers, to redirect calls to my nearest phone, and to play hold muzak for people who called the house, the underlying software was nothing but bloat, with no attention paid to clean interfaces and repurposable components.
    [3]    On average, people are average. One could claim that developers would more consistently buy in, comparable to more fervent currency fans, but one could also claim that they would want to diversify their risk as well. From my own personal observation, many crypto investors have personal wealth that far exceeds that of early developers.
    [4]    He did, in fact, have a permit.
To Sink Frontrunners, Send in the Submarines

Hacking Distributed
 last edited: Sun, 27 Aug 2017 22:01:00 -0700  
To Sink Frontrunners, Send in the Submarines

The problem: Frontrunning
Miner frontrunning is a fundamental problem in blockchain-based markets in which miners reorder, censor, and/or insert their own transactions to directly profit from markets running on blockchain economic mechanisms. Miner frontrunning takes advantage of the responsibility of miners in a blockchain system to order transactions, in an attack described in great detail by Martin Swende in a blog post, which we highly recommend for background on this important issue.

Frontrunning is not strictly theoretical: in practice, frontrunning-like tactics have been observed during large ICO releases, with increasingly sophisticated attacks anticipated as the financial incentives for gaming high-profile contracts increase and attract more sophisticated attackers.

As described in a previous article on frontrunning, "any scheme that a) provides full information to miners b) doesn't include nondeterminism and c) is vulnerable to ordering dependencies is gameable."

In this article, we will examine a potential mitigating strategy for frontrunning through the lens of a fair sealed/closed-bid auction mechanism that resists miner frontrunning. This strategy can be used for fair markets, fair ICOs, fair ENS-style auctions, and much more.

We stumbled upon this problem during the creation of our upcoming billion-dollar ICO token launch, HaxorBux (HXR).

Our mitigating strategy is an idea that we call a submarine send. Submarine sends aim at a strong confidentiality property. Rather than just concealing transaction amounts—which isn’t sufficient to prevent frontrunning, as we’ll show—submarine sends conceal the very existence of a transaction. Of course, a permanently concealed transaction isn’t very useful. Submarine sends thus also permit a transaction to be surfaced by the sender at any desired time in the future—thus the term “submarine”.

Submarines: They can’t frontrun you if they can’t find you

We have published a proof-of-concept implementation on github (still lacking some features like the Merkle-Patricia verification described later in this post). We encourage contributions and comments in the issues section there!

Why it’s hard to prevent frontrunning
A folklore method for concealing transaction values—used for everything from randomness generation to (non-blockchain) sealed-bid auctions—is called commit / reveal. The idea is simple. In a first protocol phase, every user Ui submits a cryptographic commitment Ci = commit(vali) to the value vali representing her input to the protocol, e.g., her bid in an auction. After all inputs have been gathered, in a second phase, every player reveals vali by opening her commitment Ci. During the first, commitment phase, no user can see any other user’s bid / value vali, and thus cannot front-run the second phase.
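As a concrete sketch, here is hash-based commit / reveal in Python, using SHA-256 with a random blinding nonce. The construction is the standard folklore one, but the specific parameter choices (hash function, nonce length, value encoding) are ours:

```python
import hashlib
import secrets

def commit(value: int):
    """Commit phase: publish only the digest; keep (value, nonce) secret."""
    nonce = secrets.token_bytes(32)   # blinds the value so it can't be guessed
    digest = hashlib.sha256(nonce + value.to_bytes(32, "big")).digest()
    return digest, nonce

def open_commitment(digest: bytes, value: int, nonce: bytes) -> bool:
    """Reveal phase: anyone can check that (value, nonce) opens the digest."""
    return hashlib.sha256(nonce + value.to_bytes(32, "big")).digest() == digest

c, n = commit(1000)                    # bid of 1000, hidden during phase one
assert open_commitment(c, 1000, n)     # honest reveal verifies
assert not open_commitment(c, 999, n)  # binding: can't later claim a different bid
```

The nonce is what makes the scheme hiding: without it, an observer could simply hash every plausible bid and compare digests.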

This is all well and good, but vali is a value, not actual money. So what happens if a user wins a sealed-bid auction conducted this way? How do we ensure that she actually pays the amount val that she bid?

In a decentralized system, where users may be untrustworthy, val is a more or less worthless IOU. The only way to ensure payment is for all users actually to commit $val, i.e., commit the money represented in their bid. In Ethereum, though, there is no (simple [1]) way to conceal the amount of Ether in an ordinary send. So when P sends $val as a commitment, she reveals her bid amount. That brings us back to square one. [2]

Suppose instead of Ethereum, we were using a hypothetical system that actually concealed transaction amounts and even the sender identity (but not whether a contract receives a transaction). Call it ZEthereum. Then, of course, we’d solve the problem. Front running isn’t possible in an auction if you don’t know your competitors’ transaction amounts. Right?

Unfortunately, even concealed transaction amounts don’t do the trick, as a simple example shows.

Example 1 (Transaction Existence): Tweedledum is bidding on a rare military helmet in an auction administered through a ZEthereum smart contract. He knows that there is only one other possible bidder, his brother Tweedledee, who loves the helmet, but has at most $1000 available to bid for it. If Tweedledum learns of the existence of a second bid, his strategy is clear: He should bid $1000.01. Otherwise, he can win with a bid of $0.01, far below the real value of the helmet. Even though bid amounts are concealed, Tweedledum can game the auction by learning whether or not a bid exists.

Tweedledee cheated of helmet by Tweedledum’s frontrunning

While this example is contrived, there are real systems in which the existence and also the timing of bids can leak information. For instance, in an ICO, it’s easy to imagine even naive algorithms estimating interest in a token and altering their bid based on contract buy-in statistics.

Send in the submarines
Our solution, a submarine send, is a poor-man’s solution to the problem of concealing transaction amounts and existence for a target smart contract "Contract". It doesn’t conceal transaction amounts as strongly as our hypothetical ZEthereum would, and it doesn’t definitively hide the existence of transactions. But it provides what we believe to be good enough guarantees in many cases.

The key idea is to conceal sends to Contract among other, unrelated and indistinguishable transactions.

A submarine send embeds a real transaction among a collection of cover transactions, thus achieving a form of k-anonymity. Put another way, the submarine transaction sits in an anonymity set consisting of k cover transactions. While not as strong as notions such as differential privacy or full concealment based on cryptographic hardness assumptions, k-anonymity has proven useful in many practical settings. And some low-cost enhancements can strengthen our scheme’s properties.

The basic format of a submarine send is simple: It is a send of $val from an address addrP, where P is the player interacting with Contract, to a fresh address addrX. The trick is to structure address addrX such that it has two properties: (1) addrX looks random in the view of potential adversaries and (2) $val is committed to Contract in a manner that can be proven cryptographically.

Let’s illustrate with an example, showing how a system that is vulnerable to frontrunning can be rendered more secure by use of submarine sends.

Example 2 (Ash ICO): Suppose that smart contract AshContract implements sales of a new type of token called an Ash. Purchasers burn Ether to obtain tokens at a market-driven Ether-to-Ash exchange rate.

It’s easy to see that AshContract is vulnerable to frontrunning. A large buy transaction will substantially raise the market price of Ash. Thus a miner that observes such a transaction Trans can profit by squeezing in her own buy transaction before Trans and selling afterward. We can remedy this problem by enhancing AshContract as follows:

Example 2 (Ash ICO) continued: In a commit phase, a player P with address addrP makes a submarine send via transaction Trans as follows. P sends $val to address addrX for public key X, where X = H(addrP, SCN, TKEY), where SCN is a contract-specific identifier, TKEY is a transaction-specific key / witness (e.g., a randomly selected 256-bit value), and H is Ethereum-SHA-3.

In a reveal phase, to prove that she has burned $val, P sends TKEY to AshContract. AshContract then: (1) Verifies TKEY is a fresh key; (2) Computes X = H(addrP, SCN, TKEY) and addrX; and (3) Checks addrX.balance == $val. Upon successful verification, AshContract awards Ash tokens to P.
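The address derivation and the contract's reveal-phase check can be sketched as follows. We use Python's hashlib sha3_256 as a stand-in for Ethereum's Keccak-256 (they pad differently), and the 20-byte truncation only approximates Ethereum's address rules; the on-chain balance check is not modeled:

```python
import hashlib
import os

def H(*parts: bytes) -> bytes:
    # Stand-in for Ethereum's Keccak-256; hashlib's sha3_256 is NIST SHA-3,
    # which pads differently, but the structure of the scheme is identical.
    return hashlib.sha3_256(b"".join(parts)).digest()

def commit_address(addr_p: bytes, scn: bytes, tkey: bytes) -> bytes:
    X = H(addr_p, scn, tkey)   # "public key" with no known private key
    return H(X)[-20:]          # Ethereum-style address: last 20 bytes of a hash

# Commit phase: P picks a random TKEY and sends $val to addr_x.
addr_p, scn, tkey = b"\x01" * 20, b"AshContract-v1", os.urandom(32)
addr_x = commit_address(addr_p, scn, tkey)

# Reveal phase: AshContract recomputes addr_x from (addrP, SCN, TKEY) and
# checks addr_x.balance == $val on-chain (not modeled here).
assert commit_address(addr_p, scn, tkey) == addr_x
assert commit_address(addr_p, scn, os.urandom(32)) != addr_x
```

Until TKEY is revealed, addr_x is indistinguishable from a random fresh address; once it is revealed, anyone can recompute addr_x and verify the burn.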

Observe that during the commit phase, X and addrX are computed from an as-yet unrevealed key (namely TKEY). Thus the transaction Trans is indistinguishable from an ordinary send to a fresh address. [3] Assuming that many other such ordinary sends happen during the bidding period, an adversary cannot identify Trans as a purchase of Ash.

During the reveal phase, P proves to AshContract that she burned $val to purchase Ash. [4] Given the way public key X is computed, it is infeasible to compute a corresponding valid private key. Thus money sent to addrX is unrecoverable, i.e., burned.

Of course, there are cases in which we don’t want to burn the committed currency. As we now show, it is possible to implement submarine sends in which $val is recoverable by AshContract after the reveal phase.

The two phases of a submarine send

Submarine sends via EIP-86
It is possible to implement cheap and simple submarine sends in which $val can be recovered. For this purpose, we rely on a new feature introduced in EIP-86. EIP-86 was scheduled to go live in the upcoming Metropolis hard-fork, but unfortunately its deployment has been postponed for now. We detail a temporary, more expensive option for implementing submarine sends in the blog appendix.

The crucial change in EIP-86 is the introduction of a new CREATE2 opcode to the Ethereum Virtual Machine. Like the already existing CREATE opcode, the new opcode will also create new smart contracts on the blockchain. However, unlike CREATE, CREATE2 will compute the address of the newly created smart contract C as H(addrCreator, salt, codeC), where addrCreator is the address of the contract’s creator, salt is a 256-bit salt value chosen by the creator, and codeC is the EVM byte code of C’s initcode.
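The key property is that the contract address is a deterministic function of (addrCreator, salt, codeC), so it can be predicted before any contract exists at it. A sketch of that derivation using the formula above (the final CREATE2 encoding differs in details, e.g., a 0xff prefix and hashing of the initcode; sha3_256 again stands in for Keccak-256):

```python
import hashlib

def H(*parts: bytes) -> bytes:
    # sha3_256 as a stand-in for Keccak-256.
    return hashlib.sha3_256(b"".join(parts)).digest()

def create2_address(creator: bytes, salt: bytes, initcode: bytes) -> bytes:
    # H(addrCreator, salt, codeC), truncated to a 20-byte address.
    return H(creator, salt, initcode)[-20:]

creator, salt, initcode = b"\x02" * 20, b"\x00" * 32, b"forwarder initcode"
a = create2_address(creator, salt, initcode)
# The address is predictable before any contract exists there, so funds can
# be sent to it first and the contract instantiated later; a different salt
# gives an unrelated-looking address.
assert a == create2_address(creator, salt, initcode)
assert a != create2_address(creator, b"\x01" * 32, initcode)
```

This "send first, deploy later" property is exactly what lets the commit-phase transaction look like an ordinary send to a fresh address.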

To construct submarine sends, we combine this new way of computing contract addresses with a very simple smart contract which we call Forwarder. Forwarder performs one function: Upon receiving a message-call, it checks whether the message-call was sent by Contract. If so, Forwarder sends its entire Ether balance to Contract. Otherwise, Forwarder does nothing. Here is a simple implementation of Forwarder written in Solidity:

contract Forwarder {
    // Hardcoded address of the target contract.
    address constant addrContract = 0x123;

    // Fallback: forward the entire balance, but only when Contract asks.
    function () payable {
        if (msg.sender == addrContract)
            addrContract.transfer(this.balance);
    }
}
To save gas, we can also implement Forwarder directly in EVM bytecode, reducing the size of the Forwarder contract by ~ 75%.

Let’s put the pieces together and examine how the commit and reveal phases work.
  • Commit: To commit, P computes the address A := H(Contract’s address || Data || initcode(Forwarder)), where Data contains any additional required information about P’s bid as well as a fresh nonce. The bidder P then sends $val to A. At this point in time, A is a fresh address which has never been observed on the network.
  • Reveal: Upon P’s revelation of Data, Contract verifies the freshness of the nonce contained in Data, computes A as described above and checks that A.balance == $val. Contract then instantiates Forwarder at address A using the CREATE2 opcode. Next, Contract message-calls into this Forwarder contract which in turn promptly transfers $val to Contract.
While EIP-86 offers the best vehicle for implementation of submarine sends, as noted above, it is nonetheless possible to implement them in Ethereum today. For details, see the blog appendix (“Submarine sends, today”).

The astute reader might notice that an important step is still missing in the above scheme: Contract does not check that the transaction that sent $val to A actually happened during the commit phase! This is crucial, as otherwise we face a frontrunning issue again: Upon seeing a reveal of a submarine send, a miner could compute A, observe A.balance, and include his own commit and reveal transactions after the commitment phase has already ended.

So how should we go about verifying the timeliness of the transaction to A? The conceptually simplest solution is to verify this off-chain, e.g. using Etherscan, but that sort of defeats the purpose of smart contracts, right?

A more technically involved solution would be for P to reveal, in addition to Data, a proof that a transaction of $val to A was made during the commit phase. This proof can then be verified by Contract. The proof would include a block number, all the transaction’s attributes (transaction index, nonce, to, gasprice, value, etc.), a Merkle-Patricia proof of inclusion in the transaction trie, as well as a block header, and would convince Contract (1) that the block was mined during the commit phase; and (2) that the relevant transaction is included in that block. However, generating and verifying such a proof for every submarine send is cumbersome and expensive.

The solution we propose follows an alternative “optimistic” approach: Instead of having every bidder prove to Contract that they followed the protocol correctly, we allow parties to prove to Contract that some bidder cheated. As the cheating party forfeits its bid as a result, this solution should incentivize correct behavior by the bidding parties. Bidders still reveal all transaction attributes and the corresponding block header to Contract, but they do not include a proof (this is the expensive part, after all). After the reveal phase is over, Contract enters a short verification phase in which parties may submit proofs that some other party’s reveal is incorrect. Analogously to the above proof of correct behavior, a proof of cheating consists of showing that the transaction attributes and block header revealed by a bidder are incorrect: i.e., that the given block is not in the blockchain, or that it does not contain the purported transaction. [5] If the proof verifies, Contract collects the bid from the Forwarder contract and sends it to the loyal watchdog as a reward. As verifying other bidders’ reveals off-chain is very simple, honest bidders have a strong incentive to check for and report a cheating party that outbid them (as long as honest bids are larger than the gas cost of proving wrongdoing).

We believe this solution offers a good compromise: Although Contract still has to implement the rather tedious procedure of verifying the (non-) inclusion of a transaction in Ethereum’s blockchain, this procedure should only ever be called if a party actually reveals an incorrect bid. In such a case, the misbehaving party’s bid is forfeited and used to offload the gas costs of the party that revealed the wrongdoing.

Exactly what constitutes a cover transaction?
If we were to make a submarine send today, how hard would it be for a frontrunning miner to detect it? The answer to this question crucially depends on the size of the anonymity set we are trying to hide our submarine in. As a reminder, this anonymity set comprises other “normal” transactions in the Ethereum network that look the same as a submarine send.

Suppose our commit window spans blocks Bstart to Bend. During that window, a frontrunner will be looking for transactions to some address A that satisfy the following properties:
  • Address A had never been observed in the network prior to this commit window.
  • Address A is not involved in any other transactions (internal or external) during the commit window. [6]
  • The transaction to address A carries a nonzero amount of ether.
Such addresses A, which we refer to as “fresh addresses”, are candidates for being the destination of a submarine send and thus make up our anonymity set. Note that it is irrelevant whether these fresh addresses are involved in further transactions after the end of the commit window (i.e., in blocks after Bend). Indeed, although this could reveal that some addresses were actually not used as part of a submarine send (thus effectively reducing the anonymity set), at that point in time the danger of a frontrun is moot as the commit phase has passed.
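The three freshness criteria above can be expressed as a simple filter. This is a Python sketch over hypothetical transaction records; the field names are illustrative, not a real node API:

```python
from dataclasses import dataclass

@dataclass
class Tx:
    block: int
    to: str
    value: int  # wei

def fresh_address_candidates(txs, seen_before, start, end):
    """Addresses satisfying the three criteria for the anonymity set."""
    by_addr = {}
    for t in txs:
        if start <= t.block <= end:
            by_addr.setdefault(t.to, []).append(t)
    return {a for a, ts in by_addr.items()
            if a not in seen_before   # never observed before the commit window
            and len(ts) == 1          # no other activity during the window
            and ts[0].value > 0}      # carries a nonzero amount of ether

txs = [Tx(4007901, "0xaaa", 5), Tx(4007902, "0xbbb", 0),
       Tx(4007903, "0xccc", 1), Tx(4007904, "0xccc", 2),
       Tx(4007905, "0xddd", 3)]
# 0xbbb carries zero value, 0xccc sees two transactions, 0xddd was seen before.
assert fresh_address_candidates(txs, {"0xddd"}, 4007900, 4008000) == {"0xaaa"}
```

A frontrunner runs essentially this filter to narrow down candidates; the submarine sender wants the resulting set to be as large as possible.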

The choice of the commit window (the number of blocks from Bstart to Bend) should thus be primarily governed by the size of the anonymity set one might expect over that period.

Empirical analysis: Anonymity-set size
We performed a simple experiment to determine the effective size of the anonymity set for submarine sends. We selected a commit window of roughly 30 minutes between blocks 4007900 and 4008000. In that period, we recorded 7109 transactions (excluding internal transactions), of which 661 satisfy the above properties and thus make up an anonymity set for submarine sends.

As an example, consider this address, which received 5 ETH in block 4007963 and has not been involved in any other transaction since. [7]

The graph below shows how the size of the anonymity set grows throughout the commit phase.

Size of anonymity set over time

If we consider a larger commit window of a little over two hours, between blocks 4007600 and 4008000, we find that the anonymity set for submarine sends consists of 1534 “fresh” addresses.

Transaction shaping
Of course, a frontrunning miner might try to use information other than the freshness of the destination address to de-anonymize a submarine send. In particular, perhaps the miner knows that the transaction she wants to frontrun will contain a certain amount of Ether. In that case, it is important that the submarine send “blends in” with other transactions of similar value. Thus effective cover transactions must also contain transaction amounts that are statistically similar to that in the submarine send.

Fortunately, we found that the transactions that make up the anonymity set span a diverse range of Ether amounts, with the majority of values between 0.01 and 100 ETH and an average transaction value of about 6 ETH. Below we show the distribution of Ether values for transactions to fresh addresses between blocks 4007900 and 4008000.

Transaction value histogram, blocks 4007900 to 4008000

Furthermore, there are a few ways to perform what we call transaction shaping in order to help prevent transaction amounts from revealing submarine sends:
  • Refunds: Contracts can refund full or partial submarine send amounts, providing limited further cover similar to the ENS amount hiding mechanism.
  • Flexible send amounts: In some cases, a sender has some flexibility in the amount of money she transmits in a submarine send. Refunds create such flexibility, but it can exist in other settings. For example, users may be willing to randomly vary their purchase amounts in AshContract to help conceal their transactions.
  • Fragmentation: Senders can split submarine send initial deposits into multiple transactions (potentially from a range of addresses), providing further noise for heuristics attempting to detect these sends. Of course, given the lack of transaction-graph confidentiality in Ethereum, there is a risk that multiple transactions can be traced to the same user. This risk can be mitigated by means of mixing.
  • Synthetic cover traffic: Senders can create their own cover traffic at minimal cost by sending money to fresh addresses they control.
Frontrunning and related problems are pervasive in Ethereum, with no good available remedies. Submarine sends, while imperfect, offer a powerful and practical solution. Fortunately, there is already a fairly high volume of low-to-medium-value cover transactions in Ethereum today, and we can expect it to grow as the system evolves. Our proof-of-concept works on the Ethereum network today, though it costs more gas than our ideal scheme. For submarine sends to be truly practical, all we await is the adoption of EIP-86.

Submarine sends, today
If you don’t want to wait until EIP-86 is adopted, we have good news for you. It is already possible to use submarine sends in Ethereum, although doing so is more complex and costly than it will be once EIP-86 lands. To rid ourselves of the dependency on EIP-86, we need another way to construct fresh addresses that are later inhabited by a contract. Since we cannot use CREATE2, we will have to make do with CREATE.

Whenever a contract is created with CREATE, its address is computed as H(addrCreator, nonceCreator), where addrCreator is the address of the contract’s creator and nonceCreator counts how many contracts have been created by the contract’s creator. [8] Hence, the creator has some control over nonceCreator: to increment nonceCreator by one, the creator only has to spawn a new contract. Of course, incrementing nonceCreator until it encodes the output of a cryptographic hash function (say 80 bits [9]) is completely out of the question. But we can encode this value in a series of nonces, making the scheme somewhat practical.

As in the EIP-86 example, we also need a Forwarder contract. However, Forwarder isn’t quite as simple as in the EIP-86 example; beyond forwarding funds to Contract, it now also must be able to clone itself, i.e. to CREATE a new instance of itself.

Once again, we have a commit and a reveal phase:
  • Commit: P computes a commitment H’(Data), where Data contains the required information about P’s bid and a fresh nonce. P splits H’(Data) into k distinct 2-bit chunks [10] H’(Data)[1], H’(Data)[2], ..., H’(Data)[k]. The value of each two-bit chunk is interpreted as an integer in [1..4]. P then computes the address A := H(H(... H(Contract's address || H’(Data)[1]) ... || H’(Data)[k-1]) || H’(Data)[k]) and sends $val to A. As before, A is a fresh address which has never been observed on the network.
  • Reveal: Upon P’s revelation of Data, Contract verifies the freshness of the nonce contained in Data, computes A as described above, and checks that A.balance == $val. Contract then orchestrates a chain of contract creations by repeatedly calling clone() on various Forwarder instances until a Forwarder instance appears at address A: For each H’(Data)[i] (1 ≤ i ≤ k), Contract calls the clone() function of the Forwarder instance at address H(... H(Contract's address || H’(Data)[1]) ... || H’(Data)[i-1]) at least H’(Data)[i] times. At the end of this process, a Forwarder instance will have been instantiated at address A, and Contract then withdraws $val from that instance.
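The chunked address derivation can be sketched as follows. This Python sketch models only the address chain, not Ethereum's RLP encoding of (creator, nonce) or the actual contract creations, and sha3_256 again stands in for Keccak-256:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    # sha3_256 as a stand-in for Keccak-256.
    return hashlib.sha3_256(b"".join(parts)).digest()

def two_bit_chunks(digest: bytes, k: int):
    """First k 2-bit chunks of digest, each mapped to an integer in 1..4."""
    bits = int.from_bytes(digest, "big")
    total = len(digest) * 8
    return [((bits >> (total - 2 * (i + 1))) & 0b11) + 1 for i in range(k)]

def submarine_address(contract_addr: bytes, data: bytes, k: int = 40) -> bytes:
    # k = 40 two-bit chunks encode an 80-bit H'(Data), as in the post.
    a = contract_addr
    for chunk in two_bit_chunks(H(data), k):
        # On-chain, each step clones Forwarder until its nonce reaches `chunk`;
        # here we compute only the resulting CREATE-style address.
        a = H(a, bytes([chunk]))[-20:]
    return a

a = submarine_address(b"\x03" * 20, b"bid=7;nonce=42")
assert a == submarine_address(b"\x03" * 20, b"bid=7;nonce=42")
assert a != submarine_address(b"\x03" * 20, b"bid=7;nonce=43")
```

Because each hop in the chain is a hash of the previous address and a chunk value, changing any bit of Data changes A completely, which is what keeps the commit-phase address unlinkable.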
We have created a prototype implementation using 80 bits for H’(Data) in which creating the series of contracts and withdrawing the funds costs ~5 million gas. [11] We believe that optimising this implementation would bring the gas cost down by roughly 50%. At a currently feasible gas price of 4 gwei, this corresponds to 0.02 ETH, or about 3.65 USD.
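The 2-bit chunking choice (footnote [10]) is consistent with a simple expected-cost model, which we sketch here; the paper's full analysis may differ in details. An 80-bit digest split into w-bit chunks needs 80/w creation rounds, and each chunk value is uniform in 1..2^w, so a round averages (2^w + 1)/2 clone() calls:

```python
def expected_creations(bits: int = 80, w: int = 2) -> float:
    # bits/w chunks; each chunk needs (2**w + 1) / 2 clone() calls on average.
    return (bits / w) * (2 ** w + 1) / 2

print(expected_creations(w=1))  # 120.0
print(expected_creations(w=2))  # 100.0 (2-bit is 20% cheaper than 1-bit)
print(expected_creations(w=4))  # 170.0
```

Narrower chunks waste rounds, wider chunks waste clones per round; w = 2 minimizes the product.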

Block stuffing attacks
Of course, our scheme has some limitations. Attacks still exist where an adversary can spam a large volume of high-fee transactions in an attempt to stop players from either committing or revealing, especially late in these periods. The longer the period, the more costly this becomes to the adversary. One mitigation strategy is to make the period long enough that the cost required to fill blocks exceeds the value estimated to be at stake.

Unfortunately, an inherent tradeoff between latency and fairness may always exist in systems that aim to provide fairness guarantees by tackling frontrunning: shorter periods of hiding information from miners leave users more vulnerable to reorganizations and take into account the input of fewer miners.

There are other approaches to addressing front-running that don’t involve concealment of transaction amounts:

(A) Players could commit small, uniformly priced deposits that they forfeit if they fail to pay. But they may still not be incentivized to pay in full.

(B) Players can break up their bids across multiple transactions and reveal, in the reveal phase, that the pieces belong to the same bid. But a lack of transaction-graph confidentiality means that even bids from multiple addresses might be tied to the same user. We propose this approach as an added layer of security for submarine sends, but its limitations need to be made clear.

    [3]    Indeed, in the Random Oracle Model, the ability to identify Trans as a submarine send would imply the ability to perform signature forgery.
    [4]    This example glosses over the details of how party P convinces Contract that the burn of $val actually occurred during the commit phase and not after. We’ll come back to this important issue in the next section.
    [5]    As the Transaction Trie is sorted, showing that a transaction is not in it simply reduces to showing that either there is no transaction with the given index in the trie or that the transaction at the given transaction index has different attributes than the ones given during the reveal.
    [6]    If the address only receives transactions, it could technically still be a candidate for a submarine send as we could always split up a submarine send into multiple transactions to the same destination address.
    [7]    At the time of writing, the most recent block number is 4036411.
    [8]    This isn’t the whole story: If the creator is a contract, nonceCreator counts how many contracts have been created by the creator. Otherwise, nonceCreator counts how many transactions have been initiated by the creator. Furthermore, the inputs to the hash function are rlp-encoded before hashing.
    [9]    Since we don’t require our hash function to be resistant to collision attacks, we don’t have to worry about the birthday paradox and can get away with using a shorter hash. 80 bits should still provide ample defense against pre-image attacks, especially in light of the fact that an attacker only has a rather short amount of time in which finding a pre-image benefits her.
    [10]    Somewhat surprisingly, a theoretical analysis reveals that out of all fixed-width chunking schemes, 2-bit chunking is the cheapest on average. For example, using 2-bit chunks is 20% more efficient than using 1-bit chunks.
    [11]    Since this comes close to the block gas limit, we split the withdrawal process into multiple transactions. Note that it’s relatively cheap to compute A; the high gas cost is caused by the creation of the hundreds of contracts needed for withdrawing funds from A.
The Cost of Decentralization in 0x and EtherDelta


This Tuesday, a decentralized exchange platform called 0x will be holding a $24 million token sale. In the midst of an ICO frenzy that has funneled $1.3 billion into fledgling projects, the 0x sale seems like a drop in the bucket. 0x, though, is at the forefront of a wave of decentralized exchange projects that also includes EtherDelta, BitShares, KyberNetwork, and Omega One.

So what are decentralized exchanges, and why does the world need them?

A centralized cryptocurrency exchange is administered by an entity that represents a single point of failure. Users deposit their funds directly with the exchange, and the exchange assumes responsibility for matching buy and sell orders, which it can do in real time. Many exchanges support the purchase and sale of cryptocurrencies, fiat currencies, and cryptocurrency tokens. Examples of centralized exchanges include Coinbase, Kraken, and ShapeShift—as well as MtGox and Bitfinex. These last two were famously compromised, resulting in the loss of roughly 650,000 BTC and 120,000 BTC respectively, hundreds of millions of dollars at the time. (ShapeShift too was hacked, but a mere $200k or so worth of funds were stolen.)

Therein lies the rub. Centralized exchanges have the serious drawback that they require users to trust the exchange with their money. A fraudulent or compromised exchange can result in theft of users’ funds. There are also other drawbacks to centralized exchanges, such as the vulnerability of the users to frontrunning by the exchange administrator, as we discuss below.

Decentralized exchanges were created specifically to address centralized exchanges’ vulnerability to theft of user funds. In a decentralized exchange, users retain a degree of control of their own funds. They do not send money directly to a wallet that is controlled by a single entity. Instead, trading orders, and thus the release of user funds, are authorized directly by users via digital signatures. In principle, therefore, user funds cannot be stolen.

“In principle” is the operative phrase here. As we show, the ability to sign off on transactions does not equate with real control. Enabling users to control their own funds seems a good thing, but it has the side effect of abandoning the real-time nature of centralized exchanges in favor of slow, on-chain trading. That in turn exposes users of decentralized exchanges to new risks of monetary loss. Consistent with 0x’s apparent design philosophy of community empowerment, there is a further dimension of decentralization in 0x’s governance scheme, which permits token holders to upgrade 0x contracts. This feature unfortunately creates a systemic risk: users’ funds could again be exposed to theft, resulting in some degree of the worst of both worlds.

Any complex new system is likely to suffer from design flaws. Decentralized exchanges have some definite advantages over their centralized counterparts, but also some distinct drawbacks. The design landscape is complex and it is essential to evaluate the merits of individual systems and the risks they pose to users of being cheated or exploited. Some of the flaws of 0x are inherent to decentralized exchanges, while others aren’t, as we now explain.

0x closely resembles EtherDelta, an existing exchange. We therefore compare 0x with EtherDelta to better understand the design choices and likely risks in 0x exchanges.

In this blog post, we present a brief overview of the general design behind today’s decentralized exchanges. Then we discuss three categories of decentralized-exchange design flaws:
  • General flaws affecting all decentralized exchanges, at least within the design space currently explored by the community.
  • 0x-specific flaws introduced by particulars of the 0x architecture.
  • EtherDelta-specific flaws that are minor and don’t affect 0x, but teach us something about the challenges of correctly implementing exchanges.
Finally, we describe experiments we have performed on EtherDelta. These experiments highlight that the risks we describe are real and already emerging in a decentralized exchange with fairly low volume. We conclude with a brief discussion of the decentralized-exchange design space as a whole.

Decentralized exchanges: Brief overview
To support real-time trades, decentralized exchanges such as EtherDelta and 0x adopt essentially the same architecture for public (“broadcast”) orders. Users send their orders to off-chain matching services (called “Relayers” in 0x), which post them in off-chain order books. Such posting takes place in real time—much faster than if orders were posted on a decentralized blockchain.

Any user (called a “Maker” in 0x) can publish any buy or sell request (that this user signs) on the order book of the off-chain service. An order is accompanied by an exact price, i.e., limit orders aren’t supported.

EtherDelta and 0x both attempt to minimize trust in the off-chain matching service by not giving it the power to perform automatic matching of the buy and sell orders. Instead, any other user (called a “Taker” in 0x) trades against a posted order by adding a counterorder and digitally signing and sending the complete transaction, i.e., pair of orders, directly to the blockchain. The transaction is sent to a smart contract that executes the transaction, transferring assets between the buyer and seller (Maker and Taker in 0x). The lifecycle of a transaction is shown below in Figure 1.

EtherDelta and 0x both adopt this general design, but differ in two key ways. 0x allows anyone to stand up an exchange (“Relayer” in 0x), while EtherDelta has a unique one. Additionally, 0x, unlike EtherDelta, has a system of tokens used to pay transaction fees and for system governance.

Figure 1: Lifecycle of a transaction, using 0x terminology. (1) The Maker sends a digitally signed broadcast order (e.g., “Sell 100 TOK for 1 ETH”) to the Relayer, who (2) posts the order to the order book. (3) The Taker finds the order in the order book, digitally signs it, and sends it, together with a counterorder (“Buy 100 TOK for 1 ETH”), to the DEX (“Decentralized EXchange”) smart contract.

General flaws in decentralized exchanges
The general design just described introduces several basic vulnerabilities:

Exposure to arbitrage: The lack of automatic matching permits in-market arbitrage, whereby stale orders are filled to the disadvantage of users unable to quickly cancel their orders in response to market fluctuations. For example, the arbitrageur can execute against a standing pair of orders (sell 1 TOK at 1 ETH and buy 1 TOK at 2 ETH) to make an immediate profit of 1 ETH. Since the only way for users to invalidate their signed orders (that they published on the off-chain service) is by sending an on-chain cancellation transaction that is explicitly processed by the exchange contract, the arbitrageur may pay a high gas fee to miners and win the race against the cancellation transaction. Therefore, users who wish to increase the probability of a successful cancellation may need to attach an excessively high fee that depends on the value of the trade, which makes the exchange platform unattractive to honest users. We show below that this problem isn’t theoretical, but already arises in practice.
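The arbitrage against the crossed pair of orders above works out as follows (a toy calculation that ignores fees and gas):

```python
# A standing crossed pair of orders (prices in ETH per TOK; illustrative only).
sell = {"amount": 1, "price": 1.0}  # someone offers to sell 1 TOK at 1 ETH
buy  = {"amount": 1, "price": 2.0}  # a stale order offers to buy 1 TOK at 2 ETH

# The arbitrageur takes both sides: buy from the seller, sell to the buyer.
profit = buy["amount"] * buy["price"] - sell["amount"] * sell["price"]
print(profit)  # 1.0 (ETH), risk-free as long as the cancellation loses the gas race
```

Any gap between a stale buy price and a fresh sell price is pure profit for whoever lands both fills before the cancellation is mined.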

Vulnerability to miner frontrunning: Order cancellations are a common feature of decentralized exchanges (after all, an exchange with no cancellation ability may not be useful in a volatile market), and their on-chain nature renders these cancellations particularly vulnerable to miner frontrunning; the miner of the next block always has the option to execute cancelled orders with themselves as the counterparty, potentially profiting from such an order. To add insult to injury, the miner even collects gas costs from a user’s failed cancellation. This issue was noted in the Consensys 0x report, and is recognized as a limitation of on-chain cancellations in the community.

Exposure to exchange abuses: Since the off-chain matching service doesn’t perform automatic matching, it is supposed to publish all users’ orders as quickly as possible, resulting in principle in fully transparent behavior. In actual fact, though, the exchange can suppress orders, mounting a denial-of-service attack against users in order to corner a market or censor particular users’ transactions. Worse yet, it can front-run orders. Specifically, it can engage in the same kind of in-market “sandwich” arbitrage described above, especially when high-value trades are requested. The problem is that signed orders flow to the off-chain server first. The server can thus match the trade data with pseudonymous users that it controls. Both suppression and front-running by an exchange are extremely hard to detect.

0x-specific flaws
Decentralized governance: In just two days, 0x will launch the Initial Coin Offering (ICO) of their ZRX token. The token will serve two functions: First, it will allow market participants to pay a Relayer’s fees for listing orders. Second, the token will be used for "decentralized governance" over the evolution of the protocol and the DEX contract holding market participants' assets.

Why a dedicated token should be used for Relayer fees is unclear—after all, one could simply pay Relayers in ETH instead. The use of a token for decentralized governance is a more interesting case.

Unfortunately, the 0x whitepaper does not provide any detailed information on how this governance process will work. Neither does the code in 0x's github repository. Since the governance process appears to be the only good reason for creating the ZRX token, this is all the more disappointing.

The 0x whitepaper does, however, state that non-disruptive protocol updates (i.e., changing the protocol and underlying smart contracts without requiring opt-in by individual users) are an explicit design goal of the governance scheme. This immediately raises questions about the security properties of the governance process. If, for instance, 0x were to use a simple majority voting scheme to approve updates to the DEX (which holds all user assets), an attacker could perform a 51% attack in which she buys more than half of all ZRX tokens and then votes to replace the DEX with a malicious contract that sends all assets to herself. Designing a secure, decentralized governance process will be difficult and involve a multitude of delicate tradeoffs. Once again, decentralization is no panacea and carries a price in terms of complexity and possibly weakened security!

Side deals: 0x's design allows for two kinds of orders, broadcast orders and point-to-point orders. Broadcast orders make use of a Relayer, who broadcasts the Maker's order to any listening Takers who can choose to fill the order by sending a signed message to the DEX. (Figure 1 shows the lifecycle of a broadcast order.) Relayers can charge fees as a reward for their broadcasting services: When an order is filled, the DEX will transfer any fees from the Taker and Maker's accounts to the Relayer's account.

In contrast, point-to-point orders do not make use of Relayers and thus avoid Relayer fees. As their name suggests, point-to-point orders allow two market participants to trade directly with each other by sending signed messages to the DEX.

This leads to a scheme that allows Makers and Takers to evade Relayer fees:
  • The Maker M posts an order O on a Relayer service.
  • A Taker T listening to the Relayer learns of the order and decides she wants to fill it. She contacts the Maker off-chain [1] with a point-to-point order O' corresponding to O: if O offers to sell 1 TOK for 1 ETH, O' offers to buy 1 TOK from M for 1 ETH.
  • Once M receives the order, she sends a single Ethereum transaction to the blockchain. This transaction calls the DEX twice, first cancelling O and then immediately filling O'. Note that thanks to the atomicity of Ethereum transactions, M carries no risk of both O and O' being filled.
Since cancellations are free and O is never filled, the Relayer will not earn any fees. Systematic exploitation of this flaw could lead to a tragedy of the commons, where individual market participants would make it uneconomical to run a Relayer by always evading fees, thereby destroying the "common good" of Relayers.
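The incentive problem in this scheme can be sketched with a toy Python model (the DEX and order structures here are our own invention for illustration, not 0x's actual contract interface):

```python
# Toy model of the Relayer fee-evasion scheme (not 0x's actual API).
from dataclasses import dataclass, field

@dataclass
class DEX:
    relayer_fees_earned: float = 0.0
    cancelled: set = field(default_factory=set)
    filled: set = field(default_factory=set)

    def cancel(self, order_id):
        # Cancellations are free and processed on-chain.
        self.cancelled.add(order_id)

    def fill(self, order_id, relayer_fee):
        if order_id in self.cancelled:
            raise ValueError("order cancelled")
        self.filled.add(order_id)
        self.relayer_fees_earned += relayer_fee

# Honest path: Taker fills the broadcast order O, Relayer earns its fee.
honest = DEX()
honest.fill("O", relayer_fee=0.003)
print(honest.relayer_fees_earned)  # 0.003

# Evasion path: one atomic transaction cancels O, then fills the
# point-to-point order O', which carries no Relayer fee.
evasion = DEX()
evasion.cancel("O")
evasion.fill("O_prime", relayer_fee=0.0)
print(evasion.relayer_fees_earned)  # 0.0
```

The atomicity of the cancel-then-fill transaction is what removes M's risk: either both calls succeed or neither does.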

Side deals are less of a problem in EtherDelta because EtherDelta doesn’t support point-to-point orders and the fees for broadcast orders are hardcoded in the smart contract. However, 0x’s approach is not without advantages: Having a single DEX hold the assets for all point-to-point and broadcast orders and allowing multiple Relayers will likely lead to a more liquid market; furthermore, competition among Relayers may lower the fees that users pay. This tradeoff cannot be easily overcome: EtherDelta could in principle support multiple Relayers by using a separate contract for each of them, but this approach would not allow the liquidity in the order books of the Relayers to be shared.

Maker griefing: Maker griefing is an attack recognized in audits of 0x whereby an order maker moves tokens that are supposed to be involved in an order, causing it to fail in the final on-chain processing stages responsible for moving the funds. If such a failure occurs, the on-chain taker must pay gas fees to attempt execution of an order that never completes or provides any benefit to the taker, an inefficient use of taker time and money.

Repeating this attack on a large scale could potentially waste Taker gas, making Takers incur high order fees, costs, and delays in addition to the transparent Taker fees charged by the exchange. A cartel could place both legitimate and illegitimate orders, sharing information with each other out of band about which orders were legitimate. This would force outsiders to incur penalties, a potentially profitable strategy for a sufficiently powerful cartel. The recommended tooling mitigations do not entirely solve the issue, as they rely on checks of blockchain state, which could potentially change immediately before or even during the release of a new block. The potential for miner involvement as Makers in permissionless distributed markets or miner collusion with these cartels further amplifies these attacks, as miners could both collect the profit from Takers burning gas in griefing attacks and trigger the attacks in previously unseen transactions inserted in blocks before order fulfillment. Such a miner attack would allow no possibility of detection for the recommended tool-based mitigations.

The technological and economic barriers to these attacks or the formation of potential cartels mean these strategies may not surface until decentralized exchanges achieve substantial volume (and thus allow for substantial profit), dangerously providing a false sense of security and a false confidence in on-chain market architectures.

Similarity to EtherDelta: As described in the design of both exchanges, EtherDelta and 0x share a number of similarities. As EtherDelta is a full system that is currently operational and includes a number of in-production smart contracts, it is unclear where a potentially large ICO raise could be allocated beyond the development of equivalent technologies and distributed governance. The 0x roadmap provides some hints of potential strategies for differentiation, and we hope to see the development of advanced and novel R&D improvements accompany the deployment of proposed EtherDelta-like infrastructure.

EtherDelta-specific flaws
Slow cancellation: Although cancellations are processed with low observed latency once the cancellation order is mined on-chain, the need to wait until the next mined block (or later, when blocks are full) imposes a significant barrier to real-time exchange, potentially locking up user funds and enabling profitable miner arbitrage on larger orders through frontrunning.

Slow order processing: During the posting of our test orders on EtherDelta in the experiments we conduct below, we observed a substantial delay required to place orders in the system. This operation is not contingent on any on-chain transaction processing, and it is not clear to us why the system imposes such a delay.

High gas costs for competing transactions: Due to the high latency of EtherDelta’s order book, some Takers may be blind to each others’ orders. This can cause a race condition where multiple Takers compete to fulfill a single order, leading to order failure with some delay and gas costs incurred by all but the winning Takers. This could potentially become problematic on attractive orders as the system increases in size, specifically with the possibility of miner participation and frontrunning.

Experiments with EtherDelta
We experimentally verified some of the flaws listed above. First, we evaluated the total theoretical arbitrage opportunity of 10 major ERC-20 Token/ETH exchanges (BQX, CDT, CVC, EOS, PAY, PLR, PPT, VERI, and XRL, each traded against ETH) on EtherDelta over a period of 24 hours (from 2017-8-11 00:30:00 UTC to 2017-8-12 00:30:00 UTC). Our experiments show that—assuming perfect execution, the current EtherDelta transaction fee of 0.3% of total volume, and a gas price of 20 Gwei—an arbitrageur could have gained 14.71785 ETH in the above period. At current exchange rates ($303.9 / ETH), this corresponds to $4,472.75 / day or $1.6+ million / year. Specifically, we observed 45 arbitrage opportunities during our period of study, with an average arbitrage opportunity of 0.32 ETH. Compared with centralized exchanges, EtherDelta has relatively small volume today. Were its volume to grow substantially without technical changes, arbitrage opportunities would presumably grow to be quite substantial.

Many of these arbitrage opportunities resulted from clear user error, i.e., obvious mispricing of orders. Even without such error, however, we identified approximately 6.7 ETH in arbitrage opportunities over the 24-hour period of study, corresponding to about $2,036.

Having confirmed that arbitrage opportunities exist in practice, we demonstrated experimentally that they are exploitable in practice. We designed a simple trading bot that would monitor EtherDelta's order books for arbitrage opportunities and send orders exploiting them to the blockchain. Our preliminary numbers suggest that out of 12 orders sent, 7 (or 58.3%) succeeded. The cause of failed arbitrage attempts was a failure to post our order fast enough, i.e., someone else hit the order or the maker cancelled the order before our order executed. (We did not seek to exploit all 45 observed arbitrage opportunities because this would have required roughly 80 ETH worth of tokens, which is more than we poor academic researchers possess.)

Assuming that the success rate we observed is representative, we can approximately estimate the expected net daily profit available to arbitrageurs according to the following formula:
Expected profit = (Net profit per transaction * Probability of success - Gas per transaction * (1 - Probability of success)) * Total number of opportunities

Our experiments during the period of study suggest the estimate:
Expected profit = 0.32 ETH * 58.3% * 45 - 0.004 ETH * (1 - 58.3%) * 45 = 8.32 ETH
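For what it's worth, the estimate can be reproduced with a few lines of Python using the numbers above:

```python
# Reproduce the expected-profit estimate from the formula above.
net_profit = 0.32        # ETH: average observed arbitrage opportunity
p_success = 0.583        # observed success rate (7 of 12 attempts)
gas_cost = 0.004         # ETH: gas burned by a failed attempt
n_opportunities = 45     # opportunities observed in 24 hours

expected = (net_profit * p_success
            - gas_cost * (1 - p_success)) * n_opportunities
print(f"{expected:.2f} ETH")  # 8.32 ETH
```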

Or roughly $2,500 per day. While this is not enormous, particularly if competition arises among arbitrageurs, an increase in popularity and therefore volume on decentralized exchanges would of course result in far larger potential profits.

The decentralized-exchange design space
There is a large design space for decentralized exchanges beyond the EtherDelta/0x variety. For example, an alternative is to allow the off-chain matching service to perform automatic matching, and even require each trade to include a signature by the off-chain service. To avoid loss of funds in the case that the off-chain service disappears, users would still be able to withdraw their assets without approval of the off-chain service, but only via a slow process. This design eliminates two of the major problems that we described above: The arbitrageur won’t be able to race against users who have already cancelled their orders, and miners won’t be able to front-run users. Unfortunately, the off-chain matching service will have the same kind of power to perform in-market arbitrage as centralized exchanges. Unlike centralized exchanges, though, it won’t have the power to steal users’ deposits.

Another reason to avoid automatic matching by the off-chain service, however, is that doing so reduces the complexity of the exchange contract's code. Supporting automatic matching would naturally enable functionality such as limit orders, which don't exist in EtherDelta / 0x. (Given the simplicity of the code base for which 0x are raising $24 million, code supporting automatic matching would presumably be worth several hundred million or so...)

Centralized exchanges have serious drawbacks, perhaps most notably exposure of users’ funds to theft. But the wave of creation of decentralized exchanges that place users’ funds in their control does not fully protect users’ funds, and introduces new problems. It is tempting to dismiss the problems we’ve observed in EtherDelta as trivial, but we believe they will grow as decentralized exchanges do. What we’re seeing today is just a harbinger of problems to come should decentralized exchanges sweep over the cryptocurrency landscape. But since the problems that we’ve identified are exacerbated when higher value trades take place, we conjecture that such problems will ultimately limit the popularity of decentralized exchanges.

This all is not to say that decentralized exchanges do not have a place or cannot fill a valuable market niche. We are also not claiming the superiority of centralized exchanges; notably, centralized exchanges can steal all user funds, the strongest form of unfairness. Nonetheless, it is important to remain aware of the tradeoffs made to achieve this decentralization and their potential negative effects on users of a certain exchange architecture. Someday, someone may devise a decentralized exchange architecture that doesn’t suffer from any of the limitations we’ve enumerated. This challenge remains an open problem.

As a final note, we observe that 0x and EtherDelta provide only a trading platform for tokens that circulate on a single blockchain. They don’t support trading BTC for LTC and so on. As our discussion illustrates, achieving even this modest goal with adequate performance and security is a herculean task for decentralized exchanges. Support for cross-chain trading requires far more complex architectures; see, for example, KyberNetwork and Omega One. Decentralized exchanges that enable cross-chain trading are likely to exhibit all the problems discussed in this blog post, and more.

    [1]    We are deliberately vague about the mechanism of off-chain communication since it isn't relevant to the scheme. A simple example would be email, with a directory-like smart contract associating Ethereum addresses with their owners' email addresses.
In the interest of transparency, we would like to note that one of the authors of this blog post, Phil Daian, is an advisor to Swap, a P2P trading protocol that is an alternative to decentralized exchanges. Swap is not immune to many of the issues identified in this blog post, given that an Indexer, the Swap mechanism for counterparty discovery, could play a role much like that of a Relayer plus Order Book. (The Swap whitepaper provides insufficient information for detailed analysis.)
Bitcoin's Impending Accounting Disaster

Hacking Distributed
Bitcoin's Impending Accounting Disaster

Some idiot is making it rain Bitcoin Cash, and it's not good.

Once again, there's a brewing accounting disaster and ensuing drama to follow in Bitcoin land. This next one has the potential to outdo past accounting disasters, and because the topic is politically charged, can lead to all kinds of upheaval in its aftermath. It's also instructive to examine from a techie perspective, for it sheds light into the kinds of financial scams that people run.

At the center of this debacle is a new currency called "Bitcoin Cash." Bitcoin Cash is a new coin that is derived from Bitcoin in two ways: the code is very similar, except it has the capacity to clear more transactions per second, and it shares the same financial history. That is, if you own a Bitcoin (BTC), come August 1st, you'll also own a Bitcoin Cash (BCC).

This distribution technique is similar to how the new Brazilian real was distributed to the masses, or how stock shares are distributed in the case of a spin off. If you hold a share of Google, you end up holding both a Google and an Alphabet share after the split.

It turns out that at least one cryptocurrency exchange, and possibly more, gets this accounting wrong. This creates the opportunity to make money out of thin air. At the same time, it also creates the possibility for exchanges to go bankrupt.

The Rules
Here is the distilled version of how the popular exchange Bitfinex plans to account for the BCC coin distribution:
The token distribution methodology will be:
  • All BTC wallet balances will receive BCC
  • Margin longs and margin shorts will not receive BCC
  • Margin shorts and margin longs will not pay BCC
  • BTC Lenders will receive BCC

This is a good time to check up on your ability to detect scam opportunities. Do you immediately see how you can abuse these rules to make money?

If you need a hint, it's worth pointing out that this distribution method is in direct opposition to Bitfinex's own guidelines on how coin distributions ought to work.

Ok, those of you who can see the scam and have loose morals have probably stopped reading at this point and are busy positioning themselves for their own little scam. Let's now go over how the trick works for the rest of us normal people.

The Scam
The problem has to do with how positive BTC balances are unconditionally rewarded with BCC, while negative balances are not accounted for properly.

In particular, imagine our favorite player Alice. Suppose that she has 1 BTC, and she deposits it at Bitfinex.

Stuart (the Scammer) reads the rules above, and creates an account on Bitfinex. He then margin-shorts 1 BTC. That is, he borrows Alice's 1 BTC, and sells it. He now owes 1 BTC to Alice.

Stuart then, in a funny trade, buys his own margin-short with cash. He now owns 1 BTC, and owes 1 BTC to Alice. His position is market neutral and he carries no risk. Alice also owns 1 BTC.

Then the split happens. Bitfinex decides to credit both Stuart and Alice with 1 BCC, each! That is, Bitfinex erroneously creates a liability of 2 BCC. Meanwhile, they only actually own 1 BCC.

Bitfinex immediately becomes a fractional reserve, and simultaneously inflates the BCC supply.
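The imbalance is easy to see in a toy ledger (our own simplification; Bitfinex's actual books are obviously more complex):

```python
# Toy ledger for the BCC split (a simplification, not Bitfinex's books).
wallet_btc = {"Alice": 1.0, "Stuart": 1.0}  # positive BTC wallet balances
short_btc = {"Stuart": 1.0}                  # Stuart's margin short: 1 BTC owed

# Bitfinex's rules: every positive wallet balance is credited BCC,
# while margin shorts pay nothing back.
bcc_liability = sum(wallet_btc.values())

# But the exchange only actually holds the *net* BTC, so the split
# yields only that much real BCC.
bcc_assets = sum(wallet_btc.values()) - sum(short_btc.values())

print(bcc_liability)  # 2.0 -- BCC owed to users
print(bcc_assets)     # 1.0 -- BCC actually received: fractional reserve
```

Had Bitfinex debited 1 BCC from short positions, liabilities and assets would match at 1 BCC each.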

This goes counter to all kinds of established accounting principles, including Bitfinex's own guidelines for how to account for coin splits.

It is a terrible outcome for BCC. BCC price will be lower than its natural level as a result of this fraudulent accounting practice.

Of course, the BCC that Bitfinex creates out of thin air like this is not real BCC. The BCC price at Bitfinex should not be comparable to BCC price elsewhere. But of course, the market will not know how to compare or price in the bankruptcy risk. As a result, this will depress the actual BCC's price. It is, in effect, blowing through BCC's 21M cap by manufacturing fake BCC.

It is also terrible for Bitfinex, in that they can lose money and even go bankrupt, because they are creating more liabilities for themselves than assets. But Bitfinex are old pros at running a bankrupt exchange. They previously lost $60M, socialized the losses across their user base, issued tokens for the loss, then bought their own tokens back for pennies on the dollar, thus appearing to come out of bankruptcy. Their Tether contracts ended up growing in "assets" at a time when their banking operations were interrupted, leading many to conclude that Tethers were unbacked and untethered securities despite their marketing and name. The next exchange on the Fed's radar for flouting laws, accounting practices and plain old ethical behavior, after BTC-E, ought to be Bitfinex.

The Scam is Underway
Are people actually taking advantage of this loophole? You bet!

As one trader puts it: "Short 1 btc. Buy 1 btc. Get 1 bcc free :)".

The short interest at Bitfinex is at historical highs. And yet it's not due to negative sentiment -- these are neutralized short positions, where the same person is both shorting and going long, just to game the system. As a result, the Bitcoin price is unaffected even while huge short positions are being developed just to game the broken accounting.

The real losers at the end of this will be all of us, regular cryptocurrency enthusiasts. The SEC, in its landmark decision where it rejected the notion of a Bitcoin-backed ETF, cited specifically the unprofessional way in which most Bitcoin exchanges are run. The end result of all these small games is that, yes, some people make a few coins in the short run, but the cryptocurrency cause is set back, and the reputational damage punishes everyone for years to come.
  • Many thanks to two accounts dedicated to ferreting out bizarre behaviors in the crypto space, Cukefinex Auditor and BitCrypto'ed, for pointing out the Bitfinex shenanigans. They are beacons of sanity and you should follow them.
An In-Depth Look at the Parity Multisig Bug

Hacking Distributed
An In-Depth Look at the Parity Multisig Bug

Multiple signatures are better than one.

This year’s IC3-Ethereum bootcamp brought together the Ethereum Foundation’s top developers, IC3 students and faculty, and dozens of others from industry and academia. We worked together over the course of a week on ten exciting, intensive development projects, and ended with a bang on the last day. The Ethereum wallet rescue group (with a little help from a couple of IC3ers) scrambled to respond when 153,037 Ether (worth $30+ million) was stolen from three large Ethereum multisig wallet contracts.

The MultisigExploit-Hacker (MEH), as he, she, or they are known, exploited a vulnerability in the Parity 1.5 client’s multisig wallet contract. A fairly straightforward attack allowed the hacker to take ownership of a victim’s wallet with a single transaction. The attacker could then drain the victim’s funds, as happened in these three transactions once the wallets were compromised. The victims were three ICO projects: Edgeless Casino, Swarm City, and æternity.

In the following we will give an in-depth technical explanation of the hack, describe the white-hat response, and draw some lessons about how such breaches might be prevented in the future.

How the attack worked
There are many reports that the vulnerability was due to the simple omission of an “internal” modifier that made it possible for anyone anywhere to take ownership of an existing wallet due to Solidity’s “default-public” policy. While it is true that the addition of the right modifiers would have prevented the attack, the attack is a little more clever than this would suggest.

The vulnerable MultiSig wallet was split into two contracts to reduce the size of each wallet and save gas: A library contract called “WalletLibrary” and an actual “Wallet” contract consuming the library. Here is a toy version of WalletLibrary:

contract WalletLibrary {
  address owner;

  // called by constructor
  function initWallet(address _owner) {
    owner = _owner;
    // ... more setup ...
  }

  function changeOwner(address _new_owner) external {
    if (msg.sender == owner) {
      owner = _new_owner;
    }
  }

  function () payable {
    // ... receive money, log events, ...
  }

  function withdraw(uint amount) external returns (bool success) {
    if (msg.sender == owner) {
      return owner.send(amount);
    } else {
      return false;
    }
  }
}
WalletLibrary looks pretty boring: Beyond some initialization code that will be called in the constructor of Wallet, WalletLibrary provides the basic functionality you would expect from a wallet: Anybody can deposit money into the wallet, but only the owner can withdraw her funds or change the owner of the wallet.

Here’s an example, simplified contract that could be using this WalletLibrary:

contract Wallet {
  address _walletLibrary;

  function Wallet(address _owner) {
    // replace the following line with "_walletLibrary = new WalletLibrary();"
    // if you want to try to exploit this contract in Remix.
    _walletLibrary = <address of deployed WalletLibrary>;
    _walletLibrary.delegatecall(bytes4(sha3("initWallet(address)")), _owner);
  }

  function withdraw(uint amount) returns (bool success) {
    return _walletLibrary.delegatecall(bytes4(sha3("withdraw(uint)")), amount);
  }

  // fallback function gets called if no other function matches call
  function () payable {
    _walletLibrary.delegatecall(msg.data);
  }
}

This time, the code looks more complex. Notice the use of delegatecall throughout the contract. delegatecall is designed to enable the use of shared libraries, saving precious storage space on the blockchain otherwise wasted in duplicating widely used, standard code.

delegatecall works by executing the program code of a contract in the environment (and with the storage) of its calling client contract. This means that the library code will run, but will directly modify the data of the client calling the library. It essentially is as if the code of the library had been pasted into the client issuing the delegatecall. Any storage writes inside the delegatecall will be made to the storage of the client, not the storage of the library. delegatecall allows a client contract to delegate the responsibility of handling a call to another contract.

At the EVM level, a contract is just a single program that takes a variable-length binary blob of data as input and produces a variable-length binary blob of data as its output. A transaction in EVM provides an address and some data. If the address holds code, this data is used in a “jump table” like structure in the beginning of the contract’s code, with some of the data (the “function selector”) indexing jumps to different parts of contract code using the standard encodings described in the Ethereum Contract ABI specification. Above, the function selector for calling a function called initWallet that takes an address as its argument is the mysterious looking bytes4(sha3("initWallet(address)")).

Now that we understand delegatecall and how function selectors work, we can read and understand the Wallet contract. To begin with, we have a simple constructor that delegates the initialization of the contract’s state to WalletLibrary. This is followed by a withdraw function which -- once again -- delegates its task to WalletLibrary.

Finally, we want our wallet to be able to receive funds. This is commonly handled with a Solidity construct called a fallback function. A fallback function is a default function that is invoked when a call's data doesn't match any function in the lookup table. Since we might want all sorts of logic to be triggered upon the receipt of funds (e.g. logging events), we again delegate this to WalletLibrary and pass along any data we might have received with the call. This data forwarding also exposes WalletLibrary’s changeOwner function, making it possible to change the Wallet’s owner.

Wallet is now completely vulnerable. Can you spot the vulnerability?

You might have noticed that the initWallet function of WalletLibrary changes the owner of the Wallet. Whoever owns the Wallet can then withdraw whatever funds are contained in the contract. But as you can see, initWallet is only executed in the constructor of Wallet (which can only run once, when Wallet is created), and Wallet doesn’t itself have an initWallet function. Any calls sending this function selector to Wallet in a transaction's data won’t match.

So, the attacker cannot call Wallet.initWallet(attacker) to change the owner of the wallet and we are safe after all‽

As it turns out, the implementation of the fallback function means that we are not. As we already saw, a fallback function is a default function that is invoked when a call's data doesn't match any function in the lookup table. The intent is to allow contracts to respond to receipt of funds and/or unexpected data patterns; in our wallet, it both enables funds receipt and allows functions in the wallet library that are not explicitly specified in the wallet to be called.

One use of such a “generic forward” feature could be upgradeability; the pattern is sometimes even recommended as a security precaution in case functions not anticipated at release time become necessary: by allowing the wallet’s owner to change the address of the library contract, one could keep the wallet and its funds at the same address while changing the underlying implementation. In the Parity contract, it is likely that forwarding data was done only to save gas costs, as the contract is not upgradeable and all forwards could have been made explicit at compile time. Instead, this implicit default forwarding was used over explicit forwarding to expose certain functions like revoke.

So, when an attacker calls Wallet.initWallet(attacker), Wallet’s fallback function is triggered (Wallet doesn’t implement an initWallet function), and the jump table lookup fails. Wallet’s fallback function then delegatecalls WalletLibrary, forwarding all call data in the process. This call data consists of the function selector for the initWallet(address) function and the attacker’s address. When WalletLibrary receives the call data, it finds that its initWallet function matches the function selector and runs initWallet(attacker) in the context of Wallet, setting Wallet’s owner variable to attacker. BOOM! The attacker is now the wallet’s owner and can withdraw any funds at her leisure.
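This chain of events can be modeled in a few lines of Python (a deliberately crude sketch of EVM-style dispatch, with snake_case stand-ins for initWallet; not real EVM semantics):

```python
# Crude Python model of the dispatch behavior that enables the attack
# (illustrative only; not real EVM semantics).

class WalletLibrary:
    def init_wallet(self, storage, new_owner):
        # Runs via "delegatecall": library code writes to the CALLER's storage.
        storage["owner"] = new_owner

class Wallet:
    def __init__(self, library, owner):
        self.library = library
        self.storage = {"owner": owner}
        self.jump_table = {"withdraw"}  # functions Wallet itself exposes

    def call(self, selector, *args):
        if selector in self.jump_table:
            ...  # normal dispatch to Wallet's own functions
        else:
            # Fallback: forward ALL call data to the library, which then
            # executes its matching function against Wallet's storage.
            handler = getattr(self.library, selector)
            handler(self.storage, *args)

wallet = Wallet(WalletLibrary(), owner="victim")
wallet.call("init_wallet", "attacker")  # no match -> fallback -> library
print(wallet.storage["owner"])          # attacker
```

The key point the model captures: the selector that fails to match in Wallet's jump table is not rejected but forwarded wholesale, so every public library function becomes reachable.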

In reality, the initWallet function was more complicated and took more parameters, but the principle of the attack is exactly the one described above. You can see one of the initWallet calls of the attacker; the attacker then immediately withdrew the funds, earning her 26,793 ETH (or ~6.1 million USD). Not bad for two function calls and 60 cents in gas cost!

If initWallet had been marked as internal, there would have been no corresponding entry in the jump table, making it impossible to call initWallet from outside the contract. If initWallet had checked for double initialization, there would also have been no problem. Adding to the confusion is the fact that certain functions that are supposed to be callable from outside the contract are marked as external; one could easily wrongly assume that all functions not marked as external aren’t visible from the outside. In Parity’s patch to the vulnerable contract, a modifier was added to a helper function of the vulnerable wallet initialization process to throw an exception if the attacked initialization function was re-called.

The response
A white-hat recovery team (MEH-WH) identified and drained all remaining vulnerable wallets into this wallet. They recovered a total of $78 million worth of tokens (half the value being BAT and ICONOMI) plus 377,105+ ETH (around $72 million). The funds will be returned to their owners, as noted on r/ethereum:
If you hold a multisig contract that was drained, please be patient. [The MEH-WH] will be creating another multisig for you that has the same settings as your old multisig but with the vulnerability removed and will return your funds to you there.

This is all well and good for the recovered funds, but the stolen funds are in all likelihood unrecoverable. The Ethereum community cannot, for instance, easily execute a hard fork as they did in the case of The DAO. The DAO had a built-in 34-day delay, during which the stolen funds were locked into the contract and subject to recovery. The MEH only needs to identify compliant exchanges to cash out or convert to ZEC to retain the stolen funds with full anonymity. The MEH has already cashed out small amounts, as in this roughly 50 ETH transaction at Changelly.

Unless the hacker trips up, the community will have to resign itself to the loss of the money---more than in any U.S. bank robbery -- and an enduring mystery over the identity of the thief / thieves. In case you’re wondering, none of our ten bootcamp projects involved stealing ETH from multisig wallets. :)

Looking at the history of the vulnerable contract in Parity’s github repository, we find that the contract was first added as a complete blob of code on December 16 of last year in commit 63137b. The contract was edited extensively once, on March 7 in commit 4d08e7b and then wasn’t touched until the attack occurred. Since the contract was originally added as one big blob, it was likely copied from somewhere else, making its provenance in development unclear. Note that the first version of the contract already contained the vulnerable code.

It is hard to believe that such a large (and valuable!) vulnerability could have gone undiscovered for so long. However, in light of the contract’s length and complexity, and the complex interactions among Solidity’s fallback functions, its default-public visibility of functions, delegatecalls, and call data forwarding that enable the attack, this seems less surprising. At least one other multisig contract had an analogous bug that stemmed from the lack of a function modifier.

We believe that there are multiple levels on which lessons should be drawn from this attack:

First of all, we recommend that Solidity adopt a default-private level of visibility for contract functions. This change would have likely prevented this exploit and others like it. This may be an opportunity to batch a number of other safe usability related changes, much needed additional types, and solutions to common gripes into Solidity. It's also an opportune time to think about versioning at the source language level, to be able to easily introduce new features into the language without having to worry about backwards compatibility.

In a more general sense, we believe that this attack was the result of security’s oldest enemy, complexity: It seems likely that the missing “internal” function modifier would have been discovered by the developers if Wallet had just been a single contract instead of delegatecalling out to WalletLibrary. Even without this modifier, Wallet would not have been vulnerable as long as Wallet’s fallback function wouldn’t have unconditionally forwarded any calldata to WalletLibrary, exposing unexpected functions able to modify the data in Wallet.

Interestingly, this specific attack may not have been caught by testing as implemented by most developers, since the vulnerability wasn’t caused by incorrect behaviour of a function, but rather by the unexpected exposure of a function (that behaved correctly) to the public. Nevertheless, we do, of course, strongly recommend that smart contract authors thoroughly test their contracts, and a test policy that included testing for function visibility on every function would have exposed the issue.

The creation of a rigorously followed best-practices guide for testing and smart contract review, one that requires visibility assumptions to be made explicit and tested, is thus, we believe, one of the strong lessons from this attack. Today, it is not uncommon to see large contracts deployed with only a handful of unit tests, with little to no reasoning about interactions between contracts, and with unit tests that do not even achieve full statement or decision coverage. Beyond these glaring omissions of basic software-quality techniques, it remains apparent that there is still work to be done in understanding best practices for high-level tooling and language design for smart contracts.

The Parity project has released a post-mortem giving a high-level overview of the attack and discussing the steps Parity will take in the future to ensure that such an attack will not happen again. Many of their conclusions agree with ours.

We would like to thank Everett Hildenbrandt of the KEVM project for his feedback and helpful suggestions on explaining the attack.
Parity's Wallet Bug is not Alone


A bug was found in Ethereum's Parity Multisig Wallet yesterday, wherein anyone could take ownership of any multisig wallet and usurp the funds. And it looks like someone did just that, emptying out around $30M from three multisig addresses affecting three ICOs. The root cause of the bug was a missing "internal" modifier, which left a function that should only have been callable internally exposed to everyone.

This is not the first instance of such a bug. The point of this blog post is to underscore some takeaways that might make sure that it's the last. I'll do this by describing a bug I found and fixed in BitGo's multisig wallet last year, and making the case for a general fix.

The Backstory
I took time away from this to look at BitGo code. I was younger and dumber.

I was sitting on the beach at the summer house (which I had bought with my proceeds from the DAO hack [1]) when I got a message from Benedict Chan and Ben Davenport at BitGo. They wanted me to take a look at their multisig wallet. I don't normally do code reviews -- they are hard work, carry too much reputational risk, and with some probability, people will misrepresent my limited involvement as some kind of endorsement of their entire business proposition. Further, they didn't offer any compensation and IC3 had no relationship with BitGo. But I liked BitGo at the time, mostly because they employed great people like Ben Chan and Jameson Lopp. Plus it was a lazy summer afternoon, I was sitting by the Med, with the crickets doing their thing, and the wind wasn't up yet to kiteboard. So I decided to spend about 15 minutes to look, through the glare on my laptop screen, at their code.

This is a good time to look through their code yourself and see if you can spot any vulnerabilities. It should be a little easier given the context. Here it is.

If you're feeling adventurous, see if you can spot the second attack. There isn't just one, but two related and distinct vulnerabilities.

In case you are feeling lost, like I initially felt, here's some orientation. This code is different from the usual multisig pattern one would build on Ethereum. BitGo evidently decided to replicate their Bitcoin workflow in Ethereum, so they pass around an object from one party to another, building the multisig transaction off-chain. In contrast, the natural approach in Ethereum would have been to keep track of the number of signatories inside the contract. It took me several minutes to figure out this design decision. Around minute 12, I spotted the bug.

Do you see the problem?

Spoilers follow.

The big issue is that any idiot on the internet can call tryInsertSequenceId() with any value they like. And a natural thing to call it with is, of course, a bunch of numbers close to MAXINT. After that, the multisig wallet is stuck. It will not be able to issue any further transactions. Any funds in the contract will be stuck forever. Well, either forever, or until BitGo is able to lobby King Vitalik [2] and get him to agree to a hard fork. Needless to say, that's in jest; I've recently read one too many rants by Bitcoiners who are upset that Ethereum is gaining dominance. The truth is, Vitalik couldn't pull off a hard fork even if he wanted to. BitGo would have to convince the entire Ethereum community to swap the state of their contract to re-mobilize their funds. Chances of that happening are somewhere between an ice-skating vacation in hell, Bitcoin Core adopting something invented by someone who does not go to extra lengths to appease their fragile egos, and Craig Wright actually signing something with the genesis key -- squarely in the "not going to happen" category.

The bug would have been averted by marking tryInsertSequenceId() as private.
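To make the paralysis concrete, here is a hypothetical Python model of the contract's sequence-id window (a sketch of the logic, not BitGo's actual Solidity code; the window size and acceptance rule are assumptions for illustration):

```python
MAXINT = 2**256 - 1
WINDOW = 10  # assumed size of the wallet's recent-sequence-id window

class Wallet:
    def __init__(self):
        self.recent = [0] * WINDOW  # last WINDOW accepted sequence ids

    def try_insert_sequence_id(self, seq):
        # reject replays, and reject ids at or below the lowest id in the window
        lowest = min(self.recent)
        if seq in self.recent or seq <= lowest:
            raise ValueError("sequence id rejected")
        self.recent[self.recent.index(lowest)] = seq

w = Wallet()
# an attacker burns ids at the top of the id space...
for s in range(MAXINT - WINDOW, MAXINT):
    w.try_insert_sequence_id(s)
# ...and every small, legitimate id is now rejected: the wallet is paralyzed
```

With the function publicly callable, ten transactions suffice to brick the wallet forever; marking it private removes the attacker from the picture entirely.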

Not at all ironically, the bug in the BitGo multisig contract is due to exactly the same root cause as the one in the Parity multisig smart contract.

A Second Vulnerability in the BitGo Contract
Safe defaults lead to less time on the laptop, more time on the water.

There is a second vulnerability in this contract. As I wrote to BitGo
I'm also worried that someone (eg equivalent of a hacked Bitfinex) could manipulate the infrastructure around this wallet contract so as to make the other party (BitGo) skip over most of the sequence id space and sign 10 legitimate transactions with really large seqids, then be unable to sign any more transactions.

This is an attack on the BitGo API via a compromised client. Even if the private or internal modifier had been present, the contract would have been open to this kind of attack. These kinds of API design issues are fairly tricky to spot. It's one thing to verify the correctness of a contract against a formal spec, another thing to verify it against an implicitly assumed but not actually enunciated spec, and a different thing altogether to see if the spec is safe given the operating conditions.

I'm fairly paranoid, so I would have hardened the API by ensuring that the smart contract does not allow one to advance through the identifier space non-sequentially. That is, I'd rule out skipping around. Counting one by one all the way up to 2^256 would not constitute a feasible attack -- that DoS attempt would be preceded by the heat death of the universe. That would prohibit exhaustion of sequence ids by misbehaving clients.
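That hardening can be sketched as a hypothetical Python model in which sequence ids must advance strictly one at a time (illustrative only; the real fix would live in the Solidity contract):

```python
class HardenedWallet:
    """Hypothetical hardening: sequence ids must be strictly consecutive,
    so exhausting the 2**256 id space would take longer than the
    universe has left."""
    def __init__(self):
        self.last_seq = 0

    def try_insert_sequence_id(self, seq):
        if seq != self.last_seq + 1:
            raise ValueError("sequence ids must advance one at a time")
        self.last_seq = seq

hw = HardenedWallet()
hw.try_insert_sequence_id(1)
hw.try_insert_sequence_id(2)
# a compromised client trying to leap near the top of the id space is refused
```

A compromised client can still stall its own transactions, but it can no longer burn the id space and paralyze the wallet.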

Every big contract is simultaneously a bug bounty and a lambo.

I personally had a good run finding critical bugs, but this portends a dangerous trend. Specifically, with my co-authors Dino Mark and Vlad Zamfir, I found 9 bugs in the DAO, one of which was used by the DAO hacker. But I found, yet incorrectly dismissed, the other bug that the DAO hacker used. We seem to be living through a phase particularly rich in critical bugs.

There are some depressing takeaways here. Two separate multisig wallet contracts, both scrutinized heavily, were vulnerable to the same kind of bug. This echoes the finding from the 1970s that N-version programming tends to suffer from bugs at the same critical points. Even if one spends the effort to implement N independent versions of the same functionality, they may fail at the same time or around the same breaking points. True independence of failures is difficult to achieve.

Worse, many eyeballs don't make code safe. Just about every ICO, trust and company used the Parity multisig wallet, and that code was considered well-tested. One could perhaps try to make the case that Parity is a volunteer-run, open source project, that professional software houses have their act together better. But that'd contradict the fact that BitGo got bit by the same exact bug. So, clearly, the problem here is systemic and needs a systemic solution. As the database community painstakingly discovered, most programmers cannot handle power: they need sane defaults and mechanisms that fail safe when the programmer is negligent.

And there is also a positive lesson for language design. These kinds of bugs would be greatly reduced if the default policy in Solidity were "default-private." Almost every other language uses default-private, forcing a smart contract programmer to take extra steps to expose a procedure to outsiders. The tradeoff in Solidity is, frankly, wrong. Default-public is too error prone.

I'd urge the Solidity project to change this behavior: make fields and functions in a contract default-private. Such a change does not affect already-deployed smart contracts, does not require a fork, and can even be made opt-in with the addition of a compiler flag. It's relatively trivial to effect, compared to the impossibility of recovering lost funds.
The BitGo After-Story
Can one blame Remus or Romulus for their upbringing?

Some of you may be wondering how BitGo handled this episode.

I notified BitGo of this issue and they promptly fixed their code. They had apparently removed the "private" modifier to facilitate unit testing.

This brings up yet another takeaway -- there ought to be some way of dropping private modifiers during testing, and only during testing. Mutating one's code to perform testing is not good practice.

BitGo did not believe that the second issue would be a problem in practice, as they thought that the surrounding client code would not issue non-consecutive sequence numbers. It's their call and their own risk assessment. Recall that I wasn't getting paid for any of this and had no skin in the game, so there was no point in contesting their assessment. I'm told that BitGo ultimately did not launch a product based on Ethereum.

I wrote a nice and friendly email to BitGo:
Great to hear that that was useful. Much better to catch potential issues now, than to try to deal with the fallout from a multi-million contract that is stuck. I don't know if BitGo has a Technical Advisory Board, but if it does, I'd consider it. That'd create an open channel for interactions like this. Otherwise, the chances of catching me on vacation with cycles to spare are, regrettably, fairly low.

I never heard a word from BitGo for the next 4-5 months or so. I chalked that up to most early CTOs in the crypto space having been abandoned by their families at a young age and been raised by wolves. I was surprised, however, when Ben Davenport publicly adopted a rabid small block stance and supported the small troll squad known as Dragon's Den. When I emailed him about the impact of unprofessional conduct on the whole space, he sent me the following lovely message:
I understand you are currently on sabbatical from Cornell. Are your fellow faculty members aware of the attention that's being drawn to the Cornell CS department due to your public statements? Hope your activity during your sabbatical year doesn't influence your tenure committee too much.

Those of you who know that I received tenure many years prior can imagine the bemused look on my face at the implied, yet toothless, threat on my employment. Tenure is intended precisely to stem these kinds of threats on academic freedom.

In the last two months, I had the pleasure of meeting Mike Belshe, CEO of BitGo, in person; he apologized and thanked me for this episode, almost a year after the fact. Belshe is a grown-up, and BitGo continues to employ excellent engineers. Regardless, I've decided to have absolutely nothing to do with BitGo in a professional capacity, and, coupled with the Bitfinex hack for which I consider BitGo partly responsible, I've decided to steer clear of their products.


    [1]    Just kidding.
    [2]    Also just kidding.
Atomically Trading with Roger: Gambling on the success of a hardfork


We have designed a new protocol for Bitcoin and Ethereum that lets users bet on the outcome of a hardfork. Prior to the hardfork, two parties can deposit coins and commit to a wager; after the hard fork occurs, the coins are swapped on both forks. Our protocol is inspired by Loaded's challenge to Roger to swap 60k coins in the event of a Bitcoin Unlimited hardfork.

Our story begins
Satoshi Nakamoto's 1MB block size limit was activated on 12th September 2010 as an anti-denial-of-service mechanism. Four months later, Satoshi Nakamoto left the community without removing the block size limit. His departure left the community with the block size debate.

Over the years several projects have attempted and failed to adjust this block limit as changing Bitcoin's consensus rules appears to require significant support from miners, developers, exchanges, and validators.

Bitcoin Unlimited (BU) is one such proposal to adjust the block limit by a hardfork. Currently around 54% of mining power is signaling support for it. BU includes a new consensus rule called emergent consensus that allows miners to govern the block size amongst themselves.

As with any hard fork, the danger is that the blockchain could split in two upon activating the new consensus rule. One fork would follow the 1MB consensus rules; the other fork would follow the new emergent consensus rules.

Loaded's challenge to Roger
It is worth highlighting that members of the Bitcoin community perceive that a blockchain split is highly probable if BU is successful. In fact, this perception has led to Loaded (a wealthy member of the Bitcoin community) challenging Roger Ver (a vocal advocate of Bitcoin Unlimited) to put his money where his mouth is and to swap at least 60k coins (i.e. a 1:1 trade) if the blockchain does indeed split into two distinct forks.

The best part of this story is that Roger responded within two days to signal his willingness to pursue the bet.
It goes without saying that the Bitcoin community is highly excited at the prospect of the bet, and several notable posts and articles have covered it. Of course, before we can get excited about the bet... a single question remains:

What is the best technical approach for Loaded and Roger to pursue this bet?

Loaded stated that he would prefer if the trade could be performed as an atomic swap.

Anyone familiar with atomic swap protocols in Bitcoin will instinctively think of Tier Nolan's protocol after reading Loaded's post. In fact, I (Patrick McCorry) responded to Ethan Heilman on Twitter to suggest that Loaded and Roger use Tier Nolan's protocol.

Ethan Heilman identified that the Tier Nolan protocol is not suitable for the bet, since Roger and Loaded cannot use it to commit to the trade prior to the activation of Bitcoin Unlimited's hardfork. However, Ethan's second reply, about using replay protection to build a new atomic trade protocol, led to a eureka moment.

Facilitating the Gamble
Together we developed a new atomic trade protocol to allow Loaded and Roger to deposit coins prior to the hard fork's activation and perform the atomic swap after the blockchain splits. In fact, we propose a protocol for both Bitcoin and Ethereum.

Bitcoin Atomic Trade Protocol
A key design decision for the Bitcoin atomic trade protocol was to ensure it was compatible with Bitcoin as deployed today (i.e. in the absence of a fix for transaction malleability). The protocol has three stages:

Deposit coins. Both Roger and Loaded deposit coins into a single funding transaction.

Off-chain Setup. First, Roger signs a cancellation transaction and sends it to Loaded; it allows Loaded to cancel the atomic trade if the off-chain setup is not completed.

Next, Roger signs a trade transaction and sends it to Loaded. This transaction allows Loaded to claim both deposits in the non-Bitcoin Unlimited blockchain (i.e. old consensus rules) if Roger reveals the secret A of H(A).

Then, Loaded signs a trade transaction and sends it to Roger. It is worth mentioning that this trade transaction has replay protection to ensure it can only be accepted into Bitcoin Unlimited's blockchain. This transaction allows Roger to claim both deposits if he reveals the secret A of H(A).

Once both trade transactions are exchanged, Roger is required to sign two forfeit transactions and send them to Loaded. The purpose of the forfeit transactions is to lock Roger into the trade such that if he does not reveal the secret A, then Loaded can claim all coins in both blockchains when the trade expires.

Assuming the above steps are complete and Loaded has not canceled the trade, then Roger can broadcast a commit transaction that locks both parties into the trade (i.e. blocking Loaded's cancellation transaction).

Atomic Trade. Roger can broadcast his trade transaction into the Bitcoin Unlimited blockchain once the hardfork occurs. This also requires Roger to reveal the secret A in order to claim both deposits.

Assuming the blockchain does indeed split into two forks, then Loaded can use the secret A to also broadcast his trade transaction in the non-Bitcoin Unlimited blockchain.

Of course, if Roger decides to abort and not claim his coins, then the trade expires and Loaded can claim all coins in both blockchains.
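The hashlock that links the two trade transactions can be modelled in a few lines of Python (illustrative only; the real protocol encodes the H(A) condition in Bitcoin script, and the `claim` predicate here is a stand-in for a transaction's spend condition):

```python
import hashlib
import os

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Roger picks a random secret A and publishes only its hash H(A);
# both trade transactions pay out to whoever presents a preimage of H(A)
A = os.urandom(32)
H_A = sha256(A)

def claim(preimage: bytes, hashlock: bytes) -> bool:
    # a trade transaction's spend condition, modelled as a predicate
    return sha256(preimage) == hashlock

# Roger claims both deposits on the Bitcoin Unlimited fork, which
# necessarily reveals A on that chain...
assert claim(A, H_A)
# ...and Loaded reuses the revealed A to claim on the legacy fork
assert claim(A, H_A)
```

This is what makes the trade atomic: Roger cannot collect on one fork without handing Loaded the secret needed to collect on the other.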

The key insight of this protocol (and Bitcoin contracts in general) is that it is difficult to achieve fair exchange in the absence of a transaction malleability fix. In the appendix of our paper we include a second atomic trade protocol for Bitcoin that assumes transaction malleability has been fixed. Not surprisingly, the second protocol's design is vastly simpler and resembles how a payment channel is established, i.e. both trade transactions are signed off-chain before signing and broadcasting the funding transaction.

Ethereum Atomic Trade Protocol
As mentioned, we also designed a smart contract for Ethereum called 'Trade' to perform an atomic trade in the event of a hardfork. An interesting insight of this protocol is that a hardfork oracle can be realised to allow other contracts to detect whether they are on fork-1 (i.e. old consensus rules) or fork-2 (i.e. new consensus rules).

The protocol is straightforward:
  • Both parties audit the Trade and Hardfork Oracle contracts.
  • Both parties deposit their coins into the Trade contract to lock their deposits until the hardfork activates.
  • Each party withdraws both deposits from the contract in their respective fork after the hardfork's activation.
Of course, the Trade contract communicates with the Hardfork Oracle contract to identify which blockchain it is in before authorising any withdrawal.
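The interaction between the two contracts can be sketched as a hypothetical Python model (the actual design is a Solidity contract; class names mirror the contracts above, but the parties, amounts, and oracle mechanism shown are illustrative assumptions):

```python
class HardforkOracle:
    """Stand-in for the on-chain oracle that reports which fork the
    contract is executing on; on chain this would inspect some
    post-fork-distinguishable property rather than a stored value."""
    def __init__(self, fork_id):
        self.fork_id = fork_id  # "old" or "new" consensus rules

class Trade:
    def __init__(self, oracle, old_fork_winner, new_fork_winner):
        self.oracle = oracle
        self.deposits = {}
        # one party takes both deposits on each fork
        self.winner = {"old": old_fork_winner, "new": new_fork_winner}

    def deposit(self, party, amount):
        self.deposits[party] = self.deposits.get(party, 0) + amount

    def withdraw(self, party):
        # consult the oracle before authorising any withdrawal
        if self.winner[self.oracle.fork_id] != party:
            raise PermissionError("not this party's fork")
        total = sum(self.deposits.values())
        self.deposits = {}
        return total

# after the split, identical deposits exist on both forks
old_chain = Trade(HardforkOracle("old"), "Loaded", "Roger")
new_chain = Trade(HardforkOracle("new"), "Loaded", "Roger")
for chain in (old_chain, new_chain):
    chain.deposit("Loaded", 60_000)
    chain.deposit("Roger", 60_000)
```

Each party can then withdraw both deposits only on their own fork, which is exactly the swap the bet calls for.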

It is exciting that Loaded and Roger may actually swap their coins in the event that Bitcoin Unlimited is successful. Of course, our protocol is applicable not just to Bitcoin Unlimited, but to any hardfork that includes replay protection. We hope the community has fun reading the paper, and if anyone is willing to develop the protocol for use in a future hardfork, please let us know!

p.s. It is also worth mentioning that this paper is a great example that using Twitter is not always a waste of time... :)

Acknowledgements We want to thank Amy Castor for her constructive comments on an early draft of the blog post.
Twitter, Reddit and 4chan: The Web's Fake News Centipede


The New York Times recently published an example of how information that appeared on a “fringe” community, such as a parody website, managed to creep into mainstream web communities like Facebook and Fox News. Figure 1 depicts how the fake information flowed between the communities involved in this particular example, thus forming a sort of centipede of web communities. This example highlights how fringe communities can influence mainstream web communities by manipulating and spreading unconfirmed information. Before our study, this phenomenon had not been rigorously investigated, and there was no thorough measurement and analysis on how information flows between online communities.

Figure 1: An example of a Web Centipede as presented by The New York Times.

Nevertheless, anecdotal evidence and the aforementioned example suggest that mainstream web communities, such as Facebook and Twitter, as well as “fringe” web communities, such as 4chan and Reddit, are part of the Web centipede of information diffusion. In our technical report, we systematically study Twitter, Reddit, and 4chan with respect to the influence they have on each other, aiming to understand their impact on the greater information ecosystem. More specifically, we study the temporal dynamics of URLs from 99 mainstream and alternative news domains as they appear in these three communities.

The three platforms are quite different, and worthy of a brief explanation:
  • Twitter. It is a mainstream micro-blogging, directed social network where users can share 140-character tweets with their followers.
  • Reddit. It is a social news aggregator where users post URLs to content along with a title. Users can upvote or downvote each post; the aggregate of the votes determines a post's ranking, and thus the popularity and visibility of the content on the platform. The platform is organized into sub-communities called subreddits, which users can create, each with its own topic and moderation policy. In our paper, we focus on 6 specific subreddits that substantially contribute to the dissemination of alternative and mainstream news: The_Donald (for Donald Trump supporters), politics, news, worldnews, AskReddit, and conspiracy.
  • 4chan. It is an imageboard, a kind of discussion forum: users create a new thread by making a post with a single image attached, and perhaps some text, in one of several boards (69 as of May 2017) devoted to different topics of interest. Users on 4chan are anonymous, and boards are monitored by “janitors” who ensure conversation remains on-topic. Janitors are generally unconcerned about the language used, however, so 4chan is often quite a hostile place. Several episodes highlight 4chan's impact on the Web and its communities: for example, a hoax that originated on 4chan about the death of Ethereum's founder resulted in losses of $4 billion in Ethereum's market value. In this work, we looked at the Politically Incorrect board (/pol/), which focuses on discussion of world news and politics.
Below, we briefly describe some of our findings from temporal dynamics and influence estimation analyses.

Temporal Dynamics
Given the set of unique URLs across all platforms and the times they first appear, we analyze their appearance on one, two, or three platforms, and the order in which this happens. For each URL, we find the first occurrence on each platform and build corresponding “sequences.” E.g., if a URL first appears on Reddit and subsequently on 4chan, the sequence is Reddit→4chan (R→4). These sequences allow us to study and visualize the information ecosystem using graph modelling and analysis techniques. Specifically, we create two directed graphs, one for each type of news, where the vertices represent alternative or mainstream domains as well as the three platforms, and the edges represent the sequences, considering only the first hop between platforms. For example, if a URL appears first on Twitter and later on Reddit, we add an edge from its domain to Twitter, and one from Twitter to Reddit. We weight these edges by the number of unique URLs that follow each sequence. By examining the paths, we can discern the domains whose URLs tend to appear first on each of the platforms. Below we show the graphs built for alternative and mainstream domains.
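The sequence-to-graph construction can be sketched in Python (the domains and timestamps below are made up for illustration; the real analysis runs over the crawled dataset):

```python
from collections import Counter

# hypothetical first-appearance timestamps of each URL on each platform
appearances = {
    "altnews.example/a": {"Reddit": 1, "4chan": 5},
    "altnews.example/b": {"Twitter": 2, "Reddit": 3, "4chan": 9},
    "mainnews.example/c": {"Twitter": 4},
}

edges = Counter()  # weighted edges of the directed graph
for url, seen in appearances.items():
    order = sorted(seen, key=seen.get)   # platforms by first appearance
    domain = url.split("/")[0]
    edges[(domain, order[0])] += 1       # domain -> first platform
    if len(order) > 1:                   # first hop between platforms
        edges[(order[0], order[1])] += 1
```

Edge weights then directly answer questions like "how often do this domain's URLs surface on Reddit before anywhere else?"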
Comparing the thickness of the outgoing edges, we see that for several alternative domains, URLs appear first on Reddit more often than on Twitter, and more frequently than they do on 4chan. However, for other popular alternative domains, URLs appear first on Twitter more often than on Reddit or 4chan. Also, 4chan is rarely the platform where a URL first shows up. For the mainstream news domains, URLs from some domains tend to appear first more often on Reddit than on Twitter or 4chan, while URLs from other domains tend to appear first more often on Twitter than on Reddit. As with the alternative-domains graph, there is no domain for which 4chan dominates in terms of first URL appearance.

Influence Estimation
To estimate how the individual platforms influence the media shared on other platforms, we use a statistical model known as Hawkes processes. This method enables us to say with confidence that a particular event (i.e., posting of a URL) was caused by a previously occurring event. For example, we can be confident that an event that happened on 4chan’s /pol/ board (i.e., a URL is posted in a 4chan thread) resulted in an event on Twitter (i.e., the same URL is posted on Twitter), thus denoting influence from 4chan to Twitter. This is particularly useful for modelling the influence of the three platforms, since they are not independent; each one is influenced by the others as well as the greater Web (more details about the Hawkes methodology can be found in the manuscript). For instance, there are some examples of crazy conspiracy theories that originate from “fringe” communities and have an impact on other web communities, sometimes even ending up being reported by both mainstream and alternative news users.
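The intuition behind a Hawkes process can be conveyed with a univariate, exponential-kernel sketch in Python (the parameter values are illustrative, not fitted; the study fits a multivariate model in which each community's events raise the intensity on the others):

```python
import math

def hawkes_intensity(t, events, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity of a univariate Hawkes process with an
    exponential kernel: lambda(t) = mu + sum_i alpha*beta*exp(-beta*(t - t_i)).
    Each past event transiently raises the rate of future events, which is
    how 'a post here caused a post there' becomes statistically expressible."""
    return mu + sum(alpha * beta * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

quiet = hawkes_intensity(5.0, [])              # baseline rate: just mu
after_burst = hawkes_intensity(5.0, [4.0, 4.5, 4.9])
# a burst of recent events pushes the intensity well above baseline
```

In the cross-platform setting, a URL posted on /pol/ bumps the fitted intensity of that URL appearing on Twitter, and the fitted kernels quantify how much of each platform's activity is attributable to each other platform.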

In our experiments we measure the influence of the 6 selected subreddits from Reddit (The_Donald, politics, worldnews, AskReddit, conspiracy, news), /pol/ board from 4chan and the Twitter platform. Figure 2 shows the estimated total impact of the three platforms on each other, for both mainstream URLs (M) and alternative URLs (A). Twitter contributes heavily to both types of events on the other platforms, and is in fact the most influential single source for most of the other communities. After Twitter, The_Donald and /pol/ also have a strong influence on the alternative URLs that get posted on other communities. The_Donald has a stronger effect for alternative URLs on all communities except Twitter. Yet it still has the largest alternative influence on Twitter, causing an estimated 2.72% of alternative URLs tweeted. Interestingly, we observe that The_Donald causes 8% of /pol/’s alternative URLs, while /pol/’s influence on The_Donald is less, at 5.7%. For the mainstream URLs the strength of influence is reversed. Specifically, /pol/’s influence on The_Donald is 8.61%, whereas The_Donald’s influence on /pol/ is 6.13%. In the following figure we report the percentage of influence for each combination of communities.

Figure 2: Estimation of influence between the platforms for alternative and mainstream URLs.

This work is a first attempt to characterize the dissemination of mainstream and alternative news across multiple social media platforms, and to estimate a quantifiable influence between them. To achieve this, we analyze thousands of URLs obtained from millions of posts on Reddit, Twitter, and 4chan. Ultimately, we find that fringe communities have a surprisingly large effect on well-known platforms. That said, there is certainly more to learn. In the future, we aim to include other information modalities in our analysis, namely textual and image characteristics, to further assess the influence between communities.
Bancor Is Flawed


Bancor just did their Initial Coin Offering (ICO) last week and raised a record $144M within a few hours. They now hold the record for the biggest crowd-funding, ever, in the history of mankind.

We don't want to dwell too much on what this illustrates about the current ICO craze. It's a fact that raising that much cash through a standard VC process would require a credible team and multiple rounds of funding, with much due diligence and milestones along the way. None of that happened here -- Bancor went from appearing on the scene 5 months ago to raising nine digits of cash with no demonstration that their scheme actually works.

In this post, we want to quickly make the case that their approach is flawed. To recap, they propose a scheme to provide liquidity for digital assets, using a smart contract. In essence, they propose a public algorithm by which they can always propose a bid/ask price for other people's coins. Now, a flawed approach doesn't mean that what they have is exploitable the way The DAO was, though we detail some immediate problems with the code below. What we mean is that the contract, as implemented, is far from meeting its purported narrative, even if no one takes advantage of the exploit.

The quick takeaway is that Bancor can be gamed by miners, and, even if the miners are naive or benevolent, will always trail the real market. It provides no efficiency guarantee during this discovery process, and will likely waste its reserves on market price discovery. You should think twice before you layer a coin on top of Bancor.

Let's do a quick walk through the red flags we encountered as we read the code and documentation for Bancor tokens, known as BNT.

Issues with Bancor Fundamentals
  • Bancor uses lots of mumbo jumbo terminology.
Bancor addresses the "double coincidence of wants" problem in economics through an "asynchronous price discovery" mechanism that blah blah blah buzzwords blah and more buzzwords. Half of these buzzwords aren't even real. "Asynchronous price discovery" is a franken-word they made up. It borrows asynchronous from distributed systems and "price discovery" from economics. The crowds eat this stuff up, but in reality, it's just a giant red flag -- there's no content here.

Let's move on, for it's possible to have a good idea underneath fluffy marketing.
  • The core problem they want to address, "Double Coincidence of Wants" is not a real problem.
"Double coincidence of wants" is a real problem in economics today in the sense that the "itsy bitsy spider" problem is a real problem in zoology -- that is, it's something one might learn in grade school, and it's completely irrelevant in the real world.

To be fair, you might indeed come to the market with two rabbits one day and I might come to the market with two chickens on the same day. I might also have an aversion to rabbit meat and a family history of mal de caribou. We would then be unable to conduct a trade.

In reality, this never happens, because...
  • One can always use ether as a medium of exchange.
There already exists a common currency through which we can trade. It's called ether, and we can use it no matter which token pairs we want to trade, because those very tokens are, by definition, implemented on top of Ethereum and were purchased with ether in the first place. Using BNT tokens is like stepping into a kid's swimming pool, placed in an ocean.

The entire point of the underlying currency, ether, is to serve as the medium of exchange for the diverse assets built on top. Sure, we can always layer BNT on top, and use BNT underneath the new Bancor-based tokens, but there would be little point in doing so besides creating a money flow for the people behind Bancor. For every coin they design that uses Bancor, we can design a more efficient version that doesn't use Bancor.

In short, the BNT tokens are for show. They are not necessary.
  • Bancor is essentially a central bank strategy for automatically setting a dynamic peg for new coins.
Behind the hype, Bancor comes down to a very simple idea: it's a smart contract that automatically provides a buy and sell price for a new coin. This is known as a market maker.

Let's suppose that you decide to create your own coin called X. You hype up the X ICO, sell $100M of X, and decide to back it with $10M of BNT. This is your reserve.

What Bancor proposes to do is to create a market for your coins by automatically buying and selling them for prices that would preserve the ratio of your reserve to the total supply. That is, depending on how much reserve the contract has outstanding, it will automatically offer a price for the X tokens. If the reserves are, say, $12M BNT, then it will offer to buy back X coins for higher prices, to bring the reserve fraction back in line to the usual level. If the reserves are low, it will offer lower prices. So you can always buy or sell into the contract, even if there is no one else to buy or sell to.
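This constant-reserve-ratio market making can be sketched in Python using the purchase formula Bancor publishes (a simplified model, not their contract code; the numbers come from the $100M/$10M example above, giving a 10% reserve ratio):

```python
CRR = 0.10  # constant reserve ratio: $10M of reserve backing $100M of X

def price(reserve, supply, crr=CRR):
    # the quoted price keeps reserve / (price * supply) == crr
    return reserve / (supply * crr)

def buy(reserve, supply, deposit, crr=CRR):
    """Tokens issued for `deposit` of reserve currency
    (Bancor's purchase-return formula)."""
    issued = supply * ((1 + deposit / reserve) ** crr - 1)
    return reserve + deposit, supply + issued, issued

reserve, supply = 10_000_000.0, 100_000_000.0
p0 = price(reserve, supply)                # $1 per X token initially
reserve, supply, got = buy(reserve, supply, 1_000_000.0)
p1 = price(reserve, supply)                # buying pushes the quote up
```

Note that the quote moves only in response to trades against the contract itself: nothing in the formula consults what X actually trades for elsewhere, which is the heart of the "trailing the market" problem discussed below.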
  • That's it. That's the whole idea.
It's only 40 lines of code.

Now, there is nothing wrong with raising $3.5M per line of code, if indeed there is a certain technical advantage that those lines of code possess.
  • Everything Bancor can do for you on chain, you can do by yourself off chain.
You yourself could easily have held that $10M in reserve, and you could have intervened in the market at your leisure. If you wanted to, you could follow the exact same algorithm as Bancor and achieve the exact same results. There is no value to be had from using Bancor.

Let us repeat that: in terms of controlling the price of the X token, there is absolutely nothing to be gained by using the fixed Bancor strategy. Whatever equilibrium price Bancor would achieve for you through its on chain management, you could have achieved while managing the reserve yourself, off chain.

To be fair, a Bancor proponent would say, "but you are committing provable reserves to your currency." To which the answer is, "there are many other ways of proving your reserves, none of which tie you down to a dirt simple and flawed strategy for life."
  • It is much better to manage the reserves manually than to commit to the Bancor strategy.
If you held the reserves yourself and issued buy and sell orders, like everyone else, you could not only follow the Bancor strategy to the letter if you liked, but you could use any other strategy that you preferred. You'd have the benefit of the immense neural network that you carry behind your eyes, and the freedom to switch strategies at any time.
  • The Bancor strategy sets prices independent of the market.
The prices that Bancor offers for tokens have nothing to do with the actual market equilibrium. Bancor will always trail the market, and in doing so, will bleed its reserves. A simple thought experiment suffices to illustrate the problem.

Suppose that market panic sets in around X. Unfounded news about your system overtakes social media. Let's suppose that people become convinced that your CEO has absconded to a remote island with no extradition treaty, that your CFO has been embezzling money, and that your CTO was buying drugs from the darknet markets and shipping them to his work address to make a Scarface-like mound of white powder on his desk.

Worse, let's suppose that you know these allegations to be false. They were spread by a troll army wielded by a company with no products, whose business plan is to block everyone's coin stream.

What's your best strategy if you were in control of your $10M in reserves and were manually setting prices?

We don't know about you, but we'd hold the reserves tight and assuage the market. Maybe hold a press conference with the CEO, have the CFO show the books to an auditor, and have the sober and somber CTO describe how he was just making gingerbread houses with powdered sugar for a fundraiser when he happened to sneeze. At the end of such a painful episode, we'd have battle scars, but also $10M in the bank.

If you used Bancor, your Bancor smart contract would have no knowledge of what is happening out there in the real world. It wouldn't know the market movements, it wouldn't know where the coin ought to be, and it would follow its blind strategy of offering bid/ask prices. That strategy involves making a market through thick and thin, without any information about reality. In fact, that reality is determined purely by external markets, and the contract will, unstrategically, use its reserves to discover and match what the markets demand of it at that instant.

In this case, Bancor would offer ever decreasing prices for X coins during a bank run, until it has no reserves left. You'd watch the market panic take hold and eat away your reserves. Recall that people are convinced that the true value of X is 0 in this scenario, and the Bancor formula is guaranteed to offer a price above that. So your entire reserve would be gone.
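A toy simulation of this bank run, using the same constant-reserve-ratio sell formula with illustrative numbers, shows the reserve evaporating wave after wave:

```python
# Toy bank-run: panicked holders dump X into a constant-reserve-ratio
# market maker until its reserve is nearly empty. Parameters illustrative.
supply, reserve, ratio = 100_000_000.0, 10_000_000.0, 0.1

for _ in range(50):                       # 50 waves of panic selling
    tokens = supply * 0.10                # each wave dumps 10% of supply
    payout = reserve * (1 - (1 - tokens / supply) ** (1 / ratio))
    supply -= tokens
    reserve -= payout

print(f"reserve left: {reserve:,.0f}")    # almost nothing remains
```

Each 10% wave of selling drains roughly 65% of whatever reserve remains, so the contract is effectively empty long before the panic subsides.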
  • The Bancor strategy fails to capitalize on excess value
Now, a Bancor proponent would say "ok that was bad, but you're not giving us the full story. After your press release, people would start buying your coins, and the contract would rebuild its reserves."

Ok, let us illustrate what happens on the upside, and make the case that what they are saying is correct, but also incredibly inefficient. It's like saying "hey, your nerves do grow back after spinal injury." Yes, they do. Not so clear that you'll walk again.

Let's imagine that on the day of your big press conference, with the CEO, clean books, sober CTO and a new head of HR, your engineering team announces a new deterministic, asynchronous consensus protocol that takes a constant number of rounds [1], and a perpetual motion machine [2]. The crowds now love you. They go from dumping your shares at any price to fighting each other tooth and nail to buy your shares. In the time between ethereum blocks, a single X is worth infinity dollars on third-party exchanges. You could now literally sell just a single coin and retire forever.

But instead, you're using the Bancor strategy. The smart contract doesn't know that these changes are taking place. What it will do is offer to sell that first X for dirt cheap. It will fail to capitalize, on your behalf, on the difference between infinity and its sad, algorithmically determined low price. It just doesn't know. That extra money will get burned as miners' fees, as people try to outbid each other to own this token at the huge discount offered by the oblivious Bancor platform.

Essentially, we showed what happens when the true market price follows a step function between two extremes. The prices offered by the Bancor market making 40-liner will not follow this step function in tandem -- it is necessarily inefficient. That inefficiency is wasteful and gameable.
  • The algorithmic dampening provided by Bancor isn't desirable for already liquid assets whose value is unstable.
The previous two scenarios explored two extremes. But, to be fair, the following argument could be made in the steady-state case: "if your coins are locked up in this market maker, you lose the ability to insider-trade on the knowledge that most of the rumors about the CTO and CFO are false. However, this is simply a more general tradeoff between flexibility on one side and reliability on the other side -- although you deny yourself the flexibility to make extra profits on private knowledge, your users gain the comfort of knowing that there exists this market maker which will dampen price movements, and that it will not go anywhere."

This is true, the reserves do dampen price movements. If the price is going to be doing some ups and downs around a single mean value, Bancor can help facilitate trades by acting as a market maker.

But the situation is pretty bad when the price is leaving one level for another. If it's going down, then Bancor will bleed its reserves to keep the price close to the higher point that the price used to be at. And if it's going up, Bancor will sell coins at a price lower than the equilibrium point of the market, and therefore slow down the up movement.
  • The Bancor strategy will not do anything to find or maintain the true equilibrium value of an asset.
The preceding cases already illustrated the fundamental problem: Bancor is designed solely to maintain a reserve fraction. It has nothing to do with finding or maintaining the true value of an asset. It doesn't know, yet it'll blindly offer buy and sell bids. It'll use its reserves to discover what prices it ought to set.
  • Bancor thus acts like a dynamically adjusted currency peg.
Currency pegs have been tried again and again: ask any Argentinian for details. Any time you have a central bank trying to use its reserves to buoy up a peg, you have the opportunity for gaming. You'll recall that George Soros masterfully ran the reserves of the Bank of England down, simply by knowing their strategy. In this case, everything is happening on a public blockchain, using a fixed algorithm, with full visibility, while the true price is being set elsewhere through informed buy and sell orders.
  • Bancor presents an arbitrage opportunity. It does not lead the market towards equilibrium, it trails the market, always and by definition.
The mismatch between the Bancor price and the true market price is the cost that Bancor pays to have its smart contract be informed of the true value of the asset.

It is not the case that Bancor is helping the market perform price discovery. Quite the opposite: it's Bancor that is discovering the price, by virtue of offering buy and sell bids that are at odds with the value of the asset on the open market. It relies on arbitrageurs to bridge the gap and bring Bancor up to speed on what is happening. It pays a price out of its reserves for this function.

As such, Bancor will always trail the true value in the open market, and act as a buffer or a dampener. Bancor uses its reserve to be informed of the delta between the price it offers and the price out in the open.
  • Bancor does not "eliminate labor" in price discovery.
Despite Bancor's claims that they eliminate labor in price discovery, their current contract does nothing of the sort. It simply shifts the market maker labor onto arbitrageurs. It is now the arbitrageurs' job to notify the Bancor contract of the true price of an asset, and get paid a programmatic reward to do so.
  • There is no indication that the Bancor strategy is an optimal, or even good, use of reserves to discover the price.
The previous point was that Bancor uses its reserves to figure out where the market is, and sets a price accordingly. This isn't inherently bad, but it's neither what's in the advertised materials, nor is there any indication that the formula they used is the best use of reserves.

Why not, for instance, a market making strategy that approaches the target price using a different formula? Or uses AI techniques? Or uses past history of price action? The depth of the order book? The possibilities, for the strategy that a central bank would follow to manage its reserves, are endless. Bancor picked just the simplest possible option. The space of options remains unenunciated and uncharacterized, and there is no indication why this approach is superior to others in the Bancor materials.
  • Bancor is a net negative in markets with substantial liquidity.
Bancor-style automatic trading should be suspended when the external markets are already liquid. Throwing a dampener at a liquid market, one whose motions are preordained and predictable by all, is not the best use of those reserves.

The sensible way to design Bancor is to place some limits on how much of its reserves it will use in any time period, to avoid the problems we identified. There are currently no provisions for this. Doing this well requires importing some facts about liquidity from the external world into Bancor, perhaps with the aid of oracles such as virtual notary, town crier, oraclize and augur. But at a minimum, some limits on how much of its reserves the contract will spend in any given time period seem called for.
  • Bancor claims to provide liquidity, but does not.
Liquidity is the ability to buy or sell potentially large amounts without moving the price substantially. The Bancor contract does not guarantee this property, despite claiming that it does. Prices can move arbitrarily, and the price slippage is dependent on the amount bought or sold. This is a simple consequence of the fact that Bancor has no risk model and has no smarts. It's simply a market trailing dampener.
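A quick sketch with illustrative numbers shows how slippage grows with order size under a constant-reserve-ratio curve: small orders trade near the spot price, while an order the size of the reserve pays dramatically more per token.

```python
# Slippage under a toy constant-reserve-ratio formula: the effective
# (average) price a buyer pays rises sharply with order size.
# Numbers are illustrative, not Bancor's deployed parameters.
def tokens_for(deposit, supply=100e6, reserve=10e6, ratio=0.1):
    return supply * ((1 + deposit / reserve) ** ratio - 1)

for deposit in (1e3, 1e5, 1e7):
    got = tokens_for(deposit)
    print(f"spend {deposit:>12,.0f} -> avg price {deposit / got:.4f}")
```

In this sketch a $1,000 buy pays essentially the $1 spot price, while a $10M buy pays roughly 39% above it: exactly the slippage that the liquidity claim says shouldn't happen.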
The preceding discussion examined the fundamental Bancor value proposition, in the abstract. Let's now examine its instantiation in Ethereum.

Front Running
  • Bancor is open to front-running.
Bancor's current implementation is open to a simple front-running attack. A miner, upon seeing that someone is submitting an order to buy from Bancor, would squeeze his own buy order ahead of the user's. He would thus always get a rate from the Bancor market maker that is better than what the user gets. Every time.

Bancor has a "minReturn" concept, akin to a limit order, that ensures that if the order goes below a certain level of profitability, the user can cancel it.

But the miners know exactly what limits the users set. Ethereum transactions aren't private. So a miner can squeeze in just the right order ahead of the user.

On the way down, front-running works the same way, with the miners squeezing sell orders ahead of the user, and thus pocketing the price difference.

And it's possible for miners to automate this process with a simple software kit. Further, even non-miners can take advantage of this behavior, by paying higher fees to appear before the Bancor transaction in the block.
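The front-run can be sketched with the same toy bonding curve. The numbers are illustrative, and real profits would depend on fees and parameters, but the ordering advantage is structural:

```python
# Sketch of the front-run: a miner sees a pending buy, inserts his own
# buy before it and a sell after it, pocketing the price move the victim
# causes. Toy constant-reserve-ratio curve; numbers illustrative.
supply, reserve, ratio = 100e6, 10e6, 0.1

def buy(deposit):
    global supply, reserve
    issued = supply * ((1 + deposit / reserve) ** ratio - 1)
    reserve += deposit
    supply += issued
    return issued

def sell(tokens):
    global supply, reserve
    payout = reserve * (1 - (1 - tokens / supply) ** (1 / ratio))
    supply -= tokens
    reserve -= payout
    return payout

miner_tokens = buy(1_000_000)   # miner squeezes his buy in first
buy(5_000_000)                  # victim's buy pushes the price up
profit = sell(miner_tokens) - 1_000_000
print(f"miner profit: {profit:,.0f}")  # a risk-free gain at the victim's expense
```

Because the miner controls transaction ordering, the sandwich is riskless: the victim's own order guarantees the price move the miner sells into.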
  • Bancor's suggested fix to front-running is broken.
In a Twitter exchange, Bancor engineers mentioned that they are planning changes where they would charge the same fixed price to all transactions in the same block.

What they mentioned isn't currently implementable on Ethereum, at least not in a straightforward fashion. Transactions within a block execute completely independently. If there are two transactions T1 and T2 in a single block, T1's execution occurs in isolation, without knowing about T2's presence in the same block. So it's not possible to charge T1 and T2 the same price, because T1 has no idea that T2 will execute in the future.

There are schemes we can imagine, where T1 and T2 are placed on the block first, and then a later "execute" transaction is placed on a subsequent block that looks backwards and gives the same price to T1 and T2. This gets around the limitation in the previous paragraph but it's really ugly and kludgy, not to mention more expensive to execute and open to new attacks of its own.

In general, any scheme that a) provides full information to miners b) doesn't include any nondeterminism and c) is vulnerable to ordering dependencies is gameable. Bancor and all their proposed modifications, including order floors and per-block prices, still satisfy all three.
Bad Math, Rounding and Lack of Testing
  • Bancor reimplemented math.
Bancor ended up reimplementing their own functions for arithmetic. That is, their own add, subtract, multiply, and exponentiation.

As an aside, this is sad to see for two reasons. First, finance applications should not have to worry about overflow errors. Ethereum should provide base types that make sure these kinds of reimplementations are never necessary in application code.

Second, no reimplementation of basic math routines should look the way Bancor's code looks. It's a mishmash of special numbers baked into the code. There is a certain style to writing code that is correct by construction. The code has to be crafted such that, not only is it correct, but it is easy to prove correct upon inspection. Bancor code is nothing like that, and baking magic numbers into the code is something for which we penalize first year undergraduates.
  • Bancor did not test their own math.
The total number of dedicated tests for these math functions is 6. Multiplication is tested solely by multiplying 2957 by 1740.

The coverage of the exponentiation function is abysmal. There are 0 directed tests that cover this critical function. There are other tests that take a quick path through exponentiation, but there are more than 30 different cases, and 30 different magic numbers embedded in the code. The existing, indirect, checks cover only a handful.
  • Arithmetic errors can be fatal.
Special magic constants litter Bancor code. So much so that we found it difficult to test this code for correctness ourselves. There is an art to writing correct, clean code, and Bancor exhibits none of it. An error in any one of the constants would be catastrophic.

And even simple rounding errors can be problematic in this game. A rounding error can enable an attacker to buy-then-sell a token to Bancor, and make a fraction of a cent in the process. If they can earn such a small quantity above transaction fees, then they'd do exactly that all day long to drain funds.

A final corner case to check for is what happens when the reserves are down to zero. The code needs to be able to recover from that scenario. The current code has special case handling for selling the entire supply. This seems strange for the continuous, path-independent Bancor formula.
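A deliberately simplified sketch (invented prices, not Bancor's actual constants) shows why even a one-unit rounding discrepancy matters:

```python
# Hypothetical illustration (invented numbers, not Bancor's actual
# constants): any rounding that lets the sell price exceed the buy
# price, even by one base unit, turns the contract into a money pump
# once the per-trip gain beats transaction fees (ignored here).
reserve = 10_000                     # contract reserve, in base units
buy_price, sell_price = 1000, 1001   # a one-unit rounding discrepancy

attacker = 0
while reserve >= sell_price:
    reserve += buy_price             # attacker buys one token...
    attacker -= buy_price
    reserve -= sell_price            # ...and immediately sells it back
    attacker += sell_price
print(attacker)                      # 9000: one unit leaked per round trip
```

The loop extracts one base unit per round trip until the reserve can no longer cover a sale; an automated bot would run it all day long.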
Integration and Scale
  • Bancor does not support the notion of supply caps for Bancor-based tokens.
If the tokens that are based on Bancor are not actually securities, but serve as access tokens for a system, then they will typically correspond to some right to service. In many, though admittedly not all, scenarios, the system can only deliver a finite amount of service, so the designers will want to set a supply cap on their tokens. Yet Bancor's smart contract will create tokens out of thin air in response to demand.
  • Bancor does not scale.
Bancor generates continuous on-chain transaction volume for arbitrage on tokens that, by definition, nobody cares about. If anyone did care, those tokens would have liquidity and would not need Bancor. It requires continuous on-chain activity for what is currently primarily off-chain economic action.
Users Overpay
  • Bancor shortchanges its users.
Since the Bancor contract cannot issue fractional tokens, it simply takes your money and gives you a number of tokens rounded (see rounding above) to an integer. Since you don't know when exactly your transaction will execute when you submit your bid, you have to pay both transaction fees to the miner to mine your transaction, and throw in an extra dollop of cash or coins to Bancor such that the contract can execute your trade without running under. If you guess wrong, you'll end up, say, getting 0.99998 coins, which conveniently rounds down to 0.

This will undoubtedly cause a lot of frustrated messages from users, who might feel that they submitted an honest bid, and got something short of their expectations.

The Bancor whitepaper claims that you can predict what you will get back, but that's patently false: in the presence of concurrent users submitting transactions, you cannot predict at all. Any extra amount you overpay is usurped by the Bancor contract.
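A toy sketch (hypothetical prices) of how an honest bid rounds to nothing when concurrent orders move the price between submission and execution:

```python
import math

# Sketch of the overpayment problem (hypothetical numbers): the contract
# issues whole tokens only, so a bid that lands you 0.99998 tokens after
# concurrent trades have moved the price rounds down -- here, to zero.
def tokens_issued(payment, price):
    return math.floor(payment / price)   # no fractional tokens

price_when_submitted = 1.00    # the price the user saw
price_when_executed  = 1.00002 # after concurrent orders executed first

payment = 1.00                 # "one token's worth", or so the user thought
print(tokens_issued(payment, price_when_submitted))  # 1 token expected
print(tokens_issued(payment, price_when_executed))   # 0 tokens received
```

The only defense is to overpay deliberately, and any surplus stays with the contract.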
Potential Reentrancy Issues
  • Bancor code is "difficult to prove correct."
That's code for "it's a mess."

Well-written code looks like a work of art. It doesn't matter if it's C or Go or Ruby or Prolog; in fact, it looks especially like a work of art if it's well-written C code. But it does matter if it's Javascript, because well-written Javascript code is like the mythical Yeti: often discussed, with snippets of evidence for its existence, but no one has seen it in its full, corporeal form.

The Bancor code has a distinct Javascript quality to it. This has been the hallmark of badly written smart contracts: they have messy code paths, don't follow best practices, and happen to work by the skin of their teeth. The code works on a good day with the wind on its back. But when it comes to corner cases, things get awfully complex. In the course I teach, when I point out such situations to students, they go into a diatribe that goes "well, you see, the problem can't arise because for it to happen, A has to happen first, but A is guarded by B, and C ensures that B cannot happen." This is a temporal logic proof, over a certain code path. It's complex, fragile in the face of code evolution, and totally unacceptable in professional code development. We want flat, simple invariants, maintained by following best practices. Without regard for the correctness of the student's code, we grade such cases a 2 out of 10. They ought to know better, and they're 20 year olds with two years of programming experience. Bancor definitely has to know better, given The DAO experience.

So, in what follows, we will point out violations of best practices as problems. We did not have time to look into whether they are exploitable, because it's tedious to do so and it's immaterial. There ought to be no violations of best practices in a good contract.
  • Bancor code has a reentrancy problem in the sell() function.
There is what appears to be a reentrancy problem (recall that an exploitable reentrancy bug affected The DAO) in BancorChanger. Now, not every reentrancy bug is exploitable, and not every reentrancy bug gives rise to a DAO-like disaster. But nevertheless, there ought to be no state changes after a funds transfer, and there it is, right there on line 389 of the current (undeployed) code.

We cannot tell yet if this is exploitable, but it is disconcerting. There is no reason to perform a state change on something critical, such as virtualBalance, after the funds have been transferred.
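To make the hazard concrete, here is a toy Python model (this is not Bancor's Solidity code; the name virtual_balance merely echoes the discussion) of a sell() that transfers funds before updating state:

```python
# Toy model of the reentrancy hazard: sell() makes its external call
# *before* updating virtual_balance, so a malicious token's transfer
# hook can re-enter sell() against the stale balance.
class VulnerableChanger:
    def __init__(self, balance):
        self.virtual_balance = balance

    def sell(self, amount, recipient):
        assert self.virtual_balance >= amount, "insufficient balance"
        recipient.receive(self, amount)    # external call first...
        self.virtual_balance -= amount     # ...state change after: the bug

class MaliciousToken:
    def __init__(self):
        self.reentered = False

    def receive(self, changer, amount):
        if not self.reentered:             # re-enter exactly once
            self.reentered = True
            changer.sell(amount, self)     # balance not yet debited!

changer = VulnerableChanger(balance=100)
changer.sell(100, MaliciousToken())
print(changer.virtual_balance)             # -100: paid out twice
```

Moving the state change before the external call (the checks-effects-interactions pattern) makes the re-entrant sell() fail its balance check, which is the standard fix.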
  • Bancor code has a different reentrancy problem in the change() function.
There is a similar reentrancy problem in change(). When going from one token to another (recall the "double coincidence of wants"! Perhaps it refers to the double coincidence of two different h4x0rs wanting the funds in Bancor), it performs, first, a buy() followed by a sell(). But buy() and sell() are both functions that call out of the Bancor contract. So it's possible to get into a messy situation if the token transfers during the buy() or sell() operation call back into the Bancor contract.

Again, we cannot tell yet if it is exploitable, but it should be avoided. All transfers should happen after all the state changes.
  • Bancor code assumes that ERC20 tokens based on Bancor are cooperative.
The code assumes that the tokens layered on top of Bancor are cooperative. But ERC20 tokens can contain malicious code. It is difficult or impossible for Bancor engineers to vet the token code that is layered on top, and even if they do, it may be possible to change the ERC20 token contract after deployment.
The Bancor code falls short of the narrative used to sell it. Blindly making markets using a strategy with no proof or reasoning for why it's good is a flawed idea. Additional problems, such as front-running, potential reentrancy issues, poor code quality, lack of testing, and the general needlessness of inventing a new currency, give us pause.

Keep in mind that there are lots of flawed ideas in the world. Not every flawed idea is catastrophic -- we've done many things that definitely were not proven good ideas, and survived to be writing these words. Perhaps miners will be altruistic and avoid front-running, perhaps all the magic numbers are correct even though they are untested, perhaps there are no rounding errors that matter.

But we know for sure that the Bancor approach is not the best use of one's currency. Someone armed with additional information not on the blockchain can certainly make markets in a better informed fashion. Since Bancor can be simulated off chain, committing to it on chain provides little to no additional value, except to limit one's future moves.

Overall, it seems that the current Bancor approach is fundamentally inefficient and will bleed reserves. Assuming that there are no immediately exploitable issues in the code, and assuming that miners play nice and avoid front-running, we expect that we will see Bancor-based coins discover the limitations of the Bancor approach.

To end on a positive note, the optimistic way to look at the Bancor episode is that it's the first, flawed step in an interesting direction. It will surely be followed by more sophisticated approaches that have a better chance of living up to the hype. People are beginning to look for ways to codify Janet Yellen and place her on the blockchain. True, she need have no fear of losing her job right now. But sooner or later, we will have actually smart contracts that provide strong guarantees as they manage a currency.
    [1]    This is impossible.
    [2]    This is also impossible.

We are grateful to Vitalik Buterin for his insightful feedback on earlier drafts of this document. Note that our technical analysis does not constitute investment advice. Crypto prices rarely follow rational rules; historically, some of the flawed systems that we have highlighted in this blog have done well, partly due to the increased attention they received. Caveat emptor.

Recent Developments
  • Bancor delays token activation and their switch to trading BNT on their own platform.
  • The latest, hastily applied, patches to the Bancor code seem to revolve around rounding errors and precision loss.
  • This post discusses the rounding problem, mentioned above. It also mentions a social attack stemming from people not understanding how fractional reserves and the Bancor formula work.
Announcing The Town Crier Service


We are delighted to announce the (alpha) public launch of Town Crier (TC) on the Ethereum public blockchain. TC acts as a secure data pipeline between smart contracts and websites, or what’s commonly called an oracle.

The TC logo: A handbell + ‘T’ + ‘C’

TC isn’t a mere oracle, though. Thanks to its use of a powerful new form of trusted hardware (Intel SGX), it provides unprecedented data integrity. Equally importantly, TC enables general, flexible, and practical smart-contract execution with strong data privacy.

If you’d like to plunge into technical details and play with TC right off the bat, you’ll find all you need on the TC website. You can use TC’s Ethereum smart contract front end. We’ve also partnered with SmartContract so you can experiment with coin-price queries through their easy-to-use interface.

You can also read our published technical paper here. For those wanting a gentler introduction, though, read on...

Why do we need oracles?
Smart contracts confined to on-chain data are like sports cars on local roads. They’re purring with latent power, but can’t do anything really interesting.

To unleash their potential, smart contracts need access to the wide open vistas of data available off-chain, i.e., in the real world. A financial smart contract needs access to equity, commodity, currency, or derivative prices. An insurance smart contract must be aware of triggering events such as bad weather, flight delays, etc. A smart contract allowing consumers to sell online games to one another must confirm that a seller successfully transferred game ownership to a buyer.

Latent power waiting to be unchained...

Today, though, smart contracts can’t obtain such data in a highly trustworthy way. And they can’t achieve data privacy. These deficiencies are starving smart contract ecosystems of the data they need to achieve their full promise.

We’d even argue that this problem of data starvation was one cause of the DAO debacle. The DAO attracted $150+ million because lack of data meant a lack of good smart contracts for people to invest in.

What’s the problem?
Let’s take flight insurance as a running example. It’s been widely explored by the community and you can find an implementation on the TC website.

Suppose Alice buys a flight insurance policy from a smart contract MolassesFlights that pays $100 should her flight, WA123, be delayed or cancelled. Clearly, MolassesFlights needs to learn the status of Alice’s flight. How can it do this? Smart contracts (and blockchains) can’t query websites, as they don’t have internet connections. And even if they did, there’s a more basic problem. Suppose that the creators of MolassesFlights and its users all trust a particular flight-status website. In order to determine the status of a particular flight, someone (say, Carol, a MolassesFlights operator) must query the site with Alice’s flight information (Q = “WA123”) and obtain a response (e.g., R = “Flight WA123 delayed”). The problem is that there’s no way for Carol to pass along R such that someone else knows that R, as relayed by Carol, is correct and free from tampering.

This lack of transferability holds even if Carol obtained R over HTTPS, the protocol used for secure (authenticated and encrypted) web connections. Carol learns that R is authentic because she logged into a website authenticated by a valid certificate. But HTTPS doesn’t enable her to convince another party that the website sent R. (HTTPS doesn’t support data signing.) As a result, there’s no trustworthy way to feed R to a smart contract.

There’s another problem as well. For Alice to buy flight insurance, she must specify her flight to MolassesFlights. If she sends it in the clear, she reveals her itinerary on the blockchain, and thus to the entire world -- an unacceptable breach of privacy.

Enter TC.

TC in a nutshell
Ah, the jet age…

TC’s core code runs in a secure enclave, an environment protected in hardware by a new Intel technology called SGX (Software Guard eXtensions). When TC queries a website over HTTPS and obtains a response R, other parties can confirm that R is authentic because the hardware (CPU) itself prevents any modification of TC or its state. SGX also enables TC to handle confidential data privately within the enclave.

Specifically, SGX provides three key properties:
  • Integrity: An application running in an enclave is protected against tampering by any other process, including the operating system.
  • Confidentiality: The state of an application running in an enclave is opaque to any other process, including the operating system. (See our paper for some important caveats.)
  • Attestation: The platform can generate an attestation, a digitally signed proof that a given application (identified by a hash of its build) is running in an enclave.
Using Property 3 (Attestation), a user can verify that a valid TC application instance is running in an SGX enclave. SGX attestations can bind a public key PK to an application instance such that the corresponding private key SK is known only to the application.

Provided that a user trusts the TC implementation (whose source code we publish), therefore, she can trust that data TC has signed using SK are valid. The same goes for a smart contract, of course. Given PK, it can verify the correctness of data emitted by TC.
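The trust chain can be sketched in Python with stand-in primitives (plain hashes standing in for real SGX quotes and digital signatures; every name here is hypothetical, not the actual TC or Intel API):

```python
import hashlib

# Conceptual sketch of the TC trust chain: an attestation binds a public
# key PK to a measurement of the audited code; a client accepts data
# signed under PK only if the attestation checks out. Hashes are used as
# stand-ins for real quotes and signatures.
def digest(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p.encode())
    return h.hexdigest()

TRUSTED_TC_BUILD = digest("town-crier-source-v1")   # hash of audited code

def verify_attestation(attestation, pk):
    # Stand-in for attestation verification: checks that `pk` was
    # generated inside an enclave running the audited TC build.
    return attestation == digest("sgx-quote", TRUSTED_TC_BUILD, pk)

def verify_response(data, signature, pk):
    # Stand-in for signature verification under the enclave's key.
    return signature == digest("sign", pk, data)

# Enclave side (simulated): produce an attestation and a signed response.
pk = "enclave-public-key"
attestation = digest("sgx-quote", TRUSTED_TC_BUILD, pk)
response = "Flight WA123 delayed"
signature = digest("sign", pk, response)

# Relying user/contract side: trust flows from the build hash to the data.
assert verify_attestation(attestation, pk)
assert verify_response(response, signature, pk)
print("response accepted as authentic")
```

The point of the sketch is the ordering: trust in the published source, then in the attested key, and only then in any individual datum signed under it.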

Once a user has established trust in a TC instance, Properties 1 and 2 come into play. Property 1 (Integrity) prevents anyone from tampering with the execution of TC and thus ensures that TC transmits only data validly obtained from target websites. In our flight insurance example, this property ensures that flight data will be exactly as reported by the source website. Property 2 (Confidentiality) enables TC to handle private data. In the same example, Alice can encrypt her flight information (Q = “WA123”) under TC’s public key. TC sends query Q to the website over HTTPS. Consequently, Alice’s flight will not be disclosed on the blockchain or to TC’s operators.

In general, you can think of SGX as running an application in a black box or simulating its execution by a trusted third party. Even the owner of the machine on which the application is running can’t break its integrity or confidentiality.

The figure below shows the basic flow of data between a user smart contract (User Contract), the TC smart contract front end (TC Contract), and the TC Server. Here, prog denotes core TC code, params is the set of parameters in a query, and data is the website response (R in our above example). See our paper for details.

We provide more details on the under-the-hood workings of TC in our blog appendix.

From Alpha to Omega: TC today and tomorrow
TC today is an “alpha” system. It’s fully functional, and supports several different query types:
  • Flight data,
  • Stock tickers,
  • UPS tracking,
  • Coin market prices, and
  • Weather data.
Flight data queries are encrypted as described above. Other queries are in the clear.

Basic blockchain-to-server data flow in TC

As you can see, the current set of query types and functionality are fairly limited. Thus our current “alpha” label. We’ve got a lot more in the works, though, including:
  • Custom queries: We’re planning to allow users to define their own query types. In this way, users will be able in effect to create their own oracles. (This is possible today in SmartContract, with whom we’ve partnered in our alpha launch.)
  • Account-scraping: One of the most powerful opportunities enabled by TC stems from its ability to handle confidential data. Our Steam exchange query, which is under development, exemplifies this opportunity. This query will ingest a pair of user identities A and B, a user credential belonging to A (and 2FA passcode, if necessary), and a game identifier. TC will respond by indicating whether A transferred ownership of the game to B. To do this, TC logs into user A’s account. Thanks to the use of SGX, all of this can be done without exposing A’s credentials to anyone, including TC operators. This is just the tip of the iceberg. The ability to handle data confidentially is a stepping stone to full-blown versions of off-chain confidential smart-contract execution.
  • Data-source and TC-server redundancy: If a data source goes bad, e.g., a source of flight information is incorrect, TC will blindly and faithfully relay incorrect data. An extension of TC, however, can combine data from multiple sources to reduce the risk of error at the source. For example, TC can fetch stock ticker data from five different websites and take the majority value. Another problem is that if TC goes down, a relying user contract will fail to receive timely data. We plan to address this problem through deployment of redundant servers, an approach that can also minimize the (improbable but not inconceivable) risk of an enclave getting physically compromised.
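The majority-vote idea behind data-source redundancy can be sketched in a few lines (a toy illustration, not TC code):

```python
from collections import Counter

def majority_value(readings):
    """Combine readings from several independent sources; return the value
    reported by a strict majority, or None if no majority exists."""
    if not readings:
        return None
    value, count = Counter(readings).most_common(1)[0]
    return value if count > len(readings) / 2 else None

# Five stock-ticker sources, one of them faulty:
assert majority_value([142.5, 142.5, 142.5, 142.5, 999.0]) == 142.5
# No strict majority -> no answer rather than a wrong one:
assert majority_value([1, 2, 3]) is None
```

Returning no answer on a split vote is a deliberate choice: for a smart contract consumer, a missing value is usually safer than a plausible-looking wrong one.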
We look forward to your comments and thoughts about how TC can evolve and best serve the Ethereum and broader smart contract communities.

The fine print
TC is patent-pending. We expect to commercialize versions of TC for other blockchains, particularly permissioned ones. It’s also possible that we will charge for special query types and/or bulk queries in Ethereum later down the road.

The current Ethereum public blockchain TC functionality, however, is free, and we plan to keep it that way. Users only need to pay the gas costs for a query. We intend to enlarge our free service substantially. As academics and smart contract enthusiasts, we’re interested above all in helping smart contracts achieve their full potential and seeing the grand experiment called Ethereum flourish and empower users in ways yet to be imagined.

More to Come
TC is just one of many systems coming out of IC3. Several near the end of the pipeline and worth looking out for are Teechain, a secure layer-2 payment system; HoneyBadger, a highly robust asynchronous BFT system; Snow White, a new, highly efficient consensus protocol; and Solidus, a confidentiality-preserving on-chain settlement system.

Town Crier was created by students and faculty in IC3 (meaning that the students did the real work, of course). Yan Ji, Oscar Zagarra, and Fan Zhang created our alpha service, with generous help from Lorenz Breidenbach and Phil Daian.

We want to thank Intel for their close collaboration, particularly Mic Bowman and Kelly Olsen for their support and advice, Sergey Nazarov at SmartContract for supporting our alpha launch, and the Ethereum Foundation, especially Vitalik Buterin, Alex Van de Sande, and Vlad Zamfir, for their input on TC during early development.

Appendix: A peek under the covers
While our general approach is conceptually simple, realizing TC using SGX requires some finesse.

To begin with, enclaves don’t have network stacks, so they must rely on the operating system (OS) for network functionality. Running outside the TC enclave is a Relay that handles TC’s network connections.

The whole point of running in an enclave, though, is to avoid trusting the operating system. How can we build a trustworthy application coiled in the embrace of a potentially hostile operating system?

As mentioned above, TC communicates with HTTPS-enabled websites, and thus uses TLS. To prevent OS tampering with TC connections to websites, we’ve partitioned a TLS implementation (mbedTLS) such that the handshake and record layer portions reside in the enclave, while the TCP layer resides outside the enclave. In this way, the code inside the enclave has a secure channel to the website. In effect, we treat the OS itself as a network adversary—exactly what TLS is meant to protect against by design.

As we’ve noted, TC has a smart contract front end, which we simply call the TC Contract. We therefore also have the problem of securing communication between this front end and TC code in the enclave. How do we prevent the OS from corrupting blockchain data?

The simplest solution would be to run a client (e.g., Geth) inside the enclave. But this would bloat what’s called the trusted computing base (TCB), the code that users need to trust in order to trust TC. The basic TC code, plus Geth, plus any wrapper around Geth would be a lot of lines of code.

We solve this problem in a counterintuitive way. TC doesn’t actually verify incoming data from the blockchain. It hopes that the Relay (and OS) are passing it valid queries from the TC Contract, but it will accept and process bogus queries. The trick in our implementation is that TC signs queries along with responses. This way, the TC Contract, which records the queries it handles, can weed out invalid queries. And, happily, any corruption of queries is visible on the blockchain, providing incontrovertible evidence of corruption. The schematic below shows how these various pieces fit together for queries to a hypothetical service.

Elements in green are those trusted by a User Contract querying TC. Note that there’s no need to trust the Relay, i.e., it sits outside the trusted computing base.

Basic TC architecture
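The query-weeding trick can be sketched as follows. In the real system the enclave signs with a hardware-protected key whose attestation users of the TC Contract can check; here, purely for illustration, an HMAC under a shared key stands in for that signature, and all names are hypothetical:

```python
import hashlib
import hmac

ENCLAVE_KEY = b"enclave-signing-key"   # stand-in for the enclave's attested key

def enclave_answer(query, response):
    # The enclave does NOT verify that the query really came from the TC
    # Contract; it just signs the (query, response) pair it was handed.
    tag = hmac.new(ENCLAVE_KEY, repr((query, response)).encode(),
                   hashlib.sha256).digest()
    return query, response, tag


class TCContract:
    def __init__(self):
        self.recorded = set()          # queries the contract actually issued

    def request(self, query):
        self.recorded.add(query)

    def accept(self, query, response, tag):
        expected = hmac.new(ENCLAVE_KEY, repr((query, response)).encode(),
                            hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None                # forged or tampered signature
        if query not in self.recorded:
            return None                # bogus query injected by Relay/OS: weeded out
        return response


contract = TCContract()
contract.request("flight WA123")

# Honest delivery succeeds...
ok = contract.accept(*enclave_answer("flight WA123", "delayed"))
# ...while a signed response to a query the contract never issued is rejected.
bad = contract.accept(*enclave_answer("flight XX999", "on time"))
```

Because rejected queries are submitted on-chain before being discarded, any Relay misbehavior leaves a public trace, as described above.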

Other oracle designs
There are alternative ways to create oracles, and it’s beneficial for the community to be able to draw on multiple sources of data.

One approach to oracle design, exemplified by Gnosis and Augur, leverages prediction markets. These systems create a financial incentive for a community to express truthful evaluations of queries, and thus data that may be consumed by smart contracts. This approach has some nice features. For example, it permits queries to be expressed in the form of natural language, something that TC can’t easily support (at least, not without some pretty sophisticated NLP).

Prediction markets don’t scale to large numbers of queries, however, as the cost per query in human brainpower and capital commitments is high. Nonetheless, this approach does serve a particular niche very well.

Oraclize is another option. It uses a clever protocol that leverages TLSnotary to create attestations (and has some pending work on other options, e.g., Android-based software attestation). Oraclize has been an important catalyst within the Ethereum community, as it has helped address the data-starvation problem. Its assertions of authenticity, however, are ultimately weaker than those obtainable through use of a hardware root of trust. Most importantly, without trusted hardware, it’s not possible to offer true data confidentiality. Even encrypted queries must be decrypted by and are therefore exposed to the oracle and/or a supporting service. Thanks to its simple and elegant security model, SGX enables TC to skirt these problems.

Levels of Enlightenment for Researchers

Having gone through graduate school, and having watched hundreds of others do the same, I have come to realize that there is a common progression. I have observed three distinct stages, or levels of enlightenment, that individuals proceed through when faced with an intellectual challenge.

I want to chronicle this seemingly universal process, partly to seek feedback from those of you with deeper insights than mine, partly to provide reassurance to young graduate students, partly to help people who are entering my field, but mostly to serve as a framework for how to evaluate communities and improve discourse. For any community will consist of various individuals, scattered throughout this continuum between the various stages of enlightenment. And it's useful to be able to categorize and guide appropriately.

One caveat: I am not claiming to speak from a position of authority on this topic. All of this is "meta" and outside my area of expertise. I can claim to have reached Level 3 only in a limited number of narrow fields, and I'm constantly trying to explore new fields myself. So for all I know, maybe there are 16 stages of enlightenment and I've only seen 3! But I do think it's useful to discuss how people respond to intellectual challenges, what this means for those around them, and perhaps how we can all engage in better discussions.

Level 0: The Newcomer

Almost all newcomers to an area are greeted by a barrage of ideas developed by people who came before them. Even if an area seems "brand new," say, computing in the 40s, it buds off of existing fields, like electrical engineering or math, and entails the use of techniques drawn from both. Mastering these techniques is often a difficult process, for the road ahead seems winding and the end is not clear.

For one, learning a new area requires adopting the terminology and frameworks used within that area, some of which might be confusing and even off-putting. For instance, what we call "slow start" in networking refers to exponential growth, one of the fastest ramp-up functions possible. Every new system to master comes with idiosyncratic complexities that reflect its history. The bigger the gap between how a clever person would design something from scratch and how it is right now due to its history, the more off-putting and inaccessible it is.

And the ties between the disparate lines of work aren't crystal clear. There are few people, for instance, who can relate how the work on failure detectors, in distributed systems, is related to consensus protocols. The relationship between a new consensus protocol and others that came before it may not be clear, even to the authors of the work themselves.

And there is seemingly a ton of material to master. This is especially true when one is learning on their own and there isn't someone to guide them through the enormous data bank of published work. And it can be confusing, if not incredibly unproductive, because there is often a giant gap between what is said publicly and what is known among the experts. For instance, there are some well-publicized results that are known by experts to be misguided or misleading, such as the CAP Theorem. A simple reading path will often guide one into the underbrush, where even the trees, let alone the forests, are not visible. Self-study in such areas will lead one into ratholes.

Level 0 Reaction: Overwhelmed

The standard reaction most people have at this stage is to feel overwhelmed. If you talk to graduate students in their first two years, they universally feel like they have far too much reading to do.

There are some common pitfalls at this stage.

Lack of integration is probably the biggest problem that people encounter as they try to make sense of the various papers they are reading. Instead of fitting the new material into a cohesive whole, they learn and memorize results. Instead of applying uniform standards and criteria to different ideas, they evaluate each idea separately, within its own framework. For instance, it is quite common for people who have done some but not enough reading in databases to use the acronyms ACID and CAP, without realizing that the C in these acronyms is completely different. Or, for another instance, for people who hold day jobs building distributed systems to claim that Paxos is a Byzantine Fault Tolerance algorithm (yes, there are Byzantine Paxos variants, but no, Paxos does not handle Byzantine failures).

Lack of time is an issue that plagues the part-timers. In most areas of study, it is impossible to perform the necessary synthesis in one's mind except by reading a large number of papers in a relatively short span of time (say, 4-8 papers per week for 10 weeks). The mental cache needs to contain the material to be cross-connected, and that can't happen at a rate of 1 paper per week.

Finally, lack of breadth is a common problem. You might say "hey, wait a minute, in this age of ADHD, you're telling me that it's lack of breadth, and not lack of focus, that is a problem?" Indeed, in my experience, focus isn't critical in the early stages. There is so much to learn that it doesn't matter if you attack left or right or center -- as long as you do the reading voraciously, you will master it (and if you're not doing the reading, we go into the "lack of time" category above). The problem cases I've seen are invariably people who are so focused on one topic that they try to go deep and skip the foundation-building that comes from reading a broad base of papers.

Most people never proceed past Level 0. It is possible to always remain behind the reading, just with a slightly expanding vocabulary and a constantly foggy understanding. Being in a structured degree program helps yank one out of this zone, but is not a guarantee.

Level 1: The Half-Expert

As people gain more expertise in an area, the Level 0 feeling of "inferiority in front of the collective works of mankind" gives way to its opposite. They have now mastered a sub-piece of human knowledge. They speak the lingo. And most importantly, they have acquired the ability to critique it. They now possess destructive powers.

Specifically, people at this level can read a paper and either figure out its critical weaknesses, or they have built a catalog of quick critiques they have memorized. For instance, every time someone says proof of stake, a half expert will cite the "nothing at stake" problem, to knowing nods from other Level 1s and 0s. Or someone will introduce a new database and a Level 1 will ask if a database is "CP or AP." These cringeworthy discussions are signaling mechanisms that Level 1s have adopted to distinguish themselves from Level 0s.

Most Level 1s are incredibly good at identifying problems with past work. In fact, they can't help but do this; they've been trained to become weakness-finding-machines. This skill is a critical enabler for what they need to do at the next level up. But leveling up is hard.

So they see it as their hard-earned privilege and, now, birthright to attack every idea that came before them, as well as every new idea that they encounter. Snark and negativity flow with copious abandon from half-experts.

Level 1 Reaction: Dismissal and Destruction

And as we know from the Lord of the Rings and the entire "power-corrupts" canon, it's hard not to wield destructive powers. This is where we begin to see damaging behaviors. No work is good enough. Everything that has ever come before is crap. The greats on whose shoulders they rise clearly didn't know anything about the demands of today's systems.

There are three behavioral patterns that are common among half-experts.

First is sheer laziness. When asked to critique an idea, Level 1s will simply google related words, parse what they see (for they can now do this properly, as they are not Level 0s), and deliver a non-original critique that, in essence, was provided to them by someone else. Some might hang out on IRC channels to pick up others' ideas and regurgitate them. Many Level 1s have pet issues that they feel define them. For instance, some might harbor a special love or special hatred for certain techniques (e.g. certain algorithms, approaches and so forth), ready to go into a canned rant every time these are mentioned.

Second, they exhibit an obsession with their own ideas. At this stage, ideas are rare and hard to come by for the Level 1. Typically, their solutions are complex, under-thought and under-justified. It is quite common to see solutions that are essentially kitchen sinks that "solve" multiple problems at the same time. Achieving novelty with a simple and elegant solution is hard: you really have to invent something new. Creating something novel by gluing together disparate components is almost trivial. For instance, in a world where the only rain protection consists of hats, inventing an umbrella would be the next best invention, but requires intellectual prowess that a Level 1 simply lacks. It's much easier to invent a novel hat, incorporating a propeller, a wide brim, and a feather on top. Because no one has done that before, it's automatically novel. Crucially, its weaknesses cannot be readily discovered by other lazy Level 1s by simple googling. Too wet? It has a wide brim. Too hot? Propeller fixes that. What's the weird feather for and why didn't you design something simpler? Ah, but you see, the wide brimmed hats for sale never anticipated the needs of my particular hat usage, and you're a dummy if you don't see why the feather is absolutely essential.

Finally, in my experience, Level 1s have few ideas, but what few ideas they have, they repeat over and over again. People say that when you have a hammer, everything looks like a nail. I don't know if that's universally true, but what I have seen is that, to a blacksmith's apprentice who worked so hard to forge that first hammer, everyone else's hammers look totally inferior.

Level 1 half-experts often develop narratives to justify their destructive behavior. One narrative is that whatever they are working on is too important, too exceptional, too precious. This is wrong, of course, unless they are working on nuclear launch systems, but then again, Level 1s are never allowed to work on nuclear launch systems. I've met people who worked on (non-nuclear) launch systems, as well as many people who worked on systems that actually serve about a billion people. None of these experts had 1/100th the hubris of a half-expert. Another narrative, commonly used by Level 1s in industry, is to justify destructive behavior by citing their business concerns. Somehow, especially in the US, bad businesspeople firmly believe that "all is fair in business" or "it's just business," as if those words could ever provide an adequate basis for destructive behavior.

Of course, any given community will have some Level 1 members in transition. In unhealthy or immature communities, where there are large groups of people at Level 0, these people might even be revered. After all, they have a better understanding of issues than Level 0 muggles who haven't even done the required reading. And they might seem to have an insightful comment or two every now and then. And if they can't produce an insightful comment themselves, they can always acquire and resell someone else's ideas.

Quite a few people get stuck at Level 1 and never proceed beyond. Interestingly, such people are quite rare in PhD programs: the number of people who dropped out of our PhD program because they were stuck at Level 1 is incredibly small. What is much more common is the self-taught student who managed to reach Level 1 on his or her own, but is unable to make further progress. They often form or join a tribe of like-minded people, for much needed validation and for protection against criticism from people who know better. And they frequently make much of the noise. People at higher levels are busy with productive activities, while the Level 1 typically spends more time jockeying for social recognition than building technical competence.

Level 2
There is no characterizable Level 2. Between Level 1 and the ultimate Level 3 lies some chaotic times, different for every individual, marked by much hard work.

Level 3: Nirvana

At some point in the process of trying to master, and then advance, a new area, something magical happens. The researcher realizes that, indeed, all past work has weaknesses, and that this is normal, for it is quite difficult to achieve perfection. Their destructiveness gives way to a positive attitude that can extract valuable contributions from other people's work. They realize that we are all in this together. They become able to function in a cohesive environment of like-minded peers. And their productivity skyrockets.

Those of you who are familiar with successful software houses (like Google on the large company end of the spectrum, and Chain on the startup side, and everything in between) will immediately recognize that these are the qualities that most happy science and tech communities encourage. Hacker News, for all its many faults, works very hard to avoid snark in an effort to keep out Level 1 discourse, and it succeeds somewhat. Academia, of course, is all about principled behavior, from design to critique.

I'm not going to say anything about the value of reaching Level 3, or whether it makes a community more healthy. The causality could run the other way -- instead of happy communities being a result of people who have reached Level 3, it could be that happy communities allow people to reach the final stage of enlightenment.

And are there communities that are devoid of Level 3s, yet able to function? Undoubtedly so.

But in my experience, the most stable and productive environments tend to have a large number of people who have reached this final stage. Such people are able to provide and receive criticism without making it personal. They can see the value in half-baked ideas and foster them, instead of shooting them down on sight. New people fearlessly join such groups because they know that their ideas will get fair consideration.

I don't want to paint too rosy or idealistic a picture. We know that even academia, which tries very hard to foster a Level 3 environment, has personal rivalries and petty infighting. But the discourse, at least in the communities I have been a part of, is almost always civil, the problem cases are confined, and there exist common values to help steer the group out of ruts of negativity.

Overall, the bottom line is that this is the exact progression I have seen hundreds of times in graduate programs at several different schools. Those who reach Level 3 seem to be far more productive. Of course, it's certainly possible to be productive without fully going through the stages -- my own institution has awarded PhD degrees to people who never progressed beyond Level 1. But somehow, there seems to be a strong correlation between the Level 3 mindset and the ability to advance human knowledge.

Perhaps it's worth talking about the nebulous Level 2, the all too critical transition between Level 1s and Level 3s. I'd like to hear from the readers on what they think enables people to cross over the chasm. Is it a hard challenge? Mentorship? Recognition? Something else? If so, how do we help more researchers achieve higher levels of enlightenment?

Hijacking Bitcoin: Routing Attacks on Cryptocurrencies

At a high level, Bitcoin is a randomly established peer-to-peer network composed of thousands of nodes and tens of thousands of connections, which relies on flooding to propagate transactions. As an attacker, being able to prevent the spread of information in such a network seems unrealistic, if not impossible.

Yet, this apparently sensible observation does not take into account that the Internet routing infrastructure (i.e., the set of protocols that govern how Internet traffic flows) is notoriously insecure and can easily be manipulated by attackers to intercept Bitcoin traffic. It also does not consider that large Internet Service Providers (ISPs), such as the ones sitting in the core of the Internet, might be naturally crossed by a large fraction of Bitcoin traffic already. Since Bitcoin messages are exchanged in clear text and without integrity checks, any (malicious) third-party on the forwarding path can eavesdrop, drop, modify, inject, or delay Bitcoin messages. The question is then: Is Bitcoin vulnerable to such routing attacks?

In our recent paper Hijacking Bitcoin: Routing Attacks on Cryptocurrencies, to appear at the IEEE Symposium on Security and Privacy, we shed light on these aspects by studying the security of Bitcoin from an Internet routing perspective and quantifying the potential disruptive effects of network attackers. Among other things, we show that:
  • Bitcoin is surprisingly centralized from an Internet routing perspective: 20% of the Bitcoin nodes are hosted in fewer than 100 IP prefixes. To put this in perspective, there are close to 600,000 IP prefixes advertised on the Internet today. At the same time, a few well-established ISPs (e.g. Hurricane Electric) naturally see a large fraction of the Bitcoin traffic. Together, these two characteristics make large-scale routing attacks surprisingly practical.
Only 13 ASes host 30% of the entire network, while 50 ASes host 50% of the Bitcoin network.
  • Because of its centralization, partitioning the Bitcoin network and isolating 50% of its mining power only requires a small routing attack, one which is orders of magnitude smaller than the attacks routinely seen in the Internet today. Any malicious ISP with access to the Internet routing infrastructure can perform this attack, which starts to be effective after only a few minutes (according to our own measurements on the live network).
  • Any ISP transiting Bitcoin traffic can delay the propagation of mined blocks (for up to 20 minutes) in a stealthy way, even if she sees only one direction of the traffic.
  • Bitcoin traffic is impacted by routing attacks today. We found many examples of actual routing attacks that ended up diverting Bitcoin traffic.
  • While multi-homing and end-to-end encryption (BIP 151) reduce the risks of network attacks, they do not prevent them. Our results show that even heavily multi-homed mining pools are vulnerable to routing attacks. Further, end-to-end encryption does not prevent an attacker from dropping Bitcoin connections.
In this post, we take a closer look at these issues. We start by describing the two possible network attacks which we consider, namely partitioning and delay attacks, along with their potential impact on Bitcoin. We then discuss some short and long-term countermeasures that would increase Bitcoin's robustness against network attackers. More details on our work can be found on our website.

Partitioning attacks
With partitioning attacks, an attacker aims at splitting the Bitcoin network into (at least) two disjoint components such that no information (e.g. transactions) can be exchanged between them. To partition the network into two components, a network attacker intercepts all the traffic destined to all the Bitcoin nodes contained within one of the components and drops any connection to the other component. To intercept traffic, a network attacker relies on vulnerabilities in the Border Gateway Protocol (BGP), the only inter-domain routing protocol used on the Internet today, which does not validate the origin of routing announcements. These attacks, commonly referred to as BGP hijacks, involve getting a router to falsely announce that it has a better route to some IP prefix. By hijacking all the IP prefixes pertaining to the nodes in one component, the attacker can effectively intercept all the traffic exchanged between the two components. Once on path, the attacker can sever all these connections, effectively disconnecting the two components. An animation of the attacks can be found on our website.

Illustration of how an AS-level adversary (AS8) can intercept Bitcoin traffic by hijacking prefixes to isolate the set of nodes P = (A, B, C, D, E).

The extreme centralization of Bitcoin from an Internet viewpoint makes partition attacks particularly effective, as only a few IP prefixes need to be hijacked. Indeed, our measurements show that 50% of Bitcoin mining power is hosted in only 39 prefixes (i.e., in 0.007% of all Internet prefixes). This allows an attacker to isolate ~50% of the mining power by hijacking only these 39 prefixes. Much larger BGP hijacks (involving orders of magnitude more IP prefixes) are routinely seen in the Internet today.
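To see why such concentration matters, here is a toy calculation of how many prefixes an attacker would need to hijack to cover a target share of mining power. The numbers below are made up for illustration; our paper measures the real distribution:

```python
def prefixes_to_isolate(power_by_prefix, target_share=0.5):
    """Greedy estimate of how many IP prefixes an attacker must hijack
    to cut off `target_share` of total mining power (toy model)."""
    total = sum(power_by_prefix.values())
    covered, chosen = 0.0, []
    # Hijack the most power-dense prefixes first.
    for prefix, power in sorted(power_by_prefix.items(), key=lambda kv: -kv[1]):
        if covered >= target_share * total:
            break
        chosen.append(prefix)
        covered += power
    return chosen

# Hypothetical distribution: a handful of prefixes concentrate most hash rate.
power = {"p1": 30, "p2": 15, "p3": 10, "p4": 5, "p5": 5, "p6": 35}
# Hijacking just two prefixes ("p6" and "p1") already covers >= 50% here.
```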

While intercepting Bitcoin traffic using BGP hijacking is effective, any un-intercepted Bitcoin connection bridging the two components would quickly render the partition ineffective. Due to factors such as multi-homing, some nodes cannot be prevented from exchanging information, forming persistent connections that bridge the partition. The presence of such connections makes partitioning attacks more challenging for the attacker, but not impossible. In our paper, we elaborate on how an attacker can provably identify and mitigate these persistent rogue connections by reducing the size of the partition she is trying to achieve.

By partitioning the network, the attacker forces the creation of two parallel blockchains. After the attack, all the blocks mined by the side with the shorter chain will be discarded together with all included transactions and the corresponding miners’ revenues. Moreover, discarded transactions will be irrecoverably canceled if there exist other transactions in the prevailing branch of the chain which spent the exact same Bitcoins (conflicting transactions).

Delay attacks
Bitcoin nodes are designed to request blocks from only a single peer to avoid overtaxing the network with excessive block transmissions. The block is requested again (from another peer) only if the request is not answered after 20 minutes. This design decision, coupled with the fact that Bitcoin traffic is unencrypted, allows for a powerful attack in which anyone intercepting Bitcoin traffic can delay block propagation on the corresponding connections. To do so, the attacker performs simple modifications to the content of the Bitcoin messages she intercepts. As Bitcoin messages are not protected against tampering, neither the receiver nor the sender has any indication that the message has been modified, allowing the attacker to stay under the radar. The actual impact of such an attack depends on the victim and ranges from double spending (for merchant nodes) to wasted computation power (for miners). An animation of the attack can be found here.

Illustration of how an AS-level adversary (AS8) which naturally intercepts a part of the victim's traffic (node C) can delay the delivery of a block to it for 20 minutes.
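To illustrate why tampering goes unnoticed: Bitcoin P2P messages carry only a keyless checksum (the first four bytes of a double SHA-256 over the payload), which any on-path attacker can simply recompute after modifying the payload. A minimal sketch:

```python
import hashlib

def checksum(payload: bytes) -> bytes:
    # Bitcoin's message checksum: first 4 bytes of SHA256(SHA256(payload)).
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]

def make_msg(payload: bytes):
    return payload, checksum(payload)

def verify(msg) -> bool:
    payload, chk = msg
    return checksum(payload) == chk

# An on-path attacker intercepts a message, alters the payload, and simply
# recomputes the keyless checksum; the receiver cannot tell the difference.
original = make_msg(b"getdata block 0000...abcd")
tampered = make_msg(b"getdata block 0000...beef")   # attacker's substitute
assert verify(original) and verify(tampered)
```

Only a keyed integrity check (a MAC or a signature, which the attacker cannot recompute) would let the receiver detect the substitution.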

As with partitioning attacks, the centralization of Bitcoin nodes in a few networks and prefixes, combined with the centralization of mining power in a few pools, makes delay attacks practical. We found that three ISPs together see 60% of all Bitcoin traffic. If malicious, these ISPs could therefore effectively and invisibly keep many Bitcoin nodes uninformed. Unlike partitioning attacks, though, we also found that even these powerful attackers could not disrupt the entire cryptocurrency. So, even though many nodes would be slowed down, Bitcoin, as a whole, would still function.

We verified the practicality of a delay attack by performing one against our own nodes. We found that a network attacker that intercepts only half of a victim’s connections can keep it uninformed for 64% of its uptime. We also found that the vast majority of the Bitcoin nodes (70%) are vulnerable to such an attack today.

How can we prevent network attacks?
Fortunately, there are both short- and long-term countermeasures against network attacks. First, peer selections could be made routing-aware. Bitcoin nodes could, for example, aim at maximizing the diversity of the Internet paths seen by their connections to minimize the risk that an attacker can intercept all of them. Moreover, nodes could monitor the behavior of their connections to detect events like abrupt disconnections from multiple peers or unusual delays in block delivery. These events could serve as an early indicator of a routing attack and could, for instance, trigger the establishment of extra randomly-selected connections. Finally, solutions like end-to-end encryption would also help (especially against delay attacks). Yet, encryption alone would not be sufficient to protect against partitioning attacks as an attacker can still drop encrypted Bitcoin connections.
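As a sketch of what routing-aware peer selection might look like, here is a toy greedy heuristic that prefers peers hosted in distinct ASes (mapping each peer to a single AS is a simplification; real path diversity depends on full AS paths):

```python
def diverse_peers(candidates, k):
    """Pick up to k peers, preferring peers in ASes not already covered,
    so that no single ISP is likely to see every connection.
    `candidates` maps peer -> AS number (toy model of path diversity)."""
    chosen, seen_ases = [], set()
    # First pass: one peer per previously unseen AS.
    for peer, asn in candidates.items():
        if len(chosen) == k:
            break
        if asn not in seen_ases:
            chosen.append(peer)
            seen_ases.add(asn)
    # Second pass: top up with remaining candidates if needed.
    for peer in candidates:
        if len(chosen) == k:
            break
        if peer not in chosen:
            chosen.append(peer)
    return chosen

# Hypothetical candidates spread over three ASes:
peers = {"a": 174, "b": 174, "c": 3356, "d": 6939, "e": 6939}
# With k = 3, the selection spans three distinct ASes instead of risking
# all connections transiting the same provider.
```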

The purpose of our research is to raise the Bitcoin community's awareness of the need to prevent routing attacks from disrupting the cryptocurrency. While we have no evidence that large-scale routing attacks against Bitcoin have already been performed in the wild, we believe a few key characteristics make these attacks practical and potentially highly disruptive. These characteristics include: the high centralization of Bitcoin (from a mining and routing perspective), the lack of authentication and integrity checks, and some design choices pertaining, for instance, to how a node requests a block. We are currently in the process of implementing some of the countermeasures highlighted above. Clearly, we wouldn’t mind some help in doing so!
Revealing the hidden links in the Monero blockchain


If you follow the world of cryptocurrencies, you probably know that Monero is a privacy-centric cryptocurrency that has surged in the market since late 2016. My coauthors and I, from Princeton and UIUC, have just released some research showing that its privacy claims, particularly for transactions made prior to February 2017, are significantly overstated. Judging by the invective our research has generated, the Monero community would much prefer that you focus only on their newest offering, while ignoring the unfulfilled promises of their recent past.

For a bit of background: Just like in Bitcoin, each Monero transaction has some inputs and some outputs. Each input is a link to the output of a previous transaction, forming a graph.

In Bitcoin, these linkages are transparent and explicit. Every block explorer website lets you follow the graph, clicking the links to move from one transaction to the next. These linkages have also been the basis for graph analysis, clustering, and deanonymization analysis.

A typical user's mental model of Monero transactions is informed by this illustration.

In Monero, however, the links between transactions are obscured. Each transaction input can contain decoy links called “mixins.” Every transaction input has multiple incoming links, and every output has multiple outgoing links; in principle, no one can tell which is the real one among the mixins. You may have formed a mental model based on the popular illustration shown to the right.

Furthermore, if you visit any Monero block explorer website, you'll see the transaction graph is presented as a dead end. You can't tell where each input comes from, and you can't tell where each output goes. Just as an example, have a look at how two typical block explorers, ChainRadar and MoneroBlocks, render this transaction from May 2016.

This transaction has 5 inputs and 5 outputs. Each input has 6 mixins (7 links in total, including the real one), indicating a high level of privacy is desired. The outputs are dead ends, and you can't tell which of the 7 links is the real one. After all, Monero is the "opaque blockchain".

In our research paper, "An Empirical Analysis of Linkability in the Monero Blockchain," we show that in fact, for most of Monero's blockchain history, the mixins haven't done much good. Most transactions made prior to February 2017 actually are linkable. Here's the problem. In the past, 0-mixin transactions (those that opt out of privacy altogether) were commonplace, and most coins were spent by them. Including these coins as decoys doesn't do any good, because it's already obvious where they've actually been spent. However, the Monero wallet does not take this into account. The result is that we are able to identify the correct links for the majority (62%) of 1+ mixin Monero transactions made from 2014 through 2016. The Monero blockchain has provided little more privacy than Bitcoin.
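
The core deduction is a simple iterative elimination, which can be sketched in a few lines. This is an illustration of the idea, not the actual analysis pipeline from our paper; the identifiers are made up:

```python
# Each transaction input carries a set of candidate outputs it might be
# spending (the real one plus its mixins). A 0-mixin input names its spent
# output outright, which eliminates that output as a decoy everywhere else;
# repeating this deduction cascades through the chain.

def deduce_links(inputs):
    """inputs: list of candidate-output sets; returns {input index: real spend}."""
    resolved = {}
    changed = True
    while changed:
        changed = False
        for i, candidates in enumerate(inputs):
            if i in resolved:
                continue
            remaining = candidates - set(resolved.values())
            if len(remaining) == 1:  # only one candidate left: the real spend
                resolved[i] = remaining.pop()
                changed = True
    return resolved

# Input 0 is a 0-mixin spend of 'a'; that unmasks input 1, then input 2.
print(deduce_links([{"a"}, {"a", "b"}, {"b", "c"}]))  # {0: 'a', 1: 'b', 2: 'c'}
```

Note that if no 0-mixin spends are present, the cascade never starts, which is why newer transactions are harder to link.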

To illustrate the problem and make its impact crystal clear, alongside our paper we've launched a block explorer, called MoneroLink, that reveals the hidden linkages in Monero transactions. If you visit the MoneroLink page for the transaction mentioned above, you'll see we are able to identify the correct incoming link for 3 of the 5 inputs, and we're able to identify the correct outgoing link for 2 of the 5 outputs, despite the large number of mixins.

The MoneroLink block explorer reveals the hidden linkages among Monero transactions (from 2014 through 2016).

We readily note that for transactions made since March 2016, the rate at which we can link transactions has steadily decreased, although the correct link can still be guessed with higher probability than you would hope. In this post I'm eliding almost all the technical details; a more detailed analysis can be found in our technical paper.

Is this a surprise?

If the existence of a link-revealing Monero block explorer comes as a surprise to you, then you're not alone. This is the first block explorer of its kind. To date, there has been no indication that the Monero blockchain is vulnerable to such wide-scale analysis. And it clearly contradicts the privacy claims in Monero's marketing and popular media.

However, the reaction to our work from Monero developers and the discussion community on /r/monero has been to say that they have known this all along.
“This is not news. Anyone who has done any basic reading on Monero has known this for a long time”(tweet)

What “basic reading” refers to here is a pair of reports, MRL-0001 and MRL-0004, from 2014 and 2015 respectively, which introduce the main vulnerability that our explorer relies on (as well as even more sophisticated concerns outside our scope), explaining that they could plausibly lead to “a critical loss in untraceability across the whole network if parameters are poorly chosen and if an attacker owns a sufficient percentage of the network.” [Emphasis mine.]

Neither of the MRL reports conveys that this is an actual problem affecting actual transactions. Instead, the papers are abstract, describing mathematical models of marbles in urns and hypothetical attack scenarios involving Simpsons characters. Most importantly, no prior report has made any empirical analysis based on actual blockchain data.

The contribution of our work is to show that 1) the parameters have been poorly chosen, 2) there doesn't need to be any attacker, the problem manifests all on its own, and 3) we confirm that indeed the result has been a critical loss in untraceability.

On the soothing quality of the MRL reports

I'd like to call attention to a particular pattern of discussion, in which MRL reports have been used (unintentionally) to quell further investigation. At various points in Monero's history, several forum posters have stepped right up to the threshold of this revelation, but have been gently steered away from proceeding further, using the MRL reports to end the discussion.

Case 1: In December 2015. A redditor asks “Educate me: Zero mixins forgo all the privacy advantages of Monero, right? But do they damage or interfere significantly with other transactions?”

Redditor VedadoAnonimato responds:
“I'm not even sure whether the mixin selection algorithm rejects outputs that have previously been spent in no mixin transactions. If the selection algorithm can't reject these, then the situation is dire, since currently the majority of Monero transactions have no mixin." (permalink)

Monero developer smooth_xmr confirms VedadoAnonimato's concerns, but then refers to the MRL report to convince readers that the problem is already dealt with.
"The math in MRL-0001 shows that those outputs will eventually become irrelevant though... we plan to blacklist those from any future mixins which will immediately solve the problem” (permalink)

The authoritative “math in MRL-0001” is the final word, effectively ending the discussion. Incidentally, the proposed change to “immediately solve the problem” was never implemented (the measures actually implemented have had a gradual effect instead). Regardless, even if it were implemented, “solve the problem” here means that future transactions will enjoy better privacy. This doesn't do a bit of good for anyone who relied on the privacy of prior transactions!

Case 2: In the following thread, a user asks, “Is Monero Really Anonymous?” and posts a link to an essay speculating on a form of linkability.
“This has been discussed at length. … [MRL-0001] explains the known issues related to tracing transactions, how well monero already does at them, and where it needs to do better.” (permalink)

Further down thread, “There's nothing to discuss that hasn't already been extensively documented in those linked whitepapers.”

Case 3: Here's another example showing how the MRL reports have been interpreted:
“Peters comment is MRL-0001 which got fixed with MRL-0004 - a totally theoretic attack anyway :-)” (bitcointalk)

Our research clearly shows that this attack is not theoretical, but absolutely affects actual transactions. It has taken empirical blockchain analysis to show this.

In fact, VedadoAnonimato (again in late 2015) even highlighted the need for this missing empirical approach:
"It would be interesting to know if somebody has actually gone through the data and checked how many transactions can be partially or totally traced through the problems pointed out there." (permalink)

The thread, of course, ends immediately on that cliff hanger. The blockchain data has always been there for anyone to take a look at, but it's only now that we've done the work.

The truth hurts, but time heals

The Monero cryptocurrency and community will undoubtedly survive this revelation. The best response now would be to acknowledge that this result comes as a surprise and no one knew just how many prior transactions were vulnerable to linkability analysis. Even if some folks have had a suspicion deep down that this might be the case, no one has particularly wanted to look into the data and see the unpleasant truth.

If the Monero developer community instead rallies around the “This research is not new!” narrative, then that bodes poorly for them. It means that the community's leadership has knowingly allowed users to be misled about the privacy of prior transactions. The right thing to do would have been to issue an advisory that communicates the problem very clearly:
WARNING: Monero users who made transactions in 2014-2016 expecting unlinkability should be warned that their transactions are most likely linkable after all.

This is exactly the message that the MoneroLink block explorer conveys. It's better late than never!

The MRL reports have served as a form of dog-whistle communication. The reports are written (unintentionally, but perhaps subconsciously) in densely coded language that avoids alarming users. Developers have been working in earnest to improve future versions of the software, but have not sought to empirically quantify the existing problem or to clearly warn users about the privacy implications. Real users today can easily be impacted by a loss in privacy of yesterday's transactions.

I'll wrap up with a final quote from smooth_xmr:
"Most transactions in 2014 and 2015 (and even most of 2016) were mining and trading. There were precious few ways to use it for anything else. There was a crypto kingdom game at some point, and a dice site, that's about it." (permalink)

This is plausible, and hopefully it's true, since it would mean the potential harm caused by this vulnerability is limited. However, even if only a small number of users relied on Monero for privacy in 2016 and earlier, then their privacy matters, and this warning is intended for them.

Addendum 1. More responses from the Monero community
As anticipated, this release stirred up a flurry of shitposting and character attacks, on reddit and on twitter, from a loud minority of grumpy folks. I'm resisting the urge to curate the worst of these for a cheap vindictive pleasure. Most of the rest are editorial quibbles about how we present our results.

Instead I'd like to thank Monero core dev smooth_xmr for an excellent discussion thread with technical feedback. We take your criticism seriously and will incorporate it in updates to our paper. I also want to thank SamsungGalaxyPlayer and KaroshiNakamoto for insightful (but still very critical) posts, including a great thread calling for a more rational and level-headed public response to a research release like this.

Addendum 2. Isn't this a paid hit piece for Zcash?
Disclaimer, disclaimer. I've been involved with Zcash for years (as well as for several other cryptocurrency projects, like Tezos, Ethereum), as a consultant and recently as a founder of its community-supporting Foundation. This is transparently and appropriately disclosed on the front page of our technical report. It's also clear to see from my website, twitter profile @socrates1024, and is duly reported to my employer, the University of Illinois. My research is not funded by Zcash, and the other coauthors have nothing to do with Zcash at all.

Many of the initial reactions to our research predictably focused on questioning our motives as a way of drawing attention away from the substance of our report. By all means, be skeptical of our findings, and ideally reproduce them independently.

Addendum 3. On the timing of our release coinciding with the Monero hard fork upgrade
Whoops. We dropped our paper within a few hours of a planned hard fork flag day. Maybe no one will believe me, but this timing was entirely coincidental, and I was not aware that Monero was going to upgrade then. This is totally my fault, because in hindsight there were plenty of places I could have read about this. (I've been reading lots of monero reddit threads recently while preparing our article, but only by keyword search, not by reading the front page.)

In particular, Monero has established the admirable policy of scheduling hard fork upgrades regularly, on April 15 or on three other predictable days of the year, and has committed to this policy for over a year.

I can see how this timing looks like hostility. If I were paying more attention and had noticed, I absolutely would have waited until after the hard fork to release news that would likely distract the developers and broader community. Originally we hoped to release this work two weeks ago, but I got distracted by the Financial Cryptography conference and didn't finish the website until now. I'm relieved this hard fork upgrade went fine, as have all of Monero's hard fork upgrades to date.
A Thinking Person's Guide to the Latest Bitcoin Drama


The war for control over Bitcoin's future took a bizarre turn yesterday when Greg Maxwell, a developer with the Bitcoin Core project and CTO of Blockstream, announced that a certain chip contained the ASICBOOST optimization, and suggested an immediate activation of Segwit in response.

Here's my quick response to this turn of events:
  1. So far, there's absolutely no independently confirmed evidence of ASICBOOST actually being used on any chip.
  2. If ASICBOOST were actually used, we'd see ample evidence on the blockchain. This evidence would be in the form of transactions that are picked according to an algorithm that shuffles the transactions to create a sought collision, instead of sorting them by fee-per-byte.
  3. I do not have the time to perform a thorough study, but I manually examined a randomly picked block mined by Antpool a few days ago. As you can also examine for yourself, the transactions are ordered essentially by fee-per-byte, which is not what we would see if they were shuffled to create collisions.
You can stop reading here. Given #1, #2, and #3, there's absolutely nothing but baseless accusations flying around. Someone needs to do a thorough study to substantiate the claims.

On the off chance that Antpool is actively utilizing ASICBOOST, there's still quite a lot to be said about the claims:
  • Building more efficient mining chips is what miners are supposed to do, as it secures the blockchain. Framing an optimization as an attack is disingenuous. Going from 28nm to 16nm was not an attack on the network. Better mining algorithms have never been considered an attack.
  • There are countless patents involved in the manufacture of ASICs, spanning layout, doping, lithography, all the way to packaging. Layout alone can confer advantages that far exceed the paltry 30% possible with ASICBOOST. The Bitcoin community believes it understands hashing algorithms, but that's no reason to ignore patents related to ASIC manufacture in general. If a case is to be made against patents, then it needs to be made properly, across all patents related to ASIC manufacture.
  • Taking action to block this is similar to the government taking measures to block Elon Musk's more efficient electric cars. Specifically prosecuting a chosen miner, in the current political climate, would send a terrible message of absolute centralization in the hands of one particular developer group, and it would severely damage Bitcoin mining and the coin's security.
  • Segwit is being offered as a solution to ASICBOOST. Yet Segwit has at best a tenuous connection as a response to ASICBOOST mining. There are countless other, non-controversial solutions that can disable ASICBOOST, should one choose (wrongly) to do so.
  • The fact that Segwit is being offered as an inevitable and only solution to ASICBOOST suggests something about the true motivations behind this reputational attack.
  • There is a history of reputational attacks against people who have spoken up against Segwit. This is toxic and not healthy for the space.
  • Some people have asserted that ASICBOOST forms a conflict of interest (CoI) that explains why Jihan opposes Segwit. This is based on the false premise that everyone would adopt Segwit if it weren't for a CoI. There are many other publicly discussed reasons to oppose Segwit, based on technical and philosophical grounds.
  • If commercial "conflicts of interest" are a showstopper that should cause a company to abdicate its voice in the ecosystem, then practically every significant company would have to have no say in Bitcoin's evolution. Every other company is trying to influence the coin to evolve according to their own vision, while taking steps to maximize its role within that vision. We call this "capitalism." If we suddenly instituted draconian measures to cut companies out due to CoIs, one of the first companies with CoIs would be Blockstream.
  • Overall, the community seems to be at a point of rapture. As we all move forward, let's make sure that we take every step for good, substantiated reasons backed by science. Reality is too important to let it be swayed by arguments to emotion and by the noise generated by troll armies, on either side.
Let's do our due diligence first, and dispassionately find a coherent strategy that represents the best path forward for all.
BitFury's Smallest-First Mining Is Bad For Bitcoin


He shot a lot of cucumbers when most normal people shoot one deer.

Bitcoin's mempool has reached unprecedented levels recently. We had about $1B stuck in transactions waiting to be confirmed, and the number of waiting transactions broke an all time record at 70,000. Amidst all this, we had BitFury come in and mine a number of blocks using a non-standard "smallest transactions first" strategy. I want to quickly explain why this is actually bad for Bitcoin.

If all the miners did this, it would lead to lower throughput overall, a less predictable network and a net initial increase in fees followed by everyone paying only the lowest possible fee, destroying the fee market and the ability to tell apart useful transactions from spam.

Quixotically, BitFury got praise for what they did, which either shows that most people are innumerate or that they can only look ahead 0-ply where actions have no consequences -- they live in the here and now, like goldfish, with nary a care, and follow the good book when it comes to making decisions, as opposed to thinking through what actions lead to what outcomes. That's why they get excited when they hear "it's time for game theory." Well, it's pretty much never time for game theory. But it is time to think critically about the effect of a miner picking transactions in a different order.

This kind of deviation creates terrible incentives, chief among which are (1) to break large transactions up into small pieces, and (2) to attach the most minimal fee or no fee at all. In turn, this makes the inputs into the system, the transactions, take up precious additional space and actually increases the overall demand for blockspace.

And breaking the correspondence between fee-per-byte and confirmation time wreaks havoc on wallets' ability to predict wait times, and thus their ability to attach meaningful fees. It completely kills the "fee market" that we all heard so much about.

There have been only two arguments in favor of mining smallest transactions first:
  • "We have a transaction backlog. There are lots of transactions. Let's clear out as many of the transactions as possible."
  • "The same strategy is proven and working even in the Apple Store in queue. They ask you and if you have a simple question, they will give you priority, decreasing the queue length." This quote was provided to me by Alex Petrov of BitFury over Twitter DM.
Let's see why these two arguments fall apart under scrutiny. I didn't have much time to write this blog post, so its coverage will jump between these two arguments chaotically, which is quite fitting given the topic.

Smallest First vs. Most Needy First
The shift in BitFury's behavior comes down to a switch from a "largest fees per byte first" strategy to "smallest first".

This switch makes sense only if what you care about are superficial metrics, and then, only if you have a tiny time horizon and are not concerned with the big picture.

Scheduling Theory and Bitcoin
The core of the second argument is based on a simple result from scheduling theory, which says that shortest-job-first is the optimal strategy if what you care about is to minimize completion time, that is, time from submission of a transaction to its appearance in a block.

The reasoning for this is simple: assume there are transactions A, B and C. Suppose A is 1MB large while B and C are 500KB each. Mining A first, then B and C together yields a collective wait time of 1 (for A) + 2 (for B) + 2 (for C) = 5 block-times for 3 users, or 5/3 blocks per transaction submitter. Mining B and C first, followed by A, yields a collective wait time of 1 (for B) + 1 (for C) + 2 (for A) = 4 block-times for 3 users, or 4/3. The latter is clearly shorter than the former. If we apply this logic iteratively, we'll find that it makes the most sense for the miners to prioritize small transactions first, and mine transactions in sorted order, from smallest to largest.
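
The arithmetic above is easy to check with a small simulation; this sketch packs transactions greedily into 1 MB blocks in the given order and sums completion times, in block-heights:

```python
def collective_wait(order, block_size=1_000_000):
    """Pack transactions (sizes in bytes) into blocks in the given order;
    return the sum of completion times, measured in block-heights."""
    height, used, total = 1, 0, 0
    for size in order:
        if used + size > block_size:  # this transaction starts a new block
            height += 1
            used = 0
        used += size
        total += height
    return total

txs = [1_000_000, 500_000, 500_000]  # A, then B and C
print(collective_wait(txs))          # A first: 1 + 2 + 2 = 5 block-times
print(collective_wait(sorted(txs)))  # B, C first: 1 + 1 + 2 = 4 block-times
```
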

So Alex's logic seems, superficially, to make sense: if we are given an inflexible collection of transactions in a batch at the beginning of time, if the users submitting them are honest and not trying to game the system, and our goal is solely to minimize user wait time in a world where every transaction has a user waiting for it to clear and all users are equally important, then picking the small transactions first is a good strategy. That's why you see people at well-run airports, banks, and customs lines pulling people who need little time to service out of the line to expedite their case. Anyone with a valid passport and simple itinerary knows the pain of getting stuck behind that guy who is missing paperwork, was born on the Marshall Islands just as it was changing hands -- he is going to take a while, and more people would feel better if he stood aside and let the line clear up. This is also why, at the Apple Store, they ask you if your transaction is short and easy, and pull you out: they don't want you to get stuck behind the problem person who has a complicated transaction involving 2 returns, 3 gift cards under slightly different names, and the involvement of an Apple Genius. Shortest-Job-First is a provably optimal strategy, for serving a static batch of inflexible, unchanging requests.

But the logic fails because the premise is not correct. Smallest-first is a terrible, short-sighted strategy when applied to Bitcoin.

Bitcoin transactions are flexible. A large transaction can easily be turned into multiple small transactions, that, together, take up more space than the original transaction. Mining small transactions first provides an incentive to wallets to break up their transactions into such smaller pieces. But many small transactions that accomplish the same task as a single big transaction actually consume more space than the single big transaction. So, in aggregate, they reduce the system's economic throughput. They are a wasteful way to use our precious block space.
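
A rough byte count shows the waste. The sizes below are approximate pre-Segwit P2PKH figures, used purely for illustration:

```python
TX_OVERHEAD = 10   # version, locktime, input/output counts (approximate)
INPUT_SIZE = 148   # typical P2PKH input, bytes
OUTPUT_SIZE = 34   # typical P2PKH output, bytes

def tx_size(n_inputs, n_outputs):
    return TX_OVERHEAD + n_inputs * INPUT_SIZE + n_outputs * OUTPUT_SIZE

# One transaction spending 10 coins to a payee plus change...
single = tx_size(10, 2)
# ...versus ten one-input transactions accomplishing the same transfer.
split = 10 * tx_size(1, 2)
print(single, split)  # 1558 vs. 2260 bytes: splitting wastes ~700 bytes
```

Every split transaction repeats the fixed overhead and adds extra outputs, so the same economic transfer consumes more of the scarce block space.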

Bitcoin transactions are also not equally important. The main metric we have for telling apart the important transactions from worthless spam is the fee they carry: important transactions carry more in fees. Processing the transactions that are unimportant first will reduce the wait time for transactions whose users do not mind waiting, while making the people with time-limited transactions suffer longer. It's a strategy purely for optics, aiming to lower numbers of waiting transactions, not maximize the actual good that the system is doing or serve the people who have signaled their need for service.

Finally, Bitcoin users want predictability. Those people in favor of a fee market especially require predictability: if the market is being cleared in a haphazard, random order, then the wallets will have no feedback on whether their fees are high, low, or just right. Chaos isn't a good strategy when you want to develop a feedback loop.

Fundamental Problem
Overall, smallest first is a fundamentally bad idea because it breaks the connection with the fee paid and the resources consumed. Recall that the block space is the limited resource here, and the fees paid are our way of gating that access, of measuring who needs it most. Hence, the default metric to sort transactions by is satoshis per byte -- this ensures that the transactions that do the most economic good get prioritized first.
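
The default policy is simple to express; this sketch fills a block by fee density, highest satoshis-per-byte first (the transaction data is invented for the example):

```python
def build_block(mempool, max_block_bytes=1_000_000):
    """mempool: list of (txid, size_bytes, fee_satoshis). Greedily fill the
    block with the highest fee-density transactions that still fit."""
    by_density = sorted(mempool, key=lambda t: t[2] / t[1], reverse=True)
    block, used = [], 0
    for txid, size, fee in by_density:
        if used + size <= max_block_bytes:
            block.append(txid)
            used += size
    return block

mempool = [("a", 250, 50_000), ("b", 1000, 60_000), ("c", 400, 4_000)]
print(build_block(mempool))  # fee/byte: a=200, b=60, c=10 -> ['a', 'b', 'c']
```

Note that "b" pays the largest absolute fee, yet "a" is prioritized: the fee is weighed against the block space the transaction consumes.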

Switching to small-transactions-first enables BitFury to post this tweet and collect kudos from the community for entirely the wrong reasons.

First, the system should not be processing low importance transactions over more important ones. Maximizing economic good requires going after the most economically useful transactions first, and our only proxy for "economically most useful" is to sort by largest-fees-per-byte-first.

Second, one miner sweeping up the smallest, lower-fee transactions doesn't really do any good for the system. Someone still has to mine the larger transactions. Those outstanding large transactions still need to be cleared. A temporary, short term increase in the transaction rate for one miner doesn't achieve anything in the longer term. It's like a teenager cleaning up her desk and sorting her socks, really quickly -- it's a nice but ultimately pointless gesture, her tasks aren't done until the entire room is cleaned up, and the sum total of tasks still takes the same amount of time. She left the hard work to others.

Third, this policy provides an incentive to wallets to break up their transactions, to issue many smaller transactions in lieu of a single large one. That would lead to more wasted space overall, and a net decrease in throughput. A miner that did this in a predictable fashion would be damaging to the network.

Fourth, prioritizing small transactions can mislead or confuse the fee estimators built into wallets. I don't have the time to examine the various fee estimators today, but I believe the default fee estimator assumes that miners will maximize their profits by mining largest-fees-per-byte-first, and uses the fee distribution in the mempool to pick its own fees. In this case, seeing high-fee transactions sit around and not get mined will cause users to bid even higher fees, leading to an added expense for users until the wallets adapt. Now, this is not the only way to estimate fees (in fact, it is notoriously difficult to estimate fees). There may be fee estimators out there that instead base their decisions on the fees found in mined blocks. In that case, they will issue transactions with low fees, which will lead to stuck transactions and a bad user experience.
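
A mempool-based estimator of the kind described might look like the following sketch. It is hypothetical; real estimators (such as the one in Bitcoin Core) are considerably more sophisticated and lean on confirmed-block statistics:

```python
# Bid at a high percentile of the fee densities waiting in the mempool,
# assuming miners take the largest fee-per-byte transactions first. If
# high-fee transactions linger (as under smallest-first mining), this
# percentile, and hence the suggested bid, only ratchets upward.

def estimate_fee_per_byte(mempool_densities, percentile=0.9):
    """mempool_densities: satoshis-per-byte of currently waiting transactions."""
    ranked = sorted(mempool_densities)
    idx = min(int(len(ranked) * percentile), len(ranked) - 1)
    return ranked[idx]

print(estimate_fee_per_byte([5, 10, 20, 40, 80], percentile=0.8))  # 80
```
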

Finally, once the wallets adapt to smallest-first mining, their best strategy will be to attach the lowest possible fee to every transaction. The mempool then becomes a Hail Mary zone, where people toss their transactions cut up into small little chunks, and hope that the miners will sweep through their precious transfers just by sheer happenstance. The fee market will collapse, and we can no longer have a rational market, because the miners will not be behaving rationally, putting up the space in the blocks up for auction. Imagine a web search service, where you pay solely the fees to place your ads, and the ads are included based on chance. This is not a winning strategy for anyone. Just ask Yahoo.

I'm Doing Work In An Irrational Order to Reduce My Workload
Alex Petrov of BitFury has also expressed concern that, when transactions sit around in the mempool and expire, they get resubmitted, causing the miners to have to re-validate them. This is tantamount to saying that, because you are overloaded by your boss asking you to do the same N things over and over again, you'll do them in an irrational order. Once again, Alex may be pointing to something that needs fixing, but the fix is almost certainly not smallest-first. Adding more resources, parallelizing validation, and maintaining a 1-bit validation cache even when transactions are evicted are all effective techniques for avoiding repeated work.
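
The validation-cache idea takes only a few lines to sketch. This is illustrative; a real node would bound the cache (e.g. with a Bloom filter, matching the "1 bit" remark) rather than let a set grow without limit:

```python
validated = set()  # txids known-good, retained even after mempool eviction

def accept(txid, validate):
    """Admit a transaction, skipping validation if it already passed once."""
    if txid in validated:
        return True  # resubmitted after eviction: no repeated work
    if validate(txid):
        validated.add(txid)
        return True
    return False

calls = []
def expensive_check(txid):
    calls.append(txid)  # stands in for full script/signature validation
    return True

accept("tx1", expensive_check)
accept("tx1", expensive_check)  # second submission hits the cache
print(calls)  # ['tx1']: validated exactly once
```
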

Are There Better Ways To Reduce The Mempool?
One simple way to reduce the mempool backlog is to make the blocks larger, either through a blocksize increase or through the more complex Segwit patch. That will reduce fees, to boot, in return for increasing the space, bandwidth and CPU consumption of Bitcoin. Given how cheap and plentiful storage and CPU are, and how much bandwidth increased just last year alone, adopting some kind of a blocksize increase is the thing to do.

Thanks, but no thanks
Overall, smallest-first is, at best, a strange strategy. It most certainly does not accomplish what we all need: more useful economic activity recorded quickly on the blockchain.

Luckily, it's also a losing strategy: at the moment, a miner mining smallest-first will earn around 5% less in BTC terms than it would by mining largest-fees-per-byte-first. Let's hope that this dampens their behavior and counteracts the damage they inflict on Bitcoin through increased delays on important transactions, perverse incentives, worse utilization of blockchain space, and potentially increased fees.

Aviv Zohar said it best when he quipped "[smallest-first] will raise the revenue of other miners and eventually push irrational ones out of the market."