Episode 393

Ethereum Foundation – An Eth2 Progress Update

Ethereum will switch from Proof of Work (PoW) to Proof of Stake (PoS), likely later this year, in a much-anticipated upgrade known as Ethereum 2.0. The switch to PoS aims to make Ethereum both more secure and more sustainable by securing the network with Ether instead of mining. A second Eth2 upgrade will address scaling through sharding at a later time.

Danny Ryan, Researcher at the Ethereum Foundation, has been a major driving force behind the Eth2 project. He joined us for a progress update, and we chatted about how the protocol will work in its steady state, what has launched so far, what happens in The Merge, and how PoS will affect centralization tendencies.

Topics discussed in the episode

  • Danny’s background and how he got into crypto
  • An overview of Eth2 – Phase 0, Beacon Chain
  • The role of a validator and building blocks
  • Penalties and rewards within the protocol including slashing
  • What the epoch is and how it relates to finality
  • The Proof-of-Stake merge
  • Why Proof-of-Stake is favorable for security purposes
  • What is the roadmap for sharding?
  • Ethereum fees

Friederike: It’s so good to have you on, we’ve been meaning to do this episode for a super long time. It’s been way overdue. Just before the podcast, we talked about the outline and it was definitely enough to fill two episodes. Before we fully dive into the protocol, can you tell us a little bit about yourself? What brought you to the Ethereum Foundation? What did you do before? What piqued your interest in blockchain?

Danny: Honestly, it’s an honor to be here. Early on in my blockchain journey, I remember listening to Epicenter and kind of gobbling up all of the content. It’s cool to be on the other side of that now. How did I get here? It’s a similar story to most, I think. I used to be a freelance software developer for many years.

I graduated college and really didn’t want a normal job. I didn’t want to work in an office. I didn’t want to live in San Francisco, all that. I moved to New Orleans and was just kind of helping small businesses with weird software solutions. That was fun. I did that for many years. Then I started paying attention to Ethereum around the DAO, pre-DAO hack.

Someone sent me an article, I think it was the New York Times. It was like, all this money is being raised for this weird thing. That piqued my interest. I had heard about Ethereum before, but I hadn’t realized that it had actually launched, so I started paying attention, and the DAO in particular, I know it was a fantastic disaster, but the fact that it could exist and was happening really allowed me to see and start to process what this technology could do. I guess that was about 2016.

I became more and more obsessed at the beginning of 2017. I realized that it’s all I wanted to think about and all I wanted to do. I got rid of all of my freelance clients and said, I’m going to figure out how to make this my job. On the journey of how do I actually make this my livelihood, I heard of this proof of stake thing. I thought to myself, okay, well, this doesn’t make any sense. This is never going to work.

A couple of weeks later, I’m like, okay, this makes sense. This is interesting, but how can I make this a business? Can I do something with this to make it my livelihood? And I’m like, okay, I could probably make a staking pool. I started reading all about staking and how it was going to work and all that kind of stuff in 2017, thinking that I could make a staking pool. Then I realized there was still work to do.

I started helping out with some of the work here and there, contributing to the research where I could, and helping out with testing and various things online. I had been collaborating on the internet with various contributors. I had been working on some testing infrastructure for Casper FFG and different things.

The EF was like, hey, do you want to join? We have plenty of work to do, and I joined them at the beginning of 2018. At that point I thought we’d probably launch proof of stake in about six or seven months. Little did I know that I would still be working on that very problem today. As we will discuss, proof of stake for Ethereum is live, but it’s certainly not completed. Today I am still working on that very problem.

Friederike: Good. Yeah. It’s been a long time coming. What exactly do you do at the Ethereum Foundation?

Danny: I work on the research team and I do a mix of research, specification writing, and then a lot of communication and coordination around those two things. The ETH 2.0 project consists of many teams at the EF, many external teams, ETH 2.0 clients, ETH One clients, and the intersection of all that. I spend a lot of time communicating with engineers, helping people understand things, and helping coordinate and make sure the project keeps moving.

Friederike: Super cool. How many people are working on ETH 2.0 currently? Do you have any idea?

Danny: Definitely over a hundred. There are five active ETH 2.0 client teams of varying size. There are probably 20 people at the EF that work on this stuff full time. There are plenty of people that work on it part-time, increasingly so as we approach the merge, which we’ll talk about later. ETH One clients are increasingly working on the merge and working on ETH 2.0. It’s quite a number of people.

Friederike: It must be difficult to coordinate because there’s so much of a research angle to it. I think research is something that is not infinitely parallelizable.

Danny: Right. I’m one of, probably, many people that coordinate various things and it’s a very organic open source effort. The amount of coordination isn’t incredibly high, although at the end of the day, we all need to talk to each other and we all need to coalesce on a single path and decisions. I try to help facilitate that.

Martin: Let’s talk about the things that have been decided and that are live. Basically, tell us about the state of Phase 0: what is live and what is already working?

Danny: First I’m going to answer “what is ETH 2.0?” very broadly. It’s definitely a bit of a misnomer, but we can go along with that term. It is a series of major consensus upgrades for Ethereum aimed at making the protocol more secure, more sustainable, and more scalable. At the core of that is the move from proof of work consensus to proof of stake consensus.

Instead of securing the network with mining hardware and energy consumption, you secure the network with the token itself, the Ether, and so at the core of that is the bootstrapping and creation of this new consensus mechanism. What, as you mentioned, is live today is what we call Phase 0, which went live in December of 2020. That was really the bootstrapping of this new proof of stake consensus mechanism that is called the Beacon Chain.

In December, tons of Ethereum community members and different institutions put a bunch of Ether as capital and collateral into what we call the deposit contract and kickstarted this new consensus mechanism called the Beacon Chain. The Beacon Chain exists in parallel to the current Ethereum network, in parallel to the proof of work network, which is still securing all of the assets and applications and contracts and accounts today.

We have on the one hand the proof of work network chugging along, and on the other hand this new consensus mechanism called the Beacon Chain existing in parallel, building and securing itself. I think today there’s something like 4.5 million Ether locked and secured on this chain. I don’t know what that’s worth today, it depends on the minute and the hour, but this thing exists. This thing finalizes itself, this thing builds itself.

Ultimately what it does is it just builds and secures itself. This is by design. This is an iterative path to get rid of proof of work and to move mainnet to this new consensus mechanism. Obviously mainnet is used by tons of people and secures tons of value, so there’s a lot at stake in this operation.

What we’ve done is built it in parallel, kind of vetted it in production, done tons of testing, and it’s live. Now what we’re working on is actually the deprecation of the proof of work consensus mechanism in favor of this live proof of stake consensus mechanism. That’s where we’re at today. There is a proof of stake consensus mechanism, bootstrapped, live, securing tons of value, but really just kind of securing itself in isolation.

Martin: Then let’s deep dive into what it exactly does. Right now it comes to consensus on what?

Danny: It comes to consensus on itself, by itself. What I mean is the proof of stake consensus mechanism and all of the little gadgets and things in it. It has a validator set. Each validator is worth approximately 32 ETH. There’s something like 140,000 validator entities in this consensus. Each one of them has its own little state. It has its balance. It has duties.

It has a job at any given time. It has randomness generation. It has information about finality: which portions of the chain are finalized and would never be reverted. It has a lot of various accounting between finality and the head of the chain. There are a number of operations related to the functionality of this chain. Those operations are what we call validator-level transactions, system-level transactions, and really the core operation is called attestations, where validators are constantly signaling what they see as the head of the chain and what they see as their local state of the world.

They use these messages to come to consensus with each other and ultimately drive this core system layer of the chain. There are some other operations related to validator activity, like deposits (onboarding new validators), exits (leaving the validator set), and a couple of other things. Really, it’s a proof of stake system, there’s a lot of different accounting and different little operations going on, and it builds and comes to consensus on itself.

Friederike: In principle, I can become a validator, right? So I need 32 ETH. What do I need to do?

Danny: Yes, anyone can become a validator if they have, or accumulate, 32 ETH. There are a number of different tools and different paths. There’s this nice tool called Launchpad, which has a nice user interface. You can do this in a number of ways, but you can go to Launchpad. You do a key generation step where you generate an active key, the key that’s going to sign attestations and build blocks and different things.

Then there is what we call a withdrawal credential, which can be a cold key. It doesn’t have to be on the machine, and it ultimately owns the funds when the validator is done with validation. To become a validator, you take the 32 Ether, generate some keys, sign a deposit message, and send that Ether with this data to the deposit contract, which is a special contract on the current Ethereum mainnet, on ETH One.

Then the Beacon Chain, which operates in parallel to that proof of work network, is listening to the deposit contract, coming to consensus on the state of the deposit contract and inducting new deposits. Right now it’s a one-way bridge from proof of work Ethereum to this new Beacon Chain. Once you have your validator deposit inducted into the Beacon Chain, there’s a little bit of process overhead and a little bit of time that it takes in terms of coming to consensus on this, and then you get a new validator record in the state of the Beacon Chain.

The Beacon state is the system layer state of this thing. After a number of epochs, something like four epochs, and each epoch is 6.4 minutes, you then become activated. At that point your validator will have, each epoch, at least one duty, if not a number of duties, with respect to the Beacon Chain. Validators get little assignments, and they listen to the network, sign messages, talk to each other, and come to consensus on things.
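To make the onboarding flow above concrete, here is a minimal Python sketch of the deposit data a would-be validator submits and the rough activation timing Danny describes. The field names follow public Eth2 spec conventions, but the structure and the four-epoch figure are illustrative assumptions, not the protocol definition.

```python
from dataclasses import dataclass

GWEI_PER_ETH = 10**9
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32
SECONDS_PER_EPOCH = SECONDS_PER_SLOT * SLOTS_PER_EPOCH  # 384 s = 6.4 minutes

@dataclass
class DepositData:
    pubkey: bytes                  # "hot" BLS key that will sign attestations and blocks
    withdrawal_credentials: bytes  # commitment to a (possibly cold) withdrawal key
    amount_gwei: int               # 32 ETH for a full validator
    signature: bytes               # signature over the deposit message

def minutes_until_active(processing_epochs: int = 4) -> float:
    """Rough delay from deposit induction to activation, ignoring any queue."""
    return processing_epochs * SECONDS_PER_EPOCH / 60

deposit = DepositData(pubkey=b"\x00" * 48,
                      withdrawal_credentials=b"\x00" * 32,
                      amount_gwei=32 * GWEI_PER_ETH,
                      signature=b"\x00" * 96)
print(minutes_until_active())  # ~25.6 minutes of processing overhead
```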

Martin: I want to directly ask about the deposit contract, because the premise so far was that the two things are living in parallel, but there is already obviously a connection. Apparently, the ETH 2.0 chain needs to read from the ETH One chain and kind of needs to understand the ETH One chain. Does it mean that to run a validator, you also need to run an ETH One client? Otherwise, how would you know that this deposit happened?

Danny: Right. Yes, there is a one-way and very restricted bridge from the ETH One chain into the Beacon Chain. As a validator, you are running the Beacon Chain, which is actually relatively lightweight, and running the Ethereum proof of work network. This is to be able to listen primarily to that deposit contract and be able to build and connect that bridge to bootstrap this consensus mechanism, which is a crypto-economic mechanism based off of Ether, so you do need Ether. That link is critical for the bootstrapping and functioning of the system.

As a validator, you do have these two pieces of software that you’re running in parallel and that communicate with each other. This actually is kind of representative of what the system will look like after the merge. Once these systems are unified, you similarly would have to run the Beacon Chain, which is kind of the system-level state, as well as a piece of software that gives you access into the execution layer, into the things that we know and love, essentially like Geth minus proof of work plus the Beacon Chain.

Friederike: I assume being a validator is incentivized. What do I get if I get to build a block, and how is it determined if and when I get to build one?

Danny: There are two primary actions that are rewarded for the validator. One is this action called attesting, which you are assigned to do exactly once per epoch, exactly once per 6.4 minutes, and this accounts for seven eighths of your validation reward. The majority of the issuance goes to this very regular-interval activity, which is nice.

There’s a reason for this, because it helps with hardening the fork choice and keeping things very stable: even though only one validator at any given slot is producing a block, many validators get to throw in their weight and say what the head is. It makes it very difficult for the monopolistic activity of producing a block to be used to reorg and do different attacks. It’s also nice because it really smooths out rewards.

In proof of work, you’re rolling the dice over and over again. Every once in a while, you randomly get a chance to produce a block and get a big payout. Whereas in this proof of stake system, the rewards and payouts are much more regularized, even if you have just one validator. Attestations like this are a very critical message type for securing the head of the chain and finalizing things. But then, as you said, this other very critical role is actually producing a block.

Based on your validator assignments, which are based on randomness generation on chain, which we can talk a little bit about, every single slot there is exactly one validator assigned to produce a block. A slot is every 12 seconds. It’s kind of like the heartbeat of the system.

Instead of the stochastic process of mining, where there’s a target for block time and blocks get produced randomly around that target, there’s this tick of every 12 seconds at which blocks can be produced, if the proposer shows up, and every 12 seconds a producer, a validator, is assigned. If your slot is 10,001, you have a little bit of lookahead, say 32 slots in advance, so you know that you’re going to produce a block.

You’re listening to the chain and you listen for valuable things to include in your block. You build off of the parent, produce a block, and broadcast it to the network. Valuable things today are primarily just these validator operations, attestations and deposits and things like that, but post-merge you would also be including user-land application layer transactions and seeking to maximize transaction revenue like miners do today.
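As a rough illustration of the slot/epoch heartbeat and the one-proposer-per-slot assignment described here, consider the following Python sketch. The timing constants match the numbers in the conversation; the proposer selection is a toy stand-in for the protocol’s RANDAO-based shuffling, which is more involved and also weights by effective balance.

```python
import hashlib

SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32  # one epoch = 32 * 12 s = 6.4 minutes

def slot_to_epoch(slot: int) -> int:
    return slot // SLOTS_PER_EPOCH

def proposer_for_slot(slot: int, validator_count: int, seed: bytes) -> int:
    """Toy proposer selection: hash the slot with a randomness seed and reduce
    modulo the validator count (illustrative only, not the real shuffle)."""
    digest = hashlib.sha256(seed + slot.to_bytes(8, "little")).digest()
    return int.from_bytes(digest[:8], "little") % validator_count

slot = 10_001
print(slot_to_epoch(slot))                               # which epoch this slot falls in
print(proposer_for_slot(slot, 140_000, b"example-seed")) # the single assigned proposer
```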

Martin: What happens if for some reason the validator that’s assigned to this block or slot doesn’t show up?

Danny: A slot can be skipped. In the normal case, this looks just like a longer block time. For example, based on the stochastic process in proof of work, sometimes you have blocks that happen in 10 seconds and sometimes blocks that happen in 30 seconds. In the event that a slot was skipped, you would have 24 seconds between blocks. That would look like a slight reduction in capacity of the chain for that given time.

Interestingly, with EIP-1559-style mechanics and variable block sizes, that can be self-balancing on average. But what we’ve seen in the live system is something like a 99.5% participation rate with respect to blocks and attestations. In the normal operation of the chain, so not crazy global latencies or some attack scenario or major failure in a client, we expect to see a block every 12 seconds almost all the time.

Martin: You said 99.5%, for both?

Danny: The attestations and the blocks, yeah.

Friederike: And do I get slashed if I don’t show up or if I don’t attest?

Danny: We reserve the word slashing for very explicitly malicious activities and what can be more severe penalties. In normal operation, we have rewards and we have penalties. You have this 32 ETH staked, and if you do your job, you can be rewarded. If you don’t do your job, or fail while trying to do your job, say you can’t quite find the head of the chain, you might instead lose a small amount of Ether.

In normal operation, and this is variable depending on the size of the validator set, but in normal operation, in a year you might make anywhere from 6 to 12% return on that 32 ETH deposit. Whereas if you were offline the entire time, your penalties would equal approximately what you could have made. Instead of making 6% that year, you would have lost 6% that year.

It’s not just opportunity cost. You would actually have been penalized a little bit and seen a linear decrease in your stake. That’s rewards and penalties for all the basic activities. You can either receive rewards, or you can be slightly penalized if you aren’t able to perform your job. There are individual rewards, and there are also group ones: the amount of your attestation reward actually scales with these group mechanics.

If 100% of the validators are online, you can receive the maximal attestation reward. But if only 80% of the validators are online, you would actually only get 0.8 of your total attestation reward. This is so there’s an incentive not to DDOS your neighbors and take them down so you can get their rewards instead. Also, in the event of a crisis, there are these group dynamics so that you want to figure out what the hell is going on and try to fix it.
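The group mechanics Danny describes can be sketched in a few lines of Python. The numbers and the linear scaling are illustrative; the real reward formula has several components, but the shape is the same: rewards scale with overall participation, and being offline costs roughly what you would have earned.

```python
def attestation_reward(base_reward: float, participation_fraction: float, online: bool) -> float:
    if online:
        # with only 80% of validators online you receive roughly 0.8x of the full reward
        return base_reward * participation_fraction
    # offline validators are penalized roughly symmetrically to the missed reward
    return -base_reward

print(attestation_reward(1.0, 1.0, online=True))   #  1.0 -> full reward
print(attestation_reward(1.0, 0.8, online=True))   #  0.8 -> scaled by participation
print(attestation_reward(1.0, 0.8, online=False))  # -1.0 -> penalty for being offline
```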

Those are rewards and penalties. You did bring up slashing. Slashing is a penalty, but it’s a much more severe penalty, and it’s in relation to explicitly, cryptographically provable nefarious activities. For example, you’re assigned to attest once every epoch. This is very important for the operation of the chain: it helps finalize things, it helps secure the head of the chain and the fork choice, and you’re only supposed to do it once.

If instead you do it twice per epoch, you can be slashed, because this is an activity that can lead to a network fault. Essentially, the idea is that in proof of work you have a physical, real-world piece of hardware that you can only point to one chain or another, one fork or another. Or you could split it: you could put 50% of that proof of work energy on one fork and 50% on the other fork, but you cannot put 100% onto fork A and 100% onto fork B.

Whereas in proof of stake, that economic asset doesn’t physically constrain you, and signing messages is really cheap. The idea here is that messages that amount to you attempting to apply your stake in multiple places could lead to network faults and confusion as to what the head of the chain is. We have to make those messages expensive, similarly to how allocating your physical resources, the mining hardware, was expensive.

Thus if you do some of these activities, where you’re essentially signaling two different versions of history, you can be penalized severely, because they’re provably nefarious messages. That’s what is called slashing. By severe, I mean that if enough validators were doing that type of double signaling within a recent window of time to create a network fault, and that minimum threshold is one third of the validators, then you’ll be punished maximally.

If one third of the validator set is doing slashable double things within a recent time window, then those validators actually lose 100% of their stake. That’s because we want to have provable security bounds on what happens if and when attacks occur. Whereas if this is just an isolated event, say I’m just running a single validator, I do something ill-advised with my staking setup and I’m signing double messages by accident, and no one else has really been doing it in the recent time window, I get a slap on the wrist. I get kicked out of the validator set.

Since the fraction of validators that have been slashed recently is very low, my penalty is still relatively low. I might lose like one ETH and get kicked out of the validator set. So we have rewards, we have penalties for normal operation, and then we have slashing, which is for these very explicit nefarious activities, like double signing attestations or producing two blocks in the same slot, that kind of stuff.
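The correlation penalty described above can be sketched as follows. The factor of three and the cap mirror the spirit of the spec’s proportional slashing; the exact constants (and the small fixed minimum penalty the protocol also applies) are simplified away here.

```python
def slashing_penalty(own_stake_eth: float, recently_slashed_fraction: float) -> float:
    """Penalty grows with the fraction of total stake slashed in the recent window."""
    correlation_factor = min(3 * recently_slashed_fraction, 1.0)
    return own_stake_eth * correlation_factor

print(slashing_penalty(32, 0.0001))  # isolated slashing: a small slap on the wrist
print(slashing_penalty(32, 1 / 3))   # one third slashed together: lose all 32 ETH
```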

Martin: Yeah. Talking about slashing, is there a chance you get slashed by accident? Concretely, we have multiple client implementations. Might there be a situation where one client says, well, that is a valid block, and another client says that’s not a valid block, and therefore ignores it and proposes another one?

Danny: We have seen a number of slashings on mainnet since December, and almost all of these have been due to individuals and institutions creating sophisticated, or attempted-to-be-sophisticated, redundancy setups. Essentially, if you have your keys in one place and you’re tracking the messages that you’ve signed, it’s very simple. The logic can be six lines of code. It’s very simple to not double sign, essentially.

There’s a couple of conditionals and a very small database, and I can prevent myself from doing this. But if I have my keys in two locations, say I’m trying to run client A and client B on two machines to make sure that I don’t have any downtime, then I’m going to be signing double messages almost every epoch. I’m going to almost certainly be slashed.
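Those “couple of conditionals and a very small database” look roughly like the following slashing-protection check. This is a simplified sketch of the standard rules (never two blocks for the same slot, never two attestations with the same target, never a surrounding vote); real clients persist this history to disk and record every message after signing it.

```python
signed_blocks: dict[int, bytes] = {}                  # slot -> root of the block already signed
signed_attestations: dict[int, tuple[int, int]] = {}  # target epoch -> (source, target) already signed

def safe_to_sign_block(slot: int, block_root: bytes) -> bool:
    # never sign two different blocks for the same slot
    return signed_blocks.get(slot) in (None, block_root)

def safe_to_sign_attestation(source_epoch: int, target_epoch: int) -> bool:
    # never sign two attestations with the same target epoch
    if target_epoch in signed_attestations:
        return False
    # never sign a vote that surrounds, or is surrounded by, a previous vote
    for prev_source, prev_target in signed_attestations.values():
        if source_epoch < prev_source and prev_target < target_epoch:
            return False  # new vote surrounds an old one
        if prev_source < source_epoch and target_epoch < prev_target:
            return False  # new vote is surrounded by an old one
    return True
```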

This is actually what we’ve seen: some hobbyists didn’t get the memo and are trying to make sure they don’t have any downtime. A couple of them have been slashed, but what we’ve seen more of is institutions who want to advertise the best uptime ever. What they do is they have way too sophisticated deployments, don’t manage the keys properly, and have the keys in two different locations.

If you have your keys in two different locations and they both think they’re in charge and there’s no communication there, you’re going to be slashed, because you’re going to eventually have a slightly different world view. One client might see a block, the other client might not see the block, and they both will sign something.

Martin: Basically your advice is, don’t do too complicated a setup.

Danny: If you’re offline, even for a day, you’re going to lose very little money, because again, we have rewards and penalties for normal operation. If you’re offline for a whole year, you might lose like 8%. Whereas if you do too complex of a setup, you can lose much more than that. There are many client teams, and a number of them have implemented this new feature called doppelganger detection, which came out of the client community.

When you turn on your client and it knows that there are some validator keys associated with it, it actually won’t start its job immediately. It’ll wait an epoch or two and just listen to the network. It knows that it’s not signing anything, it’s not broadcasting anything. If it sees any messages coming in from itself, it has detected a doppelganger and it says, oh no, don’t sign any messages. This is actually a new feature rolled out to a number of clients that’s protecting the hobbyists with simple setups.

Obviously then you have an epoch or two of extra downtime. But like I said, downtime is not the issue; double signing is a severely worse activity. I think you can override it manually, you can pass a flag that’s like, in capital letters, unsafe-disable-doppelganger-detection. But most users should just run with the default and have those protections.
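The doppelganger check itself is conceptually tiny; a hedged sketch, assuming the client has already spent an epoch or two passively collecting the public keys it saw signing on the network:

```python
def safe_to_start_signing(our_pubkeys: set, observed_signers: set) -> bool:
    """If any message on the network was signed by one of our keys while we were
    silent, another instance is running with the same keys: refuse to start."""
    return our_pubkeys.isdisjoint(observed_signers)
```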

Friederike: You’ve referred to epochs repeatedly now. Maybe let’s talk about that because that’s something we don’t have on ETH one. What is an epoch and how does it relate to finality?

Danny: An epoch is a collection of 32 slots. Like I said, slots are every 12 seconds, so an epoch is every 6.4 minutes. Slots are the unit of time at which blocks and very granular actions can happen, whereas an epoch is a collection of slots, and there’s an aggregate of duties and different accounting that happens at each epoch. Within each epoch’s 32 slots, every validator is assigned to attest to exactly one slot.

In advance, my validator client gets notified that, okay, at the sixth slot into the next epoch, you’re going to have to attest to the head. At that point you’ll sign a message and broadcast an attestation. Similarly, within an epoch you can potentially be assigned to produce a block. The entire validator set attests exactly once per epoch.

At the end of the epoch is when accounting is done. For the previous epoch, we go and look: okay, how many attestations were there? Was there agreement? Was there disagreement? What are the rewards and penalties based on the individual and the aggregate activity? Is there enough consensus on the state of things to update the finality calculations?

An epoch can first be justified and then be finalized. Justification is like a first round of signaling for validators to say, I think that this block will remain in the chain forever. Then once something is justified, they can signal a deeper commitment, which is, okay, let’s now say this block will remain in the chain forever. What we have is this two-epoch finality cycle, where at the end of epoch N you might justify it, and at the end of epoch N+1 you might, optimally, finalize epoch N, or you might not.

I think pretty much every single epoch just goes through that cycle of justification and then finalization. These epoch boundaries are also where a lot of rewards accounting happens and various other things; you might also at that point update the view of the ETH 1 chain so that you can induct deposits and different things.

It’s really these accounting boundaries, these larger-than-block groupings of logical activity. From a finality standpoint, you have all these little blocks, and you can think about epochs as larger packages. It’s kind of like the meta chain on top of the little mini chain.
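The two-epoch justification and finalization cycle can be sketched like this. It is a simplification of Casper FFG, assuming a two-thirds supermajority and ignoring the protocol’s handling of skipped or conflicting checkpoints.

```python
def process_epoch_boundary(epoch: int, attesting_fraction: float,
                           justified: set, finalized: set) -> None:
    if attesting_fraction >= 2 / 3:
        justified.add(epoch)
        # a justified checkpoint is finalized once its direct child is justified
        if (epoch - 1) in justified:
            finalized.add(epoch - 1)

justified, finalized = set(), set()
process_epoch_boundary(10, 0.995, justified, finalized)  # epoch 10 justified
process_epoch_boundary(11, 0.995, justified, finalized)  # epoch 11 justified, epoch 10 finalized
print(sorted(justified), sorted(finalized))              # [10, 11] [10]
```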

Martin: Does it mean the maximum kind of reorg that could happen is like one epoch or two epochs? Is that one way to think about it?

Danny: It’s increasing depth and decreasing probability of reorgs in the Eth2 Beacon Chain. At every slot, a validator first creates a block. Then one thirty-second of the validator set is attesting during that slot, because they’re assigned to that slot; they’ll send out attestations, which immediately give weight to people’s fork choice and recursively give weight to the chain prior.

Actually, because of the way that much of the validator set participates in each slot, you end up with probabilistic guarantees at the depth of slots that the chain won’t be reorged. It feels a little bit like proof of work at the chain tip, where you can make probabilistic claims that there won’t be a reorg unless XYZ happens.

Under normal operation, those probabilistic claims are actually very strong, because most of the validators are participating in sending signals at each slot. Then at the depth of one epoch, 32 slots, things can be justified. This is the first step in finality. With justification, you can make a much stronger claim that something justified would never be reorged, because it would require a very large number of validators.

At least one third, likely more in the one-half or even two-thirds realm, would have to not run the protocol, to run an altered version of the protocol where they stop listening to justifications. Once something’s justified locally, you say, that’s what I want to build on. Something else won’t be justified unless people change their local protocol. Granted, that wouldn’t necessarily result in slashing, but it becomes very unlikely.

At the depth of 2 epochs, that’s 64 slots or 12.8 minutes, you can then finalize. This means that locally, in my software, whatever is seen as finalized will never revert. You can make claims that no one will see a different version of finality unless a minimum threshold of validators is slashed, and that minimum threshold is one third. Although a theoretical attack could happen at one third, it’s extremely improbable that you could even conduct it at one third; it would probably be much higher.

You can make crypto-economic claims that this is finalized, this will remain finalized, and nobody else will see a different version of finality unless a large amount of capital is burned. At the head of the chain, we get to make probabilistic claims, based on all these attestations, that things won’t be reorged. Then at 32 slots we get justification, which for almost any operation is enough depth and enough confirmation. At 64 slots, we get that finality, which is the ultimate crypto-economic claim: it’s not going to revert.

Martin: Yeah. That sounds to me like a very high level of security. In proof of work, it’s totally normal and totally expected; it happens multiple times a day that a single block, or even two blocks, get reverted. Here it sounds like even a single block or slot revert would be highly unlikely in normal operation.

Danny: Yeah, in normal operation, absolutely. If I saw 99% of the slot attestations come in, I can be pretty well assured that this is not going to revert unless there’s actually something malicious going on. If instead I saw 10% of those slot attestations come in, then I wouldn’t necessarily start making decisions locally, because I don’t have a high probabilistic guarantee.

In normal operation, we do see almost all validators assigned to each slot attesting to each slot. Thus, we do get that increasing probabilistic guarantee that things won’t be reverted. In normal operation it’s probably as good as proof of work confirmations, and then with increasing depth you get increasing crypto-economic guarantees of non-reversion.

Friederike: But on a very fundamental level, this is still very different from ETH One. On ETH One, the general design decision is availability over consistency. The chain can never halt, but this comes at the cost that there might be reorgs. Then you have the converse with BFT-style consensus algorithms: basically they have finality, but in principle the chain can halt, very much like Tendermint. It seems like ETH 2.0 does some weird hybrid middle ground that I didn’t even know existed before. What are the trade-offs of having this hybrid model?

Danny: Like proof of work consensus, the Eth2 protocol is fundamentally liveness favoring, meaning that the chain can be built even if you don’t have these BFT thresholds of validators. This is to provide the uptime and availability of the network that blockchain users, and I think a global decentralized network, expect. At that point, it’s ultimately up to local node operators to make decisions about what is accepted.

If I’m an exchange, I might only ever operate on finality, but if I’m sending NFTs to my buddy and we don’t have finality, I know that this operation will clear and it’s not a big deal. What we get here is really a live chain with a BFT consensus following along. From the perspective of the designers, this is, at least for the expectations and guarantees of blockchain systems, a really nice compromise.

The idea ultimately is that a safety-favoring chain cannot simulate a liveness-favoring chain, whereas a liveness-favoring chain can simulate a safety-favoring chain. Thus the latter is ultimately a more powerful construction, because it gives more optionality to users on how to interpret the world view at any given time. There’s much debate on this point, as to what the quality of service can or should be, and whether it’s really worth having these live chains without finality.

But again, the clinching point for me is really: sure, if there’s not finality, I don’t have to finalize anything, but I do provide optionality to users and systems, based off of probabilistic guarantees and different things, in a liveness-favoring system.

Friederike: Super interesting. Maybe let’s talk about the merge for a little bit. There will be the merge, and the merge will merge ETH One into the Beacon Chain. How exactly does it happen? When is it going to happen? I imagine ETH One and ETH Two have separate states; how is that handled? How do you make them congruent?

Danny: Let’s think about what ETH One is. It’s a lot of things, and there are a lot of different ways to think about it. But for the purposes of the merge, we can think about it in two layers. We have this application layer or execution layer where all of the users hang out, where all the applications are. It’s where the user-layer state is. It’s where transactions are being processed and all that. It’s really what I, as an end user, care about; I care about my Uniswap trades and that kind of stuff.

Then you have this thin proof of work consensus module that’s providing a service to this execution layer. It’s the cradle for blocks. It’s providing guarantees about reorgs and different things like that. What we have is really these two layers: the proof of work consensus layer providing services to the application layer and to users. Then what we’ve bootstrapped in production today is the Beacon Chain, which is a proof of stake consensus.

The idea really here is that at one point in time, the proof of work module is driving that application layer, and at the next point in time, post-merge, that proof of stake module, that Beacon Chain, is driving the same execution layer, the same application layer, the transactions. Essentially, the cradle of Ethereum right now is these proof of work blocks and that proof of work consensus, and post-merge the cradle of Ethereum, the thing holding it all together, is ultimately the Beacon Chain and the proof of stake consensus.

You can imagine that the payload for the execution layer is essentially moving locations upon some condition. People are running software that knows: prior to this block height, I’m listening to the miners; after this point, I’m listening to the proof of stake validators. There are a number of little details to work through on the actual point of the merge and how you handle attacks and reorgs on that boundary.

The simple case is essentially: you have a chain being built by proof of work at one point in time, and then that same chain, that same execution-layer payload that end users care about, is being built by these validators. The nice thing is that conceptually these layers are important from a mechanism-design standpoint, but they actually translate into really nice software reuse. We have what we call Eth2 clients, which are these Beacon Chain clients.

They’ve built a highly sophisticated proof of stake consensus mechanism. Then we have, what is an ETH One client? An ETH One client really is a highly sophisticated execution layer. It’s a highly sophisticated EVM, transaction processing, mempool management and all that kind of stuff, plus this thin proof of work module that literally hasn’t been touched since day zero. It’s a relatively simple mechanism from a software perspective, and it hasn’t been touched.

What the software after the merge looks like is really taking an ETH One client, which is primarily a highly sophisticated execution layer, and cutting out that proof of work module, which was the driver of that execution layer. Instead of a proof of work module, an Eth2 client does the driving. The software post-merge actually looks like: you take an Eth2 client, which is a highly sophisticated proof of stake consensus mechanism, you take an ETH One client, which is a highly sophisticated execution layer, and you smash them together, and you have the proof of stake client driving that execution layer.

The Beacon Chain client is asking questions of the execution layer. For example, instead of the proof of work module saying, hey, give me a valuable transaction bundle to include in this block, the Beacon Chain client is instead saying to my local execution engine: hey, give me the valuable payload, process this payload, that kind of stuff.
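The division of labour after the merge, as described here, looks roughly like the sketch below: a consensus (Beacon Chain) client asks a local execution engine to build and execute payloads. The class and method names are illustrative of the idea, not the actual engine API.

```python
class ExecutionEngine:
    def build_payload(self, parent_hash: bytes, fee_recipient: bytes) -> dict:
        """Bundle valuable transactions from the local mempool into an execution payload."""
        return {"parent_hash": parent_hash, "transactions": [], "fee_recipient": fee_recipient}

    def execute_payload(self, payload: dict) -> bool:
        """Run the transactions and report whether the payload is valid."""
        return True

class BeaconNode:
    def __init__(self, engine: ExecutionEngine):
        self.engine = engine

    def propose_block(self, slot: int, parent_hash: bytes, fee_recipient: bytes) -> dict:
        # the consensus layer decides *when* to produce; the execution layer decides *what* goes in
        payload = self.engine.build_payload(parent_hash, fee_recipient)
        return {"slot": slot, "execution_payload": payload}

    def on_block(self, block: dict) -> bool:
        # a block is only valid if its embedded execution payload executes successfully
        return self.engine.execute_payload(block["execution_payload"])
```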

You mentioned state, a state before and after. There’s the Beacon state, which is the system-layer state of this proof of stake consensus mechanism. Then there’s the application layer state that exists in these proof of work blocks today. Really, this consensus mechanism is good at one thing: it’s really good at coming to consensus on things.

It’s really just slotting the execution layer into its state transition and into its state, embedding the execution layer state of Ethereum into it. If you think about it as a tree of all the things embedded in the Beacon Chain’s outer-layer state that is built and finalized, you’re essentially having the application layer of Ethereum embedded into it as a sub-component of its state.

That application layer state right now exists in proof of work land. It’s really just taking that application layer state and subsuming it into the Beacon state, and when you finalize the outer state root of the Beacon state, you finalize everything within it, including the application layer state that’s been come to consensus on. Then you get these finality properties and the other properties that proof of stake is giving to itself.

Friederike: You’re kind of spooning over the state, but I mean, in principle, the miners can continue with the original chain, right? So basically this is kind of a natural breakpoint for a fork.

Danny: If anybody wants to run proof of work Ethereum, blockchain governance ultimately works so that anyone can continue to run it. There are a couple of things that I think might make it not super viable. The Ethereum community has consistently, since Genesis, kept this thing in called the difficulty bomb. The difficulty bomb was intended, at the beginning, to ultimately allow for a cleaner shift to proof of stake.

The mining difficulty at these points of the difficulty bomb increases exponentially, so that it becomes non-viable to mine that proof of work chain unless you actually hard fork. That might dissuade a proof of work fork here. But another interesting point is that in the last contentious Ethereum hard fork, which created Ethereum Classic, there wasn’t a lot going on in the application layer. There really was this DAO thing and then a bunch of little experiments. The application layers could kind of fork and exist in parallel. There weren’t any of these big dependencies and such.

I would posit that if Ethereum forked today, and you had a majority community stake in one and a minority community stake in the other, the application layer on the minority one is likely going to implode. There are a lot of interdependencies and a lot of value here. For example, oracles may or may not be run on the minority fork.

Even if they are, you have systems like Maker, where if the value of the ETH on one side or the other drops significantly, you have mass liquidations, and DAI is integrated all over DeFi, so you have rippling effects.

Martin: All the backed assets like USDC, Tether, and so on. Yeah. I also think the support for proof of stake in the Ethereum community is so overwhelming that I really don’t think there will be any debate or any question about what the direction is.

Danny: Definitely. From a community perspective, like 99.99% are just, let’s do this. We’ve been wanting to do this for years. Can we just do this? Come on. You could theoretically run a proof of work fork, but I think that it would quickly become a wasteland.

Friederike: You said earlier that proof of stake has two main reasons behind it. One is the environment and sustainability, and I think that’s pretty straightforward, why that is the case. The other one is the security. Let’s talk about the security aspect. Why is proof of stake good for security purposes?

Danny: Proof of work and proof of stake are both fundamentally crypto-economic consensus mechanisms, meaning that we can prove they have certain properties if no entity has certain thresholds of the value securing the network. They’re pretty similar in that sense and have some similar properties because of it. I think the crux of proof of stake having higher security is really that the asset securing the network is actually in the protocol.

You can not only reward that asset, you can penalize that asset, and penalizing on failure leads to a much more secure system. Let’s think about what happens if a proof of work chain is 51% attacked. If you have a party that has 51% of the hash power, they can reorg and do whatever they want. The only recourse here is forking the protocol so that you have a new mining algorithm, at which point you’ve bricked all the good guys’ hardware as well.

You can think about it as: there’s a budget for the attack. The budget for the attack was acquiring 51% of the hash power, but then there ultimately wasn’t a cost. Once I secured the budget, I’m now just God. In proof of stake, because those assets are in the protocol, there’s a budget, acquiring 51% of those assets, but then there’s ultimately a cost as well.

If you do conduct a network fault, then those assets are slashed. I can do the attack once, I lose all my money, and I have to accumulate all those assets again before I can do the attack again, whereas in proof of work I just entered God mode and could reorg over and over again. Then there’s the extreme, where you say hit the two-thirds threshold and you’re some sort of censoring majority or cartel.

This can also be detected socially. I can identify this cartel, the censoring majority, and in the extreme there can even be social recovery: these assets could be forked out of the protocol and burned. Whereas in proof of work you don’t have this nuance; the only recourse was ultimately forking the good guys and the bad guys out. That’s definitely a failure mode you don’t want to run into, but the fact that recovery does exist, I think, would ultimately dissuade even those extreme types of attacks.

Martin: Yeah. I mean, on the question of what is more secure, I think the debate has been going on for years, and I think for proof of stake it’s absolutely clear. One thing where it’s way less clear to me how it will play out is the question of centralization. Because originally I used to bring up that proof of stake would lead to less centralization than proof of work, because in proof of work you have economies of scale.

If you spend, I don’t know, 10 million on mining, you will get, at the end, a better return for your last dollar than someone who just spends a thousand dollars on mining. Whereas with proof of stake, arguably, that is less the case: even if you just have one validator, you will probably have the same rewards. That being said, there are significant arguments, I would say, also for centralization in proof of stake.

That is mainly the idea of staking through derivatives. To some extent, that’s already what we are seeing. Specifically now, where you cannot get your ETH back out immediately, it’s quite a commitment to do it manually or to do it yourself and stake your 32 Ether; you could instead just go to an exchange and they will offer you a nice service.

They will say, okay, well, if you want to get out, we find a seller for you, we create a staking derivative or whatever, and we have a market for that. Of course, to an individual that’s a big plus, but it threatens decentralization. Maybe you know the current statistics better than me, but exchanges already play a big role, right?

Danny: I mean, between exchanges and staking pools, it’s probably something like 55% of staked assets, as far as we can tell today. There is definitely a strong hobbyist community. You’re right, before the merge there’s this unknown lockup. Liquidity is certainly a question and certainly a driving factor for people to move to these other types of institutions.

Martin: Is there anything we can do about it?

Danny: There are some things that are done today. One thing I mentioned earlier, and this is kind of a minor point, is the fact that you can participate with a very small percentage of the network and still have regular rewards and payouts; that’s a nice little thing that helps the hobbyist community. One thing that is designed into proof of stake systems, which is not in proof of work systems, is these discorrelation incentives.

If you’re with a very large staking institution and they go offline, the amount that you lose is much higher than if you were going offline in isolation or in even a smaller pool. Even worse is if you’re with one of these large institutions and they have some sort of security breach or some sort of bug or issue that causes them to be slashed.

Then, again, the amount that you’re slashed scales with the percentage of the network that was recently slashed. If you’re with a 30% staking pool or a 30% staking-provider exchange that has a major slashing event, say somebody internal just wants to watch the world burn, or somebody hacks the system, or they’re running buggy software, or someone is trying to be clever with their redundancy, you lose quite a bit of capital.

There are these disincentives to being with the large institutions. This might drive you to stake at home, but it also might drive you to be with a smaller pool. I think even if we end up with a highly pooled landscape, which we certainly are beginning to see, at least in the 50% range, there are still these incentives to not be with the mega players.

In mining, I think we have like two or three pools that, if you get them together, are at 51%. The only thing that keeps those pools from being larger than that is that it’s not socially tenable; people don’t want to join a pool that’s too big. Whereas in the staking landscape there’s actually a protocol disincentive to joining pools that big. It is unfortunate that those disincentives are in the tail-risk scenarios, and I think humans are pretty bad at really assessing tail risk.

I think we might see waves of centralization and then decentralization as some of these events happen. If something major happened to a major exchange, all of a sudden we might see everyone scatter their stake and redistribute, but we shall see. Staking derivatives are certainly another interesting one. There’s an entity that has something like 7% of staked assets right now.

I think that number has increased quite a bit in the past few months. That’s primarily because they offer, I won’t use the word decentralized, they offer an on-chain staking derivative, and they have an interesting mechanism where they’re kind of distributing; they’re kind of like this tokenized middleware that then distributes to various pools.

Although it represents 7% of the stake, it’s actually distributed to like 10 pools under the hood. Then you have these stake derivatives that represent the shared risk across these pools. I see this as a competitor to exchanges rather than a competitor to the hobbyist. If I’m a hobbyist, I’ll probably stay a hobbyist, whereas if I was going to go to Coinbase, I might instead go with this decentralized option.

I hope that we see a number of these, a lot of competition, and I hope that we see a number of staking institutions competing. I am probably more optimistic than you: although we might see a highly pooled and highly institutionalized space, I think we’re going to see much better properties than we’ve seen with mining pools. One is that every exchange wants to get in on the action; every exchange is probably going to create a product. Whereas with mining pools, we saw very few entities, which ultimately became these very large pools.

Martin: Yeah. One argument you brought up that is definitely very strong is this continuous payout. I mean, that was definitely a driving factor for mining pools in proof of work. How is that affected after the merge? Because then, well, I guess we are talking about transaction fees. Do they go to the block producer, or at least probably MEV will go to the block producer? That will affect the...

Danny: The value of the payload, the application-layer payload, the transactions that we know and love, goes to the block producer. They’re the sole entity responsible for bundling and finding that value; they’re the ones that are paid out. This is definitely, especially in the ever evolving landscape of MEV, a huge point of research and a huge point of discussion. Ultimately, I think it comes down to the democratization of MEV, meaning who has access to it, how much of an edge do institutional players have over hobbyists?

Ultimately, I think it’s very important, one, for application layer contracts to design their systems so they don’t have highly exploitable MEV; two, for open tools and open access to MEV to be created, so that hobbyists aren’t at a huge disadvantage to institutions; and three, even investigation of layer-one protocol techniques for MEV minimization. For example, 1559 ultimately does put a bound on, or does reduce, the amount of MEV available, essentially because of that in-protocol burn.

This is a very exciting area of research, and potentially there are other types of techniques that may make their way into the protocol over time. There’s also a security component here. The classic Bitcoin issuance model is not sustainable: it hinges upon the fact that as the issuance goes towards zero and transaction fees become the dominant component of the block reward, mining on the head of the chain becomes game-theoretically unstable.

If I have 20% of the network and I see these huge transaction fees, it might actually behoove me to attempt to reorg the head, even though I only have a small probability of doing so, to try to get those fees. With the rise of MEV we see that ratio of block value to block issuance go up, and we definitely see some of those similar, weird incentives pop up in layer-one security. That’s definitely a huge component of research. I wouldn’t say it’s quite a huge worry, but there are a lot of people thinking about this.
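A back-of-the-envelope version of that incentive problem, with entirely made-up numbers and a crude model where the chance of winning a reorg race simply equals your share of the network:

```python
def reorg_edge(head_block_fees: float, my_share_of_network: float, honest_reward: float) -> float:
    """Expected gain from trying to steal the head block's fees versus just extending it."""
    return my_share_of_network * head_block_fees - honest_reward

print(reorg_edge(head_block_fees=2.0, my_share_of_network=0.2, honest_reward=0.1))  #  0.30 -> reorg looks tempting
print(reorg_edge(head_block_fees=0.2, my_share_of_network=0.2, honest_reward=0.1))  # -0.06 -> extend honestly
```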

Martin: That’s an interesting insight, at least for me, that reducing MEV might be effective for decentralization.

Danny: Absolutely. It is certainly a concern. I think in a world in which hobbyists couldn’t access MEV, and you had major institutions pouring hundreds of millions of dollars into optimizing MEV, that’s definitely not a good outcome for decentralization. There’s a huge parallel track of research right now on democratization and MEV minimization that I think is going to be critical to the security of the system in the future.

Martin: I mean, even if it’s completely democratized and everyone would get the same, you would still have the variance issue. 

Danny: You do have those consistent payouts, which maybe keep your machines running, keep your lights on, but you would also see these larger, lumpier payouts. So it definitely changes the optimum of having these very clean, consistent revenue streams. It changes the calculation, certainly.

Friederike: The other thing that ETH 2.0 is going to bring, though not immediately, is scalability through sharding. For some reason this is lumped together with the proof of stake transition, but it won’t come live until next year at the very earliest, I’m guessing. Is that correct?

Danny: Sure. So, yeah, ETH 2.0, I often say, is to increase security, sustainability, and scalability while retaining decentralization. This is so Ethereum can provide a secure and sustainable environment for the world’s decentralized applications. At the crux of sharding, at the crux of this consensus mechanism being able to provide more scale, is really sophisticated mechanism design.

It turns out that it’s much simpler to design these mechanisms when you have a known validator set, when you have the consensus entities at hand to essentially orchestrate and dictate through mechanism design. With proof of work, there’s no notion of validators; at any given time, there’s no notion of a consensus set.

Sharding, which relies heavily on randomly sampling validators and having consensus entities validate subsections of the system at any given time, really needs to know the set. You could imagine there are some proof of work sharding designs, but they ultimately end up trying to mimic a proof of stake sharding design.

There might be some sort of election into a set: you have to mine for a certain amount of time and produce a certain number of blocks, and then all of a sudden you’re in the set and you can be sampled. It’s much cleaner and much simpler to just have a known validator set. With proof of stake, we get that out of the box.

Really the foundation is: get a proof of stake consensus mechanism out there, and then come to consensus on valuable things, such as the execution layer of Ethereum today, the application layer, and also more things. Sharding arose as: take this consensus mechanism and utilize it further for the value of Ethereum.

On the prioritization between these two major upgrades, the merge and sharding, the question was which should come first. The really nice thing is that there are tons of scaling efforts, layer-two scaling efforts, going on in parallel to all of this layer-one development and coming online today. These rollups scale with the amount of layer-one data and aim to provide something like 10-100x scaling for Ethereum.

By moving the sharding designs later, what we get to do is the merge, which gets rid of proof of work and makes the system more secure and more sustainable. Simultaneously, we get all these layer-two scaling solutions, which make the system more scalable. Then further down the line, we bring on sharding, which will complement these layer-two scaling solutions by providing more layer-one data to get even more scale.

Essentially, all these rollups are buying us time. We get to work on the other two components through the merge and then prioritize sharding. If we did sharding today instead of the merge, we would get all these layer twos and all the scale from sharding, but we wouldn’t get the increased security and increased sustainability. Instead, we get to tackle them all at once at these different layers and then enhance it further down the line.

Martin: From your perspective, is it an option on the table to just stop at proof of stake — basically saying, we do the transition to proof of stake, but after that we drop sharding, and that's it?

Danny: I don't think so, but I mean, I don't get to decide. You could imagine some movement from the community being like, this is enough, stop messing with it. With 10 to 100x scaling from roll-ups, that would give us something; it would be a very valuable and powerful tool, but I don't think it would give us the scale that much of the community has imagined throughout the years. What we do have going on right now is that there are sharding specs up in the spec repo. There are people working on R&D and working on prototypes. Even within the next few weeks, we might see a very small sharding dev-net, which builds upon the Beacon Chain and the merge.

Ultimately, from an engineering perspective, one had to be prioritized over the other, or both would just take longer. So yes, there are a lot of unknowns. There are always a lot of unknowns, and the roadmap and the technical specifications have often been in flux. You could imagine that in 12 months something has changed radically, but I do think that come the end of the year, with the hardening of the sharding specs and increasingly some test nets from the people sprinting on prototyping, we'll have a good idea of how this slots into the next 12 to 24 month roadmap.

Obviously there's this thing we call ETH 2.0, which is all the stuff we talked about so far, but there's also tons of other stuff people always want to do and prioritize. It's a constant kind of debate and discussion. Other things you might've heard of are state management and state sustainability through statelessness and other techniques, which help reduce the load of running a full node, help light clients, all kinds of stuff.

There are a lot of different parallel R&D efforts. I think I can say very confidently that the next major thing to go to mainnet will be the merge, but post that it's a very active discussion and debate.

Friederike: Yeah. I'd love to do another episode sometime about sharding and the other efforts, just because having them on the side like this doesn't do them justice. But I do actually have a couple more very tangible questions on proof of stake. Who will determine the block gas limit after the merge?

Danny: At the merge, the execution layer and everything dictating that realm is ported over. Instead of the proof-of-work miners producing blocks and sending signals on that, it's the proof-of-stake validators. In the construction of a block that carries this execution-layer payload, the block gas limit can similarly be dictated by a validator rather than by a proof-of-work miner.

That knob still exists. The block producer can still turn that knob; the block producer just happens to be a validator now, instead of a miner. Similarly, the 1559 fee mechanics, which are expected to go live in the next couple of months, at their core relate to the block producer, right? And the block producer post-merge happens to be a validator rather than a miner.
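[Editor's note: a simplified sketch of the two "knobs" discussed here: the per-block gas-limit adjustment and the EIP-1559 base fee. The function names are illustrative and this is not any client's actual code; the 1/1024 gas-limit bound and the 1/8 base-fee step are stated here as simplifications of the protocol rules.]

```python
GAS_LIMIT_BOUND_DIVISOR = 1024          # per-block gas-limit move capped at roughly parent/1024
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8     # EIP-1559: base fee moves by at most 1/8 per block
ELASTICITY_MULTIPLIER = 2               # EIP-1559: gas target = gas limit / 2

def next_gas_limit(parent_gas_limit: int, desired: int) -> int:
    """The block producer (a validator after the merge) nudges the gas limit
    toward its desired value, but only within +/- parent/1024 per block."""
    max_delta = parent_gas_limit // GAS_LIMIT_BOUND_DIVISOR
    if desired > parent_gas_limit:
        return min(desired, parent_gas_limit + max_delta)
    return max(desired, parent_gas_limit - max_delta)

def next_base_fee(parent_base_fee: int, parent_gas_used: int, parent_gas_limit: int) -> int:
    """EIP-1559 base-fee update: rises when blocks are fuller than target, falls when emptier."""
    gas_target = parent_gas_limit // ELASTICITY_MULTIPLIER
    if parent_gas_used == gas_target:
        return parent_base_fee
    delta = (parent_base_fee * abs(parent_gas_used - gas_target)
             // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    if parent_gas_used > gas_target:
        return parent_base_fee + max(delta, 1)
    return max(parent_base_fee - delta, 0)
```

Nothing in either rule cares whether the block producer is a miner or a validator, which is Danny's point: the merge swaps who holds the knob, not the knob itself.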

Martin: On this topic, I have a very high-level question on how we see, or how you see, Ethereum, specifically the question of what the purpose of Ethereum is, and whether there's potentially even a conflict of interest. One way to look at Ethereum very critically right now is to look at the high fees and see users and applications paying enormous fees. A very negative reading would say, well, they got locked into this chain. They started there assuming everything would be cheap and free, and now the maximum possible rent is being extracted from them in fees. And the narrative is getting stronger around ultrasound money, or kind of making a lot of money with Ethereum. What's your comment on that? Is there a chance of coming to lower fees? Or maybe even the bigger question: what is an L1 for?

Danny: I don't claim to be an ultrasound moneyist. I do think that the foundational asset in these systems must be valuable, because these are crypto-economic systems and they rely on economic consensus models, which generally are more secure when that foundational asset does have value. I get the push there. There are probably two pushes there; some people are trying to make a lot of money.

Ultimately a secure Ether, a highly valuable secure Ether does provide good properties to the system. That aside, the crux here is ultimately capacity, which is supply. There’s a demand that outstrips supply quite a bit, and people really want to use the system because of the network effects, because all the shit is there. Like you said, they’ve been roped into the system, that’s where all the action’s happening.

There's also an argument, and maybe again it's kind of the tail-risk thing, that the system is much more secure than other systems by virtue of everything happening there, by the network effects and the value of the foundational asset. Ultimately we need to figure out how to leverage this secure network for more capacity, and there are a couple of techniques there.

We talked about it earlier, but roll-ups go into this realm of: can we use this as a settlement layer for a parallel system and leverage it to get the same security out of it? It turns out that you can, and it has been quite a journey to construct systems like that. Plasma was this promising thing, but we kept running into the data availability problem.

There were all these kinds of ransom attacks and different issues that arise there. Roll-ups solve that and are extremely promising for using Ethereum more as a settlement layer, leveraging its security to secure much more activity, which is great. Then ultimately sharding: these techniques of randomly sampling consensus participants in a way that does not degrade security, but can come to consensus on more.

Ultimately, what is an L1? An L1 wants a security model with a certain amount of capacity, somewhere on a spectrum of decentralization. I think Ethereum, the Ethereum community, the Ethereum ethos, considers decentralization a very critical component of the value of these systems and of their value to the world. Thus all of those design decisions, sharding and how to get more capacity, are ultimately unwilling to really sacrifice on decentralization, and thus it has taken time.

It's difficult. I think one of the reasons that a lot of applications and things are here is because of that ethos, because of that philosophy, because of the value that we think decentralization is going to bring the world. We see a lot of other competing systems that make different design decisions, especially on that decentralization spectrum, and that provide much more capacity, and those systems may or may not have their place in the future. But Ethereum is attempting to build a certain type of future, and that's really the guiding light, and I think ultimately the value proposition.

 

Guests

  • Danny Ryan

Sponsors

  • Exodus

    Download the app today at exodus.com/epicenter and iOS users can buy up to $500 in Bitcoin for just $1 for a limited time.
  • ParaSwap

    ParaSwap aggregates all major DEXs and makes sure you beat the market price at every single swap and with the lowest slippage - paraswap.io/epicenter
  • Solana

    The Solana ecosystem is growing at a rapid pace and it’s a great place to build your project or get involved with the community. Go to solana.com/epicenter to learn more.


Subscribe to the podcast

New episodes every Tuesday