Episode 451

Ethereum Foundation – The Ethereum Merge


The hotly anticipated Ethereum Merge, due later this year, will join the existing execution layer of Ethereum (the Mainnet we use today) with its new proof-of-stake consensus layer, the Beacon Chain. It eliminates the need for energy-intensive mining and instead secures the network using staked ETH. This is a truly exciting step in the Ethereum vision of greater scalability, security, and sustainability.

Tim Beiko, who works on Protocol Support at the Ethereum Foundation, is heavily involved in the project. He joined us to give some deep insights into governance within Ethereum, a full update on how the merge is coming along and the next steps, and how Ethereum will look post-merge.

Topics discussed in the episode

  • Tim’s background and how he got involved in the space
  • Unpacking governance on Ethereum
  • The Ethereum client ecosystem
  • The current state of Ethereum’s testnet landscape
  • The Merge: what is it, and where do we stand?
  • What is coming next in the lead-up to the merge, and how is success measured?
  • How did the very recent Sepolia test go?
  • Scaling – what will the ecosystem look like post-merge?
  • The Ethereum roadmap

Sebastien Couture: I am here today with a new co-host, Joseph Schweitzer, who I am sure lots of you will recognize as a familiar face in the Ethereum ecosystem.

Joseph, thanks for coming on and co-hosting this one with me. It is quite fitting, since we are talking with Tim Beiko, who coordinates all of Ethereum's core developer meetings and works at the Ethereum Foundation on the protocol support team. Today we are talking about all things Ethereum merge, which is quite timely because there was a major testnet merge a few minutes before we started, and this will go out fairly soon after. Thanks for co-hosting this one, Joseph. Maybe tell listeners a little bit about yourself and what makes you a competent co-host on this topic.

Joseph Schweitzer: Well, if I am going to jump in on one, I feel this is an appropriate subject. I have been around the Ethereum space for a while; I do communications and PR work, with most of my time at the Ethereum Foundation as well, but I am sort of a general tinkerer. So anything in the layer-one blockchain space that passes the legitimacy sniff test is something that I have played with for a while. And, happy to jump in.

Sebastien Couture: Happy to have you on, and hopefully you can come on for most of our Ethereum-focused episodes, because I think you have a lot of insider information and good insights. So, glad to have you here.

Joseph Schweitzer: And the wonderful thing about public systems is that nobody has insider information if you are paying attention.

Sebastien Couture: Tim, thanks for joining us today, fresh off the merge on Sepolia. Welcome.

Tim Beiko: Thanks for having me.

Sebastien Couture: Tim, tell us a little bit about your background and how you got involved in this work.

Tim Beiko: First, thanks for having me on. I started getting interested in blockchains around 2013, 2014; I first heard about Bitcoin and got into that.

Then a couple of years later I heard about Ethereum through the DAO, back when the DAO was a project and not a hack. I had heard mentions of Ethereum before, but the DAO project was what actually got me to try it out. So, I literally bought Ether and bought DAO tokens the week, or maybe even the day, before it got hacked.

And then the next morning, I remember reading the post on Reddit saying, I think someone is draining the DAO. That was a pretty eventful couple of weeks after that, because not only was there this big hack, but afterwards there was the Ethereum Classic split, and you had to figure out how to split your tokens in order to prevent replay attacks.

And I knew absolutely nothing about blockchains then. So, I was just copy-pasting random commands into my terminal, hoping that I would not lose all my coins doing so. It was an interesting way to get into the space, but after the DAO there was this lull where it felt like Ethereum was not going to be a successful experiment, because if this is a smart contract platform and you cannot write a smart contract on it without it getting hacked, is there really a ton of value there?

So I kept following it a bit, and then in late 2016, early 2017, you started to see a bunch of projects use Ethereum again, and by mid-to-late 2017 there was this huge ICO boom. And there I realized there would probably be a lot of demand for Ethereum, even if all the applications in 2017 turned out not to work.

Clearly there were a lot of things you could do with a blockchain, and I decided I wanted to work at the protocol layer, because as a user it was still pretty rough to use Ethereum in those days. When there were ICOs and the mempool would stay congested for hours to days, it was just a really bad experience, and it felt like there was a lot you could improve there.

But I was not an engineer or researcher; I was a product manager. So it took me a while to find a product manager job working on the protocol, and not on a product built on top of the protocol. It took about a year, but then ConsenSys put together a protocol team and I joined that.

And I worked at ConsenSys for about two and a half years on their Hyperledger Besu client. So I got involved in mainnet protocol work through that, and as part of that, I worked a lot with Hudson, who was chairing the core dev calls at the time.

And then around 2020 he wanted to move on to other things, and I decided to step up and take the role. Since then I have basically been coordinating these developer calls that we have on Ethereum, where the different protocol implementation teams get together and chat about changes to Ethereum.

So that is how I ended up here.  

Joseph Schweitzer: And before we dig into core development and how governance and core dev calls work in general, what is your role today? You mentioned earlier you were with the Ethereum Foundation on the protocol support side. What is protocol support?

Tim Beiko: Our team is not very well known, but the team I am on at the EF is called Protocol Support, and it is a bit of an odd team, because before me there was no team. There was just Hudson, floating in the org chart. Then, as Ethereum has grown and gotten more complex in the past couple of years, a team was put together.

And there are folks like Danny and me, who chair these calls, and a bunch of folks who help gather input and share updates with the community, like Trent. We also have folks who work on specific stuff behind the scenes. There was the Sepolia merge today, and someone on our team was just working on getting the hash rate on Sepolia right so that we hit the merge at the right time.

And there is stuff like client grants or organizing workshops. So anything that supports the work of researchers and core developers on the protocol is stuff we try to help with.

Joseph Schweitzer: I think we were getting toward the same point, but we may have a little bit of a delay. Just so folks kind of get it, what does governance look like on Ethereum?

How does it relate to these core dev calls and who decides anything?

Tim Beiko: There is a lot to unpack here. The very first bit that is important to know about Ethereum governance, especially relative to other blockchains, is that we do not use coin voting or any sort of formal voting as part of the governance process.

And the rough reason is that we do not think coin holders are the only stakeholders, so we should not disproportionately optimize for them. Without coin voting, governance becomes a much messier process, and there are definitely some pros and cons to that, but I think overall it works quite well for Ethereum. Generally, the way changes to the protocol happen is as follows; this is the happy path.

And then we can talk about all the edge cases. Someone comes up with an idea. We have a pretty open process for proposing changes to the protocol: because all the specifications are public, anyone can come and put together a proposal to change something.

We use EIPs for that, mostly; again, there are some exceptions, but roughly we use EIPs. So if you want to change something on the protocol, you come, you put together an EIP, and then we usually ask that you get some feedback asynchronously from people who have relevant experience in the part of Ethereum you are changing.

So imagine you are changing gas prices or something; then you would probably reach out to some client teams, and you would probably want to do some benchmarking on why the new gas price is better. If you are adding something new, try to get some feedback from the people who will use this thing you are adding, and understand why it is important to them.

And once you have a proposal that is in decent shape, we have these public calls that happen every two weeks, called All Core Devs. There is a mirror version of that for changes to the beacon chain, but for now we can assume they are roughly the same thing.

So, we have these public calls where people come with a proposal and then discuss it. And then client teams basically decide whether this is a change they should implement. In practice it is incredibly rare that a change is accepted the first time it is presented; it depends on the complexity of the change.

It takes a small number of months to a small number of years to get a change included. And most of that time is spent in back and forth with protocol developers, who try to understand: does this change actually benefit people, and most importantly, does introducing this change pose a security risk to Ethereum?

So, there is a long list of changes that would be beneficial to end users, but there are outstanding security issues with them, and so we cannot have them in Ethereum. Then, assume you go through this process and you convince everybody on the client teams that, okay, this change should go in.

Client teams will typically then write the code for your change, write tests, and so on, and then they put out the software. There still needs to be adoption of this software by the entire Ethereum community. And this is the part of Ethereum governance which is very different from a lot of other projects and L1s: when the client devs put out the software, you can think of it as an opinionated suggestion.

They think these are the changes we should make to Ethereum, and that we should make them. But if people like stakers and node operators do not upgrade their nodes, those changes just do not happen. In practice, usually by the time we have put out a set of changes, the community will adopt them.

And the reason is that, earlier in the process, we try to prune changes we think would not be adopted, just because it is quite messy to have an upgrade that is highly contentious. To be clear, those have happened in Ethereum in the past, but generally, if it feels like something does not have broad community support, and there is not a strong enough rationale to include it despite that, then it is in the client teams' best interest not to include it, because they are the ones who have to deal with the fallout of a messy network upgrade.

So there is definitely this check by everyone involved in Ethereum on the changes that go out. And this often leads to fairly intractable conversations about what the Ethereum community is and who should get a say, and I do not think there is a single answer there.

Different changes bring out distinct parts of the community with strong opinions. But it is really this process of trying to come up with a set of changes that client developers think will be adopted, and proposing them. In the default case, they usually end up being adopted, but it is not something we can take for granted or force upon people.

Sebastien Couture: That is the first time I have heard someone really explain the governance process for upgrades on Ethereum. I spend a lot more time on the Cosmos side of things, where of course there is coin voting. Governance is an issue that affects all blockchains, whether it is Bitcoin, or Ethereum with its processes, or blockchains with built-in coin voting governance.

Do you think that coin voting governance could add some level of useful signaling in Ethereum governance?

Coin governance has its flaws, and we have certainly seen that in the Cosmos space recently, but I think that as a signaling mechanism it is highly effective. We saw recently some proposals on some Cosmos chains get 90% or above buy-in. I am curious what your thoughts are here.

Tim Beiko: That is a good question. I do not think it adds much at the level of the Ethereum L1, and that is not to say it is not useful in other contexts, but for us, there are two outcomes you can basically get from such a signal.

Either it is mixed, or it is strongly in favor. For any case where it is strongly in favor, we could get that signal quite easily anyway. Take something like EIP-1559, or even the merge: I think if you polled coin holders about the merge, they would tell you they are all in favor, because it reduces issuance.

Maybe some of them have a vested interest in proof of work, so they would vote against it, but my rough feeling is it would be a large majority in favor. And if you take something like EIP-1559, it would probably be even clearer, because coins are getting burnt and that is undoubtedly good for coin holders.

But because we do not have a formal process to gather other signals beyond these calls, coin voting would elevate that one signal above the others, and it is not necessarily the thing you want to optimize for, especially in the short term.

Ether holders are a stakeholder in the governance process, but they are not the only one, nor are they necessarily the most aligned. You could argue the client teams developing Ethereum, even though they are not the biggest coin holders by several orders of magnitude, have a desire to see Ethereum thrive as long-term infrastructure that maybe the coin holders today do not have.

If you take a very cynical view of this, maybe most of the coin holders today are just holding for the merge, because that is a trade for them, and they will move on to the next thing.

And I am not saying this is the case, and I doubt it would be in practice, but it is not clear to me that a signal from coin holders at a specific block, elevated above all the other signals we have, actually adds value. The root reason is that you are optimizing for many stakeholders, and there is obviously some alignment between coin holders and the others, but it is not complete.

And I think the more your project is aligned with coin holders, the better a signal those governance votes are. Imagine a quite simple hypothetical DeFi product where the coin basically gets part of the profits from the operation of that product. In those cases, I think it is entirely reasonable to have the coin holders be the main, if not sole, decider of governance, because they are building this thing, it has a clear business model or flow-of-funds structure, and they can optimize for that.

The users are obviously maybe not coin holders, but the users are the ones who generate your profit, so you want to keep them happy, and this is basically a simple alignment problem. Whereas for Ethereum, there is obviously Ether the asset, but I think that is a subset of what people are trying to build and what users use it for.

The very simplest example: you could say, well, it is good for coin holders when fees are high and a lot of ETH gets burned, but that is obviously very bad for users. So you do not want to over-optimize for them. Ethereum has also had coin votes in the past, and I am not sure we gained much information from them.

And this is not even getting into the technical parts of it, where a lot of the ETH is not in a position where it could vote. For example, some of it is in multisigs, some of it is wrapped in DeFi, some of it is in cold storage, so you are not even getting a vote of coin holders.

You are getting a weird vote of the ETH that is readily available and willing to take some procedural risk to go and vote. So for Ethereum I am pretty against it, but I think for some other projects, where the incentive alignment is much closer with coin holders alone, it makes a ton of sense.

Joseph Schweitzer: We are going to get to the merge in just a minute. I did want to make one point of clarification, one further question when it comes to governance, digging into the stakeholders you were mentioning. For a lot of folks listening in, it may have been a while since some of these terms came up: consensus layer clients, execution layer clients, and you mentioned an entire secondary call.

So, in the old days, All Core Devs was all the core devs, and now you have these two layers working in unison. Can you explain a little bit about what this client ecosystem looks like, your thoughts on this facet of Ethereum governance, and where it is headed in the years to come?

Tim Beiko: Everything we have talked about so far describes almost the pre-beacon-chain Ethereum, where even then, unlike other blockchains, Ethereum had many implementation teams. The Ethereum protocol is specified using, basically, math and some readable but not optimized code.

And then there are different teams who implement versions of this protocol. I think the best analogy is web browsers, where if you go to ethereum.org on Chrome versus Firefox versus Safari, you get the same webpage, but Chrome and Safari each have a bunch of different features that they optimize for.

And so, you can think of Ethereum client implementations like that: there is a single specification they follow, akin to a browser implementing HTTP and DNS resolution, but then there are a bunch of degrees of freedom where they make their own optimizations around sync speed, database storage, the efficiency of API requests, and so on. This means they all get a say in the governance process.

So it is not just a single implementation team deciding; it is basically a set of them. And to make this even more complicated, in practice we now have what we call the execution layer of Ethereum, which is the current proof-of-work chain where smart contracts and user balances live.

But now we also have the beacon chain, which is the proof-of-stake implementation of Ethereum, and this beacon chain has an independent specification and a set of independent implementations. That means that, for governance, they need to come to consensus across all these teams for changes.

And now we have the merge happening, which combines these: we remove proof of work from the current execution chain and instead rely on the existing beacon chain to bring consensus to the network. This means that, from both a technical and a governance perspective, these two layers will merge together, where people from, say, the beacon chain give input into decisions that affect the execution chain, because they are affected by them, and vice versa.

And so this is probably what the next couple of years of governance for Ethereum look like: finding a way to cleanly merge those two things. There are some interesting aspects of both that I think we want to preserve.

So I think that on the execution chain, we have this very open process that is fairly well documented, and people can come in. The beacon chain, because it launched separately from Ethereum, wanted to optimize for speed.

And so they have been much better at executing efficiently, and the cost is that it is a bit harder for outsiders to follow the process and the specific changes. As we merge these together, hopefully we can preserve the speed we have had from developing these things independently and keeping them a bit more modular, while also making the process for specifying changes across the entire Ethereum stack very clear and transparent. That is the important thing we will be working on next.

Joseph Schweitzer: But in short, you would say it is much more of a peer review process than a participatory, token-holder kind of thing.

Tim Beiko: It is, though "peer review" feels a bit too distant compared to what we have. Peer reviews are usually anonymous, and usually one or a couple of rounds of very formal, discrete reviews. I think what we have is much more fluid, where you get a bunch of reviews from your peers.

But people are not anonymous; they can be, but generally they are not, or at least not all of them. And it is also not just open to experts. This is probably the other thing as well: take, for example, EIP-1559, which was a popular one.

Or think of EIP-3074, which is another popular change that has not made it into Ethereum yet. It is not just client devs reviewing and deciding on these. It is the community, and different parts of it, like smart contract developers, will come and have strong opinions, and those also need to get incorporated.

Tim Beiko: So it is this weird mix where there is a lot of public review, and it is also open to anyone to come in. In practice, we do not get every stakeholder to come in every time, but when a change affects a set of stakeholders, you can be sure that someone from that group will show up and have a strong opinion.

And this is kind of what you want: you want the most qualified person, or the person with the strongest opinion, to be heard, and to make a decision based on that.

Sebastien Couture: So, can you talk a little bit about the testnet landscape, what that looks like, and which testnets are currently running?

Tim Beiko: That has changed a lot recently; I can share where we were at and where we are hoping to go. Ethereum has a lot of testnets, and they are obviously useful both for application developers, who can deploy their contracts to them before going to mainnet, and for client developers, who can deploy protocol changes to them before going to mainnet.

But the problem with testnets is they end up being very unstable over time. One, if they run on proof of work, proof of work is not meant to run with a low hash rate, and you get a bunch of volatility in block times. And two, if they run on proof of stake, you are asking people to run validators for no rewards.

So, it is quite hard to keep them stable, and over time, like any normal blockchain, their history and state size grow, so it becomes harder to sync a node. With the merge coming, we decided this was a good time to revisit what our testnets are and what we want them to be going forward.

So basically, we launched a new testnet for the merge called Kiln, and this is the first one we will be shutting down right after the merge. The idea was just that we wanted something, earlier this year, that anyone could use to see what post-merge Ethereum feels like.

That includes infrastructure providers, smart contract developers, and so on. It was a new testnet we launched whose only purpose was to be there as a merge testnet before we merged the other ones. So, once we have merged mainnet, that one will go away. And then if you look at the longer-lived testnets, we basically have Görli, Ropsten, and Rinkeby, as well as Kovan.

Kovan had been half deprecated for over a year and is now fully deprecated, because OpenEthereum is basically the only software that can run validators on it, and that client has itself gone from a semi-deprecated state to being fully deprecated. So, if you are using Kovan, it is not going to transition through the merge.

You should migrate to another testnet, because as soon as the merge hits, the network you are using will no longer reflect the state of the Ethereum network. Then there is Rinkeby, which is maintained by the Geth team. It has been around for a long time and now has a large state and history, which makes it harder to run nodes on, so we are also not transitioning this one through the merge.

But because it has a lot of applications depending on it, we have a bit of a longer shutdown period: it will probably be live for about a year from now. Although, as soon as the merge happens, it will no longer be a good copy of mainnet.

It will mostly be there for legacy reasons, but in about a year it should be shut down. Then we had Ropsten, which was another really old testnet. This one was running on proof of work, and it has always been really chaotic, because getting proof-of-work hash rate on testnets is quite hard.

That means the average hash rate you get is really low, but then if somebody shows up with a miner, they overwhelm the network, which causes super quick blocks, and when they leave it causes super slow blocks. It is just a bit annoying to maintain.

So it was always in this quite chaotic state with a lot of reorgs. We transitioned this one through the merge first because it felt like, well, it is already quite chaotic if we break it; it is probably the least bad testnet to break. And now it is running under proof of stake.

But after the merge on Ethereum mainnet, we will be shutting this one down as well, sometime before the end of the year. This leads us to the two testnets we are going to keep maintaining. The first is Görli, which is also old, but of the legacy testnets it is the one with the strongest community around it.

There is a diverse group of validators on it, there are a lot of enthusiasts running nodes on Görli, and there is also the biggest beacon chain testnet, Prater, anchored to it. So Görli is a testnet we will be maintaining going forward, and we are going to run it through the merge.

It will be the last testnet we actually run through the merge, so that validators have a dress rehearsal they can try before going to mainnet. If you are using Görli, it is not going away anytime soon. And then lastly, because Ropsten and Rinkeby were old and large.

Last fall, we launched a new testnet called Sepolia. It transitioned to proof of stake today, right before we recorded this. The good thing about this new testnet is that it is super lightweight to sync: if you want to run a node on it, you can get up to speed in less than an hour.

That makes it really easy to maintain. The downside is there is obviously not as much stuff already deployed there, so there is not as much interop between contracts. But we will be keeping this one as well, and hopefully growing the amount of stuff on it over time.

And the final difference between Görli and Sepolia is that, because Görli already had a really large validator testnet, we are going to keep it as an open validator set. So if you just want to try stuff with your validator on a testnet, you can always use Görli, and even after the merge with Prater you will still be able to do that.

But again, when people have these testnet validators, they often end up not really caring for them, which causes some amount of instability. On Sepolia, instead, we have a whitelisted set of validators; these are all infrastructure companies and the like that commit to running them for multiple years.

That makes it quite a stable experience for developers. So, to recap all this: Görli and Sepolia are the two testnets that are sticking around long term; those are what you should be using going forward. The others we are going to be shutting down. Kovan you should consider basically already deprecated, to be shut down soon. Kiln, the merge testnet, will be shut down right after the mainnet merge. Ropsten will be shut down before the end of this year, and Rinkeby will probably be shut down sometime next year, though it will be lagging behind mainnet, so it will not be a great staging environment anymore.

Joseph Schweitzer: And this leads us on. If you are a developer, that is good to know; if you are a user, you have probably been waiting to hear what the merge is and where we stand now, because we have been talking about testnets and testnet merges and what happened today. I will just hand you the mic for a while: what is coming up?

Tim Beiko: It is useful to give a bit of context on how we got to where we are today, and then cover the next couple of steps. Basically, we have been testing the merge for over a year now, and if you count the launch of the beacon chain, it is more like two and a half years.

As I mentioned earlier, we launched this beacon chain separately from the Ethereum application proof-of-work chain because, at the time, there was already a ton of usage on Ethereum, and we wanted to make sure that the beacon chain worked and was stable, and that if it wasn't, it wouldn't break everything else on Ethereum.

We launched it, and after it had been up and running for a while, we started to prototype how we would actually combine these two networks. Really early designs for this had a migration as part of them, where users would have to move from one chain to the other.

And that just felt like a bad user experience; ideally you do not want them to have to do that. Then there was this insight a few years ago that the clients that run Ethereum, the proof-of-work chain today, are already used to the concept of different consensus algorithms.

Mainnet uses proof of work, but a lot of the testnets do not; they use what we call proof of authority, or just different consensus engines. Similarly, a lot of enterprise private networks also use different consensus engines. So there was this idea: what if, instead of moving all the applications over from the proof-of-work chain to the beacon chain, we simply had the software that runs all the applications change which consensus algorithm it listens to for the state of the network?

And this is the very high-level design of the merge: once we hit a certain point, the clients on the network stop listening to proof of work as the way to get the latest valid head of the blockchain, and they start listening to the proof-of-stake Beacon Chain. This means that you do not need to move the applications over from one chain to the other.
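The "certain point" Tim refers to is a threshold on accumulated proof-of-work difficulty, known as the terminal total difficulty. A minimal sketch of the switchover rule he describes (the function name and threshold value here are illustrative, not actual client code; in real clients the threshold is a network-specific configuration value from the merge specification):

```python
# Illustrative sketch of the merge switchover rule described above.
# The threshold value is a placeholder; each network configures its
# own terminal total difficulty.

TERMINAL_TOTAL_DIFFICULTY = 10_000  # placeholder value

def consensus_source(total_difficulty: int) -> str:
    """Decide where the client takes its chain head from.

    Before accumulated proof-of-work difficulty reaches the terminal
    threshold, the head comes from proof-of-work mining; at and beyond
    it, the head comes from the proof-of-stake beacon chain.
    """
    if total_difficulty >= TERMINAL_TOTAL_DIFFICULTY:
        return "beacon-chain"
    return "proof-of-work"

print(consensus_source(9_999))   # proof-of-work
print(consensus_source(10_000))  # beacon-chain
```

The point of keying the switch on total difficulty, rather than a block number, is that it cannot be gamed by miners producing a short fork of empty blocks to reach a target height early.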

Tim Beiko: You can simply use a different rule to tell you what the latest valid block was, and keep building on the existing chain. We first prototyped that in May of last year: there was a hackathon, and we got some of the client teams together to prototype whether the post-merge architecture could work, where you have a beacon chain client that keeps track of the head, receives blocks, and sends them down to its execution client, which runs the EVM transactions, makes sure those transactions are valid, and then confirms the block, or orphans it if not.
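The division of labor Tim describes, where the consensus client tracks the head and the execution client validates payloads, can be sketched roughly as below. The classes and field names are hypothetical; in real clients the two halves are separate processes communicating over the Engine API:

```python
# Rough sketch (hypothetical classes, assumed field names) of the
# post-merge client split described above.

class ExecutionClient:
    """Runs the transactions and judges payload validity."""

    def __init__(self):
        self.balances = {}  # grossly simplified account state

    def new_payload(self, payload: dict) -> str:
        # Check the payload's transactions first, then apply them;
        # a negative value stands in for any invalid transaction.
        if any(tx["value"] < 0 for tx in payload["transactions"]):
            return "INVALID"
        for tx in payload["transactions"]:
            self.balances[tx["to"]] = self.balances.get(tx["to"], 0) + tx["value"]
        return "VALID"

class ConsensusClient:
    """Follows the beacon chain and decides the canonical head."""

    def __init__(self, execution: ExecutionClient):
        self.execution = execution
        self.head = None

    def on_beacon_block(self, block: dict) -> None:
        # The head only advances if the execution client confirms the
        # payload; otherwise the block is orphaned, as Tim describes.
        if self.execution.new_payload(block["payload"]) == "VALID":
            self.head = block["root"]

ec = ExecutionClient()
cc = ConsensusClient(ec)
cc.on_beacon_block({"root": "0xaaa",
                    "payload": {"transactions": [{"to": "alice", "value": 5}]}})
print(cc.head)  # 0xaaa
```

The design choice worth noticing is that consensus (ordering) and execution (state transition) are decoupled, so either half can be swapped for a different implementation, which is what makes Ethereum's multi-client ecosystem work post-merge.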

So, we prototyped that in this hackathon and got it to work, which was good confirmation that the design was sound, with the two clients talking to each other. Obviously, it was a hackathon, so there were a ton of bugs; we spent last summer fixing all those bugs and ironing out a final architecture.

And then, last fall we prototyped the actual transition from proof of work to proof of stake. So, could you start a network up on proof of work, start a beacon chain separately, have them transition without falling apart throughout, and safely finalize on the other side?

So we got everyone in person actually for this, for a week. And, after a week of work, we managed to get a first prototype of that working where we launched a devnet and it started on proof of work, moved over to proof of stake and it finalized on the other side. And so again, this was just a weeklong hackathon, so there were tons of bugs and issues.

And we spent all the fall after that fixing those. By basically the Christmas holidays, we had a spec for the entire merge and post merge Ethereum which we felt was stable, not perfect but roughly right. So, we launched this first test net called Kintsugi in late December.

And this was just to give people an idea of what post merge Ethereum would look like, for us to reach out to infrastructure providers and application developers to make sure that things didn’t break when they were using it, and we got that confirmation.

We also ran a bunch of stress tests on the network, spamming it and putting nodes in weird states. In doing that, we found some edge cases in the spec, and we fixed all of those. And in March, we launched the Kiln test net, which I mentioned earlier, which was basically, call it, a 95% final spec for the merge.

So, we have done some small tweaks to the spec since then, but no big substantive changes; they have mostly been either clarifications or fixes for edge cases that a subset of clients were hitting, but the overall functionality has stayed the same since then.

And after launching Kiln, we still felt it would be good to get some more practice runs because you learn a lot when the network runs through the merge, and it is a bit weird in a way because this is code that only has to run once on main net, and there is a ton of complexity there.

But then once the merge has happened, you never need to run through the transition again. We wanted to make sure we got as many runs of that as possible. So we have started doing these things called shadow forks, which are basically hard forks where we only launch a small number of nodes, controlled by client and testing teams, which have the hard fork.

And then we see how it goes for them. So, you can think of it like this: there are thousands of nodes on the Ethereum network. Well, we take ten to a hundred, we spin them up, and we tell that small number of nodes, hey, the merge is happening here. And then we actually run through the merge on those nodes and see what happens.

And for a couple of days also we can replay transactions from main net as well. So, we get to see not only running through the merge on main net, but also are the nodes stable afterwards, can you still sync and whatnot. So, we have been doing these shadow forks over and over, we had our 11th or 12th one earlier this week.

So basically, every week since late March, if not multiple times a week sometimes. And that has been really good, to help us test not only every client implementation, but also every pairwise combination. So earlier we mentioned there are these clients that work on the execution chain and these clients that work on the beacon chain.

There are four on one side, five on the other. This means there are 20 pairwise combinations of one beacon chain client and one execution client that you can get. And so, in the shadow forks, we basically test every single possible permutation, and we want to make sure that they all work. And then when we find bugs, we obviously fix them.
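The "four times five" arithmetic above can be written out directly. The client names below are the major implementations at the time and are listed for illustration only; the actual shadow fork tooling is not shown here.

```python
# Enumerate every (consensus client, execution client) pairing a shadow
# fork campaign would want to exercise. Client names are illustrative.
from itertools import product

execution_clients = ["Geth", "Nethermind", "Besu", "Erigon"]               # four
consensus_clients = ["Prysm", "Lighthouse", "Teku", "Nimbus", "Lodestar"]  # five

pairs = list(product(consensus_clients, execution_clients))
print(len(pairs))  # 5 * 4 = 20 combinations
```

Testing every permutation matters because a bug can live in the interaction between a specific pair, not in either client alone.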

And this is roughly where we were a month or two ago, and now we feel much more confident in the readiness of the code. We are not a hundred percent there yet, but we felt ready enough that it made sense to start forking the long-lived test nets on Ethereum.

And there were a couple of reasons for that. The first is that Ropsten, which we mentioned earlier, is quite a chaotic test net already. So even if something went wrong, it was not the end of the world. But we also wanted to give end users, like people running validators at home, the chance to run through a merge.

Because in all these shadow forks, the nodes are basically controlled by client teams and testing teams; it is not open to anyone to run a node. So we had this first Ropsten fork where we launched a new beacon chain for Ropsten that anyone could join, and we had people participate, the network moved over, and generally the transition went well.

So that was good; we found a couple of bugs that we then fixed. And then today we ran through the merge on the second test net, Sepolia, and the goal there was to make sure that the bugs we saw on Ropsten would not show up again. It generally looks like the fork went well, but we are still digging into the details and combing through everything.

Tim Beiko: Once we have that, we basically have one more test net to do before main net, and that is Goerli, with the Prater beacon chain. And this is a test net where we really want things to be quite ready, because this is where the majority of stakers are expected to run through the transition with their setups, in anticipation of main net.

And it has a ton of activity as well; it is not a test net we want to break. Once we have debriefed on this Sepolia fork and seen whether there are any issues we need to fix or address, we will start scheduling Goerli. And then once Goerli happens, assuming it goes well, we would look at scheduling the fork for main net.

And the goal is really just to get as many dry runs as we can, in increasingly complex scenarios. If you start super complex from the start, you are going to hit a bunch of issues and it is going to be really hard to find a root cause. But if you start with these small dev nets and increase complexity over and over, you are always fixing bugs at the edge of the complexity.

Tim Beiko: And it just makes it much easier to fix those bugs if you are increasing complexity as you go. In short: we have spent the past year testing, two of the long-lived test nets have merged to date, there is one left, and assuming things go well on that one, we would move to main net.

Joseph Schweitzer: So, to recap: merge-specific dev nets, long-standing merge-specific test nets in line with these shadow forks, upgrading the long-standing Ethereum test nets that we covered earlier, and then main net, with one more long-standing test net left to go. And that is Goerli.

Tim Beiko: That is correct. Goerli merges with the Prater beacon chain, and we will just call it all Goerli after it is merged.

Joseph Schweitzer: So what do you look for in a successful merge, and when it is done, what is next? I know that a lot of work has been done on Shanghai, which a lot of those listening are probably curious about.

There are some misconceptions about the merge that I am sure you are very familiar with and can speak to, from token unlocks to scalability, and some of those things are covered in the next set of upgrades that come after.

So, what’s left coming up until the merge and then what comes next?

Tim Beiko: Coming up until the merge, basically we want to monitor these test nets, not only right after but also, call it, a week after, so that we can still sync nodes to them and things work well. We have a whole slew of metrics that we look at, from some pretty basic stuff, are blocks being produced, to are transaction fees for every transaction being routed the right way.

So now it is really just combing through all these metrics, making sure that things are looking good, and fixing anything where they are not. And when we find a bug, we will typically write a test for it and then run all the clients through the test suite, because it is possible that only one client hit something during a merge, but all of the clients actually have the bug and the others just happened not to hit it.

So now it is fine-combing through all of that. And if you look further out and assume the merge has happened and we are all good, there are obviously more things on the Ethereum roadmap beyond that, and maybe the first, which you hinted at, is this idea of beacon chain withdrawals.

Because the merge is probably the most complicated upgrade we have done to Ethereum since launching Ethereum, we have tried to cut anything that was not absolutely critical from it, just so we can have the smallest possible set of changes, which is already a pretty big set.

The biggest cut was the ability for validators to withdraw their stake back to the execution layer, to fully exit as a validator. So that is the first big feature we are planning for after the merge.

Typically, when we have these hard forks with new features, we will introduce more than one, so there is a bunch of other proposals as well, but that is the stuff we will be working on right after. One thing I will note, though, with regards to validators and withdrawals: while validators will not be able to withdraw their stake after the merge, the 32 ETH plus the rewards they have accrued, as soon as the merge happens validators will receive the transaction fees that currently go to miners, and they will receive those on the execution layer itself.

So those fees will not be locked on the beacon chain, which is kind of neat: if you are a staker, or even using a staking pool or something, you will start accruing the non-burnt part of transaction fees right after the merge.

Sebastien Couture: Let’s talk about what happened just a couple of minutes ago, the Sepolia merge. Can you talk about that in the context of this broader roadmap, and how significant it is to the broader merge effort?

Tim Beiko: So, as we were saying, it is the second of three test nets, and it is this new one which we hope to keep stable. The validators on this network are a set of client teams, testing teams, and infrastructure providers; it is not quite anyone, because we want this network to be stable. We basically open it to any individual or entity that can commit to running stable validators over an extended period.

But there is not just a webpage where you can sign up and launch a validator, like there is for Prater. So this was a good test, because it shows us there is still some group of distinct entities and individuals that need to coordinate and debug things.

But it is still not as open as the beacon chain or main net, which increases the complexity of things. Because we are on this call, I have not been digging through all the specific things that happened, but at a high level, right before I jumped on, it seemed the network was relatively stable.

There was some portion of validators that hit some issues or were offline. It seems folks are still looking into that, but we will know better in the next couple of days what types of bugs we hit, and whether they were bugs or actually just configuration issues.

If they are configuration issues, that is quite good because, call it operator error rather than protocol problems, you can just restart the validators with the right config and it works. That is what we are looking at.

I think once we have a clear picture there, we will start thinking about when we want to fork this last test net. Our bar for the last test net will be higher, because we want to make sure that any staker can run through it and that it is as close to main net as possible. Then, assuming that goes well, we would move to merge main net.

Sebastien Couture: And so, anybody would be able to run a validator on this next test net?

Tim Beiko: You can already; you do not need to wait for the merge. You can literally go and register a validator on Prater and have it be up and ready for the merge. If you want to run a post merge validator right now, you can do so on Ropsten. Ropsten has already merged, so you will not run through the actual transition, but you can figure out, okay, how do I configure my validator so that I actually get transaction fees?

For some validators, for example, today it is possible to use a third party provider instead of running an execution layer node to track deposits. After the merge it will not be possible to do so: if you have been using Infura or Alchemy with your beacon chain client to track deposits, you cannot do that after the merge.

So you need to figure out, how do I run Besu, Nethermind, Geth, or Erigon myself? And you can do all that on Ropsten today, register a validator, and make sure that it is working.

Sebastien Couture: So, as we wrap up here, it would be interesting to talk a little bit about scaling, and what scaling will look like post merge.

And I think it is interesting that Ethereum is taking this data availability approach and allowing, sort of, ecosystem chains to build on top of this base layer. What is your view about what this ecosystem will look like post merge? Where will the majority of applications live?

And I think one interesting thing to consider is interoperability, beyond bridging and things like that. What work is being done on interoperability to ensure that all of these different rollups and chains built on top of the data availability layer will be able to talk to each other, do cross-chain calls, and so on?

Tim Beiko: Over time, in terms of designing Ethereum, there has been this evolution. If you compare Ethereum when it launched to Bitcoin, it was this maximally complex blockchain relative to Bitcoin, in that it concerns itself with arbitrary computation and state and whatnot.

Well, the blockchain landscape has evolved a ton since then, and the design philosophy for Ethereum has shifted to: okay, what is the minimal set of things we can provide with the highest level of security, which then enables people to build applications, scaling solutions, and whatnot that depend on it?

And so one very obvious shift: Ethereum originally wanted to do something called execution sharding, where there is a bunch of L1 shards, which each process computation in parallel and are managed by the protocol. We have moved to a world where, instead, we would rather allow a free market of solutions to emerge and use Ethereum as more of a settlement layer.

And this is what we have seen with L2s today, and this is how we think about scaling: the L1 chain’s capacity will keep improving, and there is some stuff we can do, but it will not improve as quickly as the demand for block space grows. Imagine we improve the capacity 10x or 100x over the next 5 or 10 years.

The demand for block space might grow hundreds or thousands of times; we need several orders of magnitude more, and this is where L2s can provide that. The way L2s work, at a high level, is that they trade on this asymmetry: on Ethereum it is very expensive to run computation, but fairly cheap to store data.

Whereas on an L2 it is actually quite cheap to run computation. So if you can run all your computations on the L2 and post some data about them back to L1, that allows you to lower your fees a ton, because you are running all these computations off-chain and not actually running much on Ethereum L1.

ZK rollups and optimistic rollups differ in how they approach this, but at a high level there is this asymmetry between the cost of running operations and the cost of posting data. So if we could get scaling solutions that run most computations off-chain.

Either run some proofs on chain, or some disputes on chain, but then mostly post data, that allows for way cheaper transaction fees and for more scale. Today, when these rollups and scaling solutions post data back to Ethereum, the only mechanism they have to do so is storing the data on the blockchain forever.

And this has two consequences: one, every single node on the chain needs to process that data, and two, they need to hold onto it basically forever. That means it is still expensive to store data on Ethereum, and so when you pay for a transaction on an L2, most of the cost you are paying is to store this data back on Ethereum.
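A back-of-the-envelope sketch shows why posting data dominates rollup fees. The gas prices per byte come from EIP-2028, which governed L1 calldata at the time; the batch size is a made-up illustrative number, not any particular rollup's figure.

```python
# Gas charged just for including data as transaction calldata (EIP-2028):
# 16 gas per non-zero byte, 4 gas per zero byte.
GAS_NONZERO = 16
GAS_ZERO = 4

def calldata_gas(data: bytes) -> int:
    """Gas cost of posting `data` on L1, before any execution at all."""
    return sum(GAS_NONZERO if b != 0 else GAS_ZERO for b in data)

batch = bytes([1] * 50_000)   # a hypothetical 50 KB rollup batch, all non-zero
print(calldata_gas(batch))    # 50_000 * 16 = 800,000 gas of pure data cost
```

Even with all execution moved off-chain, the rollup still pays this data bill on every batch, which is exactly the cost the data availability proposals below aim to shrink.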

The first thing we can do there is ask, okay, how do we make it cheaper to store data on Ethereum? Because that means it is cheaper for all these rollups to be built, or conversely, that they can accommodate more demand for the same price, and therefore scale. And one thing about rollups is that they do not actually need this data to be stored forever on the network.

They only need it to be posted on the network and available for some amount of time, such that people can agree that it was there and that it was correct, and, if it is not correct, have a reasonable amount of time to dispute it. Typically this is on the order of a week, more or less.

So there is this assumption within rollups that if the data has been made available for roughly a week, it gives time for people to sanity check it: make sure that there are no malicious or buggy transactions, and no mismatch between what the L2 thinks the state of the world is and what L1 thinks the L2’s state of the world is.

And if that happens, it is fine; this short period of time is really the window where you want to provide really strong security guarantees around this data. After that, you almost do not need to provide much, because you can assume that if there was a dispute, it would have been resolved.

And if somebody wanted to make a copy of the data, they could have done so already. So this is where we get into this idea of data availability, which is to say: what if, instead of having these L2s store data on the blockchain forever, they simply post it to another place where it is still secured by Ethereum, but we do not guarantee the data is available forever?

We guarantee it is available roughly for as long as rollups need it, with some buffer on each side. Like I was saying, rollups usually need data live for on the order of a week, and the proposal for Ethereum right now is: what if we just provided a way to make data available for on the order of a month or so?

So even if you need it within a week, maybe you need to sync a node; literally, you might need to go buy a computer, get an internet connection set up, get the guy to come to your house and set it up, and sync your node, and this extra buffer would give you enough time that you could still recuperate the data that way.

And so when we talk about proto-danksharding, this is roughly the idea: what if we added a data component to Ethereum which is ephemeral but still highly secure? You can then charge less for this data component, because you are not incurring a forever storage cost, only a short-duration storage cost.
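The "ephemeral but secured" idea can be modeled as a store with a retention window. This is a toy sketch, not the actual protocol design: the class, its methods, and the month-long window are illustrative assumptions based on the discussion above.

```python
# Toy model of ephemeral data availability: blobs are only guaranteed for
# a retention window (roughly a month, per the discussion), after which
# nodes may prune them and no availability guarantee remains.
RETENTION_SECONDS = 30 * 24 * 3600  # ~one month

class EphemeralBlobStore:
    def __init__(self):
        self._blobs = {}  # blob_id -> (posted_at, data)

    def post(self, blob_id: str, data: bytes, now: float) -> None:
        """Record a blob along with the time it was posted."""
        self._blobs[blob_id] = (now, data)

    def get(self, blob_id: str, now: float):
        """Return the blob if it is still within its retention window."""
        entry = self._blobs.get(blob_id)
        if entry is None:
            return None
        posted_at, data = entry
        if now - posted_at > RETENTION_SECONDS:
            return None  # pruned: past the window, no guarantee anymore
        return data
```

The key property is that storage cost is bounded per blob: a node commits to roughly a month of storage, not storage forever, which is what makes the data cheaper to post.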

And the next level beyond that is: what if, instead of having all the nodes on the network incur this short term storage cost, you could scale the amount of data being stored, have nodes only store a subset of it, but get a really high probabilistic guarantee that the rest of the data is available on the network?

And when we talk about full sharding for Ethereum, or Danksharding, which is the latest spec for it, this is what it means: we take this infrastructure for storing an ephemeral amount of data, but instead of having every node store a full copy of all the data, you have every node store a subset of it and do some cryptographic checks across the peer-to-peer network to ensure, with really high probability, that other nodes are storing the rest of the data.

I don’t know if the odds are one in billions or trillions, but there is an incredibly low chance that some amount of data would be unavailable on the network. And by doing that, you are able to scale the amount of data that you have by roughly another order of magnitude.
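A hedged illustration of where "billions or trillions" odds can come from with data availability sampling. Assume, as a simplification, that the data is erasure-coded so that making it unrecoverable requires withholding at least half the chunks; then a node that samples k random chunks and finds them all present is fooled with probability at most (1/2)^k. These numbers are illustrative, not the actual Danksharding parameters.

```python
# Simplified data-availability-sampling bound: if an attacker must withhold
# at least half the erasure-coded chunks, each random sample detects the
# attack with probability >= 1/2, so k samples miss it with prob <= (1/2)**k.
def fooled_probability(k_samples: int) -> float:
    return 0.5 ** k_samples

print(fooled_probability(40))  # on the order of 1e-12: "incredibly low"
```

Each node does only a constant amount of sampling work, yet collectively the network gets near-certainty that the full data set is out there, which is what lets per-node storage shrink while total data scales up.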

And then those cost savings, or basically that scaling bandwidth, get passed on to layer twos and other solutions. So this is roughly the vision: can we home in on the parts where security matters the most, put all of our efforts there, and provide these incredibly high security assurances that this data was made available, that it was correct, and that the network came to consensus on it?

But then, beyond that, outsource to the community and the ecosystem the ways to store and manage that data. And your other point was around cross-rollup communication and stuff like that. Again, that is something where it feels like the L1 protocol itself is not the place for that to live; it is probably much healthier if you just see a market emerge and the best solutions gain traction there.

Joseph Schweitzer: So, to recap slightly: the merge is happening, and after the merge happens, we dig into things that look like optimizations that help with scaling in L2 land. But as we come close to closing, I will give you the mic back to talk a little bit about where some folks maybe fell off the wagon, in that as research changes, titles change, naming schemes change, and roadmaps change.

For those that have been on the Epicenter train for closer to a decade, there were four stages to Ethereum’s roadmap: Frontier, Homestead, Metropolis, and Serenity, and that is long gone. For those that came around during the ICO era, the question is "when is ETH2?", which still kind of exists today in a couple of different places, and people confuse one another with token name changes that do not exist.

But this all revolves around some phases. Would you say this is where the roadmap stands now? Is it the merge, then these kinds of optimizations, and whatever else into the future? What is today’s Ethereum roadmap?

Tim Beiko: I guess, on the meta part of your question: clearly the Ethereum roadmap has changed a lot, so it is probably naive to expect that today is when it gets set in stone.

I think maybe the biggest conceptual change is that we are a bit less linear now than we were before. Because we have grown the number of people who contribute to the protocol, we are able to do stuff in parallel to a degree that we could not. When you looked at the Frontier-to-Serenity roadmap, or the ETH2 phase zero, phase one, phase two roadmap, they all assumed: we are going to work on A, then we are going to work on B, then we are going to work on C.

Whereas today, obviously we need to ship things one after the other, we cannot ship every single thing at the same time, but there are definitely different teams, and even different people within teams, working on the various big issues with Ethereum, or areas of improvement, and those efforts are happening in parallel.

So we talked about beacon chain withdrawals and we talked about sharding, but we did not talk about MEV and proposer-builder separation, we did not talk about statelessness, and we did not talk about the EVM and continuing to improve that. Those are all threads that are happening in parallel today.

And at a high level, there are some protocol related things that we need to do to ensure that Ethereum scales and is safe, and that we do not end up in a spot where there is some centralized actor within the system who can exert a really high level of control and influence.

And there is a ton of work being done on those fronts. There is also a ton of work being done making sure the EVM stays relevant and keeps improving, and luckily they are not blocked on one another. What happens is, whenever something is ready to go from research to production, we will typically prioritize it and get working on it at the client level.

And we have been lucky so far that it just has not happened that two R&D efforts are ready to ship at the same time. If that happens in the future, which it probably will, then we would have to decide which one is higher priority, or whether we want to bundle them together. So I think there is not so much a single big roadmap.

And that is one thing we have tried to do with the naming. You talked about ETH2 versus ETH1; today we call it the consensus layer versus the execution layer. The best naming strategy going forward is just to try and describe things very plainly, as what they are.

Like, the consensus layer helps the Ethereum network come to consensus on the valid tip of the chain; the execution layer runs the transactions. I think once we have sharding live, it would be fair to call that a data layer, or a data availability layer.

And similarly, if you look at MEV land, when they start to think about proposer-builder separation, they are taking this one step further. They are looking at: okay, what is the role of a validator? What are the different tasks that a validator can do? What are the degrees of freedom?

And can we segment that in even more granular detail so that we can analyze it better? So that is my hope: if we can go from having these very vague terms to just saying, okay, these are the five biggest problems, we need to address them, and this is the solution that is going to address each of them, I will be happy.

Obviously, it is not my call alone, but I have appreciated that we are trending in that direction.

Joseph Schweitzer: And I think we are going to have to save proposer-builder separation for the second round.

Tim Beiko: You need someone else to come and give a two hour talk on that.

Sebastien Couture: The next time we have someone from EF, it will be to answer the question when ETH3.

We need to start getting some content out early about ETH3, given our history of covering this sort of topic on Epicenter.

Thanks for coming on, Tim. It has been great, and hopefully now our listeners get a much better view of the overall arc of the merge and everything that is coming up next. I certainly have a much better understanding now. Hopefully we can get you on again soon, in a couple of months, when things start to move into production.

Tim Beiko: Thank you so much for having me. 

Sebastien Couture: Great thanks.



  • Chorus One

    Start earning rewards and contribute to network security by staking with @ChorusOne, a staking provider securing $5bn in assets on over 25 decentralized networks. Head over now to https://chorus.one to start your staking journey.
  • ParaSwap

    ParaSwap aggregates all major DEXs and makes sure you beat the market price at every single swap and with the lowest slippage - paraswap.io/epicenter
