We’re joined by Alexey Akhunov, an independent Ethereum researcher. Alexey has been working on an ambitious project called TurboGeth. As the name implies, it is a version of Geth which features a number of speed and performance optimizations.
Alexey also leads the state rent working group of the Ethereum 1.x project. Ethereum 1.x came out of Devcon when core developers began to realize that the full migration to Serenity would likely take several years. The team hopes to bring progressive improvements to Ethereum in parallel to the development of Serenity.
Topics we discussed in this episode
- Alexey’s background as a computer scientist
- The story behind TurboGeth and how it differs from the original Geth client
- The speed and performance optimizations of TurboGeth, as well as its trade-offs
- What is Ethereum 1.x in the context of Ethereum 2.0 (Serenity)
- Which people and projects are part of Ethereum 1.x
- What is state rent and why it may be beneficial to Ethereum
- Implementing state rent in Ethereum 1.x and 2.0
- eWASM and how it can be introduced in Ethereum 1.x and 2.0
- The future of Ethereum and the progress towards Serenity
Sebastien Couture: And so today our guest is Alexey Akhunov. Alexey is an independent researcher who works on Ethereum and on different projects with the Ethereum Foundation. We talked about a couple of different things: primarily the roadmap for Ethereum, what is known as Ethereum 1.x and how it relates to Ethereum 2.0, Serenity, and also his work on a project called Turbo Geth. It was a really interesting conversation because it allowed us to see how the Ethereum roadmap has evolved since Devcon with this new point release called Ethereum 1.x.
Friederike Ernst: With Ethereum 2.0 looming so prominently on the horizon, it's often easy to forget that Ethereum 1 is actually going to be here for a while longer, and that it's necessary and totally worthy of people's time to actually make improvements on it. We talked with Alexey about what he's doing in that field and how it's going.
Sebastien: So without further delay, here's our interview with Alexey Akhunov.
Friederike: We're here with Alexey Akhunov today. Alexey is an Ethereum researcher.
Alexey Akhunov: Hello. It’s good to be here.
Friederike: Fantastic. Let's jump right in. Can you tell us what your background is and what you're currently concerning yourself with?
Alexey: Yes, since I was very young I was always doing programming. I guess that has been my profession all my life. I wrote my first computer game in Basic when I was 12 years old, and my brother helped me debug it as I didn't know how to debug things, and then it led to me going to university to study computer science and programming. Then I did a PhD in computer science and worked in multiple places, and it was always programming, so I learned a bunch of programming languages. Eventually I learned about cryptocurrencies in 2012 from a colleague at work, and we started to have these kind of lunchtime conversations. At first I didn't actually understand it, so it took quite a few attempts for this guy to make me understand what it is. Then I started to research Ethereum when I was thinking about decentralized storage. The reason I really thought about it is that you've probably heard about the idea of mesh networks, like a mesh net internet. One of my colleagues at work said that we should have this mesh internet instead of internet providers, but I said, well, how are you going to do search? Because search requires storage, so you need disk storage. So I started looking into things, and eventually I found the Ethereum white paper. I looked at it and I thought that it would still not solve the storage problem, but it was interesting in its own right. Since then I have been following Ethereum, and in June 2017 I went full time working on these projects, mostly related to Ethereum, though sometimes I did some other things. At the moment I'm mostly working on this Ethereum 1.x project, which is about ensuring the longevity of the existing Ethereum network. So that's my current thing. Oh, and I forgot that I was working on Turbo Geth for most of the last year. It's kind of my own version of an Ethereum client.
Friederike: Cool, so maybe let's just jump right in with Turbo Geth. I know you've talked about this on many occasions. Geth is one of the main Ethereum clients, and Turbo Geth is some sort of improvement on it, right?
Alexey: Well, yes, in certain ways. It is not currently functionally superseding it, but my goal is for it to functionally supersede it. Yes.
Sebastien: So what are some of the issues that you saw in Geth that you felt you could improve upon and that ultimately led you to start building Turbo Geth?
Alexey: The way it started is that I simply did some profiling. There's a very good toolkit for profiling, so it was very easy for me to just run a Geth sync under the profiler and see where it spends most of its time. I saw that a lot of time was spent going into the database, and then I started digging deeper and I realized that, wow, each access to the state actually needs multiple accesses to the database. I thought, well, this is really weird; this was not how I expected it to be. Digging deeper, I wanted to fix that part. I wanted essentially that each access to the Ethereum state takes one hit on the database and no more, and that was the defining goal of this. It then turned out to be quite a deep rabbit hole which took me a year to explore.
Friederike: How much better does Turbo Geth fare compared to Geth?
Alexey: At the moment it's better in two areas, I think, and it might be on par or maybe slightly worse in others. It's better in terms of the compactness of state history. Let's say you run an archive node, which is a node where you have the entire history of state expanded, not in the blocks but expanded, meaning that you can access it really fast. I haven't run a Geth archive node for a while, but the last time I did, in summer, its expanded state was about 1.5 terabytes on disk. In Turbo Geth, however, the expanded state in an archive node is currently about 360 gigabytes, which is a big difference. That's the one thing. The second thing is the actual speed of access. When you want to run some data analytics and retrace transactions from the past, Turbo Geth is probably up to 250 times faster, so I can do data analysis on it that I wouldn't be able to do on Geth. For example, for my recent project I could retrace all transactions to gather the number of SSTOREs per block in about two days, that is, tracing all the transactions in all seven million blocks. Imagine if I had to do it with Geth: it would probably take, I'd say, 50 times longer, so something like half a year, which as you see becomes impractical.
Friederike: What have you actually implemented in order to actually facilitate these improvements?
Alexey: Essentially I changed the way the Ethereum client represents its state in the database, and that's the main difference. Most of the clients that exist now, and I think probably all of them except for Turbo Geth, store the state as what they call a trie, or Merkle Patricia tree, which is essentially a tree where each node has at most 16 children. The property of this tree is that if you want to read or write a certain entry you have to start from the root and go down the tree, and the deeper your entry is, the more hops down the tree you're doing. Another important bit about this hopping is that you cannot do the next hop before you have done the previous one, so there's a data dependency. This is actually the first thing that I observed: because of the data dependencies, you cannot do these things in parallel, you have to hop from the root down to the leaves. In Turbo Geth I decoupled the state storage from this Patricia tree. I realized that the only reason why you need the Patricia tree is to compute the state root, but you can store the data however you like, and I like to store the data in a flat format where the key is the hash of the address and the value is the serialized value. It means that when you want to access a certain item in the state, you just need one query to the database, and that makes most things much faster.
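The flat layout described here can be sketched in Go roughly as follows. This is a toy illustration, not TurboGeth's actual schema: the FlatState type and its methods are invented for this example, a plain map stands in for the on-disk database, and sha256 stands in for the Keccak-256 hashing Ethereum actually uses.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// FlatState is a toy flat state store: the key is the hash of the
// address and the value is the serialized account. Each read or write
// is a single lookup, with no root-to-leaf trie traversal and no data
// dependency between hops.
type FlatState struct {
	db map[[32]byte][]byte
}

func NewFlatState() *FlatState {
	return &FlatState{db: make(map[[32]byte][]byte)}
}

// Put stores a serialized account under the hash of its address.
func (s *FlatState) Put(address, serialized []byte) {
	s.db[sha256.Sum256(address)] = serialized
}

// Get answers a state query with one database hit.
func (s *FlatState) Get(address []byte) ([]byte, bool) {
	v, ok := s.db[sha256.Sum256(address)]
	return v, ok
}

func main() {
	s := NewFlatState()
	s.Put([]byte("some-address"), []byte("nonce=1;balance=100"))
	v, ok := s.Get([]byte("some-address"))
	fmt.Println(ok, string(v)) // true nonce=1;balance=100
}
```

The Merkle Patricia tree then only needs to be maintained or recomputed where a state root is actually required, rather than serving as the primary storage layout.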
Sebastien: So what are some of the trade-offs? Because it seems to me like this is, of course, something we would want, and we would want all the Ethereum clients to adopt similar improvements. There must be some trade-offs at some point for someone to still want to use the regular Geth client?
Alexey: Yes, at the moment there are some things that are not supported in Turbo Geth which work in Geth, but I think they will all be superseded. One of the things that I cannot do in Turbo Geth is what they call fast sync. Fast sync is a way of joining the Ethereum network where you download the current state starting from the root and then just rebuild the Merkle tree, and this particular way of syncing requires a certain query which gives you the hash of a node in the Patricia tree, and then you're supposed to answer with the serialization of this node. Because Turbo Geth doesn't store this Patricia tree at all, it has no way of responding to this query; it doesn't know where this particular hash lives, in which part of the tree or in which part of the history. Up until recently I thought this was going to be a big challenge, but after the workshop that we did recently I discovered that it might actually not be a problem. We're going to develop a new sync mechanism which will actually be more efficient with Turbo Geth than with Geth, because it has the flat structure. So I think in the near future Turbo Geth will functionally supersede Geth, if I have enough time to do this.
Sebastien: Just to understand the context in which you're building this: are you building this by yourself, or are you working in collaboration with the Ethereum Foundation in any way to incorporate Turbo Geth into Geth?
Alexey: Well, I haven't thought about incorporation yet. I did start it as my own project, and then I received a grant from the Ethereum Foundation in 2018, and I had some support from Infura, because for them it would be beneficial to reduce the storage requirements and potentially run their nodes on cheaper hardware, but because it's not functionally superseding Geth yet, that hasn't happened. I don't know whether this is going to replace Geth or not; I will leave this decision up to the go-ethereum team. I never tried to force it through, and I also didn't have the time and resources to actually try to port my changes into go-ethereum. I was very explicit about this with the go-ethereum team and they are fine with that; they understand that I'm also under constraints.
Friederike: Just to clarify, can you quickly talk about what the current state of Turbo Geth is? Is it live, can people download it?
Alexey: Yes, it is live. The current version does not yet have the Constantinople fork in it; it is currently rebased up to the state of go-ethereum as it existed somewhere at the end of January, I think, and you can download it now. You can do the full sync, which is the only sync it supports: you have to start from Genesis and apply the blocks. If you have a decent machine it will take you probably two weeks to do that, and you will end up with a file which is 360 gigabytes, and then you can do your RPC queries. You can process the blocks, and some of these RPC queries are much faster, some are slightly slower. But yeah, it's working. I haven't tested all the RPC queries, but generally it works. The light client is not supported and state trie sync doesn't work, but this could be fixed. Infura also managed to sync it some time ago, so it has had some independent verification; it's not just a figment of my imagination.
Friederike: So are you going to drive it forward, or is this going to become a thing for the Ethereum Foundation? Because maintaining this is a lot of work.
Alexey: Yes, I'm going to drive it forward, because there is another application for Turbo Geth. I'm also working with the Interchain Foundation because they are very interested in using Turbo Geth as the engine for their Ethermint project, and at the moment I'm figuring out how to merge this flat structure that I have in Turbo Geth with their AVL balanced trees, and I think I found a way to do it. My idea is to modularize the code so that I can use it both for the Ethereum client and for Ethermint. So there is definitely appetite for it, and as I said earlier, Infura is very interested in actually using Turbo Geth. I have also now discovered that the biggest use case for me, and hopefully for other people, is to use it as a data analytics source, because as I said, it's much more viable to do data analytics with Turbo Geth than with any other client at the moment. That could potentially be useful for companies like Google, because they are already looking into analytics in the cloud using Ethereum data. Not just Google but anybody else too.
Sebastien: I was just going to mention Google and the work that they're doing. We had Allen Day on a few months ago, an engineer at Google who's leading the project to bring different blockchains like Bitcoin and Ethereum into Google Cloud so that you can effectively query the blockchain and do data analytics in a very simple query language. So what are the applications there, and how could Turbo Geth be beneficial? I presume Google's got these really incredible machines and computing power is not really an issue for them. How is Turbo Geth beneficial in this case?
Alexey: Well, obviously if they don't care how much it costs to run these analytics, then there's nothing for me to offer. But if you do care about the cost, I think Turbo Geth can make the cost of running these analytics much smaller. Because of the efficiency, you can run it on a tenth of the hardware that you usually would.
Friederike: So the improvements that are made in Turbo Geth are mostly database-related?
Alexey: Yeah, it's mostly of that nature.
Friederike: So there are six or seven Ethereum clients, but only two that are really used, Geth and Parity. Could this also be used for Parity?
Alexey: Yes, and I did actually think about it and talk with Parity about it. They need enough motivation and enough justification to do this: it would require a big overhaul of their client's architecture, essentially the same kind of overhaul that I have done for go-ethereum, because you have to change many things. They had a similar project, not as ambitious, called Parity DB, which is essentially a flattened representation of the current state. It hasn't been integrated into Parity yet because they haven't found enough motivation for it, but it might be if we go ahead with some of our plans for advances in the clients. I sort of see that if this project keeps going we will see convergence, and other clients will implement it, especially if it becomes a kind of all-around benefit; there will be no reason not to adopt it.
Friederike: Yes, I see. So if there are no drawbacks, there's no reason not to also implement this superior database. You earlier alluded to the fact that you also do other things; you mentioned Ethereum 1.x. Can you quickly tell us and the listeners what you mean by Ethereum 1.x, how it differs from Ethereum 1 and Ethereum 2, and how Serenity and Constantinople actually fit into the picture?
Alexey: Yeah, so I'm going to walk you through what I call the short prehistory, as I usually introduce 1.x. It starts at Devcon in Cancun in November 2017, where Vitalik gave his closing speech called 'A Modest Proposal for Ethereum 2.0', and in that speech his suggestion was to keep Ethereum 1.0 as the conservative and safe chain, while most of the innovations would go into the shards on Ethereum 2.0. People thought, well, that makes sense. Then in May 2018 in Toronto Vitalik gave another presentation, I think it was called 'So you want to be a Casper validator', about running Casper validators on your laptop, basically signaling that Casper FFG (the friendly finality gadget) was near. But then, somewhat surprisingly for people, in June 2018 there came a pivot in Ethereum 2.0, meaning that Casper FFG on Ethereum 1.0 would not happen. Instead there would be a separate chain, called the beacon chain, launched in parallel to Ethereum 1.0, and the Casper researchers and the sharding researchers would be merged into one research team because they turned out to be doing lots of similar things. At first people thought, well, maybe that pivot means we're going to get sharding faster or Casper faster, but by October or November 2018, when Vitalik again laid out the potential timeline for Serenity, it became clear that it's actually not going to be that fast: it might take three years optimistically to functionally supersede Ethereum 1.0, and maybe five years not so optimistically. By functionally superseding, I mean you need to go through phase zero, phase one and phase two to actually get the same functionality in Ethereum 2.0 as we have in Ethereum 1.0; just launching a beacon chain is not enough. So then people realized that we have to live with Ethereum 1.0 for another three years at least, and probably another five, and look at what's happening.
And this kind of 'look at what's happening' was the initial chatter among the core developers. Like, what do you think is going to happen? We are really struggling with the growing state, with synchronization, things taking forever; is it going to work? This is how Ethereum 1.x actually started, and the reason why it's called 1.x is because we don't know if it's going to be 1.3, 1.5 or 1.7. So we just put the x in it.
Friederike: Cool, so as I understand it, there’s a couple of things that are actually part of Ethereum 1.x and different people are working on them. So maybe you can give us a short overview of who is part of Ethereum 1.x and what kind of projects they’re working on.
Alexey: Okay, so initially four working groups were initiated for Ethereum 1.x. One of the working groups is what we called state rent, and now we call it state fees because it might not be just rent. I took on the leading of this group, and the people who were there agreed; I hope it's still okay. Then there was another group about chain pruning. This group will be looking at something which is not related to the state directly, but which current Ethereum clients also have some issues with: storing the blocks, which I think is about 70 or 80 gigabytes now for the block bodies, and also the growing event log storage. We have to start pruning them at some point, and Péter Szilágyi has stepped in to lead this group. Everything is kind of fluid at the moment; the groups are not really restricted to these people, and we always welcome new people to contribute. The third group is the Ewasm group. Some people might be surprised why Ewasm is there, and I agree that it might sound a bit artificial, but when we initiated this, the belief was that something like state rent or state fees would be a kind of restriction on the resources that we give to the developers, and it's good to bring something in return. So you take something and you give something else; you're not just taking or just giving, it's a give and take. Ewasm has the potential to do this, and Ewasm could also help to reduce the number of point features that we have to introduce.
There was a lot of talk about introducing new precompiles for lots of different cryptographic primitives, and if you look at the history of preparing a release, a lot of time was spent on just implementing two or three precompiles because of all the testing. So instead of spending the core developers' time on that, why don't we just do what they call the 'last precompile', Ewasm, and then you can implement all the precompiles there. It has a lot of nuances, but these are the two reasons why Ewasm is there, and currently the Ewasm effort is basically led by the Ewasm team. I'm not going to list all the people because I would probably miss somebody out. The fourth group that has been formed is the emulation and simulation group. Essentially this is the group that tries to find out what tools we can use to support the other groups, like the state rent and chain pruning groups, to run test scenarios and try to predict what problems we're going to face in the future, what the first things are that will break. That's roughly the description. As I said, I don't have a list of people who work in each working group, because at the moment it's all very fluid and open. I see anybody who is contributing to this as part of the working group.
Sebastien: I just want to come back on this and maybe clarify a few things, because this was all very confusing to me, the version numbers and then Serenity and Constantinople. As someone who vaguely follows this, I thought it was confusing, so I can imagine just how confusing it could be for someone coming into the ecosystem. So there's Ethereum as it exists until now, and at some point the idea of Ethereum 2.0 was put out, but as it stands it looks like all the features on that roadmap won't be ready for production deployment until maybe three to five years from now, or at least until it arrives at stability; it might be some time. However, the Ethereum blockchain and the system as it stands has a number of problems and issues. So during Devcon these people came together and said, okay, let's form these working groups so that we can come up with a dot release of version 1 of Ethereum, which would include some of these features, so that we can continue to build the ecosystem and build apps on top of Ethereum.
Alexey: Yes, that's exactly the correct representation. People also realized that the fates of Ethereum 1.0 and 2.0 are linked; one cannot live without the other. And this is the important bit: you cannot just simply forget about what's happening on Ethereum 1.0 and hope that we will get there with Ethereum 2.0. One supports the other.
Sebastien: Okay. So in the 1.x version of Ethereum, which is this version that we aspire to build and release at some point, there are right now four things on the roadmap. One is state rent, which we'll come to in a few minutes. The others are chain pruning, meaning optimizations on the size of blocks and logs, Ewasm, and emulation and simulation tools. So what's left then on the Ethereum 2.0 roadmap?
Alexey: Okay, so Ethereum 2.0 is a very ambitious project, and parts of the work that we aspire to do in Ethereum 1.x will be very useful for Ethereum 2.0. For example, it seems to me that there is some consensus that state rent will be required in Ethereum 2.0, but the difference between state rent in the second Ethereum and the first is that in the second it could be pure: it doesn't have the legacy of the current ecosystem and current contracts, and you don't have to deal with the transitional issues, with all the things that you have to look out for. In the second Ethereum we can just introduce it in pure form without all the mitigations for certain vulnerabilities, but the lessons that you learn with the first will be invaluable for properly introducing it into the second Ethereum. I would say the same for Ewasm: you can learn a lot of lessons on the way and apply Ewasm in a much better way to the shards when they come in. So basically my conclusion is that everything that we do will be useful in the second Ethereum, because it will make it a better system; it will inform a lot of design decisions.
Sebastien: In addition to proof of stake and beacon chains and all these other things.
Sebastien: Okay. It feels like this was a natural thing to do anyway, to do things in an iterative sort of fashion.
Sebastien: It seems logical to me.
Alexey: Yes, there was a gap which was probably temporarily overlooked, and we simply recognized this as a gap; we still have to put resources into this gap rather than just shifting them over to the second Ethereum.
Friederike: I think this is a great preface for actually talking about these proposed things in detail. So can you briefly recap what you mean by state rent and why we need it?
Alexey: Okay. So first of all, the idea of state rent, or what some people used to call cold storage rent, is not a new idea; a lot of people entertained it back in 2014 and 2015 before Ethereum started, but previously people were concerned about the cost of storage. Essentially, when I expand the state of Ethereum, let's say by creating a new contract or by introducing a new item into contract storage, I pay for it once in gas and then it stays forever unless I decide to free it, which I might never do. People were representing this problem as: okay, you paid once, but then other people will have to pay for it until the end of time. At some point I realized that this was probably the failure of the previous state rent research, that it concentrated on this particular cost, the cost of storage. Because if you start this argument, you will very quickly find yourself arguing about things that you don't know: we don't know how much it costs to store things, we don't know, depending on the kind of storage, how the cost function tails off, and things like this. So instead we pivoted from this approach and decided we are only going to talk about the performance implications of the large state. The problem that we've seen is that as Ethereum gets more use, or even with constant use, the size of the state grows, and that brings some performance problems which we can observe. It's not something that we have to speculate about, it's something that we can measure, and this is one of the reasons why we have this emulation and simulation group to help us.
Okay, but there's the other part that I didn't answer. If we don't bother clearing the state, the concern is that state is probably something you use for a while, but the DApps come and go, and the state they've been using stays in the system, so everybody has to keep downloading it back and forth. What we can talk about is the total state, which is, let's say, 10 gigabytes at the moment; everybody has to download 10 gigabytes when they join the network. Then let's say there are six or eight gigabytes of this which nobody cares about: people rarely use it, it's just there because it was there first. And then there's the useful state, so new people who come to the system have to be content with using just two gigabytes. The problem is that this useful state keeps shrinking, so you essentially end up with a lot of garbage in your state which everybody has to shuffle around, while the actual useful space is really constrained. You can compare it with the state of the property market in central London or something like this, where there are a lot of empty houses. They are actually owned by somebody, but nobody really lives there, so all the people who want to live there have to be content with a very small number of houses, and obviously they will have to pay a lot of money for renting them and so on. So instead we get rid of them, build new ones and redistribute them, in a sense, via the rent. I think our premise is that this is important for longevity, so that the system does not become this dead ghost town where there are lots of empty houses and nobody can use them.
Friederike: Okay. So basically the idea is not to charge people once when they start using storage and then pay them back a certain amount if they free it up again, but to actually charge people by the day, or by the block, that they're using the storage?
Alexey: Exactly. So if they decide that they don't need these things anymore, they can just withdraw their Ether, or they can just leave it and then it will be garbage collected by the rent. I see the rent mostly as a garbage collection mechanism.
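As a rough sketch of that pay-per-block model: the rate, the field names and the exact draining order below are invented for illustration and are not fixed by any of the actual proposals.

```go
package main

import "fmt"

// Account is a toy account with a separate rent balance, as in the
// proposals discussed above. All numbers here are made up.
type Account struct {
	Balance     uint64 // regular Ether balance (toy units)
	RentBalance uint64 // balance earmarked for rent
	StorageSize uint64 // number of storage slots held
	LastPaid    uint64 // block number rent was last charged at
}

const rentPerSlotPerBlock = 2 // invented rate

// ChargeRent deducts rent for the blocks elapsed since LastPaid,
// draining RentBalance first and then Balance. It reports whether
// both balances are exhausted, i.e. the account is due for eviction.
func ChargeRent(a *Account, currentBlock uint64) bool {
	due := (currentBlock - a.LastPaid) * a.StorageSize * rentPerSlotPerBlock
	a.LastPaid = currentBlock
	if a.RentBalance >= due {
		a.RentBalance -= due
		return false
	}
	due -= a.RentBalance
	a.RentBalance = 0
	if a.Balance >= due {
		a.Balance -= due
		return false
	}
	a.Balance = 0
	return true // exhausted: eligible for eviction on the next poke
}

func main() {
	a := &Account{Balance: 100, RentBalance: 50, StorageSize: 5}
	// 10 blocks * 5 slots * 2 per slot per block = 100 due:
	// 50 comes from the rent balance, 50 from the regular balance.
	fmt.Println(ChargeRent(a, 10), a.Balance) // false 50
}
```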
Friederike: And a lot of programming languages actually have that built in as well, right? So basically, you free up space that you no longer need.
Alexey: Yeah, the difference between the programming languages and this is that we have a very difficult problem of determining what is not used anymore. That's why we need things like recovery: if we made a mistake and removed something that people actually need, there has to be a way to bring it back.
Friederike: Okay, so walk us through the process. Say you have a smart contract now and it uses storage, so it has to pay rent. It has to have funds, or someone needs to pay funds on its behalf. So what happens if no one pays?
Alexey: Under all three of the current proposals, when the smart contract exhausts its balance and its rent balance, which are two separate things, then eviction happens. Eviction, under the current proposals, doesn't just happen automatically; somebody actually has to poke the contract. By poking I mean that somebody has to create a transaction which touches it, which accesses it in some way, for example by querying the balance, and then at the end of the block this contract gets evicted. Eviction happens differently for non-contracts and contracts. For non-contracts, which basically just hold some Ether, eviction means simply removing them from the state, because there's no other useful information in them. For contracts, of course, there's storage, so eviction under the current proposals does not completely remove the contract from the state but leaves a so-called stub, which is essentially a commitment to the entire state of the contract before the eviction. This stub unfortunately means the contract is not completely removed from the state, it still has to dangle there, but the stub is what allows us to restore it later on, if it was a mistake and somebody realizes. The biggest example: if you had a multisig wallet with lots of tokens on it, and you made a mistake by not paying the rent, and you suddenly realize, my multisig is gone, there was a million dollars in it and I want it back, you would be able to use the recovery mechanism to rebuild the storage of this contract in another contract and simply use a special opcode to restore it from the stub. Then you get your contract back; you can top up the rent and keep using it, or you can move things elsewhere.
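A toy model of this evict-and-restore flow, based only on the description above: the Contract shape and the sha256-over-sorted-entries commitment are stand-ins (the real proposals would commit to the contract's storage trie root), and none of these names come from an actual client or proposal text.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// Contract is a toy account: code plus key-value storage.
type Contract struct {
	Code    []byte
	Storage map[string]string
}

// commitment hashes a canonical serialization of the contract. A real
// implementation would use the storage trie root; sorted entries under
// sha256 stand in here.
func commitment(c *Contract) [32]byte {
	h := sha256.New()
	h.Write(c.Code)
	keys := make([]string, 0, len(c.Storage))
	for k := range c.Storage {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(c.Storage[k]))
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// Evict drops the contract from the state, leaving only a 32-byte stub.
func Evict(state map[string]*Contract, stubs map[string][32]byte, addr string) {
	stubs[addr] = commitment(state[addr])
	delete(state, addr)
}

// Restore reinstates a contract only if the supplied reconstruction
// (recovered off-chain from history) matches the stub.
func Restore(state map[string]*Contract, stubs map[string][32]byte, addr string, c *Contract) bool {
	if stubs[addr] != commitment(c) {
		return false // wrong preimage: refuse to restore
	}
	state[addr] = c
	delete(stubs, addr)
	return true
}

func main() {
	state := map[string]*Contract{
		"multisig": {Code: []byte{0x60}, Storage: map[string]string{"owner": "alice"}},
	}
	stubs := map[string][32]byte{}
	Evict(state, stubs, "multisig")
	rebuilt := &Contract{Code: []byte{0x60}, Storage: map[string]string{"owner": "alice"}}
	fmt.Println(Restore(state, stubs, "multisig", rebuilt)) // true
}
```

The point of the design choice is visible even in the toy: after eviction the full storage is gone from the state, but anyone holding a faithful reconstruction can prove it against the 32-byte stub and bring the contract back.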
Sebastien: And where is the stub stored?
Alexey: The stub would still be stored in the state; this is the price we pay for recovery. The stub is expected to be a 32-byte hash which is a commitment to what the contract looked like before it was evicted.
Sebastien: Okay, and I’m not sure I understand how that solves the problem of freeing up state if the stub is stored in the state.
Alexey: So this is basically a non-perfect solution, and we have a more complete solution down the line, but we want to see if this non-perfect solution is actually going to be enough. Obviously, for contracts which have no storage at all, or very little storage, the stub is going to be comparable in size to the storage itself, so there’s not much benefit in clearing them. But for contracts which have a lot of storage, say 10,000 or 10 million items, the benefit of clearing will be quite big: instead of millions of items, you have one hash in the state, and everybody has to download only that one hash rather than all the items. It’s a non-perfect solution, but we hope that it might be enough for our purposes.
Sebastien: Okay, I’m still not sure I understand. So the stub itself contains just the hash of the state. But coming back to this idea of recovering a multisig wallet, what happens if that data gets deleted from the blockchain?
Alexey: Okay. So this data will have to be recovered from the history, obviously. If you want to recover your multisig, you have to go to an archive node, or some node which still has the history, recover what the state was, reconstruct that state on chain, and then instruct the EVM to restore it.
Sebastien: Okay. So it doesn’t alleviate archive nodes from having to store this data, it only alleviates regular nodes from having to store the state data.
Alexey: Exactly. The problem we are seeing is not actually the disk space that users have; as I said, we are trying not to care about that too much. We are now looking only at the active current state that everybody has to download when they join the network, which is the much more acute issue to solve. Your multisig will be deleted from that state, but it leaves this stub that you can then use to prove “this was the state of my multisig, please recover it”, and it will be recovered.
Friederike: Can you use this as a feature? So basically saying: this is a contract I want to have on the public ledger, but I don’t need to access it often, maybe only once a year or so. So I will let it run out of rent, and then basically only the stub has to be saved by everyone, and I will restore it when I need it again.
Alexey: Yes, of course. You can probably use it to save some money on rent. Yes. So you are just hibernating your contract.
Friederike: That sounds very similar to the stateless contract design that you find in some other chains, for instance Polkadot uses this, and also RChain. Can you compare the two?
Alexey: Yeah. So the difference is that with stateless contracts, we assume that when the contract is represented as a stub, it’s still accessible by normal operations. In our proposal, when the contract is in the hibernated state, when it’s a stub, it’s not accessible by anything. It’s basically invisible to the EVM, with the exception of the special opcode which restores it; only that opcode can see the stub, nothing else can. To other contracts or other observers, it looks like it’s not there. With the stateless contract paradigm that’s not true: it’s supposed to be usable, you’re supposed to be able to mutate that state, to access the bits in the storage. With our mechanism you have to first restore it, bring it back on chain, and then you can use it. When you finish using it you can let it expire and clean it up if you want to.
Sebastien: So earlier you talked about a perfect solution and a non-perfect solution. What would the perfect solution look like?
Alexey: So yes, if we find that this is not enough, for example if we find that there will be tons of little contracts leaving hash stubs behind and the state is still not small enough, the “perfect” solution, which is not actually perfect either, is to completely remove the contract from the state. There are three alternatives for how to deal with that. The first is no recovery at all; this is the nuclear option, basically saying that once it’s gone from the state, there is no way to bring it back. The second option is what Vitalik suggested in his paper on resource pricing: when you want to revive a contract which is not in the state, not even as a stub, you need to prove two things. You obviously need to reconstruct the state that it had, and then you need to prove that at some point in the past this was the state of the contract, and the way you prove it is through the header chain and the state root. So that’s the first thing you prove, that it existed at some point in the past. The second thing you need to prove is that it did not exist at any point after that. This is what they call an exclusion proof. So first you do an inclusion proof, and then you do an exclusion proof, and the exclusion proof is tricky because you basically have to prove, for every block since the eviction, that the contract wasn’t there. There are ways to optimize it. For example, if we mandate that every contract has to live for at least 1024 blocks, then we don’t have to do an exclusion proof for every block, but only for every 1024th block. The way we can mandate it is to say that whenever you create a contract, you prepay the rent for 1024 blocks in advance, to make sure it will not get evicted before then.
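The optimization above can be made concrete with a short sketch: with a mandated minimum contract lifetime, non-existence only needs to be proven at checkpoints one lifetime apart, not at every block. The function name and structure are illustrative, not from any proposal text.

```python
def exclusion_checkpoints(eviction_block: int, current_block: int,
                          min_lifetime: int = 1024) -> list:
    """Blocks at which non-existence must be proven for a revival.
    Because every contract must live at least `min_lifetime` blocks,
    a contract recreated between two checkpoints could not have been
    evicted again before the next one, so one proof per interval suffices."""
    return list(range(eviction_block, current_block, min_lifetime))

# Naive scheme: one exclusion proof per block since eviction.
naive = (1000 + 10 * 1024) - 1000          # 10240 proofs
# With the 1024-block minimum lifetime: one proof per interval.
optimized = len(exclusion_checkpoints(1000, 1000 + 10 * 1024))  # 10 proofs
```

The reduction is a factor of `min_lifetime`: the prepaid-rent rule is what makes the coarser checkpointing sound.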
The third way to do it, which is what I call the graveyard tree, is that eviction of a contract requires access to some kind of graveyard Merkle tree where all the evicted contracts live. So if you want to evict something, you say: this is the state of the graveyard tree, this is the place where I’m going to put this contract, now I’m putting the contract into the graveyard, and here is the modified Merkle tree proof. Later on, if somebody wants to revive it, they give a proof that the contract is inside the graveyard tree, together with the update of the graveyard tree without the contract. So you take it out of the graveyard and put it back into the chain, and you never have to do exclusion proofs. But this requires everybody who ever wants to evict or restore contracts to have a full copy of the graveyard tree. We hope it will not get to these measures. I described this a little bit in my first proposal; I excluded it from the second for simplicity and might bring it back in the third, but basically we hope we will not need it because it is slightly more advanced.
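Here is a highly simplified sketch of the graveyard idea. A real design would use a Merkle tree with membership and update proofs; here a flat hash over the sorted set stands in for the tree root, and all names are illustrative.

```python
import hashlib

def graveyard_root(evicted: set) -> bytes:
    """Commitment to the set of evicted contracts (stand-in for a
    Merkle tree root)."""
    return hashlib.sha256(",".join(sorted(evicted)).encode()).digest()

def evict(evicted: set, address: str):
    """Whoever evicts must know the current graveyard to produce the
    pre- and post-update roots that everyone else can verify."""
    before = graveyard_root(evicted)
    evicted.add(address)
    return before, graveyard_root(evicted)

def revive(evicted: set, address: str):
    """Revival removes the contract from the graveyard, so no exclusion
    proofs over chain history are ever needed: the graveyard itself
    records what is currently evicted."""
    if address not in evicted:
        raise ValueError("contract is not in the graveyard")
    before = graveyard_root(evicted)
    evicted.remove(address)
    return before, graveyard_root(evicted)
```

The trade-off Alexey names is visible here: both operations require the full set, i.e. every evictor/reviver must maintain a complete copy of the graveyard.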
Friederike: Looking at this: if this were a completely new system, obviously it makes a lot of sense that you don’t pay for storage once and then use it essentially forever; that’s the way the system should be designed. But obviously that’s not the case here. You have to move from a system where people actually deployed smart contracts under different assumptions to this new state rent system, and to me it seems there would be a lot of complications.
Alexey: Correct, you’re absolutely right. This is what makes it, I would say, both very challenging and very rewarding at the same time, because we’re not designing a pure system, we’re designing the migration from the legacy system to this non-perfect system that we’re introducing. That’s why, when we started analyzing the implications of rent on existing contracts, a few things immediately sprang to mind. One of them is what we call the dust griefing vulnerability, and the conclusion was that most of the contracts that exist today will be vulnerable to this attack. I can explain it to you if you want.
Friederike: Yes please.
Alexey: Take for example our beloved ERC-20 token contract, which has functions like transfer and approve. Approve is a good example because it essentially allows somebody to pull tokens out of your account, and another feature of approve is that anybody can call it, even without being a token holder. So I can call approve on any token contract without even having to acquire the tokens; all I need is a tiny bit of Ether in my account and I can approve lots of things. Now, imagine we have this token contract which holds the information about all the token holdings in its storage. Under the state rent regime, this token contract will pay rent proportional to the number of storage items. That means that if I am a villain, if I want to hurt this contract, or make its owners abandon it, or maybe I’m a competitor or something, what I will do is start calling approve on lots and lots of random things, inflating the storage of this contract while spending just a little bit of gas. I will condemn them to pay a lot of rent forever with just a tiny investment of gas. That’s what I call the dust griefing attack: I create a lot of dust. I can do it with transfers as well; I can acquire some tokens somehow and then just distribute them over lots of dust accounts, which will also inflate the storage, but approve is even better because you don’t even need to hold the tokens. And the same applies to lots of other contracts. For example the DEXes, like EtherDelta or IDEX: every trade settlement creates another storage item, so as you trade, you keep inflating these contracts. This is one of the first things to solve, and so far the intuition is that most of the contracts that exist and are popular today will be vulnerable, and they will essentially have to be rewritten. Which is bad news.
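The attack can be captured in a toy model: approve() writes a storage slot even for callers holding no tokens, and under rent the contract’s ongoing cost scales with the slot count. The class and its methods are illustrative, not a real ERC-20 implementation.

```python
# Toy model of the dust griefing attack against an ERC-20-style contract.

class Token:
    def __init__(self):
        # Each (owner, spender) allowance is one storage item the
        # contract pays rent on.
        self.allowances = {}

    def approve(self, owner: str, spender: str, amount: int):
        """Anyone can call approve, even with a zero token balance:
        each call creates a new storage item at the contract's expense."""
        self.allowances[(owner, spender)] = amount

    def rent_per_block(self, rate_per_slot: int = 1) -> int:
        """Rent owed by the contract, proportional to storage size."""
        return rate_per_slot * len(self.allowances)

t = Token()
for i in range(1000):                 # the attacker needs only gas, no tokens
    t.approve("attacker%d" % i, "victim_spender", 1)
# The contract's rent bill is now inflated by 1000 slots, permanently,
# for a one-time gas cost paid by the attacker.
```

This is why Alexey concludes that rent-era contracts must be rewritten so that whoever causes a storage item to exist also bears its ongoing cost.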
Another realization we had is about contracts which depend on each other. Say you have some sort of decentralized exchange, and we have things like MakerDAO, which now have links to each other, so you can move assets from one contract to another, or there are other interrelations. If we say, okay, now we’re going to introduce rent, all you guys are going to be vulnerable, you all have to upgrade at the same time: that is not possible. So you have to say, you’ve got this much time to upgrade, maybe one year or something, and this is how you can do it one by one. First the contract which is a dependency of everybody upgrades, then the other, dependent contracts upgrade next, and so forth. I know this is really challenging, and we still don’t know when this problem will become crucial to solve, but the current intuition is that it will have to happen within the next two years.
Friederike: Will you help people determine what kind of contingencies and dependencies there are because basically this seems like it is enormously complex. It’s a little bit like fixing an engine while it’s running. So you take out little parts and you need to make sure that the engine actually keeps running while you’re actually switching out parts.
Alexey: Yes, this is one of the biggest parts of the project. In the project plan I’ve been creating, I call this part ecosystem research, and it consists of essentially enumerating all the different contracts and DApps that we have, and then for each of them having somebody look into those contracts and determine what their vulnerabilities are, what will happen to them, and what the ways are in which they can be rewritten and modified. And of course, then taking this information to the developers of these contracts and having conversations with them: this is how you’re likely to be affected, this is how we think you should try to rewrite, and getting their feedback. Maybe they will give us ideas about some missing features in the proposals, maybe something we haven’t thought about. So yeah, this is going to be a lot of work, and at the moment I think we’re trying to make it more community driven. By this I mean we’re planning to create a lot of Gitcoin grants so that multiple people can work at once, because it will require massive parallelization of effort. I’m not trying to do this myself, because there are simply not enough hours in a day; there have to be a lot of people working on it. This is probably going to be the most intensive part of the whole thing.
Friederike: So this is going to be a massive undertaking, and just to give people an idea of the unforeseen consequences this could have: you recently tweeted about the Parity contract that was suicided last year. Can you talk about that?
Alexey: Yeah. So first of all, I don’t want to give people the impression that this was intentionally designed this way. It’s a realization that came to me when I was reading some tweets by Jon Maurelian: they were discussing the consequences of the CREATE2 opcode, and Jon was asking whether it was going to enable Parity multisig recovery. The answer was obviously not, but something else would. Essentially, in proposal number two there was a part called replay protection, which means that when your non-contract account gets evicted from the state and gets reinstated (you can reinstate it by sending some Ether to it), it comes back with nonce zero, which means you can repeat the nonces that you had before. So let’s say you pretend that you are the person who has the private key of the account that deployed the Parity multisig library, pretend that you are Gavin Wood and you still kept that private key, and we deploy this replay protection and then we deploy rent and eviction. What Gavin would do is take that account and remove all the Ether from it so it goes to zero, then get the account evicted by poking it. Then he puts some more Ether into it, and it comes back with nonce zero. Then he says: okay, when I created the Parity multisig library, my nonce was, let’s say, 35. So he does 35 transactions to something else until he gets to the same nonce he had when he deployed the multisig library, and now he does a transaction which deploys a completely different contract, one without the vulnerability, with the problem fixed. The library comes back at the same address, and everybody gets access to the funds, right? It wasn’t designed this way, but the reason it became possible is that we did not think about the fact that nonces are not just for replay protection: the nonce is also used for determining contract addresses. And now that has to be a consideration.
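The nonce-to-address link can be shown in a few lines. A CREATE address is the last 20 bytes of Keccak-256 over the RLP encoding of (sender, nonce); the sketch below uses hashlib’s SHA3-256 as a stand-in for Keccak-256 (same construction, different padding, so the addresses it produces are not real Ethereum addresses), and the minimal RLP helpers cover only this two-item case.

```python
import hashlib

def rlp_bytes(b: bytes) -> bytes:
    """Minimal RLP for short byte strings (< 56 bytes)."""
    if len(b) == 1 and b[0] < 0x80:
        return b
    return bytes([0x80 + len(b)]) + b

def rlp_list(items) -> bytes:
    """Minimal RLP for short lists (payload < 56 bytes)."""
    payload = b"".join(items)
    return bytes([0xC0 + len(payload)]) + payload

def contract_address(sender: bytes, nonce: int) -> bytes:
    """CREATE-style address: last 20 bytes of a hash over rlp([sender, nonce]).
    SHA3-256 stands in for Ethereum's Keccak-256 here."""
    nonce_bytes = nonce.to_bytes((nonce.bit_length() + 7) // 8, "big")
    encoded = rlp_list([rlp_bytes(sender), rlp_bytes(nonce_bytes)])
    return hashlib.sha3_256(encoded).digest()[12:]

sender = bytes.fromhex("00" * 19 + "01")   # hypothetical deployer account
a1 = contract_address(sender, 35)           # original deployment at nonce 35
a2 = contract_address(sender, 35)           # after eviction + nonce reset
assert a1 == a2   # replaying the nonce redeploys to the very same address
```

This is exactly the loophole Alexey describes: if eviction resets an account’s nonce to zero, the holder of the key can walk the nonce back up and plant new code at an old contract address.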
So in the third proposal, I will replace this particular replay protection mechanism with another one which does not repeat nonces. The conclusion of all this is essentially that nonces cannot be repeated.
Friederike: Interesting. So this is an enormously ambitious undertaking. What’s the timeline on it? And do you intend to have some sort of proof of concept?
Alexey: Well, the timeline for the whole project is probably anything between 18 and 30 months. For example, state rent, which is the most complicated bit, has many pieces in it. At the moment, as I’m writing proposal three, every change is a letter from A to S (how many letters that is, you can figure out yourself), and they are organized in a dependency diagram which shows you which change is a necessary prerequisite for another change. This diagram already exists for the second version, so you can have a look, but the third version is very different. Using this diagram, we split it into pieces: this could happen in the first hard fork, this can happen in the second hard fork, and this happens in the third hard fork. The interesting bit is that at some point, let’s say after the first hard fork, we also get some side benefits. We will be able to increase the block gas limit, which currently we’re not recommending because it would accelerate state size growth. So as I said, if it’s three hard forks, and if we assume each hard fork takes us nine months to execute, then it’ll be twenty-seven months, but we will already start getting some benefits after the second hard fork if we start evicting the non-contract accounts. So this is how we’re going to operate with this thing: the proof of concept has to be done as an iterative process before we even get to the EIPs. That’s my opinion, specifically about this project, because it’s so complex. We already had one proof of concept on the first version of the proposal, done by Adrian Sutton from PegaSys. We’re going to be doing more of these proofs of concept, potentially again with Adrian, but also engaging some other people to do them.
The idea is that after one or two proposal versions, we do a proof of concept to figure out what has been left unspecified or is ambiguous, and obviously the proof of concept will also allow us to generate test cases, so that it’s pretty easy for the other core developers to then implement these things. We do a lot of work upfront, so that when we put out the EIPs, we already have a proof of concept and we have tests generated. That’s the ambition.
Sebastien: Earlier we talked about Ewasm, and you mentioned that Ethereum 1.x would have Ewasm as part of its roadmap and that in Ethereum 2.0 there would be improvements on Ewasm. I don’t think we’ll go into the details of what Ewasm is; our listeners can go back to episodes 245 with Martin Becze or 263 with Justin Drake for a more in-depth discussion of what exactly Ewasm is. But with regards to the roadmap, can you talk a bit more about some of the steps we would see with Ethereum 1.x and Ethereum 2.0 with regards to Ewasm?
Alexey: So one of the reasons Ewasm has been brought under this umbrella of Ethereum 1.x is that it enables us to not concentrate on what we call point features. If you look at the Byzantium release of Ethereum, which happened in October 2017, it included four new precompiles, which are optimized subroutines for certain cryptographic operations. And although they were very useful, they actually took a long time and a lot of work from the Ethereum core developers to prepare; it’s very tricky. Since then there have been more and more requests for more precompiles, because a lot of things people find useful are simply too costly to implement in EVM bytecode. At the moment we find that if we try to implement all these requests, there will be no time for anything else. So Ewasm is seen as a solution to this, as what some people call “the last precompile”. Essentially, you roll out an engine which is more efficient at executing those operations, more tuned to the hardware, as Ewasm is, and that enables us to introduce these features. That’s why I call Ewasm a meta feature, as opposed to the point features, which are the specific precompiles people are asking for. We don’t have to spend our time coding up the specific precompiles people want. Maybe in the beginning it will be used by core developers to quickly introduce the requested precompiles, and in the future we just open it up to everybody: if you want your precompile, you just deploy it as an Ewasm contract. So that’s the vision.
Friederike: And Ewasm would then in effect run in parallel to the EVM?
Alexey: Yes. There is no plan, at least in the first Ethereum, to replace the EVM with Ewasm, because I don’t think it’s practical, and other people also think it’s not practical. So there will be some way to call Ewasm subroutines from EVM code; it might be done via a special opcode or some precompile. Essentially there will be some kind of boundary where you enter Ewasm, and the Ewasm execution engine, which will be in all the Ethereum implementations, takes over from the EVM at that boundary. When the Ewasm subroutine finishes execution, it gives control back to the EVM. And at some points during the Ewasm execution, the Ewasm code might require access to some of the Ethereum state, so it’s not just going to run some pure math computation; sometimes it will have to go and fetch something from the state, or maybe update the state.
Sebastien: Can you talk about some of the scenarios where it would be useful for the EVM to call an Ewasm subroutine?
Alexey: Yes. One of the things discovered by the work of Greg Colvin, when he was working on what they call the EVM 1.5 project, is that because the EVM has such a long word, 32 bytes, all operations have to be done on long words, and that is much, much less efficient than if you just had operations on 64 bits, which are implemented directly in hardware. Wasm, for example, is much more attuned to hardware execution because it has 32-bit and 64-bit types. When you execute this code, you don’t have to do a lot of emulation, because math on the long words is much, much slower. So the idea is that for a lot of useful things we can execute faster on Ewasm, simply because it has different arithmetic, and because it might have more optimized compilers and things like that. For example, I don’t think it has dynamic jumps like the EVM has, which allows more static analysis and other optimizations. That’s my view of this, at least.
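To illustrate the cost of 256-bit words on 64-bit hardware, here is a sketch of how a client has to emulate one 256-bit addition as four 64-bit limb additions with carry propagation, versus a single native instruction for a 64-bit add. The limb representation is the standard one; the function names are illustrative.

```python
MASK64 = (1 << 64) - 1

def to_limbs(n: int):
    """Split a 256-bit integer into four 64-bit limbs, little-endian."""
    return tuple((n >> (64 * i)) & MASK64 for i in range(4))

def add256(a, b):
    """Add two 256-bit numbers held as 64-bit limbs, the way a client
    must emulate EVM's ADD on 64-bit hardware: four limb additions,
    each propagating a carry to the next."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & MASK64)
        carry = s >> 64
    return tuple(out), carry        # final carry = overflow past 256 bits

x, y = 2**200 + 5, 2**199 + 7
limbs, overflow = add256(to_limbs(x), to_limbs(y))
# Four limb operations (plus carry handling) did the work of one
# conceptual 256-bit add; multiplication fans out even worse.
```

Addition is the cheap case; 256-bit multiplication and modular reduction fan out into many more 64-bit operations, which is why precompiles and Ewasm, with native-width arithmetic, win for crypto-heavy workloads.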
Friederike: So I think we need to wrap up soon, but I have some questions about how you see the future of Ethereum. It seems that there is still a lot of potential to make Ethereum 1 better without actually touching sharding and proof of stake. Given that Ethereum 2 is going to take longer than expected to actually be put into place, what would you hope to see in the coming months and years with Ethereum 1.x? And each of these two big topics, sharding and proof of stake, is enormously complex on its own, and meshing them together only adds to that. Do you think there is a danger of Ethereum 2 falling through entirely, putting the pressure on Ethereum 1.x to actually step up and take over for the next couple of years?
Alexey: Well, I think there are obviously a lot of uncertainties about the future of Ethereum 2.0, because at the moment they have phase zero pretty well specified, but as we saw in some of the reviews, I think the one I read was from James Prestwich, he did an interesting review where he says that as you go through the phases there’s more and more uncertainty; phase 3, for example, is very vague about how it’s going to work. So I do sometimes have, not worries, I don’t worry that much, but I do have some doubts that it might take even longer than, let’s say, five years for this to happen, and whether there will be clear benefits, and how exactly sharding is going to be done. Me personally, I like fixing things that currently work rather than designing new things, because implementing completely new things is not my strength. I like fixing things that already work, and that is why the Ethereum 1.x project is perfect for me. What we see in Ethereum 1.x is that I think we will be able to solve one of the biggest problems without any controversy and without any hard forks; I could go into this if you like. And if we see that Ethereum 2.0 has more delays, then we will redouble our efforts, and I think we can do certain things to keep Ethereum working. We might have to take some extraordinary measures; for example, we might need to make state access remote and do what I call poor man’s sharding of state, which is sharding that is not enforced by the protocol like in the second Ethereum, but sharding which emerges simply from the fact that people are not storing the entire state anymore. These are the things I might see happening in the future. But what I do believe is that if we simply hope that these things will keep working, they will stop working.
If we keep focusing on making sure they do work, and just keep fixing them, we have a much better chance of keeping it alive for as long as we want. And some people, like Greg Colvin, believe that Ethereum 1.0 is probably going to be used for a very long time, maybe forever; maybe it will coexist with the second Ethereum. Maybe the entire transition will never happen. Maybe some people will prefer to stay on this system for a very long time. You can’t really force them to go away, can you? Well, maybe you can; we’ll see how this happens.
Sebastien: So let’s look into the future now; I just want to get your thoughts on this. Presuming that proof-of-stake chains are the future of blockchain, and that proof-of-work chains become less and less used because proof of stake has clear benefits, and given that Cosmos is about to launch and Polkadot is also making headway, and these chains are proof-of-stake native: if it takes three to five years for Ethereum 2.0 to be fully realized with proof of stake, sharding, sidechains and everything, do you think there’s a risk that Ethereum loses some of its network effect, and some of its authority as the primary smart contracting DApp platform, to other chains that are natively proof of stake and already able to have DApps built on them?
Alexey: Yeah, this is a very interesting question. First, to address the point about proof of stake: I was really looking forward to the Cosmos launch, because to me it is the first non-trivial proof-of-stake system that will come to production, and I’m really excited for it to launch. There was a bit of competition between these three projects, and now obviously we will probably see Cosmos launch first, Polkadot after that, and Ethereum only third, and we will see how it goes. I’m hopeful this is going to work, but I still say that proof of work is not dead yet; we will be stuck with it for a while. To the second part of your question, whether Ethereum might lose its appeal: it might actually do so, and one way to not let this happen is to actually bring new experiments and innovations to it. As an example, a lot of people look at state rent as some sort of negative thing, but I would actually say: look around at the blockchains that reach a certain scale. In the beginning, when you launch a new blockchain, it’s always “yes, yes, we’re going to be a super duper blockchain and we will scale enormously”, but when they do reach a certain scale, they start seeing the problems of the growing state, and these problems repeat again and again everywhere. That’s why lots of projects have started to think about introducing state rent, but nobody has actually done it so far. So what I see is, if Ethereum does it first and actually shows how it needs to be done, this is going to be a really big step forward, not a step backwards. It will be the first real-life introduction of a concept which everybody was just talking about in theory, and then we would have it in practice. I can see this as a competitive advantage, pretty much. The other thing is to do things like Ewasm; I know Polkadot already has Wasm, but again, we’ll see who is going to do it first.
Sebastien: Okay. Well, Alexey, thank you so much for joining us today. It was fascinating to get a glimpse of this, and also to really understand where things are at with Ethereum right now. It’s true that since DevCon a lot of things have been percolating, and it really helps to have someone lay out the current state of things and where they’re going.
Alexey: Okay. Thank you very much for having me. It was a real pleasure to have this chat.