Justin Drake

Ethereum’s Audacious Roadmap to Build a True World Computer

The Ethereum vision has always been to create a world computer. But its scalability and performance limitations have meant that it has fallen far short of that vision. Yet, work on scaling Ethereum has exploded in breadth and complexity over the past years. From variants of PoS, to Plasma / Plasma Cash, sharding, EWASM, BLS signatures, everything has been on the table. While confusing on the surface, underneath it a coherent vision has emerged for a new Ethereum that will scale nearly infinitely.

We were joined by Ethereum Researcher Justin Drake to discuss the Ethereum Serenity vision, its core components and the roadmap ahead. A particular focus was the beacon chain, the role of randomness and Verifiable Delay Functions.

Topics we discussed in this episode
  • Justin’s previous project building on top of Open Bazaar
  • Why he made the switch from application development to Ethereum consensus research
  • The high-level vision for Ethereum 2.0 / Serenity
  • The Ethereum Serenity roadmap to scaling the world computer by a factor of one million
  • The crucial role of the beacon chain
  • The difference between Ethereum Serenity and Polkadot
  • The role of randomness in making Ethereum Serenity work
  • The limitations of existing trustless sources of randomness
  • How Verifiable Delay Functions can be used to create better randomness
  • The Ethereum Foundation's plans to develop an open-source VDF ASIC
Sponsored by
  • Microsoft Azure: Deploy enterprise-ready consortium blockchain networks that scale in just a few clicks. More at aka.ms/epicenter.
Transcript

Brian Crain: This is Epicenter episode 263 with guest Justin Drake. Hi and welcome to Epicenter. My name is Brian Crain. Today I'm going to speak with Justin Drake. He's a researcher at the Ethereum Foundation. We had a super interesting conversation about the future of Ethereum. There were a lot of unclear things in my mind before this conversation and fortunately we were able to clarify a lot, and I hope you will enjoy it as well. So yeah, please enjoy the conversation. Hi, I'm here today with Justin Drake. Justin is a researcher at the Ethereum Foundation. I actually met Justin a few years ago. Back then he was working as a fellow at Anthemis, and there was this Anthemis retreat somewhere in the Alps in France. It's very gorgeous and he was there, and we spent quite a bit of time talking. He was working at a different project then that was building on OpenBazaar. So, I've known him for quite a while, but recently he's become very involved in the Ethereum space, working on some of the cutting edge Ethereum work to scale and build the new, better Ethereum that will hopefully live up to its original promise of being this world computer. So, thanks so much for joining us today Justin.

Justin Drake: Thank you for having me.

Brian: So, I’m curious. How did you originally become involved in the cryptocurrency and Blockchain space?

Justin: Right, so I discovered bitcoin in 2013, and in mid-2013 I didn't really understand what was going on, but for some reason it had this appeal to me, and towards the end of 2013 I decided I was going to read the white paper. I printed it out and it took me maybe 10 readings to actually understand what was going on, but because it was a short paper I read it several times, and just the intersection of mathematics, which I studied at university, and cryptography and finance and computer science and networking, all these things came together. And so just a few days after understanding it at a high level, I created a bitcoin meetup group in Cambridge, UK, and then a few years later I basically decided to quit my job and work fulltime in the blockchain space.

So, I was a programmer. Part of my time was programming FPGAs, and I was spending all my free time on bitcoin and it was becoming unsustainable. So I started looking for an opportunity to work fulltime in the blockchain space and I found this program from Anthemis. Anthemis is a venture fund in London, and they were offering, basically no strings attached, a little bit of money to do research on the blockchain space for one year. So, I jumped on this opportunity, and during that time I created a company called Duo, and as you mentioned, it was building infrastructure on top of OpenBazaar.

Brian: So, why did you decide to build something on top of OpenBazaar?

Justin: So, back then I was taking a user-centric point of view and I was trying to understand, okay, how could this be useful to people? And I realized that there was a lot of complexity and that this complexity had to be abstracted somewhere, and this appealed to me: the idea of building a frontend UX which was totally minimal and abstracted as much as possible, to the extent that you don't even realize that you're using bitcoin or some other technology. And back in the day OpenBazaar was the most promising actual use case of bitcoin. We were talking about remittances and getting discounts on Purse.io, but these things didn't really appeal to me, and then OpenBazaar, this idea of a peer-to-peer decentralized marketplace, was like a fascinating vision, and I wanted to help out by building infrastructure on top of that to make it easy for people to use OpenBazaar.

Brian: So, when you started working on OpenBazaar, Ethereum was already around. So, why didn't you decide at that time, say okay, I want to build something on Ethereum, right? Did you feel more committed to bitcoin, or maybe more in line with Bitcoin's vision as opposed to Ethereum's?

Justin: So, Ethereum was almost nothing back then. It was an idea, it was a team, but it was extremely ambitious, and I had some doubts as to whether or not the developers would actually pull it off, but for sure it was an idea that appealed to me and I followed it very closely. The more Ethereum developed, the more I realized this was the right answer, and within the OpenBazaar community I lobbied in a way for more than just bitcoin support. In particular, I wanted to have Ethereum support, but it was difficult. I would say they were almost bitcoin maximalists, and they have gradually been softening that stance; they have added more coins and now they're even looking to integrate Ethereum. But yeah, I guess that's part of the reason why I left OpenBazaar, though I think the main reason why I left is because I felt it was a bit too early.

So, in order to build a decentralized marketplace, you need all this infrastructure, right? You need a stable blockchain to start with. You also need identity, reputation, dispute resolution, stablecoins. All these things are basically going to take years to build, and OpenBazaar is about mashing together all these components and making a nice user experience, but we didn't really have the foundations.

Brian: Absolutely, because that's one of the interesting things that stands out to me here: first you go and work at a company that's trying to build a slick end user application and abstract away all this complex technology, make it easy and user friendly. And then you go to the absolute polar opposite and work on Ethereum consensus research, which is as far removed from user experience and interface as possible. So, that's quite a shift you made.

Justin: Yeah, I mean in 2013 and 2014 I was drinking the Kool-Aid. I had these grand visions of how society would be reshaped, and part of it was this belief that bitcoin would actually scale and would be the platform for all these things. But as we learned more and more, bitcoin took a very conservative stance and didn't change, and I think that led to people like myself, who were almost bitcoin maximalists in the sense that I believed bitcoin was the answer, basically flipflopping and leaving bitcoin. In terms of going to the polar extreme at the consensus layer, I guess I learnt a hard lesson that we are still very early and fundamental problems like scalability need to be addressed, and so maybe we should address these first before thinking of the application layer.

Brian: But do you feel tempted, once maybe in two or three years when lots of progress has been made, to go back to building end user applications?

Justin: I think that might be an option. It will take a few years for the full Ethereum 2.0 vision to be fully rolled out, but once it's mature I expect Ethereum 2.0 to not require so much maintenance and so much change, and it might make sense for me to build something at the application layer, or the service layer, because I was effectively on top of the application layer, at the service layer: I was building frontends and search engines, and we implemented the OpenBazaar client in JavaScript as a decentralized node that would run in the browser. So, this is the kind of infrastructure that is not really the smart contract layer but one layer above.

Brian: So, now let's switch to Ethereum, and Ethereum is interesting because when the Ethereum white paper came out back in 2014, there was already talk about switching to proof of stake and there were some of these long term ideas for sharding and scaling. But I think at the time, because they were maybe very abstract, people could still have at least this subjective sense that they had some idea about where things were going and what things would look like. But then over time, what we've seen in the last years is this explosion in complexity. So, just to list some of the things that are being worked on, right? There's Casper, and then there's Casper FFG, the Friendly Finality Gadget, there's Casper CBC, and then there's Plasma.

Now there seem to be a lot of different Plasma implementations, including Plasma Cash, so there's tremendous variety of activity there. Then there's the whole sharding effort, and I guess you mentioned that Vlad is working on some sort of correct-by-construction sharding, so there may be a shift in sharding directions. Then, and this is something we're going to speak about later quite a bit, there's the beacon chain work, and then this work on using maybe BLS and threshold signatures. There's work on verifiable delay functions and maybe having an Ethereum specific ASIC, which is very different from the proof of work ASICs. Then the work on ewasm. So, I think it has come to the point where it's really, really hard for people to have a sense of where things are going, where they are at, and what Ethereum is going to look like in the future. So, what's your take on it? Is this great because there's so much activity? Is it maybe too scattered and not focused enough? Like, how would you describe the work on Ethereum 2.0 at this moment?

Justin: Right, so I mean you talk about complexity in general and the whole space, and I'd say that a lot of the research was driven by the need for scalability, and scalability really is a non-trivial thing. So, you know, you mentioned plasma and all the variants of plasma. That's a layer of complexity which is at layer two, and then there's research happening at layer one with things like Casper and sharding. I would say that a lot of the complexity is in finding the optimal way forward, but a lot of the research and the new ideas that have come out have settled down towards a really, really nice vision, which actually is simple and can be explained and implemented with relative ease. I would say that a lot of the complexity has been modularized into what I like to call gadgets. So, for example, the finality gadget Casper FFG: you can think of that as just one module. You can go into the details if you're interested, but you can also think of it abstractly as just a finality gadget, and we have all these various gadgets and they all fit quite nicely into the Ethereum 2.0 vision. And then in terms of roadmap, we've segmented things into just three layers, and it's not just in terms of roadmap, but also in terms of modularity.

So, layer zero is basically just the beacon chain, which is this piece of system infrastructure which manages the rest of the system. It provides services, for example, such as randomness, the management of the validators, it provides finality and various other things. It's where most of the complexity lies. And then we have phase one, or layer one, which is the data layer. So, if you think of a blockchain you have two things going on. One, you have a consensus game, people agreeing as to what data is being fed into the blockchain, and that would be layer one for us. And then we have layer two, which is the execution of the transactions, basically giving meaning to that data, running it through an EVM and having a notion of state. And so, we have this progressive roadmap where we're focusing on layer zero and then layer one and then layer two.

Brian: Maybe one thing that would help people wrap their head a little bit around what you're talking about with these different components and roadmaps is if we look a little bit towards the end state. Let's assume that all of these things get built out and they work, and we now have this upgraded Ethereum that's capable of delivering all the promises made, or the expectations set. What does that look like in terms of what are the main components and how do they interact, but also what does it look like from a user experience perspective?

Justin: Right, so in terms of the main goals of Ethereum 2.0, number one is to move away from proof of work on to proof of stake. That's partly to reduce the cost of proof of work, but also to enable new things; for example, we're enabling finality and we're enabling sharding by having proof of stake. And the second aspect is scalability with sharding. The idea here is that instead of having one single blockchain you have 1,024 blockchains. So, you have roughly on the order of magnitude of a 1,000x increase in scalability. So, from the point of view of the application developer, you can choose a shard and have your app live in that shard, and it will have a virtual machine somewhat similar to the virtual machine in Ethereum 1.0, except that the virtual machine will be based on WebAssembly. The good news is that you'll have this upgraded virtual machine and you'll also have more scalability and hence lower fees. The bad news is that you'll have complexity in terms of the cross-shard communication. And another thing we want to introduce is the notion of sustainable storage, or storage rent, or storage maintenance fees, and the idea here is to set things up from an incentive standpoint so that the state doesn't keep growing eternally. And so, as an application developer you will have to take that into account, which might make your life a bit more difficult.

Brian: You mentioned there's going to be all these different shards. As an application developer, I mean, is this going to be a similar thing to maybe a Cosmos zone, or in Polkadot not the relay chain but a parachain, in that there may be different blockchains, maybe game-related shards and other shards, or is that all abstracted away and I don't care, it just provides scalability?

Justin: Right, so I'd say Ethereum 2.0 is very similar to Polkadot. So, we have this notion of a central chain, which we call the beacon chain and they call the relay chain, and we have shards where they have parachains. The main difference is that we are going towards homogeneous shards, so every shard at the consensus layer is exactly the same, and from an implementation standpoint that just brings a whole load of simplifications. So, even though from a theoretical standpoint it might be slightly less powerful, it means that you have a simpler and potentially more robust system.

Brian: And my other question was: will I, as an application developer, choose particular shards because they differ in some way, maybe in the type of applications that are also on that shard, or in some other way, or is this something that I don't care about?

Justin: Right, so you would probably choose your shard based on gas costs. So, every shard will have its own gas market, which is another complexity. And so, you might choose the shard with the lowest fees, but at the same time you want to choose a shard based on proximity to other applications. So, within each shard things should happen quite fast and quite cheaply, and you want to try and avoid cross-shard communication.

Brian: There is the proof of work chain as well, right? Where today the Eth [indiscernible] [00:18:51] lives and the applications live, and in the future of course we expect that these decentralized applications will run on these shards. So, what is going to happen with today's main Ethereum proof of work chain?

Justin: Right, so we use the Ethereum 1.0 chain, the proof of work chain, basically for bootstrapping, for economic bootstrapping. So, we already have these billions of dollars in tokens and we use that to bootstrap the Ethereum 2.0 system. And one of the main questions that gets asked is: is the Ether in 2.0 going to be the same as the Ether in 1.0? Yes, it's the same. We're using the same Ether precisely to have this bootstrapping mechanism.

Brian: And so down the line the proof of work chain would then be shut down?

Justin: Right, so we don't really know exactly what we want to do with the proof of work chain. Most likely, in my opinion, it's going to stay alive in some way or another for many years to come. One option is to integrate it as a contract within one of the shards, but that's quite an ambitious vision. It's possible it will just live there for some time. Another option is to have some sort of a bomb which gradually will mean that applications have to move away. So, you can imagine, for example, a gas price bomb, or a gas limit bomb: over time the gas limit goes to zero, or over time the gas price goes to infinity.

Brian: And then the tokens, would it be possible to move Ether from the proof of work chain to the shards and also back?

Justin: It's a unidirectional thing. So, you take the Eth in 1.0, for example 32 Eth, deposit that into the beacon chain, and that makes you a validator. Once in the beacon chain, you can withdraw them into the shards, and then between the shards and the beacon chain you can do whatever you want.

Brian: Oh, that’s interesting. So, you basically burn your old Eth and you receive then the new Eth on the beacon chain?

Justin: Exactly.

Brian: So, I guess that's maybe the time to speak a bit more about the beacon chain in detail. The things I understand are, one, that it keeps track of the validators, right? So, the validators are both on the beacon chain but also on the shards, and it has this process of selecting the validators for the different shards, the validator sets, and I think that's what we're going to speak a lot more about later because that's a key part of your work. And does it also keep track, for example, of balances of Ether, or would those be in the different shards?

Justin: Right, so it keeps track of balances of ether for the validators, the validators being the key consensus participants in Ethereum 2.0. It does grunt work, for example, handling the deposits and the rewards and the penalties and the withdrawals and all these things. As I mentioned, it basically provides infrastructure for the shards and for the system. So, it provides randomness. Randomness is important because we will be sampling validators to perform various tasks. So, we'll say okay, this validator will have to propose blocks on shards x, y and z, and this set of validators as a committee will have to do an attestation or notarization on this specific shard. So, the way that we achieve scaling is basically by giving different validators different tasks spread out across all the shards. It's very different from the current model where everyone is working on the same chain in this massively replicated fashion. And so, we're making these smaller groups of validators large enough that they are statistically representative of the wider pool of validators, but they're still quite small.
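
To make the sampling idea concrete, here is a minimal sketch in Python of how a shared random seed could deterministically assign validators to per-shard duties. The constants and function names are illustrative assumptions, not the actual Ethereum 2.0 specification.

```python
# Illustrative sketch only: seed-driven assignment of validators to shards.
# SHARD_COUNT matches the 1,024 shards mentioned above; COMMITTEE_SIZE is
# an assumed placeholder.
import hashlib
import random

SHARD_COUNT = 1024
COMMITTEE_SIZE = 128

def assign_duties(validator_ids: list, seed: bytes):
    # Deterministic shuffle: anyone who knows the seed derives the same
    # assignment, so no extra coordination is needed.
    rng = random.Random(hashlib.sha256(seed).digest())
    shuffled = list(validator_ids)
    rng.shuffle(shuffled)
    # One monopolistic proposer per shard...
    proposers = {shard: shuffled[shard] for shard in range(SHARD_COUNT)}
    # ...and a committee per shard, carved out of the shuffled list.
    committees = {
        shard: shuffled[shard * COMMITTEE_SIZE:(shard + 1) * COMMITTEE_SIZE]
        for shard in range(SHARD_COUNT)
    }
    return proposers, committees

proposers, committees = assign_duties(list(range(200_000)), seed=b"beacon-epoch-1")
print(proposers[0], len(committees[0]))
```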

Brian: And is there also a rotation of the validator sets, so that a particular shard will have some validators for some time, but as a validator I will continually work on different shards?

Justin: Yes, so in general you want to have as much rotation as possible, and that's to prevent adaptive attacks. So, once you know who is assigned to which shard, you can try to bribe them, or you can try to DDoS them, or try and do bad things. So, the more you shuffle them, the greater the security. And we have different types of committees. So, we have one type of committee which is for notarization. It's basically to solve the data availability problem: you're asking validators, is this piece of data available to download or not? And if people say that it is available to download, then given your honesty assumption it is indeed available to download. But there are things which are more difficult, where you can't do the shuffling so fast. That, for example, is the state layer, the execution layer. When you want to execute a transaction you need to have access to the relevant state, and downloading that state will take time and you need to sync up, and so you can only shuffle so fast. We are looking into a very interesting approach called stateless clients. With stateless clients, basically users come with their own state. They include the state in the transactions directly, and they include witnesses, which basically prove that the state corresponds to the state root, which is the master checkpoint. And that would mean that the executors, those who are executing the transactions, do not need to have the state, and so they can be rotated maximally fast.
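
The stateless client idea can be sketched in miniature: the transaction carries its state plus a Merkle witness, and the executor only checks the witness against the state root. This is a toy two-leaf example under assumed hashing conventions, not the actual witness format.

```python
# Toy Merkle witness check: the executor holds only the state root.
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def verify_witness(leaf: bytes, path: list, root: bytes) -> bool:
    # path is a list of (sibling_hash, sibling_is_left) pairs up the tree.
    node = hashlib.sha256(leaf).digest()
    for sibling, sibling_is_left in path:
        node = h(sibling, node) if sibling_is_left else h(node, sibling)
    return node == root

# Two-account "state": the user ships their leaf plus the witness.
alice, bob = b"alice: 10 eth", b"bob: 5 eth"
root = h(hashlib.sha256(alice).digest(), hashlib.sha256(bob).digest())
print(verify_witness(bob, [(hashlib.sha256(alice).digest(), True)], root))  # True
```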

Brian: Okay, great, and this is helping a lot in wrapping my head around this. You mentioned that this is unidirectional, right? So, we move the Ether from the main chain and I can now deposit it in a contract and I get this right to validate. Is there token transfer on the beacon chain though? Because in the initial phase there's just the beacon chain; the shards don't exist yet. So, does that mean if I'm moving my Ether now to stake, I'm essentially locking it up until the shards become live, and only at that point am I able to transfer it?

Justin: Yeah, that is correct. So, in the initial phases when we don't have the shards, you need to be a believer and want to be a validator for a reasonably long amount of time. You cannot transfer the tokens to other people without the shards. The beacon chain is not an application layer thing, it's not a user thing. It's purely a system thing; we only have system infrastructure, and in particular there are no smart contracts. It's all extremely simple. You would use your Eth and do all the transactions and contracts within the shards, so you'd have to wait for that to happen, and that would only happen in phase two, when the shards actually have a notion of state. In phase one, we would have the shards, so we would have blocks and each shard would be growing over time and there would be consensus as to what the data is, but you wouldn't be able to use your Eth.

Brian: Okay, interesting. So, that would require a big leap, and I guess potentially would mean that lots of Ether would get locked up for what sounds like it could be a long time.

Justin: Exactly, yes. We are hoping for hundreds of thousands of validators eventually. So, each validator needs to come in with 32 Eth, but I think the minimum number of validators that we will require to start the process will be on the order of 10,000 validators.

Brian: Maybe a detailed question, but I guess will there be some sort of higher reward if there are fewer validators, so that there's an incentive to move early?

Justin: Exactly, yes. So, the more people there are, the smaller the rewards. So, if there are only 10,000 validators, there will be a huge incentive to be a validator. You can check the specific curve; it's roughly quadratic, though not exactly, there's a specific curve that was chosen.
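
As an illustration only (the exact curve chosen differs, as Justin notes), this is the general shape: per-validator returns that shrink as the validator set grows, rewarding those who join early.

```python
# Toy reward schedule: per-validator annual return falls as more join.
# The constant and the exponent are made-up numbers for illustration.
def annual_return_pct(num_validators: int, k: float = 500.0) -> float:
    return k / (num_validators ** 0.5)

for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9} validators -> ~{annual_return_pct(n):.2f}% per year")
```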

Brian: So, you mentioned the Polkadot relay chain before, and of course one of the functions of the relay chain, at least the way I understand it, besides looking out for the security, is that it also has this interoperability function. In Cosmos as well, even though the security and validation layer works very differently there, you have the Cosmos Hub, right, that you can basically use to keep track of token balances and stuff like that. So, how does this cross-shard communication work here? Does the beacon chain have a function in terms of supporting interoperability between shards?

Justin: Yes, absolutely, so one of the key things of the beacon chain is to allow the various shards to communicate with each other, and one of the mechanisms that we have here is the notion of a crosslink. So, periodically within every shard there will be checkpoints which will be included in the beacon chain, and these checkpoints, which we call crosslinks, will be used for other shards to be able to read the state of every other shard. So, basically you can think of the beacon chain as being a light client for every single shard. In terms of additional infrastructure that we have for cross-shard communication: number one is we have finality. So, once you have a crosslink which is finalized, it's like a super strong crosslink and it should never revert, and so you can really rely upon that at the application layer when you want to communicate with other shards. We also have some notion of pre-finality. So, within each shard we have so-called attestations, which are votes from other validators on the block.

So, every time a block is created, you have a bunch of validators who are invited to attest whether or not that block is building on top of the tip, so basically building on top of the parent which is on the canonical chain. And if a block gets sufficiently many of these attestations, then we have a high probability that that block is going to eventually be finalized, even though the crosslink right now hasn't been included in the beacon chain and the beacon chain has not been finalized. So, basically, we have this spectrum of finality, where it starts with a proposer building a block, and then attestations coming in, and then you have the notaries coming in creating a crosslink which gets included in the beacon chain, and then the beacon chain gets finalized through the Casper FFG process.
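
A rough picture of the data shapes involved, with field names that are assumptions rather than the spec: a crosslink is a per-shard checkpoint recorded in the beacon chain, which is what lets the beacon chain act as a light client of every shard.

```python
# Illustrative data shapes only; not the actual Ethereum 2.0 containers.
from dataclasses import dataclass

@dataclass
class Crosslink:
    shard: int
    shard_block_root: bytes  # checkpoint root for a segment of the shard chain
    finalized: bool          # a finalized crosslink should never revert

@dataclass
class BeaconState:
    latest_crosslinks: list  # one Crosslink per shard

def read_shard_checkpoint(state: BeaconState, shard: int) -> bytes:
    # A contract on shard A reads shard B via the beacon chain's latest
    # crosslink instead of talking to shard B directly.
    link = state.latest_crosslinks[shard]
    assert link.finalized, "wait for finality before trusting cross-shard reads"
    return link.shard_block_root

state = BeaconState([Crosslink(s, bytes(32), True) for s in range(1024)])
print(read_shard_checkpoint(state, 7).hex())
```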

Brian: Cool, so recently there was the Web3 Summit in Berlin and Gavin gave a talk and a demo about Substrate and Polkadot, and I thought Danny from ACCT asked a really good question, which was basically about the composability of Polkadot versus Ethereum. And he made this point that today in Ethereum it's really nice: you can write an application and use maybe Maker as a stablecoin, and use Dharma and prediction markets and lots of different things, and it's very easy to write up this application. And it's like okay, how is that going to work in the Polkadot model? Polkadot initially talked about having cross-chain smart contract calls, but Gavin basically made the point, and I hope I'm not misquoting him here, that even though that's in the works, it's going to take a long time to make this even semi-feasible, and that in the meantime it will actually be much better to build things like that in Ethereum, where you are basically on a single chain. So, I'm curious about this point of composability and the ability to write applications that use lots of different contracts. Do you have a sense of how that would differ in terms of difficulty between maybe the future Ethereum and something like Polkadot?

Justin: So, right now in Ethereum 1.0 everything is interoperable. If you have two smart contracts, they basically are on the same chain, and when you make a call it's a synchronous call. So, that's very simple and easy to reason about, and in 2.0 we want to maximize that. So, we want all the shards to be part of this one organism, which is coherent, but there are boundaries between the shards. And so, the question is how do we make these boundaries as small as possible and reduce the friction. And part of the story here is, at the consensus layer, the protocol layer, adding infrastructure to make it easy for the various shards to communicate. So, I mentioned these crosslinks. The crosslinks allow for asynchronous communication between the shards, which is different from the synchronous communication that you're used to within the Ethereum 1.0 context. But we're also adding this notion of attestation pre-finality, and you can use that at layer two: basically you can have optimistic cross-shard communication protocols.

So, with high probability, in the default case things will happen as you'd expect, even if you haven't gone through this whole process of finality. And so, you can use that to your advantage to have very fast cross-shard communication protocols. One of the things that I expect will happen is a standardization of the cross-shard communication protocols, something very similar to what has happened with the standardization of tokens, for example, with ERC20. The Ethereum 2.0 application layer, the state layer, will be very universal and very generic. So, you can build whatever type of communication you want, and that makes it a bit more complicated because you have to make a choice, but I think the best designs will be experimented upon and they will be standardized.

Brian: Okay, I guess that maybe reminds me a little bit of, let's say, the work that Interledger has done. The Interledger protocol, which also is higher-level and chain-agnostic; when we did a podcast with Stefan Thomas we also talked a little bit about their assumption that actually the Interledger protocol will mostly function between layer two solutions. So, like Lightning and Raiden and stuff like that, and in between there would be Interledger. So, I guess it goes a little bit in this direction too. Now maybe many people will come up with some sort of standard interface between the different shards, and then there will be competition and maybe a standardization effort, and at some point maybe we'll converge on some standard.

Justin: Right, I expect some experimentation and some standardization at the application layer. I also think we can make upgrades and add infrastructure as required. So, one of the pieces of infrastructure that I'd like to see, for example, is maybe synchronous cross-shard transactions. Just a pure ETH transfer. So, we're taking a very specific use case, just the transfer of ETH, and adding the needed infrastructure for that. We might take the best designs at layer two, the most popular ones, and enshrine them. So, basically include them at the protocol layer, maybe to simplify them in some way, or maybe give them a special feature. The way that I think of Ethereum 2.0 is as this base layer which is actually very simple, and you have all these gadgets which add functionality. So, finality is just a gadget. You don't really require it. It's nice to have because it makes cross-chain communication simpler, and also you have more confidence in the chain when it progresses, and I think we will add more and more things going forward. So, we will make sure that the EVM is aware of components that are added at the protocol layer.

Brian: Let's now say in this future I'm going to Etherscan and I have an address and I want to see how many Ether are in this address. Then what does Etherscan do on the backend? Does my Ether live in a particular shard, and now there has to be some sense of oh, which shard do I need to query to see the balance? Like, how would that work?

Justin: Yeah, so I think most likely the address will contain the shard number in addition to what we know as the address right now. Just the concatenation of these two fields, and you just paste that into Etherscan and it'll be the same user experience that you're used to.
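
A hypothetical illustration of what such an address format could look like; the exact encoding is undecided, so both the layout and the helper below are assumptions.

```python
# Hypothetical sharded address: shard number concatenated with the
# familiar 20-byte address. Purely illustrative, not a finalized format.
def make_sharded_address(shard_id: int, address20: bytes) -> str:
    assert 0 <= shard_id < 1024 and len(address20) == 20
    return f"{shard_id}:0x{address20.hex()}"

print(make_sharded_address(42, bytes.fromhex("00" * 19 + "ff")))
# -> 42:0x00000000000000000000000000000000000000ff
```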

Brian: Okay, great. Well, we've alluded to it before, but let's dive into this now. So, the question about randomness. Like, what's the reason for needing randomness in Ethereum 2.0?

Justin: Right, so at the consensus layer you need randomness to sample the validators. So, you have this massive pool of validators, potentially close to a million validators, and you want to assign them different tasks. And it's important from a security perspective that it be random, because otherwise as an attacker you can try to assign yourself to one specific shard and take over that shard, for example. So, of the two types of sampling that we do, one is monopolistic sampling, where you just sample one single person and you give them a task and they have monopolistic power over that single task, and we use that for example for block proposals. So, at every single so-called slot there's one single proposer per shard who is invited to extend that specific shard. You also have these committees.

So, committees are on the order of hundreds of validators, and they're meant to be large enough to be statistically representative of the wider pool, because you have these honesty assumptions. So, for example, you're assuming that two thirds of the validators are honest. When you sample the committee, you know with extremely high probability that at least half of them are honest. And so, you can ask them to do a task and vote, and if at least half of them vote for something, you know there's at least one honest person that voted, and so whatever they voted on is indeed a reflection of the truth.
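
A quick back-of-the-envelope check of that claim, using a binomial approximation of committee sampling and assumed sizes (the real parameters may differ):

```python
# If 2/3 of a large pool is honest, how likely is a sampled committee of
# 128 to contain an honest *minority*? Binomial approximation of sampling.
from math import comb

def p_honest_minority(n: int = 128, p_honest: float = 2 / 3) -> float:
    return sum(
        comb(n, k) * p_honest**k * (1 - p_honest) ** (n - k)
        for k in range(n // 2 + 1)
    )

print(p_honest_minority())  # on the order of 1e-5: vanishingly rare
```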

Brian: I would say let's come back to the role randomness plays in validator sampling and how verifiable delay functions come in, but I would love to also hear your take on what is the value of having randomness as something you can make an API call to, right? Like, I'm building an application and I can now say hey, give me some randomness, and I use that to make some decision, or split some funds, or distribute some money, do a lottery. And I think the lottery example is something that is obvious to anybody. So, you could make a smart contract, we all put some money in, and it randomly chooses one out of everyone who put the money in, according to how much you put in. So, you could make this trustless lottery, which sounds pretty great, but what are some other exciting applications that you think would be possible with having this randomness source?

Justin: Let me just go back to the randomness, because in a way we already have randomness with proof of work, right? Proof of work is about randomly sampling a miner, and you have a random number which is going to be your block hash, and that is something that we already expose at the Ethereum 1.0 layer as an opcode that you can use. One of the things you need to be very careful of is that this block hash is biasable, meaning that an attacker, if they want to, can have some amount of influence over what that random number will be. The way they do that as a miner is that they will withhold their block and not broadcast it: if the block that they've mined has a random number they don't like, they basically discard it. And a lot of the decentralized randomness schemes suffer from this bias problem. And so, we've put a lot of effort into trying to build something that is unbiasable, similar to what Dfinity is doing, where they have an unbiasable random number scheme using BLS signatures. Now why is unbiasability important?

Let me take the example of the lottery. Let's say that an attacker has one single bit of bias, meaning that they can choose between two random numbers at any given point in time, for example when the lottery winner is selected. Now if you have a lottery with $100 million and you as an attacker come in with another $100 million, there's $200 million in the pot. If you had no bias, your probability of winning would be one half, so your expected return would be $100 million. But now that you have one bit of bias, you can reroll the die once, so you can choose between two numbers. And so, your probability of winning now is three quarters, and so you've basically stolen $50 million from the community just by being able to bias the randomness. So, in terms of applications beyond lotteries, you have all the various gambling applications like casinos and poker rooms, but you also have games such as CryptoKitties, and there are also applications in proof systems. So, if you have a zero-knowledge proof, for example a STARK, you can decompose it into smaller chunks where you have this interactive protocol. You have a prover who is being challenged, and here you need to have strong randomness for the security to work. You can also think of systems which again have a pool of validators that you sample, and so you can use that to build a proof of stake system on top of Ethereum.
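
The arithmetic in that lottery example can be checked with a five-line simulation: one bit of bias means the attacker may discard one unfavorable outcome (a withholding miner, say) and take a fresh draw.

```python
# One bit of bias, simulated: re-roll once and keep the better outcome.
import random

trials = 100_000
wins = sum(
    max(random.random(), random.random()) > 0.5  # either roll wins the pot
    for _ in range(trials)
)
p = wins / trials
print(p)                # ~0.75 rather than the fair 0.50
print(p * 200_000_000)  # ~$150M expected on a $100M stake: $50M "stolen"
```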

Brian: And so, with the BLS scheme, you mentioned Dfinity, and for listeners interested in that, we did a podcast with Timo and Dominic before where we also talked about BLS and its role there. Does that qualify as unbiasable?

Justin: Yes, so the way that that scheme works, and BLS is just a technicality, is that you have various participants who each have a share of a secret, and if sufficiently many people come together, specifically you need to reach a threshold, then you can generate the next number. You need at least this amount of people to generate the next number, and the next number is totally unique and deterministic. It's impossible for someone to predict the next random number if they don't have the shares, and it's impossible to bias because there's just a single correct answer as to what the next random number will be.

Brian: Right, so in Dfinity more specifically, let's say we have 500 nodes in this set, and if more than 50%, 250 I guess, come together they can create the next key, but which of those 250 come together is irrelevant? Like, any set gives the same result, but you need some 250 in order to get that result.

Justin: Right, I mean you do want to have a well-defined committee, because if you have, let's say, two different committees and they're each invited to create the next random number, then you can grind between the two, like which one will come first or something like that. So, you do want to have no ambiguity on who will be doing the task. The main problem with the Dfinity scheme is: what happens if you don't reach the threshold of active participants, people who are online and honest? And this is a problem because people could be dishonest and actively decide not to participate. Dfinity is already making the assumption that one third of the people are going to be dishonest, but the problem is that even the honest people could be offline for all sorts of reasons, and one of the design goals that we have in Ethereum is to survive World War III. So, if 80% of all the validators suddenly go offline and they stay offline permanently, we still want the system to go ahead, and in the case of Dfinity it takes just 10 or 15% of the honest players to go offline for the liveness of the system to be threatened.
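
A toy stand-in for the threshold property, using Shamir secret sharing over a prime field instead of real BLS: any 250 of the 500 shares (Brian's numbers) reconstruct the same unique value, which is why the output is unbiasable, while fewer than 250 yield nothing at all, which is exactly the liveness concern Justin raises.

```python
# Toy threshold reconstruction (Shamir over a prime field, NOT real BLS).
import random

P = 2**127 - 1  # prime field modulus

def make_shares(secret: int, n: int, t: int):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the unique secret.
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(secret=123456789, n=500, t=250)
# Two disjoint sets of 250 shares agree: the output is unique, hence unbiasable.
print(reconstruct(shares[:250]) == reconstruct(shares[250:]) == 123456789)
```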

Brian: So, it's 10 or 15%, but wouldn't they just need around 50% of the participants to produce the next signature?

Justin: So, Dfinity requires two thirds to be honest and online. There's two assumptions here: honest and online. So, already you've lost the one third that are dishonest by assumption, and you're only left with 66%. Of these 66%, if you have even a small fraction that are offline, let's say 10%, then you will eventually get a committee where you don't have this threshold of half that is online, and with 10% I think you will hit that within a day or two.

Brian: Oh yeah, because there will be different committees chosen, and then you could get unlucky in some way, I guess, if the committee numbers don't hold. I guess we'll see if these assumptions are so problematic, or if you couldn't then also do something like a fork in a kind of extreme scenario like that and still recover.

Justin: Yeah, so you could try and do a fork, but you need to know who is online and who is offline, because if you just restart and most people are still offline, then you're just stalled again 10 minutes later. So, that's not useful. In terms of whether the assumption that two thirds are honest and online is a good one to make in the long term, that's very difficult to know. It's a little bit like a seatbelt, right? The seatbelt is only useful in the one instance where you have an accident, and that rarely happens, but when it does happen you really want the seatbelt.

Brian: Yeah, of course nobody will argue: all things being equal, it's much preferable if the system survives when things go wrong without needing the assumption. So, no question there. So, let's speak about verifiable delay functions and the role that this delay function has in how randomness is created on the beacon chain.

Justin: Right, so you have these two classical families of randomness schemes. One is based on commit-reveal and one is based on threshold cryptography. Commit-reveal includes proof of work, because in a way you've committed yourself to a random number just by burning electricity, and then you have the option to either reveal it or not reveal it. And then there's a scheme suited to proof of stake which is called RANDAO: with RANDAO you just pick a secret on your computer, you commit to it by hashing it and publishing the hash, and then at some point in time you're invited to reveal the secret, and that will contribute towards the entropy of the system. And then you have this other class which is based on threshold cryptography, upon which Dfinity is based. And in each of these two classes you have two of the three properties that you want.

The three properties that you want are: one, unpredictability; two, liveness, you want the scheme to continue even if people go offline; and three, unbiasability. Commit-reveal doesn't have unbiasability, and the threshold scheme doesn't have strong liveness. And so, the approach that we're taking is to take RANDAO, which is not unbiasable but has the strong liveness property, and somehow upgrade it so that you have unbiased random numbers, and this upgrade process involves introducing the notion of time. So, you want to try and introduce lower bounds on the amount of time before an attacker knows how he's manipulating the randomness, so that he's basically manipulating things in the dark and he will be unable to actually have a meaningful bias on the randomness.
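
A minimal RANDAO-style commit-reveal sketch, illustrative rather than the actual spec: each participant commits to hash(secret), later reveals, and the entropy pool is the XOR of everything revealed.

```python
# Minimal commit-reveal randomness in the RANDAO style (illustrative only).
import hashlib
import secrets
from functools import reduce

def commit(secret: bytes) -> bytes:
    return hashlib.sha256(secret).digest()

def mix(reveals) -> bytes:
    # XOR all revealed secrets together: one honest, unpredictable
    # contribution makes the whole output unpredictable.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), reveals)

participant_secrets = [secrets.token_bytes(32) for _ in range(100)]
commitments = [commit(s) for s in participant_secrets]

# Reveal phase: each reveal is checked against the earlier commitment.
assert all(commit(s) == c for s, c in zip(participant_secrets, commitments))
print(mix(participant_secrets).hex())
```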

Brian: So, let me try, because I think I sort of wrapped my head around how this works now after we spoke briefly before and I watched your Ethereum talks. So, let me try to explain it and maybe you can correct me. So, let's say what we're doing, right, is that me and you, the two of us together, create a random number. We take one period, and you create a random number and submit it and I create one and submit it, and the two together create this random number. And now the issue is, and there's a timeout here, so let's say you go first and I go second. If I don't do it, then it's just yours that will be taken as the input. But since you went first and I'm second, I could check what random number results, and then I could decide or not decide to add my bit, and thereby bias the result. And now, with this verifiable delay function, basically what we're doing is: let's say I have 5 seconds to add my bit, which will generate a random number for a particular period, but calculating the output, the random number resulting from our input, takes longer than I have to submit my number, right? So, I could take yours and I could put in possible answers for my random bit, but to know how I would bias it will take me, let's say, a minute, and I only have 5 seconds to submit the random number. So, now I don't have a chance anymore to bias the random number.

Justin: Yeah, that's exactly right. So, instead of having two participants you want to have many more, so there's at least one which is honest and online. The honest participant will not reveal to the world what their secret is, and so that will create an element of unpredictability within the RANDAO epoch. The RANDAO epoch, let's say, is 10 minutes and you have 100 participants, one of which is honest, and then at the end of the epoch what the RANDAO process does is basically XOR all the revealed numbers from these 100 people, and the XOR is again going to be unpredictable, because you have one person that is adding unpredictability to it. But the problem is that those who reveal towards the end, the last revealer, or the last two, or the last three, etc., have influence once they know what everyone else has revealed, because it's all public. And so, to prevent this last-revealer attack you basically, exactly as you said, add a minimum guaranteed delay between when you make the action and when you know the result of your action. So, even if you want to try and bias things, you won't have enough time to know what the result of your action will be, and hence you won't be able to bias.
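
Continuing the sketch above, the last-revealer attack is only a few lines: once everyone else's reveal is public, the final participant can compute both candidate outputs and keep whichever they prefer, exactly the one bit of bias the VDF delay is meant to remove.

```python
# Last-revealer attack on the commit-reveal sketch above (illustrative).
honest_reveals = participant_secrets[:-1]   # already public at this point
my_secret = participant_secrets[-1]

output_if_reveal = mix(honest_reveals + [my_secret])
output_if_withhold = mix(honest_reveals)    # skip revealing, eat the penalty

# The attacker grinds: keep whichever output favors them.
print(output_if_reveal.hex())
print(output_if_withhold.hex())
# A VDF-wrapped design would not let the attacker learn either output
# before the reveal deadline, so this choice becomes useless.
```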

Brian: Yeah, and of course if for some reason I have some super quantum computer or something: let's say we have 1,000 people who reveal this random bit and I'm the last one, and let's say I had this amazing supercomputer and I was able to do the calculation within a split second. Then I could still calculate, okay, how would I influence this random number, and decide whether I reveal it or not? So, there's this hardware component that comes in here.

Justin: Exactly, so you could have hardware which allows you to defeat the randomness, so defeat the guaranteed delay, which is enforced basically by doing computations. It takes time to do computations, and hence it takes time to compute the output of the VDF; the F stands for function, it's just a function which returns an output. If you have this hardware, then basically you're falling back to RANDAO. So, if it happens that you are the last revealer, then you will have one bit of attack surface. Now, in order to create this guaranteed delay, you basically need to have an assumption about the speed of the hardware of the honest people and the speed of the hardware of the dishonest people, of your attacker. And specifically, you want to place a bound on how much faster an attacker can be. So, let's say that your bound is 10x, or 100x.

So, an attacker cannot be 100 times faster than the honest players, and if you have this bound of let's say 100x, then you can make your computation time be 100 times more than the guaranteed delay that you want. And so, an attacker can only compress things down by a factor of 100. And the way that we make sure that it's impossible for an attacker to have hardware which is 100 times faster than what the honest people have is basically to give the honest people, the community at large, access to really fast hardware to start with. So, the Ethereum Foundation is looking to build an ASIC, a state-of-the-art ASIC with state-of-the-art circuits, for computing this VDF, and we are going to use an advanced process node and make as many optimizations as we can, and after that we're going to make the assumption that the attacker can't be that much faster than what we're capable of doing.
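
A toy sketch of the kind of VDF under discussion, repeated squaring in an RSA group: the T sequential squarings are the guaranteed delay, since no shortcut is known without the modulus's factorization. Real constructions (Wesolowski, Pietrzak) add a succinct proof so the output verifies quickly; that part is omitted here, and the tiny modulus is purely for demonstration.

```python
# Toy VDF evaluation: T sequential modular squarings (illustrative only).
def vdf_eval(x: int, T: int, N: int) -> int:
    y = x % N
    for _ in range(T):   # inherently sequential: step i needs step i-1
        y = y * y % N
    return y

# A real modulus would be 2048+ bits with an unknown factorization.
N = 62615533 * 63018043
seed = 0xDEADBEEF          # e.g. the biasable RANDAO output being hardened
print(vdf_eval(seed, T=100_000, N=N))
```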

Brian: And of course, the interesting thing here about doing this ASIC is, now let's say we have this ASIC and it performs at a certain level. If you compare this with bitcoin ASICs: as a miner, if I can improve my performance or electricity consumption by, say, 5%, that's great. I'm going to do it, right? I'm going to invest a lot of money in that because I can earn more money and become more profitable, and that's how Bitmain became like a massive company. But here it's not really the same, right? Here my advantage only comes in if I'm able to be like an order of magnitude faster than the rest; otherwise a small improvement doesn't really get me anything.

Justin: Exactly. Small improvements give you nothing so long as you're below this critical point, and even past the critical point, even if you've somehow managed to get 100 times faster, there's a graceful decay of the security. So, let's say you're 101 times faster, then you actually have almost no influence; maybe one in a million times you'll be able to influence one bit. So, you have this perfect unbiasability if you're below a certain number, and then it gracefully decays to RANDAO.

Brian: So also, of course, economically it's unlikely that we will see an industry emerging like we've seen in bitcoin, or around proof of work generally, to continually try to have better chips and faster chips. Like, there's just no incentive to do that here really.

Justin: Exactly, right. There's very little incentive. In terms of the incentives required, with proof of work we're talking on the order of $1 billion a year that is burnt by the miners and used for the security of the network. In the context of VDFs you need almost no rewards, on the order of a million. It's nice to have a reward to incentivize the block proposers to include the VDF outputs on chain, but you don't need more than that. And so, if you have a faster VDF it doesn't really buy you that much. One of the things that it does buy you is that it gives you a larger so-called lookahead, meaning that you will be able to know what the random number will be slightly before everyone else. But at the application layer you need to be aware of this and make sure you wait a sufficient amount of time, and at the consensus layer we're also doing that. We're making sure that even if the attacker knows what the random number is a little bit ahead of everyone else, it doesn't actually affect the system in any way.

Brian: Okay, so it could be something like, let's say we have a lottery and I'm able to calculate it slightly earlier; or, you know, I guess there could be some interesting use cases. Let's say you have some on-chain event, right, that's resolved by this random number, and maybe some derivative market that's based on this, and if I'm able to calculate this 10 seconds earlier, then maybe I get some benefit of being able to trade on it, or do something like that, but it doesn't really undermine the core?

Justin: So, if you take the example of the lottery: you're going to have a point where the ticket sales have ended. For example, with EuroMillions in the UK, ticket sales close at 9 o'clock or something, and then 10 o'clock is when they actually draw the number. So, there's this one-hour gap between when the ticket sales close and when the drawing is made, and applications will have to do a similar thing. They'll have to make sure that if someone is able to know the random number a little bit earlier than everyone else, they can't use that to their advantage.

Brian: Okay, and so yeah, you guys at the Ethereum Foundation together with Filecoin are now funding this effort to create this open-source ASIC. Can you speak a little bit on that? Like, what are some of the unique challenges of this?

Justin: Right, so the number one challenge of the ASIC is just the cost of it. Like, if you want to have a state-of-the-art ASIC, it is going to cost between $15 and $20 million, and it would be nice if we could split that cost across multiple parties. Filecoin is one of the parties that has the most interest in contributing, because they would like to use a VDF in their protocol. So, it's a win-win if we do collaborate. At this point in time we have not made a go/no-go decision on spending the $15 to $20 million. Right now, we're still in the viability phase and we're sharing the cost 50/50 on the various studies that we're doing before taking a decision. One of the nice things is that more people are interested in VDFs, so the cost is spread in that sense. So, we have Solana, who is also potentially looking to join the so-called VDF alliance, because they're using a VDF in their protocol. We're in discussions with Tezos; we had a call with Arthur and they might join the alliance. We're looking to submit a grant proposal to the Tezos foundation. And then you have Chia, which is also using a VDF in their protocol, and they might also join the alliance. Chia is, right now, taking a slightly different technical approach: they're using so-called class groups and we're using RSA groups. It's a minor technicality, and I really hope that we do converge towards the same solution, because it would be nice if we could have kind of a cross-blockchain standard for the VDF.

Brian: And then you mentioned that this would be given away for free. So, how are you going to give away these devices?

Justin: Right, so we have two hardware assumptions. The first hardware assumption is that the attacker cannot be much faster than the commodity hardware, and the second assumption is that there's at least one piece of hardware which is controlled by an honest player and which is online. So, the strategy we're taking is to build thousands of these VDF rigs and distribute them as widely as possible, in the most decentralized fashion possible, and hope that at least one of them is going to be online at any point in time. So, anyone from the VDF alliance is more than welcome to have a VDF rig. We'll give them to enthusiasts. We'll give them to exchanges. We'll give them to foundations, we'll give them to Edward Snowden, or the Electronic Frontier Foundation, the EFF, and we'll give them to as many people as possible. Also, there are projects that are looking to use VDFs at the application layer. For example, there's a decentralized exchange that's looking to use VDFs to prevent front-running. And so, we'll make sure to give them a rig as well.

Brian: Okay, but is it not something where there's maybe like a marginal reward to incentivize people to keep it on and keep it running? It's not really comparable to something like mining ASICs, right, where you'd want to have, I don't know, a big data center with many of these running; that doesn't seem like something that will happen here.

Justin: So, one thing that may happen is that people are simultaneously validators in Ethereum 2.0 and VDF evaluators, so people running the hardware. They might have an incentive to try and slightly overclock the VDF so that they're the first ones to have the output and the first ones to include the VDF output on chain, and get the tiny reward that is assigned to the first block proposer who includes it on chain.

Brian: Maybe last thing here, and I know this is a difficult question, but what is the timeline on these things? I guess some of the main milestones seem to be the launch of the beacon chain, and maybe the launch of the shards, and then having actual, I guess you said, state transfer where you could have ETH moving over there. On what timeline do you expect that to happen?

Justin: Right, so I've said publicly in several instances that I believe the beacon chain will be available in 2019. That's phase zero. Phase one, which is the data layer, will come in 2020, and the virtual machine will come in 2021. Now, to give a little more color on this: a lot of the complexity has been put in the beacon chain. So, just reaching this initial milestone will be pretty huge, and I'm expecting it to happen towards the end of 2019. The data layer is very nice because there's almost nothing there. It's just blobs of data that are recorded in a hash chain; there are blocks and headers and every block is fixed size. There's very, very little complexity going on, and the fork choice rule and the mechanism of attestations that I talked about are going to be the same as in the beacon chain.

Justin: So, I expect the data layer to come early 2020. One of the nice things is that with only the data layer, and not the application layer, the state layer, you can still do useful things. So, this is what I call alternative execution engines. One of the easiest examples is TrueBit. With TrueBit you allow people to make transactions, and these transactions are recorded in the blockchain, and with each transaction they're basically making a claim as to what the effect of that transaction is. So, for example, the transaction is: compute this function, and instead of having the EVM actually compute the function, the result is given and there's collateral, and then someone, a challenger, can come in and say hey, hold on, this is not the correct answer, the answer is 3 not 2, and then they engage in this game where they figure out who's correct and who's wrong. And the nice thing about this game is it happens in logarithmic time. So, it happens really fast and at little cost for the virtual machine. And so, what you can do is you can stuff the shards with all these transactions and execute them somewhere else.

For example, in Ethereum 1.0. And this is one of the main challenges that TrueBit has: TrueBit is a wonderful system, but even just putting the data on the blockchain is extremely expensive, and now they have this nice option of putting the data on the shards and running the execution on Ethereum 1.0. And then the last phase will be the EVM 2.0, which is based on WebAssembly. The nice thing here is that WebAssembly is becoming a standard across many, many different blockchains. So, it will be well tested and it will be relatively straightforward to add WebAssembly. So, we might also see phase two relatively soon. One of the things that we haven't completely figured out, for example, is the sustainable storage model: how do we make sure that we align the incentives in such a way that the state does not grow uncontrollably?
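
The logarithmic-time game Justin mentions can be sketched as a bisection over intermediate states: the claimer and challenger narrow their disagreement down to a single step, and only that one step ever needs to be re-executed on chain. A toy version, with iterated hashing standing in for the disputed computation:

```python
# Toy verification game in the TrueBit style (illustrative only).
import hashlib

def step(state: bytes) -> bytes:
    return hashlib.sha256(state).digest()

def run(state: bytes, n: int, cheat_at: int = -1):
    trace = [state]
    for i in range(n):
        state = step(state)
        if i == cheat_at:
            state = b"\x00" * 32  # the claimer lies about this intermediate state
        trace.append(state)
    return trace

def bisect_dispute(honest, claimed) -> int:
    lo, hi = 0, len(honest) - 1   # invariant: agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if claimed[mid] == honest[mid] else (lo, mid)
    return hi                      # the single step the chain must re-execute

honest = run(b"seed", 1024)
claimed = run(b"seed", 1024, cheat_at=500)
i = bisect_dispute(honest, claimed)
# Re-executing just step i on chain exposes the false claim.
print(i, step(claimed[i - 1]) == claimed[i])  # prints: 501 False
```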

Brian: Cool, well thanks so much for coming on, Justin. That was super helpful and I feel much more clear on what's coming for Ethereum. So yeah, great job and thanks so much.

Justin: Thank you for having me.