
We’re joined by Tal Moran, Chief Scientist at Spacemesh. This new consensus protocol is designed to run on home desktop PCs, filling free space on users’ hard drives to create a Proof of Space-Time. The goal of this new blockchain protocol is to solve the issues with Proof of Work and Proof of Stake: energy inefficiency on the one hand, and the possible centralized plutocracy of rich validators on the other.
Topics discussed in the episode
- Tal’s background as an academic and researcher
- The problems with Proof of Stake and Proof of Work
- What is Proof of Space-Time and how it works
- How miners use their hard drive space to establish proofs
- How randomness is generated in Spacemesh
- Spacemesh’s DAG architecture and how blocks are added to the chain
- The tortoise and hare protocols proposed by Spacemesh
- The Spacemesh team and recent funding round
- The project’s business model and roadmap
Sebastien Couture: Hi, so we’re here with Tal Moran, who is the Chief Scientist at Spacemesh. He is also a Professor of Computer Science at the Interdisciplinary Center (IDC) Herzliya in Israel, and he joins us today from Israel to talk about Spacemesh and a new form of consensus mechanism that we’ll learn all about in this episode. Thanks for joining us.
Tal Moran: Well, thank you. I’m glad to be here.
Sebastien: So tell us a bit about your background. You come from academia; how did you come into the blockchain space?
Tal: I am an academic cryptographer by training, and I was always interested in protocols that have new ideas and potentially practical uses. I heard about Bitcoin a few years ago, and I have to say, in the beginning I didn’t understand what the excitement was about. But once I started looking at the protocol, I saw there are some really nice, interesting theoretical questions there, and that’s how I got started: working on the theoretical problems that come out of Bitcoin.
Friederike Ernst: So you did a PhD with Moni Naor, who also worked on e-cash with David Chaum. What was your PhD on?
Tal: My PhD was fairly wide-ranging, but I think some of the more interesting parts, certainly to a lay audience, are protocols that ordinary people can carry out. My thesis was called Cryptography by the People, for the People: things like voting protocols, where you want to make sure that your vote was counted while still keeping the vote secret, and on the other hand you don’t trust computers. So what can you do? These are based on some really nice ideas that we extended, and we have several papers in that area, but also things like how you can use everyday objects to do cryptographic protocols. For example, we had cryptographic protocols with scratch-off cards and things like that.
Sebastien: So your PhD work exists in the real world, with practical applications?
Tal: Exactly. People are actually trying to put the voting protocols into practice, and there are several different ones. There was a project done at the IDC called Wombat Voting that had cryptographic verification. There are bigger projects: David Chaum was involved in one in Takoma Park, Maryland, I don’t remember the exact name, where they actually did a citywide election using verifiable voting. So people are working on these things, and now that voting fraud and attacks on voting systems have become more talked about, these things might have a resurgence.
Sebastien: Now, what about your postdoc at Harvard? Can you tell us a bit about that?
Tal: Yeah, so I worked with Salil Vadhan there, and some of the things we did there... we had the first protocol for a proof of sequential work, back when it still wasn’t fashionable. We came up with a protocol together with Mohammad Mahmoody, and it was very complicated; I have to say it was not really practical. But last year Bram Cohen and Krzysztof Pietrzak came up with a much simpler version of this protocol. It won the Best Paper Award at Eurocrypt, and it’s the protocol we’re actually using in Spacemesh.
Friederike: That’s super interesting. Bram Cohen is also involved with Chia, right?
Tal: He is, and so is Krzysztof; they’re both involved.
Friederike: Maybe we’ll talk about this a bit later. So you’ve been a professor in Herzliya for a couple of years, and you first started working on Spacemesh there, correct?
Tal: That’s correct. I actually started earlier with a cryptocurrency called Meshcash, which had many of the same ideas but was based on proofs of work. The original motivation was always to try to replace proofs of work with something else, but there are just a lot of details there, and proofs of work have one giant advantage: cryptographically, they’re very easy to work with. They’re self-contained, and they really are very nice in terms of proving security. So the first version said, let’s do things one step at a time. It took the first step of solving the scalability problems, and some of the incentive-compatibility problems that these blockchain protocols have, by going from a chain to a mesh. The second step was taking the ideas from Meshcash and replacing the proofs of work with proofs of space-time, which basically means replacing the resource I’m using: instead of CPU, I’m going to use disk space. That adds a lot of challenges in terms of how we keep things secure and guarantee consensus, and that’s what we solved when we wrote Spacemesh.
Friederike: Really interesting. So maybe let’s talk about the proof-of-work, proof-of-stake, and proof-of-space-time part first, and then we can move on to the mesh part. Is that okay with you?
Tal: Yeah sure.
Friederike: Yes, fantastic. So there are different issues with proof of work, one being that it’s extremely energy intensive. But you also feel that proof of stake isn’t a worthy successor, for several reasons. Why do you think that is?
Tal: Yeah, so I wouldn’t say that I’m categorically against proof of stake; there are some very nice protocols that use it. But proof of stake does have some major disadvantages compared to proof of work. One of them is a sort of circularity, whereas proof of work is totally self-contained: I have a resource, and I prove to you that I used the resource. In proof of stake, I’m proving that I spent money that’s in the system. So first of all there’s the circularity: the stake is actually a resource only if the money is worth something, but the money is only worth something if the system is secure. So we have something that seems a little bit fishy. It doesn’t mean it can’t work in practice, but already the foundations are a little bit shaky. The second thing is that because the whole system sort of certifies itself, you have these problems with how you prove security, or what your security assumptions are. Here I’m talking about protocols that actually prove security, and maybe we’ll talk a little later about why I think that’s a critical thing to do when you’re designing cryptocurrency protocols. In proof of stake, you want to prevent this sort of alternate-history attack. Suppose I wake up now, after a hundred years, and I’m trying to find out what the current history of the system is. With proof of work there’s an easy solution: I can see how much work has been spent on top of each branch of the system, and now I know the true history. But in proof of stake, if somebody manages to steal old keys, say all of the keys at some point very far back in time, then they can fork the system and create a new history that looks completely valid, because the history only depends on who has which keys.
And so they can fast-forward this history to the current day, and now I have two histories that I simply cannot distinguish between, so I have to trust somebody. And that’s the problem when what you want is something totally decentralized: the trust should also be decentralized. The way proof-of-stake protocols solve this, they either add trust assumptions like checkpoints, saying we just know that at this point in time this was the right branch and we will never switch to another one, or stronger assumptions, like that you can erase your memory securely. If the honest users always completely delete their old keys and switch keys all the time, so nobody can steal them after the fact, that can help you prove security of such a system. But these secure-erasure assumptions are arguably not so reasonable, because it’s very hard to actually securely erase data, and it’s not clear honest users even have an incentive to erase data, since you cannot prove that you erased it. So we have security assumptions that are more iffy. And then there’s the final thing, which is the permissionless property. One nice thing about proof of work is that I can start doing it without asking anybody for permission: I just start my CPU and it works. In a proof-of-stake system, in order to join I have to get stake, and I can only get stake if somebody gives it to me. So I need somebody’s permission. You could say this is just a technical, definitional property, but it’s not completely technical, because there’s an attack related to it. Suppose the adversary somehow manages to get a majority of the stake at some point in time, 51% of the total stake in the system. It can now refuse to sell the stake to anyone, and from then on, basically forever, the adversary controls the majority of the stake. This is a problem especially when systems are starting up. If the economy is worth a hundred trillion dollars, getting 51% of the stake is a very expensive attack, but if the economy starts now and is worth, say, ten or fifty million dollars, then getting 51% of the stake might not be such an unreasonable attack, especially if you’re talking about things that might be of interest to nation-state-level actors. The other problem is that this is undetectable. The fact that I have 51% of the stake doesn’t appear on the blockchain as “Tal has 51% of the stake.” I can pretend to be many different people, and they can still sell to each other, so you don’t see one giant block of stake sitting there. It looks like the system is fine, but at any point in the future I can decide to crash the system. So is this an actual, reasonable attack? I’m not sure, but I think it’s enough of a worry that I wouldn’t want all of the world’s economy to be based on a proof-of-stake system alone.
Sebastien: Interesting. I’ve sort of heard of these attack vectors before, specifically this last one, but not the others, like the trust assumptions you have to make around checkpoints. You mentioned that in order for this to work, one would have to steal basically all the keys from a particular point in the past.
Tal: So that’s the worst-case attack; there are weaker attacks that don’t steal all the keys. It’s an example of something you could do, and to prevent it you’d need some stronger assumption.
Sebastien: Okay. How likely do you think this attack is from…
Tal: I don’t know. I don’t know, and it might not be that likely, but part of the thing is that these are things that are very hard to quantify, and if you’re basing your economy on it, I’d like not to have all my eggs in one basket.
Friederike: That’s understandable.
Tal: I’m much more confident about mathematical assumptions than about this kind of assumption about how people behave when the system is starting, because we have much less understanding of these types of things.
Friederike: So what you posit as a new mechanism is the proof of space-time. Can you explain what that is?
Tal: Yes. A proof of space-time is about a real physical resource; ideally, what you’re going to be using is disk space. I’m going to fill up my disk and then not use it for anything else for a period of time. The time here is important, because think of renting disk space: I pay by megabyte per month, not just by megabyte. It’s not enough to say that I have this much space, because if I can reuse the space again and again over short periods of time, then in some sense my resource is not limited. So I want to prove storage of space over time. Unfortunately, we can’t quite prove that that’s what you’re doing. Formally, what we actually prove is that either I’m storing space over time, or I’m doing a lot of CPU work. So why isn’t this just a proof of work? Because in terms of incentives it’s actually a lot cheaper to store than to do the CPU work, and we ensure that’s true by making the CPU work hard enough that storing is always cheaper; if somehow storage becomes more expensive, we just make the CPU work more expensive too. So honest parties will always just store the data, because it costs you one cent to store things but two dollars to recreate them. And the adversarial assumption about resources is that this combination of CPU plus disk space is still a minority.
Sebastien: So in the white paper, I believe, or at least one of the places where you describe Spacemesh, it talks about the assumption that it is unprofitable for a participant to mine blocks on a cloud instance, or even on a dedicated computer at home, and that one can only be profitable using a computer that’s presumably being used for other things. Why is that?
Tal: Yeah, I’m not sure. Okay, if the statement is that it’s not profitable, that’s a bit too strong.
Sebastien: I think the word it uses is unprofitable.
Tal: Yeah. Okay, so maybe we should be more precise. We can’t guarantee that it’s not profitable, because profitability depends on the price of the currency, how fast it goes up, and all sorts of things we can’t control; those are economic factors that aren’t part of the technical design or the parameters of the protocol. What we hope, and the way the protocol is designed, is that it should still be profitable for you to do this if you have a home computer and disk that you already own for a different reason. If we’re talking about marginal cost, buying a disk as a home user is actually more expensive than buying a disk as somebody like Amazon or Google, or some large whale who has economies of scale and can buy disks more cheaply; that’s the standard thing, with any type of resource, economies of scale usually make it cheaper per unit as you buy more. However, here we’re using the fact that a lot of people already have a disk for other reasons. For instance, long before I started Spacemesh I already had a fairly strong computer that’s always connected to the internet; it has four terabytes of disk, and I probably use two. Those two spare terabytes are there because when I bought the disk I didn’t know how much I would need, and I think a lot of people are in the same situation. So my marginal cost for using those two terabytes is basically zero; I’m not buying a new disk in order to use Spacemesh. The idea is that because the marginal cost is low for a lot of people, they’ll be able to join the mining system. What we’d like to see is not that there are no whales, but that there’s a long tail: there may be a few whales, but there are also a lot of small participants, and the long tail means that if we take the sum of all their storage, it comes out to be a large fraction of the total system. Again, this is nothing we can guarantee, but it’s how we hope the system will develop, and we’re doing our best in terms of the design to help that happen.
Sebastien: So you’re making the assumption here that people have computers with, presumably, a lot of unused disk space, and that these computers are on and connected to the internet all the time.
Tal: I mean, it’s not an assumption. But yeah, for this long tail to work, you need a lot of people who are willing to have, or maybe already have, a computer connected to the internet with a lot of unused disk space.
Sebastien: Okay, but there is a sort of trend where most people, not just professionals... I don’t know anybody who has a desktop computer. Most people I know, myself included, have had laptops for years, which are on for parts of the day but mostly are off or in a backpack. And in emerging economies most people don’t even have a computer; they’re using a mobile. As this trend continues and fewer and fewer people use desktop computers, with maybe a significant but still marginal group remaining on desktops, would Spacemesh continue to work?
Tal: Yeah, so in the worst case, suppose there are only a thousand whales: Spacemesh still works; it’s not that it will crash. Our security assumption is that there’s an honest majority. Bitcoin right now is super centralized, right? And it looks like it’s still working. We want decentralization for two reasons. One is general fairness: we do want everyday people to be able to join the system. But the second, and I think the most critical one for any cryptocurrency, even if you don’t care about fairness, is that your security depends on having an honest majority, and the more centralized you are, the less reasonable this assumption becomes. If we have 800,000 miners, of which 700,000 form a majority, then collusion among 700,000 people is going to be really, really hard, so the honest-majority assumption becomes a lot more reasonable. If you have 10 miners and they hold the whole system, it’s a lot more iffy. And there are lots of cases in between: if you have 1,000 miners that are all very big, maybe that’s already fine in terms of your trust in the system. Technically there’s no reason Spacemesh can’t work with two miners, the same way Bitcoin can work with two miners; the question is whether you trust that, with only two miners, the system actually has an honest majority.
Friederike: Okay, I see, so walk me through the process. So I actually have two terabytes of unused disk space on a computer that is permanently connected to the internet. What do I do now?
Tal: So again, you’ll be doing this once our testnet launches, or mainnet, depending on how adventurous you are. You download the Spacemesh client, and the first part is filling your disk. This part actually requires proof of work. The exact parameters aren’t set yet, but think of something like two days of your computer working, using your GPU to solve proofs of work that fill your disk. Once those two days are up, the Spacemesh miner goes into the background and just listens to the gossip network, acting sort of like a full node on Bitcoin. Once every two weeks, your node creates a proof of space-time, or actually the version of it that we call a NIPoST, a non-interactive proof of space-time, and it does this by basically reading your whole disk. So once every two weeks you have to read two terabytes, which could take half an hour, and then it publishes something we call an activation transaction, which contains this proof. Everybody receives it and it becomes part of the block mesh. Publishing an activation transaction makes you active for the two weeks after: we divide time into two-week periods, each called an epoch, so publishing something today means you’re active in the next epoch. One of the interesting things here is that when you publish your activation transaction, you basically already know when you’re going to be eligible to generate blocks in the next epoch. It’s deterministic; it’s not a lottery at all. If you stored something for two weeks, you can generate an activation transaction, and if you generated one and published it, then you will generate at least one block in the next epoch.
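To make the lifecycle concrete, here is a toy Go sketch of the loop Tal describes: fill the disk once, then each epoch read the stored data, build a proof, and publish an activation transaction for the next epoch. Everything here, the names, the tiny label count, and the hash-based stand-in for the NIPoST, is an illustrative assumption, not the actual Spacemesh client or its API.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const numLabels = 4 // stand-in for a disk full of proof-of-work labels

// initialize fills the "disk" with labels; the expensive step, done only once.
func initialize(minerID []byte) [][]byte {
	labels := make([][]byte, numLabels)
	for i := range labels {
		h := sha256.Sum256(append(minerID, byte(i)))
		labels[i] = h[:]
	}
	return labels
}

// nipost stands in for the non-interactive proof of space-time: it answers an
// unpredictable challenge by reading back everything stored on disk.
func nipost(labels [][]byte, challenge []byte) []byte {
	h := sha256.New()
	h.Write(challenge)
	for _, l := range labels {
		h.Write(l) // "reading your whole disk"
	}
	return h.Sum(nil)
}

func main() {
	labels := initialize([]byte("miner-1")) // phase 1: fill the disk, once
	for epoch := 1; epoch <= 3; epoch++ {
		challenge := []byte(fmt.Sprintf("posw-output-%d", epoch))
		proof := nipost(labels, challenge) // once per epoch
		fmt.Printf("epoch %d: activation tx %x -> active (and guaranteed a block) in epoch %d\n",
			epoch, proof[:6], epoch+1)
	}
}
```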
Friederike: Interesting. So that is completely unlike, say, the Bitcoin or Ethereum networks, where miners band together into mining pools so that they actually generate some reward every now and then, right?
Tal: Right. I think this is a very interesting property, and it actually really helps with decentralization, because it’s no longer rational to join a mining pool. You usually join a mining pool because you want to get rewards more often and with lower variance, on a steady schedule. But here you’re basically guaranteed to get rewards once every two weeks; there’s no probability about it. The randomness only comes in as to when, within those two weeks, you’re going to generate a block. The fact that you will generate a block, and that it will be accepted, is guaranteed.
Friederike: So am I going to be allowed to generate one block regardless of how much disk space I commit, or is it linear?
Tal: Okay, so there are sort of two answers to this question. It is linear. The way we think of it is that every unit of disk space, which I think we’re setting, at least initially, to something like 250 gigabytes, but these are tweakable system parameters, and part of the reason we’re doing a testnet is so we can play around with them and see what works best in various situations... let’s think of it right now as 250 gigabytes. Each unit basically behaves like a virtual miner, so with several units you get that many blocks in every epoch. But we do have optimizations so that, instead of generating, say, four different activation transactions with four different proofs, you only need one: if you generate multiple blocks in the same time period, then in terms of communication and computational complexity you can use one transaction and just say that it represents four. So there are various optimizations to make this more reasonable. But in terms of the rewards: if you have four times the disk space, you get four times the reward.
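The linear rule is simple enough to state as code. Here is a hypothetical sketch in Go using the tentative 250 GB unit size from the conversation; the real parameter may well differ.

```go
package main

import "fmt"

const unitSizeGB = 250 // tentative unit size mentioned above; a tweakable system parameter

// blocksPerEpoch applies the linear rule: each full unit of committed space
// behaves like a virtual miner earning one block per epoch.
func blocksPerEpoch(committedGB int) int {
	return committedGB / unitSizeGB
}

func main() {
	fmt.Println(blocksPerEpoch(250))  // 1 unit  -> 1 block per epoch
	fmt.Println(blocksPerEpoch(1000)) // 4 units -> 4 blocks per epoch
}
```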
Friederike: Okay, I see. So as is already becoming apparent, there will be a lot of blocks generated, which doesn’t go together well with the notion of a standard blockchain, and we will get to that in a bit. But let me ask you first: how does the protocol make sure that I’ve actually saved the data on my disk for this amount of time? If I say I’ve saved this for two weeks, how does it know I’ve actually saved it for two weeks and didn’t just put it there yesterday?
Tal: Okay, that’s a great question. There are actually two parts to this. The first part is what we call the proof of space-time, which proves that I’m still storing the data. A proof of space-time has two phases. The first phase, which we call initialization, is what we said about the proofs of work: you fill your disk with proofs of work. This is something you only do once, so even though there is a proof of work involved, if you keep running the same system for many years, you only did the proof of work once, and the rest of the time is just storage. The second part of the proof of space-time is what we call the execution phase. Every two weeks you get a challenge, and we’ll speak in a second about how this challenge is computed, but it’s supposed to be something you couldn’t predict, so you don’t know in advance what it’s going to be. To answer the challenge, you either have to have the proofs of work on your disk, or you have to work again to recreate them. And even though you could do everything with proof of work, it’s going to be a lot cheaper to store things. So honest users will basically just store things and prove that they still have them.
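Here is a toy Go illustration of the two phases, under the assumption that labels are simple hash-based proofs of work (the real construction is different): initialization searches for the labels once, while execution is just a lookup, which is why honest parties store rather than recompute.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// powLabel searches for a nonce such that H(id || index || nonce) starts with
// `difficulty` zero bytes: the expensive, one-time initialization step.
func powLabel(id []byte, index uint64, difficulty int) uint64 {
	buf := make([]byte, 16)
	binary.BigEndian.PutUint64(buf, index)
	for nonce := uint64(0); ; nonce++ {
		binary.BigEndian.PutUint64(buf[8:], nonce)
		h := sha256.Sum256(append(id, buf...))
		if bytes.Equal(h[:difficulty], make([]byte, difficulty)) {
			return nonce
		}
	}
}

func main() {
	id, difficulty := []byte("miner-1"), 2
	stored := map[uint64]uint64{}
	for i := uint64(0); i < 4; i++ {
		stored[i] = powLabel(id, i, difficulty) // initialization: slow, done once
	}
	// Execution phase: an unpredictable challenge names some indices. Honest
	// parties answer with a cheap disk read; a cheater must redo the search.
	for _, idx := range []uint64{1, 3} {
		fmt.Printf("challenged index %d -> stored nonce %d\n", idx, stored[idx])
	}
}
```

Note how the difficulty parameter here is the same kind of knob Tal mentions later: raising it makes recomputation more expensive without changing how much is stored.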
Friederike: And how do you make sure that I didn’t just generate this data right before the challenge? In other words, how do you make sure that this time has actually elapsed?
Tal: So this is an excellent question, and it brings us to the second part: how do I know that I didn’t just get the challenges ahead of time, and who generates the challenge? If I know this challenge couldn’t have been predicted two weeks ago, and two weeks have passed and now I can answer it, then it proves that I’ve stored this data for two weeks, or recreated it, which again costs more than storing it for two weeks. The way we generate the challenge is with a proof of sequential work. A proof of sequential work is our proxy for elapsed time; we also call it a proof of elapsed time. The reason is that even though a lot of hardware can speed up parallel work, if you have to do sequential work, the cost of speeding it up is much, much higher, and there are some things we just don’t know how to speed up that way very well. So you do enough sequential work that it should take you about two weeks. The idea is that you use the previous proof of space-time as your challenge to this proof of sequential work, then you have to work for two weeks, and the output of this sequential work is the challenge to the execution phase. Because it takes two weeks, I couldn’t have predicted it two weeks ago: I had to have stored the data for two weeks.
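As a stand-in for the proof of sequential work, here is the simplest possible sequential computation in Go: a hash chain seeded by the previous proof of space-time. The real Cohen-Pietrzak construction additionally makes the result efficiently verifiable, which a bare chain is not; this sketch only shows why the output cannot be predicted before the sequential time has elapsed.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// sequentialWork iterates a hash n times; each step depends on the previous
// one, so the computation cannot be parallelized away.
func sequentialWork(seed []byte, n int) []byte {
	h := sha256.Sum256(seed)
	for i := 1; i < n; i++ {
		h = sha256.Sum256(h[:])
	}
	return h[:]
}

func main() {
	previousPoST := []byte("previous proof of space-time")
	// In production, n would be tuned so the chain takes about one epoch.
	challenge := sequentialWork(previousPoST, 1_000_000)
	fmt.Printf("challenge for the next execution phase: %x\n", challenge[:8])
}
```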
Friederike: So this proof of sequential work takes two weeks to produce, but it runs at a fairly low intensity?
Tal: Okay, this is what I wanted to say: this also seems to bring us back to proof of work, because if everybody is running a proof of sequential work, isn’t everybody running a proof of work? The nice thing here is that we don’t care who generates this proof of sequential work, because you’re not proving that you did work; you’re just proving that time elapsed. So it’s enough that somebody in the whole network does a proof of sequential work: one person can do it for everyone. Instead of having everybody run their own, what we’re planning is to have servers on the network provide this as a service, because it scales. It costs you one CPU for two weeks, which is not that big a deal, and if 100,000 people are all paying, they can each pay less than a cent and you recover your costs; the more people join such a server, the cheaper it is. You don’t have to trust the server, because the proof is self-certifying, but there don’t have to be many servers either. So we don’t think people will run their own; there will be very few. Whoever runs a server needs a fast computer, and maybe you’ll use two of them just in case, but the idea is that there will be one proof of sequential work, or maybe a small number of proofs, for the entire network.
Friederike: Okay, so I have this proof of sequential work now, together with my proof of space-time, and I can submit them to be allowed to mine a block. Is that correct?
Tal: Yes. You submit them in order to create your activation transaction, and once you have an activation transaction, we know that you are active in the next epoch. And there’s an extra thing there: as part of this activation transaction you publish a public key for a VRF, a verifiable random function, and this, together with the epoch number and some other details, tells everyone when in the next epoch you’re going to be eligible to create a block.
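Here is a sketch of the deterministic eligibility computation in Go. HMAC is used only as a keyed stand-in for the VRF: unlike a real VRF, it is not publicly verifiable. The layout (one eligible layer per epoch, two weeks of five-minute layers) is an assumption for illustration, not the Spacemesh spec.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

const layersPerEpoch = 4032 // two weeks of five-minute layers

// eligibleLayer deterministically maps a miner's key and an epoch number to
// the layer in which that miner may publish a block; no lottery involved.
func eligibleLayer(vrfKey []byte, epoch uint64) uint64 {
	mac := hmac.New(sha256.New, vrfKey)
	binary.Write(mac, binary.BigEndian, epoch)
	out := mac.Sum(nil)
	return binary.BigEndian.Uint64(out[:8]) % layersPerEpoch
}

func main() {
	key := []byte("vrf public key from the activation transaction")
	for epoch := uint64(1); epoch <= 3; epoch++ {
		fmt.Printf("epoch %d: eligible in layer %d\n", epoch, eligibleLayer(key, epoch))
	}
}
```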
Sebastien: I’d like to come back to this idea that most honest users will store the data on the hard drive rather than do the work. If the coin itself, the asset, becomes incredibly valuable, and it starts making economic sense for people to do the work rather than store the data on the hard drive, is that possible? And is there a way to fix that? Otherwise we just get back to proof of work.
Tal: You’re right, that’s a great question. So first of all, in terms of CPU versus storage, the price of the coin doesn’t really come into it, because it’s not like I get more coins by doing CPU instead of storage; I always want the lowest cost to get the same reward. The reward is that every activation transaction gets X reward, and if CPU costs me ten dollars and storage costs me one dollar, I’d rather use storage. If the reward goes up ten times, I’d still rather use storage; I just get more reward. But it could be that the cost of CPU goes down, or the cost of storage goes up, or that so many people are mining Spacemesh that there’s a shortage of storage in the whole world and everything becomes more expensive.
Sebastien: It could also be that there’s a competing coin that uses a similar protocol, and you’d want to be able to mine both coins. For one you would actually store the data on your hard drive, and for the other you would do the proof of work.
Tal: Okay, but again, the proof of work is more expensive, so if we build things correctly it’s always going to be cheaper to use storage rather than work; if you want to do two things, just get more storage, it’s still cheaper than doing more work. But here’s how we handle the case where storage actually gets more expensive than CPU. Remember, we’re talking about a two-week period, and there’s also something I haven’t mentioned. We said that in every two-week period every miner gets a block, but what if there are suddenly seven million miners, so many that the communication costs build up? Once we get above around 800,000 miners, either we let the communication costs go up or we increase the epoch size: instead of a two-week epoch we use a one-month epoch. Now we can handle twice as many miners with the same communication costs, but the storage costs twice as much, because you need to store something for a month instead of two weeks. So if before, storing something for two weeks was maybe less expensive than doing the work, now you’re storing it for twice as long, so maybe suddenly it’s not less expensive anymore.
Friederike: So this is the space-time version of difficulty.
Tal: Exactly. In this proof of space-time there is a difficulty. As I said, we fill your disk with proofs of work, so we have a very easy knob to turn: we can make those proofs of work more difficult, and that determines how hard it is to recreate the table. One nice property we have... there are these competing, older constructions called proofs of space. The difference between proofs of space and proofs of space-time is a technical, definitional thing; if you want to know why I think proof of space-time is the right notion, read the paper that’s on ePrint. But in terms of the constructions, theirs are actually also proofs of space-time in some sense; they just don’t have this difficulty adjustment. They have some other advantages, but they have this big disadvantage: if I want to make the initialization harder, to make sure it’s still rational to store rather than use CPU, then in their case you also have to make the verification harder and store more, and that doesn’t work quite as well. In our case we can just make the proofs of work harder.
Sebastien: Another question here, with regard to possibly attacking the network. This sort of maps onto the idea of a 51% attack, but I don’t know if it makes sense here, so enlighten me. With Bitcoin, at some point it doesn’t make any economic sense to use old hardware, which is why people are constantly upgrading: the old hardware is not energy efficient, and the newer hardware produces more hashes. But with storage, it’s just storage. So someone could literally buy up old hard drives being discarded from, say, institutions or school systems; there’s an abundance of old, cheap, near-zero-cost hard drives out there that one could amass, building a massive RAID and just adding hard drive after hard drive at basically no cost, without having to buy new storage. How would one prevent a sort of 51% attack?
Tal: So first of all, it’s not no cost. There are two costs. One is the initialization, which actually requires proof of work, so adding space costs you initial work, and that means it’s going to be pretty hard to add a huge amount of space very quickly, although it could be doable. The second thing is that when you say zero-cost old drives, there actually is a cost. We did some back-of-the-envelope calculations, and in terms of actually getting things into production and doing something that works, it’s not clear that it’s cheaper to use old drives, because they keep failing; there are a lot of operational costs around using old drives. I’m not saying it doesn’t work, but it is a real cost, definitely not zero. And could it work? Somebody with a large enough budget can always attack any of these systems. If you have a large enough CPU budget, or just cash, you can attack Bitcoin. Right now that budget needs to be enormous, because Bitcoin already has a lot of resources in operation, but if you’re talking about a smaller cryptocurrency, no matter what kind, because you can trade money for resources, whether it’s proof of stake or proof of work, a large enough budget buys you 51%. So is it reasonable that you could buy 51% of the space-time in the system? I don’t know. It depends on how big the system is, but I’m sure in the beginning it’s not so hard: if we’ve just started mainnet and the system is worth, say, 20 million dollars, then yeah, with 20 million dollars you can take over the system. One of the nice properties of proof of space-time versus proof of stake is that even if you did capture the system, you get diluted over time as the storage grows. Suppose you captured 90% of the storage now with 20 million dollars, but in 10 years the system is worth a trillion dollars. If you didn’t keep investing, say 500 billion dollars over those 10 years, you no longer have 51%: you’ve been diluted, and there’s nothing you can do to prevent that except continually buy more resources. In that case there’s nothing we can do, and nothing that I know of that any cryptocurrency can do. If you have enough resources to always hold 51%, that breaks the basic security assumption of all of these systems.
Friederike: Okay. I have one last question about the security of this proof of space-time before we move on to the mesh, just in very practical terms. So basically, I have those two terabytes on my hard drive and I fill them with your proofs of work, and then a proof of elapsed time happens, and depending on that, I get asked questions like: what’s in position 1713, what’s in position five, what’s in position 5,800,000, right? Which positions I’m asked about depends on this proof of elapsed time, the proof of sequential work, and it has an element of randomness. And actually, generating randomness on a digital system is incredibly hard. So how do you go about that problem?
Tal: This randomness is not so hard to generate, because here we don’t need true uniform randomness; we just need unpredictability, and the proof of sequential work basically guarantees us unpredictability, because if you could predict the result of the proof of sequential work, you could solve it faster than two weeks. So just by the fact that it is a proof of sequential work, you cannot guess the result before two weeks are up. We use this together with a hash function, in our case say SHA-256, which in our theoretical analysis we pretend behaves like a completely random function. Obviously it doesn’t behave like a random function, it has very structured properties, but this is a very common assumption in practical cryptographic protocols. Even though in theory you can build toy protocols that work with a random function but fail with any actual hash function, in practice we don’t know how to break any of the protocols that are secure in what we call the random oracle model once you replace the random oracle with SHA-256. So again, it’s not a total proof of security, because maybe somebody will find a break in SHA-256, but if they do, they will break so many other protocols that this will be the least of our worries.
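In code, treating SHA-256 as the random oracle Tal describes, expanding the unpredictable sequential-work output into challenged disk positions could look like this Go sketch (the counter-hash expansion is an illustrative choice, not the Spacemesh spec):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// challengeIndices expands the proof-of-sequential-work output into k
// pseudo-random positions in a table of tableSize stored labels.
func challengeIndices(poswOut []byte, k int, tableSize uint64) []uint64 {
	indices := make([]uint64, k)
	for i := 0; i < k; i++ {
		// Hash the unpredictable output together with a counter.
		h := sha256.Sum256(append(append([]byte{}, poswOut...), byte(i)))
		indices[i] = binary.BigEndian.Uint64(h[:8]) % tableSize
	}
	return indices
}

func main() {
	posw := sha256.Sum256([]byte("two weeks of sequential work"))
	// "What's in position X?" for three positions nobody could predict early.
	fmt.Println(challengeIndices(posw[:], 3, 8_000_000))
}
```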
Friederike: Okay, thank you, that makes complete sense. So let’s move on to the mesh. As you alluded to earlier, you actually end up with a lot more blocks than you would on a typical blockchain. How is this handled by the Spacemesh protocol?
Tal: Yeah. So we divide time into layers, say a layer every five minutes; again, the exact parameters might be tweaked, but this is the current starting set, and we’ll have something like 200 blocks per layer. This is randomized, so there might not be exactly 200: it depends on who is actually online, some parties may not be online and won’t create a block, and there’s also the randomization of when within the two-week period you generate blocks, so some layers may come out a little larger and some a little smaller, but that’s the average. These 200 blocks per layer give us some very big advantages, and there are some disadvantages too. The big advantage is throughput: we can handle much higher throughput, and I guess this comes from two things. One is that because you’re guaranteed your block will get in, you don’t have a huge disincentive to make your block larger. In Bitcoin, for example, if your block gets larger it takes longer to be transmitted, which means that if somebody else solved the proof of work at the same time, their block will get in first; if you’re in a race, you want your block to be as small as possible. This creates a perverse incentive not to put transactions in blocks, because you want to be first in the race. And there are also limitations of the system: if you make blocks too large, they simply take too long to be transmitted, which increases the chance of multiple people solving the proof of work at around the same time, and then the security of the system actually breaks. If the time between blocks is too short compared to the time it takes blocks to be transmitted over the network, you’re no longer guaranteed that the longest-chain rule will produce consensus.
Friederike: Okay, so basically large blocks and long propagation times foster a higher orphan rate.
Tal: Exactly.
Friederike: But if you actually mine many blocks at the same time, how do you deal with the conflicting transactions that will invariably be incorporated into these blocks?
Tal: So this is where we come to the disadvantages, and there are several. One of them is that it’s actually much harder to prove that you can reach consensus, because now we have to agree about which blocks are the right blocks, the blocks in consensus, and which blocks are not. Say I published a block and pretended it happened 200 layers ago, so last week or something, and it even looks valid, because I was guaranteed to generate a block in, say, layer 10. Several weeks have passed; I generated the block but didn’t publish it to anyone, and now I publish it two weeks later. It looks valid in the sense that it was valid at the time: had I published it then, it would have been okay. So now we need everybody to agree that this is not part of our history, because otherwise I could change history. The main challenge in designing Spacemesh was exactly getting everybody to agree on which blocks are considered part of the history. Those are the blocks we call contextually valid: a block is contextually valid if it’s part of the real history, meaning the transactions in this block are actually part of the history. And then there are blocks that are merely what we call syntactically valid: they were generated correctly in some sense, but they weren’t sent at the right time.
Friederike: Okay. So basically it’s the distinction between proof of existence and proof of availability. Right?
Tal: I’m not sure I understood that actually.
Friederike: So in the Ethereum space that’s also a very common problem; basically there are two slightly different problems. One is: I have something, and I want to be able to prove later that I had it at that point in time. What I can do is take a hash of it and post it to the blockchain, and then I can show you later that because I had the hash at that point in time, that’s actually an incredibly strong proof that I had the thing. That’s proof of existence. And proof of availability means that not only did I have the thing, but I also gave others access to it.
Tal: Yes, yes. These are the sorts of questions we need to answer. You can think of it like Bitcoin, which is simpler: you have a block that solved the proof of work, so it’s syntactically valid, but only if it’s on the longest chain is it contextually valid. By looking at the block itself, you can’t tell whether it’s part of history unless you also see which other blocks point to it. We have to do something similar in the mesh. But let’s put aside for a second how we decide whether blocks are valid, and pretend for a moment that everybody agrees, for each block, whether it’s valid or not. We still have the case where people publish blocks at the same time, like you said, and they might each see different transactions, and the transactions might conflict. In a blockchain we can guarantee that if I’m publishing a block, my transactions will never conflict with something in a previous block or among themselves, but in a mesh you cannot guarantee that. So basically we allow conflicts, at least of some types: if a conflict happens because two blocks are in the same layer and couldn’t have known about each other, that’s fine; they can both still be contextually valid blocks. But we have a state evaluator that decides which transactions are valid, or, in the case of smart contracts, what the current state of the system is after running the programs that are part of these transactions. It runs over all the transactions, and if there are two conflicting transactions, say, the first one will be valid and the second one won’t. This is maybe even easier to see in a smart-contract-type system. Suppose you have a transaction that takes a hundred coins out of an account, and another one that takes a hundred coins out of the same account, and the account only has a hundred coins in it. You can execute both of them: the first one takes a hundred coins out of the account, and for the second one, everybody will agree that it didn’t manage to take the coins out, because the account was already empty.
Friederike: So how do you agree on the order of these blocks?
Tal: Once we agree which blocks are valid, actually agreeing on the order is very easy; we just need a hard-coded mechanism. Our initial mechanism is to sort the blocks by their ID: everybody that sees a block knows its ID, which is the hash of the block, so we all agree on the order of blocks within a layer, and we can then take the transactions within each block in the order of the blocks. There are sometimes subtle issues there, with lotteries and things like that, where you might not want people to be able to make sure their transaction comes first, so this mechanism for deciding order can be switched, as long as everybody agrees on it, and it will be hard-coded in the system. As part of testnet we might play around with different mechanisms, and once we get to mainnet there will be a fixed mechanism. It could be the order of the blocks, or something a bit more randomized that depends on exactly which transactions are there, with some random ordering based on that. But this is an easy problem to solve once everybody agrees on what’s there.
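Here is a compact Go sketch of both ideas together, the canonical ordering and the state evaluator. The sort-by-ID rule follows the initial mechanism described above, while the types and everything else are illustrative. Conflicting withdrawals from the same account are resolved by execution order: the first succeeds, the second simply fails, and everyone agrees.

```go
package main

import (
	"fmt"
	"sort"
)

type Tx struct {
	From   string
	Amount uint64
}

type Block struct {
	ID  string // in reality, the hash of the block
	Txs []Tx
}

// applyLayer puts a layer's blocks into canonical order (sorted by ID) and
// executes their transactions; a conflicting second withdrawal just fails.
func applyLayer(balances map[string]uint64, layer []Block) {
	sort.Slice(layer, func(i, j int) bool { return layer[i].ID < layer[j].ID })
	for _, b := range layer {
		for _, tx := range b.Txs {
			if balances[tx.From] >= tx.Amount {
				balances[tx.From] -= tx.Amount // first conflicting tx succeeds
			} // later conflicting txs fail deterministically; no fork needed
		}
	}
}

func main() {
	balances := map[string]uint64{"alice": 100}
	// Two blocks in the same layer double-spend alice's 100 coins.
	layer := []Block{
		{ID: "b2", Txs: []Tx{{From: "alice", Amount: 100}}},
		{ID: "b1", Txs: []Tx{{From: "alice", Amount: 100}}},
	}
	applyLayer(balances, layer)
	fmt.Println("alice:", balances["alice"]) // 0: exactly one withdrawal applied
}
```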
Friederike: Okay, and do you get the block reward regardless of whether your block is actually included in the final blockchain or not?
Tal: No. What is guaranteed is that if you’re behaving honestly, your block will always be included; it will always be contextually valid, and you get a block reward for every contextually valid block. The only way your blocks won’t get in is if you’re behaving dishonestly. So you’re guaranteed a block reward.
Sebastien: So tell me, where do the hare and tortoise protocols, these consensus protocols you describe in the paper, fit into this? At which point are you using the hare protocol, and at which point the tortoise protocol?
Tal: Yeah, so at a high level, the tortoise protocol gives us consensus and irreversibility. It makes sure that eventually everybody will agree on which blocks are valid, and also that if we agree, with high confidence, that some block in the past is valid, it will stay that way: history can’t be changed. We call it the tortoise protocol because it’s slow but steady; it guarantees eventual consensus for everyone and requires very few assumptions to work, but it can take many layers. What the tortoise protocol guarantees (I’ll get into how it does this in a minute) is that if all the honest parties start out agreeing, then very quickly they come to a confident consensus. Very quickly means within one layer if there are no attacks, and on average two layers if there are. But this only holds if all the honest parties start out agreeing. Why wouldn’t they? Because the adversary might publish blocks that are on the threshold between being on time and not being on time. Honest parties always publish their blocks at the beginning of the layer: they know ahead of time exactly which layer they’re supposed to publish in, they can create the blocks ahead of time, and the layers are far enough apart that all the honest parties will receive all the honest blocks from one layer to the next. But if I’m trying to make consensus hard, I can generate a block and publish it half a minute before the end of the layer. Now some of the honest parties will receive it in time and some won’t, and maybe I can fine-tune the exact timing to hit whatever fraction I want: half the honest parties think it’s valid and half think it’s not, or one third versus two thirds. When this occurs, the tortoise will still eventually guarantee consensus, but it takes longer: we reach consensus quickly on honest blocks but much more slowly on dishonest ones. What the hare protocol does is run off-mesh, on the gossip network, so it’s not part of the final history, and it helps the honest parties agree right away on which blocks are valid. They’re guaranteed that all the honest blocks will be considered valid, and as for the borderline blocks, they might be ruled valid or not, but everyone will agree on them either way. And then the tortoise protocol guarantees consensus.
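As a very rough intuition for the tortoise’s vote counting, consider this toy tally in Go. The real protocol’s voting rules and thresholds are far more subtle; this only shows why honest blocks, which collect one-sided votes, are decided quickly, while borderline blocks that split the honest parties take more layers.

```go
package main

import "fmt"

// tally sums +1/-1 votes cast by blocks in later layers about one earlier
// block; the decision is confident once the margin clears a threshold.
func tally(votes []int) (margin int, decided bool) {
	for _, v := range votes {
		margin += v
	}
	const threshold = 3 // toy confidence threshold
	return margin, margin >= threshold || margin <= -threshold
}

func main() {
	// An honest, timely block collects supporting votes layer after layer.
	m, d := tally([]int{+1, +1, +1, +1})
	fmt.Println("honest block: margin", m, "decided:", d)
	// A block published on the threshold splits the vote, delaying consensus.
	m, d = tally([]int{+1, -1, +1, -1})
	fmt.Println("borderline block: margin", m, "decided:", d)
}
```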
Friederike: So you have instant finality on the clear-cut cases, whereas finality on the not-so-clear-cut cases just takes longer.
Tal: Yes, except that because we want a programmable system with smart contracts, it’s not enough to just say this transaction is good; what a transaction does depends on whether another transaction came in before it or not. So we actually need finality on an entire layer in order to compute what the current state should be, and what the hare protocol does is guarantee very fast finality on the entire layer, even if some blocks were maliciously generated to try to prevent consensus.
Friederike: Okay, that was going to be my next question. So what kinds of transactions does Spacemesh allow? You already said you’re aiming for smart contracts, but does this mean the reaction time of the network is five minutes, or whatever one layer is?
Tal: Yeah, so the latency of the network is not fixed; it’s probabilistic, because it depends on how the network is being attacked. If it’s not being attacked, which is probably going to be the case, or it’s attacked with only a very small percentage of the resources, then finality is going to be around five minutes. Again, it depends on the parameters, so we’ll play around with this, but I can be pretty confident in the results. And we have the same situation as Bitcoin in the sense that there’s no absolute finality, just levels of confidence. If you see something with one confirmation, you know the probability that an adversary with a given amount of resources can reverse it; if you see six, the probability goes down, but it never reaches zero. We have the same type of guarantee, but the probability goes down very, very quickly, because we have 200 blocks in every layer and the same type of analysis as Bitcoin, with the blocks voting for the previous layers, so we get confidence much faster than Bitcoin.
Sebastien: What are the useful applications you anticipate here? Are you targeting a specific type of user or specific types of applications for Spacemesh?
Tal: No, we’re actually very general-purpose infrastructure. Obviously the initial use is probably going to be payments, because that’s the main use of cryptocurrencies in general, but our infrastructure is planned to support basically anything: dapps, registries, whatever we can think of, and especially whatever we cannot think of yet but people will come up with.
Sebastien: Okay, though I think the main use for cryptocurrency at the moment is speculation, not payments.
Tal: Okay, the main use that people talk about: the thing it can do that right now we don’t have good ways of doing without it.
Sebastien: Are you incorporating, at least at the beginning or at some point, some sort of governance mechanism, like on-chain governance? I know in the paper there’s mention of this foundation.
Tal: I think governance is a great question, and it’s something we’re actively working on. We know we should have at least an initial version in place when we launch mainnet, but we’re still looking at the options and at what other people are doing, and we’re talking about it. If people have opinions about this, we’re very happy to hear them, because things are not fixed yet at this point.
Sebastien: Okay. And I realize this is not necessarily your role, but can you talk a little bit about the team, the funding you’ve raised, and also whether there’s a business model here, maybe not at the moment but sometime in the future? How do you anticipate making money as a company?
Tal: Okay, so in terms of the team, we’re an open-source project. We have about 20 people working full time, of which the vast majority are developers writing code, and we also have a research team, which is basically all people in academia wearing several hats. There’s me, there’s Iddo Bentov, there’s Barak Shani, who did a PhD in New Zealand working on elliptic curves, there’s Julian Loss, who is finishing his PhD now, and there’s Tal Malkin, who is a professor at Columbia. This is a completely open project, and we’re happy to work with anybody on research. Like I mentioned with Krzysztof and Bram’s work: in academia we’re not competitors, we’re all trying to get the best technology out there, and we build on whatever we find that’s the best we can use. As for the developers, they’re an amazing development team. One of the critical things in actually constructing a working protocol is having a development team that can do it, and it’s quite rare to find people who understand the theory deeply and are also amazing coders. Just as an example, they started working, I think, less than a year ago, we already have a working protocol, and testnet is starting in July. That is a very short time for this kind of work, and it’s definitely not a simple protocol. I’ve worked with developers before, and this team really impressed me. In terms of investment, we raised an initial seed round, I don’t remember exactly when, a year and something ago, and about six months ago we had another 15-million-dollar investment from top-tier blockchain and crypto funds: Polychain, Metastable, Slow Ventures, Collaborative Fund, and seven additional top-tier funds. In terms of the business model, the idea is that the vast majority of the coins are going to go to miners, but there will be, at least in the initial period, a small amount of pre-mining, less than 10%, and a tax, again for a limited time period, on the block rewards, which will go to compensate development and investors.
Sebastien: This pre-mining. So I suppose you will have some of those coins and you’re betting on the future value of the coin and these transaction taxes that you mentioned…
Tal: Not on transactions, but on the block rewards.
Sebastien: So once this runs out, what do you think is the future of the company? The investors put almost 20 million dollars into your company, so I guess they’re betting not only on the future value of the token but also on the company itself. What’s the plan there?
Tal: So again, this is not really my area; I’m the chief scientist, in charge of the technical part, the research, so I’m not a hundred percent sure. As far as I know, the main return for the investors is going to be the value of the coin; we’re not planning on doing anything more complicated. We’re also ensuring that there isn’t a large block of coins that somebody holds ahead of time that could be used to manipulate markets and things like that; we’re dripping the coins out over time to help the system stay decentralized.
Sebastien: Now, just to wrap up, please tell us a bit about the roadmap; I know you mentioned the testnet is up soon. And also, where can people find Spacemesh, read about it, learn more, and maybe even get involved?
Tal: Yeah, so actually we’re happy for people to get involved. Our website is spacemesh.io.
Sebastien: It’s a very cool website by the way.
Tal: Yes, I also like it; our chief marketing officer is amazing. The website also has links to the testnet. We’re going to be launching testnet in July; if you want to get involved, definitely follow the links there, there are various interesting things you can help with. If you’re an amazing coder, we’re actually hiring people in New York. Our current client is built in Go, and we want to build a completely separate implementation in Rust, so that we’ll have something that can validate the spec rather than relying on “the code is the spec.” In terms of the roadmap, testnet is going to run for at least six months, I think, and we hope to get mainnet out by the end of the year, but the whole point of testnet is testing: finding bugs, finding problems. Obviously we’re writing security proofs for everything; we do want to make sure the protocols are sound, but that’s not enough to guarantee security, so we’re also going to have bounties for finding bugs, we’re going to run extensive testing, and we’re going to have code reviews, because we want to make sure the software is secure too. Depending on what we find, mainnet could obviously be delayed, but ideally it’s going to be by the end of the year, and we’ll start running the actual currency.
Sebastien: Great, that sounds very exciting. And yeah, it’s a fascinating idea for a consensus protocol.
Friederike: We look forward to the testnet.
Tal: Thank you.