Nervos – Scaling Smart Contract Blockchains With Proof of Work and Generalized UTXO
While recent blockchain launches seem to leverage various Proof of Stake consensus mechanisms, some believe Satoshi’s consensus mechanism is optimal for distributed protocols. As decentralized ledgers jockey to become the chain of choice for enterprises looking to leverage blockchain technology, projects are looking to offer a solution that maximizes security, decentralization, and transaction throughput.
Kevin Wang, a Co-founder of Nervos, joins us to discuss why Proof of Work was implemented as the consensus mechanism for the network. To enable greater flexibility for application developers, Nervos created a Common Knowledge Base (CKB) to focus on the security of assets, enabling a complementary layer of Virtual Machines (VM) to scale and facilitate computation.
Kevin also discusses the active initiatives underway with the Nervos Grants Program to foster ecosystem development and encourage developers to evolve the permissionless network.
Topics discussed in the episode
- Kevin’s background at IBM, his open source development, and journey to crypto
- What the blockchain scene is like in Hangzhou, China
- What’s unique about Nervos, and the importance of each layer within the network
- Introducing Nervos’ consensus mechanism, NC-Max
- Why Nervos decided to implement Proof of Work
- Explaining the Common Knowledge Base (CKB), and its significance in the Nervos network
- How developer experience is in the Nervos ecosystem
- The economic model of CKB, Nervos’ native token
- Progress of the network, and a call for developers to consider the recently announced Nervos Grants Program
(6:13) Introduction and background, running a tech podcast in China.
(9:17) Being in Hangzhou, the FinTech center of China.
(12:54) Relationship between Cryptape and Nervos.
(13:55) Influence of Jan’s early work with Ethereum.
(16:11) Nervos Vision and high level view of its architecture.
(20:16) The problems of sharding layer one.
(23:43) NC-Max, a variant of Nakamoto Consensus.
(27:04) Eliminating the profitability of selfish mining.
(31:38) Choosing Proof of Work.
(38:32) The Eaglesong hash function and mining ecosystem.
(40:15) Nervos’ UTXO based Virtual Machine.
(42:19) Lock and Type scripts.
(47:35) Computing transactions off-chain, verifying on-chain.
(52:05) Two parties engaging with the same contract in one block.
(57:22) RISC-V architecture for Nervos VM.
(1:03:11) Learning to develop these contracts.
(1:04:00) The economic model and CKBytes.
(1:13:26) Flywheel economics.
(1:17:46) Base and secondary issuances.
(1:22:22) Applications of Nervos.
Sebastien: We’re here with Kevin Wang. Kevin is the co-founder of Nervos, a multi-asset store of value blockchain, which comes out of China. Kevin is going to walk us through how Nervos works. One of the things that’s interesting about Nervos is that, unlike many other blockchains coming into the ecosystem, Nervos uses proof of work, which I’m sure will ruffle the feathers of many of our listeners. Nonetheless, it’s a really interesting model. Thanks for joining us today Kevin.
Kevin: Good to be here. Thank you Sebastien and Sunny for inviting me.
Sebastien: So let’s start with a bit of your background, which is far removed from what you’re doing now. Previously, you were a consultant, you worked for IBM, and then you started one of the biggest tech podcasts in China. What’s it like running a podcast in China? I’ve heard that running a podcast in China is very different from running a podcast here. One of the reasons is that people in China that run podcasts actually make money. But yeah, tell us a bit about your background.
Kevin: We didn’t really run a for-profit one. It was technology focused, called Teahour. This is how I met some of the co-founders of Nervos. It was, at the time, one of the largest technology podcasts, focused on programmers, hackers, entrepreneurs, that type of audience. It was quite fun. We ran somewhere over 100 episodes.
I was trained as a software engineer. Like you said, I worked for IBM. I started my career there, with Silicon Valley Lab, and did some Big Data engineering solutions, before they were called Big Data. Then I jumped into the startup world, getting into open source and the web, between 2000 and 2010, where you see a lot of social apps and a lot of entrepreneurs getting into that space.
I caught part of that wave, and worked with a good friend on a developer education startup, focused on training people to become professional software engineers. Through that journey, I discovered Bitcoin, like many other folks, and reading the Bitcoin whitepaper was a really profound experience for me. I knew I always wanted to do something in this space, and started Nervos with some of the other co-founders back at the beginning of 2018, and I have been working on it since.
Sunny: When I came in to visit your office last year, in Hangzhou, it was a really cool experience. It’s like the startup capital of China. Can you tell us a little about what the scene is like, in Hangzhou, and what other startups are there? Are there a lot of other blockchain companies based out of there?
Kevin: Hangzhou is known as the FinTech center of China. The biggest player, obviously, is Alipay, which is part of the Alibaba Group. They have many products and run a huge operation there. Then you have more traditional FinTech companies, there are p2p FinTech companies, and in the blockchain space there are also many companies. Zhejiang University is there, one of the largest engineering focused universities in China. A lot of good engineers come out of that university.
We’re there, and our engineers are mostly in Hangzhou. You also have imToken, which is a wallet company, and you have Sparkpool, which is the largest Ethereum mining pool. Then you also have some permissioned blockchain companies there as well. Also, a lot of researchers and blockchain engineers. We have events regularly in Hangzhou, and we know that crowd pretty well; it’s a very entrepreneurial city.
Sebastien: What would you say is the level of interaction between the FinTech community there, and more specifically the blockchain community, with the West, in Europe and the US? Are you seeing a lot of interaction, or is it mostly encapsulated there?
Kevin: It’s interesting, because the wind shifts a lot; it’s very much regulation driven. In the early days, there wasn’t clear regulation on where the line would be drawn. A lot of people tried to cross the boundary and do a little bit of blockchain, and FinTech companies were also looking into blockchain. Regulation in China is, in itself, a pretty big topic. Now it’s more clear where the boundary is. The traditional companies tend to be in the permissioned blockchain space. The public blockchain side, which is crypto assets and smart contract platforms, tends to be more grassroots. Since last year, President Xi Jinping has given top-down mandates for the country to develop blockchain technology. So there is some trend that these can converge. The process is starting, but we’re not really there yet.
Sebastien: Okay, interesting. It would be really helpful for us at some point to do a whole episode on the Chinese ecosystem, and all of the regulation, but also all the initiatives that are coming out of there. From this side of the world, it’s often hard to dissect.
Kevin: Yeah, it’s very different.
Sunny: I’ve actually been following a lot of Jan’s work for a while. His company, Cryptape, built one of the first alternative implementations of Tendermint, in Rust, about two years ago, and I’ve been following them since then. What’s the relationship between the Cryptape company and Nervos? Is it one of the main development companies, or did it turn into Nervos?
Kevin: Good question. Cryptape is the main development company for Nervos. It’s very similar to Tendermint and Cosmos: Nervos is a public blockchain project governed by its own foundation, and Cryptape is tasked with implementing the protocol and developing the ecosystem tool chain and products.
Sunny: So Cryptape originally started off doing private blockchain development. What drove the vision to instead start focusing on building a public blockchain? What was the vision there for Nervos?
Kevin: Yeah. So it really comes from Jan Xie. He was a core Ethereum researcher and developer, and used to work with Vitalik pretty closely in the early days, around 2016. At the time, like you said, Cryptape built a permissioned blockchain called CITA, a variation of BFT Tendermint. He was still working with the Ethereum team pretty closely then. Through the almost two years he worked there, he got a front-row view of how Ethereum has grown, and also of its many growing pains. That experience informed a vision that maybe we could go a different direction. You start with Bitcoin, and to get to a Turing complete smart contract platform with general computation capabilities, you either go the direction of Ethereum or you go the direction of CKB, our Common Knowledge Base layer one blockchain. Turning left you get CKB, turning right Ethereum. I think both ways are viable, but we thought there are advantages to the technical architecture we chose. Obviously we can get into more of that in this podcast.
Sunny: It’s interesting that you guys came heavily from the Ethereum community. Just by looking at some of the architectural decisions, if I had to guess and didn’t know, I would have thought you guys were Bitcoiners, starting to build a smart contracting platform with proof of work, and UTXOs. It’s a very interesting design paradigm that’s different than Ethereum, and what a lot of the existing smart contracting systems are moving towards.
Sebastien: Yeah, just about anybody else. So let’s talk about the Nervos vision. Can you describe, at a high level, what you’re trying to build here and why you’re building it this way?
Kevin: The Nervos network is a layered architecture. In the early days, we specifically chose the direction of scaling through layer two. In other words, we keep layer one as simple, and playing as limited a role in the whole ecosystem, as possible, and then use layer two as the scaling layer. That decision informs a lot of the technical trade-offs we made, including, as Sunny mentioned, using PoW. The rationale is this: if we had chosen PoS or some novel consensus algorithm, we could potentially have achieved higher scalability on layer one. But having chosen to scale with layer two, you don’t necessarily have to do that. (We use NC-Max, a variation of Nakamoto consensus.)
We take a no-compromise approach to decentralization and protecting the cost of running a full node, just like Bitcoin. We don’t do sharding, and want to make sure to keep all the global state in one piece, while obviously being Turing complete to support layer two. All this comes together as the Nervos network. I know we’ll get to this a little bit later too, but on the economic side, we feel we fixed some of the issues Bitcoin faces, and we also have a crypto-economic model specifically designed to be the layer one for layer two. Both architecturally and crypto-economically, the Nervos layer one is designed for layer two technologies, and together that is the Nervos network.
Sebastien: You said you don’t do sharding, so the layer one conserves all the state and history of the Nervos network. So explain then: why is Nervos better for L2 than any of the other scaling approaches we currently see in the ecosystem?
Kevin: It’s a pretty big topic. I’ll just touch on each point a little and not get into depth on any one. First you have to look at what layer two is best for. Layer two technologies are best for scaling: you get really cheap transactions, potentially really fast finality and really fast confirmation, and they can scale really well. So when you design a layer one for that purpose, to complement layer two, you give it the things layer two needs and doesn’t have. You want a global settlement layer that is objective, which points to PoW, and you want a layer one system that’s secure.
We can talk about the economic model of the Nervos layer one a little bit. You want to pick a design that doesn’t have sharding, so that you have a global state and don’t get into the synchronization of different states and other issues. You want to pick a layer one system that’s maximally decentralized. It’s difficult to push for that if you want scaling on layer one as well. If you don’t need to scale layer one, we want to push in the other direction: no compromise on decentralization, which means, like Bitcoin, you want to protect the cost of running full nodes as much as possible. Bitcoin itself can’t be that layer, though, because it lacks the opcodes for complete Turing scriptability.
Sunny: So one of the main benefits, as opposed to Ethereum, would be the lack of sharding. Why is that important for L2? As long as all the participants of a particular L2 system are on a single shard, then it should be fine. Like, as long as all the players in this plasma game all have an account on that shard. Why does it affect the fact that there are other shards in the Ethereum world?
Kevin: That’s true. But we have seen, for example in the DeFi space, specific use cases where you have a lot of applications that depend on each other; we call this composability. When applications depend on each other, you want to make sure they can synchronize state very easily. What you will probably see, in a sharded blockchain, is that interdependent applications tend to reside on the same shard, because that makes the most sense. However, this defeats the purpose of sharding. Say DeFi or some other killer application evolves, and you have to put all of it on the same shard. Then you’re back to the issue of scalability, because how do you scale that shard?
So for layer one, we want a unified global state; there’s a lot of value in that, because you can compose very easily. The way we think about this is that layer one is for the preservation of assets. When we talk about public blockchains, the value derives not from the fact that they can do computations or pass messages around, but that they can support applications focused on value, or assets, or finance, right? And wherever you have assets, DeFi applications tend to go with the assets. So you’ll see that DeFi very naturally prefers a blockchain with this unified global state, where all the composability can happen much more easily.
Sunny: So you mean an ecosystem of composability will need to develop, even for layer two? Let’s say I need to be able to close my payment channel, and that triggers some event happening on a plasma chain. It’s good to have those in a shared state so that they can affect each other easily?
Kevin: Yeah, you could. But again, that’s not the advantage of layer two, unless layer one is absolutely crowded and it’s too expensive for these transactions. I’m not saying they cannot be performed on layer two, but these types of applications, I call them settlement transactions. These are high value transactions, and they typically need global consensus. So the best place to perform them is on layer one, where you have a mechanism that can reach global consensus, because those are the most valuable transactions, in a way.
Sebastien: So let’s talk about the consensus mechanism. I mentioned at the beginning of the show that Nervos uses proof of work, with a variant of Nakamoto consensus called NC-Max, or Nakamoto Consensus Max. Can you explain what NC-Max is, and how it improves on the Bitcoin proof of work we’re used to?
Kevin: Yeah, so NC stands for Nakamoto consensus, which really is Bitcoin’s consensus mechanism. This chain-based consensus mechanism prefers availability, which is a property we believe the layer one should have. NC-Max improves on Nakamoto consensus in several ways. First, we believe Bitcoin’s ten-minutes-per-block design underutilizes bandwidth; in fact, it’s very wasteful. So we want a mechanism that can dynamically adjust difficulty, so that instead of ten minutes per block, we target a specific uncle rate. Currently it’s targeting between 3% and 5%. Within that range of uncle rate, we keep the block time as fast as possible. Right now our block time is somewhere sub ten seconds, seven or eight seconds per block, while still having reasonable uncle rates to achieve consensus.
So that’s one. The other property NC-Max prioritizes is maximizing bandwidth. Nakamoto consensus has every transaction propagated twice through the network; for NC-Max, we want to make sure we only propagate each transaction once, to best conserve bandwidth.
Ultimately, the goal of the design of NC-Max is to maximize decentralization and preserve the cost of running full nodes. Then, for any given number of full nodes, let’s say three to 1,000, around that range, it really becomes a function, for any consensus algorithm, of how well you can utilize bandwidth. That’s what NC-Max is designed for: to best utilize bandwidth.
Sebastien: So in Bitcoin, we have a ten minute block time. The idea here is that within ten minutes, we’ll have near perfect propagation of one megabyte blocks. Given the evolution of bandwidth globally, and how that’s improving, we can probably assume that’s a very high margin of security. So rather than taking this approach, Nervos is taking the approach where, instead of trying to limit the uncle rate by having this very long block time, you optimize for the uncle rate by analyzing it in real time and adjusting the difficulty based on the actual conditions in the network. Is that fair?
Kevin: Yeah, exactly, you got it. If technology improves, and the network, let’s say, gets 10x faster in the future, that’s possible; bandwidth has been going up for the last several decades, and with 5G coming there’s even more room to improve. Then the transactions per second for our layer one protocol, NC-Max, will increase automatically. Another thing I forgot to mention, the third improvement: in the Bitcoin protocol, selfish mining is profitable because uncled blocks are not counted in difficulty adjustment. Attackers can therefore increase the uncle rate, get decreased difficulty at the next adjustment, and become profitable. For us, we count the uncles in the difficulty adjustment as well, and therefore selfish mining is not profitable.
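The mechanism Kevin describes can be sketched in a few lines. This is an illustrative toy, not the actual NC-Max rule (the real protocol also smooths and bounds the adjustment); `TARGET_UNCLE_RATE` and the function name are assumptions made here for the sketch.

```python
# Toy sketch of uncle-rate-targeted difficulty adjustment (not actual
# NC-Max code; the real rule also smooths and clamps the adjustment).
# Because uncles are counted, withholding blocks to drive the orphan
# rate up *raises* difficulty instead of lowering it, which is what
# removes the selfish-mining profit described above.

TARGET_UNCLE_RATE = 0.04  # middle of the 3-5% target range

def next_difficulty(current_difficulty: float,
                    main_blocks: int,
                    uncle_blocks: int) -> float:
    """main_blocks: canonical-chain blocks in the last epoch;
    uncle_blocks: orphaned blocks referenced by the canonical chain."""
    observed_rate = uncle_blocks / (main_blocks + uncle_blocks)
    # Too many uncles -> propagation can't keep up with the block pace,
    # so raise difficulty (slower blocks). Too few -> there is headroom,
    # so lower difficulty (faster blocks).
    return current_difficulty * (observed_rate / TARGET_UNCLE_RATE)
```

An epoch exactly at the 4% target leaves difficulty unchanged, while a selfish miner who pushes the observed uncle rate to 10% would see difficulty rise rather than fall at the next adjustment.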
Sebastien: Okay, so here you eliminate selfish mining by adjusting the difficulty on the fly, which makes selfish mining unprofitable for anyone attempting it. How does the network know what the uncle rate is? How does that get incorporated into, say, a block? How does that information get captured and transmitted across the network for the difficulty to be adjusted?
Kevin: Okay, this is where I’m a little out of my expertise, because I’m not a developer that actually wrote the code. I’ll go out on a limb and say it’s probably a property in the block headers…
Sebastien: From what I read, it’s a property in the headers. But what I’m asking, and it’s fine if you don’t know, is how does the network come to consensus on what the orphan blocks are? I don’t know if you can shed some light on this, Sunny.
Sunny: I mean, it’ll be similar to how Ethereum does it. In Ethereum, the miner will include any uncle block, up to a depth of seven, in the header. They’re incentivized to include them, because they also get a reward for including more uncle blocks. So I imagine it’s probably something very similar. I remember looking through some of the documentation from Nervos: it’s not in the header, but in another spot in the block, and it doesn’t count towards the block size limit, which avoids disincentivizing their inclusion. So yeah, it is very similar to Ethereum in that way. What I found really interesting, reading through the economic model, is that I’ve never actually seen selfish mining presented as an attack on difficulty adjustment; when I learned about selfish mining, that wasn’t how it was presented. But then I read one of the papers linked in your documentation. It was very interesting to see that if you fix how difficulty adjustment works and make it take the orphan blocks into account, selfish mining becomes much less of a concern. There’s not really much of an attack you can do there.
Kevin: Yeah, it’s very interesting, because we actually saw this first hand during our testnet. We had something called a mining competition: we encouraged people to mine on the testnet, and they could get some tokens that could be converted to mainnet tokens later. We saw somebody, some huge organization, launch a selfish mining attack. We saw it going on for some time and then it just stopped. Our hypothesis is that the attacker was testing, compared the income of mining honestly versus selfish mining, and realized it’s not profitable. That was really interesting for us to see.
Sunny: So maybe to take a step back: what made you guys decide to use proof of work in the first place? Nowadays, all the new networks launching are using proof of stake. I run a validator, and this month alone, like four new proof-of-stake testnets are launching in January. What made you decide to launch a proof-of-work network in 2019?
Kevin: It has a lot to do with the overall vision of the Nervos network, which is that we believe the layer one protocol needs to be rock solid and somewhat conservative: not to try too many things, not to attempt both decentralization and scaling, and not to rely on novel cryptography. What we wanted on layer one is, like I said, something very rock solid, decentralized, and battle proven, something that has been studied in research for a very long time.
At the time, we looked at all the research done on Bitcoin’s consensus algorithm. It’s just about the only candidate that fits this requirement. And ever since, as we’ve looked into more properties of proof of work, we feel we made a good choice. Think about having this global settlement layer, the security properties you want for it, what you need for a sustainable blockchain, and then decentralization. Sure, proof of work has mining pools, and they may not be entirely decentralized, but the same is true for proof of stake, with staking pools or staking-as-a-service programs. Even more so: recently Binance and some other exchanges got into the staking business and charge zero fees.
So you start to see staking services becoming a very centralizing force. Power tends to concentrate in the ecosystem service providers that already have it, again, wallets and exchanges. The reason is that they have the coins, they have the tokens. It’s very easy to see these players consolidating power over time. For proof of stake, it’s very difficult to break out of that monopoly, if you will, because as long as large stakers continue to stake, it’s the rich get richer: they will retain their monopoly in the ecosystem, as we have seen with delegated proof of stake, like EOS, and some of the issues there.
For regular proof of stake it’s not an immediate concern, but things could move in a similar direction, with staking providers serving the community. With proof of work the difference is that for these monopolies, like mining pools, retaining their power carries a huge operating cost. They have to invest real resources and keep innovating, staying on the edge of technology, to keep their monopoly. With technology the paradigm shifts every decade or so, so it’s a lot more difficult for them to retain that monopoly power forever. I’m not saying proof of work has no mining centralization issues, but in our opinion it’s a lot easier for that to change over time. This includes the mining machine users and mining pools, and even where the low electricity costs are, because, again, that can change with technical innovation over the long term.
Sunny: What made you decide to use NC-Max, which is very similar to Bitcoin-esque proof of work, or Nakamoto consensus, as opposed to some of the newer approaches to Nakamoto consensus, such as Bitcoin-NG or more DAG-like protocols? I feel like a DAG-like system would actually be really… one of the cons of DAG systems is that you need a UTXO model to make them very efficient, which happens to be what you have.
Kevin: Yeah, whether we use Nakamoto consensus or some variation of it, that’s half the question. For that, it would probably be good to have our consensus researcher, Ren Zhang, here. That’s exactly what he focused on for his entire PhD program, and he is actually known as the person that broke Bitcoin Unlimited. He would give a very good answer to the question. He basically studied all the variations, looked at chain quality and other properties, and decided Nakamoto consensus is actually the best of all the alternatives.
Just to give a little background, this NC-Max algorithm was developed while he worked for Blockstream under the mentorship of Greg and Peter, some of the prominent Bitcoin researchers. So there’s definitely a lot of Bitcoin influence in this, and a lot of thought on how to maximize the protocol efficiency of Bitcoin’s consensus algorithm. That would be a great question for him; I just know that’s his research area.
Sebastien: There’s a really great talk he gave at the Scaling Bitcoin event in San Francisco about a year ago, which was incredible. I would recommend anyone interested in learning more to check out that talk. There’s also a series of blog posts on your website summarizing the contents of that talk, which we’ll link to in the show notes as well.
Kevin: Yeah, that would be the best.
Sebastien: I want to ask you about mining. You have your own hashing algorithm, so I presume Bitcoin ASICs won’t work on Nervos. Can you talk about what the mining ecosystem looks like?
Kevin: Yeah, so we have our own hash function. We thought about this for quite a bit: reusing an existing hash function puts the project at risk, especially at launch, because there is an inventory of existing machines that can always point at your blockchain and double spend or attack it. This is the reason we developed our own hash function, called Eaglesong. The evolution of the mining ecosystem of Nervos CKB will be very similar to Bitcoin’s: you start with CPUs. In fact, in the first phase of the mining competition we ran, everybody CPU-mined their coins; then it shifted to GPU miners, and from GPUs to FPGAs. Right now, as we speak, we have both GPU miners and FPGA miners on Nervos, and eventually we’ll move to a more ASIC-based mining system. We are supported by pretty much all the major mining pools. We’re pretty happy with the hash rate distribution and also the enthusiasm from the mining community.
Sunny: Yeah, I’m actually in the process of setting up a mining rig myself for Nervos. I have a Grin miner; I turned that off and started trying to install the Nervos software right now. Unfortunately, I didn’t have a chance to finish it in time for the episode…
Let’s move on to the VM, the CKB-VM, because that’s actually one of the most interesting pieces. I really like it, because this is the VM I’ve always dreamed of. I always wanted to build, at some point, a smart contracting system that uses UTXOs. Then I found, “Oh, wow”, this is what you guys ended up creating. So can you tell me a little about the cell model, and what it means to separate state generation and state verification? Why is this the design you decided to go with?
Kevin: Yeah, happy to. That’s the core of the ledger structure. The cell model is a UTXO-like ledger structure, or data structure, if you will. If you start with a Bitcoin UTXO, you can only express one piece of information, which is a balance, an amount of bitcoin. If you generalize that, you’re able to support any type of information, token balances for example, and then add the capability of fully Turing complete scriptability that executes in the virtual machine; that becomes the cell model. A cell is basically a generalized UTXO, and just like in Bitcoin, when you create a transaction you have inputs and outputs. By the way, I realize I didn’t explain the acronym CKB: it stands for Common Knowledge Base, which is the layer one blockchain for the Nervos network.
Sebastien: Okay, so just a little side note: the CKB, what you call the Common Knowledge Base, is in fact the layer one that supports everything else.
Kevin: Yeah, CKB is the single layer one blockchain, and then on layer two we can have many blockchains, or channels and all that. When you create a transaction on Nervos CKB, you create inputs and outputs, just like Bitcoin transactions, except the inputs are cells instead of UTXOs. You can have multiple cells as part of the inputs, and the outputs are the resulting cells. When a transaction is verified, executed, and accepted, the inputs are spent: those cells become what we call dead cells, or expired cells. The outputs are the new cells. For Bitcoin, the set of unspent transaction outputs is the current global state; for Nervos, the set of unspent cell outputs is the global state. And smart contract code can also be part of a cell.
This is where we have two types of scripts. One is the lock script, just like in Bitcoin. You can still say: I have my private key, and I’m going to unlock this. In Bitcoin, you unlock a UTXO to be able to spend it; in Nervos, you unlock a cell to make it part of the inputs of a transaction. That’s what the lock script is: it allows you to include a cell as one of the inputs of a transaction. Then we have the second type of script, which we call the type script. So: lock and type.
The type script, not to be confused with the programming language TypeScript, allows you to put cells in the outputs of a transaction. In other words, the type script verifies that the state transition from inputs to outputs is valid according to pre-specified rules. Again, we’re talking about smart contracts. That’s how I would describe the cell model as a generalized UTXO model: it takes the input/output transaction structure and adds the type script, so you can run verification rules in the virtual machine. Effectively, a smart contract imposes rules on state transitions.
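The lock/type split can be made concrete with a small sketch. This is hypothetical Python pseudostructure, not CKB’s actual data model (real scripts are RISC-V binaries executed by the CKB-VM, and `Cell`, `Tx`, `verify`, and `token_rule` are names invented here):

```python
# Hypothetical sketch of the cell model described above. A cell is a
# generalized UTXO; the lock script is a predicate gating whether a cell
# may be consumed as an input, and the type script is a predicate
# validating the input -> output state transition.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Cell:
    data: bytes                                         # arbitrary state, not just a balance
    lock: Callable[["Tx"], bool]                        # may this cell be spent?
    type: Callable[[List["Cell"], List["Cell"]], bool]  # is the transition valid?

@dataclass
class Tx:
    inputs: List[Cell]    # live cells being consumed ("dead" once accepted)
    outputs: List[Cell]   # new cells created by this transaction

def verify(tx: Tx) -> bool:
    """On-chain check: every lock and type predicate must return True."""
    if not all(c.lock(tx) for c in tx.inputs):   # e.g. signature checks
        return False
    # Run each distinct type script once over the whole transition.
    scripts = {c.type for c in tx.inputs + tx.outputs}
    return all(t(tx.inputs, tx.outputs) for t in scripts)

# Example type script: a toy token whose total balance cannot increase.
def token_rule(ins: List[Cell], outs: List[Cell]) -> bool:
    balance = lambda cells: sum(int.from_bytes(c.data, "big") for c in cells)
    return balance(ins) >= balance(outs)
```

A transaction splitting a 10-token cell into 6 and 4 passes `token_rule`; one producing 6 and 5 is rejected, because the outputs would mint tokens out of thin air.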
Sunny: This is taking a more functional programming approach. Instead of data that has functions, like Ethereum smart contracts, where I’m calling functions that mutate the state of a contract, what I’m doing instead is defining contracts as these pure functions, the lock scripts and type scripts. I’m basically burning the old state of a contract by passing it through this function, which outputs the new state of the contract, which is this new cell.
Kevin: Yeah, exactly, exactly. It's very predicate based. The verification engine, or the virtual machine execution, returns a boolean: true or false. So is it a valid or invalid transaction? If it's valid, it's accepted by the blockchain; if it's invalid, it's rejected. Like you said, you actually spend or burn the old state, and then you have this new state coming out of the output.
Sunny: And are there rules of what outputs have to be generated from the burning of this input? How would I string a series of functions, lock scripts, together to make one larger workflow that I would want to do?
Kevin: Yeah, you got it. What you're describing is the equivalent, in Ethereum, of smart contracts calling other smart contracts, translated into the Nervos paradigm. Ethereum follows an object-oriented programming paradigm, where you have accounts that have internal state. When you interact with them, you mutate that state, and objects themselves can pass messages and use those to mutate other objects. In Nervos CKB, what you just said is exactly how you compose transactions. To compose smart contracts together, you pass them through a series of transactions: outputs can be linked to inputs and so forth, so you can string a series of transactions together that way.
Sebastien: All the computation for this is done off chain, correct? So the blockchain only stores the state, but the computation is done on, I guess, individual nodes.
Kevin: The verification happens on the blockchain. With Bitcoin, the code that constructs the transaction runs off chain. In your wallet, you search for UTXOs: okay, I've got this many UTXOs, which I'm going to include as part of the transaction. That's not Bitcoin Core code; the wallet searches through the transactions. But verification, which means things like inputs and outputs having to balance, happens on chain. Verification always happens on chain. For us, it's the same. Constructing transactions is off chain, but the verification, whether signatures check out and whether the type script running in the virtual machine returns a true boolean value, happens on chain. So you make sure these rules are verified.
Sunny: If I understand that correctly, maybe another way to think of it is not just as a functional system, but also as a declarative smart contracting system. To give an example of why state verification should be done on chain while generation can be done off chain: imagine a smart contract whose purpose is to sort a list of numbers from smallest to biggest. The sorting algorithm takes n log n time, but verifying that a list is sorted only takes linear time. So you could require that whoever generates the transaction does the sorting in their wallet, and when they put it on chain, everyone just verifies that the list is sorted, instead of running the sorting algorithm themselves. That makes the work everyone else has to do much smaller.
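Sunny's sorting example can be sketched directly; a hypothetical illustration, where the "wallet" does the O(n log n) work and the "chain" only runs an O(n) predicate:

```python
from collections import Counter

def generate_offchain(numbers):
    # Expensive step, done client side: O(n log n).
    return sorted(numbers)

def verify_onchain(inputs, outputs):
    # Cheap on-chain predicate, O(n): same multiset of elements,
    # and the output list is in nondecreasing order.
    return (Counter(inputs) == Counter(outputs)
            and all(a <= b for a, b in zip(outputs, outputs[1:])))

nums = [5, 1, 4, 2, 3]
state = generate_offchain(nums)
assert verify_onchain(nums, state)                 # accepted
assert not verify_onchain(nums, [1, 3, 2, 4, 5])   # out of order: rejected
```

The asymmetry between generating and checking a result is exactly what lets every node verify cheaply.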
Kevin: Yeah, you got it exactly right. You specify exactly what you care about verifying, not the procedure, the steps to get there, where everybody takes the same steps and checks, oh, are we arriving at the same state? Instead you say, "this is what I care about; in the end, it has to verify to this." Like you said, computation and verification can have an asymmetry in complexity. The sorting example you mentioned is very reminiscent of a lot of the zero knowledge proof work we see today: how do you reduce specialized computation into some fixed circuit rule that can be verified more easily, or at least with less capacity?
Sunny: Right, I was just going to mention this seems like a ZK rollup, in a way, where the computation of generating the proof is all done client side, but the verification, which is simple, is the actual lock script. So is this what you're implying, that it's well designed for L2 systems, in that it makes it easy to do these rollup-style processes?
Kevin: I think it is. Back when we first started in this direction, rollup was not even a term. There were definitely early layer 2 solutions being built, and when we looked at them, they pointed in this direction: I don't care about every single state transition that happens on layer two, or off chain, as long as we can come to agreement on layer one, either crypto-economically or by some other means, so we can say, okay, this is what we all agree on. Then that's really it; I just want to verify the final state. This paradigm maps very well to that way of thinking.
Sunny: So what happens when you want to create a smart contract where multiple people can interact with it in the course of a single block? Let's say you have an ICO or something happening. There's no reason multiple people can't participate in the ICO in a single block. The problem is, there's only one cell, and whoever hits it first ends up killing that cell. For the second transaction that tries to buy from that ICO, the cell is no longer there. So how do you construct paradigms like that in this UTXO cell model?
Kevin: Yeah, so what you’re pointing to, it’s kinda like a parallel processing that could be allowed by the UTXO model. In Nervos, when you construct transactions, you actually specify the dependencies of the transactions explicitly. So the world runtime can do this dependency mapping and see, okay, these transactions can actually be executed in a parallel fashion or verified in a parallel way. So they could, as long as they’re not conflicting with each other, they don’t grab the same cells, and things like that, then they can be processed in parallel.
Sunny: But what if they are trying to hit the same cell? Imagine this cell is a like counter on a tweet. I tweeted something on this Twitter thing built on CKB, and the cell holds how many likes it has. Say two people try to like it while the current value is five, and the smart contract rule is that the counter increments by one. Both people send a like with the declarative value of six. The first person increases the counter from five to six; the second person's like fails, because it's already at six. How would I have it so both people's likes can land in the same block? I don't want it so that every time your like lands in the same block as someone else's, you have to redo the like.
Kevin: For those instances, one of them will be accepted: whichever comes first, and that transaction will propagate fastest and achieve global success. Again, it's a chain-based consensus algorithm, NC-Max. So strictly speaking, it's possible these two likes end up in different blocks on different chains temporarily, but eventually consensus is reached. If we're talking about transactions in the same block, yes, one of them will be rejected: if the previous one was included, the next one will be rejected.
Sunny: Isn’t this pretty bad UX?
Kevin: You could. We have a term internally called layer 1.5, where you can have aggregators aggregate all the transactions and then propagate and produce blocks; that addresses the kind of issue you're talking about. But there's another point here. This is different from the Ethereum account model, where smart contracts effectively hold everybody's balance in one single account, and everyone tries to mutate the same object, if you will. With Nervos, it's different. Take ICOs again: all the ICO participants operate on their own cells. If I have a balance of 100 tokens, it's contained in my own cell, and you own the cell that contains your balance. I can unlock my cell and spend my tokens, maybe send some to Sebastien, and you can do the same. This is just like Bitcoin UTXOs. We call these bearer assets: everybody truly owns the assets they have. So when I mutate my cell to send Sebastien a few tokens, that's independent of you mutating yours to send somebody else tokens. We don't have to lock the contract so that I can update and then you update, for example. In Nervos, these are called first-class assets, meaning the ownership of assets or tokens is segregated by user, if you will.
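A toy sketch of the first-class-assets idea, with hypothetical structures (a dict standing in for the live cell set): each holder's balance lives in a cell they own, so two holders can spend in the same block without touching any shared contract object.

```python
balances = {                   # cell_id -> (owner, token_amount)
    "cell-1": ("kevin", 100),
    "cell-2": ("sunny", 50),
}

def transfer(cells, spent_cell, owner, to, amount):
    cell_owner, balance = cells[spent_cell]
    # Stand-in for the lock script: only the owner may spend the cell.
    assert cell_owner == owner and balance >= amount
    del cells[spent_cell]                       # the input cell dies
    # Two new output cells: the payment and the change.
    cells[f"{spent_cell}-pay"] = (to, amount)
    cells[f"{spent_cell}-change"] = (owner, balance - amount)

# Independent cells: both transfers succeed with no shared state.
transfer(balances, "cell-1", "kevin", "sebastien", 30)
transfer(balances, "cell-2", "sunny", "sebastien", 20)
assert balances["cell-1-change"] == ("kevin", 70)
assert balances["cell-2-pay"] == ("sebastien", 20)
```

Contrast this with an ERC20-style contract, where both transfers would contend on one balances mapping inside a single account.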
Sebastien: So taking a step back and comparing it to Bitcoin, just so we get an idea of where it sits and how it compares. The things that separate it from Bitcoin are the cell model, which is a generalized version of UTXO, where we also have state in addition to public key balances, and this consensus mechanism that improves throughput by detecting the uncle rate and adjusting difficulty based on it. Are those the only two things that separate this from Bitcoin?
Kevin: Another big difference is the base level virtual machine. Bitcoin does not have a virtual machine; it has a fixed set of opcodes you can use to construct multisig and simple smart contracts, if you will. But Nervos has full Turing-complete scriptability, which means lock scripts and type scripts run in the virtual machine.
RISC-V is a standard CPU architecture. The virtual machine of Nervos CKB is essentially a RISC-V computer simulator, which means all the programming languages that can compile down through the LLVM or GCC toolchains can be used to script Nervos CKB. You can use these languages to write your equivalent of smart contracts on Nervos. That is a big improvement over Bitcoin. There's also the economic model, which we haven't gotten to yet, but we can get to that later.
Sebastien: That’s an important distinction as well. I was asking for more protocol and consensus aspect of it. Yeah, let’s talk about RISC-V, a little bit. RISC-V is a very low level framework that gets implemented in things like CPUs, and that’s where very low level languages get built, and can interact with, like core processing of computer. I mean, that’s a simplified description, of RISC-V. Why did you want to build such a low level VM and Not keep it at a higher level of abstraction.
Kevin: There are several reasons. One of the most important is that if you look at hardware specifications, it's very rare that they change, and when they do, it's a very rigorous process. They almost always observe backward compatibility, because producing hardware is very expensive and they want to preserve prior investments. This is perfect for the blockchain space, because blockchains have an almost hardware-like property: new opcodes are very difficult to add and need a lot of justification, and it's very difficult to break the current operating system, if you will. You don't want to upgrade too often, and when you do upgrade, you want to make sure that existing smart contracts and so on are preserved.
From that point of view, it's really good. It's an open protocol with a lot of ecosystem players, and it's been rising in popularity in the last couple of years. It's essentially the anti-Intel alliance: a big ecosystem, with a lot of industry players pouring money into it. New instructions, for example, are well compartmentalized; they don't break the previous ones. That's very different from WebAssembly, which is a standard created by an alliance of competitors, the browser vendors, who have very specific concerns and conflicts of interest. It's also a higher level virtual machine, and we can get into why lower level is easier.
Also, it’s not designed to work for like a blockchain or this space. For example, for RISC-V virtual machine, because of the CPU simulator, we can actually use the CPU cycles to precisely measure computation unit, like the equivalent of gas in Ethereum. The CPU computation cycle will tell me exactly how many cycles this computation will cost. Whereas in WASM, it’s a lot more difficult because it’s the higher level virtual machine that has garbage collection. So when that got thrown into the whole equation, it’s just really difficult to do that.
Sebastien: Being such a low level programming environment, what's the developer experience like? As a developer entering the Nervos ecosystem, I want to build a dapp or a DAO or something. What do I need to learn, in addition to knowing how to code in, say, Go, for instance?
Kevin: Basically, whatever language can be compiled down to run on a RISC-V computer can run on the virtual machine. Again, it's just a computer simulator. This industry standard is evolving very fast, and they're putting a lot of money into it. Take the C ecosystem: you wouldn't recommend people build smart contracts in C, but theoretically speaking, that's the easiest path. Start with C, and even higher level languages, if they can compile down to C, can be supported as well. A really good property of this is that a lot of crypto primitives are very well supported in this ecosystem; many can be easily compiled down to C. So if you want to use a crypto primitive, say for your zero knowledge proof solution, you can just roll up your sleeves, drop it into the RISC-V compiler, add your own library, and use it very easily.
We have some community members working on this, and it's been pretty good: supporting signature algorithms that are natively supported by mobile phone chips and even browsers. Private key solutions will be much smoother on Nervos, or much faster to get there, than on other platforms.
Sunny: Is there a tutorial people can follow for writing smart contracts on this? Because compiling Rust down to RISC-V is one thing, but you also need to make sure your contracts follow the lock and type script rules, how those are formatted, and so on.
Kevin: To answer that: at this stage, the best way is probably to pop into our Telegram, the Nervos Network dev channel, and get help there. We do have documentation, and there are developer tutorials that show how to do things, but it's definitely not as mature as some other ecosystems. I'd recommend that developers who want to roll up their sleeves just come talk to us. We're all hanging out there, and we're very friendly and helpful.
Sunny: Let’s move on to, the last time we met, we talked about the big differentiator is the economic model of CKB. Can you tell us about what CKBytes are and why that economic model was chosen in order to design a multi asset chain?
Kevin: To answer your first question, the native token of the Nervos CKB is called CKBytes. One CKByte, one coin, represents a claim to one byte of global state. In a way, if you own, say, 10,000 CKBytes, you own 10,000 bytes of the global state of the blockchain. As for the reason we do this, I'll start with the problem, then we can talk about why we arrived at the solution.
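The one-CKByte-per-byte rule can be sketched as a simple capacity check; a hypothetical simplification, since the real protocol also counts script and header overhead toward a cell's occupied capacity.

```python
def can_store(ckbytes_owned: int, cell_data: bytes) -> bool:
    # A holder may occupy at most as many bytes of global state
    # as the CKBytes backing the cell.
    return len(cell_data) <= ckbytes_owned

assert can_store(10_000, b"x" * 10_000)       # exactly at capacity: ok
assert not can_store(10_000, b"x" * 10_001)   # exceeds the claim: rejected
```

Holding the token and holding the right to occupy state are the same thing, which is what ties state demand to token demand in the rest of the discussion.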
I think about what it takes to be a good layer one: if transactions are moving to layer two, where it's cheaper and faster to transact, then what's the purpose of layer one? In our view, layer one should be for asset preservation, providing security and censorship resistance. Layer one should be where the assets are.
Different from Bitcoin, where there's only one asset, BTC, on the blockchain, on a smart contract platform you have many, potentially infinite, user-defined assets. The goal of a layer one protocol is to provide sustainability so that these assets remain secure long term. This is the concept we call store of assets, or multi asset store of value.
One of the properties, when you design this multi asset store of value... think about what happens when you have multiple assets. In Ethereum, miners are paid a fixed amount of ETH per block; in Bitcoin, miners are paid a fixed amount of BTC per block. This makes their economics work: if the asset value increases 1000 times, the miners' income also appreciates 1000 times, because they get a fixed amount of BTC per block. That's what makes Bitcoin a good store of value. It's like a city: when your assets appreciate, the defense automatically appreciates as well.
The protocol can provision defense as the value rises and falls. In Ethereum, that's not the case, because the assets on Ethereum go up and down with almost no correlation to ETH's value. Hypothetically, if you take the store-of-assets mindset, the ideal native token for the Ethereum blockchain would be some index fund unit of all these assets weighted by market cap, and a unit of that index fund would be used to pay miners. Then if the whole ecosystem's asset value goes up, the native token also goes up and provides more protection for the security of the protocol.
But if the assets on Ethereum go up 10 or 100 times, you can't guarantee the defense goes up with them. In our mind, this is an economic problem: attackers can always attack your base consensus protocol to double spend these assets, if there's enough incentive. If the Maker token really appreciates, it could become worth attacking the Ethereum consensus.
The flip side of this, really the other side of the same coin, is that on Ethereum it's the ETH holders who provide security, by voluntarily diluting the ETH supply to pay the miners. But the other crypto asset holders are not making a similar contribution. ERC20 tokens don't inflate to pay the miners; their assets are protected, but they're not contributing to this common good. Whenever you have this tragedy of the commons, when incentives are not aligned and people aren't made to contribute to the common good, it can be abused. There will be issues. You get people free riding on the security, and in our mind, that's not sustainable.
Sunny: So how do you turn CKBytes into that index fund?
Kevin: It’s very difficult to truly build an index fund right on the blockchain. The example I gave to point to a direction we need to think about this issue, and the solution resides in the fact that you need a native token that can capture the demand of all the crypto assets running on the blockchain. In other words, if I’m a maker token holder, I need to hold a maker token, I need to contribute to the blockchain overall security as well. So it needs to be this single asset that can capture the demand of all the assets on the blockchain. The other one is that this contribution has to scale over time. In other words, if I hold my token for longer, then I need to contribute more to the security of the network. So it has to scale with both space and time.
Our native token is called CKBytes, which represents a claim on the global state. The idea is that any crypto asset demand results in occupation of the global state, so the native token is the value capture of the entire ecosystem, if you will. Here we typically make the analogy of land. You can open different shops: say, a McDonald's, a laundromat... All the shops are very different in their own ways, with their own ecosystems and economic properties, but they all occupy land. Whatever shop you open, it puts demand on land. So land is the value capture of this ecosystem, the single property, the single asset, that captures the value of the multiple assets and economies being built on the blockchain. That, I think, is our insight.
Sunny: It seems to capture the number of users holding a token, but not exactly the value of the tokens. Say there are 1 million users holding MKR. If the value of MKR suddenly 10x's, that doesn't actually increase the amount of storage MKR is using; only if the number of MKR holders 10x's does the demand for storage increase.
Kevin: Only if you drop the assumption that the global state is scarce. If it were an infinite resource, you'd be right, but it's a scarce resource, and it's market priced, which means the more demand you put on the land, the more you drive up the land price. Think of a corporation's balance sheet: if the cost of building in this city is very high, I won't build here, because the relative cost is too high. If Maker's value increases, that decreases the relative cost every holder bears. If it costs me 10 cents to store $1,000 of value, that's a no brainer. If it costs $100 to store $1,000, that's a different calculation.
Because the global state is a scarce resource, the value density on the blockchain keeps increasing. As it increases, lower value applications, or occupations, slowly move off the blockchain to layer two, where they probably use some proof, a really simple one, to still utilize the security guarantees while making sacrifices elsewhere. The high value assets stay on layer one, because again, that's the most secure global consensus. And as value density goes up, that encourages people to stay, because the relative cost is lower.
Sunny: Would we also start to see people move toward more off-chain state? Instead of holding a lot of data in a cell, they just store root state hashes in the cell and pass in the data when they make transactions. It would be like stateless verification on Ethereum. That way, any user would only ever have constant demand for state, because they could take all their personal state and put it into a 32-byte hash.
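The pattern Sunny describes can be sketched with an ordinary hash commitment; a hypothetical illustration: the cell holds only a 32-byte digest, and the spender supplies the full state as witness data, which the type script re-hashes and checks.

```python
import hashlib

def commit(state: bytes) -> bytes:
    # The only thing kept on chain: a constant-size 32-byte commitment.
    return hashlib.sha256(state).digest()

def verify_witness(onchain_hash: bytes, witness_state: bytes) -> bool:
    # Stand-in for the type script: the supplied witness must
    # hash to the committed value.
    return hashlib.sha256(witness_state).digest() == onchain_hash

state = b'{"likes": 6, "owner": "sunny"}'
cell_hash = commit(state)                    # 32 bytes of state, regardless of size
assert verify_witness(cell_hash, state)
assert not verify_witness(cell_hash, b'{"likes": 7}')
```

The trade-off Kevin raises follows directly: the chain footprint is constant, but someone must keep the full preimage available off chain to spend the cell.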
Kevin: I think it’s all trade offs. So again, we try to make the analogy of let’s say Manhattan’s land. In the early days, because the land is so cheap, you can just build fast food restaurants in there and not feel like this is wasteful, because almost cost you nothing to acquire the land to build the shop, you won’t just drop a skyscraper in Manhattan on day one, because it’s impossible. I mean, you make sacrifices, which means you have to ride the elevator up and down every day. What you described, putting state root proof on layer one does have its consequences well, because you probably have some latency trade off, and then if you’re running on the platform, maybe there’s token economics assumptions or liveness assumptions of watchtower, you have to make those assumptions. So those are not without its cost or trade offs.
What we foresee is that when the cost of global state is low, people will build directly on layer one, building their Manhattan in the early days. Then as more valuable applications come up and the ecosystem grows, the land price increases, which automatically provisions higher security and more protection. We believe this is a positive feedback loop, and there are very few true positive feedback loops; I gave a talk on flywheel economics last year. This loop is very important, because the most important demand assets place on a blockchain is security, not storage cost. Some people feel, oh, storage is getting cheaper and cheaper, why is this getting more expensive? The answer is that you're not just saving some bytes; you're storing something of very important value, and what you're paying for is really the protection, the security. As a store-of-assets blockchain becomes more valuable, it gains more protection, which attracts more valuable assets. As those assets migrate to the blockchain, this is part of the feedback loop: they put more demand on the token, which increases the protection, which attracts more valuable assets. That is the flywheel we talk about. I think you almost have to have these token economics to be sustainable and to preserve more and more assets.
Sunny: The other chain with a very similar economic model is EOS. They also have this notion that holding EOS tokens grants you access to computational and storage resources: the RAM, CPU, and NET resources they have. Their model gets much more complex; they added too much complexity there. But one thing that does make sense is that for them, each token represents a percentage share of capacity rather than an absolute amount. Given that CKBytes is a fixed supply system, there's a fixed amount of storage ever possible on the system. As storage becomes cheaper over time, wouldn't we want total storage size as a governance parameter, with your CKBytes as a percentage of that? As storage gets better over the years, we could increase the total network size.
Kevin: Very good question. I'll untangle this in a few ways. The cost is not just hard drive space. For the entire network, we cap the global state with a monetary policy: every year, the maximum global state is predictable, governed by how many CKBytes are issued over time. We do this to make it a scarce resource. The goal is to preserve decentralization, so everybody can run a full node. It's very cheap and very fast to sync. Developers don't have to depend on someone else's full node for their applications, and there's no very expensive full node that takes days to synchronize. Everybody can independently verify transactions. That's what we believe is necessary for a truly decentralized global network.
If you want that, then you have to cap the global state. It's not that hard drives aren't large enough; they can grow many fold in the future. We cap it because we want to preserve these properties, including how fast the network can sync. Decentralization itself is really a public good, which means everybody has to contribute to it.
Think about our issuance policy, which is part of the crypto economics. We have a Bitcoin-like fixed supply, as you said; we call this the base issuance. That alone is not enough, because, as I said earlier, people who preserve assets on the blockchain have to pay for the time they preserve them. In Ethereum, this is called state rent.
The whole concept of rent raises the question of how you pay for it. Because our native currency is a byte of the global state, we came up with a mechanism where you pay rent automatically through issuance. This is what we call secondary issuance. The idea is: imagine on Bitcoin you only wanted to charge a certain type of holder, say, the people who use UTXOs to store data other than balances. How do you charge only them for state rent? The way we do it: Bitcoin's issuance goes 50, then 25, halving every four years. We tack a constant component onto that schedule, so effectively it becomes 51, 26, 13.5, and so on. This constant component per block is the secondary issuance. Imagine you first give everybody their share of it; then the ones who are not using their CKBytes to store state keep their share as compensation.
For the people who do use their CKBytes to store state, their share of this additional issuance goes to the miners instead. So if 30% of CKBytes are used to store state, 30% of the secondary issuance goes to the miners. If we gave everybody the same issuance, that would be neutral for everybody, but instead this part is diverted to the miners. So for as long as you use your CKBytes to store data, you keep paying the miners the percentage of issuance that would otherwise have been given to you. Then we have a special smart contract called the Nervos DAO. If you deposit your native tokens into the Nervos DAO smart contract, you automatically receive a yield exactly matching the secondary issuance, so for you it's as if the secondary issuance doesn't exist and you're holding a Bitcoin-like fixed supply native token.
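The split Kevin describes can be sketched with simple arithmetic; a hypothetical simplification that follows his description (state occupiers' share to miners, DAO depositors compensated) and ignores where the remaining share goes.

```python
def split_secondary_issuance(secondary, state_fraction, dao_fraction):
    """Divide one block's secondary issuance by how CKBytes are used:
    the share backing occupied state goes to miners, the share deposited
    in the Nervos DAO flows back to depositors as compensation."""
    to_miners = secondary * state_fraction   # paid by state occupiers
    to_dao = secondary * dao_fraction        # keeps DAO deposits undiluted
    remainder = secondary - to_miners - to_dao
    return to_miners, to_dao, remainder

# Example: 30% of CKBytes store state, 50% sit in the Nervos DAO.
miners, dao, rest = split_secondary_issuance(1344, 0.30, 0.50)
assert miners == 1344 * 0.30   # matches "30% goes to the miners"
assert dao == 1344 * 0.50
```

For a DAO depositor, the compensation exactly cancels the dilution from secondary issuance, which is what makes the token behave as fixed-supply for them.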
Sebastien: So what projects are being built on Nervos? What are the initial types of applications that you’re seeing here?
Kevin: We launched mainnet about two months ago, and the type of applications best suited for Nervos are asset focused; broadly, you can put them in the DeFi camp. Think of the Nervos network itself as a multi-blockchain topology, similar to Cosmos and Polkadot. The difference is that we want one single blockchain to concentrate value, while all the other blockchains specialize in scaling. In the finance world, you can think of it as your custody provider, but decentralized, obviously, with all the others acting like transaction systems. That's my model for the Nervos network.
Sebastien: Where can people learn more about Nervos, and if they're interested in building on the blockchain, how can they get started?
Kevin: Go to our website and start there. The best way to get a quick understanding of the project is to read our positioning paper. It covers what we talked about today.
Sebastien: Thanks, Kevin, for your time today.
Kevin: Thank you guys. Thank you, Sebastien, Sunny.