Charles Hoskinson

Cardano – A Third Generation Smart Contract Blockchain

We are joined by Charles Hoskinson, who played an early role in developing Ethereum and BitShares, and is currently the CEO of IOHK. IOHK is an engineering company that undertakes cryptocurrency research, contributes development efforts to the Ethereum Classic ecosystem, and is spearheading the release of Cardano – a third generation blockchain protocol. IOHK has been in the news recently following the publication of fundamental research papers on Proofs of Proof-of-Work and the Ouroboros PoS algorithm, and its recruitment of highly regarded academics.

Topics we discussed in this episode
  • IOHK’s role in the Ethereum Classic ecosystem, and how Ethereum Classic differs from Cardano
  • Cardano as a “third generation blockchain” and what this means
  • Governance system of Cardano and the challenges behind developing a decentralized governance system
  • Ouroboros PoS algorithm – why was it developed and what’s special about it
  • Ouroboros Genesis: how full nodes can be bootstrapped without requiring checkpoints
  • Cardano’s bet on K framework for smart contract execution
Transcript

Meher: We have Charles Hoskinson back on Epicenter. We’re going to focus our conversation on Cardano. Charles, welcome back to the show.

Charles Hoskinson: It’s a pleasure to be on, guys.

Meher: So we had you on for the first time in episode 144. This was a few weeks after the DAO hack happened and Ethereum split into Ethereum Classic and Ethereum. So give us an idea of how it’s been since then. What’s been going on in your life?

Charles: Well, you know, I still like long walks on the beach and, you know, things like that. But no, levity aside, it’s been pretty crazy, you know. IOHK has grown from a few dozen people to about 130 people. We operate in 10 countries now. We work on a lot of projects. We work on Zencash. We work on Ethereum Classic. And the one we’re increasingly becoming known for is Cardano. So the Cardano project is like a Leviathan. It’s got dozens of researchers and engineers and we’re doing a little bit of everything. I suppose I should give you guys, since the first time I was on here was about Ethereum Classic, a brief update there. When I came on Epicenter, we were just kind of talking in hypotheticals, like IOHK wants to build a wallet and IOHK is going to hire some developers and, you know, we’re going to go do some cool stuff. Well, we did that. We actually hired seven full time Scala developers and what we were able to do with those developers is build a full Scala client. So it’s about 12,000 lines of code. It’s a full node. In fact, not only did we release it, we’re actually on the next version, a 1.1, which has some performance improvements and bug fixes and things like that. It’s gone through a full security audit from Kudelski Security. And at the moment, I think it’s the most concise and the only functional Ethereum client implemented, whether it be ETC or Ethereum. So that team really had a heck of a lot of fun. We learned a huge amount in the process of building a client and now we’ve kind of gotten to the point where we have to make a decision to either kind of retire the client or to scale up the team and start making some substantive changes to the Ethereum Classic ecosystem.

So there’s kind of a loose governance structure that’s been forming, that Barry Silbert brought together with a bit of funding, and we’ll have some discussions there. And if we can get funding, we’ll scale up Mantis and take it to the next level. If not, we’ll continue maintaining it and leave a few developers on it. It’s an alternative that people can download. But mission accomplished. We said we were going to hire some people. We said we were going to build it out. We went and did that, and it took a whole year and it was a huge learning experience for us.

Sunny: That’s really awesome. Yeah, I mean, I’ve actually used the Mantis client before and it’s a pretty good user experience, honestly. So, you know, before we jump more into Cardano, you guys are still relatively focused on Ethereum Classic as well. How do you see the relationship between Cardano and Ethereum Classic going forward? I know that you are building Ethereum Classic support into the Daedalus wallet, right?

Charles: Right, right. Well, Daedalus is eventually going to be a platform. So the goal is for Daedalus to start looking a lot like Android, in that you have one-click installs and developers are able to kind of package and bundle their own dApps or their own wallets, and there’s an obvious way of doing that. And it’s really difficult to build an architecture this way that’s secure and user-friendly and developer-friendly and so forth. So there’s a lot of discussions there, but yeah, Daedalus does support Mantis as it does Cardano as well. So what’s the relationship between the two projects? We kind of view them as different styles of cryptocurrencies. Ethereum Classic, because of the economics, the culture and the ecosystem, looks a lot like a better Bitcoin. Basically, it’s got supply mechanics like Bitcoin, it’s got proof-of-work, it’s going to stay on proof-of-work forever. So if you’re cut from that cloth and you like that cloth, it’s a better type of commodity in simpler aspects, you know. Silver does more than gold, you know, and they’re both commodities and people look at them along those lines.

Whereas Cardano is like the whole banana. It’s got a governance system. It’s picked PoS over proof-of-work. It’s going to have multiparty computation. It’s got side chains built in; it’s going to have lots of computing stacks. So really, what Cardano is about is doing more than just being a store of value that has some utility attached to it. It’s about saying, let’s say I go to Ethiopia and I want to rip out the entire financial stack of the country and replace it with a cryptocurrency framework. What would that look like? So it requires you to answer a lot of questions, like what’s the relationship between permissioned ledgers and permissionless ledgers? How do you actually have voting built in, so you can make changes to it? You’re not necessarily going to be immutable in every single case, you’re not necessarily going to be private in every single case, and these types of things. And there’s some sort of system to reconcile all of that. So it’s a much broader scope and therefore requires kind of a different technology and a different philosophy. So we believe in both and we maintain both in our portfolio and, you know, for the foreseeable future, we’ll continue to support these projects, especially Cardano, since we’re being paid for that one. Just to point out, I spent over a million dollars on ETC. I haven’t been paid anything yet. So principles are getting pretty expensive.

Sunny: Last time you were on the podcast you mentioned that one of the things you liked about IOHK was that you guys were able to focus on very general research rather than having to focus on a specific project. And now it seems like, you know, you are working on other projects as well, but there is generally a large focus on Cardano. So what kind of decisions led you to make that change and how do you think that will affect the future of IOHK?

Charles: So what happens is that you kind of have cryptocurrencies come in stages or generations or phases, whatever you want to call them. And every time they come, they tend to introduce a collection of new concepts. So you know, Bitcoin came in, and really, Bitcoin wasn’t trying to actually be money or a payment system. It was something much simpler. It was a collective delusion problem. So basically, the goal was to convince people that these magic Internet tokens somehow backed by math are actually worth real money, and we can buy and sell and trade them, right? So basically, it took a few years for that to set in. And really, the turning point year was 2013. And at that point, Bitcoin was here to stay. People said, okay, this is not going to go away. This is a real thing. But then immediately, people said, well, hang on a second here. This is just a push payment system that’s really slow and really expensive and not very user-friendly. I want to do a lot more with it. Can I do a lot more?

And then we had this grand conversation about how do we augment it? So we saw things like colored coins and Mastercoin. We saw altcoins, like Nxt, for example. And they all brought a lot of innovation. But at the end of the day, we needed programmability. We needed it just as when JavaScript came to the web browser. So you know, Vitalik and I and others, we created Ethereum. And the goal of Ethereum was to say, okay, give control over these protocols that run on a blockchain to the developer and give them an easy way of doing that. And that’s another proof of concept. And so a lot of people thought we were crazy. In fact, a lot of people still think Ethereum is crazy. [Laughs] That’s fair. Although a lot of people still think that Bitcoin is crazy, so you know, that’s fair too. And so basically, Ethereum comes out as the next big generation. And they kind of introduced the notion of the smart contract, you know, this notion of more complex computation in the transaction and then all these emergent structures that you can kind of build from it. All right. So now that generation is done, it’s set. People seem to agree it’s a good idea. There’s a lot of competitors like NEO and EOS and others that are coming out or already are out in market.

And we’re now entering a third generation where we say, okay, the delusion is good. We like that. Computation is probably a pretty good idea, but how do we do these things at a scale of millions of users to billions of users if they’re actually going to be useful for people? Second, there’s probably going to be hundreds to thousands of cryptocurrencies long-term. I don’t think we’re gonna see the great Cambrian extinction, you know, where we lose all these cryptocurrencies. There are still going to be a lot around because, you know, everything humans do, we do a lot of. We have lots of languages, we have lots of cultures, we have lots of governments and we have lots of religions. It’s really hard for human beings to agree or consolidate on anything. And so something as controversial as money or your financial life is probably not going to consolidate on just one universal standard. So as a consequence, you know, you need interoperability. You need the ability to actually move value and data and preserve certain things like security and privacy as you traverse the internet of value and you go between all these things. So there’s a scalability component, there’s an interoperability component, and, as Ethereum Classic and Bitcoin Cash have really brought to the forefront, there’s a governance problem as well. Where as we move beyond just a couple of nerds who meet up and, you know, enjoy talking to each other over Slack, to an actual system which has millions to billions of users and has control over your financial life and all of the facets of that, maybe including your identity and your property, you need some form of a democratic process to make decisions of where the system’s going to go and how you pay for things and so forth.

Now, whether that’s meta to the system, meaning that there’s some sort of meta consensus, or it’s embedded within an existing government, or it’s built on the blockchain, there’s a lot of debate about that. And so you see projects, for example, like Dash and Tezos or Cardano, where we view that on-blockchain governance, for at least some things, is probably a good idea. Other projects like Bitcoin, for example, a lot of people have been arguing to keep that off-chain and to have some sort of open source consensus, you know, materialize around it. And I think Vitalik has also been a bit skeptical of on-chain governance as well. So these are kind of the three design criteria that we view as required for the next generation of cryptocurrency. Keep what we know and love. So keep the collective delusion, keep it weird. Keep the computation. That’s pretty cool. But then also, go ahead and move into a domain where you get faster as you get more users, you can talk to all the different cryptocurrencies and you have some way of sorting out who pays and who decides. So that was really the 2015 idea for the Cardano project in a nutshell. These were kind of the business requirements at a super high level. Then what had to be decided is, well, how the hell do we do that? So we spent the first two years kind of thinking really carefully about a lot of deep tech. We started a really deep research agenda and we started tons of different threads, threads like better consensus algorithms, threads on voting, threads on things like, you know, better crypto primitives, making us resistant to quantum computers, you know, threads involving things like side chains and so forth. And what we’ve been doing is gradually now closing out those threads, turning them into actual peer reviewed whitepapers, then converting those into specifications and then gradually implementing those and putting them into a production system. So the first version of that system came out in September of 2017, and it kind of runs like Ripple. It’s a delegated version of Ouroboros. It’s a lot simpler than the papers we’ve recently published. And over time, we’re just going to more and more decentralize the system. And then eventually, add on components like our side chain components so we can link our smart contract layer and so forth. And that’ll be coming in the next few months to years depending on the features. So that’s a good high-level summary of I think what we’re doing.

Sunny: I see. So to break that apart, so yeah.

Charles: It’s a lot, right?

Sunny: Yeah. So you know, you guys had a lot of ideas on what needs to be done to make a third generation blockchain, and now Cardano is basically the amalgamation of, okay, let’s take all of these ideas and make a prototype and show that these things actually mesh together and work together and we can make a production system using these.

Charles: So it’s important to point out that there are other projects that are chasing this. But usually what they do is they chase a particular dimension, like EOS and IOTA, for example. They’re really chasing the scalability side. They say, oh, we can do lots of TPS. And then for interoperability, you have projects like Aion and Ripple, for example, with Interledger. And they’re really trying to talk about that internet of value. And then in terms of governance, you see projects like Dash and Tezos. So I think there’s consensus in the space that at least one or more of these dimensions is really important. I think we’re the only one that tends to view them as so interconnected that you kind of have to do all three at the same time.

Sunny: Right. So scalability, governance, and interoperability. Those seem to be the three key things. So I guess maybe we should dive into some of them. Let’s start with governance, right? Can you give us a bit of a short summary about your governance mechanism for Cardano?

Charles: Yeah. So that’s still under design. So we have a team led by a professor at Lancaster University named Bingsheng, and we’ve done a few videos. They’re on our YouTube page. But basically, the idea is that you have to combat a couple of demons at the same time. So one thing that you have to combat is the “who gets to decide” demon. And that’s a really difficult question. So you know, we tend to be beings that desire fairness, so we tend to be egalitarian and say, oh, well, everybody ought to be equal and so forth. But the reality is, people aren’t, in terms of skills, time, expertise, these types of things. So it’s really difficult to build a voting system, because you can either overshoot and then you end up having mob rule, so you get very poor decisions that are made. Or you can end up creating a very ivory tower, pristine group of voting people, your betters, who end up making decisions on your behalf. And that’s quite bad too. So you have to kind of find a sweet spot. And this is not a new problem. It’s something that political theorists have been talking about for hundreds of years, and they’ve come up with numerous different voting systems. You know, everything from what are called linear preference ordering systems, where for any decision, you don’t pick one, you pick a collection and you rank them, so Condorcet and Borda are examples of that, to things like liquid democracy and liquid feedback. So we said, you know, liquid democracy seems really cool. So this is kind of delegating your vote.

So you say, all right, well, let’s talk about two events to elucidate the example. So everybody in the beginning gets, let’s say, the same vote, and the votes are situational. So you know, Bob is proposing a nuclear power plant design and you’re allowed to vote on it for whatever reason. Well, most people aren’t nuclear engineers, so they’re probably not going to have a very informed opinion and they’re going to talk more about the bike shed in front of the nuclear plant than the actual design of the nuclear plant. So what if you could delegate your vote to Bill, who’s your neighbor, who happens to be an engineer you respect a lot and who has done this for 25 years? Great. But then, let’s say there’s another vote and that vote’s on, I don’t know, your roof or zoning laws or something like that. You’re in a big dispute with Bill and you don’t want to give him your vote. So in ordinary representative democracy, what you tend to do is just give one third party, like a congressman or senator, power for a period of time, and then they go and decide on your behalf, for better or for worse. In a delegated democracy system, you can delegate in real time. So you can say, okay, for this particular vote, I give it to Bill. For this particular vote, I give it to Bob, and so forth. And there’s a lot of theory behind why that would make more sense.
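
To make the mechanics concrete, here is a minimal sketch of that kind of per-proposal delegation in Python. Everything in it, the names, the data shapes, the cycle handling, is an illustrative assumption rather than Cardano’s actual protocol: a ballot is resolved by following the delegation chain for that one proposal until a direct vote is found.

```python
# A minimal sketch of per-proposal ("liquid") vote delegation.
# All names and data shapes are hypothetical, for illustration only.

def resolve_vote(voter, proposal, delegations, direct_votes, max_hops=100):
    """Follow a voter's delegation chain for one proposal until someone
    who voted directly is found; stop on cycles or dead ends."""
    seen = set()
    current = voter
    for _ in range(max_hops):
        ballots = direct_votes.get(proposal, {})
        if current in ballots:
            return ballots[current]          # found a direct ballot
        seen.add(current)
        current = delegations.get(proposal, {}).get(current)
        if current is None or current in seen:
            return None                      # abstention or delegation cycle

# You trust Bill on the power plant but keep your own vote on zoning.
delegations = {"power-plant": {"you": "bill"}, "zoning": {}}
direct_votes = {"power-plant": {"bill": "yes"}, "zoning": {"you": "no"}}

print(resolve_vote("you", "power-plant", delegations, direct_votes))  # yes
print(resolve_vote("you", "zoning", delegations, direct_votes))       # no
```

Keying the delegation by proposal is the whole point of the example above: your delegate for the power plant need not be your delegate for the zoning dispute.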

Just as an example, if you had a delegated voting class in the US election for the presidency, you know Donald Trump would have a really, really hard time getting elected. Why? Because most people would delegate their votes to local leaders in their communities. And so Trump is not talking to the general American public anymore. He’s actually talking to a voting class that’s been specially selected to analyze him. So when he says we’re going to make America great, they’ll say, well, what exactly are you going to do? And he has to go into policy specifics and he can’t do that, right? He’s built this whole campaign on low information. So you know, a lot of people seem to feel that liquid feedback, liquid democracy is a good idea. So we said, okay, let’s try that. So we’re gonna push a paper out for CCS, come around May. We were hoping to get it out in February, but the universal composability stuff took a little longer to get done. But it has some details on exactly how we think that would work, and it involves a lot of complex math. But if you’re curious about how it basically works, we did a video. It’s about 45 minutes long and it goes into kind of the details of how the crypto works and how we view the system and so forth. So that’s one demon, the “who gets to decide” demon. And you make voting proportional to your stake in the system. That’s the obvious starting point. But you can change that metric or reweight it based on other facts and circumstances.

The second demon is the demon of rational ignorance, and this is the harder of the two to solve. So what is rational ignorance? Well, rational ignorance is basically where the value you get from knowing something is less than the time it takes to know that thing. So for example, let’s talk about healthcare. The American healthcare system is horrendously complicated. It would take you literally years to decades to understand all the facets and ins and outs of it. So if somebody comes to you and says, hey, Sunny, I want you to make a decision on whether, you know, universal healthcare is a good idea or Obamacare is a good idea, or something like that, the only way you can actually really have an informed decision is to invest those years of effort. And what is the output of it? Your vote counts exactly the same as my vote and exactly the same as the crazy hobo’s vote in the streets of San Francisco. So then you get a little disenfranchised. You say, hang on a second here. I got exactly the same payoff for putting in a lot more work. So the rational behavior is to not stay informed. That’s why we tend to focus on wedge issues in elections, because they’re very easy to have an opinion on or understand. But in general, they aren’t very substantive. You know, the things that really have a big impact over your life are things like foreign policy and healthcare policy and retirement policy and so forth, not gay rights or abortion rights. These are issues that affect small minorities, but those issues are much easier to understand than, you know, the complex intricacies of the Syrian conflict, for example. So most voters tend to stay ignorant.

Now, why is this relevant to the cryptocurrency space? Well, just look at basic debates we’ve been having, like the block size, or the difference between proof-of-work and proof-of-stake, or should we bail something out or not? These are very complicated issues and they have many different dimensions to them and they take a huge amount of time to become informed about. So most people either just vote with their wallet, meaning whatever they think is going to give them the most money, or they just vote along cults of personality, where they just say, well, what does Vitalik think, or Charles, or Dan, or Andreas, and they just kind of go along that particular line. So the problem of rational ignorance is a heck of a lot harder to solve than the “who gets to decide” problem. And there have been some ideas, like maybe you incentivize people to be a voting class, like Dash has master nodes, for example. Or, you know, maybe you can create some sort of bounty system or something. So we don’t have quite a solution yet for that side of it, and we don’t really need a solution at the moment because the voting parts of Cardano won’t be out until probably 2019. It’s one of the last components we’re going to pull into the system as we, you know, turn things on. But we will touch that topic in the May paper that’s being released. And then we’re going to try to experiment with some things.

The other thing is that we’re not in this game alone. Everybody who runs a cryptocurrency kind of has to deal with this in some capacity. So what I’d love to do is start a dialogue with a lot of other project leaders and try to create some common notions, or at least some universal standards that we can follow about these things. But those are the two demons. And I’d say, you know, on the research spectrum, Ouroboros and the consensus algorithms are very advanced, near done. Whereas the voting is, you know, in the early phases, I’d say 20-30 percent along that research spectrum. So we’re not really too far along. But there are some videos we have. Look at liquid democracy and liquid feedback, and Google “rational ignorance,” and it’ll give you a really good sense of what some of the challenges are. Another thing is there’s actually a course on Coursera if you’re really interested in democracy. I think it’s from the University of Michigan and it’s Securing Digital Democracy or something like that. And then there’s another class on voting theory that they occasionally run, and these are really good classes and they kind of give you a high-level view of how these things work. And also some of the things to think about that you probably have never thought about before, and it’s a really fascinating topic.

Meher: In the governance aspect, you talk about these two issues. The one is the issue of rational ignorance, which is how do you incentivize the voters to actually invest the effort to vote well. And the second is the issue of deciding who gets to vote, right? Now, many people would say that in proof-of-stake there’s a third very fundamental problem of governance and voting. And that problem is that in a traditional proof-of-stake system, you need to stake your coins and then participate in consensus and then you’ll make more coins, right? So in the early days of cryptocurrency, people used to worry about the rich get richer problem, right? Now, when we start to have governance decisions about the system and the voters are the stakers, the rich get richer problem gets magnified. What I mean to say is, let’s say the inflation rate in Cardano is, I don’t know, two percent. If I’m really rich, if I’m really ADA rich, I want the inflation rate to be much higher, because I will validate and stake my ADA, and I want the inflation rate to be high so I’ll get more ADA and I’ll consolidate my position of having a lot of ADA. Whereas somebody that has very little ADA might not validate themselves, and slowly they’ll get diluted out. Now, the decision of what should be the inflation rate in the system is dependent on governance. And then in governance, if I have a lot of ADA, I have a lot of voting power. So don’t you see this as a problem, that when you have these people that have stake vote on governance decisions, you might somehow magnify the rich get richer problem?
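
The dilution mechanism behind this question can be made concrete with some toy arithmetic. The sketch below uses purely illustrative numbers, nothing here is Cardano’s actual monetary policy, and takes the extreme case where one large holder captures all the staking rewards, so the non-staker is diluted at roughly the inflation rate.

```python
# Toy dilution arithmetic; illustrative numbers, not Cardano's parameters.
supply = 1_000_000.0
inflation = 0.08            # deliberately high, hypothetical
whale = 150_000.0           # stakes, and in this extreme case earns all rewards
holder = 1_000.0            # never stakes

for _ in range(10):         # ten years
    minted = supply * inflation
    whale += minted         # all newly minted coins go to the staker
    supply += minted

print(f"whale share:  {whale / supply:.2%}")    # 15% -> ~60%
print(f"holder share: {holder / supply:.4%}")   # 0.10% -> ~0.05%
```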

Charles: Yeah. It seems like a problem on the surface. But actually, if you break into the analysis, it’s a bit more complicated. So first off, these rich people are gonna vote on whatever they feel is going to produce value for them. And the reality is, short-term value is not worth as much as you would think. They look more long-term. Why? Because these markets are traditionally thin. So even if you could do something to temporarily increase the price 10 or 20 percent at a long-term deficit, if you hold 10 or 15 percent of the supply, you can never divest that amount. And if you do long-term damage to the cryptocurrency, you’re actually costing yourself money. You’ll make a little money in the short-term, but you hurt yourself on the back end. So the rational decision-making analysis of what I do if I’m a custodian of the system and I hold a lot of it is a bit muddled. And it’s not just what will maximize my utility day 2 or day 3, but maybe a longer-term view. Second, let’s say you have problems like a 51 percent attack through your plutocracy. You can always fire your stakeholders by a fork. And that’s something you can’t do with a proof-of-work system. See, this is endogenous consensus versus exogenous consensus. So internal versus external. So when your security comes from an external provider, they own all the hardware, and if you fire them by changing the proof-of-work algorithm, you have to kind of rebuild your entire security base from the ground up. But let’s say Bob in a proof-of-stake system has 51 percent of the stake, and the 49 percent minority, which let’s say is much more diverse, is actually doing everything, and Bob just happens to have a pile of coins and he does nothing but he’s malicious. You can just fork the currency, create Cardano 2, and remove Bob completely from the set. Now, what have you done? You’ve removed the malicious actor. The remaining people are honest, and you actually have a much more secure system now, and you know, you haven’t lost any security. You’ve actually gained security from that event. Whereas you could never do that type of a thing with proof-of-work.

Now getting on to the voting side of things, you know, the most obvious way to do voting for changing system level parameters, like inflation or something like that, would be proportional to your stake in the system. And that’s fine to do, but you can create other parameters in the system. For example, you could create the notion of a good citizen in the system, like a reputation system, like I’ve relayed lots of data. Or when your systems are more complicated than just consensus as a service, you could have other things like trusted data feeds or multiparty computation providers or these types of things. Then in those particular cases, you could get points and those could then bias your voting power. So it could end up that the person who has the most stake isn’t necessarily the most powerful person in the system, rather, it’s the most useful person in the system. But that’s a complicated metric to construct. So you have to start somewhere, and the most obvious place to start is where people have the greatest financial incentive to participate in the system for the long-term growth and wellbeing of the system.
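
One hypothetical way to realize that “bias your voting power” idea is a simple blend of stake share and a usefulness score. The formula, the alpha weight and the toy data below are assumptions for illustration; this is not a mechanism IOHK has specified.

```python
# Sketch of a voting weight that blends stake with a "good citizen"
# usefulness score. The blend and the numbers are hypothetical.

def voting_weight(stake, usefulness, total_stake, total_usefulness, alpha=0.7):
    """Convex combination of stake share and usefulness share.
    alpha=1.0 reduces to pure proportional-to-stake voting."""
    stake_share = stake / total_stake if total_stake else 0.0
    useful_share = usefulness / total_usefulness if total_usefulness else 0.0
    return alpha * stake_share + (1 - alpha) * useful_share

# (coins, usefulness points): the relayer and dev out-vote their stake.
holders = {"whale": (500_000, 5), "relayer": (10_000, 400), "dev": (5_000, 300)}
ts = sum(s for s, _ in holders.values())
tu = sum(u for _, u in holders.values())
for name, (s, u) in holders.items():
    print(name, round(voting_weight(s, u, ts, tu), 3))
```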

There’s another thing I like to point out, it’s called the lump fallacy. So it’s a common thing that comes up in economics where people tend to believe that all the wealth ever created has already been created and it’s just about proportions, how much did Bob get versus Bill. And it’s a big misconception. You’ll see it a lot, especially in liberal politics, where people say, well, look how much money these billionaires are getting, you know, proportionate to the rest of the people. And oh god, these evil people, the rich aren’t paying their fair share. Wealth is created. Wealth is created through actions. Wealth is created through productivity. So when you are participating in consensus, it’s not a rich get richer thing. You’re performing a service for the network and the network is paying you to perform that service. And this is complicated stuff. It’s not just the stake. Eventually, it could be a decentralized file system, it can be computation as a service, which is effectively what Ethereum is. It can be relaying huge sums of data; like if EOS does live up to its claims of performance, it could end up moving gigabytes of data every second. Okay, so you’re being paid to provide these services, which is producing wealth. It’s producing value for everybody and therefore, coins are being minted to pay you.

The last point is that the same 100 people who control the wealth aren’t going to be the same hundred people in year 2 and year 3. People sell tokens. You know, if people just held onto their Bitcoins forever, then the original miners would have had much, much more. But the reality is, people run businesses, they tend to sell things. So you’re going to see a lot of rotation of value. Why? Because there’s tons of volatility. You know, so if your coin goes up 5x or 10x, you don’t say, boy, I should hold onto this because I can get, you know, a perpetual two percent return. You say, I don’t think this 5x or 10x is sustainable. I’m going to go liquidate 30 or 40 or 50 percent of my holdings and lock in my 5x or 10x. I’m so lucky. What does that mean? You’ve just diversified your holdings. So in all things, whether it’s a startup or any cryptocurrency, what we’ve seen is a gradual deconsolidation of holdings. Bill Gates, for example, had about 64 percent of Microsoft when he started it. He only has five percent now. He could have held on to that share. He could have never diluted himself, or diluted himself as little as possible, and kept going along with Microsoft. But he realized that diversification matters. And that’s generally what’s probably gonna happen with most people who are involved in cryptocurrencies. As volatility goes up and down, they liquidate, and then you see a gradual deconsolidation which is much greater than the particular inflation rate.

But, you know, these are different things. You know, with proof-of-work, you kind of kick the can down the road and you hide inconvenient truths. It’s inconvenient to the Bitcoin space that 10 pools control more than 51 percent of the hash power, and I don’t know these pools, I can’t be these pools. They say, oh, well, anybody can buy a miner. Yeah, but if it’s patented and it has a very bespoke supply chain and people are using subsidized power, that’s not a fair game. I remember I bought a Butterfly Labs miner. I got it a year and a half after I put the preorder in. And they were testing them, “testing them”. They were mining with my miner until it was no longer profitable, and then they shipped it to me. That’s not a fair game. It’s a rigged game. So you know, what we do is we say, oh, well, there’s this implicit notion of control, and it’s all butterflies and rainbows and people like it. But then what you’ve actually done is given control to a silent, small group. Whereas with proof-of-stake, you have to explicitly have this conversation: who should be in control? How many people? Why should they be in control? What are their incentives? How do we ameliorate certain things? So if the rich are malicious, you have the nuclear option of forking them out. You have other options of introducing metrics to dilute their power, like a proof of usefulness, for example. And then you have social metrics that can be applied as well. And if they’re malicious, the price will go down as well. So generally, the best outcome is to just have them behave following the protocol as it’s written. And I think that’s a much better way of going about things because it’s out in the open. It’s explicit. You’re not going to get it right the first time around, but you can fine tune the system. There’s going to be a lot of opinions on how things ought to be done, whether it be a delegated proof-of-stake or bonded proof-of-stake or a pure proof-of-stake system. And it’s very Darwinian in a certain respect, rather than just pretending like there’s this god-mode that’s somewhere off in the rainbow, and they’re always going to act honestly and behave properly and don’t worry about it, they’ll just be there for you. They actually become barriers to change, as we’ve seen with the Bitcoin network. So that’s my big rant.

Sunny: So one of the things that you mentioned that was actually very interesting, and this is one of the biggest benefits of proof-of-stake over proof-of-work for me as well, is that when people do something malicious, we can just easily hard fork them out, right? Here’s my question. It’s about what happens if they try to abuse the governance system? So I’ve actually watched the whiteboard video. By the way, I love your guys’ whiteboard videos. They’re awesome.

Charles: Which one? The one with Bingsheng?

Sunny: The one with, yes, the one on the governance and stuff. And so one of the things that he mentioned was that the idea is that the voting is going to be in zero knowledge. And so this actually is an interesting question, because we’ve actually had this discussion at Cosmos as well. Should voting be private or transparent? And we’re actually going down the route of… So what do you think is the effect of having private voting, where you actually lose that transparency and potentially that accountability? So let’s say there’s a vote and a lot of people decide to vote themselves a bunch of money in some convoluted way that’s not all that clear. You know, how do we make sure that we can figure out who voted for that, so we can hold them accountable, maybe either through protocol, by hard forking, or at least socially accountable?

Charles: Oh yeah, that’s a great question. So first, it’s important to understand that there’s a spectrum of voting. So you start with things like SRO style, membership-based self-regulatory organizations. And those are very fluid, and things can change in a single meeting. Then you have municipalities, right? And these are your local mayor and county council, and you can go and complain to the county council, they take a vote, and maybe they meet every two weeks or three weeks and they can change things fairly quickly. Then you have your state assemblies, and those move at their own pace, but they’re still somewhat democratically accessible to people. Then you actually have your federal government side of things, and that’s much more bureaucratic. It’s much more difficult to get things done. And then you have, even further, the constitution, where you say, okay, to change the constitution, it’s only been done in America about two dozen times over our entire 200-year history. So that’s a really challenging thing to change. So what you have to do is say, certain parameters in my system live in different spots. So you know, things like user experience, taste, texture, feel, you know, nice to have features like HD wallets, those types of things probably live towards the SRO style side. So they don’t even necessarily require a vote. That’s just you, the developer, making decisions of what’s best for your users, and there’s some ambiguity and wiggle room in the protocol to allow you to have that kind of flexibility. Other things, like for example monetary policy, or how you achieve consensus, or the voting system mechanics themselves, those are much more towards the constitution side. Okay, so that requires a lot more consent, a lot more time. It’s a much more drawn out process to change something like that. So you have to map out that entire spectrum. So that’s the first step.
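
That spectrum idea can be sketched as configuration: each parameter lives in a tier with its own quorum, approval threshold and minimum deliberation window. The tiers, parameter names and numbers below are invented for illustration, not Cardano’s rules.

```python
# Hypothetical governance tiers mirroring the SRO-to-constitution spectrum.
GOVERNANCE_TIERS = {
    "sro":            {"quorum": 0.00, "approval": 0.00, "min_days": 0},    # developer discretion (UX, wallet features)
    "municipal":      {"quorum": 0.10, "approval": 0.50, "min_days": 14},
    "federal":        {"quorum": 0.25, "approval": 0.60, "min_days": 90},
    "constitutional": {"quorum": 0.40, "approval": 0.75, "min_days": 365},  # e.g. monetary policy, consensus rules
}

PARAMETER_TIER = {
    "hd_wallet_support": "sro",
    "tx_fee_constant":   "municipal",
    "block_size":        "federal",
    "inflation_rate":    "constitutional",
}

def can_enact(parameter, turnout, yes_share, days_debated):
    """A change passes only if it clears every bar of its tier."""
    tier = GOVERNANCE_TIERS[PARAMETER_TIER[parameter]]
    return (turnout >= tier["quorum"] and yes_share >= tier["approval"]
            and days_debated >= tier["min_days"])

print(can_enact("inflation_rate", turnout=0.30, yes_share=0.80, days_debated=30))  # False: too fast, low turnout
print(can_enact("block_size", turnout=0.30, yes_share=0.65, days_debated=120))     # True
```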

Second step is, should you have a blind ballot, which is anonymous voting, or should you have transparent voting? Now I’ve seen arguments on both sides. For example, Switzerland, they have a cantonal system, so they’re a true confederacy, and some of the cantons still do public votes. So you can actually go out, and you know, there’s a voting day and people shout out their vote and they go into groups. There are also other public voting systems. Like in America, we have caucuses, for example, and a lot of the caucuses will go into groups, who supports Ron Paul and who supports Romney and who supports McCain and so forth. And those of us who were involved in those elections all had to deal with that. And you get various degrees of outcomes. The problem with a transparent system when you’re talking about money is that, well, let’s say there’s a delegation and one particular delegate ends up getting a lot of influence, let’s say, Andreas for Bitcoin or something like that. And there’s a voting system and everybody likes Andreas. So he ends up getting 30 percent of all the votes. Well, if that’s publicly known, what you’ve basically done is painted a big red target on his back. So now he’s got to look over his shoulder and say, oh shit, you know, I gotta worry about my security. What if somebody kidnaps his girlfriend or his kids or whatever, if he has children, I don’t know. But you know, they try to convert or coerce him, and you know, they know he has 30 percent. So they can use the wrench or use some blackmail or something like that. And so he’ll go and vote, but he’s no longer voting with his own free will or conscience, he’s voting being coerced. Whereas if you have a private ballot, the advantage there is that you can never conduct that attack, because it’s difficult to know, depending on how anonymous you are, who these people tend to be, except for you personally making that decision of whom to delegate to.

So on average, you eliminate a vector of attack for your system. Now, if you are clever about how you sort out system level parameters on that spectrum from the SRO level all the way to the constitutional level, you still gain the benefit of not changing things quickly that ought not to be changed quickly. Like the inflation example that was brought up earlier. That’s a constitutional level change. And if that’s going to happen, that’s something that would probably require multiple votes over a long arc of time. And there’s a lot of deliberation, a lot of opportunity for debate and discussion. And if it does get committed, it’s something that gets committed on a months-to-years time frame. For example, with Ethereum Classic, we kind of did this. We had to make a decision about monetary policy and we also had to make a decision about the difficulty bomb. This process took a year and a half to go through from start to finish. These things are just now being locked in. So any system level changes, whether you have an on-blockchain or off-blockchain governance mechanism, ought to take a heck of a lot of time. It ought to be a very open process with lots of debate. And what ends up happening is the better arguments tend to float to the top over time and that changes public opinion. And the probability of a malicious vote succeeding through all of those rounds ends up being vanishingly small. And you know, there’s a lot of ideas on the best way of doing this. The Venetians had a voting system where they kind of randomly sorted people and had multiple rounds, and the idea is, you’d never know which person to bribe and so forth. So it’s really amazing how much ingenuity has come through. But at the end of the day, it’s also an empirical thing, where you have theory and you think you know what’s going to happen and you think you know who’s going to be rational and how they’re going to be rational, but you just have to launch it and you actually have to see how it works in practice and where it’s gone wrong and so forth.
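
The “multiple votes over a long arc of time” point can be quantified under an independence assumption: if a malicious change must separately survive k rounds, its overall success probability decays geometrically. The per-round probability here is made up.

```python
# If a bad proposal must independently survive k rounds of voting,
# P(success) = p ** k. Independence is an assumption; p is illustrative.
p = 0.30                     # chance it slips past any single round
for k in range(1, 6):
    print(f"{k} round(s): {p ** k:.3%}")
# 1 round: 30.000% ... 5 rounds: 0.243%
```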

There’s also a participatory problem, where these systems assume a rational majority and these systems assume reasonable participation. And the reality is that most democratic processes tend to fall apart when participation falls below a certain threshold. That’s why things like delegation are so powerful, because you probably can’t get everybody to participate, but it’s a lot easier to get people to at least delegate their votes to somebody. You know, the American election, going back to that, if you look at 2000 versus 2004 versus 2008 versus 2012 versus 2016, and you chart the total number of people who were eligible to vote versus the percentage who actually voted, it’s declining cycle by cycle by cycle. But if people had a delegative ability, the conjecture would be that you’d see a relatively high level of participation there. Because the people who are gonna vote are going to vote anyway, but then for that delta between the people eligible and those who vote, most of them would still probably delegate to community leaders or people that they know and trust. And you’d probably get a much richer conversation out of that process. But that’s a conjecture, and the only way to verify that conjecture is empirically. So you have to run a system and take a look at participation and quality of participation.

So we do have some data. We studied the Dash treasury system. And we took a look at the master node counts and which ballots were being proposed, and we wrote a big paper on it. So if you go to IOHK research and look at our library, you can see the Dash treasury report we wrote. And there was some good, some bad and some ugly there. In that, we discovered that there was very little funding diversity when we studied it. Meaning that the same groups of people tended to be the ones submitting the ballots and winning the ballots. Now, that can either be because the ecosystem is still very young and there’s a founder effect, where people tend to trust the internal group of people to get it done and it hasn’t diversified yet. Or it could be an endemic failure of the voting system itself. And really, the only way to, you know, understand that is to kind of look at it month by month and see if you’re getting a trend of increasing diversity or staying relatively stagnant. And if it’s staying stagnant, it’s probably a structural problem with the system.
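
One way that month-by-month diversity check could be quantified is a concentration index over who wins funded ballots. The sketch below uses a Herfindahl-Hirschman-style index with made-up data; the actual methodology of the IOHK Dash report may differ.

```python
# Sketch of tracking treasury funding diversity over time.
from collections import Counter

def concentration(funded_proposers):
    """HHI over proposer shares: 1.0 = one group wins everything,
    1/n = perfectly even across n groups."""
    counts = Counter(funded_proposers)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

# Hypothetical monthly lists of who won funded ballots.
months = {
    "jan": ["core", "core", "core", "mediaA"],
    "jun": ["core", "mediaA", "devB", "meetupC"],
}
for month, winners in months.items():
    print(month, round(concentration(winners), 3))
# A falling index over the months would suggest diversification;
# a flat, high index would point at a structural problem.
```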

Meher: So we’ll put the link to the Dash governance system analysis in the show notes so our listeners can follow up on it. So, Charles, yeah, it does seem like with Cardano, you have done a lot of research on governance and we look forward to what you end up implementing. If I’m understanding it correctly, that governance is slated for sometime in 2019, not 2018?

Charles: Yeah. The big focus of 2018 is smart contracts and decentralization. You kind of have to do things in order. 2017 was, let’s get a product in market. Yay. [Laughs] It’s really hard to get scientists to do that. So you know, how long is a piece of string? So we got Byron out in 2017, and 2018 is, move to specification driven development, fully decentralize the system, and then turn on Goguen, which is our smart contract system, and get all that rolling out. And that’s a huge coordination problem. Then 2019 is about performance and governance. And so basically, take the system, start sharding the system and then turn on the governance components gradually. There’s also a user education component to it. So you need, if you’re going to have an effective governance system, to have leaders materialize. So you need meetup groups to form. You need people to understand the philosophy of the system and why we’re building it, people to understand the underlying technology and develop an intelligent opinion. Or otherwise, what’ll end up happening is you’ll have a beautiful voting system, but nobody actually ends up using it. What they’ll do is just say, what did Charles say, or what did Duncan say, or what did Aggelos say? So I’ll just vote for that. So you need about a year or two of community management and growth and development to create diversity, and once you have diversity, then you’ll actually get a pretty vibrant ecosystem. So it would be counterproductive, in my view, to have a voting system at the moment. It would be a voting system in name only. So 2019 is what we feel. And then if there’s any delays, 2020 is when we do that. We have an additional year specifically for that.

Meher: Okay. So right now, is it correct to say that the focus is on proof-of-stake and having a system in which different stakeholders actually validate the blockchain?

Charles: Yeah. And proof-of-stake is such a hard problem. You know, we got a Cosmos guy here, so you’re keenly aware of it as well. And you know, the problem with proof-of-stake is that you’re trying to take something that ordinarily takes enormous amounts of money and energy and resources and coordination, and reduce it down to something that doesn’t require any of that, essentially synthesizing it, but then you want to get all the benefits. So it’s like saying, I want to eat the cake, but I don’t want to get fat, you know. That’s basically what proof-of-stake is saying. So the first thing we had to understand is, what are we even trying to accomplish? And that’s the first question that was asked. So what is security? What is the ledger? What is a blockchain? You know, this is a basic question. And surprisingly, that question wasn’t answered until we wrote a paper called GKL15, where we defined basically what a ledger is, and we created some security properties for it. Then we had to create a baseline and say, well, does proof-of-work provide that? And the answer is yes. Proof-of-work provides a secure ledger as defined. So you kind of have a basis of saying, this is what we’re trying to accomplish with a blockchain and this is what proof-of-work can do, and yes, proof-of-work is secure. Great job, Satoshi.
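
For listeners following along, that backbone line of work characterizes a secure ledger through two properties, usually called persistence and liveness. The sketch below paraphrases them as test-style checks over node views; it is an informal restatement for intuition, not the paper’s formal definitions.

```python
# Informal paraphrase of backbone-style ledger properties.
# Views are lists of blocks; each block is a set of transaction ids.

def persistent(views, tx, k):
    """Persistence (informally): once a transaction is buried more than
    k blocks deep in one honest view, every honest view that contains it
    that deep reports it at the same position."""
    positions = set()
    for chain in views:
        for pos, block in enumerate(chain):
            if tx in block and len(chain) - pos > k:
                positions.add(pos)
    return len(positions) <= 1

def live(views_over_time, tx, u):
    """Liveness (informally): a transaction given to all honest players
    shows up in their chains within u rounds."""
    for t, chain in enumerate(views_over_time):
        if any(tx in block for block in chain):
            return t <= u
    return False

views = [[{"a"}, {"b"}, {"c"}, {"d"}], [{"a"}, {"b"}, {"c"}]]
print(persistent(views, "a", k=2))                    # True: same position everywhere
print(live([[], [{"a"}], [{"a"}, {"b"}]], "a", u=3))  # True: appeared by round 1
```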

So then the natural question to ask is, under any assumption, realistic or not, can proof-of-stake achieve that same level of security? And that’s step one. And so Ouroboros was published in 2016 and it was a very impractical protocol. When it first came out, it was tightly synchronized, and it had some undesirable characteristics about it, but it was basically a proof of concept for saying that the security models are identical. So they’re both provably secure. Whatever you get from proof-of-work in terms of what it can construct, you get the same thing in terms of proof-of-stake for what it constructs. So that’s a good starting point. So you have theory, and then the next step is, how do we go from theory to practicality? How do we actually take this thing and put it into a real-life system, and something that works? So most of 2017 and about half of this year have been consumed with that practicality question. So you move from a synchronized model to a semi-synchronized model. You move to a model where an attacker can corrupt clients whenever they want. You move to a model where you can bootstrap from the genesis block; you don’t need a checkpoint. You move to a model where you have composable security proofs. You move to a model where, you know, you have a delegation system built in, so if a person doesn’t want to show up, they can delegate their stake to a stake pool. You move to a model where your random number generation is not done with an MPC, but can be done with a random oracle, and so it’s much faster, but it’s still, you know, resistant against grinding attacks and things like that. And it requires you to add a lot more crypto, like VRFs and perfect forward secrecy and these types of things. It’s not an easy task. So the follow up to Ouroboros was another paper called Ouroboros Praos, where we did about half of the heavy lifting, and we’ve just released a paper today called Ouroboros Genesis. It’s on the IOHK website. I also just tweeted it, and we’ll be presenting both of those at Eurocrypt in Israel here in about a week. And this basically seals off most of the practicality concerns. At this point, there are still some lingering things we have to clean up, but we feel this plus a delegation scheme is all that’s necessary for actually having a production proof-of-stake protocol in a system.
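
To give a flavor of the VRF-based approach mentioned here: in Praos-style leader election, each stakeholder privately evaluates a VRF per slot and leads if the output falls below a stake-dependent threshold phi(alpha) = 1 - (1 - f)^alpha. The sketch below substitutes an ordinary hash for the VRF, so it is not verifiable; it only demonstrates the stake-proportional sampling, with made-up parameters.

```python
# Praos-flavored private slot leader election, sketched with a plain
# hash instead of a real VRF (so NOT verifiable; illustration only).
import hashlib

F = 0.05  # active slot coefficient: fraction of slots with a leader

def phi(alpha, f=F):
    """Leadership probability for relative stake alpha; roughly
    proportional to alpha when f is small."""
    return 1.0 - (1.0 - f) ** alpha

def is_slot_leader(secret, slot, alpha):
    digest = hashlib.sha256(f"{secret}:{slot}".encode()).digest()
    draw = int.from_bytes(digest[:8], "big") / 2.0**64  # uniform in [0, 1)
    return draw < phi(alpha)

stake = {"alice": 0.6, "bob": 0.3, "carol": 0.1}
wins = {who: sum(is_slot_leader(who, s, a) for s in range(20_000))
        for who, a in stake.items()}
print(wins)  # expect roughly 20000 * phi(alpha) wins each
```

Because each election is private and independent, some slots have no leader and some have several; the chain selection rule has to tolerate that.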

Now the next step is performance. So where do you go from there? You know, you’ve built this beautiful thing, you can drop it in. Bitcoin would run just fine with it. Any blockchain would run just fine with it and it’ll run forever, as long as we’ve got our economic incentives aligned properly. But you still can’t scale as new users join; it’s still in a replicated system format. So you need to shard, and there’s a lot of opinions on how to do that. So there’s protocols like OmniLedger, for example, or Thunderella and others in the academic circles. And these guys have come up with some concepts, and there’s some engineering ideas, like you guys at Cosmos, you’re doing some things. And Casper certainly has some ideas, as does Plasma. So everybody has their own idea of how we should shard. EOS, for example, I think, put Byzantine agreement on top of DPoS. And it’s less about, can you shard? And it’s more about, what is the tradeoff profile? Usually, what ends up happening is you go from 50 percent Byzantine resistance to a third to a quarter, depending on how aggressively you do things. The other thing is that performance tends to decline very rapidly as you shard, if there are Byzantine actors. So if everybody’s following the protocol and everybody’s absolutely honest, the protocol streams, it’s beautiful. And if you’re Google or Amazon, that’s your normal operations, right? Because you own all the servers. But if you move to an actual Byzantine setting, where people are unreliable or dishonest or trying to break your network, then you have to really worry about performance impacts from malicious actors and the types of attacks that can form. There’s some great research, for example, that was done with GHOST and SPECTRE, which are directed acyclic graph ideas that [indiscernible 0:48:02] and Sompolinsky [ph] came up with. We actually implemented GHOST in Ethereum, showing that while GHOST does improve performance, there are attacks on GHOST that can degrade performance quite rapidly if you’re a clever attacker.
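
That drop from one-half to one-third (or worse) per shard can be illustrated with a quick Monte Carlo experiment: an adversary safely below the global one-third bound can still overwhelm a single shard once the validator set is split. All the numbers below are illustrative.

```python
# Monte Carlo illustration of the sharding tradeoff; numbers are made up.
import random

def p_some_shard_breaks(n=1000, byz=280, shards=10, threshold=1/3, trials=2000):
    """Probability that at least one shard exceeds its Byzantine
    threshold when validators are randomly split into equal shards."""
    validators = [1] * byz + [0] * (n - byz)  # 1 = Byzantine
    size = n // shards
    bad = 0
    for _ in range(trials):
        random.shuffle(validators)
        if any(sum(validators[i*size:(i+1)*size]) / size > threshold
               for i in range(shards)):
            bad += 1
    return bad / trials

# 28% global adversary: globally under 1/3, but with 10 shards of 100
# there's a large chance (roughly half, here) that SOME shard is broken.
print(p_some_shard_breaks())
print(p_some_shard_breaks(shards=1))  # unsharded baseline: 0.0
```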

So you have to understand the tradeoff profile. So that’s the next step in our research agenda after we close out the remaining Genesis and Praos related stuff. And that’s called Ouroboros Hydra, because it has many heads, right? It’s going to be sharded. And at that point, we think we’ll have balanced everything properly. It will have the right tradeoff profile. It’ll kind of have the right security design and so forth. But the nice part about the approach we’ve been following is that every step of the way, we’ve been doing it with peer review and with proper security modeling. So what happens usually when you’re building these protocols is that you want to add another McGuffin, another feature, to your protocol. And what ends up happening is you have to regress and go back and say, oh, am I going to break something? But when you actually have a solid security foundation and good proofs, and everything is composable, when you keep adding things, you don’t usually have those regressions, so you tend to see a lot of research acceleration at the tail. So you pay a much higher upfront research cost to get everything started and get that pump primed. But then once you’re rolling, it just all makes sense. It’s all composable. It’s all modular. It’s really easy to parameterize it. The other thing is that it can work in different settings. Like if you want to go to a permissioned setting, it’s really obvious how to do that, for like a Hyperledger style scenario. But if you want to go to a permissionless setting, there’s also an obvious way to convert the protocol to that. You don’t have to build two completely new protocols for that setup.

So that’s where we’re kind of at with Ouroboros. Big team. There are about 10 scientists who work on it, and they do different dimensions of it off and on, and we’ve written, I think, six or seven papers. I can’t remember how many now. They just keep coming out. I think, you know, if we ever go out of business, at least we can get into the whitepaper business, you know. [Laughs] 10 or 20 of these things every year or something like that. But I’m pretty proud of the direction of the research. You know, the other thing is that this is going to be the age of the academic proof-of-stakes. There’s some very tough competition coming with Thunder Token and Thunderella. There’s very tough competition coming with Algorand and so forth. And these are not protocols written by everyday people. I mean, Silvio Micali has the Turing Award. That’s the Nobel Prize of computer science, and he’s sitting at MIT surrounded by some of the brightest people and a legion of graduate students that would die to be able to work at his venture and build something with him. So the rigor and the standards and the community expectation for what makes a good proof-of-stake protocol, or what your protocol must be able to do, is going to dramatically increase, I think, over the next year or two. Thanks to, you know, tough competition, and we’re excited to be in that running.

Sunny: That’s really cool. When I first read the original Ouroboros paper, I was kind of like: “eh, this isn’t that interesting”. And then Praos came out and I’m thinking, this is actually usable now.

Charles: Right.

Sunny: And so I mean, I’m excited to read this Genesis paper. Can you tell us a little bit about it? I heard you mention that you have a way of bootstrapping from Genesis without checkpoints. That seems like—

Charles: That’s like the holy grail, right?

Sunny: Yeah. Because, you know, my assumption was, once you’re past the unbonding period, unless you have some sort of verifiable delay function, it’s really hard to prevent long range attacks. So could you tell us a bit about how you are doing this?

Charles: I’m going to punt on that one a little bit because we released a video today. So there’s a 45-minute presentation on how Ouroboros Genesis works, and we just released the paper, and we’re going to be doing an actual presentation on this at Eurocrypt in a week. So if I talk about it now, I’m going to front run the Eurocrypt presentation. But basically, you just have to make some assumptions about the nature of the signature scheme and how random numbers are generated. And then within that, you’re able to very creatively construct a model where, when you look at two alternative versions of history, you can calculate which version of history is correct. Proof-of-work is really simple. You know, you just do a weight calculation. You say, ah, this one has more work than the other, so that’s my longest chain. So this just kind of gives you a clever way of doing it, using the magic of crypto. But that particular crypto is a little involved and it’s in the paper. So if you’re really curious about it, watch the presentation, read the paper. If not, wait for the Eurocrypt presentation and then we’ll talk about it a little bit more openly.
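
Without front-running the presentation, the Genesis rule is usually described like this: for forks deeper than the common-prefix parameter, compare how dense the two chains are in a window of slots just after they diverge, rather than comparing total length. Here is a hedged sketch of that idea; the parameters and data shapes are illustrative, not the paper’s formal maxvalid-bg algorithm.

```python
# Hedged sketch of Genesis-style ("density") chain selection.
# Chains are lists of (slot, block_id); parameters are illustrative.

def fork_point(a, b):
    """Length of the common prefix of two chains."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return i

def prefer(current, candidate, k=5, s=10):
    d = fork_point(current, candidate)
    if len(current) - d <= k:
        # Shallow fork: ordinary longest-chain rule applies.
        return max(current, candidate, key=len)
    # Deep fork: compare block density within s slots of the divergence.
    window_end = (current[d - 1][0] if d else 0) + s
    density = lambda c: sum(1 for slot, _ in c[d:] if slot <= window_end)
    return candidate if density(candidate) > density(current) else current

# An honest chain stays dense right after the fork; a long-range attack
# chain built from old keys cannot fill those early slots, so the honest
# chain wins even if the attack chain ends up longer.
prefix = [(0, "g"), (2, "a"), (4, "b")]
honest = prefix + [(6, "h1"), (8, "h2"), (10, "h3"), (12, "h4"),
                   (14, "h5"), (16, "h6"), (18, "h7")]
attack = prefix + [(s, f"x{s}") for s in range(30, 54, 2)]  # longer, but sparse early
print(prefer(honest, attack) is honest)  # True
```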

But we had to do a lot for this paper. We actually had two papers we were running at the same time. We wanted to reprove the Ouroboros paper using something called universal composability. And then we had another team that was working on this bootstrap-from-Genesis idea, which actually was related to our side chains research. And then what ended up happening is we discovered that one needed the other, and so both teams merged, and we went ahead and created this paper, and it was just like a mad dash. It took us three months of hardcore work and lots of revisions, but we got it out. And I think we actually submitted it to Crypto ’18. So we’ll show it off at Eurocrypt, and if it gets into Crypto ’18, we’ll be in Santa Barbara and actually have a dedicated session on it. But look, read the paper, look at the video, and then next week, I’ll talk more about it.

Sunny: Sure. Another question I have about Ouroboros is how you decide between, you know, the chain-based model and the BFT-based model in proof-of-stake. At Cosmos, we’re working on Tendermint, which is a very BFT-focused way of doing it, while Ouroboros and its descendants are all more on the chain-based side. And I remember, you know, I’ve actually had this argument with you on Twitter before, where your claim was that asynchronous networks are impractical and that, you know, weak synchrony is good enough for all real-world situations. So could you explain a little bit of your thought process there?

Charles: Okay. So if you’re electing slot leaders for an epoch and there’s a degree of professionalization amongst those slot leaders, the reality is you’re always going to have some degree of federation. Whether it’s a mining pool or a stake pool, there are going to be, you know, some actors who set up dedicated servers that run 24/7. And in the entire history of the Bitcoin network, it’s been fairly reliable. Blocks are generally produced roughly every 10 minutes. There’s a little bit of variation there, but it’s been basically as expected, and people have been pretty synchronized throughout the entire setup. So it’s more of a practicality argument, where you say asynchrony is nice to have, but first off, you have theoretical results like Fischer-Lynch-Paterson to worry about when you’re talking about asynchrony. And second, are you ever really going to be in a network operating mode for a system like this where that’s going to come up? So for practical purposes, semi-synchrony is sufficient: people may not necessarily show up on time, but eventually they’ll show up within a bound. And that’s what Praos is all about. It’s to say, okay, well, within a reasonable amount of time, they’ll stay synchronized or semi-synchronized, and there’s a way of sorting all of these things out, just like we do with any normal consensus protocol. You know, if you’re super worried about it, we could probably reprove everything in an asynchronous model, but the downside is we’d probably regress a little bit in Byzantine resistance, and we’d probably regress a little bit in terms of performance for the system. But for all intents and purposes, if our goal is to have consolidation around a collection of stake pools for the large network, we anticipate that things will probably be running in a synchronous or semi-synchronous mode. I mean, just to give you an example from Bitcoin: there’s Corallo’s relay network and the Falcon relay network, where the mining pools actually get to see the blocks before everybody else because they want to couple themselves more tightly.
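As a small illustration of the semi-synchrony assumption he is describing, here is a hedged Haskell sketch: messages may be late, but only up to a known bound. The names Slot, acceptable, and delta are invented for this example and are not from the Praos paper.

```haskell
type Slot = Int

-- A block created in slot `sent` is still acceptable at slot `now`
-- if it is not from the future and its delay is within the bound delta.
acceptable :: Slot -> Slot -> Slot -> Bool
acceptable delta sent now = sent <= now && now - sent <= delta

-- For example, with delta = 5, a block from slot 100 is acceptable
-- at slot 103 but not at slot 110.
```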

The other thing is our network model is different. We’re moving from a traditional network model to something called RINA (Recursive InterNetwork Architecture). So from the outside, it kind of looks like UDP going into this black box of madness. But basically, because of that assumption and how we connect these nodes together, and because you actually have a permissioning system thanks to the election system of Ouroboros, you really can create a synchronized private network of stake pool operators or slot leaders that’s permissioned, because they’re elected. They have credentials to prove they belong there. So you can use a different network protocol to guarantee that setup. That’s why I don’t worry about it too much. I mean, it’s a nice thing to worry about from a theory perspective, but at the end of the day, it’s much ado about nothing. The other thing is that you can hybridize these protocols. That’s what EOS did, right? They started with a chain-based DPoS model and they’re moving to Byzantine agreement on top. So you can combine them together if you really want to, and there’s some evidence that that might be a good idea. Algorand also looks like it’s doing this. It started with a traditional Byzantine agreement, added a kind of fast sortition process, and dramatically sped things up. So that’s the best non-answer I can give you to your question. I view it as not a big concern, and if it is a big concern, we have a mitigation strategy for it. But in reality, if our network is consistently running in an asynchronous mode, there’s a more serious problem at hand than the consensus protocol we’re using. It’s a participation problem, which means the incentives are wrong.

Meher: So, Charles, like zooming out a little bit. So you have developed Ouroboros, Ouroboros Praos and now Ouroboros Genesis, right?

Charles: Star Trek, Wrath of Khan references here. Give us Genesis.

Meher: So there are actually a lot of teams in this space working on the proof-of-stake battle. There’s Cosmos, which is using Practical Byzantine Fault Tolerance and other Byzantine Fault Tolerance algorithms invented earlier, and it’s combining Practical Byzantine Fault Tolerance with the requirement that the people who want to come to consensus need to be the coin holders staking coins.

Charles: Right.

Meher: Then there’s Algorand, which we did an episode on, which is also using a different kind of Byzantine Fault Tolerance. I think it’s called Fast and Furious Byzantine Fault Tolerance, invented a decade back. And it’s combining it with some clever cryptography to decide who should be the parties doing the BFT. Then there’s Ethereum, which is a different flavor of proof-of-stake, which is, let’s say, availability-favoring proof-of-stake rather than consistency-favoring proof-of-stake. The thing with Cosmos and Algorand is once a block appears and it’s confirmed, it’s confirmed. You can’t go back. Whereas in Ethereum, it’s more like proof-of-work, where chains of blocks appear and you’re not sure they’re confirmed, but you’ll be sure they’re confirmed after a while. So in this whole ecosystem of projects pursuing different approaches, tell me, what is special about Ouroboros and how do you differentiate it from the others?

Charles: Well, with Ouroboros, you know, everybody always wants to say, oh, my thing is better than everybody else’s thing. I really couldn’t care less about that. The thing is, it’s more about saying, look, we have to get the theory right. We have to get the security foundations right, and we have to make these concepts and ideas accessible to the university environment. You see, we walked into an environment where if you went to a cryptography conference like Crypto or Eurocrypt and said, hey, I work in the cryptocurrency space… I actually did this. I went up to Def [ph] at Crypto and I said, Def, I work in the cryptocurrency space. He grabbed his glass of wine and walked away. He didn’t even say anything. He just walked away. And I was like, what just happened? So the brand of cryptocurrencies is badly damaged in the cryptographer community. Why? And rightfully so: what’s happened is you have all these people coming around making these magical claims about performance and security. They don’t do any of the basic stuff, like build a model, write a proof, clearly state your assumptions, say what you can do and what you can’t do. So the first goal of the Ouroboros agenda wasn’t to go and build the best, fastest, most amazing protocol ever. It was rather to introduce the entire proof-of-stake problem in the grandest possible way to the entire cryptographic community. And we’ve gone to a hell of a lot of conferences: Financial Crypto, Eurocrypt, Crypto, ACNS. You name it, we’ve been there.

And we’ve had hundreds of conversations since, for example, we published the first Ouroboros paper. I think it’s been cited more than 50 times. Seven papers have been derived from it, done things with it, or built on top of it. And we’ve now started a great conversation in the cryptographic community about the design space of proof-of-stake in general. Like, for example, what do the incentives need to look like? If you are going to do delegation, what does delegation look like? How do you do cold staking? You know, how do we ameliorate some of these meta concerns? We haven’t discussed this one, but it’s a big one: how do you handle exchanges? They, on average, hold double-digit percentages of the entire supply of the currency. They don’t own the currency, but they would technically be eligible to participate in consensus in a proof-of-stake style system, right? That’s a big problem. They don’t own it, but they can control it and they can derive value from it. That’s not a concern proof-of-work has. So there are strategies to mitigate that, but you know, these things need to be discussed in a broader context.

So the first major goal of Ouroboros was just to have that conversation and pick best practices, and we were very pragmatic. If there was a better solution, you know, if Snow White came up with something or Tendermint came up with something that we felt was better, we’d take it, cite it and put it in. And there you go. So that’s step one. So anybody using our stuff knows that really smart people have checked it and that it’s gone through a very rigorous process, and it’s kind of created a standard. Step two was, in the process of having that conversation, to try to get a sense of the impossibilities, or the really difficult things to do. Because I’m getting so tired of people posting a paper from Andrew from Blockstream about why proof-of-stake can’t happen. [Laughs] As if it can’t or something like that. I get so tired of that. You know, it’s like, guys, I would much rather, in the academic community, have all of our sins aired, and there are numerous sins, right? We’ve even discovered some. We have a paper on something called stake bleeding that we discovered along this research process. So create a way of explaining what the tradeoffs are and what we’re giving up when we move to this system. And let’s not have that connected to financial incentives or to a cult of personality. Let’s try to have it connected to an objective truth.

So those are two meta advantages of the Ouroboros protocols: that third-party verification and that kind of Festivus-style airing of your protocol’s grievances. Then specifically, the nice thing about Ouroboros is that by design, it’s very modular. So you can go from a permissioned setting, where it kind of looks like PoA or something like that and runs in a mode you would expect from something like BFT-SMaRt, to running an actual global-scale permissionless network. There’s a way of tuning the protocol to behave in both of these settings, with a kind of common core to it. And it borrows a lot of good best practices, like the notion of an epoch and this notion of slot leaders. Great, because it allows you to construct a heterogeneous network stack, as opposed to a homogeneous one, and gives you a permissioning system, and you can really defend yourself against DDoS and a lot of other things as a consequence of that. And there’s a litany of other little basic design principles that we’ve rolled in. The other thing is that we’re very agile in the way we’ve been going about Ouroboros. The team has gone through, I think, six or seven major revisions of the protocol since inception in 2016, and we’re probably going to go through another six or seven revisions over the next year or two as we learn more. And every time we do it, we gain some new dimension, like this bootstrap from genesis, which is a major advancement. I think a lot of PoS vendors are going to be inspired by our work and modify their protocols accordingly to try to capture what we’ve done. So far, I think only Algorand and Ouroboros have this particular property.
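As a rough sketch of that “common core, tunable modes” point, something like the following Haskell configuration captures the shape of the idea. The field names and values here are hypothetical, chosen for illustration; they are not Ouroboros’s actual parameters.

```haskell
-- One protocol record, instantiated for a permissioned deployment
-- and a permissionless one. Purely illustrative.
data LeaderPolicy
  = FixedSet [String]   -- PoA/BFT-style: a known set of operators
  | StakeElected        -- permissionless: leaders elected by stake

data Mode = Mode
  { leaderPolicy :: LeaderPolicy
  , slotSeconds  :: Int   -- how long each slot lasts
  , epochSlots   :: Int   -- how many slots per epoch
  }

permissioned :: Mode
permissioned = Mode (FixedSet ["node-a", "node-b", "node-c"]) 1 1000

permissionless :: Mode
permissionless = Mode StakeElected 20 21600
```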

So that’s more of a meta point, I think, to your question of, well, why is it better? It’s more a question of what the space as a whole demands. You know, if you’re an investor or an external person, the papers are getting too complicated to read, the technology is getting too complicated. There’s too much domain-specific knowledge you have to have to sort fact from fiction. So rather, what you need is trusted third parties, or a trusted third-party process, to verify that the things I’m saying are actually right. And so that’s why peer review is so essential. Second, we need to have better conversations with each other. And the problem is we have commercial incentives not to have good conversations with each other, because if Bob has created his thing and I’ve created my thing and we have competing tokens, Bob doesn’t want to make my token better. Bob wants to make his token better, so he’s going to be closed off unless I adopt Bob’s token. So the academic world is kind of like a fair commons where we can have these conversations and quickly learn from each other and steal from each other to try to converge to a collection of good design principles. And then third, you have to get much better about what your business requirements are for the cryptocurrency that you’re deploying. Who gets to be in charge? Do you want to have finality or probabilistic finality for these types of systems? What do you need for your business domain? Do you need really fast settlement, or is it okay to have much longer probabilistic settlement? You know, what do you actually need in that setup?

And so what we’d rather have is a spectrum of protocols: Ouroboros being one, Algorand being one, Tendermint being one, and others. And then what happens is, once you collect your business requirements for the blockchain that you need, you spin the wheel and it says, ah, this is the flavor that I require for my system. And when you adopt that, the hope is that because they have a common DNA in terms of the rigor, the security and the design, whatever you’re implementing is going to work well for you, the system architect. So I think this is the process that needs to be followed. Designing a consensus algorithm is hard, guys, and it’s not a new field. It’s been around since the 1970s, and there are a lot of people who know how to do this really, really well. What bothers me is when people just go and do it and think they’re experts at it and make outlandish claims. For example, the Hyperledger guys are very grounded, very realistic people. When I go talk to Christian and say, what do you guys use? They say, BFT-SMaRt. Here’s why: the guy has literally written a textbook on distributed systems theory. You’re either reading that or Nancy Lynch’s book or something like that. It’s a good book. Okay. And then you go to the EOS guys, and they claim they’re getting two orders of magnitude more performance than this best-practice, you know, BFT protocol.

And I was sitting here thinking, is the head of the IACR and all of these scientists at IBM incompetent, and a guy with a bachelor’s degree from Virginia Tech has just come up with a system that performs two orders of magnitude better? It’s crazy. And let’s say it does perform at this level. They’re saying they can move blocks around the entire globe in 500 milliseconds. It takes 300 milliseconds just to send a signal around the entire planet. So, okay. And then I say, well, what independent verification do we have of that? Did anybody performance-benchmark any of these things? Has a third-party firm provided it? No, they just say it. Hashgraph says stuff. Everybody just says stuff. IOTA says stuff. There’s no verification that these things that people are saying are actually true. And oftentimes when you deploy them, you discover, oh, there was something I hadn’t accounted for. Either I’ve actually deployed a broken system that’s not secure at all, or my system is secure but it’s a hell of a lot slower than I claimed, right?

Sunny: I’ve heard people claim ten thousand TPS and I’m like, where did you test that? And they’re like, oh, on my local machine.

Charles: Right. That’s not how you test a distributed system. No. You know, so this is my biggest gripe. So the point of the Ouroboros project is to try to separate fact from fiction, to try to federate the PoS problems that we’re having as a community, to try to create a trusted commons which doesn’t have a financial incentive to pick winners and losers but rather is just focused on what’s right and what’s wrong, and to try to have an intelligent discussion about tradeoffs, and an intelligent discussion about things like performance and benchmarking and best practices and so forth. As the inheritor of that, our conjecture is that the output of this process will be a really good protocol for a cryptocurrency. It might not be the best protocol for your cryptocurrency, but for Cardano, we feel that it’ll converge to that particular state. And every project will benefit from our research because it’s all out there. We try to annotate as much as we can, and anybody’s free to borrow anything they want. There are no patents. It’s completely open source. I love Silvio to death. He’s a good friend. Every time I see him, I complain about the patents on Algorand. I think that’s counterproductive for the space. So we’ve chosen to follow an open source philosophy, and that’s what we’re trying to accomplish with Cardano as a whole. So every building block in the system, whether it be an interoperability building block with sidechains, or a performance building block with RINA and Ouroboros, or a governance building block with the treasury system and the voting system, is all out in the open domain. It’s all through peer review. And if we got something wrong and you have a different opinion, that’s fine, change it. But if we got something right, everybody in the world is free to use it, and it makes all of our projects better in a certain respect.

Sunny: Right. I mean, that’s really good to hear, because I’ve heard people criticize IOHK and Cardano, saying you’re using academic pedigree as basically a marketing thing. But I find this is actually a really good reason. It’s a way of reaching out and branching out to the more academic community, and that sounds really useful.

Charles: And Sunny, they don’t need us. That’s what we have to understand. These cryptographers have their own lives. They’ve been around a lot longer than we have. Public key crypto came out in the 70s. So, you know, what we have done as a community is kind of co-opt cryptography’s brand, and it’s really pissing off the cryptographic community. Because they say, look guys, if you’re a full-stack Ruby developer, you are not a cryptographer. And just because you can implement one of these protocols doesn’t mean you can design them and make them secure. And what they’re having is these acid flashbacks to the 70s, when the state of cryptography was that anybody who knew how to write code was implementing their own crypto and violating every basic principle, you know. Security by obscurity, and they weren’t getting proper random numbers. It’s just horrific when you actually look at these things in practice. It’s not elitism, it’s just science. I mean, it’s kind of funny: in everything else, we’re okay with professionalization. You would like your doctor to actually be certified, and you’d like your doctor to have actually gone through residency and proper training if he’s going to perform an operation on you. But then in cryptography, it’s totally okay for somebody to have no professional training or knowledge, but then go implement something that’s going to keep you private and secure and prevent, you know, the Iranian government from knocking on your door and black-bagging you because they discover you’re gay or something, or they discover you’ve been using illegal cryptocurrency, or the same for China. Yet it takes an equal amount of training to become a cryptographer as it does to become a doctor. In fact, in some cases, more. You know, you’re really at it into your 30s. So it’s just dreary. And I think what we need to do as a community is respect that there are people who came before us, respect that these are very hard problems, and respect that these problems are not going to be solved in one grand paper by one person. They’re going to be solved in stages, as a community, over a long arc of time, as we build it up like normal computer science.

Sunny: Right. Basically, we have to be able to talk the talk so we can get the ear of the people who’ve done it. And I’m going to start my own saying, which I’ve been pushing ever since the NEO fiasco: just like we already have “don’t roll your own crypto”, don’t roll your own consensus algo.

Charles: Didn’t the network break when one node broke? That’s Byzantine tolerant?

Sunny: Yeah, there was some weird stuff going on. [Laughs] So another question I have about Ouroboros. One thing I’ve always had some trouble understanding is that in a lot of proof-of-stake algorithms, like Snow White, Ouroboros, and especially the DFINITY stuff, there’s always a huge focus on randomness, which I never quite understood. So for context, in Tendermint, every block has a proposer and we have a deterministic round robin. I know who the proposer is one block from now. I know who it’s going to be five blocks from now. I know who it’s going to be 2000 blocks from now. And you know, maybe it has something to do with the fact that Tendermint distributes rewards equally to all the validators, and because of the BFT nature, all the validators are participating in it. But to me, you know, I understand randomness is nice. I think the biggest thing it can help with is DDoS prevention. But why the huge focus on randomness?

Charles: Well, it’s partly legacy, partly practicality. If you do a literature review of cryptographic attacks, of where systems have been broken, almost always there’s some sort of random number issue at the end of the rainbow, where we screwed up somewhere and some bits got leaked or something. There’s a bit more determinism than we’d like to admit. So cryptographers are professionally paranoid about how clean the randomness is at the base. The other thing is that there are common best practices that exist, like you can use MPC, for example, to develop randomness. Schoenmakers’ scheme from Crypto ’99 is one. We made a modified version of it that’s linear time, called SCRAPE. But in general, if you say, okay, the security assumptions in my proof rely on, you know, a pure source of randomness, then your system is not provably secure unless you have that. So you have to put a lot of work into your paper, and the cryptographic community holds you to a very high standard to prove that you’ve done that. That’s why the amount of effort put into explaining it in the paper almost seems disproportionate: it’s just a community expectation requirement. In a more practical sense, there was a litany of attacks in early proof-of-stake, like grinding attacks, where people could bias their chance of winning just by careful selection of things. And so you would like to make sure that you have ameliorated that. It’s a historical problem in the space. But you know, again, it depends on your network topology. If you’re DPoS, for example, and everything is 21 nodes or 101 or whatever the quorum set is, and you kind of know the order they’re doing this in, whether it’s a round robin or not doesn’t really matter as much. There’s some question there, but not really. Whereas if you’re actually going to elect a true committee of people proportional to their stake, you have a different need for that.
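To illustrate the difference Sunny is pointing at, here is a small Haskell sketch contrasting a deterministic round robin with a stake-weighted random election. The toy StdGen generator below stands in for the protocol’s randomness; a real protocol would derive it from an MPC beacon or a VRF, and these function names are invented for this example.

```haskell
import System.Random (StdGen, mkStdGen, randomR)

type Validator = String

-- Tendermint-style: a deterministic round robin, fully predictable.
roundRobin :: [Validator] -> Int -> Validator
roundRobin vs height = vs !! (height `mod` length vs)

-- Ouroboros-style intuition: elect each slot's leader at random,
-- weighted by stake (a "follow the satoshi" style draw).
weightedElect :: [(Validator, Integer)] -> StdGen -> (Validator, StdGen)
weightedElect stakes g =
  let total   = sum (map snd stakes)
      (x, g') = randomR (1, total) g
  in (pick x stakes, g')
  where
    pick n ((v, s) : rest)
      | n <= s    = v
      | otherwise = pick (n - s) rest
    pick _ [] = error "empty stake distribution"

-- Example: weightedElect [("a", 60), ("b", 30), ("c", 10)] (mkStdGen 42)
```

With the round robin, anyone can compute the proposer thousands of blocks ahead and target them; with the weighted draw, a future leader is unknown until the randomness is revealed, which is the DDoS point, and it is why the quality of that randomness carries so much of the security argument.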

The other thing is, if you build a source of randomness carefully within your protocol, that becomes a cryptographic beacon that you can reuse for a collection of other activities. So let’s say you have smart contracts that require a source of randomness. It’s a horrifically bad idea to have people roll their own source of randomness within a smart contract. In fact, I think there was a paper out of Cornell that did an analysis of the RNGs in smart contracts, and they said these things failed miserably. Or it might have been out of UIUC; I can’t remember which group did it. So it’d be nice, as an API, to say, hey, we have a beacon built into the protocol. And because we’ve done a really good job of making that as pure as possible, it becomes a public utility that the blockchain provides in addition to consensus, one that can be reused for other building blocks, whether that be lotteries or player-matching protocols or these types of things. And you know that it’s a fair source of randomness, which is very valuable for all kinds of cryptographic applications.
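As a sketch of that “beacon as public utility” idea, a contract could ask the chain for the epoch’s randomness instead of rolling its own. BeaconValue and drawWinner here are hypothetical names invented for this illustration; they are not a real Cardano API.

```haskell
import Data.Bits (xor)
import Data.Word (Word64)

-- The beacon output would come from the protocol's own randomness
-- (e.g., the MPC process mentioned above); Word64 is a stand-in.
type BeaconValue = Word64

-- A lottery derives its winner from the chain's beacon output, mixed
-- with a contract-specific tag so different lotteries in the same epoch
-- get different draws. Assumes a non-empty player list.
drawWinner :: BeaconValue -> Word64 -> [String] -> String
drawWinner beacon tag players =
  players !! fromIntegral ((beacon `xor` tag) `mod` fromIntegral (length players))
```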

So it’s kind of a mixed bag. Part of it is legacy, because when it wasn’t one of the focus things, it went horrifically wrong. Part of it is a community expectation, where literally your paper sometimes won’t survive peer review if you don’t spend enough time talking about it and doing things with it. Part of it is a public good, and part of it is a structural property, depending upon how your quorum is set up and how you’re voting on people. And systems like Ouroboros or Algorand, for example, do require a pure source of randomness, or a really good source of randomness, to operate with proper security.

Meher: Okay. So one of the interesting things about Cardano is that most of your implementation is in Haskell. So you’ve chosen Haskell. The only other project that has chosen a functional language is Tezos, right? With OCaml.

Charles: Oh no. Kadena also implements their blockchain in Haskell, and Digital Asset Holdings does all their stuff in Haskell too. But that’s on the permissioned side of things. And Barclays’ innovation group is all in Haskell as well. I think Chris Clack is there. So if you look for it, you can find it. But you’re right, it’s not a common language choice.

Meher: So why did you make that choice? What’s the advantage of Haskell here?

Charles: Yeah. So really, when it comes down to it, there’s first the imperative-versus-functional war. And I think over time, people are starting to concede that even if you’re on the imperative side, like you love your Java and you’re just never going to stop loving your Java, there are some things that make sense to do in a functional style. That’s why lambdas came to Java in Java 8, right? And so programming in general is becoming increasingly more functional, because we worry a lot less about resources and a lot more about things like concurrency, correctness of behavior, and conciseness. Because, you know, these repos are just getting so big and there’s so much going on. So I like functional languages because I just believe they’re easier to reason about. I believe it’s easier to test implementations, and I also believe it’s easier to build distributed systems in functional languages. And actually, if you look at a lot of the Internet giants, like for example Netflix, their entire backend of microservices was in Scala. If you look at Facebook, large chunks of their systems are running in functional languages, or the components that are mission critical have some sort of functional component. Same for Google in a certain respect. So it’s less a decision of, well, okay, should we go functional or imperative. For mission-critical software that’s distributed, that requires lots of testing and potentially verification, in my view, it is functional. It’s more a question of which functional language you pick, and there’s a spectrum of functional languages. You have hybrid languages like Clojure and Scala and F#, where when you want to be imperative, you can be imperative. Scala is great at that. Whereas if you want to be functional, you can be functional. Then you have purer languages. OCaml and Haskell are very pure in that respect. And then you have even purer languages like Idris, which is a dependently typed language that is just mind-bendingly hard to write code in.

So why we chose Haskell: we have access to the royalty of the Haskell space. One of the inventors of the language, Phil Wadler (there was a committee that created it), works for IOHK. So when a guy who created the programming language happens to work for you, it’s probably a pretty good idea to at least consider that language as a viable candidate for your system. Second, we have access to pretty much all of the Haskell consultancy firms: Well-Typed, Tweag, and we’ve talked to FP Complete and so forth. So all the people who are acknowledged to be the top percent of the space in terms of development ability either work for us, consult for us, or are people we can talk to on a regular basis. Third, I think there’s huge value in going boutique as opposed to going mainstream. We’re one of the largest Haskell projects in the world, and actually one of the most valuable and prominent Haskell projects in the world. So if you go to the Haskell Reddit and ask, what is Cardano? Everybody on the Reddit will know it as a project. So basically, a lot of the people in the Haskell space like us and know us, and it means we have a higher probability of getting independent contributors from the Haskell community to come in and, you know, eventually, over time, commit code, read our code, and give us feedback and so forth.

Another reason is that we’re following formal methods. So we start with a formal specification, which we’ll be releasing in a few weeks for our wallet backend. And if you want to prove that you’ve actually correctly implemented code, you really need a functional language to do that. You can do it with OCaml and Coq, or you can do it with Isabelle and Haskell. There are differences of opinion about what’s better, but you need some flavor of that. So we don’t really have an option there. It would be damn near impossible to do it with JavaScript or Java or something like that. It’s really, really, really hard. Another thing is conciseness. It’s amazing. Just to give you a sense of the numbers: our Mantis client for Ethereum Classic is only twelve thousand lines of code. Compare that to the C++ Bitcoin client, which is over 100,000 lines of C++ code. So it’s about a tenth of the size for something that does more: a virtual machine and all this network stuff and things that are more advanced than Bitcoin. So you have a more sophisticated protocol and you use ten times less code. What does that mean? It means that just by lines of code, there’s much less to do, there’s much less to think about, and it’s much easier to write good test suites, get test coverage, and reason about the code. So conciseness has a huge maintenance and technical-debt advantage, in my view, over verbosity. And I really like that.
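One payoff of that conciseness, the cheap testing he mentions, looks roughly like this in Haskell: a one-line executable property checked against a hundred random cases. The applyTx and rollbackTx functions here are hypothetical wallet operations made up for the example, not IOHK’s actual wallet API.

```haskell
import Test.QuickCheck

newtype Balance = Balance Integer deriving (Eq, Show)

-- Hypothetical wallet operations for the sake of the example.
applyTx :: Integer -> Balance -> Balance
applyTx amt (Balance b) = Balance (b + amt)

rollbackTx :: Integer -> Balance -> Balance
rollbackTx amt (Balance b) = Balance (b - amt)

-- Executable spec: rolling back any transaction restores the balance.
prop_rollback :: Integer -> Integer -> Bool
prop_rollback amt b =
  rollbackTx amt (applyTx amt (Balance b)) == Balance b

main :: IO ()
main = quickCheck prop_rollback
```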

And then there’s a lot of Haskell-specific stuff that we really enjoy. Haskell is probably the most advanced functional language, because the community has invested an enormous amount of time into making GHC and other things really advanced, and they’ve put a lot of cool things into the type system. The whole concept of the monad is really easy to work with, and it gives you a lot of tools for modeling concurrency and distributed systems theory. There are also some beautiful things like Cloud Haskell, for example, which takes all the Erlang goodness, like OTP, and brings it into the Haskell space. So another language you could look at is Erlang or Elixir, for example. I think Aeternity is using Erlang, and that’s a great language for building a cryptocurrency. And here you don’t have to make any compromises: we have Erlang-style stuff we can pull into Haskell as well. So from a correctness, a conciseness, a testing, a community perspective… oh, one last thing: personnel. If you hire a Haskell developer, they have a master’s degree in computer science or a PhD, or at the very least, a lot of professional experience. No one starts with Haskell unless they go to Edinburgh or Oxford or some really good place. Most people start with an imperative language, and then professionally, out of frustration, they find functional programming and there’s something they like about it. So if you hire a Haskell developer, it’s a beautiful filter that gives you access to a much more experienced group of people who are more mathematically oriented. They have an easier time reading formal specs, they have an easier time reasoning about things, and they tend to be a little bit grayer in the beard. [Laughs] So we wanted to have that added filter in our personnel, so we could have smaller teams that are more experienced and smarter. And overall, I think that’ll produce better output for our process. But you know, we do more than just Haskell. We also write Scala code, and we write a lot of JavaScript code as well. The entire Daedalus frontend is written in JS, and we try to make that immutable where we can, but it’s still an imperative approach. And that’s a separate team.

Sunny: I mean, I think the formal verification focus is really great. I’m a huge fan of Haskell as well for the same reason. And especially what you guys are doing with the formal verification of your wallet, because I don’t think I’ve ever seen anyone do that before. That would be really cool, especially because that’s usually one of the easiest attack vectors on these systems.

Charles: Right.

Sunny: Another thing. Could you tell me a bit more about the K Framework? Because I know this is something you guys are focused on a lot, and it’s a very cutting-edge topic that I don’t think many people are even aware of, let alone understand. So could you speak a little bit about what made you decide to focus on this? No other blockchains seem to be doing it; there seems to be a huge shift towards WebAssembly everywhere, right? So why the K Framework?

Charles: Right. Okay. So first, you guys should really have Professor Grigore Rosu on your show. He’s based at the University of Illinois. Wonderful guy. He runs Runtime Verification, he’s worked at NASA, and he’s done all these really, really cool things. But he’s the creator of the K Framework, and he works very closely with IOHK. We actually have 19 people on contract full time for this thing, so it’s not an insignificant pool of resources, and they do amazing work. Okay. So what is K all about? K is kind of a meta-language: it’s a language for building programming languages. Languages have syntax and they have semantics. Syntax is your symbols and your letters and these types of things; semantics is, when you chain them together, what the hell does that actually mean? Now, it’s a formal language, so it has to be ambiguity-free. And what usually happens is the programming language designer will go ahead and write some document with all the semantics written down in mathy-style language and say, here are your language semantics. So if you encounter a statement in the language, here’s how you’re supposed to interpret it. And then the developer will go implement the language, and hopefully, if they read the spec correctly, they should be one to one: for any statement that you read, there should be semantics to cover it. Now in practice, this has not historically been the case, even in languages like C. Especially for languages like C.

So what K is all about is saying: instead of writing the operational semantics of your language in math symbols that you put on paper, write it as code in the K Framework, in a special type of markup. And then K can actually build your language for you. So you have a correct-by-construction implementation of the language, plus all your tooling. Now, where it gets really interesting is less about the construction of the language. That’s really fascinating from a PL perspective and a correctness perspective, and it’s great for research. What gets fascinating is translation. What K can do with something called semantics-based compilation, which is something we’re working on with RV very closely and it’s a lot of work, is take a K-defined program and translate it into another K-defined program. So let’s say you write the K semantics of Java, which has been done, and you write the K semantics of C, which has also been done. Hypothetically, using semantics-based compilation, you can take a C program and translate it into a Java program, and it runs just like the C program ran.
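A toy Haskell analogy may help here (this is an illustration of the concept, not how K itself works): if two languages’ semantics are both given over the same domain, a meaning-preserving translation between them can be checked against those semantics.

```haskell
-- Language A: a tiny expression language.
data ExprA = LitA Integer | AddA ExprA ExprA

-- Language B: a tiny stack machine.
data InstrB = PushB Integer | AddB

-- Both languages are given semantics in the same domain: Integer.
evalA :: ExprA -> Integer
evalA (LitA n)   = n
evalA (AddA x y) = evalA x + evalA y

evalB :: [InstrB] -> Integer
evalB = go []
  where
    go (n : _) []                 = n
    go st (PushB n : rest)        = go (n : st) rest
    go (y : x : st) (AddB : rest) = go (x + y : st) rest
    go _ _                        = error "stack underflow"

-- The translation is correct when, for every expression e,
-- evalB (compile e) == evalA e.
compile :: ExprA -> [InstrB]
compile (LitA n)   = [PushB n]
compile (AddA x y) = compile x ++ compile y ++ [AddB]
```

Here the compile function is hand-written and the shared semantics is what lets you check it; the promise of semantics-based compilation, as Charles describes it, is that the translation can be derived from the semantics rather than hand-written.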

Okay. And what’s interesting then is that this means you can build the ideal, perfect virtual machine for your cryptocurrency. In this case, it’s IELE, the virtual machine that RV designed for Cardano. And then for interoperability, all you have to do is go and write the K semantics for lots of programming languages, and the SBC stuff will just translate those languages right to your perfect VM. So this helps you a lot. Because, you know, the whole argument for WebAssembly is not that it’s the optimal thing for cryptocurrencies; it’s that a lot of people are working really hard to make it as interoperable as possible with as many programming languages as possible, and they have to do a lot of work there. But the thing is, it’s not fine-tuned at all for cryptocurrencies. So what you can do is build something that’s fine-tuned for your application, and then your interoperability strategy is just to go and spend a few weeks writing semantics down, and then you’re done forever. You just append those to the blockchain. And if you’re curious to see what they look like, just google Runtime Verification GitHub, or excuse me, K Framework GitHub, and you can actually see the semantics for Java and the semantics for JavaScript and things like that.

And here’s the other cool thing. Let’s say you version your virtual machine. You go from version 1 to version 2. Ordinarily, when you go to version 2, you have to update all the compilers. So you kind of have a maximum threshold of languages you can comfortably support, because when you update your VM, if you have a thousand languages, you have to update a thousand pieces of code to get all these things to work with your new version. With the K Framework, you don’t do anything. You just update the semantics of your VM, and then the SBC stuff sorts it all out for you. So you can support unlimited languages, and you don’t do any additional work whenever you version your system and upgrade your system and so forth. But guess what, here’s the best part. Let’s say, for the sake of argument, that WebAssembly wins out. We could write the semantics of WebAssembly in K and support that as well, and then translate to WebAssembly, or from IELE, if we wanted.

Sunny: I think I’ve already seen that. I think it already exists.

Charles: They are actually working on WebAssembly semantics, and we’re also building a K LLVM backend so that we can get basically handwritten-code performance from the machine-generated code. So there’s a lot of theory, a lot of things involved here. It’s a deeply involved project. That’s why there are so many people working on it. I think all of them have PhDs or tenure, or are close to it. And there’s about ten years of computer science research that’s gone into this particular framework. And it’s actually already been used in practice. It’s a practical product. Runtime Verification, for example, works with NASA and Boeing, and they do verification work with these organizations. Not small companies, and not low-stakes settings. So we’re really excited to try to bring this into the cryptocurrency space. Already, it’s given us the ability to be very pragmatic. We wrote the operational semantics for KEVM, so we actually have Ethereum semantics, and when we launch our testnet, we’re going to actually launch an Ethereum testnet alongside the IELE testnet. So if you’re a Solidity developer and you have web3 and Vyper and all this other stuff, guess what? It’s going to be one-to-one compatible. It’s just as if it were an Ethereum node. But we didn’t actually have to implement that virtual machine. It’s actually implemented by the K Framework. So when we launch our testnet, we’ll actually have a correct-by-construction VM built right from the semantics. Pretty cool stuff, no ambiguity. It passes all the Ethereum test vectors, and it’ll be the same for IELE.

Later on, we’ll start looking at things like proof-carrying code and hooks in the VM that allow formal verification to be much better. For example, RV wrote a formal semantics for the ERC20 token standard. And this is kind of a vogue topic right now, because some of these ERC20 tokens aren’t necessarily implemented completely correctly, and that creates problems. So wouldn’t it be cool to actually have an artifact with your ERC20 deployment that verifies that you followed the specification? So you get a proof of correctness with that, and you know that your ERC20 is right. You know, if you have billions of dollars of value behind something like your ICO, it’s probably a really good idea to verify that that token is correctly implemented. And these are the kinds of things that are going to be really easy to do, in our view, when IELE comes out, because we’ve custom-built the VM to accommodate that, but we don’t have to sacrifice interoperability.
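To give a flavor of what an executable ERC20 “specification” obligates, here is a hedged Haskell sketch of one such property, conservation of total supply. RV’s actual K semantics of ERC20 is a complete formal specification; the transfer function and types below are simplified stand-ins invented for this example.

```haskell
import qualified Data.Map.Strict as M

type Addr   = String
type Ledger = M.Map Addr Integer

-- A simplified transfer: fails on negative amounts or insufficient funds.
transfer :: Addr -> Addr -> Integer -> Ledger -> Maybe Ledger
transfer from to amt l
  | amt < 0                          = Nothing
  | M.findWithDefault 0 from l < amt = Nothing
  | otherwise =
      Just (M.insertWith (+) to amt (M.adjust (subtract amt) from l))

-- One spec obligation: a successful transfer never changes total supply.
prop_supplyConserved :: Addr -> Addr -> Integer -> Ledger -> Bool
prop_supplyConserved from to amt l =
  case transfer from to amt l of
    Nothing -> True
    Just l' -> sum (M.elems l') == sum (M.elems l)
```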

Now, here’s the other really cool thing. Let’s say you want our virtual machine to support your language. Eventually, what you’ll be able to do is write the semantics of your language in K and then issue a special transaction to embed it into the blockchain itself. Then, as a developer, here’s how your development experience looks: you write your contract in that language, and you put a header in it that looks up that particular language on the blockchain. If the semantics are there, it pulls them and then uses the SBC mechanics to translate your contract to run on IELE. You don’t have to talk to me. You don’t have to say, Charles, can I get support? Or go and build a complicated compiler or anything like that. You just have to write the semantics of your language, which you already have to do if you’re creating a new language, for example. But now there’s a more rigorous, formal way of doing that, where you know these things are there.

And there are a lot of other things, like debuggers and all this other tooling that will be interoperable. So K is a great project. The K Framework is just a phenomenal piece of work, and it’s being incubated at a major computer science institution, the University of Illinois Urbana-Champaign. I think it’s in the top five of all CS schools in the world, or at least in the United States. It’s right up there with MIT and Carnegie Mellon, and it’s known for formal methods and PL. This is a big hub for it. So it’s got the right balance of practicality and theory, and there’s definitely the right team working on it. And if you guys want to know more about it, Grigore is a wonderful guy to bring on, and I think he’d be able to do far more justice to K than I could.

Meher: That seems like a powerful invention to adopt for smart contracts.

Charles: Right. Because at the end of the day, developers are going to want to write contracts in the languages they want. It’s kind of funny when Joe or any of these other guys from Ethereum say Ethereum has won, it’s inevitable, you know, we have all the developers. It’s like, you have all the Solidity developers, for the language you created yourself, but how many Java developers do you guys have? How many C++ developers do you guys have? The vast majority of people who write code are not writing code for your platform. So you need to be pragmatic and create a way to bring those people into your ecosystem. And that should not be: use a different programming language and throw away all the things you’ve come to know and love in your entire career. It should be: bring as much of what you have into our system, and it should just work. Now, that doesn’t mean it’s going to be secure. It doesn’t mean it’s going to be practical. It doesn’t mean it’s going to be performant. It doesn’t mean it’s going to optimize gas costs. All of that is facts and circumstances. But it says that you can use the stuff you’re already familiar with, and then you can have a conversation about those other questions over time as those communities emerge.

Meher: Cool. So now we have a guest idea: Grigore Rosu. We’ll have him on Epicenter and discuss the K Framework. Charles, thanks a lot for the great conversation over the past hour. Whenever we invite you, we learn a lot of things we never knew about. I remember the last time you came on the show, before the show you showed us your collection of books, and that was mind-blowing in itself. That should be an episode, you know: Charles’ collection of books.

Charles: I don’t know if I got this one out last time. Did I? This is actually one of my favorite math books, something I read as an undergrad: Naive Set Theory by Paul Halmos. Did I show that one off last time?

Meher: No, no. That’s new to me.

Charles: So it’s like, if you ever get serious about math, you have to take set theory and mathematical logic. Anyway, Halmos wrote this as kind of a primer to help you learn about things like Russell’s Paradox and Banach-Tarski and, you know, how the natural numbers are constructed and so forth. Halmos is one of my favorite authors because he followed what was called the inductive book method. Basically, what he would do is write the first chapter, then go and write the second chapter and go back and rewrite the first chapter, then write the third chapter and go back and rewrite the first and second chapters. So if you ever read a book by Halmos, the first chapter is amazing. It’s like the best thing you’ve ever read, and you’re like, wow, this guy is such a great author. But then something happens and the quality tends to decline [Laughs] chapter over chapter. The last chapter is terrible: what the hell is this guy talking about? But it’s a wonderful book, highly recommended if you ever want to know about set theory and the Peano axioms and things like that.

Meher: Cool. So we look forward to having you again, perhaps when you have the next release of Cardano. I think that’s the Shelley release. We’ll probably invite you back and talk more about your smart contract system and other elements of Cardano that we couldn’t touch on today.