Amaury Séchet

Bitcoin Cash

Amaury Séchet is the lead developer of Bitcoin ABC, the largest client for the Bitcoin Cash blockchain. Amaury first got started with Bitcoin in 2010 and closely followed the Bitcoin block size debate as it progressed through the early years of Bitcoin. Predicting the eventual failure of SegWit2x, Amaury was part of the original team that helped coordinate the Bitcoin Cash hard fork, timing it with the activation of SegWit on the main Bitcoin blockchain. We discuss with Amaury the roadmap for Bitcoin Cash, especially with regard to their approach to scalability. We cover many of the novel features the Bitcoin Cash development teams are innovating on, such as Canonical Transaction Ordering and Avalanche Pre-Consensus, and also cover some of the juicier drama that plagued the Bitcoin Cash community in late 2018, leading to the split-off of Bitcoin SV.

Topics we discussed in this episode
  • Block Size Debates in Bitcoin
  • Origins of Bitcoin Cash and the Fork
  • Year 1 Technical Development of Bitcoin Cash
  • Bitcoin ABC vs Bitcoin SV
  • Future Roadmap
Sponsored by
  • Microsoft Azure: Deploy enterprise-ready consortium blockchain networks that scale in just a few clicks. More at aka.ms/epicenter.
Transcript

Brian Fabian Crain: So we’re here with Amaury Séchet, who is the lead developer of Bitcoin ABC; he’s been working on Bitcoin Cash for a long time. Of course, Bitcoin Cash is something that we’ve never really talked about on this podcast, even though it’s been around for quite some time and at some point it took up so much mind space in the cryptocurrency world. There are still a lot of interesting things going on within Bitcoin Cash, and we are really looking forward to diving into both the history and the technological differentiation that has come up in Bitcoin Cash. So thanks so much for joining us, Amaury.

Amaury Séchet: Thanks for having me.

Brian: I remember we spoke for a while about a year ago, and one of the things I remember is that you talked quite a bit about how you got into Bitcoin originally, which was very early on. Do you mind sharing your early exploration of Bitcoin?

Amaury: Yes. Well, first, I was very interested in digital currency before it existed. This was something that was interesting to me, and I discovered Bitcoin in late 2010, I think it was November or December. And so that was kind of the first incarnation of that idea that was actually working. So I was very interested and started following what was going on, and then in 2012 or so it started growing very big, so I was like, okay, it’s not just me seeing something interesting there. It seems that there are a lot of people in the world starting to catch on to that idea. So around 2012 is when I had the realization that it would become very, very big.

Brian: And at the time, were you just watching it from afar, or did you start working on some little things or exploring different aspects of the code? In what way did you engage with Bitcoin then?

Amaury: I was more interested in the economic aspect of it to begin with, so I didn’t dig into the code right away. I looked into the code later, because in the early days it seemed that there was a team of developers that were doing just fine, so it’s not like there was much of a need for me to get involved in the code. But I probably started to look into the code in 2015 or so.

Sunny Aggarwal: I see. So instead of jumping in full time, I know when we were discussing you mentioned you spent a lot of time at Facebook during that period, from 2010 until 2015 or so. Was Facebook at all interested in what was going on, or was this just a side interest of yours? And how did your time at Facebook impact the way you look at blockchain development?

Amaury: Okay. So the main problem for a company like Facebook is that right now there are many open questions when it comes to scaling blockchains. And so if a company like Facebook, which for instance has a payment system within their Messenger app, were to say, okay, we enable Bitcoin in Messenger next week, it would be a giant disaster, because there would be so many people using it at some point. Some engineers at Facebook were interested in Bitcoin, and an engineer I knew at the time even wrote some code to support it, but it was not scalable enough for it to make a lot of sense for Facebook to adopt it. So that was a bit of the situation. My experience at Facebook I think was useful in many ways. First, I worked in the growth department of Facebook. What is growth at Facebook? It’s people building technology to improve Facebook so that more people use it, to simplify a lot. That gave me a lot of insight into how to make a product that people want to use and how to grow it, from a technical standpoint but not only that, and so that was useful to bring too. I think it gives me a different mindset than most people may have in the space.

Brian: Recently there’s been lots of news that Facebook is starting a serious blockchain effort, they’re actually hiring some teams, and supposedly it’s a substantial effort. Now that you have some distance and you don’t work at Facebook anymore, you can freely share your opinion. What do you think? What’s your expectation about what Facebook will end up doing with blockchain?

Amaury: I don’t have any particular insight, because as you mentioned I’m not working for Facebook anymore. What I would expect is maybe something that is very Ripple-like to settle payments, because Facebook has a lot of international payment systems in there and right now they rely on third parties to settle, PayPal and Venmo and this kind of thing. So if they can develop something that is similar to Ripple it could help their system quite a bit, and I think this is what they are doing, but I don’t know any better than anyone else.

Brian: So that’s more of an internal enterprise product than a consumer-facing thing.

Amaury: I would expect so, yeah, because Facebook doesn’t really have this culture of let’s build an alternative currency to subvert the system or whatever. This is not something that is built into the DNA of a company like Facebook. So I would be very surprised if that were what they were doing.

Brian: Yeah, absolutely. So you mentioned that in 2015 you started getting seriously involved in the Bitcoin space, and I guess that’s also when the debate around the block size was really heating up, and of course on this podcast we did countless episodes on it. So what triggered this involvement back then, and how do you remember the block size debate?

Amaury: Okay, so it’s very interesting because I actually had a small intervention in the community in 2012 or something like that. So for people to know, I was there very early on, but for most of the time I was keeping a very low profile, and the reason is one I think most people in the space can understand. This is a very subversive technology, and so it’s not always the best idea to attach your name to it too early on. I think this is why people like Satoshi chose to stay anonymous, and I was kind of in the same mindset. But I still had some interaction with the community in 2012 around the block size, because that was kind of my last question. At that time I saw the technology is there, it is working, it has the potential to be big, but there’s this block size stuff, and it was very clear to me at the time that if the block size stuff stays, then eventually we’re going to run into the limit and it’s going to prevent the growth of the system. So I went in there and asked people, but at the time almost everybody was like, yeah, no problem, when we get anywhere close to the actual limit we’re going to raise it. So I was like, okay, this seems to be moving in the right direction, so I should consider this to be something very serious. And then what happened is that the people that were more on the side that the block size limit should stay small started gaining more and more influence in the community. I think the people that were for raising the block size made a lot of strategic mistakes along the way that allowed those people to gain a lot of influence, and it resulted in a situation where at some point those people were more influential and more numerous than the ones that wanted to increase the block size. And this is where the whole thing started to turn into a war of some kind.

Sunny: What are some of these strategic mistakes that the big block proponents made? Would one of them be backing up Craig Wright’s claim to being Satoshi, perhaps?

Amaury: Yeah, but that was fairly late. I think even before that there were various mistakes. So, what you see in most open source projects… there is another project I participated in a lot that is called LLVM. It’s a compiler infrastructure project, so it’s software that takes code that is written by a developer in a way that a human being can understand and turns it into binary that a chip can execute. And this project has participants from many big companies in the computer science space. We have companies like Google and Facebook and Amazon, and these kinds of people are going to contribute. Why do they contribute? Because they literally have millions of servers. And if you have 1 million servers and you improve the performance of an application by 1%, that’s 10,000 servers that you don’t need anymore that you can use to do something else. So when you have millions of machines it really quickly adds up to a very substantial amount of money. So they are participating for those reasons. We have people from Intel or AMD or ARM or chip manufacturers, and why? Because they want to make sure that the code generated for their chips is very high quality. The engineers that work for Intel care a lot about the performance of the code generated for Intel, but they don’t care that much about the performance of AMD chips, right? So if you let Intel engineers do all the work and you are AMD, that is probably a strategic mistake, because your processors are going to end up being worse for your customers because the software support for them around compilers is not as good. And I think the big blockers made that mistake. The people that were more for small blocks invested very heavily in infrastructure, a company like Blockstream for instance hired a lot of developers of the Core software client, and the other companies not as much. As a result you have this effect where if you depend on some infrastructure but you are not really invested in it, then you are at the mercy of the people building it. So I think that was a bit of a strategic mistake, probably the first one and the biggest one.

Sunny: So the small blocker narrative has been kind of in line with that, where they’ve been saying, look, all the core developers and main infrastructure people are, you know, very pro small blocks, and a lot of these companies and miners are very pro big blocks. But they actually flip the reasoning: their claim is that the reason the core developers are overwhelmingly pro small blocks is that they have some technical insight that makes them realize that big blocks are infeasible. Do you think that’s a valid claim?

Amaury: I don’t think that is the case. I think it’s more of a difference of opinion on what’s important and where the project should go. For people in the Bitcoin Cash community, peer-to-peer electronic cash is the most important thing, and you want the system to have the best properties that fit that bill. People that were more into the small block size see Bitcoin more as a settlement layer, and you would use other systems to transact, like L2 and the Lightning Network and maybe Liquid and various other stuff like that. So you transact using those systems and you use BTC just as a settlement layer for them. So there is a very important difference of vision. And when you don’t want to build the same stuff, the trade-offs that you are making are not going to be the same, and I think this is where the main difference is.

Sunny: So why do you think there was a lack of big block support amongst core developers? Why did the big blockers never really step up and take a big role in a lot of this core infrastructure development?

Amaury: Well, I think the people on the big block side were coming more from the economic standpoint, and I also came to it from the economic standpoint. Maybe that community was a bit weaker on the technical fundamentals for quite some time, I think. A lot of people in the big block camp did not realize how important it was to make sure you have solid infrastructure, and the people on the small block side realized that very early on.

Brian: Yeah, that’s very interesting. One of the things that is noteworthy is that most, or the majority of, the core developers were in favor of small blocks, but at the same time the miners tended to be in favor of bigger blocks. Why is that? I mean, do you think there are any economic reasons why miners would prefer something like the electronic cash approach to the settlement vision?

Sunny: It always seemed a little bit counterintuitive to me, because you’d think that miners would want higher transaction fees, and thus smaller blocks.

Amaury: Well, no, the revenue of the miner is the transaction fee times the number of transactions, right? So for miners to generate higher revenue there are essentially two directions: they can try to increase the volume of transactions, or they can try to increase the transaction fee. And I think that increasing the transaction fee as a project did not make a lot of sense. The reason is that, everything else being equal, you’d rather use the product that has the smaller fee. So if there is one product that has a very high transaction fee and limited capacity, and another one that has a very large capacity but low transaction fees, then in the long run you should see most of the volume moving to the one that has high capacity and low fees. So I think if you are a miner and you are thinking about this long term, the second one makes more sense.
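
As a toy illustration of the fee-times-volume arithmetic Amaury describes, here is a minimal sketch; the fee and volume figures are made up purely for illustration, not market data:

```python
# Toy illustration of "miner revenue = fee per transaction x number of transactions".
# The numbers below are hypothetical, chosen only to show the trade-off.
def block_fee_revenue(fee_per_tx: float, txs_per_block: int) -> float:
    return fee_per_tx * txs_per_block

high_fee_low_volume = block_fee_revenue(fee_per_tx=10.00, txs_per_block=2_500)
low_fee_high_volume = block_fee_revenue(fee_per_tx=0.01, txs_per_block=5_000_000)

print(high_fee_low_volume)  # 25000.0
print(low_fee_high_volume)  # 50000.0 - if volume can grow enough, low fees
                            # can still yield more total fee revenue per block.
```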

Sunny: Okay, so early on, it’s not fair to say that there was no development support behind big blocks, rather it was just missing from a lot of the Bitcoin Core development team. But early on we already started to see many alternative clients pop up, things like Bitcoin Classic and Bitcoin Unlimited. So could you tell us a little bit more about the history of these alternative dev teams, these alternative client implementations?

Amaury: So I was not involved very closely with all of them, so I cannot go into every detail, because some of it I don’t know, but essentially there were three big efforts on the big block side: there was XT, there was Classic, and there was BU. I have little inside knowledge about what happened with XT, so I’m not quite sure. I think it got a lot of traction early on, but it was essentially derailed mostly by political moves, if I understand the history correctly. There was this Hong Kong agreement, for instance, where core developers and miners agreed to not run XT and to run Core’s increase instead, something like that, which essentially took the wind out of XT’s sails. But then later on there were Classic and BU, and the two of them went pretty strong. I think they made a few mistakes, and one of them was not presenting a unified front. There was a lot of infighting between the two, and they were not 100% compatible in terms of consensus, which resulted in a fork on testnet. So then everybody got cold feet, because it was not quite clear which one of the two you should support, right? So I think that was a bit of a strategic mistake, and on the other side, the other camp was also willing to play dirty. And when the other side is willing to play dirty you really need to have your act together.

Brian: What do you mean by playing dirty? Can you expand on that?

Amaury: Yeah, so I mentioned already the things that they were doing well, like investing in infrastructure and stuff like that, but there were also agreements like the Hong Kong agreement, for instance, that were more of a political move than anything else, because they made some promises there that they did not really intend to keep, or maybe they changed their mind later and intended to keep them at the time, but the whole thing was very much political, much more than it was for the benefit of the coin or for technical reasons. There were also the core developers that were more on the big block side, there were a few, like Mike Hearn and Gavin Andresen, but they found themselves isolated very quickly and eventually removed from the project. And there was a fair bit of censorship going on already on Bitcoin Talk and places like that.

Brian: So, because we’re speaking about things from years ago and probably many of our listeners are not very up-to-date with that, I just wanted to spend like three minutes recapping what happened. Basically you had on the one hand people who wanted to have bigger blocks, and we spoke a little bit about that, and then a lot of the core developers wanted to have this other thing called SegWit, which would give a lot of extra technical capabilities, and we can speak later about exactly what SegWit was. So you had these different factions, there was a division and lots of drama around it, and we did many episodes back then, we had Mike Hearn on several times with that in the background, and Greg Maxwell, so there were lots of discussions on this. But basically there were these differing visions, and then there was this sort of agreement to do kind of both, right? So the core developers say, okay, we’ll have a block size increase, at least we’ll double it, and the other side said, okay, we’ll go ahead with SegWit and activate that. But the SegWit thing came first, so the SegWit thing got activated.

Amaury: I think there were strategic mistakes made with SegWit2x as well. Like you mentioned, the fact that the two didn’t activate at the same time was a bit of a mistake. I think there was also probably a bigger mistake, and this was the mistake that led me to believe at the time that SegWit2x would fail with very high probability, and so that made BCH very important. At some point the activation of SegWit2x was modified so that it was compatible with the way UASF activated SegWit. For people who don’t know, UASF was essentially a group of people that decided, on August 1st we’re going to enable SegWit no matter what, and we’re going to run a modified version of the Bitcoin Core software that does that, and even if there is no majority support from the miners or whatever, we’re just going to activate SegWit and essentially fork the network with the SegWit branch on August 1st. That made a lot of people very scared, because a lot of people at the time were very scared of forks, and so they chose to activate SegWit2x in a way that was compatible with UASF. I think that was a mistake, because clearly the UASF people were not really there to find a compromise or have any kind of negotiation. It was a movement that was very much my way or the highway, and if you don’t modify the activation to be compatible with them, on August 1st they fork themselves off the network and find themselves on the minority chain, and then maybe they want it, maybe they don’t want it, maybe they come back or whatever, but essentially it takes a lot of wind out of their sails. On the other hand, if you activate SegWit in a way that is compatible with what they wanted, then they can claim that SegWit activated because of their effort, and so you suddenly give them much more leverage in the negotiation. And you do that in between the time when they get what they want and when the other side is supposed to get what they want, right? So it’s pretty much a guarantee that you’re going to have a bait-and-switch if you do it that way. If you empower the people in the negotiation that don’t want the second part to happen, by the time you do the first part you’re pretty much guaranteed that the second part is not going to happen. So that was what I saw at the time.

Sunny: I’ve seen this debate happen endless times on Twitter already, about whether that August 1st SegWit activation was caused by UASF or by the SegWit2x agreement, and it’s kind of impossible for anyone to really decide.

Amaury: It is impossible. The way SegWit2x was done essentially made it impossible to know.

Sunny: Before we continue talking about SegWit2x and the origins of Bitcoin Cash, one thing I wanted to bring back for one second and discuss really quickly: we often see this block size increase versus SegWit framing, and these are usually the two main popular proposals that are well-known. There were other proposals as well, like extension blocks, which is how you would…

Amaury: I want to go back to something that you guys said a few times, that it was very much big blocks versus SegWit. I think that’s a bit of a strange representation. It may be what it looks like now, but it was not like that then. If you go back, there was this proposal to do SegWit as a hard fork instead of doing it as a soft fork, and a lot of the people that would now be in the big block camp were actually in support of that. I was in support of that, and I know people like Gavin Andresen were in support of that. So it’s not that everything is bad about SegWit, but the way SegWit was done as a soft fork creates an adversarial case of four megabytes: when you implement SegWit, essentially someone can create a block that is up to four megabytes, but effectively in terms of capacity you get 1.4 to 1.7x the capacity, depending on the assumptions you make. We are probably going to see in a few months what the real number is, but it’s in that ballpark, so let’s say less than 2x. But you get a 4x adversarial case, which means your software needs to be able to support up to 4x the base block size. That is not really a problem if you plan to keep the blocks small, but it’s a very big problem if you want to increase the size of the blocks, because suddenly that adversarial case gets incredibly big and someone can craft a special block that exploits it and brings the network to its knees.
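
To make the 4x-versus-roughly-1.6x arithmetic concrete, here is a minimal sketch based on the standard SegWit weight rule (weight = 3 × stripped size + total size, capped at 4,000,000 weight units); the witness-share percentages are illustrative assumptions, not measurements:

```python
# Rough sketch of SegWit block-weight accounting, not consensus code.
# weight = 3 * stripped_size + total_size, and weight <= 4,000,000.
MAX_BLOCK_WEIGHT = 4_000_000

def max_total_size(witness_fraction: float) -> float:
    """Largest total block size (bytes) if `witness_fraction` of it is witness data.

    stripped = total * (1 - witness_fraction)
    weight   = 3 * stripped + total = total * (4 - 3 * witness_fraction)
    """
    return MAX_BLOCK_WEIGHT / (4 - 3 * witness_fraction)

for w in (0.0, 0.5, 0.6, 1.0):
    print(f"witness share {w:.0%}: max block ~{max_total_size(w) / 1e6:.2f} MB")
# 0%   -> 1.00 MB (legacy-style block, no witness data)
# 50%  -> 1.60 MB, 60% -> 1.82 MB (roughly the 1.4-1.7x range for typical usage)
# 100% -> 4.00 MB (the adversarial, witness-stuffed case described above)
```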

Sunny: I know a lot of my issues with the SegWit soft fork proposal were mostly around technical debt, where it just seemed to be a very complex change that touched every piece of the code.

Amaury: Yeah, that’s another thing. The way it was done as a soft fork was significantly more complex than it would have been as a hard fork, because obviously you need to retrofit everything into the existing rules, but the existing rules were never designed with the consideration that you would retrofit all of that into them. So it was a bit more complicated, but that’s the road they chose to go down. So yeah, to get back to your question, there were also proposals like extension blocks. This was actually what I was working on initially. An extension block was essentially a way where you don’t do anything special in the base block, the base block stays similar to what it always was before SegWit or before big blocks or before anything, but you create this extension block in which you can put SegWit-like transactions, and this extension block would be eight megabytes as per the proposal. So you would create a situation where you get most of the benefits of SegWit, you don’t get the main drawback of SegWit, which is the four megabyte adversarial case, and you get bigger capacity as well. And it’s a soft fork. So that seemed to fit the requirements that many different parties wanted, or at least said they wanted, so I started working on that. But then SegWit2x started becoming big, so it kind of took the wind out of the sails of the extension block idea, and at the same time they were doing it in a way that I thought was likely to fail, so I had to change plans.

Brian: One of the things that is also interesting is the genesis of Bitcoin Cash. August 1st was this key date where the UASF threat was that there’s going to be a Bitcoin fork, and people still thought SegWit2x was going to happen, at least most people thought that. Bitcoin Cash actually came beforehand and people were not really paying attention to it, it was like, what’s this weird thing, Bitcoin Cash? And then when SegWit2x started failing, that’s really when Bitcoin Cash picked up. So can you speak a little bit about that? When did you start working on Bitcoin Cash? I mean, you initiated this initial fork as well.

Amaury: Yeah, I wrote most of the software and most of the spec for it. There was also this other guy that goes by the name of Free Trader that wrote maybe the second biggest part of it.

Brian: For you it was very clear even at the time: okay, SegWit2x is going to fail, Bitcoin Cash is the right thing to do now, maybe people don’t see it this way, but soon they’ll realize SegWit2x fails and then momentum will start to build around Bitcoin Cash.

Amaury: Yeah, though maybe I would not put it as strongly. At the time this was in the future, and you never know 100% what’s going to happen in the future, but I thought it was more likely than not that SegWit2x would fail.

Sunny: So a question about the timeline here. I don’t know if these dates are exactly right, this is just what I was able to pull from some articles, but it seems that the Bitcoin Cash chain was announced on May 15th of 2017, while the SegWit2x New York Agreement didn’t come out until May 23rd of 2017. So was the Bitcoin Cash plan… are these initial seeds the result of SegWit2x, or did you already have this idea going in, even before the New York Agreement?

Amaury: So there was an idea to effectively fork the chain and create a big block chain, but I was working on extension blocks at the time, as I mentioned. It was more of an effort that was supported by Classic, BU, and XT as well. But there was this SegWit2x stuff, which I was convinced was unlikely to work, and at the same time there was a lot of discussion between Classic and XT and BU, but they seemed unable to agree on what the spec was going to look like. So roughly two months before the actual fork date, that’s when I jumped in.

Sunny: I see. And so it was sort of this frustration, I guess, that you saw that this SegWit2x thing wasn’t going to work.

Amaury: Yeah, there was a lot of frustration on my side, because what I was seeing at the time is that on one side you have people that are building something I’m not very interested in, but they are executing very well, both on the business side and the infrastructure side, the development and everything. They are doing what it takes to make their thing work, except it was not the thing that I was interested in. On the other side there was a group of people that tried to do something that was more in line with what I wanted, but they seemed to make mistakes again and again, and so that was very frustrating.

Sunny: And so how did this coalition come together? You mentioned the XT and Classic developers were already talking about this, but with Bitcoin Cash, what I see is that the coalition was bigger: you had a lot of these big miners like Bitmain, and public figures, I’ll call them, like Roger Ver. How did this thing really come together and coordinate in just a short period of time, the two to two and a half months you mentioned? From my memory, this Bitcoin Cash hard fork actually seemed very well coordinated, there was a lot of unified messaging there. How did all that coordination come about? Who’s the one who really stepped up and organized this?

Amaury: From what you’re saying, it probably looked more organized from the outside than it actually was. So who stepped up? I stepped up for the backend code and did some coordination obviously, but a lot of us were people that were in the big block movement and wanted to see it happen, and it was very organic. There was no mastermind behind it. It was very organic.

Brian: So let’s speak a little bit about, I mean you touched on it before, there was a big disagreement, not so much around SegWit, but around SegWit as a soft fork, and there was this strong argument and fear, which I honestly never fully understood, that soft forks were such a dangerous thing. And of course other blockchains have taken a different approach, Bitcoin Cash has taken a different approach, Ethereum has taken a different approach. But why is there such a big disagreement around that, and what’s your perspective, and maybe the Bitcoin Cash approach to forks versus the Bitcoin one?

Amaury: So yeah, I think both positions are a bit strange to me, the no-hard-forks-whatsoever one and the we-need-to-do-everything-as-a-hard-fork one; they’re both a bit of a weird ideological position. It really depends on what you want to deploy on the network. Some things just make more sense as a hard fork or a soft fork. Say you want to add something new to the protocol. If you have a very natural extension point where you can include that stuff, then you should do it that way and it’s going to end up being a soft fork, but if there is no natural extension point you should probably not try to retrofit something very weird into a place it doesn’t quite fit, and just do a hard fork instead. There is also this interesting idea that there is actually no difference between a soft fork and a 51% attack. A soft fork is just miners refusing to mine on top of blocks that have some property, and a 51% attack is the same, right? So the main difference is not a difference in what it is, it’s a difference of perception: if you like the new rules that the miners are enforcing, then it’s a soft fork, it’s not a 51% attack. But if you don’t like the new rules that the miners are enforcing, effectively you are facing a 51% attack. So some people are a bit put off by this.

Brian: Yeah, I remember that was one of the arguments that Mike Hearn made, and I think we talked about it back then, where his argument was basically that a soft fork is more dangerous for users, because they don’t explicitly agree with the update and they just kind of go along because that’s the new rule, whereas with a hard fork, okay, if you don’t actually download the new client and run the new client, then you’re not participating in it.

Amaury: Yeah, that’s why it’s similar to a 51% attack in many ways, because as a user you don’t have a lot of say about what’s going on in the case of a soft fork.

Sunny: And censorship, let’s say we decide to censor a certain account, that is really a soft fork then, so…

Amaury: Yeah, the main difference between a soft fork and a 51% attack is very much whether you see the change as a good thing or a bad thing. That’s the difference. It’s very much a difference of perception; on the technical level there’s no difference.

Brian: So what does the Bitcoin Cash ecosystem look like today? So what are the different teams and the different clients?

Amaury: Okay. So in terms of software, Bitcoin ABC is still the main client. This is the client that we wrote and continue to write. BU is still very big within the BCH ecosystem. And I’d say one of the clients that is quite interesting is BCHD, because it’s a bit more of an experimental client. Maybe I would not recommend that miners use it to mine blocks, but because it’s more experimental they can innovate much faster. So they are playing with a bunch of new ideas faster than the other clients are. It’s an interesting client to keep an eye on.

Brian: Yeah, and what does the community look like today? And how has it evolved since the split?

Amaury: Okay, since the split in November?

Brian: No, I mean the original split away from Bitcoin.

Amaury: Okay. I think the community is actually stronger now, even though it’s a bear market. The situation looks worse if you look at it from the outside, but I think the community is much stronger. At the beginning, like we mentioned, it was put together very quickly, and so it took a bit of time for everything to settle down, to identify who were the good people doing solid work and who were the people that were just making noise. It takes some time for everything to emerge and for people to take the positions that make sense for them. And I think we are in a better position on that front. We’re more organized and generally everything is at a much higher level.

Sunny: So another question I had about the planning of the fork was how did you guys come up with the name Bitcoin Cash? Why was this name chosen? Obviously one of the most contentious things about this name is that people like to say, oh, you’re trying to subvert the brand of Bitcoin. So how did this come about? And the famous Roger Ver catchphrase, Bitcoin Cash is Bitcoin, to what extent do you agree with that statement? And is that what you’re trying to do? Are you trying to replace Bitcoin, or are you just trying to create some alternative that will coexist? What’s the goal here?

Amaury: Okay. So yeah, we kept the name Bitcoin because I think it has a legitimate claim to the name Bitcoin, but in a bit of a different way than people who say Bitcoin Cash is the real Bitcoin or something like that. I don’t think the question of which is the real Bitcoin makes a lot of sense. If you say Bitcoin Cash is the real Bitcoin, the people within Bitcoin Cash are going to be happy with that statement, but the people within BTC are probably going to see that as a bit scammy, and the people outside of crypto don’t care about what’s the real Bitcoin and what’s not, right? It’s not even a question they are interested in, so I think it’s a bit of a red herring and people are putting way too much attention on it. So I’m happy to say that Bitcoin Cash is one Bitcoin, maybe, and there are other flavors of Bitcoin now; there is not just one Bitcoin like there used to be. And the name Cash is there obviously to say that this is what we think is important about Bitcoin, right? It’s the peer-to-peer cash system aspect of it, like in the title of the white paper. It’s a bit of a statement that what we think is important is the intent, building this peer-to-peer cash system, rather than adhering strictly to every single detail of what was coded and described in the various early material. We recognize that maybe some of the stuff needs to be improved, like the block size needs to be increased, for instance. So it’s a bit of a statement that this is a kind of Bitcoin and this is what we think is important about Bitcoin.

Sunny: And how do you personally feel about the nickname Bcash? Do you think it’s an okay term to use?

Amaury: The problem I have with it is that it’s often used in a pejorative manner. If there weren’t so many people going, oh, Bcash, Bcash, then I probably wouldn’t see a problem with it. It’s probably a useful shorthand, but because it has now acquired that negative connotation, I don’t like it too much.

Brian: Today… at one point Bitcoin Cash was up to, I think, 20% of the Bitcoin market cap, or maybe even higher, and in terms of hash rate the two chains were at some point almost at parity, I think. But today, of course, Bitcoin is much, much higher in price, I think around $4,000 as we record this, and Bitcoin Cash, I don’t know, in the hundred-ish, hundred thirty range. And also in hash rate there is a big difference now: Bitcoin has a very high hash rate and, as you would expect, Bitcoin Cash… Of course, the whole security assumption of proof of work and of Bitcoin is really that a 51% attack is expensive, and that’s what makes it secure, but with Bitcoin Cash today that’s not really the case. A Bitcoin miner on its own could maybe do a 51% attack on Bitcoin Cash, so is that something that concerns you?

Amaury: Yes and no. Obviously the security on BCH is going to be lower than on BTC because the price is lower. But when you put it in dollar terms, running an attack is still fairly expensive, and also miners have demonstrated in the past that they were willing to pull hash from BTC and put it on BCH temporarily, at a loss, to protect the chain. So I think it’s a very, very strong sign that the miners on BCH right now are fairly committed to protecting it if the need is there.

Sunny: Isn’t that putting a lot of dependence on the altruism of certain miners, or the external incentives of certain mining pools, to protect the network? Because we’ve already seen a number of minority hash rate chains get 51% attacked in the last year and a half. The biggest example is probably Ethereum Classic, which I guess shares a lot of similarities in positioning with Bitcoin Cash, in its position relative to its older brother, we’ll say, right…

Amaury: So GPU coins tend to be weaker in that regard, because with a GPU you can mine any other GPU coin, right? So the pool of available hash rate can be much bigger than what it looks like. It’s not just that people can pull from ETH to attack ETC, they can actually pull from almost every coin on the market except…

Sunny: But don’t you think the pool of Bitcoin miners is even bigger than the pool of all GPU miners from all coins?

Amaury: Probably not. Like I said, if you are ETH that pool is very big, and for most GPU coins it’s much worse.

Brian: I mean, I think your point is sort of fair that, okay, the miners have kind of proven that they will to some extent protect Bitcoin Cash and maybe step up, but that feels very weak. It feels like in Bitcoin you have these game theoretical assumptions and you say, okay, it’s actually economically infeasible to attack this at some scale, and then in this scenario you say, okay, maybe that’s kind of broken, but at least we kind of trust the entities that control it. I mean, it feels like something essential was lost here.

Amaury: You always trust the miners, though the trade-off that you’re making here is a bit different. It’s actually fairly interesting, and I don’t fully agree with it. I kind of agree with you that it’s weaker, but actually some people argue that it’s stronger, and their argument goes: you are paying for that hash rate all the time if you are the majority chain, but actually having so much hash rate on the chain is only useful when you are under attack, so in various ways you are overpaying for security. Whereas if you have a pool of available hash that can be used to defend against an attack, then it’s more economically efficient. And I would say actually both of those arguments are true in some way. So the security is weaker, but it’s more efficient.

Sunny: Was merge mining ever a consideration? This is something I’ve talked about extensively with the Ethereum Classic dev team. So has this ever been on the table, like potentially merge mining with Bitcoin?

Amaury: Well, the problem here is that if you ever get close to the size of the chain you are merge mining with, then you are in a world of trouble, because the incentives don’t work anymore, and I think this is likely to happen. We talked about BCH having been a non-negligible portion of BTC before, and in those conditions merge mining would not have worked very well, it would have been a big problem. And actually I kind of predicted that the share of BCH compared to BTC would decrease a bit during the bear market, and the reason is that people are more sensitive to problems that cause immediate pain than to a possible problem that’s going to happen in the future, right? And right now Bitcoin is not running at capacity, so Bitcoin is working fairly well if you want to transact. It’s maybe not as cheap as BCH, but it’s fairly cheap right now. But what’s going to happen during the next bull run, when people come in, is that you’re going to see the same problem that we saw last year on BTC, and I expect when that happens the share of BCH compared to BTC to increase again. So it would be a mistake to do merge mining in those conditions.

Brian: And do you think it will ever make sense to explore very different approaches to securing Bitcoin Cash, whether that’s proof of stake or something else?

Amaury: So we have in the pipeline a technology called Avalanche, maybe we can talk about it later. Avalanche is essentially something we want to use, and one of the side effects you get is that it’s much more difficult to do a 51% attack on the coin. So that is the kind of stuff that is in the pipeline as well.

Sunny: If you buy this narrative that the majority of the miners on Bitcoin were big blockers, was it ever in the books, or in the thought process, to soft fork a block size increase into Bitcoin? By that I mean things like extension blocks, or you merge mine Bitcoin Cash and soft fork a drivechain between them and force them to be in parity and stuff. Were any of these kinds of things ever under consideration for forcing big blocks onto Bitcoin through a soft fork?

Amaury: Yeah, obviously I’ve worked on extension blocks, so that was kind of one of the ideas behind extension blocks, though I don’t like this idea of forcing things onto people. If there is a disagreement on something, it’s better to find an agreement, right? But if no agreement can be found, I think it’s better to fork and let the market decide in the end. When this kind of thing has happened, both branches of the fork increased in value: in the case of BTC and BCH, or the case of ETH and ETC, in both cases the sum of the two coins is larger than what was there before. The reason is that you have two visions, and neither vision can realize itself while everybody is fighting with each other, whereas after the fork each vision can be realized, so the overall value is increased. Obviously if you fork for more frivolous reasons, then this doesn’t happen and you see a net destruction of value. But if the situation is really such that you have two visions, and the people that have those two visions are fighting with each other and cannot come to an agreement, then you’re better off forking than forcing the community into something they don’t want.

Sunny: Now can we shift gears and talk a little bit more about Bitcoin Cash’s approach to scalability? You had this great blog post talking about how you guys are focused heavily on client improvements and how you can create clients that can start to support bigger blocks. Could you go ahead and just give us a bit of a summary of that vision?

Amaury: Okay, yes. So at a high level, there were many arguments made by the people more on the small block side about the problems when you increase the size of the block, and those arguments tend to be right on the qualitative aspect but not very right on the quantitative aspect. You don’t run into all those problems when you increase to a few megabytes instead of one megabyte, but if you want to go to very large blocks then there are many problems that you run into, and basically they boil down to one requirement: for a blockchain that is based on proof of work to work well, and for all the incentives to work properly, you need the time required to propagate a block on the network and validate it to be small compared to the block time. When that’s not the case, perverse incentives in mining first start to occur, and the step after that is that it doesn’t work anymore. Right? If the time you need to propagate a block and validate it is more than 10 minutes, on Bitcoin or any variation of Bitcoin that has a 10 minute block time, then what’s going to happen is that you’re going to find blocks faster than blocks can propagate on the network. The situation is that the network doesn’t converge to one truth anymore, it just forks more and more, faster than it can converge. So if you want to have bigger blocks you cannot just say, okay, we change that number in the software and everything is going to work great. You actually need to have solutions for the various problems that make the propagation and the validation of a block slower, and at a very high level it’s more of a death-by-a-thousand-cuts kind of problem than one big issue you want to solve. But generally, first, if you want to make propagation faster, you need to propagate less information. That means you need the node to be able to predict what the next block is going to look like as much as possible, and then you only need to transmit the difference between what the node expects and what the reality is, and you want to keep that difference as small as possible. So that’s the first thing. The second thing is that you want to be able to validate the block very quickly, and to do that you need to be able to validate the block in such a way that you have many small independent chunks of work that don’t depend on each other. That way you can have different cores of a machine do each of them, or even, if you scale very big, you can have a rack of machines and each of them does a portion of the work. But if you have work that depends on other work, what we call serial work, then it’s a bit of a challenge. So the general idea is to limit the serial stuff and deploy technology that allows nodes to synchronize with each other as much as possible ahead of time, so that they have less work to do when the block arrives.

Sunny: Right, just last week we actually had Alexey Akhunov from TurboGeth, and he’s taking a very similar approach to the scalability issues in Ethereum, where a lot of other people are focusing on sharding and stuff, but Alexey has been really focusing on pushing down the propagation time: let’s improve the sync speed, let’s improve the validation speed. So there are a lot of similarities there.

Amaury: Well, those are not very new ideas. This is how many large-scale systems work, Facebook for instance; there would be no way to do any kind of serial work at Facebook scale, right? It’s just impossible. So all the work is organized in such a way that you can distribute a small amount of work to many machines, this small amount of work doesn’t depend on the work that the other machines are doing, and then all the machines report back to you and you can aggregate those results and compute whatever you want from them.

Brian: So you mentioned that the performance increase comes when propagation happens more quickly, and propagation can happen more quickly if nodes can already tell what the next block is going to look like without whole blocks being sent around. I guess that ties into the topic of transaction ordering and the changes you guys have made there. But first of all, how does transaction ordering work today in Bitcoin, and what are the downsides of it?

Amaury: Okay. So right now in Bitcoin the transaction ordering is what we call topological, in computer science terms, and what that means is that you can essentially put transactions in any order, with the only constraint that if one transaction spends from another, they need to be ordered such that the parent transaction comes first and the one that spends from the parent, the child transaction, comes after it in the block. So you have this constraint, but no other constraint in the block.
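
As an illustrative sketch (not Bitcoin ABC’s actual validation code), the topological rule just described can be checked like this; the Tx class and its field names are hypothetical:

```python
from dataclasses import dataclass, field

# Toy model of the topological ordering rule: a parent transaction must appear
# in the block before any child transaction that spends one of its outputs.

@dataclass
class Tx:
    txid: str
    input_txids: list = field(default_factory=list)  # txids this tx spends from

def is_topologically_ordered(block_txs: list) -> bool:
    in_block = {tx.txid for tx in block_txs}
    seen = set()
    for tx in block_txs:
        for parent in tx.input_txids:
            # Only parents that are themselves in this block constrain the order.
            if parent in in_block and parent not in seen:
                return False  # child placed before its in-block parent
        seen.add(tx.txid)
    return True

# Valid: parent "a" before child "b"; invalid once the order is flipped.
assert is_topologically_ordered([Tx("a"), Tx("b", ["a"])])
assert not is_topologically_ordered([Tx("b", ["a"]), Tx("a")])
```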

Brian: Okay, and so that means that if I as a miner create a new block, and Sunny is a different miner, then Sunny may be able to predict what transactions will be in that block, but he doesn’t know the structure of the block.

Amaury: Yes, exactly. Because there are many possible valid orderings, when you find a block you not only need to transmit to the other party what transactions are in the block, you also need to transmit their order in the block. And you can do the computation based on information theory. Let’s assume you know of a set of transactions, and Sunny also knows of a set of transactions that is almost the same as yours, because you are both connected to the same network and the transactions propagate on the network, so you both know about the same transactions. So let’s assume that you find a block and you want to tell Sunny what the block looks like. Well, if you know of the same transactions, you just need to send a one-bit yes or no for each transaction, right? So if there are N transactions lying around, theoretically you could transmit N bits of information to Sunny to tell him which transactions are in the block and which are not. Obviously in practice you need to send more than that, but that’s the theoretical limit. Now, if the ordering is important, then you also need to send the information about what order they are in. If you have N transactions, the first transaction can be in N different positions, the second one in N minus 1 positions, because it cannot be where the first one is, and so on. So you get N factorial possible orderings, and to transmit that you need on the order of N log N bits of information. So you have a factor of log N difference between the two. Say you have a thousand transactions in your block, then you literally have about 10 times more information about ordering than information about what is in the block and what is not, and as you grow bigger it only gets worse. So any kind of technology that relies on you and Sunny having common knowledge of what the state of the world is ends up transmitting almost only information about ordering and very little information about what’s actually in the block. Those are the theoretical limits, but in practice you have this technology called Graphene that allows you to transmit blocks, and there are two versions of Graphene that have been implemented, right now at the prototype stage: one that transmits the ordering and one that doesn’t. And you see that the one that doesn’t transmit the ordering needs seven times less information to propagate a block. So it’s not as good as the 10x you would expect from the theoretical perspective, but you can see that it’s in the same ballpark.
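
A quick back-of-the-envelope check of that counting argument (purely illustrative; it just compares log2(N!) bits for the ordering against N bits for membership):

```python
import math

# Membership: ~N bits (one yes/no per transaction both peers already know about).
# Ordering:   ~log2(N!) bits (which of the N! permutations the block uses).
def membership_bits(n: int) -> float:
    return float(n)

def ordering_bits(n: int) -> float:
    return math.lgamma(n + 1) / math.log(2)  # log2(n!)

for n in (1_000, 10_000, 100_000):
    ratio = ordering_bits(n) / membership_bits(n)
    print(f"N={n:>7}: ordering ~{ordering_bits(n):,.0f} bits, "
          f"membership ~{membership_bits(n):,.0f} bits, ratio ~{ratio:.1f}x")
# N=1,000 gives a ratio of roughly 8.5x, close to the "about 10 times more"
# figure above, and the ratio keeps growing (roughly like log2 N) as blocks grow.
```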

Brian: Okay, that’s amazing. And I know in Bitcoin there have also been some efforts on reducing propagation time, there’s a relay network, but does that work because it only transmits the headers? What are the similarities and differences with the efforts that have happened on the Bitcoin side to reduce propagation time?

Amaury: The general idea is the same for all the fast relay techniques, whether it’s Compact Blocks or the fast relay network or Graphene or whatever, right? They all rely on the same assumption: if you want to send a block to Sunny, the two of you have a lot of information in common already, you know about most of the transactions that could possibly be included in the block. So instead of transmitting the content of the block to Sunny, you transmit to him a special data structure. Usually there is a short ID associated with each transaction, and you’re going to say to Sunny, the block has this, this, this, and that transaction, by sending a list of short IDs. Then Sunny is going to look at the transactions he knows about and match the short IDs to know which ones are in the block, and you need those short IDs to be ordered the way they appear in the block. They all do some variation of this.
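
Here is a toy sketch of that short-ID idea, purely illustrative; real protocols such as Compact Blocks and Graphene use salted short IDs, set sketches, and a fallback round trip for missing transactions, none of which is modeled here:

```python
import hashlib

# Toy short-ID relay: both peers already know (almost) the same transactions,
# so the sender ships only a small ID per transaction, in block order.
def short_id(txid: str, nbytes: int = 6) -> bytes:
    return hashlib.sha256(txid.encode()).digest()[:nbytes]

def encode_block(block_txids: list) -> list:
    """Sender: replace full transactions with short IDs, in block order."""
    return [short_id(t) for t in block_txids]

def decode_block(short_ids: list, mempool_txids: set) -> list:
    """Receiver: match each short ID against transactions already in the mempool.
    A None entry means the transaction is unknown and would have to be fetched."""
    lookup = {short_id(t): t for t in mempool_txids}
    return [lookup.get(sid) for sid in short_ids]

mempool = {"tx_a", "tx_b", "tx_c", "tx_d"}   # what the receiver already has
block = ["tx_c", "tx_a", "tx_d"]             # what the miner actually mined
assert decode_block(encode_block(block), mempool) == block
```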

Brian: And I guess what you guys are trying to do here, with having a kind of predefined order, is that if I produce a block with a certain list of transactions in it, that block is going to look exactly the same as if Sunny produced a block with the same transactions. Is that basically…

Amaury: Yeah, so let’s imagine we continue with the same technique, right? You send Sunny a list of short IDs that correspond to each transaction. Then for the first one, Sunny needs to check all the transactions he knows about to see which one matches that short ID, and for the second one the same, and the third one the same, and so on. But if you have a predefined ordering, Sunny only needs to match against the transactions that could possibly fit at that position in the block. So if there are ten transactions in the block, then for each ID Sunny essentially has one tenth the number of transactions to match against that short ID, which means you can get much more aggressive about how small the short IDs are, because they need to discriminate between many fewer transactions. That’s the intuition behind why you can transmit much less information.
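
The sketch below shows another consequence of a canonical ordering, assuming a CTOR-style rule that sorts non-coinbase transactions lexicographically by txid: the receiver can reconstruct the exact block order locally, so the sender essentially only needs to communicate membership:

```python
def canonical_order(txids: list) -> list:
    # CTOR-style toy rule: sort lexicographically by txid (coinbase handling omitted).
    return sorted(txids)

# Miner side: the block is assembled in canonical order.
mempool = ["9c41", "0a77", "f3d2", "55b0"]          # txids both peers know about
block = canonical_order(["f3d2", "0a77", "9c41"])   # -> ['0a77', '9c41', 'f3d2']

# Receiver side: knowing only *which* transactions made it into the block is
# enough; the exact ordering is recomputed locally instead of being transmitted.
members = set(block)
assert canonical_order([t for t in mempool if t in members]) == block
```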

Brian: So you said what’s in testing now and in exploration is this Graphene, but when Graphene comes to Bitcoin Cash, we will basically have a predefined transaction ordering, and if I produce a block with a certain set of transactions it will look the same as Sunny’s, and so then we can cut down the amount of data that’s being propagated.

Amaury: Yes.

Brian: And this would of course be a hard fork as well.

Amaury: No, no, the hard fork is enforcing the transaction ordering, which we already did. So now we can deploy Graphene whenever it’s ready. Right now it’s at the prototype stage, but it’s probably going to be ready during the year.

Brian: Oh, so you do have a transaction ordering which is enforced already, but there isn’t yet the technology to take advantage of that order to reduce the amount of data that’s being sent around, and that’s the Graphene thing.

Amaury: Exactly, exactly. That technology is at the prototype stage at this point. It’s not yet deployed.

Sunny: But real quick, this does come at a cost, right? Because once you’ve gotten rid of this, what was the term you used, topological transaction ordering, you now put an extra burden on any full node, or anyone who’s verifying a block, to basically make sure that you don’t have… how do you deal with things like child-pays-for-parent and whatnot?

Amaury: Okay. So from the client’s perspective, the one that receives the block and needs to validate it, what actually happens is that topological ordering is exceedingly difficult to validate in parallel, and this is one of the reasons we wanted to remove it, because you want to be able to parallelize block validation. The way you do it is that you pass over the block twice. First you pass over the block, go over all the outputs of all the transactions, and add them all to the UTXO set. That part can be done in parallel; it’s embarrassingly parallel. Then you do a second pass over the block, you go over all the inputs, and you mark all the outputs being spent in the block as spent. If you do it that way, you essentially have a two-step process to validate the block, and each of these steps is very parallelizable.

Sunny: Yeah, I don’t see how this helps with the parallelizability here. It seems like you could do something similar with topological ordering: in the optimistic case you could be doing it in parallel, and only when you hit a child-pays-for-parent do you have to deal with that, right?

Amaury: Yes, absolutely. You can do it optimistically in parallel and then fall back to the topological stuff. But your fallback is always going to be serial, so I can produce a block with a lot of chained transactions in it and force you to validate it with essentially no parallelism. I can poison you with a bad block.

Sunny: But isn’t that true in this case as well? Can’t I just create a block with a chain of child-pays-for-parent transactions, and then you basically end up falling back to sequential as well, because you have to do so many iterations?

Amaury: No, because you are going to add all the outputs to the UTXO set. Say you have a thousand chained transactions, and they all have one input and one output to make it simpler to understand. Then you’re going to add the thousand outputs to the UTXO set, and in the second pass you’re going to spend a thousand outputs. And at the end, the net effect is one output that is spent and one that is created in the UTXO set.
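
To make the two-pass idea concrete, here is an illustrative sketch of order-independent validation (not Bitcoin ABC’s implementation; signatures, fees, and most consensus checks are omitted), run on the thousand-chained-transactions example just discussed:

```python
from concurrent.futures import ThreadPoolExecutor

# Each tx is modeled as (txid, spent_outpoints, n_outputs); an outpoint is (txid, index).
def validate_block(txs: list, utxo_set: set) -> bool:
    with ThreadPoolExecutor() as pool:
        # Pass 1: add every output created in the block. Each transaction is an
        # independent task, regardless of where it sits in the block.
        for created in pool.map(lambda tx: {(tx[0], i) for i in range(tx[2])}, txs):
            utxo_set |= created

        # Pass 2: collect every outpoint spent in the block, again one task per tx.
        spent_sets = list(pool.map(lambda tx: set(tx[1]), txs))

    spent_total = set()
    for spent in spent_sets:
        if spent & spent_total:          # double spend within the block
            return False
        spent_total |= spent
    if not spent_total <= utxo_set:      # spends an output that never existed
        return False
    utxo_set -= spent_total
    return True

# 1000 chained one-in/one-out transactions: pass 1 adds 1000 outputs, pass 2
# spends 999 of them plus one pre-existing coin, leaving one new unspent output.
utxos = {("coinbase", 0)}
chain = [(f"tx{i}", [("coinbase", 0) if i == 0 else (f"tx{i-1}", 0)], 1)
         for i in range(1000)]
assert validate_block(chain, utxos) and utxos == {("tx999", 0)}
```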

Brian: Great, that’s very helpful, and that sounds like a very interesting change. Now another thing that’s interesting: when you forked, the new block size in Bitcoin Cash was eight megabytes, and since then there’s been a further increase to 32 megabytes. Why is that? In Bitcoin Cash today there’s not that much usage; blocks are mostly empty and the network is never really at capacity. So what was the reason to go to 32?

Amaury: I’d say probably because we can. You’ve got to understand that because there was so much contention around increasing the block size to begin with, there is a bit of apprehension within the BCH community that the block size is going to get stuck again. And I think this is mostly why, even though it’s not needed right now. Eight megabytes would be perfectly fine today; we are not using that much capacity. But people are afraid that by the time we need that capacity, the ecosystem will have grown a lot and maybe there would be another class of small blockers in there.

Brian: Yeah, okay, that makes a lot of sense. So basically you’re saying you want to change the default now, when there’s no real controversy around it, rather than get into a situation like in Bitcoin again. And of course in Bitcoin, I know that when it was initially launched there was either no block limit or a really big one, and at some point the limit was added as a sort of hack: let’s just put this in for the moment so that nobody can create a giant block, and later, when we reach it, we’ll get rid of it. I think that was Satoshi’s thinking back then, and it didn’t seem like he anticipated that this was going to become a contentious issue.

Amaury: Yeah, I think this is what happened. And because that happened once, people are kind of afraid that it’s going to happen twice. So right now there is a discussion within the BCH community about having an algorithm set the block size rather than having people decide once in a while to increase it.
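
As a purely hypothetical illustration of what “an algorithm sets the block size” could look like, one common shape for such schemes is a cap derived from the median size of recent blocks with a hard floor. This is not any specific BCH proposal, just a sketch; the function name, floor, and multiplier are made up.

```python
# Hypothetical adaptive block size limit: a multiple of the median size of
# recent blocks, never dropping below a fixed floor.
from statistics import median

def adaptive_block_limit(recent_block_sizes, floor=32_000_000, multiplier=2):
    """Limit = max(floor, multiplier * median of the recent block sizes)."""
    if not recent_block_sizes:
        return floor
    return max(floor, int(multiplier * median(recent_block_sizes)))

# Mostly small blocks keep the floor; sustained growth raises the limit.
print(adaptive_block_limit([200_000, 500_000, 300_000]))           # 32000000
print(adaptive_block_limit([40_000_000, 45_000_000, 50_000_000]))  # 90000000
```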

Sunny: Yeah, that was the idea behind the original Bitcoin Unlimited proposal, right?

Amaury: No, Bitcoin Unlimited is a bit of a different proposal. What they were doing is what they call emergent consensus, which essentially creates a notion of soft consensus. If you see a block that is bigger than your block size limit, instead of marking that block invalid you mark it excessive, meaning it’s not invalid, but you are not going to follow that chain right now. Then, if you see that most of the network is building on top of that excessive block rather than on top of what you consider the main chain, you reconsider that block and actually try to validate it. That is the idea behind emergent consensus. One of the problems with it is that it makes upgrading quite difficult, because everybody needs to do it at the same time: if you do it by yourself you may end up creating a block that is too big, that everybody rejects, and you lose a bunch of money. So it puts the miners in a bit of a tough spot, and I think this is why it wasn’t widely adopted.
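
A minimal sketch of the excessive-block behaviour Amaury describes, assuming a node-configured size limit and an acceptance depth after which an excessive block is reconsidered. The constants and names are illustrative, not Bitcoin Unlimited’s actual code.

```python
# Sketch of emergent-consensus style handling: blocks over the node's own
# limit are marked "excessive" rather than invalid, and are only followed
# once enough blocks have been built on top of them.

EXCESSIVE_BLOCK_SIZE = 32_000_000   # this node's limit, in bytes (illustrative)
ACCEPTANCE_DEPTH = 4                # blocks built on top before reconsidering

def classify_block(block_size):
    return "excessive" if block_size > EXCESSIVE_BLOCK_SIZE else "acceptable"

def should_follow(block_size, blocks_built_on_top):
    # Follow immediately if the block is within our limit; otherwise wait
    # until most of the network has demonstrably built on top of it.
    if classify_block(block_size) == "acceptable":
        return True
    return blocks_built_on_top >= ACCEPTANCE_DEPTH
```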

Sunny: I see. I also heard some things about the Bitcoin Unlimited team working on what they call gigabyte blocks. How realistic a proposal is this? Is it some sort of long-term vision, or something you’re looking into in the very near future?

Amaury: So the Bitcoin Unlimited people are running what they call the gigablock testnet, which is what the name says: a testnet with a ridiculously large block size, on which they generate a ton of transactions. The main goal is not to do gigabyte blocks right now, but to identify what kind of bottlenecks and challenges exist when you want to grow the capacity of the network, and then take the data that comes out of that experiment and use it to improve the software today. There is no immediate plan to move to one gigabyte. If the software can support it and everything works, why not, but right now it’s not the case that we can do that safely.

Sunny: Okay, cool. And I guess the last thing, as we’re running out of time for this week: one of the reasons I personally got pretty interested in Bitcoin Cash, and we’ve touched on this throughout, is that you seem to have a very open policy on hard forks. To me, Bitcoin Cash seemed like the place where all these ideas that could only be done as hard forks on Bitcoin now actually have somewhere to be tried out and tested. I know you came up with a roadmap process where you’re committing to a hard fork every six months, or something like that. Could you discuss a little bit how that was agreed upon, and why?

Amaury: Yeah, so this is what we do: we do an upgrade, usually a few hard fork and a few soft fork changes, every six months. And we are not the first to do that; Monero has been doing it for a fairly long time, and other coins that don’t have a very specific schedule also fork on a regular basis, like ETH. The reason we went that way is that we knew that if we want to increase capacity a lot, we’re going to need to change a few things. It’s not enough to just change a number and expect everything to work well, because the number is more of a security measure; there is stuff that doesn’t work when you go bigger, and it’s not because you change that number that it suddenly becomes safe to do big blocks. So we knew we would need to improve the software to be able to create, propagate, and validate those large blocks faster if we wanted to realize that vision. We needed some way to do hard forks for that, and doing it on a schedule that is set ahead of time makes things easier for everybody, because everybody knows what to expect.
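
For context, Bitcoin Cash network upgrades have activated once the median time past of recent blocks crosses a timestamp scheduled months in advance. A hedged sketch of that idea follows; the activation constant and function names are illustrative, not the actual consensus code.

```python
# Sketch of timestamp-based upgrade activation: the new rules turn on once the
# median time past (MTP) of the last 11 blocks reaches a pre-scheduled time.
from statistics import median

SCHEDULED_ACTIVATION_TIME = 1_600_000_000  # illustrative UNIX timestamp

def median_time_past(last_11_timestamps):
    # MTP: the median of the previous 11 block timestamps.
    return median(last_11_timestamps)

def upgrade_rules_active(last_11_timestamps):
    return median_time_past(last_11_timestamps) >= SCHEDULED_ACTIVATION_TIME
```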

Brian: Well, thanks so much, Amaury. This was a really great discussion, and I think that concludes the first part of this Bitcoin Cash interview. We’re going to do a second part, which comes out next week, where we’ll speak about the split between Bitcoin and Bitcoin Cash, and then of course the fork between Bitcoin Cash and what became Bitcoin SV. We’ll also speak about the other things you’re working on technically, because Bitcoin Cash is certainly experimenting with and exploring a lot of interesting new technology. So yeah, we’ll come back to that next week.