Episode 310

StarkWare – Productizing zk-STARKs to Provide Blockchain Scalability and Privacy

Eli Ben-Sasson

This past year, we have witnessed what some are calling a “Cambrian Explosion” in zero-knowledge proof systems. New proof systems based on a variety of cryptographic assumptions are popping up every week. And while zero-knowledge systems are known for their privacy-preserving characteristics, they have proven particularly useful for scaling blockchains through off-chain computations.

We’re joined by Eli Ben-Sasson, Co-founder and Chief Scientist at StarkWare. His company is developing a full proof stack which leverages STARKs. Pioneered by Eli, STARKs are zero-knowledge cryptographic proofs which are scalable, transparent, and post-quantum secure. StarkWare has demonstrated how zk-STARKs may be leveraged to provide off-chain scalability by generating proofs of computational integrity which may be verified on-chain.

Topics discussed in the episode

  • Eli’s trajectory and transition from academia to the startup world
  • The origin story of Starkware and the founding team
  • The current explosion of zero-knowledge proof research
  • An overview of zero-knowledge proof systems and how they work
  • What are STARKs and what are their properties
  • How STARKs are different from SNARKs and Bulletproofs
  • What zk-Rollups are and how they are used by StarkWare to achieve scalability
  • The StarkDEX experiment and the scalability benefits it demonstrated
  • The issue of data availability with layer-2 scaling solutions
  • StarkWare’s business model and the solutions they are building for customers

(19:10) Eli’s journey from the academic world to the startup world

(21:06) When Eli first got involved with ZCash

(22:01) StarkWare origins

(23:43) The zero-knowledge Cambrian explosion

(27:39) Finding the perfect proof for most use-cases, or many systems for different cases

(29:00) The trade-offs and comparisons between different proof systems

(30:45) A refresher on zero-knowledge proofs

(32:42) What STARKs are, and how they fit into this broader context

(34:13) The meaning of the various properties represented in the name STARK

(36:03) More on the difference between SNARKs and STARKs

(37:31) Tradeoffs between the ZK-SNARKs of Zcash and the STARKs of StarkWare, in comparison to Bulletproofs

(41:12) Quantum Resistance

(41:47) STARKs’ reliance on cryptographic assumptions that have existed since the ’70s, and why it took so long for them to come about

(44:27) About some of the newer stuff coming out, like Aurora in the IOP family

(45:58) Specific use cases for any of these systems over the other

(52:07) How STARKs are used to reach scalability of tens of thousands of transactions per block

(53:59) Why Eli chose to focus on scalability, rather than privacy

(1:02:27) The StarkDEX demo

(1:05:50) Starkware’s goals and its customers

(1:08:34) What prover-as-a-service is and why it is useful

(1:09:59) Writing the proofs yourself, or licensing

(1:13:08) Final Thoughts


Sebastien Couture: We’re here with Eli Ben-Sasson, who is a returning guest on the podcast. He’s been on the show before; it was almost four years ago. And Eli has been gracious enough to give us some of his time while he’s on vacation, and it’s late at night where he is in Israel. Eli, thanks for coming back on the show.

Eli Ben Sasson: Thanks Sebastien and Sunny, always a pleasure to be on the show.

Sebastien: Well, it’s a pleasure to have you back. And as I mentioned, I was looking before we started recording: the last time you were on was in 2016. This was long before, I think, there was even an idea to do something like StarkWare. Back then you were a professor at Technion. Talk about your trajectory since then. What’s it been like to go from the academic life to full-on startup mode?

Eli: A lot of fun, in one line. This trajectory, I think, started years earlier. My eureka moment was in May 2013, at the Bitcoin conference in San Jose, where it dawned on me that the research I was doing on scalable proof systems could be very useful for blockchains. That was my turning point. In 2016, I was still doing research as a professor, advancing the science and technology of the very particular brand of proof systems that I think we’ll talk about later. Then, I believe it was at the end of 2017 that we realized it might be time to try and commercialize this stuff. This was right after we were ready to publish the work on ZK-STARKs that we’ll talk about later on, and we started this company almost two years ago, around the end of 2017. It’s been a lot of fun since then. Very exciting.

Sunny Aggarwal: Before working on StarkWare, you were also one of the co-founders of Zcash, right? Was Zcash already going on when you were on the show last time, or was that something you got started with afterwards?

Eli: I was definitely involved in Zcash in 2016. I think the coin launched at the end of 2016, so I guess around the same time I was doing that interview with you, but I was definitely involved with Zcash then, because the company had been working on it for a while. So back then I was also involved in Zcash, which is also a very exciting project that I’m very proud of having contributed to, along with my other founding scientists. But since then, we moved on to other technologies and other endeavors.

Sunny: So one of the other co-founders of Zcash, Alessandro Chiesa, is now, like you, a Chief Scientist at StarkWare. Can you tell us a little bit about the origin story of how StarkWare came into being?

Eli: Yeah. I guess for me, the origins go a very long way back, to the days I started my postdoc in 2001 at MIT. I was doing research with Madhu Sudan, one of the leading figures in the development of the PCP theorem, and it was this trajectory of both making systems a bit more efficient and also pushing the stuff more and more towards practice. This has been our passion for a very long time. So again, for me this started maybe in 2001. I started collaborating with Alessandro Chiesa around 2010, and shortly after, Michael Riabzev, our third co-founder, also started working with us, advancing both the science and the technology of this thing. Uri Kolodny, our CEO and fourth co-founder, has been a close friend of mine for more than 30 years, and for almost as many years he was my business mentor. He was certainly following everything, helping us out already with Zcash and other things. So at some point, when it was clear that this particular technology could benefit from a dedicated company, the four of us joined forces, and that’s how it started, towards the end of 2017, which is about two years ago.

Sebastien: Cool. We met a couple weeks ago in Tel Aviv, where StarkWare organized this fantastic conference called StarkWare Sessions. We’ve mentioned it a few times on the podcast since then, and during your keynote talk you gave this description of the zero-knowledge space as going through a Cambrian explosion. Describe what you meant there, and talk about the unique time in which we live with regards to how ZKPs are evolving.

Eli: So, this notion of the Cambrian explosion: there was this era, about half a billion years ago, where, from this primordial soup of microbes and things like that, all of a sudden, in a very short span of time, a lot of the more advanced creatures that we see today, various plants and insects and other forms of life, spawned off. And it’s similar with cryptographic proof systems, commonly referred to as ZKPs, although not all of them are formally zero-knowledge proof systems; there’s a big family of them. These proof systems have been researched, in theory, since 1985, since the beautiful discovery by Goldwasser, Micali, and Rackoff of interactive proofs. And they’d been pretty much confined to theoretical works. Then, starting around 2005-2010, suddenly we saw this emergence of more practical stuff. And over the past decade, there’s been a proliferation of different forms of proof systems based on all kinds of different assumptions. It seems that the speed of release of new kinds of proof systems is increasing almost exponentially, or at least very rapidly. Just over the past three or four months we’ve heard about PLONK and Sonic and Supersonic and DARK and Fractal and Marlin, and by the time we end this interview, maybe a couple more will be released. So it’s really quite remarkable.

Sebastien: And where do you think this is all heading? If we’re now in the Cambrian explosion, and you equate that to the real Cambrian explosion that happened with regards to life, where could this all lead us, knowing that there’s just so much more to discover?

Eli: Hard to say. One thing I think is for sure: we’ll see a lot of systems deployed and integrated into products. That’s one thing I think StarkWare is leading, with this adoption of a particular form of proof systems and bringing them into live products that will help scaling. So we’ll see a proliferation not just of academic research, but also of dedicated productization and robust code bases that will be used. I think at the same time we’ll also see a better understanding of the different building blocks from which you can compose and build different proof systems. We’re already seeing this. So this will continue, and at some point we’ll come to some decent understanding. And the most exciting stuff is that often with research, some completely unexpected new discoveries or challenges or open problems might be found, but one cannot predict what they will look like. That’s maybe, as a scientist, the most exciting aspect to me.

Sunny: Do you think we’re going to find the perfect proof system that satisfies most of the use cases, or are we going to end up with an ecosystem where we have many different proof systems, each of which is good for specific use cases?

Eli: I think that when the dust settles, there’ll probably be a very small number of proof systems that are actually used at scale, because they’re built on different principles, and mixing them is not as efficient as sticking with one or two of them. And I’m biased, of course, but my bet is on STARKs, for reasons that I can describe later. It’s a little bit like other kinds of infrastructure in the computer science world: there is an abundance of communication protocols, or ways to build operating systems, or programming languages. But at the end of the day, there’s a very small number of them that actually stick around and are used by everyone. I think it will be a little bit like that. Maybe not one, but I think there’ll be a small number. So not all this cool research will necessarily be adopted as infrastructure by all systems.

Sunny: So how should we go about thinking about the trade-offs and comparisons between different proof systems? What are some of the parameters we should be looking at? The ones that come up that I’m aware of are things like prover time, verification time, and proof size. What are some other things we should be looking at?

Eli: Yeah, so definitely prover time, verification time, and proof size. I would say that, especially if you look at scaling, it’s far more important to also look at the amortized costs, which means for a certain batch, you take the prover time and divide it by the number of statements or transactions that the proof covers, and the same thing with verification time or gas cost, and the same thing with proof length. That’s far more important in terms of scaling. One other set of parameters that I think was a bit overlooked, and that we were very interested in, is the cryptographic assumptions on which you’re building these systems. A way to think of it is in terms of future-proofing your systems. The more fundamental your assumptions are, the longer they’ve been around, and the more other stuff is built on them, the more future-proof they are compared to more exotic assumptions that have been around for a shorter amount of time and been scrutinized by fewer peers. So that’s another dimension that was slightly overlooked in evaluating these systems.
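The amortized-cost framing Eli describes is simple enough to capture in a few lines. This is a toy sketch; the figures in it are hypothetical, not numbers from the episode:

```python
# Toy illustration of amortized proof costs (the figures are hypothetical).
# For a batch, the per-transaction cost of any metric (prover time,
# verification time, gas, proof length) is the batch-level cost divided
# by the number of transactions the proof covers.

def amortized(total_cost: float, batch_size: int) -> float:
    """Per-transaction share of a batch-level cost."""
    return total_cost / batch_size

# A (hypothetical) proof that takes 60 s to generate and covers 10,000
# transactions amortizes to 6 ms of proving time per transaction.
print(amortized(60.0, 10_000))  # 0.006
```

The same one-liner applies to every axis Eli lists: divide batch proving time, batch verification gas, or batch proof length by the batch size.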

Sebastien: So I think this is a good segue into the world of STARKs. And before we dive deep into STARKs, I think it’d be helpful to get a brief refresher on zero-knowledge proofs, so that everyone’s on the same page. Can you summarize very simply: what is a zero-knowledge proof?

Eli: So a zero-knowledge proof has a mathematical definition, and it is one that covers privacy. Informally, think of a zero-knowledge proof as some beefed-up grocery receipt: when you look at it, you’re completely certain that, let’s say, the total sum that you need to pay is correct, but you learn nothing other than this fact. So it’s a privacy-preserving technique, a magical one at that. The term ZKP, by the way, has by now been borrowed to cover a much larger range of proof systems that I like to refer to as either cryptographic proofs in general, or sometimes as proofs of computational integrity. Both of these terms are not mathematically and formally defined, so you can loosely use them to describe a very large range of systems, including ones that mostly care about scalability and not privacy. So you have this receipt that tells you that a very large computation, or a very large batch of transactions, has been processed correctly, without you needing to pay the cost of checking each and every one of these transactions. Within this larger domain of cryptographic proofs, or proofs of computational integrity, there’s a variety of technologies that allow you to scale systems up and assert their correctness and computational integrity, and also to do so in a privacy-preserving manner.

Sebastien: Okay, thanks, that’s a very clear explanation. And so, what are STARKs, and how do they fit into this broader context?

Eli: So, within the variety of proof systems out there, a proof system that satisfies two important properties will be called a STARK. The first is that it is scalable, which means that as the number of transactions you’re processing goes to infinity, proving time scales with it nearly linearly, so it’s almost the same cost to generate a proof as it is to just compute the stuff, and at the same time, verifying a proof scales exponentially smaller than the amount of computation. A system that is scalable, that’s the essence of a STARK. The second is that it is transparent, which means there is no trusted setup: the only ingredient you need in order to make the system secure is a public source of randomness, or you need to assume that the universe has some entropy in it. So systems that are scalable and transparent are called STARKs. And there’s a very natural way of constructing such STARKs that leads to systems that are also post-quantum secure and have a very efficient prover and verifier in concrete terms.
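The two scaling properties Eli describes can be put into a toy numeric model. The constants here are invented; only the growth rates matter: proving work grows quasi-linearly with the batch size T, while verification grows only polylogarithmically:

```python
import math

# Toy model of STARK-style scaling: prover work ~ T log T, verifier
# work ~ (log T)^2. The constants are made up; only the shapes matter.

def prover_work(T: int) -> float:
    return T * math.log2(T)          # quasi-linear in batch size T

def verifier_work(T: int) -> float:
    return math.log2(T) ** 2         # polylogarithmic in T

for T in (1_000, 1_000_000, 1_000_000_000):
    print(f"T={T:>13,}  prover~{prover_work(T):>15,.0f}  verifier~{verifier_work(T):6.1f}")
```

Growing the batch a thousand-fold multiplies proving work by more than a thousand, but verifier work only by roughly four; that gap is what the episode later calls the exponential speedup in verification.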

Sunny: So I knew that the T in STARK meant transparent, but I actually didn’t know that the S stands for scalable, because in SNARKs the S stands for succinct. Can you explain the reasoning behind that difference? Do STARKs not have this succinctness property that’s present in SNARKs?

Eli: Yeah, well, the mathematical definition of succinct in a SNARK is one that involves a security parameter and talks about a proof that is constant size, but allowed to be polynomial in the security parameter. I don’t want to get bogged down in very technical details. One could say that the term succinct refers only to one part of scalability, which is that you want the verifier to be very efficient. But that’s not enough for scalability; you need something more, and that is that you need the proving time to scale really well. The reason this is important is that we know theoretical constructions, for instance the PCP theorem, where you can get amazingly succinct verification time, but at a horrendous cost in proving time. So succinctness is not enough for scalability; it’s necessary but not sufficient. You also need this other aspect, which is super-efficient proving time. So when we coined the definition of a STARK, we wanted to make sure that we were also capturing that aspect, which is why we think it’s better to use scalability as this two-pronged definition: both efficient proving time and efficient verification time.

Sunny: So when it comes to SNARKs, I think one of the things that’s sometimes a little bit confusing, and correct me if I’m wrong here, is that the term SNARK refers to this idea of a succinct non-interactive argument of knowledge, while at the same time the term ZK-SNARK also refers to a very specific construction. Is that true? And if so, is it the same thing for STARKs? Is it also referring to a specific construction, or is it just a general term?

Eli: The terms SNARK and ZK-SNARK, and the same with STARK and ZK-STARK, are general definitions that could cover a potentially very large variety of proof systems. But it is true that both of these terms have been associated with very specific systems. So when people talk about SNARKs, they usually mean the very specific kind of ZK-SNARK that is used by Zcash. And I guess that when people talk about STARKs, they usually refer to the flavor of systems that is based on IOPs and uses things like the FRI protocol for low-degree testing. I guess it’s inevitable that you have these general mathematical terms that then get associated with very particular proof systems. But it is what it is.

Sunny: So could you now walk us through a little bit of the trade-offs between the ZK-SNARKs that are used in Zcash, versus the STARKs that you are working on, versus some of the other families such as Bulletproofs?

Eli: Yeah, sure. So let’s talk about these three things: the SNARKs of Zcash and the STARKs that we’re building, acknowledging that there are other kinds of SNARKs and STARKs and there will be others, but let’s just associate SNARKs with the stuff of Zcash and STARKs with the stuff that we’re building, and then there’s Bulletproofs. SNARKs famously have very short proofs, around 200 bytes. Bulletproofs have longer proofs, around, let’s say, two kilobytes or so. And STARKs have longer proofs still, around 20 kilobytes. So you get one order of magnitude increase moving from SNARKs to Bulletproofs, and another moving to STARKs. In terms of verification time, SNARKs and STARKs are pretty similar; they’re very, very fast. STARKs are a little bit faster at verification: say 10 milliseconds for SNARKs versus maybe eight milliseconds or less for STARKs. Bulletproofs, less so: verification time in Bulletproofs scales linearly with the amount of computation, so Bulletproofs are not scalable according to our definition of the term. That’s verification time. Proving time is fastest in STARKs, then about one order of magnitude slower in SNARKs and Bulletproofs. And I think the most important differences are in this other dimension of future-proofing the systems: what kind of assumptions you’re using. STARKs require only the existence of some collision-resistant hash function, which implies that they’re plausibly post-quantum secure, and they require very lean cryptography. Bulletproofs require assumptions regarding the discrete log over elliptic curve groups, which is a slightly more exotic problem, but it’s been around for a couple of decades or so. And then SNARKs require things called [knowledge of exponent](https://eprint.iacr.org/2004/008.pdf) assumptions, which are even more recent and slightly more exotic. So I guess that’s a comparison along the four dimensions of proof length, proving time, verifying time, and future-proofing the system.
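The rough figures Eli cites can be collected in one place. These are the order-of-magnitude numbers from this conversation, not benchmarks:

```python
# Order-of-magnitude figures as cited in this conversation (not benchmarks).
comparison = {
    "SNARKs (Zcash)": {
        "proof_bytes": 200,
        "verify_ms": 10,
        "assumption": "knowledge of exponent",
    },
    "Bulletproofs": {
        "proof_bytes": 2_000,
        "verify_ms": None,  # scales linearly with computation size
        "assumption": "discrete log over elliptic curves",
    },
    "STARKs": {
        "proof_bytes": 20_000,
        "verify_ms": 8,
        "assumption": "collision-resistant hash",
    },
}

# Each step up is one order of magnitude in proof size:
sizes = [v["proof_bytes"] for v in comparison.values()]
print([b // a for a, b in zip(sizes, sizes[1:])])  # [10, 10]
```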

Sebastien: Okay, so just to summarize: in terms of proof size, there’s a clear difference between SNARKs, Bulletproofs, and STARKs, one order of magnitude more than the previous for each system. However, while STARKs have a much larger proof, their verification time is lower than SNARKs and Bulletproofs, so we’ll have faster verification. And then the real differentiating factor is that STARKs rely on cryptographic assumptions that have been around since the ’70s, namely collision-resistant hashes, which means that they’re quantum-resistant, very lean, and presumably future-proof. Does that sum it up correctly?

Eli: Yes, I think that’s a good summary. And there’s also a fourth axis, which is the proving time, which again is fastest with STARKs.

Sunny: And we get the quantum resistance because there’s no public-key cryptography or pairings or anything like that, right?

Eli: Quantum computers are known to be pretty good at solving problems related to hidden subgroups, factoring and discrete log and things like that. But they’re not known to be able to break hash-based cryptography. In particular, there’s a widely held belief that most hash functions will be secure against quantum computers, which is why STARKs are plausibly post-quantum secure.

Sebastien: If STARKs rely on cryptographic assumptions that have existed since the ’70s, why did it take so long for them to come into existence? Are there other things that needed to be invented before STARKs could exist, or did we just need you to figure it out?

Eli: That’s a good question. A lot of the practical cryptography in recent years has revolved around number-theoretic assumptions and elliptic curves and so on. There’s this very wide class of researchers, somewhere between theory and practice, who are very familiar with cryptography that uses elliptic curves and RSA and other things. The branch from which STARKs emerged, which is known as computational complexity, the PCP theorem and things like that, has been the playground mostly of theoreticians and mathematicians, and very few practitioners, or more practically oriented researchers, have ventured into it.

Another factor was that some of the earlier constructions of things like STARKs were not as efficient, and we needed to tighten and invent some new stuff, like the FRI protocol (Fast Reed-Solomon IOPP) and the IOP model. The IOP model is joint work with Alessandro Chiesa and Nick Spooner; the FRI protocol is joint work with Michael Riabzev, Iddo Bentov, and Yinon Horesh. And then we’ve done some further improvements to FRI that made things a bit tighter, like DEEP-FRI, which emerged most recently and is joint work with Lior Goldberg from StarkWare and two of our scientific advisors, Swastik Kopparty and Shubhangi Saraf. So there was a bit of advancement that needed to happen, some new mathematical stuff that needed to be invented. But I think it’s also this cultural thing: the class of folks that can build things like STARKs used to be a very small set of people, whereas those who are more familiar with the techniques around SNARKs or Bulletproofs or other things are a wider set of researchers.

How should we think about some of the newer stuff that’s coming out as well, especially within the IOP family, things like Aurora?

Eli: There’s a wide variety of systems that are similar to STARKs in requiring only the existence of a hash function. There’s ZKBoo, there’s Ligero, and then more recently, I believe, Fractal and Marlin, at least one of which doesn’t really require anything but a collision-resistant hash. They’re all very similar in some of their techniques, which use interactive oracle proofs and low-degree testing algorithms. They’re similar, but in particular, Aurora is not scalable, because its verifier scales linearly with the size of the input. It’s more geared towards circuits of unknown structure that the verifier must process, whereas STARKs have scalable verification, an exponential speedup. That’s the main difference between Aurora and STARKs. There are other systems out there that are also similar, like Ligero, which has square-root proof length, and its verification time scales linearly in the size of the computation, and there are others.

Sebastien: So looking at this broad set of zero-knowledge systems that we’ve described, are there specific applications that are better suited for, say, STARKs or SNARKs or Bulletproofs? How do they distinguish themselves from each other in how they’re implemented?

Eli: That’s a great question. From my point of view, I think we’re not that far from the optimum, at least with respect to STARKs, and let me explain why. There are some mathematically proven lower bounds that we’re not that far off from. For instance, take your computation’s scaling parameter; let’s use T for that parameter. As T goes to infinity, we know that verifier time must increase at least like the logarithm of T, and we know that the prover time must scale at least like T. As of right now, prover time scales almost linearly in T; there’s really just one Fast Fourier transform there, and the Fast Fourier transform costs T times the logarithm of T. Improving on the FFT is a long-standing open problem in all of math and algorithms, so I think it’s very safe to assume that it would require a very major breakthrough to do something better than T log T. That’s my belief. Then, in terms of verification time, STARKs already have log of T to a very small power. So you could reduce that power a little bit, but you’re very close to theoretical limits. And in terms of cryptographic assumptions, I’m not aware of many assumptions that are weaker than assuming the existence of a collision-resistant hash. So along almost any of the parameters you look at, there’s just very little fat that you can hope to trim in the future. That’s part of the reason we are so optimistic about the use of STARKs. But that’s a really good question: when you get close to theoretical limits, you know that you’re pretty safe, I would say.
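The limits Eli sketches can be written out in symbols, where T is the computation size and c is a small constant:

```latex
% Known lower bounds vs. what STARKs achieve (T = computation size):
\begin{aligned}
\text{prover time:}\quad   & \Omega(T) \;\le\; t_{\mathrm{prove}}(T) \;=\; O(T \log T)
  && \text{(the } T \log T \text{ comes from one FFT)} \\
\text{verifier time:}\quad & \Omega(\log T) \;\le\; t_{\mathrm{verify}}(T) \;=\; O(\log^{c} T),
  && c \text{ a small constant}
\end{aligned}
```

On both axes the achieved cost sits within a logarithmic factor (prover) or a small power of the logarithm (verifier) of the proven lower bound, which is the "very little fat to trim" in Eli's phrasing.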

Sebastien: Recently, StarkWare announced two products, or initiatives, that they were working on: StarkDEX and StarkPay. Both of these make use of what we call zk-Rollups, which we’ll get to in a little bit. Can you talk about how these two products fit into the broader mission of StarkWare?

Eli: Yes. There’s this principle of blockchains that I like to call inclusive accountability, which means that everyone, using their laptop, is invited to monitor the health of the system and verify everything that’s going on. But once you impose this principle of inclusive accountability on a financial system like Bitcoin or Ethereum, two things get compromised. The first is privacy, because everyone verifies everything. The second is scalability, because if you want to grow your system 10x, then you need everyone who wants to monitor the system to go and buy ten laptops instead of one, or increase their bandwidth 10x, which is unrealistic; and if you do that, you’ll be throwing a whole lot of folks out of monitoring the health of the system. So what you really want to do, and this is where something like STARKs comes in, is use the scalability aspect: have one entity generate the proof for ever-increasing batches, and use this magical aspect of STARKs, where verification time scales exponentially smaller than batch size, to maintain inclusive accountability. Still, everyone can check everything and make sure the system is okay, but you don’t need to replace your laptop every time the system grows 10x. Now, we started asking ourselves where we could deploy this functionality, this scalability, in the best way. We looked around a little bit, and it seemed to us that the simplest and fastest way to address a real problem seen by the world today is in the area of transacting, that’s payments and also trading, because currently, due to the low throughput of blockchains, if you want to use them either as payment systems or for trading, following the principles of inclusive accountability and trusting no one, and so on and so forth, you can’t really do that.
So you have a wide variety of players, custodial exchanges that say, ‘send your Bitcoin or Ether here, park them with us, we’ll maintain the keys, and then you’ll do all your trading on our books.’ And similar things happen with payment providers that are telling you: leave your payments with us; at the end of the month we’ll check all the books and send one big payment to the various merchants, and so on. We thought it would be really good to use STARKs to show the world how you can maintain inclusive accountability, not need to trust anyone or hand over custody of your funds or your payments at any point, and still scale the system, even within its existing parameters, without waiting for Plasma or Ethereum 2.0. On the existing Ethereum we can already batch-settle and batch-pay tens of thousands of transactions, which is two to three orders of magnitude more than Ethereum can do natively. So that’s how we got to this line of products.

Sebastien: That’s really cool, and I was really excited to hear about this at StarkWare Sessions, when you first talked about it publicly. How do you arrive at this scalability of tens of thousands of transactions per block on Ethereum, and how are you making use of STARKs to do that?

Eli: That’s a great question. So remember that the S in STARK stands for scalable, which means that as T, where T is now the number of trades that you’re settling, goes to infinity, proving time, which is done on the cloud or on some huge server, scales almost linearly with it, so you can reach a very large batch size. At the same time, verifying a batch of T trades does not scale linearly in T; it scales like the logarithm of T, which means that each time you go 10x on the number of transactions you want to settle, you’re only doing plus one on the amount of gas that you’re paying on-chain. Using this kind of math allows you to take, for instance, a batch of 32,000 trades and generate a single proof that they settled correctly, and that single proof can be verified within the gas limit of a single Ethereum block. This gives you an amortized gas cost of around 200 gas per trade settled. So you are using the scalability, the exponential speedup in verification, to exponentially reduce the gas cost of settlement.
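The arithmetic behind those numbers can be checked directly. The per-batch verification gas below is inferred from the two figures Eli quotes (32,000 trades, roughly 200 gas per trade); it is not stated directly in the episode:

```python
import math

# Amortized settlement gas: one proof covers the whole batch, so the
# on-chain verification cost is split across every trade in it.
BATCH_SIZE = 32_000
BATCH_VERIFICATION_GAS = 6_400_000   # inferred: 32,000 trades * ~200 gas

print(BATCH_VERIFICATION_GAS / BATCH_SIZE)   # 200.0 gas per trade

# "10x the batch costs only +1": if verification cost grows like log T,
# each tenfold increase in batch size adds only a constant amount of work.
def verify_cost(T: int) -> float:
    return math.log10(T)   # toy model, arbitrary units

print(verify_cost(320_000) - verify_cost(32_000))   # ~1.0
```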

Sebastien: And why did you choose to focus on scalability rather than privacy, for instance, which is what I think most people associate with zero-knowledge systems?

Eli: So, back in the day, when we thought about which of the two aspects of STARKs we should pursue first, privacy or scalability, our thinking was something along these lines. There are a lot of technologies that are pretty good for the single shielded transaction: you have the SNARKs of Zcash, you had, already back then, Bulletproofs, which, again, for a single shielded transaction work pretty well, and a bunch of other technologies. But there was this huge need for scalability solutions, there still is, and there was no real technological alternative to the efficiency of STARKs in this respect, and I think there still isn’t. This goes back to your question about how far we are from the optimal proof system. Even today, with all these newer systems, if you look head to head on huge batches of computations, STARKs still outperform all of them, and I think it’s likely to continue this way. So it was very clear to us that scalability is an area where this technology can be applied in a very unique way, and a very big need, so that’s why we addressed it first. We will add privacy, have no doubt, but I think that will come later.

Sunny: So one thing that’s happening here is that in this model you’re batching within a block, but you still only amortize what’s going on in a single block. We can compare this to things like Coda, for example, or things that make use of recursive SNARKs, where not only do you amortize the computation within a block, but you amortize the computation over the entire system. Would it be possible to recurse the STARKs? If so, you only really have to publish the proof and the data onto Ethereum, but you don’t actually have to run the verifier on Ethereum until someone actually wants to exit. So would that be possible to do with STARKs?

Eli: So whenever you have a proof system in which verification scales sublinearly with computation size, you can compose it incrementally. This notion was first described in a beautiful paper by Paul Valiant, and it’s called incrementally verifiable computation. Whenever you have a proof system in which the verifier running time scales sublinearly with the computation size, you can use it for chopping up a computation into steps, doing them one after the other, and proving that you ran a verifier that ran a verifier, and so on and so forth. So you can do that with STARKs as well, quite efficiently. Whether, for a given problem, this is your best line of attack, I’m skeptical. I think that for most applications you’re better off just using the STARK, it will be more efficient, or you might want to use limited-recursion STARKs, let’s say one level of recursion, which means you prove that you checked a bunch of proofs. Not that you checked a proof that checked a proof that checked a proof, and so on and so forth. So just to summarize: you can do a recursive STARK, but I think that practically, for most problems that you face, you are better off not using it, even though you can.

Sunny: Let’s say in the StarkDEX, users only exit their coins once every hundred blocks. Then if we use the recursive STARK, instead of having to run a verifier every single block, we only have to run the verifier once every hundred; really, we only need to prove the state to Ethereum when someone is actually trying to exit. So how many blocks, for example, would it have to be to make it worth using the recursive system?

Eli: I don’t know. But if you want to prove that you checked 100 proofs sequentially, it would still only be one level of recursion. Your statement would be: I saw a sequence of 100 proofs, one after the other. This is not 100 levels of recursion. 100 levels of recursion would be: with each block, I want to verify the verifier run by the previous block, which checked the verifier run by the previous block, and so on and so forth, ad infinitum. That’s a very different construction; its security analysis is much more tricky to do, and if you really want to do it the right way, then various parameters blow up very quickly. So even for the use case that you’re giving, which is a very practical one, you’re probably better off with just one level of recursion: every hundred blocks you have a verifier proving to you that it saw a sequence of 100 proofs and checked all of them. This is still one level of recursion. And you could have one for every hundred blocks, and then you could have a daily proof for all the blocks of the day, all the blocks of the week, and so on and so forth. So this is an example of a use case where I think you’re better off using just one level of recursion rather than the notion of infinite recursion.
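The distinction Eli draws, one level of recursion over a batch versus a chain of proofs-of-proofs, can be illustrated with a toy sketch. This is not real STARK cryptography; the `Proof` type and its depth bookkeeping are invented purely to show how recursion depth grows in each construction:

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Proof:
    claim: str
    depth: int  # recursion depth of this (toy) proof

def prove_batch(proofs: List[Proof]) -> Proof:
    """One level of recursion: check every proof in the batch,
    then emit a single proof attesting 'I checked these N proofs'."""
    max_depth = max((p.depth for p in proofs), default=0)
    return Proof(claim=f"checked {len(proofs)} proofs", depth=max_depth + 1)

def prove_chain(proofs: List[Proof]) -> Proof:
    """Deep recursion: each step proves it checked the previous
    step's proof, so depth grows with the length of the chain."""
    acc = proofs[0]
    for p in proofs[1:]:
        acc = Proof(claim=f"checked previous plus ({p.claim})",
                    depth=acc.depth + 1)
    return acc

base = [Proof(f"block {i} settled", depth=0) for i in range(100)]
print(prove_batch(base).depth)  # 1: one level, regardless of batch size
print(prove_chain(base).depth)  # 99: depth grows with the chain length
```

The batch version stays at depth one no matter how many proofs it covers, which is why its security analysis stays simple, while the chained version’s depth (and, in a real system, its parameters) grows with the number of steps.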

Sunny: Oh, I see. Okay, yeah, that makes sense. And so would it be possible to compose these things together? Let’s say it’s 150 blocks: we can take one of 100, and then one of 50, and put these together. So essentially, what I’m trying to think through here is, is there a way to offload the verification gas costs onto the users, the user who wants to exit, rather than onto the people who are submitting the proofs?

Eli: Well, the way it works right now is that the gas cost is not on the proof… oh, sorry, you’re right. The provers are working very hard to generate the proof, and the prover submitting it is paying gas to the network to check this thing. I think that if you don’t have any proof out there for a while, then you’re risking all kinds of attacks, right? How do you know that the system is actually evolving correctly until someone, a hundred blocks later, comes and says, oh, I need to take my money out? Maybe by then someone ran off with it, and no proofs were provided. So I’m not sure. I think you still need a proof pretty frequently.

Sunny: I believe from what you wrote up that, to solve for data availability, you’re still pushing all of the trade data onto Ethereum. So couldn’t this proof data also be pushed on in the same way that the rest of the data availability is done, but without actually running the verifier?

Eli: Yeah, you could do that. But again, I think you’re taking a risk. If the main network doesn’t really see and check the proof, then someone could start deviating by just not putting out proofs as they should. Now you’re going away from this notion of maintaining, at all times, a system that has integrity, to one that requires something like a watchtower or fraud proofs or something, where you just wait until you reach that checkpoint 100 blocks later.

Sebastien: This StarkDEX demonstration that you’ve built, I believe it’s live on the Ethereum mainnet, is that correct?

Eli: No, it’s not. It was just a demo that ran for some time on the Ropsten testnet.

Sebastien: Okay, what’s the barrier to running this on mainnet? And are there further optimizations that you could make in order for it to be even more scalable?

Eli: Yeah, so to run it on mainnet, first of all, you have to run a whole bunch of audits and then add a lot of functionality. For instance, it only supported the maker-taker model, and if you want to put it on mainnet, you want to add other kinds of order types: limit orders, partial fulfillment, cancellations, whatnot. It also needed to be integrated. This was a settlement engine that has to be integrated with some exchange, so you would need to integrate it with some relayer, whether over 0x or over some other protocol. There was a lot of work that still needed to be done, and could still be done if relayers come up and want to work with us. And definitely you can improve the functionality of it. That’s precisely what our team has been doing, pretty impressively, since the launch of that thing. So now we have a system with far greater functionality and scale than what appears in that alpha demo.

Sunny: Are there any improvements that could be made if there were any precompiles that you were allowed to add?

Eli: Yeah, you would lower the gas costs, that’s what would happen. But we found ways to make everything work within the existing Ethereum system without asking for any precompile. So we can work pretty efficiently even over the existing Ethereum, and we’re very proud and happy with that. That’s another aspect of the efficiency of STARKs. Compare this to what happened with SNARKs, for instance: without the precompiles, they couldn’t really run on Ethereum at all, so that’s why Ethereum went ahead and added some precompiles. But STARKs are already efficient enough without any changes to Ethereum.

Sunny: So based on those numbers that you mentioned earlier, about how many transactions per second we could get with 8 million gas, the limiting factor is no longer the gas limit for the verifier; the limiting factor is proving time for the prover.

Eli: The gas limit per block is still a limiting aspect, but we can put a lot of proofs out there. So we reached 32,000 trades that we can settle in a single block. Maybe we could push it to 64K. I should say that once Istanbul turns on with EIP-2028, which we’re very proud to have helped push forward, the only factor that will limit us is exactly what you said: the amount of compute that we can generate off-chain. Practically, I can’t even compute what the limit will be, but it will probably be in the many, many millions, if not billions, of trades.
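For context on why EIP-2028 matters here: it cut the calldata cost of non-zero bytes from 68 to 16 gas. Assuming, purely for illustration, around 32 bytes of on-chain data per trade (the real per-trade footprint may differ), a quick calculation shows how many trades’ worth of data fit in an 8-million-gas block before and after:

```python
# EIP-2028 reduced calldata cost for non-zero bytes: 68 gas -> 16 gas.
GAS_LIMIT = 8_000_000
BYTES_PER_TRADE = 32  # assumed value, for illustration only

def trades_per_block(gas_per_nonzero_byte: int) -> int:
    """How many trades' worth of calldata fit in one block's gas limit."""
    return GAS_LIMIT // (BYTES_PER_TRADE * gas_per_nonzero_byte)

print(trades_per_block(68))  # pre-Istanbul
print(trades_per_block(16))  # post EIP-2028: 68/16 = 4.25x more data
```

Under this toy assumption, the data-availability headroom per block grows by the same 68/16 ≈ 4.25x factor, which is why, post-Istanbul, off-chain proving compute rather than on-chain gas becomes the binding constraint.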

Sebastien: Let’s move on to StarkWare the company, because you have built these demonstrations, StarkDEX, and there’s another initiative that came out recently called StarkPay, which we didn’t even have a chance to talk about. But what is the goal for the company, and what problems is it trying to solve for which types of customers?

Eli: So our long-term vision is to help STARK technology become prevailing and used as infrastructure in a whole variety of blockchain uses, and then also uses outside of blockchains, just in the standard world. But we only have 32 engineering folks right now, and we need to move cautiously, one thing at a time. So our first product, and I want to emphasize something here, is not a DEX. We are not currently building a DEX; we are building scalability solutions for standard exchanges, the kind that are known as centralized exchanges. Just a week ago, our team announced at Devcon 5 that by early 2020 we will be launching the StarkExchange engine, which is not a DEX, and it will be serving DiversiFi, which again is not a DEX itself. It is an exchange that operates like Bitfinex or Ethfinex, with similar liquidity pools, but against which you trade without ever handing custody of your assets to the exchange operator. That’s the big difference.

Sunny: So, very similar to the 0x model?

Eli: Very similar in the sense that you do not transfer custody of your assets to anyone while you’re trading, but in other ways it’s very different. I think 0x is this basic protocol that is used as a layer that others are supposed to build on, whereas we are building a service that will be serving a particular customer, in this case DiversiFi, even though we would very much like to offer this service of generating proofs to other exchanges as well. So it’s a different business model and a different system that we’re building. But it is similar in allowing self-custodial trading.

Sebastien: I believe you use the term prover-as-a-service to describe part of what StarkWare does. What is prover-as-a-service, and why is that useful?

Eli: StarkWare is a for-profit company. Famously, we have not done an ICO and we do not have a coin. So we’re bound by the laws of physics, economics, and business. We have to find ways to generate profits and sustain ourselves, and can’t just be burning our money. So we need to think about business models that make sense while we’re advancing this technology and infrastructure. And the notion of prover-as-a-service is a very natural one. Just like software-as-a-service and other service models: as long as you’re using it, you’re paying for it, but you can turn it off whenever you want. So the model we’re using first is prover-as-a-service. The exchanges and various companies working with us will be, one way or another, renting or paying for these services. And it makes a lot of economic sense on both sides.

Sunny: Would you be licensing out the prover software? Or would you actually be generating the proofs yourself?

Eli: So in the prover-as-a-service model, you don’t license it out; you run the servers. For instance, we will be getting batches of settlements from DiversiFi, then generating a proof for each batch and sending it to the verifier contract on the main chain. That’s the current model. But I want to emphasize that it’s not the only one we’re considering, and definitely down the line, things like licensing or freemium, and maybe in three to five years selling hardware, or other things like that, are all viable options that we’ll be exploring.

Sebastien: So hearing this, it sounds like you will have some service running on a server, that service will be receiving transactions from an exchange, and then you’ll be generating a proof and sending it to the Ethereum chain or whatever blockchain is being used. I suppose to the uninformed ear, that would sound like you are essentially centralizing that service. Can you talk about the liveness issues this could cause? And how are you ensuring that exchanges don’t have to rely solely on you being available? How do you reduce that dependency?

Eli: So the first answer is that, just like other service providers, we actually expect and embrace and welcome competition. Just like with your cloud provider: you can’t really be censored by, let’s say, Amazon, because you can just move to Google Cloud or something like that. So over time, we’re sure that there will be other prover-as-a-service competitors out there, which is one answer. Another answer is that, until then, and even when that happens, it’s very important to allow customers and end users to be able to control and get their funds even in catastrophic events where, let’s say, StarkWare is hacked or the exchange itself is hacked. Just to emphasize: in both these cases, if you hack into StarkWare, you can’t really take the users’ funds, because they are being traded self-custodially; only the users can do that. But someone could hack StarkWare and try to shut down its service in order to prevent anything from happening. So we’ve built a bunch of emergency hatches that are automatically invoked when folks want to take their funds out of the system and it just doesn’t service them. And to support that, we will be launching a variety of data availability solutions that will ensure users have redundancy in their access to the information they need in order to extract their funds if StarkWare or the exchange is ever catastrophically hacked.

Sebastien: Cool. So is there anything you want to share that is coming soon to StarkWare? What’s on the roadmap, and where can people find you and get more information?

Eli: Yeah, so we are talking to a whole lot of exchanges and custodians and traders and services around exchanges in this area, because we want to, first of all, serve as many exchanges as are willing to use this technology. And second, we would like to ensure that traders and users have a seamless experience when they’re using our technology. We’d like it to be of use to anyone who offers services to end users, be it a custodian service, an OTC desk, a trading firm, a broker, and so on and so forth. So we’re holding a lot of discussions, and we see a lot of enthusiasm for integrating with this technology. MetaMask announced that we were the first team to use their new API to allow traders to trade on our systems using MetaMask. And we’re currently integrating our technology into Ledger, so that, again, traders can use it seamlessly. I expect we’ll be announcing a whole lot of other collaborations and integration projects so that everyone can use this thing. At the same time, we’re also talking to a lot of exchanges, big ones and small ones, and I think we’ll see a few others joining DiversiFi in this move to larger liquidity pools for trading in a self-custodial way. This is very efficient for a variety of reasons. First of all, it’s safer for traders, which is good for business. It’s also good for the exchanges: the insurance costs and security overhead are much lower. Another thing is that you can move in and out of your positions across exchanges much more seamlessly, and that’s very important for, again, streamlining and making the blockchain ecosystem a bit more like the traditional one. And lastly, there’s a lot of fragmentation of liquidity between different exchanges due to a variety of reasons: geofencing and geographic locations, regulatory stuff, and technological differences.
And we believe that our technology can enable a defragmentation of this liquidity pool, and a lot more market efficiency, which is why I think the folks we’re talking to are also very enthusiastic about this.

Sebastien: Cool. Eli, I want to thank you once again for coming on the show and for being so gracious with your time. I know you’re on vacation, so thank you again, and I look forward to having you on again in the future. Maybe not in four years, but at some point.

Eli: Thank you, Sebastien. And thank you, Sunny. This was, as usual, a very delightful experience.
