Thank you. Welcome, everyone, to the Modular Summit. This is the second time we're meeting; last year it was in Amsterdam, and a lot has changed. We have so much packed into the next two days, and this summit captures all the change that has happened in the last 12 to 15 months. I'm going to kick it off and introduce Balder from Maven 11 for the opening words. [Applause]

Is it working? Yeah, perfect. Thanks, Akram. Wow, what a venue. Amazing to be here today, and obviously grateful that you're all here after a long week in Paris. I'm very excited about the coming two days. I remember very vividly that Celestia, actually called LazyLedger back in 2019, was just a research project; fast forward to 2023 and see how far we have come. Scaling blockchains is still one of the most prominent themes in the industry today, and we're very excited to be here. Over the coming two days we're going to be listening to key speakers and key people in the industry, but first I want a big, big round of applause for both teams, on the Maven 11 side and on the Celestia side; they've been grinding for the last few months to get this together. Indeed, as Akram said, in Amsterdam last year we had over 800 attendees, and here we are again with a lot of interest in the space. As you have seen, we have named the stages after three mathematicians, friends of this industry, who laid the groundwork for erasure coding as we know it today; that is the basis of Celestia's data availability sampling, the paper written by Celestia, or actually by Mustafa. Over the coming two days there will be around 100 speakers on very interesting topics: infrastructure, MEV, block production, and everything as we know it today. I want to give it over to Akram for the next one. [Applause]

Our next guest is a large part of the reason why we are here: the modular blockchain
was incepted by him, and the theory and the implementation of this technology is at his home. As I mentioned, last year in Amsterdam a lot of what we talked about was laying out the thesis of the modular blockchain, and a lot has changed, so it makes sense to start off with a modular State of the Union. I'd like to introduce our next guest, co-founder and CEO of Celestia Labs, hacktivist Mustafa Al-Bassam. [Applause]

Does the clicker work? Hello, hello. Good to see everyone, and welcome to Modular Summit 2023. About a year ago, at Modular Summit 2022, we hosted an event in Amsterdam where we tried to tackle a problem that has plagued us in the blockchain space for over a decade: monolithic blockchains don't scale, and we've ended up in a constant, endless cycle of new monolithic L1s every single cycle that fizzle out and don't live up to their promises. So to recap, I'm going to explain what modular blockchains are and what their benefits are; then I'm going to talk a little bit about the progress of the modular stack today, some open problems, and what the destination is that we should all be aiming for.

When the Bitcoin white paper came out in 2008, it introduced a model of blockchains that stuck around for the next decade: the monolithic model, the monolithic era. A model where a blockchain couples consensus and execution; a model where every user has to execute every transaction of every other user, which we all know doesn't scale; a model that limits flexibility, because you're enshrining a specific execution environment and you can't experiment with different ones. But in 2019 I proposed LazyLedger, a very simple blockchain that only does consensus and data availability. In that model you have a very rollup-centric design, where you have a data and consensus layer that is only responsible for consensus, and then an execution layer, which could be
a rollup that posts its blocks to the data layer and inherits consensus and security from the data layer. This basically resulted in a modular blockchain ecosystem where consensus and execution are no longer coupled.

So what are the layers in the modular stack? Let's very briefly go through them to recap. The first layer is consensus; that's the layer at the bottom. Consensus provides an ordering over arbitrary messages: developers input messages or transactions into the system, and the consensus layer simply decides what the order of those messages is. Then, once those messages have been ordered, users need a way to verify that they've actually been published to the network, because a validator could execute a data withholding attack, where they publish only the metadata of the block, the block header, but not the actual data. With that attack, no one knows what the actual ordered messages are, so no one knows what the state of the chain is or is able to generate fraud proofs or progress the chain.

Interestingly, if you go back to the original Bitcoin white paper, the proposed solution to the double-spend problem was the idea of a timestamp server, and I'll read that out here: "A timestamp server works by taking a hash of a block of items to be timestamped and widely publishing the hash, such as in a newspaper or Usenet post. The timestamp proves that the data must have existed at the time, obviously, in order to get into the hash." This is basically describing the core thing that a blockchain provides: all the data is made available and timestamped. And if you have this basic primitive, a timestamping server, which is basically a consensus and data layer, then you can pretty much build anything on top of it, using any kind of execution environment.
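That timestamp-server idea can be sketched in a few lines of Python. This is a toy illustration of hash-chained commitments, not Bitcoin's actual implementation: each published digest commits to a batch of items and to the previous digest, so it fixes both the data and its ordering.

```python
import hashlib
import json

def timestamp_block(items, prev_hash):
    """Commit to a batch of items plus the previous block's digest.

    Widely publishing the returned digest (the 'newspaper') proves the
    items existed, in this order, no later than publication time.
    """
    payload = json.dumps({"prev": prev_hash, "items": items}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

genesis = timestamp_block(["alice->bob:5"], "0" * 64)
block1 = timestamp_block(["bob->carol:2"], genesis)

# Changing any item (or its position) changes every later digest,
# so the chain of hashes commits to both content and ordering.
assert timestamp_block(["alice->bob:5"], "0" * 64) == genesis
assert timestamp_block(["alice->bob:6"], "0" * 64) != genesis
```

Note that the digest alone proves nothing about availability: that is exactly the withholding attack described above, and why the data itself must also be published.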
Because data availability and consensus are basically the core primitives of a blockchain, we figured out scalable ways to scale them, using a primitive called data availability sampling. With data availability sampling you have an over 99% guarantee that almost all the data is available, while only downloading a very small portion of the data. This primitive means we no longer have to live in a world where users have to download every other user's transactions, so now you can scale blockchains more directly and in a more practical way.

Then, well, not finally, there's something after this, you have the execution layer. The execution layer sits above the data and consensus layers, and what an execution layer does is take a bunch of transactions and output a state. For example, those transactions could be payments, and the state is people's account balances. That's what a rollup, or layer 2, does: it provides an execution environment to process transactions and to create a state commitment to what people's balances are. In the modular blockchain model, the consensus and execution layers are decoupled, as I mentioned.

And then finally you have the settlement layer. A settlement layer is basically a special case of an execution layer that is used to bridge other execution layers, or rollups, together. For example, with Ethereum as a settlement layer, you have on-chain light nodes for rollups on Ethereum which act as bridges between a rollup and Ethereum: you can bridge assets between them, and the on-chain light client accepts block headers from the rollup and verifies fraud proofs or ZK proofs.
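The "over 99%" guarantee from data availability sampling comes from simple probability. Below is a back-of-the-envelope model, a deliberately simplified sketch that assumes uniform sampling with replacement and ignores the 2D erasure-coding details of the real protocol: if an attacker must withhold a large fraction of the erasure-coded chunks to make a block unrecoverable, a handful of random samples almost certainly hits a missing chunk.

```python
def detection_probability(withheld_fraction: float, samples: int) -> float:
    """Probability that at least one of `samples` uniformly random chunk
    queries (with replacement) lands on a withheld chunk."""
    return 1.0 - (1.0 - withheld_fraction) ** samples

# With erasure coding, making data unrecoverable requires withholding a
# large fraction of chunks (say, half). Then even 7 samples give a
# single light node an over 99% chance of catching the attack:
print(detection_probability(0.5, 7))   # 1 - 0.5**7 = 0.9921875
```

Many light nodes sampling independently push the collective detection probability far higher still, which is the loop described later in the talk: more light nodes means more samples, which means larger blocks can be kept safe.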
So, putting that all together, what is a modular blockchain? There's something wrong with the slides. Well, what is a modular blockchain? A modular blockchain is basically a blockchain that fully outsources at least one of the four components of a blockchain: consensus, data availability, settlement, or execution. Okay, there we go.

So what are the benefits of modularity? The first one is obviously scalability, for several reasons; I'll just give two here. The first is that, as I mentioned, users don't have to execute the transactions of every other user, because rollups have their own execution environments, which means they have their own dedicated computational resources. If you spin up a rollup, that rollup has its own computational resources, so even if another rollup gets busy or has high computational requirements, that doesn't affect every other rollup in the system. Secondly, thanks to data availability sampling, you have this loop where the more light nodes you have, the more block space you can have in a secure way: the more light clients are sampling, the more data they can collectively reconstruct, and the bigger the block size you can have, because in a system that does data availability sampling the light nodes are collectively storing and making all the data available, instead of just one or a few nodes.

The second benefit is that developers get the freedom of choice. With Ethereum you're limited to the Ethereum Virtual Machine, but over the past few years there have been a lot of new developments and advancements in more efficient and more practical execution environments for different use cases, whether that's for scale or for privacy; there are execution environments that add certain ZK opcodes, for example. It's not really practical to deploy a new layer one just to make a modification to an execution environment, but with the modular blockchain stack you can now just modify the EVM a little bit, add an
opcode, and just deploy a rollup for that, instead of having to deploy a new layer one from scratch. This is what various projects in our ecosystem have done as well. You also have different types of rollups that you can use according to your use case: sovereign rollups, settled rollups, validiums, and celestiums. Sovereign rollups specifically are an interesting kind of rollup that has gained traction over the past year; they effectively give the community of that rollup the freedom to fork it, so you basically get the freedom of a layer one but without the overhead of a layer one, without necessarily needing to create a new consensus network or token from scratch.

So let's talk about what the modular stack looks like today, because we've made a lot of progress over the past 12 months. This is what the modular stack looked like a year ago, in 2022: it was mostly theoretical. There were various projects in the stack, but it was still a very underdeveloped ecosystem. Ethereum was the only settlement layer, there were very few execution layers, and not a lot of infrastructure around it. But we've made a lot of progress in the past year, and this is what the modular ecosystem looks like today. We have various new data availability, consensus, settlement, and execution environments, but more interestingly, we have a lot of new infrastructure surrounding the modular stack: block explorers, analytics providers, and so on and so forth. We also have shared sequencing providers and decentralized sequencing designs that rollups can use to become more censorship resistant, or to have better soft-commitment finality. And then you have various rollup frameworks which make it very easy for developers to deploy their own new rollup without having to
write their own rollup from scratch: stacks like the OP Stack, Sovereign Labs' Sovereign SDK, and Rollkit. Those stacks make it very easy for people to just write their application and deploy a rollup. Then we also have rollup-as-a-service providers that use these rollup frameworks and provide a hosted service for people to deploy their rollups, so instead of having to maintain your own infrastructure, just like you can go to AWS or DigitalOcean today and deploy a virtual machine in the cloud in seconds, in the future you'll be able to deploy a rollup in seconds, with a hosted provider, with your own code. The goal here, the optimal goal from an engineering perspective, should be that deploying your decentralized application as a rollup is easier, more convenient, and more practical than deploying a new smart contract. That's basically what we've seen in web 2: if you create a new web application or deploy a new website, you usually don't use a shared hosting provider or a shared platform like WordPress or Blogspot; you deploy your own virtual machine in the cloud, because that gives you more flexibility, more scale, and more choice. And then finally we have a number of new cross-chain and MEV providers, bridging across rollups and across different frameworks. But the goal of this conference today is to get everyone to make connections, to talk, and to discuss the future of the modular ecosystem. So who knows what's in store for 2024, and what new layers, or what new types of tooling and infrastructure that we might not even have thought about today, might exist in 2024?
A year ago people weren't really talking about shared sequencers; now they're all the rage. A few quick highlights from the past few months. A few months ago the OP Stack was the first Ethereum-focused rollup framework to adopt a modular data availability API; we actually contributed that data availability API, and it made it possible to deploy OP Stack chains using Celestia as a DA layer, and other layers as well. To me this is really the meaning of modularism, not maximalism, because it's an example of different ecosystems working together and interoperating with each other in a mutually beneficial way. You also have Manta, which is deploying an OP Stack rollup based on this interface, and Caldera also launched a testnet using this OP Stack interface.

About a year ago we introduced the concept of sovereign rollups, which are still controversial in certain communities, but the idea of a sovereign rollup is that rollups don't necessarily have to be a scaling mechanism for an L1; they can also just be a new, interesting, and more efficient way to deploy a new blockchain or sovereign chain. Instead of deploying a Cosmos chain, you can just deploy a Cosmos rollup. A year ago there was no implementation of a sovereign rollup, but now we have many projects actually building and working on sovereign rollups, which is really cool to see. We have Sovereign Labs building the Sovereign SDK; they recently launched an alpha release of the Sovereign SDK, a toolkit that lets you create and deploy sovereign rollups. We have Eclipse, which is a rollup-as-a-service provider for sovereign rollups. And we also have Rollkit, which recently was able to deploy the first sovereign rollup on Bitcoin, which is really cool, because Bitcoin has historically been one of the most
maximalist communities, and when that was published it was seen as a way to foster cross-chain collaboration. We also had Dymension releasing the first IBC-enabled rollup using the EVM, which is really cool to see as well, and they also have a testnet live. There are many more highlights which I can't all list in this talk; Celestia has a rapidly expanding ecosystem. We also have here various applications, including gaming chains; after this talk, I know that Scott from Argus is talking about the World Engine, and Curio did an interesting demo recently where they ran a real-time strategy game on a modified EVM rollup on Celestia. And there are many other interesting pieces of infrastructure and applications in the stack.

So we have made a lot of progress over the past year, and we're reaching the inflection point, but there are still a lot of open problems that we need to solve to get where we need to go, to really defeat maximalism and to have a positive-sum mindset instead of a zero-sum mindset. This conference is meant to foster these conversations, to discuss these open problems and progress the stack.

One of the open problems is the UX for bridging. There's a lot of work to do to improve UX and bridging, especially in the Cosmos ecosystem; for example, users need multiple fee tokens to bridge across chains. I know there are various people working on that, like Skip; I think they did a demo recently, and you can go to ibc.fun, a website that has the demo. We also need tooling for custody and payment systems for rollups, and for accessing resources across the stack. For example, you might have a rollup that needs to pay the DA layer or settlement layer, and there needs to be a way to hold
these different tokens, or do token exchanges, in an easy way for developers, without them having to maintain too much wallet infrastructure, pricing mechanisms, and so on and so forth.

There's also, and this is a good problem to have, a lot of choice for developers, and it can be very hard for developers to understand the trade-offs between different execution environments, different settlement layers, and different DA layers. So I think we need to do a better job of educating developers and explaining the trade-offs between different components in the stack.

We also have a lot of dependencies across the stack. A DA layer has to connect to an execution environment, and so on and so forth, and there aren't really any common interfaces. For example, the OP Stack has a specific DA interface, different rollups have their own specific data interfaces, and Tendermint has the ABCI interface that interacts with Cosmos. These dependencies can be very hard to maintain if there's a breaking change in any of them, so I think we should have some discussion around whether there's a way to create common interfaces, or to have better dependency management across the stack, so things are less likely to break when improvements are made.

We need better proving systems, or more work on proving systems. Fraud proving systems are still underdeveloped; there isn't a single permissionless deployment of a fraud-proof rollup, except for Fuel v1, obviously. And ZK proving systems are still slow; there are still a lot of optimizations that need to be made to make them faster, and I know there's a lot of work on hardware acceleration and FPGAs to make ZK proving systems faster. And also privacy: one of the reasons why current blockchains do not have privacy is because
you often need to enshrine it into the execution environment, but now we have an opportunity to do that, because instead of having to deploy a new layer one just to deploy a new execution environment, people can now experiment with privacy-enabled execution environments. Speakers are going to be talking about some of these topics later today and tomorrow.

So what are we trying to achieve? What is the destination we're trying to get to? Let's discuss some of the values of modularism and what we're trying to achieve with the modular blockchain.

First of all, users should be first-class citizens of the network. This is an ideal in crypto and Web3 that seems to have been forgotten over the past 10 years. The whole point of blockchains, and the whole point of Bitcoin, is that you don't have to trust middlemen, and that includes validators and miners. You shouldn't have to trust middlemen or centralized RPC endpoints and APIs, because that's just web 2 all over again: if the main way that users interact with Web3 is through centralized APIs, that's not fundamentally different from web 2, just interacting with a database. One of the things I appreciate about Bitcoin is that it has very good light client support: you can actually install a light client on your phone, and it connects directly to the Bitcoin network and can get data out of the Bitcoin network without using any centralized API endpoints. I think we need to go back to this ideal, and that's why data-availability-sampling light clients are important: to allow users to get back to the roots of Web3, and to allow users not to have to rely on centralized middlemen and endpoints, which are prone to censorship and corruption.

Secondly, modularism and non-maximalism is one of the obvious important ideals of
modularism, and this is pretty much what this whole conference is about. The reason this is so important is that over the past decade we've been stuck in an endless cycle of new layer 1 chains every single bull run. You had Ethereum in 2014; then in 2017 you had EOS, Tron, and Cardano, and they promised the world; in 2021 we had Solana and Avalanche; and this time we have Aptos and Sui. But this is not sustainable, because it just creates a cycle of new tribes and new ecosystems that are not collaborating with each other. It's a very zero-sum mindset, and it needs to be replaced with a positive-sum mindset, where incremental improvements can impact everyone that uses crypto. We can replace the zero-sum mindset with a positive-sum mindset by adopting a modular ecosystem, a modular stack, where, for example, if people make a more efficient execution environment, like Aptos, you can just replace that layer in the stack, replace the execution environment, without having to deploy a new layer one. It's simply not sustainable to have a constant graveyard of new layer ones that suck up a lot of funding, extract value, and eventually fail to get traction. Crypto is never going to mature with this cycle, and it's really important that we escape this endless cycle as soon as possible, to have a more positive-sum crypto ecosystem that actually develops into worldwide mainstream developer adoption.

And finally, one of the important aspects of modularism is that communities have the choice to be sovereign if they want to. They don't have to, but if they want to, they can. Sovereignty is basically the freedom to fork, because one of the fundamental things that crypto and blockchains allow, that previous systems
haven't allowed, is the ability for a group of people with a specific shared goal to thrive through self-organization and collective action, by effectively creating a contract with each other that, for the first time, does not need to be enforced by physical law, by police or courts, but can be enforced cryptographically on a peer-to-peer network. Whereas previously, if you wanted to create a shared agreement, you had to do so under a specific jurisdiction, with blockchains you can bypass all of that and have a direct top-level social contract, and that top-level social contract gives people the freedom to fork if the community decides that they want to change the protocol rules.

So, to recap the three values of blockchain modularism: users should be first-class citizens of the network, by focusing on light nodes and allowing people to run light nodes; secondly, modularism and non-maximalism, because it's really important that we escape the monolithic layer-one loop, otherwise crypto will never grow up; and finally, communities can choose to be sovereign, because they have the right to fork if they want to. I really hope that you enjoy the conference and that a lot of interesting conversations happen. I'll be around, and many of the Celestia team and other teams in the ecosystem will be around. Thank you. [Applause]

There you have it, folks: the modular State of the Union by Mustafa. In his presentation you may have seen the ecosystem map, and in the modular stack one of the hardest and most innovative areas is gaming. Our next guest is spearheading that, and I'd like to introduce Scott Sunarto, who will be speaking about World Engine: horizontally scaling rollups with shards. [Applause]

Cool. Hello everyone, and thanks for coming today to Modular Summit. Today I want to talk about a little bit of something that
we've spent the past almost three quarters of a year working on, but the story, as you'll see, is much longer than it currently seems. Cool, the clicker works. All right, so, a quick introduction about myself. Before I started Argus, I was one of the co-creators of Dark Forest. Dark Forest, for those of you who might not know it, is the first fully on-chain game on Ethereum, built using zk-SNARKs. It started with a simple question: what if we create a game where every single game action is an on-chain transaction? Back then, in 2020, this was an absurd thesis. A lot of people wondered why you would make a fully on-chain game, and how this was even possible with blockchains being as slow as they are. But regardless, driven by our curiosity, driven by our culture of just walking around and finding out, we decided to build Dark Forest. This is what Dark Forest looked like back in 2019-2020: it's basically a space exploration game where thousands of players were fighting on chain to expand their empires. Within the first week of launch we had more than 10,000 players, or wallets, and trillions of gas spent on the Ethereum testnets, attracting a large number of players. Eventually we had to move on from testnets, because other developers who wanted to test their applications weren't able to, and we moved to a sidechain. And even though a sidechain is supposed to be more scalable, it turns out it's not as scalable as we thought: we quickly filled up the entire block space, driving up gas costs and practically making the sidechain unusable, with only a single application.

So now the question begs: with all of those limitations, why are people still so excited about on-chain games? Well, after Dark Forest, we've seen many people, investors, founders, builders, and hackers alike, building on top of the legacy of Dark Forest, with things like Lattice building MUD, a
library and framework that makes building on-chain games easier, content companies like Primodium building fully on-chain games, and also things in other ecosystems beyond the EVM, like Dojo on Starknet. The key thing here is that we realized the limitations we have with existing blockchains come down to the fact that we are sharing a chain with everyone else. We are sharing this very small room, where there's not a lot of space, with other applications that also want to use it. If you look at the chart there, you can quickly realize that if another game like Dark Forest lived on that same chain, there could not possibly be a functioning chain.

So the question is: what now? Do we just give up? Do we just throw away the concept of on-chain games? We decided no; we want to actually explore how we can build better on-chain games, and we want to make sure that the next game we build is going to be scalable. So we embarked on our journey, starting with one big key realization: we have been taking blockchain architecture for granted. We have all these L1s and L2s, and yet they all look the same. They all tell you very, very similar things. They'll tell you that their consensus is better, that they have better trade-offs than the other blockchains. They'll tell you that their VM is faster, that their VM is parallelizable. Or, if they're an L2, they'll tell you that their fraud prover is better, or that their competitor's fraud prover doesn't exist, or that the other rollup is fake news. Some people will claim that their proving system is better, that their ZK provers are faster, although we never see the benchmarks. But all this war, to what end? To create yet another DEX that you can deploy on any other chain, because it doesn't matter what VM it is, it all just looks the same? Or to mint yet another NFT that,
again, you can deploy on any other chain? We decided to take a step back and think differently, to see blockchain architecture through a different lens. We asked the question: what if we escape the classical blockchain architecture? Every blockchain that we've seen until now looks similar to Bitcoin, looks similar to Ethereum, and behaves the same way. You have the concept of gas, you have the concept of transactions: people sign transactions, submit them to the blockchain, and that causes a state transition. It all looks identical. Another key thing we realized is that other blockchains, let it be L1 or L2, are all trying to build a blockchain for everyone, and I'm not saying this in a wholesome way; it's more the fact that they don't really take into consideration a specific use case or a specific user persona, and instead they just try to build something that everyone will assume is compatible with their product in mind. We took a different direction: we chose to build the best blockchain for a very, very specific user in mind, in our case players and game developers.

So again we asked this question, and it comes down to really understanding how games are vastly different from your typical applications. For example, an application that you might use a lot is social media, like Twitter, and on the other hand you have games like Minecraft. Here we can see that Twitter acts in a very straightforward way: you write a tweet, you click a button, and your tweet is posted. One thing to note here is that every time you click a button, there is a user input event, and this is what triggers the state transition. This is what I typically like to refer to as an
event-driven runtime. As you can see, this is very similar to how your typical blockchain looks: a user wants to trigger an event, they send a transaction, and the transaction gets executed. But games, on the other hand, don't behave the same way as a web application. Even without user input, even if you're AFK, even if you just walk away, state transitions still happen: fire will continue to cause damage, water will continue to flow, wheat will continue to grow, and day and night will continue to cycle. This is what we like to call the loop-driven runtime. The key thing here is that no user input is needed to cause a state transition. Drawing the comparison back, you'll notice that the web app is again very similar to smart contracts: say, on Uniswap, a user who wants to trade token A for token B submits a transaction, and that trade is executed; again, an event-driven runtime. We realized very quickly that this event-driven nature of classical blockchains is just not compatible with running a game state machine, and so we explored more deeply the loop-driven runtime that games have. Every game engine is specifically built to support this loop-driven runtime. The key thing with loop-driven runtimes is that game progression, how time is organized within the game, is measured in ticks, the atomic unit of time. Each game loop is executed in a single tick, and the higher the tick rate, the more responsive the game feels. If you play a game like Counter-Strike or Valorant, you'll notice these games have high tick rates on modern game servers, while older games typically have lower tick rates and often feel sluggish and unresponsive. In blockchain, you can treat
these ticks as something analogous to a block: it's basically a single unit of time where state transitions happen. If a tick or a block arrives late, you see and feel lag in the game, and if you've built games before, or even just played them, you know how much it sucks when your game lags and you get killed; you rage at the enemy or at the game developers. It's not a pleasant experience. So today we believe that games are loop-driven in nature, because a lot of game state transitions are not triggered by external input. For example, gravity doesn't rely on the user pressing the W key to move forward; gravity will continue to exist regardless of user input. There's also the case of deterministic transaction ordering. Say you want to inflict damage on a player: should the game apply regeneration to the player's HP first, or should it inflict the damage first? With the traditional ordering by random builders of a typical execution layer like the EVM, you can't predict or deterministically control which state transitions get applied first, and as a result you have a non-deterministic transaction ordering problem that causes reliability issues in the game loop itself. On top of that, you can also allow for more aggressive parallelization by using data-oriented system design in these loop-driven runtimes. Last but not least, while some people might talk about retrofitting an event-driven runtime like the EVM into a loop-driven runtime, we've seen that this leads to a lot of issues, because of the nature of how you do gas metering and how you do accounts; it's not as simple as just calling a loop or calling a single smart contract function again and again at every block. And so if you have a layer two or roll-up with a
loop-driven blockchain for a game, what unlocks do we get? The first thing is that we maintain composability: making the blockchain loop-driven instead of event-driven doesn't mean you have to sacrifice composability, and that's actually the reason we want to use a blockchain as a runtime for these games in the first place. On top of that, with a loop-driven game runtime you can have real-time gameplay, where you can start to blur the line between a blockchain and a traditional game server, eliminating the concerns of building a game on top of these roll-ups. With a loop-driven runtime you can also build more complex games than you could before on a blockchain. There's a reason most quote-unquote games you see on blockchains are mostly just people minting NFTs: that's really the easiest thing you can do when you only have event-driven runtimes. And last but not least, the more you can emulate a traditional game engine runtime, the more the user experience feels like playing any other game. But with all those good things in mind, we're still missing one key ingredient to build a scalable game server blockchain, and that's horizontal scalability. When you're playing a game, you're not only playing it on a single server. If you play MMOs, MMOs are comprised of many, many servers. If you're playing Counter-Strike, those games are spread out across different sessions that run across different computers. But at the end of the day, a roll-up runs on a computer, so it's bound to the physical limitations of computation itself, whereas games, and most large-scale applications, scale using multiple computers. But if that's the case, why don't we just spin up another roll-up? Well, if we take a naive
approach of just spinning up another roll-up, we can end up with composability fragmentation: smart contracts would stop talking to each other. And while there are various constructions of shared sequencers, a lot of them are less than ideal for gaming use cases. For instance, you might have to depend on cryptoeconomic security to prevent things like locks and DoS vectors, and an atomic shared sequencer construction can also impose constraints on the execution layer. As a result, we need a new strategy for cross-shard control of transactions, and in the search for that we got a gift from the past. Again, we looked into how traditional game servers, especially those with intensive performance expectations like massively multiplayer online games (World of Warcraft, RuneScape, Ultima Online, and so on), scale through the concept of sharding. Now, some of you might know sharding from databases like MongoDB, but there's a theory that the concept of sharding actually comes from game servers first rather than from databases. So how does sharding work in games? The key thing is that there's no one-size-fits-all solution; at the end of the day, shards are just tools in a toolbox, not a prescription for how you should build your game. For example, in the first sharding construction you can use location-based sharding, where you split, say, a Cartesian coordinate plane into four quadrants, and when a player crosses from one shard to another, you simply send a message to the other shard and the player is teleported there. The second construction is something we call multiple sharding: if you've played MMO games before, you might have seen that when you log in you have multiple servers you can choose from. This is the same construction, where you have distinct states, or distinct game worlds, that
the players can decide to join. So now we have all the ingredients: we have the loop-driven runtime, we have horizontal scalability, and we also want that awesome composability. All of this sounds great, but how do we achieve it in a roll-up? On the surface this seemingly looks beyond what we could get from a blockchain, but this is why we created World Engine. We realized we can't just use a normal roll-up and expect it to run the way we want, so we took it into our own hands to build the solution we need, the same way that back in the 1990s, when you wanted to build a 3D game and there were no 3D game engines available, you had to build one yourself. The World Engine is divided into two key parts. The first is the core, which is comprised of two key elements: the EVM base shard, which is a hybrid execution layer and sequencer with sharding support, and the game shard, which is a high-performance game engine plus execution layer. On top of that you have peripheral components like the transaction relay and netcode for client-server communication, and things like a ZK cloud prover so you can build ZK games like Dark Forest. The World Engine core really comes down to our very specific design of our sequencer. While other shared sequencer constructions optimize for atomic composability, we decided that atomic composability is extremely overrated, especially when you're operating in the context of games, and that's why we went fully asynchronous, so we don't have to put locks in the runtime. In the EVM base shard we have a global EVM chain where players can deploy smart contracts to compose with games and create marketplaces and DEXes. We built this on top of Polaris, which is a Cosmos SDK-compatible EVM module that allows us to customize the EVM to a much greater extent than we could by,
let's say, just forking geth. Under that we have the game shard, which runs on top of the EVM base shard sequencer: a high tick rate mini-blockchain designed to serve as a high-performance game server. The game shard is also designed to be state machine and VM agnostic. We built an abstraction layer, much like Cosmos SDK's ABCI, so that you can customize the shard to your liking or build your own by implementing a standard set of interfaces. We've also built the first game shard implementation to provide an example, and it uses an entity component system, as commonly used by game engines, in a construction that treats ECS as a first-class citizen: every single object or primitive on the state machine itself is treated as an entity, so accounts are part of the ECS, and transactions are part of the ECS system. It also has a configurable tick rate, so you can make your game tick as fast as possible, or slow it down to prioritize a larger number of entities. The best part is that you don't need to rely on indexers: you can have fast reads on a blockchain without the lag of eventual consistency in indexers, like what we currently have with MUD, Dojo, and so on. And my favorite part of all is that you can write your code in Go, so you don't have to wrestle with smart contract languages that can be very limiting sometimes. The key thing, again, is that shards are agnostic in nature to our abstraction layer, so you can build other shard constructions, like a Solidity game shard to complement your Cardinal game shard. You can also build an NFT minting shard with custom rules, say a custom mempool and ordering construction, to solve the NFT minting noisy neighbor problem. You can also create a game identity shard that uses NFTs to represent your game identity, allowing you to
trade your game identity as well. And again, we don't use locks, so we don't have to block the main thread, making the game shard runtime as reliable as possible without causing any lag, and we don't have to rely on cryptoeconomic constructions anymore. On top of that, we also have many interesting shard properties, like how each shard can have a different DA batching and compression strategy, and you can also geolocalize shards to reduce gameplay latency. And last but not least, you can run game shards as independent game servers on their own, so you don't have to worry about roll-up deployments on day zero. We've built various games on top of the game shards; we've built an agar.io-style game, which has traditionally not been possible, using our full World Engine stack: sequencer, game shard, Nakama, and so on and so forth. We've also worked with a hybrid model where you use existing game engine frameworks in Solidity and combine that with World Engine. And the future is for you to decide: you can use our Cardinal stack, you can go hybrid, you can build your own game shard. It's basically Kubernetes for on-chain games; it's mix-and-match Lego for your games. You're now able to try out World Engine: it's open source on our GitHub, and we're welcoming new contributors, so if you're interested, feel free to reach out afterwards. And if you're interested in building your first World Engine game, we're having a workshop later today at 11:30 a.m. at the Galois stage, and tomorrow we're also hosting the gaming track, where we have a panel and a talk about on-chain games at the Fourier stage. And last but not least, the key takeaway from my talk would be: let's build cooler roll-ups. We're right now in the roll-up renaissance, and what we already know is that roll-ups allow you to scale blockchains, and of course roll-ups allow you to tap into the security of the underlying L1, but right now we're still
living with this very EVM-centric conception of roll-up architecture. This is the starting line, not the end. What we want to move towards is a user- and application-centric roll-up construction where people can build cool things. And yeah, that wraps my talk, and thank you for listening [Applause] Thank you, Scott. The next topic covers one of the bleeding edges of crypto: intents. What are they? Why do they matter? Our next guest is Christopher Goes from Anoma, who will talk about intent-centric roll-ups. Thank you. Let's see if I can answer any of those questions. All right, so if you read the schedule, I think I initially started with a slightly different title, something like privacy-preserving runtime roll-ups or efficient DA sampling, but I've sometimes been told that I use too many words, so I tried to simplify it and now it's just intent-centric roll-ups. I'm Christopher, thanks for coming. Wow, this is a beautiful venue. You may know me as the co-founder of the Anoma project; I also worked on IBC before, but really I'm just an armchair philosopher in denial. This talk started as a bit of a meditation of mine on what exactly a roll-up is. As someone maybe adjacent to, but not always part of, the modular ecosystem, this has not been entirely clear to me. So on Monday I went to the celestia.org glossary, which seemed like a good canonical place to look this up, and found this definition, quote: a roll-up is a type of blockchain that offloads some work to a layer one like Celestia. Good marketing. Roll-ups host applications and process user transactions; once those transactions get processed, they are then published to layer one. It's layer one's job to order those transactions and check that they are available, at minimum. It's a good definition, but there are three words here that I think we might come back to later in this talk, and those three words are: type of blockchain. So I ask you now just to meditate on what exactly a type of blockchain is.
But for now, let's move on to a term that is even more criminally underdefined, which is intent. So what is an intent? Anoma has been using this word for a while; we've been perhaps criminally vague about what exactly it means. We wrote it down in some ways, but it recently became popular, and honestly I think that has very little to do with us, because a lot of people are using it in ways that are not exactly the ways that we meant, but they seem to be sort of correlated. So here are a few takes on intents. There's Uma from Succinct talking about intents for cross-chain bridging UX, very important, at Research Day in New York. Here's Penumbra talking about intents for user wallets: intents for users thinking about how to declare what they want a transaction to do or not do. Then of course we have the radical takes: intents turn your frown upside down, intents get your kids to call you back, intents are a girl's best friend. Okay, okay, so maybe let's back off a little. Language is like the OG decentralized coordination scheme, and we don't really get to pick individually what exactly terms mean. To me the interesting question is: when you zoom out and look at how people are using words, there's some commonality here. There's a reason the word intent became popular so quickly, so fast, such that people can use it with each other and they all seem to roughly understand what they mean. And I think what they mean, to put it in slightly more mathematical terms, is that an intent is a commitment to a preference function over the state space of a given system. So as opposed to a transaction, which specifies a specific imperative execution path (do A, then B, then C), an intent says: I want an execution path that satisfies these constraints; I have these preferences over what gets executed in the state space of a given system. What varies is the system. So when Uma is
talking about cross-chain bridging, the system is the chains that you're interested in bridging between; when Anoma is talking about who knows what, as we'll get into a little later, the system is the information flow. So the bounds of the system that the intent refers to can change, but in all cases intents are these sort of credible commitments to preference functions. I want to zoom out a little and just analyze this: on one side, for the modular ecosystem, there's this concept of roll-ups, and everything is kind of organized around roll-ups; and from our land we come with the concept of intents, and everything is organized around intents. I think these are two interesting concepts because they come at the problem from different directions. To me, roll-ups are kind of bottom-up: you start with the modular thesis, you start with data availability as the base layer, and then you build roll-ups on top of that, and then we see a proliferation of different execution environments, different approaches to sharding (as was covered in the last talk), stuff like this. So roll-ups come at things bottom-up. To me, at least, intents come at things quote-unquote top-down, or users-down: users have intents, they're always going to start with intents, and the system had better figure out how to do something reasonable. In some senses, as system architects, we actually don't get to choose: users are going to have their intentions, their intents, when using the system, and we just have to try to build something that can satisfy those as credibly and fairly as possible. So in particular, I want to ask the question: can intents help out the modular ecosystem? Mustafa in his talk earlier brought up a bunch of challenges at the end, kind of open problems, and I have a slightly different list, but I think it shares several components. So, three
challenges for modular, as I see it at the moment, just as an architectural paradigm, are these: inefficient sharding, application lock-in, and user-facing complexity. And by challenges I don't mean flaws, I want to be clear; I just mean that as the ecosystem is moving at the moment, if we try to foresee what might happen and avoid some potential problems while we still have a chance to steer things, here are a few things I think we should be cognizant of. I bring them up here because I think maybe intents can help, but let's go over what they are first. Challenge number one is inefficient sharding. The most expensive thing in the world of distributed systems is atomicity, because atomicity requires that you send messages to one place and order them there, and that always implies at least some kind of N-squared communication, and it means you have to process things in one location: the typical problem of sequential behavior in concurrent systems. Users, of course, want everything. In particular, they seem to want cross-roll-up transactions, application interactions, token transfers, stuff like this. Users are not going to think, oh, I should put all my state on roll-up one so that it's most efficient; no, no, users are going to say: I have an asset here, I want the asset there, do something, make it work. And it's our job to try to make it work, and in making it work we want to give as much freedom as possible, properly constrained, to the operators of the system to make it more efficient. In particular, if we envision a world of roll-ups where all the roll-ups have different applications, and the applications are bound to specific roll-ups, and users want to do lots of cross-application interactions, then we've added this weird constraint where there's a sort of demand for atomicity,
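The cost of atomicity the talk describes can be made concrete with a toy Go sketch (hypothetical names and types, not any real sequencer API): a single total-ordering sequencer pulls every shard's transactions through one sequential point, while per-shard ordering lets shards proceed independently, at the price of losing a global order across shards.

```go
package main

import "fmt"

// Tx is a toy transaction tagged with the shard whose state it touches.
type Tx struct {
	Shard int
	Op    string
}

// totalOrder models an atomic shared sequencer: one party sees and
// orders everything, so cross-shard atomicity is trivial, but all
// shards contend on this single sequential bottleneck.
func totalOrder(txs []Tx) []Tx {
	ordered := make([]Tx, len(txs))
	copy(ordered, txs) // arrival order at the single sequencer
	return ordered
}

// perShardOrder models independent shards: each shard orders only its
// own transactions and can proceed in parallel, but there is no longer
// any ordering across shards, so atomic cross-shard bundles stop
// being free.
func perShardOrder(txs []Tx, shards int) [][]Tx {
	out := make([][]Tx, shards)
	for _, tx := range txs {
		out[tx.Shard] = append(out[tx.Shard], tx)
	}
	return out
}

func main() {
	txs := []Tx{{0, "a"}, {1, "b"}, {0, "c"}, {1, "d"}}
	fmt.Println("single sequencer:", totalOrder(txs))
	fmt.Println("independent shards:", perShardOrder(txs, 2))
}
```

The tension in the talk is exactly the gap between these two functions: users demand the first (atomic cross-shard interactions), operators can scale the second, and a static assignment of applications to shards fixes the tradeoff in advance.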
right: users want to interact with application A on roll-up one and application B on roll-up two, and they want that interaction to happen atomically; there's a bunch of shared state. And if we tie applications to roll-ups and have separate sequencers, then we get a sort of static sharding system where we can't change the topology of which roll-ups are settled atomically. It's static, and if we view the demand for atomicity as varying over time, perhaps dynamically, this seems to me inefficient; we've added this constraint. So that's challenge number one. Challenge number two is application lock-in. One of the great things about the modular stack is that you can build heterogeneous execution layers very cheaply, because it doesn't require deploying a whole layer one. But one challenge with heterogeneous execution layers is that they make applications less portable. The EVM, for example, is an interesting VM; of course, it changes quite slowly. One thing that I, very selfishly, would often like is for the EVM to add new precompiles for new curves so we can do more efficient cryptography, and it would be easy to launch a roll-up that forks the EVM and adds a new precompile for a curve. But one disadvantage, if you look at this from the perspective of the whole application ecosystem, is that if other roll-ups don't also adopt this new opcode, the app is kind of locked in: if the sequencer of that roll-up starts charging higher fees, if users can't easily switch, if a bunch of application state gets tied to this very specific, different execution system, the application can get locked in, and this means apps may be paying, really, for more atomicity than is strictly necessary. And the third challenge I see is user-facing complexity. Modular component selection, and certainly modular
component construction, adds a lot of clarity to the design process of blockchains; it allows different teams to work on different parts, which I think is very helpful from a coordination perspective. But especially if these different parts are operated by different sequencers and different validator sets, it tends to entail some complex security assumptions. Think about this from the perspective of a user, and what the user has to reason about in order to know whether their interaction is safe: if there are different parties doing solving, execution, data availability, all of these components of the modular stack, that's a lot for the user to think about. Different interactions require different safety levels, and every time they send an intent or a transaction, the user is not going to reason through all of the cryptoeconomic calculus of whether this thing is in fact safe, given all of these specifics. So I think user-facing complexity can be a challenge, one that requires that we come up with good standards for describing what these security assumptions are. We also want to always maintain, as Mustafa mentioned, sovereignty. To me, sovereignty also includes the ability for users to easily switch if something goes wrong. Communities own the system; they give it value by bringing their applications, bringing their intents to the blockchain, and they should have the ability to switch away, and in particular to credibly threaten to switch, so that they don't actually have to do it, because that's cheaper. So I'm going to postulate a kind of thought experiment here and see if it might help with some of these challenges. That thought experiment is this: at the moment, applications are, as I understand it (and I could be slightly wrong), kind of defined on top of roll-ups in the modular stack.
There's a data availability layer, some execution, there are particular roll-ups, those roll-ups have state formats, they have instruction sets, they have VMs, stuff like this, and then applications are defined on top of the roll-ups. I propose a different way of defining applications, which is to define applications as intents. So in Anoma, an intent is opinionated about some things and unopinionated about other things. In particular, intents specify which parts of state they must modify atomically. If we think about the whole system as having a sharded state, where the state is sharded by concurrency domains and different security domains, intents specify explicitly which parts of state they must modify. The state can be held on different domains; if you require atomic settlement between two completely different validator sets, that's not possible, but you can specify in the intents which things you need to be atomic, and the custodians, the validators or sequencers in charge of that state, must sign. And I think one way of understanding the relation of this to roll-ups is a concept we call partial solving. Partial solving works like this: say you have some intents; let's describe the intents abstractly. Say one intent, the one on the top left here, is Alice's, and Alice wants to trade a star for a dolphin. We have another intent, Bob's: Bob wants to trade a dolphin for a tree. Then we have a third intent, Charlie's: Charlie wants to trade a tree for a star. Okay. We have something we call solving; solving basically means matching intents, and solving can be done fully, where you take a bunch of intents, match them completely, and get a fully balanced transaction, or it can be done partially. This particular diagram gives an example of partial solving. In this example we take Alice's intent and Bob's intent, and Bob already
has something Alice wants: Bob already has a dolphin. So we take Bob's dolphin and we send it to Alice, and then we craft a new intent that now requires that we get a tree and give a star. So we can do this kind of partial solving by combining two intents, doing some simplification, and creating a new intent that we then send elsewhere to do some more solving later. If you think about it abstractly: if we have an A-for-B intent and a B-for-C intent, partial solving just takes those two intents, combines them, and makes an A-for-C intent. Then in this particular example, in the second stage of solving, we have this partially solved intent and we match it with Charlie's intent, and we get a fully balanced transaction where everyone's assets get swapped in a three-party barter. So what is partial solving? I postulate that partial solving is a roll-up. If we think about "type of blockchain", what does that mean? I think there's some hash linking involved, there's some history, we need to be able to verify things later, and partial solving satisfies all of those properties; it's just kind of on demand. We look at the intents, we do some partial state change, sending, in this case, Bob's dolphin to Alice (not Alice's, sorry); we have some state changes that we still need to do; we commit to those; and, in Anoma's case, we can perhaps make them private, we can even roll them up in a ZK proof, so we have zero knowledge and computational compression. That's like a ZK roll-up. And then we send it onwards. Now, what is not fixed in this particular design is what has to happen afterwards: we just take these two intents, we see that we can do some simplification, we make a roll-up (this is what we've sometimes been calling runtime roll-ups or on-demand roll-ups), and then we can send that partially solved, partially rolled-up intent onwards.
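The star/dolphin/tree example can be sketched in a few lines of Go (a toy model for illustration, not Anoma's actual intent format): intents are reduced to "give X, want Y" pairs, partial solving chains two of them into one, and a balance check marks the point where a settleable transaction emerges.

```go
package main

import "fmt"

// An intent, in this toy model, is simply "give X, want Y".
type Intent struct{ Give, Want string }

// partialSolve combines two intents when the second already offers what
// the first wants: (give A, want B) + (give B, want C) => (give A, want C).
// ok is false when the intents don't chain.
func partialSolve(a, b Intent) (Intent, bool) {
	if a.Want != b.Give {
		return Intent{}, false
	}
	return Intent{Give: a.Give, Want: b.Want}, true
}

// balanced reports whether an intent has collapsed into a full cycle:
// giving and wanting the same thing nets out, so the combined state
// changes form a fully balanced, settleable transaction.
func balanced(i Intent) bool { return i.Give == i.Want }

func main() {
	alice := Intent{Give: "star", Want: "dolphin"}
	bob := Intent{Give: "dolphin", Want: "tree"}
	charlie := Intent{Give: "tree", Want: "star"}

	ab, _ := partialSolve(alice, bob)   // first stage: give star, want tree
	abc, _ := partialSolve(ab, charlie) // second stage: give star, want star
	fmt.Println(ab, balanced(abc))      // prints {star tree} true
}
```

Each call to partialSolve is one of the ephemeral "on-demand roll-ups" of the talk: a committed partial state change plus a residual intent sent onwards for more solving.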
We can do some more rolling up, and as soon as it's fully balanced, it's like a transaction and can be settled somewhere. So in an intent-centric view of roll-ups, partial solving and roll-up creation are the same thing, and maybe the difference with some of the current modular stack is just that the roll-ups are created on demand. I think this has some advantages. It allows for this kind of global compositionality, determining the actual topology of sharding at runtime, just as you're processing intents around the network, and it preserves local liveness. It does require standardizing on a state format; this is perhaps controversial, but I think we can do it in a way that doesn't really constrain choices. One nice thing about the way intents work is that intents specify verification conditions, not the execution method, which means you could have different instruction sets: you could preserve heterogeneity of execution systems as long as you have the ability to verify. Think about it like this: if everything is a ZK roll-up, then intents specify conditions for verifying the other guy's ZK roll-up; if the other guy uses some other opcodes internally, you don't care, as long as the condition is eventually satisfied. So in some sense it's a standard which allows you to agree on as little as possible, which is always good in distributed systems. Intent-centric roll-ups enable dynamic sharding, choosing shards at runtime: intents can specify which consensus providers they're okay with, they can specify more options, so you don't need to fix sending your transaction to one specific roll-up; you can say, okay, I want the cheapest settlement subject to these conditions, and here are the security assumptions I'm okay with. This enables the network to dynamically sort into independent atomic bundles, so it should end up being cheaper for users. Defining
applications by intent formats, if done well and standardized, should help a lot with application portability, because applications are then not tied to a specific roll-up; they can move freely across roll-ups, and maybe heterogeneous instruction sets become more like specialized solving algorithms for different domains. The same application can shard its state according to what users want, so you can move code and data across chains. And application portability in particular gives you, as a community or as an application user, a credible threat to fork out extractive operators, because it's easy to move your application code and logic somewhere else; everything is standardized to a sufficient degree. Then if someone is extracting a lot of MEV, if they're charging high fees to withdraw your assets over bridges or something like this, you have a credible threat that you can just leave, and I think you need this in order to constrain operator extraction in these systems. Then, specifically in Anoma, we've been spending a lot of time trying to craft a good framework for describing declaratively what information flow users want to allow in intents. This looks basically like declarative constraints: intents can say that, in conjunction with this atomic settlement, this value X must be revealed to A and B (so X could be a note, could be a key), or they could say this value Y must be revealed to some other party C at block one-two-three in the future. These kinds of declarative information flow constraints enable things like cross-roll-up private bridging, new auction designs, privacy-preserving governance, programmatic disclosure of aggregate data; information flow control, if done properly, I think can be quite general. So, what is a blockchain? I'm back to this question, and personally I think a blockchain is a data structure:
if you take a piece of data and you hash another piece of data and include it, you've just gotten a kind of partial ordering relation, and this is the essential thing; everything else can perhaps be separated. With intent-centric roll-ups we just create blockchains on demand, and they live very ephemeral lives: a blockchain exists for a second when two intents are matched, and then it's rolled up, and then you can verify it later. You need storage somewhere, there's still data availability, but the blockchains are really quite ephemeral, and whether something is an L1, an L2, or an L3 is just an observer-dependent finality choice. So, shout out to Jon Charb from DBA; this is kind of my meme summary of this talk: roll-ups are L2s, roll-ups are just L1s, roll-ups aren't real, blockchains are real. Okay, so finally, a few grab-bag slides of interesting points that I think come up when you look at things this way. If we conceive of what the economics of these different systems look like in a world that's modular with intents (and shout out to Zaki, who I think had a tweet that said something roughly like this), I see kind of two classes of value capture, two classes of things people will want in an intent-centric modular world. The first class, and maybe it will be controversial to call it a DAO, I'm going to call service provider DAOs, and the reason I call them DAOs is that there's a group of validators or operators who are working together and providing services as a collective, and they're coordinating to provide that service efficiently and reliably, but users, or applications, see it as a service provided as a whole. So I would say that one kind of DAO you can have in an intent-centric modular world is a data availability
DAO, which provides data availability and ordering. There are some slight differences, but at the moment Celestia, and Ethereum in the rollup-centric-roadmap data-availability host-layer model, are providing this kind of service. Then you could have execution DAOs (maybe current rollups are like this), and you could have solver DAOs; SUAVE, as I understand it, is like this. These service-provider DAOs compete on the basis of liquidity and role-specific optimizations: really efficient data availability sampling, or more private solvers (in the case of SUAVE, using SGX), some specific service they think people want. Then there are just assets people want: people still want Bitcoin, they want ETH, and somehow they still want the almighty American Empire bucks on the blockchain. Those assets compete independently of protocols, on the basis of their distributions and how good they are at public-goods funding. So, three concluding thoughts. One: I think intents and modularity are a match made in heaven. They come at the problem from opposite directions and can help solve each other's problems. I didn't have time to cover all of it in this talk, but one challenge we've had building Anoma in an intent-centric architecture is simply that we don't have specialized primitives: we don't have efficient DA sampling, we don't have these individually optimized pieces, so I think there's a very nice synergy there. One thing I also really like about the modular blockchain world, and some of the conversations we're having here, is that it seems like a fusion of Ethereum and Cosmos: the polycentric, self-sovereign political ideology combined with the clear architectural thinking of the Ethereum
ecosystem. This also maps onto the teams: Celestia, Anoma, and many other teams in the modular world came from Cosmos or worked on Cosmos, and are now converging with Ethereum. And finally, a shout-out similar to something Mustafa mentioned: let's please, please, please not repeat the mistake of building a lot of transparent blockchains that are not going to work. If you're trying to launch a transparent rollup for some cute game, maybe it's okay, but if you're trying to launch a transparent rollup for financial settlement, and you're going to spend years on it, spend a marketing budget, spend go-to-market effort, and convince a lot of people to use it: make it private, don't make it public. It's not going to work. That's it, thank you. [Applause] Thanks, Chris. So we've had back-to-back-to-back solo talks; time to switch it up. We're going to do a fireside chat next. Polygon has been in the news a lot lately, and we're fortunate to have Sandeep here, with Mo moderating. Please welcome Sandeep and Mo from Polygon. Moderator: Hello, hello, what's up everyone. So I remember when Celestia, then Lazy Ledger, started. The architecture it proposed is one of the original Cosmos ideas from around 2017, and it was very anticipatory of a lot of scaling bottlenecks and a solution to them. Polygon was born as a much more practical, reactionary implementation, addressing scaling problems in real time. We've talked a little about the story and how we get to modularity, but how did you get started in India, decide on the architecture for the PoS chain and the zkEVM, and then get up to Polygon 2.0? Sandeep: Yeah, when we started Polygon, the way the core team was different is that even though we ended up building infrastructure, before that we were
actually building the apps, right? It was very clear to me by mid-2017 that building apps on a public blockchain like Ethereum was most probably not going to scale, and it was also very clear that the dev community liked Ethereum a lot and there was already a very big community. Now it looks obvious, but even five years back it was clear that Ethereum was eventually going to emerge as the layer one, and now we're even seeing many layer ones becoming layer twos and things like that. So it was very clear we wanted to build something that dApp developers would be able to use, and that's what you see in whatever Polygon does. Even though we now have extremely high-quality research (we've spent something like one billion dollars, now have some of the best talent in the ZK space, and became the first project to launch a full-blown zkEVM, a layer two with ZK security on top of Ethereum), the DNA is very clear: we want to build for developers, so that real-world applications can be built. Everything we do follows that mission, which is to bring millions of users into web3, and whatever needs to be done for that has to be done. Moderator: Polygon has put more transactions, and more value in transactions, through Tendermint consensus than pretty much any other project, except maybe the Binance bridge, and it's a core piece of architecture Celestia is also using. What was your decision to utilize it, and how has that experience been? Sandeep: Tendermint, as a pluggable consensus, you pick it up and build on top of it; I think it's the most evolved, most mature consensus, or
consensus SDK (the Cosmos SDK, previously), one of the best ones. That's why, at that point in time, and not only now, even in 2018 when we were building, it was the most mature and the best. And from our side, although we are very deep in the ETH community and care about it very deeply, after Ethereum, if there's a layer-one ecosystem that I and a lot of people on my team really respect, it's the Cosmos ecosystem, and Tendermint is an invaluable contribution from Cosmos to the whole ecosystem. Even now we're building some things where we need a single-slot-finality consensus, and obviously Tendermint keeps coming out at the top of the evaluation list. It's much more evolved now, but even then it was very clear to us that as a plug-and-play consensus for whatever we're building, Tendermint is pretty good. And as you said, we've put more value in transactions through Tendermint consensus than anyone else; maybe orders of magnitude more, actually. Moderator: Yeah, Luna had a good run. So, Polygon experimented with a lot of different scaling technologies and knew the limitations of the plasma sidechain approach. What was the story and journey to start working with the Hermez team and get to the zkEVM that launched in March? Sandeep: When we started building this, only the practical approach; as I said, we're driven by pragmatism, and we don't want to build sandcastles, or cloud castles that nobody uses. Back in 2018 and 2019 all these layer-two approaches were not evolved. We started building Plasma, and we were the only team that actually delivered Plasma to mainnet, but then nobody used Plasma
at that point in time. Then the optimistic-rollup approach came along; we evaluated it and realized that this approach is again kind of a band-aid: it doesn't give you the ultimate, infinite scalability on top of Ethereum. That's why we were looking for a better, endgame approach, so to say, and that looked to be ZK at that point in time. So we focused all our energies on ZK, and here we are: arguably, pound for pound, the best team in ZK, with products out in production and getting traction day by day. Moderator: ZK technology is obviously very hot, and it's the core technology underpinning Polygon 2.0, which was announced in the past few months. Will you talk a little about the architecture, vision, and components of Polygon 2.0? Sandeep: Polygon 2.0 is essentially a multi-chain vision. As I said, our goal is not to provide blockchain scaling technology or fancy consensus technologies; our goal is: how do we get one billion people into web3 in the next five to ten years, and what technology needs to be built for that? At the end of the day, for this to become what we call the internet of value, it needs to have characteristics similar to the internet of information that we see and call web2 today. What are those characteristics? The current web2 world is practically (I'm not saying theoretically) infinitely scalable: as more and different kinds of applications come in, you can spin up more servers and provide additional computation; everything is
available, and it's practically infinitely scalable. Secondly, information is seamless. Back in the day, 30 or 40 years back when the internet was starting, there were separate networks, ARPANET and other individual networks; then the WWW and TCP/IP happened, all these individual networks got connected with each other, and today we have a seamlessly connected internet across the world. Previously, even if you had information on one network, say a US network, and you wanted to bring it to, say, a European network, you had to do the same kind of thing you have to do with value today: bridge it from one chain to another. And right now there aren't even safe security zones; going from one chain to another, you're relying either on a bridge or on the destination chain's security. That's why value doesn't move seamlessly from one chain to another, and this internet of value needs the same characteristics information has: anybody can create, share, and exchange information in the web2 world, and the same should be possible for value in the web3 world. So the way we've built it is a multi-chain environment where you can spin up as many layer twos, as many chains, as you want, very similar to the Cosmos vision, but they're all secured by zero-knowledge technology: all these chains provide their ZK proofs to Ethereum. And for all of these chains to have fast interconnectivity with each other, we propose the interoperability layer; everything else,
the cross-chain LxLy bridge and the rest, is already built out from our side, but now we have proposed the fast-interoperability aggregator layer. What it does is this: all these chains submit their ZK proofs extremely fast. Already we can do two-minute proofs with our current technology, and with the new upgrades coming up, every chain will be able to create proofs in five to ten seconds, eventually going to two seconds. So every chain creates proofs every couple of seconds and submits them to this aggregator layer; all the proofs from the different chains get aggregated and submitted to Ethereum. You get eventual hard finality on Ethereum, but all of these chains also interoperate quickly. Let's say I'm on chain number 100 and I want to act on a cross-chain transaction coming in from chain number 10. The moment chain number 10 submits its ZK proof to the aggregator layer, chain number 100 and all the other chains clearly know that this particular incoming transaction has already been proven by a ZK proof on the aggregation layer. Without trusting anything: I don't care whether that chain has one sequencer, two sequencers, a thousand sequencers, a reputed validator, or whether it's a private enterprise chain. I just don't care. It's simply value, or a transaction, coming in that is proven by ZK; we can prove the execution. So it doesn't matter if there's a guy in a college dorm room with a chain that has multi-million-dollar value flowing into the public chains; the public chains can easily trust that value. As I was saying, what this system provides is, first of all, infinite scalability: create as many layer twos as you want. With this aggregator layer, if there were 100,000 chains the system would still work, and if there were 1
million chains, the system would still work. Secondly, all of these chains have fast interconnectivity between them, and eventually you'll reach a place where users won't even realize any of this bridging is happening. Say I'm playing some game on a chain with dedicated capacity, almost like a server; I win some money and want to swap it to USD on a public chain where the liquidity is. For the user it should be a single click, and the transaction happens. That's the ultimate vision, and it rolls back into our ultimate mission: how do we get mass adoption. Moderator: That sounds like a fairly modular architecture; would you agree? Sandeep: Definitely, that's a modular architecture, and modularity is the endgame for this. We can't have monolithic architectures. Many layer ones, without naming them, have postulated that there's one single layer where all the transactions of the world live; even if you're not a technical person, you can understand that runs into the limits of physics. You can't have the whole world's data and transactions in one place, so modularity is going to be the endgame. And all kudos to the Celestia team: I think you guys own the modular concept, because you originally came up with this idea of separating out parts of the stack. From our side it naturally evolved into that: Polygon 2.0 also postulates multiple roles within the system. First, the Polygon validators have hundreds and hundreds of these chains to validate, and beyond that there are multiple roles: you can be a prover, or you can be a
data availability cluster provider; we call them local data availability clusters, for when you don't want to go to a public data availability chain. But the system is very open: if you want data availability on Celestia, on Avail, or any other data availability provider, our system is pretty agnostic to that. Then you have the validator layer, and, given some of the strides we've made at the ZK level (a lot of this is being productionized now, but at the research level everything is built out), eventually we'll be able to have decentralized prover layers. And then this aggregator interop layer is one more layer where people can collaborate. So yes, this is a fully modular vision. Moderator: That aggregator layer sounds a lot like the IBC vision, a key part of this unified-liquidity piece. Can you expand a little on the bridging thought process and the LxLy bridge currently proposed for Polygon 2.0? Sandeep: Functionality-wise, yes, it's exactly like IBC, in that you can accept value coming in from one chain on another chain, the whole ecosystem is interconnected seamlessly, and the user experience is really good. So functionally it's an interconnectivity protocol, exactly like IBC. In terms of how it works, it may actually be the reverse of IBC. To the best of my knowledge, IBC works by having each chain run a light client of the other chain; whenever a transaction happens, you rely on the light-client information you're getting, but you still have to rely on the consensus of the other chain. So with IBC the verification is spread out, with each chain having to verify the other chain separately, whereas at the aggregator layer we're actually aggregating the proofs of all the chains: all the ZK proofs are aggregated on one layer, and then every other chain can simply take that proof and rely on the transaction. In terms of construction it's completely the reverse, but in functionality it's absolutely the same. Moderator: And then the liquidity, the secure liquidity bridged over from Ethereum, will all be aggregated and available to any of these chains? Sandeep: Yes, essentially, yes. That's the whole LxLy bridge: there's what you could call a master non-custodial smart contract that all chains connect into, and with the ZK proofs you're submitting, you can move your funds directly from one chain to another, and you can exit directly to Ethereum, because the liquidity is being aggregated in one single layer. Moderator: A lot of talk about validiums has come up, and the plan right now seems to be that the existing proof-of-stake chain will migrate to a zkEVM validium at some point. Can you talk about the structure of a validium and some considerations for builders? Cosmos started with this hubs-and-zones concept, like cities and towns: every city provides the same infrastructure but has its own culture, growth trajectory, and values. It seems like that's still the multi-chain goal, even if people don't talk about it much. So what's the architecture for validiums, and why should developers consider building with them? Sandeep: In the architecture I talked about, with multiple chains, a prover layer, a data availability layer, and an aggregation layer over everything, the validium is one special case of
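The aggregation flow Sandeep describes, where each chain posts a validity proof and every other chain then accepts cross-chain state without trusting the source chain's sequencers, can be sketched roughly as follows. The names (`Aggregator`, `toy_prove`) are hypothetical, and the "proof" here is a placeholder hash check standing in for a real ZK validity proof:

```python
import hashlib

def toy_prove(chain_id, state_root):
    # Stand-in for a real ZK validity proof of the chain's state transition.
    return hashlib.sha256(f"{chain_id}:{state_root}".encode()).hexdigest()

def toy_verify(chain_id, state_root, proof):
    return proof == toy_prove(chain_id, state_root)

class Aggregator:
    """Collects per-chain proofs and exposes one set of proven state roots."""
    def __init__(self):
        self.proven = {}  # chain_id -> latest proven state root

    def submit(self, chain_id, state_root, proof):
        # Reject anything that does not verify; only proven roots are recorded.
        if not toy_verify(chain_id, state_root, proof):
            raise ValueError("invalid proof")
        self.proven[chain_id] = state_root

    def accept_message(self, src_chain, claimed_root):
        # A destination chain accepts a cross-chain message iff the source
        # chain's claimed state root was proven at the aggregation layer;
        # no trust in the source chain's sequencer set is needed.
        return self.proven.get(src_chain) == claimed_root

agg = Aggregator()
agg.submit(10, "root-A", toy_prove(10, "root-A"))
```

The design point this illustrates is that destination chains check one aggregated set of proven roots instead of each verifying every counterparty chain separately, which is the contrast with IBC-style pairwise light clients drawn above.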
this whole architecture. A validium, in the context of Ethereum: when you have both the data and the proof on Ethereum, that's a rollup; when you have only the proof on Ethereum and the data can be elsewhere, that's the validium construction for ZK rollups. It's especially applicable to ZK rollups because on a ZK rollup the sequencer cannot cheat once the ZK proof is submitted; that's why it's called a validity proof. Optimistic rollups, for example, are called optimistic because you optimistically assume everything is correct and expect that, if there's fraud, somebody from the community will run a fraud proof. In ZK these are mathematical validity proofs: once the proof comes in and is accepted on Ethereum, you know the sequencer executed the transactions correctly. That's why you don't need data on the Ethereum blockchain to validate the chain. With optimistic rollups, if the data is not there, you can't even validate the chain, because you don't know what's happening; with ZK, the proof itself is the validation of the computation. You need the data only when you want to exit or do any kind of cross-chain operations. So what you want is for the data to be fairly available, but that data doesn't contribute much to the security of the chain per se. Of course there's the ransom attack and so on, but there are multiple ways to address that, using forced exits and such; I won't go into that here. The point is that with ZK, this construction is possible: the execution layer submits only the proofs, the data lives elsewhere, and that's where all the data availability chains become
relevant. Moderator: I'm sure you get this question a lot. Cosmos has been very well known for its tech for a long time, but the community is constantly asking where we're going to find users. Polygon has been very famous for its BD efforts and output; what's the secret? How have you been able to get the users you've gotten? Sandeep: I think this stems from the DNA itself. As I said, our mission is very clear. We now have some of the biggest researchers, like Jordi, Daniel, Bobbin from Miden, and others, and their focus is on all these research efforts; as a team we're fairly modular too, so there are people building the tech, while my mission in life is: how do I get one billion users into web3, maybe starting with 100 million users in the next two, three, five years. As I said, from my point of view we're not here to create fancy technology, this approach, that approach, all the fancy stuff that goes around. I'm here for trustless compute; for me this is basically trustless compute. Human society currently interacts with digital systems, all these digital systems are centralized, and that's why we're fooled in so many ways. My mission is to make this trustless world possible, because we're spending 60 to 70 percent of our time in these digital worlds, and making them more trustless looks like a natural evolution of humanity. Once that mission is clear, everything flows through it. It's very clear in my mind that what we're building needs to have users; if something is so extreme that it can't get users, I would probably not vote internally
that we should build it, even if it looks very fancy. We as a crypto industry are very big fans of narratives and all that stuff; I don't care about that. Moderator: In that pursuit, I'd say you probably talk to more builder teams than any founder I've come across, at least in crypto. What do you hear about Celestia or DA layers when those teams are considering their choices? Sandeep: Right now, when I speak to a lot of developers, obviously 95 percent of them are on Ethereum only, and as developers, to be honest, they don't really care where the data goes when they're building an application. The Celestia kind of use cases and technologies are more relevant to protocol builders like us, and also to the rollup-as-a-service providers and the sovereign-rollups thing that you're doing. You can't go to developers and tell them "we'll give you this data availability system"; to developers it should be: here is the environment where you can build your app, and everything else is abstracted away. It's the job of the system integrators and rollup-as-a-service providers to make sure the better data availability products get used, be it Ethereum with EIP-4844, be it Celestia, Avail, or the other data availability solutions coming in. So I don't hear too much about DA from developers, and ideally I shouldn't; we shouldn't index on it too much. It's the job of the infrastructure providers who provide the execution environment to developers, the rollup-as-a-service providers and the like, to make sure the SLA is there and it's as seamless to all the
potential users as possible. Moderator: Cool, well, that's all we have time for; no time for Q&A. All right, thanks Sandeep, thanks everyone. [Applause] Host: Thanks, guys. Okay, back to the solo talks. Now we're going to switch gears and do a deep dive into DA; our next guest will go through the Avail architecture and show us the nuances there. Introducing Anurag from Avail. Anurag: Hello, good morning, very pleased to be here; a very well organized summit from the organizing teams. My name is Anurag and I'm going to talk about the Avail architecture today. Can I have the next slide, please? Yeah. So, to provide some context on what Avail is: Avail was started within Polygon in November 2020, and we recently spun Avail out in March 2023 to become a completely separate, independent entity. Avail is a data availability layer that uses a combination of erasure coding, KZG polynomial commitments, and data availability sampling. Some of my background: I previously co-founded Polygon in 2017, and I started this project with my co-founder Prabal in 2020; when we spun it out recently, the entire Avail team at Polygon came over. That's some of the background, and you can see some of the history here. A lot of people ask me on Twitter about the architecture, so I wanted to focus on what Avail is and what the use cases are, and I'll try to get into the technical details as much as possible within the short time window. Before I get into the meat of the presentation, some context: I know this is a Modular Summit audience, so no major introduction is required, but essentially what I want to say is that rollups are now acknowledged to be the main way to do off-chain execution, and
if you look at the rise of Ethereum rollups, all the big activity is happening on layers like Polygon zkEVM, Arbitrum, and Optimism, so the rollup is now considered the best way to do off-chain execution. But if we take it that the rollup is the way to go, and blockchain constructions are becoming more and more modular with the rise of rollups, it's important to see what these rollups really want, what they're hungry for, and the answer is that they really want lots of DA, or data availability. That's the primary reason we're working on Avail, and I want to make a bold statement here: every base-layer blockchain in the future is going to be a DA layer. Even Ethereum has already pivoted to a rollup-centric roadmap and is prioritizing becoming a DA layer; if you've heard of proto-danksharding and danksharding, all of this points to the base layer becoming a DA layer and all the execution moving to the rollups on top. That's the context in which you should view Avail: Avail is a base layer that provides scalable data availability for rollups. Now, what is Avail? This is the Modular Summit and Celestia is one of the organizers, so you'll ask what the differences between Avail and Celestia are, and I'll get to that. Essentially, Avail is a modular layer that focuses on data availability. It does not do any execution: it accepts transactions from rollups and makes them available via a combination of erasure coding and KZG polynomial commitments. In a sense, it orders the transactions that come to it and provides them to the light client network. The mental model for Avail is very similar to what the Ethereum base layer provides to the rollups on top: the rollups do the execution on layer two, and
then there's the base layer that does the data availability. So you can have a variety of rollup execution environments: this includes something like the EVM, but also more complex environments like the SVM, and app-chains, app-specific chains. How we do it is a combination of erasure coding, KZG polynomial commitments, and our USP of data availability sampling, which lets clients verify the availability of block data by downloading just a few random samples; we'll get into how that's done, but in general a variety of rollups can leverage this capability. This is the base-layer architecture, and I'll go through it in a little bit of detail. The primary consumers of Avail are rollups: rollups accept transactions and submit them directly to Avail. We have this concept of an application ID, where each rollup corresponds to a particular application ID, so multiple rollups can submit data to the same base layer, demarcated by application ID. What we then do is extend, that is erasure-code, the data. If you see this diagram, the original data is extended via erasure coding, and then we create commitments to the data. This slide shows the rough structure of the blocks: the data from the rollups is packaged into the block, we create KZG polynomial commitments for the data, and it's erasure-coded in such a way that the homomorphic property of KZG allows us to mirror the erasure coding of the data on the commitments as well. On the right-hand side, C1 to Cn are the commitments to the original data, and because of the homomorphic property of the KZG commitments, we're able to extend them to the erasure-coded data as well. And so once that happens, sorry, how
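The homomorphic-property point is worth making concrete. In the toy sketch below the "commitment" is just a fixed linear map over a small field and the "erasure code" is a pair of linear parity rows; Avail actually uses KZG commitments and Reed-Solomon coding, but the structure is the same: because both commitment and extension are linear, commitments to the extended rows can be derived from the original commitments without touching the data.

```python
P = 2**31 - 1                # toy prime modulus (illustration only)
G = [7, 11, 13, 17]          # fixed "commitment key" (stand-in for a KZG SRS)

def commit(row):
    # Linear (homomorphic) commitment: commit(a) + commit(b) = commit(a + b).
    return sum(g * v for g, v in zip(G, row)) % P

def extend(rows):
    # Toy linear erasure code over rows: each parity row is a linear
    # combination of the two data rows.
    r1, r2 = rows
    parity1 = [(a + b) % P for a, b in zip(r1, r2)]
    parity2 = [(a + 2 * b) % P for a, b in zip(r1, r2)]
    return [r1, r2, parity1, parity2]

rows = [[1, 2, 3, 4], [5, 6, 7, 8]]
ext = extend(rows)
c1, c2 = commit(rows[0]), commit(rows[1])
# Because commitment and extension are both linear, commitments to the
# parity rows follow from c1 and c2 alone, mirroring the erasure code:
derived = [c1, c2, (c1 + c2) % P, (c1 + 2 * c2) % P]
```

The `derived` list matches `[commit(r) for r in ext]` exactly, which is the "mirroring" being described: a verifier holding only the header commitments can check extended cells without re-committing the block data.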
do I go back? This one, yeah, okay. In general we're able to adjust the matrix size: what we really create, as I told you, is an m by n matrix, which is erasure-coded (in general we double the data), and we then take the commitments and put them into the header. The header has all the commitments to the data; it also has the app index and certain other meta-information. This block data is then propagated to all the other validators, and in the current implementation each validator regenerates those commitments and comes to consensus on the block. That's how the base layer works; we've already gone through the block-production stack. We've used Substrate to build the node, and the consensus we use on the network is GRANDPA and BABE: BABE is the block-production mechanism and GRANDPA is the finality gadget, so it's a hybrid consensus in that sense, and it protects against a large number of nodes crashing, and so on. The incentive mechanism is nominated proof-of-stake. We chose it because it allows for wide stake distribution: in nominated proof-of-stake you do not delegate to a single validator; you nominate a pool, which is then fairly distributed across a large number of validators, so you can do a ranked choice of validators. Why is this important? It allows us to have a pretty decentralized validator set, and we can have up to a thousand validators in the set. That's what we get from using Substrate; of course, it's a data availability layer, so there is no execution, and we've disabled all the runtimes and such, so it's a very light runtime on the base layer itself. Now, the beauty of this whole construction is that we're able to run a pretty neat light client network, and in
In general, once the blocks are finalized and the headers are propagated to the light clients, even if a validator withholds the data, the light client network, because it can sample the block pretty efficiently, can come to know if some withholding of data is happening. In general we want to target a large number of light clients, but in a sense a few hundred or a thousand nodes are pretty much enough to sample the block data pretty quickly. To get into the light client architecture a little bit: we started with a different implementation initially, but then we had to build the light client node from scratch. We use a Kademlia DHT, a distributed hash table implementation, and so the light clients basically form an overlay P2P network on top of the base layer. Initially they use a full node for bootstrapping, but over time, when you have a number of light clients on the network, a new light client that enters the network can start sampling from the other light clients, so within the light client network there's a P2P network as well as a DHT. As a mental model, the light client network is almost like a torrent-like network; it's not the same, but you can think of it like that, because it also stores some of the sampled data locally for a period of time. How we do the proof verification is that we generate cell-level proofs: as I mentioned, we have this m × n matrix, and depending on the size we'll have these proofs generated at the cell level. The light clients sample randomly and are able to verify these cell-level proofs, and as I said, within a few samples, or if we have a few hundred or even a thousand light clients, the entire block gets verified. And of course there's the property that the larger the number of light clients in the network, the more we can increase the size of the blocks.
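The "few samples are enough" claim has a simple back-of-the-envelope behind it; the withheld fraction and sample counts below are illustrative assumptions, not Avail's actual parameters. If an adversary must withhold at least a fraction f of the extended block to make it unrecoverable, then s independent uniform samples all miss the withheld cells with probability at most (1 − f)^s:

```python
# Rough sampling math for data availability sampling; the fraction f and
# the sample counts are illustrative assumptions, not protocol constants.
def detection_confidence(f: float, s: int) -> float:
    """Probability that s uniform samples hit at least one withheld cell,
    when a fraction f of the extended block is withheld."""
    return 1 - (1 - f) ** s

# With ~2x erasure coding, roughly half the extended cells must be
# withheld to block reconstruction, so take f = 0.5:
for s in (8, 16, 30):
    print(s, detection_confidence(0.5, s))
```

Even a single client doing 30 samples reaches overwhelming confidence, and hundreds of clients sampling independently will between them touch essentially the whole block, which is why a few hundred to a thousand light clients suffice.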
I will say, from an engineering-effort perspective, where the bulk of the work has gone in building Avail: we've been building this for more than two and a half years now, first within Polygon but now as a separate entity, and the bulk of the effort is on the light client P2P, because there are a number of issues, performance etc., that have to be looked into there. This is a visual representation of the data matrix that I talked about; we can play around with the number of rows and the number of columns, and you can see a reference benchmark of our performance. If you see the table on the right, for a 2 MB block size the rows and columns are such that the times to generate the polynomial commitments are pretty neat, and even if we increase the block size to 32 MB or 128 MB, you can see that these are well within our target block time, which we have kept at 20 seconds for now, and this allows for propagation across the network as well as verification of the commitments. Okay, I will have time for Q&A after the session, so if you have any questions on the architecture, I'm happy to answer after the talk. Having said that, once we've arrived at this whole construction, what is the ecosystem that we are envisioning will be built on Avail? As I said, this is a rollup-centric blockchain: our primary customers are rollup developers and infrastructure developers, and these are the different kinds of solutions that we are looking at: sovereign rollups, validiums, optimistic chains, and app-specific chains. Think Cosmos
style app chains, but more in a validity-proof or optimistic construction manner, and of course general-purpose rollups as well. Recently, the way we are thinking about the go-to-market, and I'll get into that a little bit, is that there's been a lot of activity in the L3 area. A lot of the L2s, all the major Ethereum L2s, are now looking at their own L3 initiatives: if you look at something like Arbitrum Orbit, or zkSync's hyperchains, or Polygon 2.0, or StarkWare's fractal scaling strategy, or Optimism's Superchain strategy, they are basically optimizing for a lot of L3s, because what they want is to position the L2 as a liquidity hub for all the L3s on top. That's why you'll see in the coming days a lot of one-click L3 deployment stacks. Now, why am I talking about this? Basically because when we talk about L3s on Ethereum, the first thing they need is a DA layer for their data availability needs, and they cannot use Ethereum for that. So this is a graphical representation of how Avail will be used in conjunction with these L2s, and we're starting to work with a number of them to come up with these constructions. We also released the Avail attestation bridge recently; it's a pretty interesting construction in the sense that there is a data attestation bridge from the Avail base layer to Ethereum. Of course it's on testnet for now, and we are doing an optimistic-style construction of the bridge, but we've also been working on a zk-SNARK-based data attestation bridge with our partners at Succinct; in fact, we just shot a whiteboard session on the construction, and it's pretty neat. So we are going to be working with Succinct to create this bidirectional bridge between Avail and Ethereum, because Succinct, as you may know, already has a Telepathy bridge which proves Ethereum's
proof of consensus, and now we have with them a zk-SNARK-based construction which proves Avail's proof of consensus, which is GRANDPA and BABE. So that's one focus area we are going to work on, and of course we are going to work with Sovereign Labs on sovereign rollups, in the sense that current rollups are primarily implemented to be verified by a smart contract on the Ethereum base layer, but with our data availability sampling light clients, what is also possible, especially with ZK constructions and recursive proof mechanisms, is to propagate these proofs to the light client layer. In fact, we're also talking with a bunch of wallet teams about embedding the light client into the wallet itself. Right now, light clients are run via desktop apps or a CLI or something like that, but what we envision is that eventually they will make their way into wallets, very similar to, let's say, Bitcoin light clients, where the user doesn't even know that the light client is working in the background. Why we are able to make this possible is basically the lightweight construction, in which these light clients can actually work even on mobile devices, and hopefully in the browser at some point in time. So we envision that there will be a lot of light clients. Essentially, this is an underrated development, because today, for a user to verify the state of the blockchain is not that straightforward, and what we will enable with this combination of DA light clients plus recursive ZK proofs is that any user will be able to verify the state of the blockchain pretty easily. And as I said, modular base layers are perfect for sovereign rollups, and along with us, of course, Celestia is also taking up the mantle, and so we are very happy to
grow the ecosystem together. I'll quickly end with the development stage and timelines. As I said, we have been in development for two, two and a half years; we're currently on our second long-running testnet, which we call the Kate testnet, named after Aniket Kate, who was one of the researchers behind the KZG polynomial commitment scheme. We already have a robust set of external validators on the testnet, and we are targeting 200 next month, I guess. We want to do an incentivized testnet, which we want to scale to around 5,000 light clients pretty quickly; we'll have that incentivized testnet this quarter, and the mainnet target is end of Q4 or early Q1 for now. We are pretty comfortable with where we are in terms of development. Quickly getting to the optimizations: currently in our base layer, when the block producer proposes the block and creates the commitments, we propagate these to the other validators and they regenerate the commitments at their end. What we want to move to in the future is a construction where the other validators can just verify the commitments and not regenerate them, which will make it much faster to arrive at finality and such. We are also working on a very neat construction called KZG multiproofs: if you remember the matrix that we create, we create these proofs at the cell level, and what we want to do is sub-matrix-level openings, so we reduce the complexity of verification pretty significantly. This will create huge improvements in terms of opening generation, DHT population, and overall keeping the network streams manageable, while of course ensuring backward compatibility. And as I said a little bit earlier, we are working on the zk-SNARK-based data attestation bridge.
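To see why sub-matrix openings help, here is the rough proof-count arithmetic; the matrix and sub-matrix sizes are illustrative assumptions, not Avail's actual parameters:

```python
# Rough arithmetic for the multiproof optimization: opening fixed-size
# sub-matrices instead of single cells cuts the number of proofs (and
# DHT entries) by the sub-matrix area. Sizes here are illustrative.
rows, cols = 256, 256          # extended data matrix
sub = 16                       # sub-matrix side covered by one multiproof

cell_proofs = rows * cols                       # one proof per cell
multi_proofs = (rows // sub) * (cols // sub)    # one proof per sub-matrix
print(cell_proofs, multi_proofs, cell_proofs // multi_proofs)
# 65536 cell proofs shrink to 256 multiproofs, a 256x reduction
```

A sampler still gets the same coverage per query, since one multiproof attests to a whole sub-matrix of cells at once.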
This is a pretty neat construction where we are able to prove Avail's proof of consensus, which is GRANDPA, within the SNARK circuit, and I think this will be pretty useful for deploying, or connecting, our chain to a variety of other ecosystems, because of the nature of the bridge itself. I think I've covered whatever I wanted to. These are some important links; please feel free to scan the QR code, and we will have this presentation available online as well. And yes, we are hiring across the board: when we started, like three months ago, we were at 18; now we are at 27, and we're looking for quality folks to join the team. So if you're looking to join a growing team, please feel free to ping us, and this is also our Twitter handle. Looking forward to talking to some of you; I'm available, and a lot of the team is also available at the event, so happy to talk to you as well. Yeah, thank you. Thanks, Anurag. So we are going to stay on the topic of DA. Next up will be a panel, I think it's our first panel, and I'd like to welcome back Mustafa from Celestia, Anurag from Avail, Toghrul from Scroll, and then Bartek, who will moderate. Please welcome. Okay, so I guess we can start, so let me first maybe introduce myself: my name is Bartek, I'm a founder of L2BEAT. We are the community watchdog for all L2s right now on Ethereum, and we try to inform the users what the trust assumptions and the security assumptions of all these solutions are, so the users are actually aware. And I've got amazing panelists today, representing, I think, the three most known projects that promise to deliver DA: we have Avail, we have Celestia, and we have Toghrul from Scroll, who will, I guess, have an interesting role on this panel, because he will represent Ethereum. So let me just welcome my panelists, and let me just
start by saying that I watched this panel last year. It was very docile; everybody was very nice to each other, and I think it was because maybe the space was nascent and everybody was building. But now we're almost launching, or have just launched, and things are becoming a little bit more spicy, so let's make this panel spicier and see how we go. My first question to you guys will be: the Ethereum community is probably considered to be one of the largest, and should the Ethereum community really care about your solutions, or should they just simply wait for proto-danksharding and danksharding? So maybe let's start with Avail and then we just go in this direction. Yeah, I think this is a pertinent question to ask. In general, if you look at the timelines, proto-danksharding will come maybe end of this year, early next year sometime, but danksharding is going to take a lot of time to come, because, from our perspective, we've gone through the whole cycle of engineering the P2P on Avail, and I think Celestia has been on the same journey, and Ethereum, being a system where you're already securing a lot of assets on the chain, it's difficult to introduce functionality that can potentially jeopardize the current state, for example, so it's going to necessarily take some time. And you're also seeing the rise of L2 systems, and L3s on top, like I said in my talk a little bit earlier: each major L2 today is looking at an L3 strategy, zkSync hyperchains, Arbitrum Orbit, and so on, everyone is looking to do that. And right now, if you look at Arbitrum, which is doing Nova, or StarkWare, which has a DAC, all of them are operating DACs, and that's pretty, sorry, centralized, and all these L3s will require some secure DA solutions which are
less expensive than what Ethereum can give at the moment. Of course, the cost will go down with EIP-4844, and we acknowledge that, but in general, if you look at our architecture, and I would say Celestia's architecture as well, we are able to provide significantly lower cost. And quickly, before taking up too much time: data availability sampling is massively underrated; it's not very well understood by a lot of people, and unless data availability sampling is implemented on Ethereum, there's a long way to go there, and we are not really using this construction. With recursive ZK proving systems already out there, you are able to propagate proofs over the P2P network, to the users directly, for example. So someone like StarkWare, which puts proofs on Ethereum every six to eight hours, can actually create intermediate proofs and pass them directly to the users. Of course, they need to wait for the proof to come to Ethereum for bridging and such, but it's a much faster verification time. Sorry for taking up too much time, but I think all of these points are important. Yeah, sure. So, I mean, I think there's definitely a place for multiple DA layers, and they all have different trade-offs and different use cases. For Ethereum, as Anurag mentioned, proto-danksharding is just a very small step on the overall roadmap to danksharding, and they have different trade-offs. So, for example, Celestia and Avail are more overhead-minimized: there's no state baggage already, so it's more practical, or overhead-minimized, for sovereign rollups, for example, if that's what you want to build. There are also various different design choices, so for example, with EIP-4844
you can only fit eight blobs, without bursting, in a block, so if you want to do app-chain rollups you'll probably need some data aggregation service for that to be practical, whereas on Celestia and Avail there's no specific minimum blob size limit, for example. And also, of course, there's the fact that we have data availability sampling, and that's still further on in the Ethereum roadmap. What do you think? I mean, should we consider using other DA? So first, should I explain why the Ethereum community should care about data availability, or no? I'm kind of assuming that we all know that, so let's just move on. Yeah, that was my guess as well. I feel like there's a place for multiple data availability solutions, and one thing to understand is that we're trying to solve different problems. In Ethereum, the DA layer serves as a way to separate the markets for data and execution, and also to increase capacity by separating those markets, and basically that allows rollups that are deployed on Ethereum to scale better, because you can now post more data and still settle on Ethereum. For Celestia and Avail, the target market is a bit different: they're targeting sovereign rollups and protocols that don't really care about execution on the base layer, but more just want some ordering and data availability guarantees. And I think there's even a way you can combine the approaches: for example, validiums can use Celestia or Avail for data availability but still settle on Ethereum. So I don't think there's a world where only one solution is needed; we need multiple solutions, especially if they solve different problems. So it sounds like Ethereum is targeting different uses, perhaps. I don't know, Mustafa, would you agree with that, that your target is slightly different from what it is
Ethereum is actually trying to do? Well, I mean, you can technically deploy sovereign rollups using Ethereum as DA, but I don't think that's something that, from a social perspective, the Ethereum community coalesces around. And it's more overhead; it's not overhead-optimal to have a sovereign rollup on Ethereum, because you also have to run a node that gets the state of the chain. But ultimately, the whole point of modularism, not maximalism, the whole point, is that it's not a zero-sum game. There are different trade-offs: Ethereum rollups have to have on-chain DA, so if you want to deploy an Ethereum rollup, you have to use it as a DA, there's no way around that; if you want to do a validium, or a celestium, or an optimistic chain, then yeah, you can use off-chain DA. But ultimately, these are all security trade-offs and different design trade-offs. I would sort of agree, in the sense that it's very difficult to demarcate the target segments in general: how would you demarcate the users of a rollup versus a validium with the same stack? Of course the security properties are very different, but that's the reality we are seeing in production: you have Arbitrum One alongside Arbitrum Nova, you have StarkEx with a volition built in, and we are increasingly seeing this; Polygon also announced their validium plans, they have a rollup and they have a validium. So of course the target segments might be different, some apps need the security of Ethereum, for example, but rollup developers are asking for solutions that help them target, maybe, you can say a different target segment, or the same target segment; it totally depends on what GTM these rollup developers have. But yeah, I would agree that
for more sovereign constructions, Avail and Celestia are better options at the moment. Okay, so let's dive a little bit into those trade-offs, so that they're properly understood, because, and I'm actually taking this from Mustafa's talk in the morning, it's not easy to talk about these trade-offs. On Ethereum especially, we all know that the cost of data is actually quite high for all the rollups to pay, and obviously that translates to the cost to end users, and that's why we're exploring all these strange constructions like volitions, validiums, data availability committees, even such weird constructions as optimistic data availability schemes. So which solution do you think will ultimately be the cheapest to choose as a DA? You can just raise your hand. I mean, yeah, so who's the cheapest, that's my first question: is it the one that has the most capacity? Well, I'm thinking about a typical dev team that is trying to deploy either a sovereign rollup or a rollup, and for them the current DA on Ethereum is just expensive, right? So they would come, let's say, to us at L2BEAT, and they would ask us which solution we think might be the cheapest, which will provide, essentially, the cheapest block space for data. I think there are two ways to look at this. The first way is you look at classes of different types of DA: for example, you can say Ethereum, Celestia, and Avail are one type, on-chain DA, where you have a blockchain as the DA; then you have other, less secure types of DA, like a data availability committee, you know, with a multisig of seven, which is what StarkWare and AnyTrust Arbitrum have; and then you can just have a centralized, single server. So ultimately the cheapest is just to have a centralized server as
a DA. Well, obviously that's not useful, so let's assume that we're talking about on-chain DA. The important thing to note here is that in blockchains you cannot guarantee low-cost transactions, and the reason for that is that if you could guarantee free or low-cost transactions, you would have the denial-of-service problem. So I think it's not about framing the question in terms of cost, because ultimately there will be fee markets and it will be supply and demand; it's about framing the problem as: how do you get the most throughput? The only thing blockchains can guarantee is the throughput they have, or the block size they have, because ultimately the pricing will be determined by supply and demand, and how much demand there is for that specific block space. I mean, yeah, I sort of agree with Mustafa, in the sense that it's not enough to just look at the cost. Of course there will be throughput, and our architecture allows us to create high throughput as well, but we also have to look at other factors like decentralization and stuff like that; there are a lot of factors there. It will be cheaper, of course, and we can increase throughput as such, but as you said, we cannot compete with a single server doing DA, and there will be solutions like that; we are already seeing solutions like that. So the thing is, it's a trade-off between cost, decentralization, throughput, and a lot of other factors. And again, just coming back to the point: certainly cost is a factor, but we also have to look at things like data availability sampling and light clients, which provide new kinds of powers to these rollups: propagating the validity proofs or fraud proofs to the users directly, so users can verify them directly, for example; how can we come up with those kinds of
constructions? We should consider that as well. And the light node is also an important piece, because all the rollups that are running on Celestia right now, the sequencers running on Celestia, they're not running full nodes, they're actually running light nodes; the sequencers themselves are running light nodes, and they're not paying for RPC endpoints or running a full node, and that's just significantly cheaper than needing access to a full node. But I don't think that, for a sequencer, the cost of running a full node really matters: I run a couple of full nodes at home, and even if you assume that I run relatively high-end hardware for that, the cost of one node is less than one thousand dollars, and considering that sequencers can potentially extract so much profit from MEV, etc. I'm not saying that's the main thing, it's just one nuance, for example; I'm not saying it's the main cost. It depends; ultimately it's about the cost of the DA. Gotcha. So what about security, then? The common argument that I keep hearing is that it's always a security trade-off to cross the trust boundaries when you're using external DA. Clearly the security of a validium is very different from the security of a rollup, and the cost of a validium is also very different; that's why we've got Arbitrum One and Arbitrum Nova, that's why we've got StarkEx rollups and StarkEx validiums. And I was always wondering: end users, at the end of the day, seem to have very little to say. If you are, let's say, a dYdX user, what benefit do you gain from dYdX posting all the transactions on-chain, as opposed to them using some data availability committee? You just want to trade on dYdX, and you want a reasonable guarantee that if things go wrong, you can always recreate the state from the data and you can always exit. And it
seems that this is all about the trade-off, right? With increased security comes, I guess, bigger cost, so how do you, as an app developer or as a rollup developer, suppose that I'm dYdX, how do I make this trade-off? Yeah, I think ultimately the choice will be made by the rollup developer, and not by the app developer directly, unless we're talking about app chains, for example. We are seeing a lot of rollup orchestration players coming to market, and I think those are the developers who are making the choice in terms of what DA to use. At least from my perspective, what we are seeing is that at the moment it's a cost-versus-security trade-off kind of thing: of course they want cheaper costs for the users, that's kind of the primary metric, but from a rollup operational point of view, at the moment no decentralized DA layer is in production, which is going to change pretty soon, and that's why you see these data availability committees. Right now that may not be a problem, but I definitely see the decentralization of DA, similar to the decentralized sequencer question, coming up from a regulation point of view or whatever. For now, single sequencers and data availability committees are fine, but just to safeguard regulatory interests and things like that, I think for sure things will move to a more decentralized context. Are you saying that this is the primary reason why dYdX chose to go its own way and be more decentralized? I can't speak to their intentions, of course, and I didn't fully comprehend the whole reason why they moved. Of course, they have more flexibility in running their own app chain, and so they are able to customize a lot, is what I feel; I think the StarkEx solution was a bit limited, is what I feel. I may
be wrong about what they wanted to do, and of course StarkEx, I mean, there's a new upgrade, StarkNet is much more powerful, Cairo 1.0 and such, so I can't ascribe their intentions. Yeah. So I think there's kind of an interesting question in here, which is about how much users actually care about decentralization. From a pure UX perspective, there's probably no UX difference between interacting with a DAC versus interacting with on-chain DA. But consider this: users today can use Polygon, they can bridge tokens from Ethereum to Polygon and use it pretty much the same as an L2 or a rollup, even though it's not one. But if that's the case, then why did Polygon spend a billion dollars on ZK rollups? It doesn't actually make a direct difference from a user perspective; in fact, it's probably slightly less TPS at the start, until we optimize the ZK proving systems. I think ultimately this is what differentiates web2 from web3 from a social perspective: ultimately, people do coalesce around and do care about the decentralization properties of the systems they're interacting with. Otherwise, why has no one just created a centralized proof-of-authority L1, you know, a committee of 10 nodes, a billion TPS? No one would take that seriously. So that's why I think people should use DACs for certain use cases, but ultimately, for an application to be fairly decentralized and credible, to have a social community around it, there needs to be at least decentralized DA as an option. I have a small question for you: with the whole validium talk, are you trying to bring back the conversation about whether validiums are L2s or not? Because I had a feeling that that's where you were going. So I feel like it's all about the trade-offs, because
when you're building a certain application on a certain protocol, you need to assume what the worst-case scenario is that your protocol can handle, and whether that worst-case scenario is worth taking certain trade-offs. For example, if you're building a game that doesn't really have any monetary value, then you probably shouldn't use on-chain DA, because it doesn't matter: worst-case scenario, your game assets disappear, whatever. But if you're building an app that has billions of dollars deposited in it, I think then you should take more care with how you design things, because billions of dollars frozen in a contract that you can never withdraw from is a bigger problem than your game assets being frozen. Okay, so let's maybe switch gears a little bit. Here's another common question that we get from users. First of all, sometimes it's hard for them to differentiate between data availability and data storage, but let's assume, for the sake of this discussion, that we all know, and for the audience that doesn't know, maybe some of you can give a very quick introduction. The question is: on Ethereum, it seems like we have a very reasonable and strong guarantee of data storage, because there's this huge, vibrant ecosystem of different explorers and indexers and whatnot, and people just simply assume that it works. Now, if I use Avail or Celestia, how can it be guaranteed that in 12 months I will have access to all the data and I will be able to actually recreate the state? Yeah, so that's a general question about the difference between data availability and data storage. Even with sharding and danksharding, Ethereum also does not plan to guarantee the data forever: the current plan is to prune data, prune blobs, after 30 days, and the reason for that is that data availability everywhere, including Ethereum and Avail, is meant
to be a real-time bulletin board, to give rollups the opportunity to get their data out, to make sure it's published, so that they themselves can store it. So what is the difference? I would actually propose renaming data availability to data publication, because I feel that's easier to understand: data availability is about proof of publication, proving the data was published so that people can access it. Specifically in Celestia, at the moment we don't prune the data blobs; they're kept around forever right now, but at some point after mainnet the community will need to coalesce around the point in time after which data blobs will be pruned. That being said, even on Ethereum, even on networks where blobs are pruned, I actually still expect that the data will be permanently stored somewhere and permanently accessible to the public, simply due to the Streisand effect. Historical data storage only requires the assumption that a single person, ideally more, but the minimum assumption is a single person, is storing the data, and that's an extremely easy assumption to achieve on the internet. I think Bartek's argument was more that, because Ethereum has such a vast ecosystem, where you have RPCs, block explorers, etc., storing the data, it's highly unlikely, even if all the nodes prune the data, that the data will be lost. For example, I think it was Ripple that at some point lost a day's worth of data, because their servers somehow had a bug or something. So I think Bartek's question was more that, sorry if I'm rephrasing it incorrectly. Okay, I understand, but I argue that would also still happen on Celestia, and on Avail too: the cost of storing the data is cheap enough that multiple people will do it, even if the ecosystem is smaller than
Ethereum's. And the other very important thing to mention is that the whole point of having data availability sampling light nodes is that you're distributing data across thousands of light nodes, and that will also help somewhat with the data storage problem, assuming those light nodes are happy to store it for a longer period of time. Yeah, absolutely. If these light clients can make their way into wallets, for example, then we'll have the data mirrored from the DA layer onto the light client network, where it can be propagated and kept for a long time. The second point is the stack we have built on: we've built on Substrate, and Celestia is built on Tendermint, for example, and a lot of the tooling in these ecosystems is also compatible with the stack in general. In fact, we are starting to work with a couple of teams who want to put Avail data onto IPFS and onto Filecoin, for example. As Mustafa said, ideally you need only one copy of the data, but we've been working with a few infrastructure providers in the Substrate ecosystem, and the tooling and the ecosystems are more mature than, say, two, three, four years ago, in all of these ecosystems, Substrate and Tendermint for example. So we don't really have to build all of this tooling from scratch, there are a lot of providers out there, and as part of the architecture we already have a light client network that mirrors the data, so we don't anticipate the kind of problems that were just mentioned. Okay, so you represent what are, to my knowledge, the three best-known on-chain data availability solutions. But very recently, it seems there's a new kid on the block, and everyone seems to be
talking about it. I mean EigenDA. Do any of you have a spicy take on EigenDA, its trade-offs, and its role in this DA landscape? Before I go on to express my opinion, I would like to add that I don't represent the EF, just in case, so nobody kills me if I say something wrong; I just accidentally stumbled onto this stage. I think from the perspective of Ethereum, EigenDA makes sense, because it utilizes the existing validators, or a subset of them, to offer extra capacity that is much cheaper than on-chain DA. Obviously the security guarantees are very different, so it's not directly comparable. Yeah, and what are those security guarantees? Isn't this just such a fascinating community? Yeah, I'll make things spicy. First of all, there are no docs for EigenDA, so it's very hard to even compare. People keep saying, oh, Mantle claimed they launched their mainnet on EigenDA, but other people are saying no, it's not actually live, and I have no way to verify that because there are no docs anywhere. When there are docs, it will be easier to compare, but from what I know so far, there is a varied set of trade-offs. First of all, I'm very skeptical of the idea that there's enough demand for restaking services. I think there's plenty of supply, a lot of supply of validators wanting to restake; I'm very skeptical there's demand, because it's very hard to bootstrap your rollup if, to convince validators to come and restake, the initial rewards only come from fees, which might be very small at the beginning, and the whole point is that the fees are supposed to be cheap in the first place. Secondly, as far as I know, EigenDA requires a dual
token model anyway, because you can't slash data withholding on-chain, and so it kind of defeats the whole point of restaking, in my opinion, because you have to restake not only ETH but the EIGEN token as well. Yeah, because you can't slash data withholding on-chain. So, I was about to add to this. I'm still not sure that's a very sound model, data availability based wholly on crypto-economic guarantees, because essentially the assumption is that if the data is withheld, then the price of the staked EIGEN token is going to drop, and that acts as a disincentive. Essentially you can see it as implicit staking, like the model Chainlink used before, and I'm not really sure that's a strong enough guarantee for a lot of applications. Obviously, as I said before, for some applications it's fine, but for a lot of them I'm just not sure, because it's also potentially vulnerable to griefing attacks, the same way Metis's DA solution is. So yeah, I'm not sure; I don't want to attack the EigenDA team when they're not here to defend themselves. Yeah, that's true. I mean, I think Sreeram was at last year's Summit and I really like him, but we have to get to see a working system. I haven't seen one, so I would reserve my judgment until then. And restaking itself, not just EigenDA, is a pretty neat idea, but I'm still skeptical of how it will actually work in production settings: how many validators sign up, what the actual security is, griefing attacks and such. So I would want to reserve my judgment; I want to see something working before worrying too much about it. Actually, from my point of view, what I would love to see more of is thorough security analysis of such potential attacks, like griefing. We had a
discussion about that about two years ago, or maybe it was a year ago, when Metis launched their so-called smart L2 with optimistic DA. For those who don't know, they seem to publish data directly to off-chain storage without actually posting it to a DA layer, and if anyone notices that the data was not published to the storage, they can in theory challenge the sequencer and somehow force the sequencer to post the data on-chain. The cost of such a solution, as you can imagine, is extremely low, and fees on Metis were very, very low. I was surprised that very few people actually bothered discussing that from first principles: what are the security trade-offs there, and is that scheme actually viable for anyone to consider? The community was largely silent. Are you talking about... I looked at Metis's profile on L2BEAT before this talk, and the idea is to use a data availability challenge, right? The problem with data availability challenges, well, as far as I know the challenge mechanism isn't actually finalized or deployed yet, but the general problem with data availability challenges is that they don't solve the data availability problem. That was actually the first thing that people like Vitalik looked at to solve the availability problem, but it doesn't solve it, because of something called the fisherman's dilemma. Data unavailability is what's called a uniquely non-attributable fault, which means that when a challenger challenges data availability, it might be their own fault that they can't access the data, or it might not, because maybe their network connection isn't good. That basically creates a dilemma, it's explained on the Ethereum wiki, where you either have a situation that allows a DoS attack, or there's not enough incentive to make a challenge. There's no
incentive to make a challenge in the first place, because the data publisher might release the data only after you make the challenge, so you're basically being griefed; there's no way to make it economically sound. Yeah, it's basically as Mustafa described: because the fault is not attributable, it's impossible to assign blame here. Say I create a challenge and you post the data: you could have withheld it before, which is why I initiated the challenge, or I was just not in sync with the network, which is why I didn't receive the data. That creates a long-term griefing vector where, essentially, you start the challenge and stake a certain amount, I post the data once you initiate the challenge, you get slashed, and you continue to get slashed until nobody is willing to challenge anymore, because every time, the sequencer just reveals the data after the challenge, so there's no reason to keep challenging long-term. It's fine if you consider the optimistic case, but in the worst-case scenario this whole security model basically breaks apart. Okay, so I have another question from a completely different angle. You mentioned, and this is the thesis we've been hearing and will keep hearing, especially at this conference, that it will very soon be very easy for anyone to launch their own construction, their own rollup, with very custom security parameters. Even today, understanding the security assumptions is extremely hard for users; for us as an org trying to understand them, it's a moving target we constantly need to chase. How do you see the future with potentially thousands of rollups being launched? How do you envisage that users will actually be able to tell what the security assumptions are, and how do you see the role of people
like us in this whole space? I'm very curious, because frankly I'm terrified about the future you're describing, from my perspective. So the way I think about it is that there's a wide variation in rollup architectures today, but you're increasingly seeing all the rollup stacks tending towards standardization. For example, zkVM implementations are going in a similar direction, and that process will apply to the entire rollup stack pretty quickly, faster than anticipated. As I said, a lot of the major players are pushing their standardized stacks to rollup-as-a-service providers, for example. So what we feel is that rather than a wide variation of rollups, there will be a set of rollup stacks that are pretty much standardized, and deployment will not mean a thousand different rollup configurations; it will be maybe five or ten different configurations, but a lot of instances of each. So it will be easier to reason about. Of course it's not going to be as easy as I make it out to be, but yeah. People should just use L2BEAT, that's the solution. You know, I'm relying on you guys; you're doing a good job so far of being incredibly neutral, a trusted way for users and developers to get quick information about the security trade-offs between different DA layers and rollups. I think we opened up a can of worms, and I blame Jon Charbonneau specifically, by writing that article about social consensus and how rollups are real, because for the last three weeks or a month I've been hearing people make ridiculous claims. Like Anatoly, I love him, but his argument that if I post
data on Ethereum I all of a sudden become an L2 just doesn't really make sense. So I feel that before we start discussing how many rollups are going to be built on this or that protocol, we need to work together to define what a rollup actually is. I don't really buy the whole social-consensus framing, because you can attribute literally anything to it, and then all the concrete properties we have about consensus protocols, signatures, etc. can just be hand-waved away through social consensus. So I think we need to start working on that first, before we discuss what's going to be deployed where. And we've just run out of time, so that might actually open the discussion for an entirely different panel. Thank you so much, guys, it was lovely to have you all here, unless you've got one last closing comment. I mean, there's the one question you've got to ask, you know which one, but I guess we'll skip that, unfortunately. All right, thanks so much. Thank you very much. Thank you. [Applause] Which one should I use? Yeah, where is it? Do you have it? Oh, it's right there. All right, I am really excited for this next talk. I work very closely with this next guest, and I know firsthand the passion, belief, and conviction he has behind the topic he's about to speak on. Please welcome Nick White, COO at Celestia Labs, to talk about why light nodes are more than just a meme. All right, it's so nice to be around other people who are as passionate, and also as nerdy, about modular blockchains as I am, and as our team is. It's also really humbling to have seen the growth of the modular ecosystem in the last year since the first Modular Summit, and it's all thanks to you out there in the audience, the builders and the believers who are turning the vision of modular blockchains into reality, day by day, by building all this stuff. So today my talk is going to be
about light nodes. Let me see if my clicker is working... doesn't seem to be working, let me try this one. I think we're having a little technical issue. Try it again? Huh. Oh, okay, there we go. So yeah, my talk is going to be about light nodes. If you were on Twitter in the last few months, you may have seen a bunch of these memes circulating: people joking about running Celestia light nodes in the club or in their cars, and for a few days Twitter was just filled with all these memes about Celestia light nodes. Honestly, it was hilarious, it was awesome, I was stoked, and I'm glad the community was embracing light nodes. But at the same time I had a little fear that people were going to miss the deep significance behind light nodes and what they represent. Light nodes are not just a meme; they're actually a movement, and they're crucial to the success of modular blockchains, and therefore to crypto, and also therefore to the future of mankind. To explain why I'm saying that, we have to cover some first principles and get really high-level for a second. I love this quote by Yuval Noah Harari in his book Sapiens: he says that humans have the unprecedented ability to cooperate flexibly in large numbers. This is something you might not think about on a daily basis, but if you were alone as a single human, there's not much you could do; you probably wouldn't even survive that long. But when we get together in groups and cooperate, we're able to achieve some pretty incredible things, and this is a feature of human beings that really sets us apart as a species and makes us as successful as we are today. One way to think about this, a good analogy, is the difference between a single-celled bacterium and a multicellular organism like a human being. When cells in biology get together and cooperate, they're able to build much more powerful, complex things, and the same is true for human beings. So
in nature, human beings can't actually cooperate in group sizes above about 150, the so-called Dunbar number, and this brings us to a really important concept that we're going to be talking about for the rest of this presentation: the notion of social scalability. Within any given system of cooperation, there's a limit to the maximum group size it can support, and one of the really cool arcs of history has been that, over time, human beings have been able to transcend this Dunbar number and continuously find systems in which they can cooperate in larger and larger groups. Each time we're able to innovate and take a step towards larger-scale cooperation and coordination, we get an increase in prosperity and big benefits for everyone. In ancient times, in prehistory, we would collaborate in groups on the order of a few hundred people, tribes; then, as things got more advanced and we had better systems of cooperation, we could cooperate at the scale of city-states, tens of thousands or maybe a hundred thousand people; and in modern times we are cooperating at the scale of nation-states, millions, tens of millions, sometimes hundreds of millions, or in some cases a billion people within a nation. This has been the arc of progress for humanity. So now I want to talk a bit about how our current technologies and systems of cooperation work. The way they currently work is that everyone starts out with a set of rules: laws, the Constitution, the Bill of Rights, or the bylaws of a corporation. You have a set of rules that everyone in the system agrees to follow, and this sets the ground rules. But unfortunately it doesn't end there, because you can't just trust everyone else to actually follow the rules as written, so you need some way of enforcing the
rules. So in a lot of our current systems you have to empower a person or a group of people to be the rulers; they're given a special power to enforce the rules on everyone else. This works, but it comes with a really deep flaw, and that flaw is summed up in the question: who watches the watchmen? In other words, who is enforcing the rules on the enforcers? And the answer is really no one; you're just trusting them. The problem with that is that they often have an incentive to cheat, and that's what corruption is, that's what fraud is: when the watchmen are not actually being watched and they go off and do their own thing. So this is a really deep problem in our current systems of cooperation, and we got a really timely reminder of it last year with what happened with FTX, unfortunately. Although a lot of people interpreted FTX as an example of why crypto is broken, it's actually the exact opposite: it's an example of why we need to build better systems of cooperation, build blockchains, and decentralize things. Really core to this whole space is the idea that trust doesn't scale, or at least that trusting other human beings to be honest is not a scalable form of cooperation. That's why in the blockchain space we constantly use these words: trust-minimized, trustless, decentralized. All of them point towards the fact that the problem blockchains are trying to solve is removing trust from our systems of cooperation, because trust represents a vulnerability, a weak point that can be exploited. If we want to achieve a greater degree of social scalability, we need to find something better, and the answer is trusting in cryptography rather than in other people. So how do blockchains work, and how are they different from our legacy forms of cooperation? Well, they start with a set of rules, just like the previous example, and oftentimes this is
called a social contract. But instead of having a particular group or individual who gets empowered to enforce the rules on everyone else, it's actually the people, the users of the blockchain, who enforce the rules directly. You'll notice that there isn't that power hierarchy anymore; it's a direct relationship between the people and the rules, and this is really powerful. Unfortunately, a lot of people map the old mental model of cooperation onto blockchains and assume that it's the validators who are given this privilege, that they get to enforce the rules, that they're the new rulers in this blockchain-based world. That is not the case, and if it were, there would be nothing special about blockchains; they'd just be a repeat of our old systems of cooperation, and we could all forget about building them and go home and give up. So this is really not how it works, and I think this is probably one of the most widespread misconceptions in the space. So how do users actually enforce the rules directly? It sounds kind of weird, or confusing, how that could possibly work. Well, the way users enforce the rules of the chain is that they run a node. A node is a computer program that has an understanding of what the rules of the blockchain are, and it constantly audits and verifies the chain to make sure none of the rules are being broken; if the rules do get broken, it stops the violations before they happen. Running a node is how users enforce the rules of the chain, and this gives blockchains their fundamental value proposition, their superpower: blockchains enable rules without rulers. Using blockchains, we're able to dissolve the power hierarchy and remove the need for a trusted, powerful entity in the system. Now that, rather than trusting other human beings to be honest, we can just trust in cryptography, all of a
sudden we can scale our cooperation. We can achieve a social scalability that extends to the entire planet, because everyone can agree on trusting in cryptography and in verifying cryptographic proofs. Now, this sounds amazing, but unfortunately there is a catch, and the catch is that even though blockchains achieve very high social scalability, they don't achieve very high technical scalability. Technical scalability is, basically, how many transactions they can process per unit time, and so how many users they can actually support; this is why, for example, Ethereum is always congested. To understand why blockchains are not very technically scalable, you have to understand what's actually happening under the hood inside a node. When you're running a full node and verifying the chain, you have to download and verify every single transaction that every other user sends. Unfortunately, users are just people, normal people like you and me; they have a certain amount of bandwidth with which they can download transaction data and a certain amount of compute power with which they can verify those transactions, and that sets a finite upper bound on the throughput of the chain. If you exceed it, then all of a sudden the users can't verify the chain anymore, and you lose the property of social scalability. So when a lot of chains are, quote, scaling, what they're doing is increasing the node requirements. If you increase node requirements, obviously you can run the chain faster and process more transactions, but then end users like you and me, average people, get priced out of the ability to verify the chain, and instead of running a full node they have to run a light client. What a light client does, basically, is assume that the validators are honest, assume that the validators are honestly enforcing the rules, and even though it's very technically scalable to run a light client,
that's not very socially scalable, because you're reintroducing the power dynamic and the trust vulnerability by empowering the validators once again. That's why blockchain scaling is so difficult: you have this dilemma between social scalability and technical scalability. You want to increase the throughput of the chain so you can support lots and lots of users; that's what we all want, we all want mass adoption, we all want everyone to be able to use the same chain. But the problem is that in the current paradigm we can't do that without also sacrificing social scalability, and social scalability is the whole reason we're building blockchains in the first place. This is why, when Bitcoin forked, the Bitcoin fork that survived was the one that maintained social scalability. So we had this promise of, hey, we can rebuild a trust network in cryptography that scales to the entire planet, but then it turns out we can't really get there, because we need to achieve both technical and social scalability at the same time to make that practical. That was the state of things up until 2018 and 2019, when a galaxy-brained dude named Mustafa Al-Bassam, who was just up here, published two papers that described an entirely new way of building blockchains, which are called modular blockchains. At their core, modular blockchains define a new way for people to verify blockchains that can be both technically and socially scalable. The way it works is that, unlike in a monolithic chain, where you have to download every transaction and verify every single one, in a modular blockchain you just have to download a tiny sample of the transaction data, and then you can verify a proof that the transactions are valid, that they followed the rules of the chain. In doing this, you're
able to verify lots and lots of transactions that other users are sending, but without having to do so much work, and, really importantly, this technique scales sublinearly with the number of transactions, so it fundamentally resolves the dilemma. The result, and the whole purpose of this talk, is a new type of node called a light node, which has the security and decentralization properties of a full node but the technical scalability of a light client. Light nodes are the core innovation of modular blockchains, and they're how we can actually have blockchains that scale both technically and socially at the same time. You can see this represented in the specs for a Celestia light node as compared to nodes in other ecosystems: in many cases, orders of magnitude fewer resources are required to verify the chain. That brings me to a really important value of the entire modular blockchain movement, not just Celestia, which is that we believe anyone, anywhere should be able to verify the blockchain; they should be able to run a light node and enforce the rules of the chain themselves. That's why we have targeted the specs of the Celestia light node so it can be run on a smartphone, because a smartphone is the most widely adopted device in the world: an estimated 5.3 billion people worldwide own one. If every one of those people can run a light node, then we can achieve a maximum degree of social scalability: all 5.3 billion of them could be connected together in a cryptographic network of trust, and with modular blockchains we could actually achieve this holy grail of scaling cooperation to the entire planet. Now, again, it's a really big vision, and I firmly believe in it, but I'm not going to lie to you: there's still a lot of work that needs to be done. At the protocol level, and this is a call to action to all the protocol engineers and builders out there, there's a ton
of work we still need to do to make light nodes technically possible. I mean, they are technically possible, we have Celestia light nodes working on testnets, and soon on our mainnet when we launch, but there are so many other layers of the stack: we need better proof systems, and all the things listed here are going to be really crucial to build out to make this vision a reality. So there's a lot of engineering still to be done. There's also a component of friction: we need to make it really, really easy and simple for users to run light nodes. The current status quo is that you have to open a terminal window and use the command line, and that is way too much friction for the average person, as you know; if it stays that way, we're not going to get very far in terms of adoption. Fortunately, there are amazing people like Josh on our team who have been building desktop apps, and hopefully in the future mobile apps, so that it can be just a few clicks to start up a light node, and I think that's really where we need to be moving. The last thing is that I think we need to embed light nodes into user-facing infrastructure like wallets, so that, in the same way the privacy community has its notion of private by default, the modular blockchain community can have a notion, and a value, of verify by default. And lastly, we actually do need to make light nodes into a meme. Light nodes need to be embedded deeply within our culture: for those of us who believe in modular blockchains, running a light node should become a habit, a ritual, even a rite of passage for being part of what we're building and what we're doing. If you haven't yet run a light node, we have a booth outside and we're also teaching a workshop tomorrow, so I invite you to join, give it a try, and be baptized into the modular movement. The amazing
thing is that the meme is working. As you've seen, people are actually putting light nodes to use in practice, running them in all different kinds of places in the world, on planes, on boats, at the pyramids, and on all kinds of devices: Kindles, gaming consoles, and, in one instance, in a car. So it's cool, there's a cultural momentum that we have, and we need to keep it going, and I really hope you do your part and participate. Now you might be wondering, okay, this is really cool, but why should I care? Why should I go through the effort of running a light node? It's a busy world, I only have so much time in my day, why should I bother? Well, the answer is that you should care, and you should make the effort to run a light node, because we're living in a world where our rights and our freedoms are being increasingly infringed upon by people in power, and even if it's done with the best intentions, it's really dangerous and something we shouldn't tolerate. We need to run light nodes as a way to stand up for our own rights and build a system in which we can't be taken advantage of. We've seen this all over the place: we've seen, in Canada, the financial system used as a way to suppress protests by freezing people's bank accounts; we've seen, with the Twitter Files and a lot of recent news, that Facebook and many social media companies have been censoring information because of government requests; and there's widespread surveillance on the internet. I don't think it's a coincidence that trust in our institutions is at all-time lows. I think people are waking up to the realization that the people in power don't always have our best interests at heart. The amazing thing is that when our ancestors had to defend their rights and their
freedom, they had to pick up weapons and go to war; they had to put their lives on the line and fight for what they believed in. The beautiful thing is that with light nodes we have a peaceful alternative. We can build a new fabric of society, and build our own rules and our rights, in a way that can't be compromised, and we don't even have to fight; it's just software, it's just code. And last but not least, it's not just about defending your rights: running light nodes is an opportunity for us to take the next leap to large-scale cooperation and coordination and rebuild a network of cryptographic trust on which we can build a more peaceful, prosperous, and democratic society. So, I actually happen to have, this is kind of complicated, a node running on my phone right here; hopefully you can see that. What I want all of you to do with me is reach into your pocket or your bag, pull out your phone, and hold it in the air with me. Let's go, guys. Keep them raised. I want you to realize that what you're holding in your hand is not just a phone or a camera or a computer: it's the key to your rights, it's the key to your freedom, and it's the key to a better future for humanity. So thank you very much. [Applause] Wow, that was incredible, give it up for Nick. To close out our last session right before lunch, DAS Broadband is the title, and please welcome Alex Evans from Bain Capital Crypto to the stage. [Applause] Hello everyone, I'm Alex Evans, I work at Bain Capital Crypto. I'm going to first try to figure out how to work this clicker... I think that's fine; hopefully it's not running a light node or something and freezes halfway through. We talk a lot internally about applications and infrastructure and the ways in which they interact, and we're particularly interested in ways those interactions might change, particularly with modularity being a key
driver. I want to share some of those ideas, harebrained or otherwise, with you all today. I'd say for most of my time in the space, the nature of the interactions between applications and infrastructure has been mostly harmonious, with some key exceptions. Some of them are listed on this slide and coincide with the ends of the last two respective bull markets, and they're remarkably parallel stories: right at the end of a cycle, an application draws in a lot of excitement and fervor and interest, crashes all the infrastructure, a bunch of people get frustrated, we get some cool scalability ideas out of it, and the application developers go off or want to launch a new chain. And I'd say this four-to-five-year transition occurred while Ethereum got meaningfully better as infrastructure in the meantime. The block limits, unlike some other blockchains', increased at roughly the rate of Moore's law, actually a little higher with burst limits given EIP-1559, which was a major improvement, and there was roughly a 4x reduction in calldata costs on a relative basis as well. And that's even before things like the Merge, the Surge, the Splurge, stuff that started happening later, in 2022.
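As a rough illustration of what a 4x calldata cost reduction buys (a sketch of my own; the specific gas numbers are assumptions for readability, not figures from the talk), here is the effect on the maximum data a single block can carry:

```python
# Rough illustration: how a ~4x cut in calldata gas cost raises per-block data
# capacity. The numbers (30M gas limit, 68 -> 16 gas per byte) are assumptions
# for this sketch, not figures from the talk.

GAS_LIMIT = 30_000_000        # assumed block gas limit
OLD_GAS_PER_BYTE = 68         # assumed pre-reduction calldata cost
NEW_GAS_PER_BYTE = 16         # assumed post-reduction calldata cost

def max_calldata_bytes(gas_limit: int, gas_per_byte: int) -> int:
    """Upper bound on calldata bytes if the whole block were calldata."""
    return gas_limit // gas_per_byte

old_cap = max_calldata_bytes(GAS_LIMIT, OLD_GAS_PER_BYTE)   # 441_176 bytes
new_cap = max_calldata_bytes(GAS_LIMIT, NEW_GAS_PER_BYTE)   # 1_875_000 bytes
print(old_cap, new_cap, new_cap / old_cap)                  # ratio ~4.25x
```

The same division also gives the "bandwidth" framing used later in the talk: bytes per block divided by block time.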
Right, but I'd say qualitatively the nature of interactions between applications and infrastructure during this period didn't change. Applications had a wholesale choice to make when choosing what infrastructure to deploy on. What I mean by that is: you embrace all the constraints as well as all the positive aspects of a piece of infrastructure by deploying on it, or you choose not to. You can choose to deploy on Ethereum or Solana or both, or some combination, or launch your own chain, but that's the type of interaction you have as an application developer with the underlying infrastructure you deploy on. And we think that's about to change. We use the term "change" here, and not necessarily "better": there are just qualitatively different types of interactions that we think will be available to both infrastructure and applications, enabled by succinct proofs, and in particular the horizontal scalability that succinct proofs in different forms enable. I include underneath that things like data availability sampling, optimistic systems, SNARK-based systems, and so forth. But just to make this idea very concrete, I'm going to go through two general examples, which are also a little bit of audience pandering, in that I know a lot of people in the audience work on one or the other of these two things, or some combination. I'll use SNARKs as an example in which this horizontal scalability of infrastructure enables new types of applications, and then I'll use data availability sampling as an example of changing the interaction between apps and infrastructure in a qualitative way. You could swap these two, but I just want to make it concrete, so I'll go
through these relatively quickly. Starting with modular ZK; by the way, I made these slides on the plane over, and I spent the last two days at ZK events with ZK content, and there's more ZK content today and some tomorrow. I think most people at this point, even just from the last two days, have seen this diagram. Realistically, when I was first looking at the space, you would see papers with entirely monolithic constructions for the most part, at least from where I was sitting: hey, here's my SNARK or STARK, it's fully featured, it's better than this other thing in the literature, asymptotically or concretely, or it uses fewer assumptions, please accept my paper into your conference. And I'd say over time, and in particular more recently, you'll see things come out that focus on just one of these components. What are these components? You start with a front end: a program written in some high-level language is compiled through a front end to a set of constraints; an interactive oracle proof reduces checking those to checking some evaluations of different functions against some commitments that a prover made; and using a functional commitment scheme or polynomial commitment scheme, and maybe Fiat-Shamir, you produce a short proof that you can circulate around a P2P network or post on chain or whatever. The point is that researchers and developers can focus on just one of these components and achieve material advancements to the state of the art, in some cases by advancing one sub-component of these. And then these get reassembled back into more general frameworks. I'd argue something like that happened roughly in the 2019-to-2021 era: things like PLONK came out, high-degree gates became a thing, and lookups, and interesting advancements in polynomial commitment schemes; they got reassembled again, accumulation in the sense of Halo, and ultimately swapping out PLONK and creating the framework Halo 2, which a lot of people use.
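To make the Fiat-Shamir step concrete, here is a toy sketch (my own illustration, not from the talk): a Schnorr-style proof of knowledge made non-interactive by deriving the verifier's challenge from a hash of the transcript. The tiny group parameters are assumptions for readability and are nowhere near secure.

```python
import hashlib

# Toy Schnorr proof with the Fiat-Shamir transform: the interactive verifier's
# random challenge is replaced by a hash of the transcript, yielding a short
# non-interactive proof. Parameters are tiny and insecure, purely illustrative.
P = 101          # small prime; the group is Z_P^* of order P - 1 = 100
Q = P - 1        # order of the exponent group
G = 2            # generator (2 is a primitive root mod 101)

def challenge(r: int, y: int) -> int:
    """Fiat-Shamir: derive the challenge by hashing the transcript (r, y)."""
    h = hashlib.sha256(f"{r}|{y}".encode()).hexdigest()
    return int(h, 16) % Q

def prove(x: int, k: int) -> tuple:
    """Prove knowledge of x with y = G^x mod P (k is the prover's nonce)."""
    r = pow(G, k, P)
    e = challenge(r, pow(G, x, P))
    s = (k + e * x) % Q
    return r, s

def verify(y: int, r: int, s: int) -> bool:
    e = challenge(r, y)
    return pow(G, s, P) == (r * pow(y, e, P)) % P

x = 37                       # secret
y = pow(G, x, P)             # public statement
r, s = prove(x, k=55)
print(verify(y, r, s))       # True
```

The same pattern, hash the transcript to replace interaction, is what lets these proofs be posted on chain or gossiped around a P2P network.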
These are general-purpose frameworks that combine the modular component improvements, and it marks a transition. At least when I was looking at the space in 2018, most people were using some sort of Groth16 variant, a marvel of computer science with ten years of work leading up to it, or maybe even more, but with circuit-specific trusted setup; and we moved to what people call more universal architectures, in the case of Halo, Plonky2, RISC Zero, things like that. We think this architectural transition has enabled two fundamentally different things. The first is a transition from roughly more specialized architectures to more universal architectures. I don't just mean this in the sense of not needing a circuit-specific trusted setup; I mean it in the sense that, very concretely, people are building zkVMs out of it, implementing an instruction set, the EVM, inside something that's provable. (I think I lost the slides. Okay, thank you; while they also boot up the light node in the meantime. Let's go back. Oh, and I think we're missing... okay, never mind.) All right, so the key idea is this transition to people building provable RISC-V chips and so forth; it's very concrete, zkWASM or whatever; we're going to more microprocessor-based designs. I know these things did exist in the Groth16 period, but fundamentally, if you look at zkVMs, they utilize these modular components, not just recursion but lookups, very extensively under the hood. So that's something that's been driving new types of VMs. We think the more interesting thing that's been enabled in succinct proofs is recursion; it has enormous economic implications for the types of infrastructure and the types of applications that exist thereafter. And again, if you look at something like Halo 2
or you look at Plonky2, a lot of these more second-generation universal proof systems roughly have the performance, on a single machine, of something like a 1970s computer. But recursion fundamentally enables you, because proving is very parallel, to add lots of machines, and as a consequence to amortize the cost of compute over a larger number of users. So the analogy I'm roughly drawing here is: in the 1970s, the types of applications and services that made sense in computing were mainframe applications, hence the System/370 analogy I'm loosely trying to draw. These machines were really expensive, but you could amortize them over a large number of customers in the enterprise. And if you look at most of what's happening in ZK, in terms of what's getting funded and what people are excited about, it's mostly selling succinctness in some form, in the case of rollups or other things. A lot of these, by the way, have been founded in the last two years or so, some a little older, especially on the rollup side. So the ability to add more machines and horizontally scale, without increasing trust assumptions, has enabled this renaissance of applications that sell succinctness, and this has been the area of largest growth in ZK in the last few years. This includes things like bridges and coprocessors, what I call integrated rollups that build both, quote-unquote, the processor and the rollup, but then also things that take modular components and assemble them the way an upstart PC manufacturer in the 70s would have done: using the OP Stack, or taking a chip from RISC Zero (which, for disclosure, we happen to work with as a portfolio company) and using it to build some ZK rollup.
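The "add more machines" point can be sketched with a toy aggregation tree (my own illustration, with hashes standing in for real proofs): leaf proofs are generated in parallel on separate machines, then combined pairwise by "recursive" aggregation, so total latency grows with the log of the batch size rather than linearly.

```python
import hashlib

# Toy model of recursive proof aggregation: each "proof" here is just a hash,
# standing in for a real SNARK. Leaves can be proven on separate machines in
# parallel; pairwise aggregation then needs only log2(n) sequential rounds.

def prove_leaf(statement: str) -> str:
    return hashlib.sha256(f"leaf|{statement}".encode()).hexdigest()

def aggregate(left: str, right: str) -> str:
    """Stand-in for a recursive proof that verifies two child proofs."""
    return hashlib.sha256(f"agg|{left}|{right}".encode()).hexdigest()

def aggregate_all(proofs: list) -> tuple:
    """Combine proofs pairwise (assumes a power-of-two count, for simplicity).
    Returns (root proof, number of sequential rounds)."""
    rounds = 0
    while len(proofs) > 1:
        proofs = [aggregate(proofs[i], proofs[i + 1])
                  for i in range(0, len(proofs), 2)]
        rounds += 1
    return proofs[0], rounds

leaves = [prove_leaf(f"tx batch {i}") for i in range(8)]
root, rounds = aggregate_all(leaves)
print(rounds)   # 3 rounds for 8 leaves: latency ~ log2(n), not n
```

The key economic point is that the verifier only ever checks the single root proof, no matter how many machines contributed leaves.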
Where before it used to take $10 million, or maybe more, to build one of these rollups, these frameworks have now made it a lot easier to just assemble them as a service. I'd like to contrast this with what's happening on the client side of the market, where you can't take as much advantage of horizontal scalability, of adding more machines, because fundamentally what you're selling to the client is usually privacy, so you're much more limited in how far you can exploit recursion capabilities. As a consequence, the types of applications you want to run there are fundamentally things you'd be comfortable running on a 1970s computer. Again, the 70s hardware analogy may be drawn out a little too far here, but the idea is that application-specific architectures were still more useful on that side. That said, there are really interesting examples of applications where people have been trying to take advantage of these new capabilities, like the ZK attested microphone that Anna and Kobi did; these things take advantage, under the hood, of things like lookups that are relatively novel capabilities in these systems. People are doing experiments in identity, in shared global state with private client-side information, and a whole bunch of interesting experiments are happening. But we think fundamentally we need more vertical scale to enable more interesting and expressive applications on the quote-unquote PC, non-mainframe, side of the market. Here are just a couple of examples of strands of literature that are continually producing crazy advancements. These are not all compatible with each other yet, and I won't go into each of them in depth: there are cool things on error-correcting codes, really cool things on FFT-free IOPs that work with PLONK, and customizable constraint systems, and obviously people have heard a lot
presumably about all the advancements in folding, a whole strand of literature spanning the last year, and big-table lookups. The only thing I want you to take away from this slide is that people are turning square-root terms into log terms, and shaving log factors; in a lot of more mature fields these would be breakthroughs. Now, these are asymptotic, algorithmic results; concretely, we'll have to see, and they're not all fully compatible with each other. But they are becoming more compatible, and very often the combination of these things, once you combine them, becomes greater than the sum of its parts, as we saw in the case of Halo 2 and Plonky2. That has been the history of the ZK space, and more generally the history of computing. So we think what's likely to happen, and this is aspirational at this point, this is not real, is a transition where, if 2019 to 2021 is a guide, these components get reassembled modularly into general-purpose architectures again, and these universal architectures are then more performant than what we've seen; aspirationally, sort of the Macintosh taking us into the 1980s. I've talked to some people who are working around Plonky3, and they can run it roughly at that level of performance, maybe a little slower: something that is a chip fully compatible with a high-level language like Rust. So maybe that takes client-side ZK into the 1980s, which is where PCs and more client-side things start to really take off. Maybe. It'll be fun if it does happen. Okay, let's switch gears and talk about data availability sampling. One of the ways we talk about this internally is as abundant, high-assurance, unopinionated bandwidth. The meaning of the word abundance should be
clear: there's just more of it. Assurance: Nick just gave a great talk on how we gain assurances by sampling, rather than fully downloading data, as a way of getting assurance as well. But maybe the point to motivate a little more here is this notion of unopinionated bandwidth that's available to rollups. Again, the paradigm we've been in, in terms of how applications scale and how infrastructure scales, is that infrastructure updates: Solana today is better than Solana last year, and hopefully next year it's even better. And then applications get this choice of which infrastructure to deploy on. Aave famously has the strategy of deploying on a bunch of different EVMs, a bunch of horizontal deployments in a bunch of different places, so you don't miss out on users and usage. Of course, if we take web2 as any guide: these are just stereotypically customer-focused companies from web2 that I pulled up, and all of them take some advantage of integration in the vertical stack. Netflix with Open Connect, Amazon doing fulfillment, Apple integrating into Apple silicon. As these companies scale, they have very strong opinions about how the customer interaction should work. They don't like to import third parties' opinions about how that interaction should work; they have their own. So it's quite likely that at least some applications, maybe not all, want to take advantage of more vertical flexibility, and importing the opinions of the infrastructure doesn't necessarily accomplish that for you. Specifically, what I mean is: right now Solana has a lot of bandwidth available, and the way you take advantage of it is to launch an application on there. This could change very soon, and I think a lot of people are pushing to change it; it's fairly easy to do. Ethereum is becoming more unopinionated in the way you use bandwidth, with blobs instead of calldata.
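As a toy illustration of the assurance-by-sampling idea (my own sketch, not from the talk): with 2x erasure coding, a block producer must withhold at least half the extended shares to hide any data, so each random sample a light node draws has at most a 1/2 chance of landing on an available share, and the chance of being fooled falls exponentially with the number of samples.

```python
# Toy DAS math: with 2x erasure coding, hiding any data requires withholding
# >= 50% of the extended shares. A light node sampling k random shares (with
# replacement, for simplicity) then misses the withholding with prob <= 0.5^k.

def fooled_probability(samples: int, available_fraction: float = 0.5) -> float:
    """Chance that every one of the samples happens to hit an available share."""
    return available_fraction ** samples

for k in (8, 16, 32):
    print(k, fooled_probability(k))
# 16 samples already push the failure probability below 1 in 65,000
```

This is why a swarm of cheap light nodes can collectively give assurance comparable to downloading the whole block.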
And so we're becoming less opinionated in the use of bandwidth overall. The other thing that's happening, and I promised Nick this slide, is that the supply of unopinionated bandwidth is increasing pretty rapidly over the next few months and years. Again, if you look at Ethereum, at roughly 100-kilobyte blocks and roughly its block time, you're at about a 56k-bit modem from 1996, and you're about to go to a fairly more modern broadband connection. And of course interesting applications come out of transitions like this; web2 came out of a similar transition to broadband, within a few years of it. There's also more vertical scale that can come from lots of planned improvements along these lines, like proto-danksharding and then danksharding: really cool things in how you do client-side decoding, really cool peer-to-peer and networking problems in how you discover people that have samples in the network, cool things in how you prove that you've done an encoding. All of these improve things more vertically. But what's much more interesting to us, and Nick put this much more eloquently than I did in talking about user-facing wallets and applications running light clients, is that there's a qualitative increase in the assurances that everybody else in the network is getting, and it may even allow you to increase block size and performance over time. So that's a much more virtuous interaction between applications and infrastructure, where the infrastructure can scale horizontally as the application chooses how much vertical integration to do over time. We noted something similar in the case of recursion and SNARKs: you can add more machines, but the level of assurance doesn't decrease, and that enables more virtuous interactions between applications and
infrastructure. So, aspirationally, we think this is the paradigm that allows infrastructure to scale a lot in terms of capacity, while applications get a lot more control over the stack they ultimately deliver to the end user. We're very excited to see what types of applications, and what types of interactions between applications and infrastructure, emerge from this paradigm. Are we doing questions? I did save some time. Like a minute? No? Okay, thank you [Applause] All right, our morning has concluded. We're going to break for lunch; be back in an hour. Hello, hi everybody, welcome to the ZK track at Modular Summit. My name is Anna, I'm the host of a podcast called Zero Knowledge, but I'm also one of the co-creators of the ZK Validator, and today we've programmed all of the talks this afternoon. Just a quick word about ZK Validator: we're a mission-driven validator, we're on over 12 networks, and we will be on Celestia. What the ZK Validator does is support ZK through various initiatives: educational initiatives like ZK Hack, investments and grants, governance, regulatory engagement, and building infrastructure. So we curated the afternoon for you; we have some fantastic speakers talking about very relevant ZK topics. I will not be the moderator for the whole day; my colleague Agnieszka will be up here. But I want to introduce our first speaker, Brian from RISC Zero, who will be talking about Bonsai, a verifiable and ZK computing platform for a modular world. Welcome to the stage [Applause] Thanks, thanks Anna. Oh, okay, well that's the old name of the talk that I was going to give, that's fine. Anyway, I'm going to talk about Bonsai, and we're also going to talk a little bit about voting. How do I advance the slides? Okay, yes. So I'm going to be talking about Bonsai: it's a verifiable and ZK (I'm trying to use the new terminology here) platform for a modular world. Bonsai basically lets you prove massive computations
for all kinds of applications; we're going to talk about a bunch of those in this talk. It's now actually available for testing, and if you want to get access to it there's a link at the end, so please feel free to apply. And Waking, if you're out there: yes, we'll get you a key. I do want to mention briefly that Bonsai is built on top of the RISC Zero zkVM. I don't want to explain exactly what that is, because I'm assuming most people know the general idea of zkVMs: it takes a program and produces a ZK proof. In this case the RISC Zero zkVM is a sort of virtual RISC-V chip, which lets it run any kind of normal program, and people have built a whole bunch of interesting hackathon applications on top of it. We've just recently added a pretty amazing capability we call continuations. This lets you split proofs into any number of chunks and prove them independently, which lowers memory requirements, lets you roll things up in parallel, and lets you prove arbitrarily long computations, so there's no longer any kind of cycle limit. One of the things I was trying to think about before I made this talk was: what are the themes of the Modular Summit, and what's going on in blockchain right now? The thought that came to me is that an ecosystem of any kind can only be as decentralized as its tooling lets it be. My friends and I started the first makerspace in Seattle, like 15 years ago, and the amount of tragedy-of-the-commons that went on in that situation, and how difficult it was to manage, I really see as a lack of collaborative tooling, and of ways to assign value to things that are a lot more fluid than the tools at the time supported. So I feel like the more we can all do to build capabilities in this ecosystem, and especially to focus on ones that allow us to collaborate with each other, the better.
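The continuations idea mentioned a moment ago, splitting one long execution into independently provable segments, can be sketched like this (my own toy model: hashes stand in for real segment proofs, and the chaining check stands in for proof composition):

```python
import hashlib

# Toy model of continuations: a long computation is split into segments, each
# "proved" independently (a hash stands in for a real proof), and segments are
# glued together by checking each one starts from the previous one's end state.

def run_segment(state: int, inputs: list) -> int:
    for x in inputs:          # the computation here is just a running sum
        state += x
    return state

def prove_segment(start: int, end: int) -> str:
    return hashlib.sha256(f"{start}->{end}".encode()).hexdigest()

def prove_long_computation(inputs: list, chunk: int) -> list:
    """Split inputs into chunks; each (start, end, proof) is provable alone."""
    segments, state = [], 0
    for i in range(0, len(inputs), chunk):
        start = state
        state = run_segment(state, inputs[i:i + chunk])
        segments.append((start, state, prove_segment(start, state)))
    return segments

def verify(segments: list) -> bool:
    state = 0
    for start, end, proof in segments:
        if start != state or proof != prove_segment(start, end):
            return False
        state = end
    return True

segments = prove_long_computation(list(range(1, 101)), chunk=25)
print(len(segments), segments[-1][1], verify(segments))   # 4 5050 True
```

Because each segment only depends on its start state, the four segments here could be proven on four machines at once; that is where the "no more cycle limit" property comes from.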
I think the more decentralized we can be, the more we'll be able to benefit from the advantages of those things. So these are things that I think the modular ecosystem benefits from, roughly: interoperability; capability, the ability to do anything and run multiple execution layers; diversity, having lots of different projects and different clients for the protocols that are out there; and obviously customizability. Bonsai is a general-purpose verifiable and ZK computing platform, which turns out to be very useful for enhancing the ability of any system or protocol to do those kinds of things. Obviously ZK is a huge part of many of the interoperability projects that have come out recently, and you can see how critical ZK technology has been for allowing multiple chains to actually communicate with each other in a low-friction and efficient manner. ZK has brought all kinds of capabilities to different ecosystems through all of the ZK coprocessing that's going on, and with the kinds of things you can do with Bonsai you can build light clients with ZK, and you can also build fraud-proving systems without doing much extra work, which I think gives people a lot more agility when it comes to getting systems that might currently be too complex for ZK out onto the market in a way that can really serve their users. All the great things you can do with ZK are obviously only useful if they're actually accessible to developers, so Bonsai is really focused on making the most advanced features in ZK and advanced cryptography available to all of the developers in the modular ecosystem, across all blockchains, and also off chain. We've sort of always known we were going to build some kind of network or platform-as-a-service type offering, but the thing that's really crystallized for me over the past 18 months in this space has been how important it is to actually, you
know, build an ergonomic platform for developers, especially when you're dealing with something as difficult as ZK. So Bonsai's goal is to make it as easy as possible to do the most you can with ZK. What that means for now: Bonsai is going to be a lot of things over time. Initially, in the version that's available for testing now, and that will reach sort of 1.0 status in the fall, the core of it is a high-speed proving service that lets anyone submit proof requests, for instance for running Linux programs on top of Cartesi, on top of RISC-V. So you can actually verifiably prove the execution of Linux using something like Bonsai. You can do this on your own computer, it's just very slow; Bonsai has a very optimized machine in front for the sequential executor part, and can then spread computations out to thousands of machines. With that, very recently: I think the prior fastest RISC Zero zkVM execution speed we'd seen was about 100 kilohertz; we're now up to 2.5 megahertz, about 25 times faster, and we really expect that number to continue to go up over the near term. It's really amazing how quickly you can prove some fairly out-there things, and we'll talk about that in a bit. We've also built out a proof-relaying system. It's focused on Ethereum chains for now, but it's a full Foundry integration and template that lets you easily interact with Bonsai from inside your smart contracts. Along with that: we're a STARK-based system, we always have been, and that produces proofs that are too big to economically verify on Ethereum. So we've built, and this is brand new and I'm very excited about it, a STARK-to-SNARK translator for our proving system, so you can actually just post a single Groth16 proof for any RISC Zero computation. In the future we'll also let people save money by aggregating a bunch of
proofs and posting that for people, and that will still integrate completely with the relaying infrastructure. That will be coming up soon. We'll eventually probably make some kind of proof market, and we already have a bunch of light-client and Eth-proving stuff in the pipeline; there's going to be a lot more. Okay, these are not really case studies; they're some of our partners and, I guess, example applications and spaces we've started exploring. On the application side, the first one, which I talked about a little at ETHDenver, was a central limit order book that utilized Bonsai to run all of the matching logic off chain. The version of this that we'll eventually release, after we clean it up more, basically works by having people place their votes, sorry, their orders on chain, and then the smart contract does literally nothing with those except call out to Bonsai, where Bonsai ingests whatever orders have been placed, whatever orders are open, computes whatever matches exist, and then sends back effectively a set of settled orders, or orders that are no longer valid, etc. By doing that, we actually found that using this pattern is about 100 times cheaper than a pure EVM CLOB would be, and actually cheaper than Uniswap v3 by two to three times. Of course, people always ask: why on earth would you build an order book in ZK, isn't it going to be too slow? Obviously latency is a factor and it is kind of slow, but this runs pretty quickly, and I'll also talk about one of our partners that's doing some innovative work combining ZK with other techniques to resolve that. This next one is now available on our GitHub page under examples, slash voting I think, or Governor; we just posted it. It's a full app, built on top of the Foundry template I mentioned, that integrates with existing DAO voting frameworks.
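The order-book pattern above, orders collected on chain and matching computed off chain, can be sketched with a minimal price-time-priority matcher (my own toy, not the actual Bonsai example; in the real pattern this function would run inside the zkVM and return the fills together with a proof):

```python
from dataclasses import dataclass

# Toy central-limit-order-book matcher, the kind of logic the talk describes
# running off chain inside a prover. Price-time priority, partial fills allowed.
# This is an illustrative sketch, not the actual Bonsai order-book example.

@dataclass
class Order:
    oid: int
    side: str      # "buy" or "sell"
    price: int     # limit price
    qty: int

def match_orders(open_orders: list) -> list:
    """Return fills as (buy_oid, sell_oid, price, qty) tuples."""
    buys = sorted((o for o in open_orders if o.side == "buy"),
                  key=lambda o: (-o.price, o.oid))        # best bid first
    sells = sorted((o for o in open_orders if o.side == "sell"),
                   key=lambda o: (o.price, o.oid))        # best ask first
    fills = []
    while buys and sells and buys[0].price >= sells[0].price:
        b, s = buys[0], sells[0]
        qty = min(b.qty, s.qty)
        fills.append((b.oid, s.oid, s.price, qty))        # trade at resting ask
        b.qty -= qty
        s.qty -= qty
        if b.qty == 0:
            buys.pop(0)
        if s.qty == 0:
            sells.pop(0)
    return fills

book = [Order(1, "buy", 102, 5), Order(2, "sell", 100, 3), Order(3, "sell", 101, 4)]
print(match_orders(book))   # [(1, 2, 100, 3), (1, 3, 101, 2)]
```

The contract never runs this loop; it only stores the orders and later checks a proof that the posted fills are the output of the matching function.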
So it's an example of how you can use Bonsai to effectively replace gas-intensive components of any application. When people wanted to do this, I was kind of skeptical that it was going to be worth the time; it's actually 3x cheaper than the original application, and I was really surprised to see that. If you then take this kind of concept and start integrating something like Celestia to store your votes, you start to reach the capability to do a bunch of things on chain that were difficult before, especially when you couple it with account abstraction coming online. Hopefully people have seen some of the amazing demos people have built with that; I really think there's going to be a lot more capability to build applications that actually use blockchains, and that people actually use. So I'm really excited about the ability to get more information from people in a ZK, pseudonymous manner, coupled with identity feeds from Sismo or anything like that. A nice thing about this: we were able to build something that interoperates with existing UIs and applications in about three weeks, with one person, which demonstrates the efficiency gains of being able to use a general-purpose language for your ZK development. This next one is recent, and we've been working on it very heavily as part of our proposal to Optimism to help them with ZK, which I'll talk about in a second. Where we've gotten to now is that you can run an entire Ethereum block. This is still missing some optimization for precompiles, and it uses existing EVM implementations; I think we support revm and SputnikVM, and we'll support the rest at some point as well. So you can actually run an EVM that's been fully audited and is in use in many other places directly on top of RISC Zero. It takes quite a while, like two to four billion cycles; it takes
about 15 minutes on Bonsai, but that's going to go down pretty rapidly, I think. So we're going to be in a place where we're looking at maybe 20 cents to ZK-prove Eth blocks, full-size Eth blocks, without needing to even think about writing an EVM or about compatibility with other smart contracts. We expect people to start integrating this into various rollup frameworks; nobody's doing that quite yet, though, and I'm excited about it. Okay, I do want to talk about some integrations we've been working on, and this is a general category here. I was on a panel Monday, I think, and one of the questions was: okay, it's five or ten years from now and ZK has lost to optimistic approaches; why, what happened? I don't think it's an either-or question at all. I think optimistic approaches coupled with ZK approaches can produce a lot of value for the user in terms of supporting livelier applications, and then obviously throwing ZK on top of that supports fairly quick liquidity in and out of whatever applications you're using. So, Layered and the team over there actually built an example, actually they're pretty far along, of a fraud prover for their order book, and they basically didn't have to do any work: you just run the matching engine yourself on the data and produce a proof that the results that got posted on chain aren't the actual results. This lets you run your order book, or any kind of application, at the frequency you desire, and then gives you an opportunity to use ZK to tie it all up and give everyone confidence that everything's in the right shape moving forward. And indeed, this is a lot of what Optimism wants to do with their ZK support for Optimism; I'm very excited that we, along with the awesome team over at O(1) Labs, were both selected to work on
this, so very, very excited about that. One of our partners from the earliest days of our company and theirs has been the Sovereign team. They're building the Sovereign SDK, which is a rollup framework that's highly aligned with the modular ecosystem. I really love what they're doing: they've basically made a very sensible way to program blockchain applications, pure Rust, with tons of flexibility, and then they take care of all the difficulty of figuring out which parts of the modular stack to use, and where. RISC Zero is going to be one of their first adapters that actually supports doing ZK computations for Sovereign apps, and I'm very excited about that. Also, one of our great partners, and people we collaborate a lot with, is the team over at Eclipse. They're building a rollup framework as well. The work we did with them so far has been limited to a pretty amazing project, which is effectively doing ZK Solana executions by turning RISC Zero into a BPF prover. We got pretty far with that, and I'm really looking forward to starting to give people the ability to migrate code from Solana to wherever they want it to run, or to do rollups on Solana, even though in theory you don't need them. One of the most recent hackathon winners, I think this was at ETHWaterloo, built a really amazing and, I wouldn't say simple, I mean they did it in a hackathon, demonstration of using some new authentication standards along with ZK to let you immediately make a wallet that's based on a biometric identifier. (Okay, thanks, two minutes.) So this lets you make wallets without worrying about seed phrases or anything like that, and you can see this is an amazing entree into making crypto accessible to a lot more people. That's pretty much all I have; there's a bunch of
links here if people want to get to know more about Bonsai and what we're doing, and if you want access to the Bonsai API you can go ahead and apply at that link up there. Thank you. [Applause] Thank you very much, Brian. And now I would like to introduce to the stage Zach Williamson, who's going to talk to us about writing the code, not the circuits. — Hello, hello, are we good? What is this — yes, it works, wonderful. Hello everyone. So yeah, I'm Zach, I'm the CEO of Aztec, and I'm here to talk about — basically — can I go back? Does this even work? Never mind, okay, we're not going backwards. So I'm here to talk about how one turns code into ZK circuits, specifically with an angle towards privacy. Privacy is hard. The goal of what we're doing at Aztec is to enable users to write smart contracts where you can have genuinely private state variables inside them — where they're encrypted, you can still do logic on them, but only owners of those variables, with the decryption keys, can actually see what's inside. And so we've spent a lot of time over the last few years basically trying to figure out: how do you take this key foundational technology of zero-knowledge proofs and actually present it in a way that gives you the benefits of privacy, whilst making it accessible to developers without requiring cryptographic knowledge — basically boiling down all the complexities to do with ZK and privacy into a relatively simple set of heuristics. And so the goal is to create rather composable, modular abstraction layers that convert the code of a smart contract — and the consensus algorithms you're using to verify its correctness — into algebra, into proofs. So these are the abstraction layers that we've come up with, at least, when it comes to how to do this. We're taking a very different approach
to Brian and RISC Zero, basically because of the private state model. Once you want to create blockchain transactions with private state in the mix, you can't take an existing architecture like Ethereum and just wrap it in a ZK prover and call it a day, because even if your state is encrypted, the act of modifying it still leaks the transaction graph. So we basically had to build a lot of this from the ground up. Starting with: what do you need to turn source code into snarks? You need a cryptographic backend — some kind of proving system that will construct and verify general proofs. You then need some low-level language that you can use to convert programs into circuits. The idea here is that we have this abstract circuit intermediate representation, ACIR, which is our attempted LLVM for snarks. It basically describes generic-ish constraints that are snark-friendly, and the idea is you can compose a program out of these. You wouldn't write a program directly in ACIR; the goal is that you take language frontends — like Noir — which present a nice programming language with clean semantics, and those get converted into the intermediate representation. A bit like how you take Rust — Rust is a language frontend that compiles down to LLVM — Noir is the language frontend for ZK that compiles down into ACIR. The goal is for this to be very modular, so you can swap out various proving systems — like Halo2, or our Aztec software Barretenberg, or Arkworks — to fit your own custom needs and whatever tuning you actually need for your language. And then you need a program execution environment and a transaction execution environment — basically the entire network and architecture infrastructure around sending transactions to a distributed network. So yeah, that's the map; let's start at the bottom: zero-knowledge proofs.
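The layering Zach describes — a language frontend lowering source into a flat constraint IR, with interchangeable backends consuming that IR — can be caricatured in a few lines of Python. Everything here is made up for illustration: the real pipeline is Noir lowering to ACIR, with backends like Barretenberg producing actual SNARK proofs rather than just evaluating.

```python
# Purely illustrative sketch of frontend -> IR -> pluggable backend.
# Nothing below produces real proofs; the "backend" just evaluates the IR.

def frontend_compile(source):
    # Hardcoded lowering of one toy program, "out = x*y + z", into IR ops,
    # standing in for what a real frontend like Noir does when emitting ACIR.
    assert source == "out = x*y + z"
    return [("mul", "x", "y", "t0"), ("add", "t0", "z", "out")]

def naive_backend(ir, witness):
    # Any backend that understands the IR can be swapped in here; a real one
    # would emit a SNARK proof instead of returning the evaluated output.
    vals = dict(witness)
    for op, a, b, out in ir:
        vals[out] = vals[a] * vals[b] if op == "mul" else vals[a] + vals[b]
    return vals["out"]

ir = frontend_compile("out = x*y + z")
print(naive_backend(ir, {"x": 3, "y": 4, "z": 5}))  # 3*4 + 5 = 17
```

The point of the IR boundary is exactly the one made above: the frontend never needs to know which proving system sits underneath.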
Who wants to do some math? I didn't hear you — who wants to do some math? Yeah, I know, it's the end of the week, we're a little bit tired. So this is nonsense — well, it's not nonsense, but — a zero-knowledge proof is: you have a prover, you have a verifier, some statement with some public inputs and secret inputs, and the goal is to prove that your inputs belong to some defined relation — for example, that you may have run some sort of algorithm. This is not a program, this is not a smart contract — this is weird, messy algebra, and it's a pain. It satisfies three fundamental conditions: completeness, soundness, and zero knowledge. Completeness is basically that an honest prover can always make a valid proof — the verifier won't reject a good proof. Soundness means that a verifier will always reject a bad proof. And zero knowledge means that, effectively, the verifier doesn't extract any useful information out of your proof. And so, yes — algebra. You can't write complex algorithms in terms of algebraic equations — not practically. It's a bit like trying to write a computer program by flipping bits on a magnetic hard drive with a needle: it's not going to work. And so the basic abstraction layer we have to move from ZK proving systems to snarks is the concept of an arithmetic circuit. Instead of an imperative program, you have this concept of arithmetic gates: gates have wires that go into the gates and come out of the gates, and the gates perform basic operations like add and mul. You can use this to represent a program — if you have an infinite number of gates you can represent any Turing-complete computation, so you can think of it as being slightly-ish Turing-complete. And we have very nice reductions that convert arithmetic circuits into snark systems.
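A minimal sketch of what an arithmetic circuit is — gates wired together, each performing add or mul over a field. This is hypothetical and heavily simplified: a real circuit comes with a constraint system and a reduction to a SNARK, both of which this omits; the tiny modulus is for illustration only.

```python
# Toy arithmetic circuit: wires carry field elements, gates are add/mul.
P = 2**31 - 1  # a small prime modulus standing in for a real SNARK field

def eval_circuit(gates, inputs):
    """Evaluate gates in order; each gate is (output_wire, op, in_a, in_b)."""
    wires = dict(inputs)
    for out, op, a, b in gates:
        if op == "add":
            wires[out] = (wires[a] + wires[b]) % P
        elif op == "mul":
            wires[out] = (wires[a] * wires[b]) % P
    return wires

# Encode the program "out = (x + y) * x" as two gates.
gates = [("t", "add", "x", "y"), ("out", "mul", "t", "x")]
print(eval_circuit(gates, {"x": 3, "y": 4})["out"])  # (3+4)*3 = 21
```

Even this two-gate example shows why the representation is painful to write by hand: every intermediate value needs its own named wire, which is exactly the tedium a frontend language hides.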
And so this abstracts away some of the evil complexities of ZK proofs — but not very much of it, because you still have arithmetic circuits; it's still algebra at the end of the day. So yeah, basically: zero knowledge is annoying, it's hard, and we want people working with ZK tech to not have to know anything about ZK or cryptography, because it's an absolute nightmare. Okay, so I've described some of the basic abstraction layers we can use to construct snark circuits — but how do you turn a program into a snark circuit? Programs are these weird, complicated tangles of code with lots of conditional branching and predicates, working on complex data structures — how do you turn that into additions and multiplications? Well, you can do it with Noir. This is a programming language we're building from the ground up to be ZK-friendly and to support the kind of complex private state models that you need in private transaction environments. We've modeled it after Rust, so it has variables, it has things like integers and booleans, like you'd expect from a regular programming language. And it's a frontend, basically: it doesn't compile directly into constraints — it composes them in ACIR, and you can plug in any backend you want that supports ACIR. The goal is a completely open architecture, so that other folks can customize it to their needs and plug in whatever cryptography they need to get the job done. You even have things like arrays, and you can access arrays with non-constant indices — which to a programmer is obvious, but to a cryptographer is really hard. But we can do it. And you have the basic compound types you'd expect. So effectively Noir is like a programming language from the 1960s, but with modern semantics wrapped around it. We even have
things like a module system and submodules — isn't that amazing? It's better than C++. We even have loops and if statements, which — again, as a programmer, of course you have if statements; as a cryptographer, you think if statements are hard — but we've got them. So let's move on. Okay, so now you have a programming language that takes your high-level computer program and converts it relatively efficiently into a snark circuit; you then have a cryptography backend that turns your snark circuit into a zero-knowledge proving system. Do you have a private blockchain? No. No, you don't — not at all. All you have is a programming language. What we need is an execution environment: some system — it doesn't need to be physical or real — that will execute your program and perform actions as a result of what your program is saying. Node.js can be considered an execution environment for JavaScript; the Ethereum network can be considered an execution environment for EVM programs. And so this is where Noir with smart contract functionality comes in. This is basically what we're building at Aztec: it's adding the semantics around smart contracts in general, so you can define contracts, you can define functions that operate on public state and private state, and you can define storage slots and storage variables like you do in a regular smart contract language. And then on top of that is — this is on the next slide — oh no, it's not. So, sneak peek of Noir contract syntax: this is the kind of stuff we're developing internally that will be available externally, hopefully next month. This is just some random example transfer function, but it has weird keywords like secret, and a secret balance. The goal is basically that all the complexity around what the hell that means — all of the encryption, all of the
weird stuff you have to do with Merkle trees, nullifier sets, and witness encoding — is not exposed; it's abstracted away, and you just get nice, easy storage slots. So yeah, basically it's an abstraction that gets rid of all of the ugly stuff on the right-hand side. And then we combine that with a real, proper, bona fide execution environment: the Aztec layer two. It is a rather large collection of snark circuits that compose a layer-two network. Effectively, the goal of the network is this: the user sends snark proofs that represent function calls to various smart contract functions, and one of the circuits effectively uses a heck of a lot of recursive proof composition — proofs verifying proofs verifying proofs — to basically emulate a call stack for the user. So you can have a function call stack of private functions and public functions, and you work your way through the call stack by recursively generating these snark proofs. And then you have a rollup circuit, which takes these snark proofs, validates their correctness, performs all the state updates, validates they're all correct, does fee management, does consensus — all of the sequencer-selection and consensus-algorithm checking — and you end up with a proof of a block. But not just any block: a block that has an encrypted state tree. So yeah, that's the ZK rollup. It inherits Ethereum's security — we're leeching from Ethereum's consensus like all the other layer twos — but the critical difference between Aztec and the others is the fact that we support both private and public state, and you can use that to create hybrid applications. Something I often get asked is: what can I build with privacy? Privacy is this weird, abstract concept, and we don't have it on web3 — not really. So, how to articulate this... one of my go-to
examples is WebAuthn sign-in — specifically, things like, let's say you want to sign into a web3 account using Apple ID, using Face ID — no, not FaceTime, Face ID. What will happen on your phone is that your phone is going to use its hardware security module to produce a digital signature, according to a message format defined by Apple and a public key that is also described by your Apple ID. There's no reason why you can't verify that in a smart contract using account abstraction, and that can then become the default portal to your account. However, without privacy, that means that every time you transact on chain, everyone can link those transactions to your specific Apple ID — which is rather problematic if you're doing anything with any kind of real value associated with it. Maybe you don't want people to know that you're trading degen Bored Ape NFTs with all your life savings. So that's one example. Things like DAO governance — private voting. One of the key problems with DAO governance right now is the massive social pressure to vote in certain ways, and, for better or worse, privacy means that people can vote according to their conscience — perish the thought. So yes, maybe in a year or two we will be able to see the true dark heart of web3 and what our community really thinks. So yeah, those are some of the things you can do with privacy: the obvious user-hiding properties — you can hide your identity — and you can then link your cryptocurrency account to a real-world identity. So I'm actually at the end of my talk, because I blitzed through the last slides — this was supposed to be a 15-minute talk and I got the timings wrong — so we have a few minutes for questions. Thank you very much. [Applause] — Anybody have a question? — So, you mentioned public and private applications. Do you have an example of how that would
work, or what the use cases are for both of those at the same time? — Yeah, so the reason you'd want composable hybrid applications is that there are a lot of decentralized applications that require global state, and one of the difficulties with an encrypted state database is that, in a private world, state is owned by individuals or groups of individuals — it's encrypted against their public keys. So consider a DeFi app like Uniswap, or any kind of automated market maker, where you have concepts like the total supply in your network, your liquidity pools, things like that — that's all global state, and therefore needs to be public. So then the question becomes: okay, how do you get privacy guarantees with a DEX? Well, one thing you can do quite easily is keep the token values public while the identities are private. The idea is that you can have token contracts with privacy-preserving functions which allow you to hold shielded balances, and then you can directly deposit those values into an AMM like Uniswap, where the value of your tokens is public but the identity is private, and then have the public AMM algorithm execute the trade. In many ways — I suspect this is going to be quite a popular model in the future, because it gives privacy for the user but still means you get transparency for the protocol: you still know that whatever algorithm is being executed by the protocol is being run correctly, and there are no centers of trust. And longer term, you can close the circle and make the entire system private by adding in multi-party computation, so that all of the price-finding algorithms an AMM uses are executed in a multi-party way.
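The hybrid model just described — amounts public, identities private — can be caricatured in Python. This is a hypothetical sketch, not Aztec's actual design: a hash commitment stands in for a real hiding commitment, and a one-shot constant-product batch stands in for a real AMM.

```python
import hashlib

def commit(owner_secret, nonce):
    # Stand-in for a hiding commitment to the depositor's identity.
    return hashlib.sha256(f"{owner_secret}|{nonce}".encode()).hexdigest()

def batch_swap(deposits, pool_x, pool_y):
    """One public constant-product batch: deposit amounts are public, but
    each depositor is known only by a commitment."""
    total_in = sum(amt for _, amt in deposits)
    # x*y = k invariant, integer arithmetic for determinism.
    out_total = pool_y - (pool_x * pool_y) // (pool_x + total_in)
    # Pro-rata share per commitment; who owns each commitment stays private.
    return {c: out_total * amt // total_in for c, amt in deposits}

alice = commit("alice-spend-key", 1)
bob = commit("bob-spend-key", 2)
outs = batch_swap([(alice, 100), (bob, 300)], pool_x=10_000, pool_y=10_000)
print(outs[alice], outs[bob])  # 96 288
```

Anyone can audit that the pool executed the x*y = k rule correctly — that's the protocol transparency — while the mapping from commitments to people stays off chain.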
So they can actually be genuinely private — but that is a future product to be built. — I think we have one more question. — Yeah, time for one more question. — Hey, so you had a slide where you described going from ZK to snark using a circuit, but then from a snark to a programmable environment — I'm still, like — how do you turn a circuit into a place where you can do all of this? You talked about the language, but where is it? — You're right, I did skip that, so let's go back. The first system that kind of did this was the Zexe protocol from 2018. We have the concept of a kernel snark circuit, and what a kernel circuit does is verify the correct execution of a single function call. So imagine you have a smart contract; your smart contract has public functions and private functions, and each function is converted into a snark circuit. Then what a user does is construct one of these kernel circuit proofs, where the kernel circuit takes as input a function call stack. At the very start, that function call stack will have one entry in it: the function you want to call. What the kernel circuit will do is pop that function call off the call stack and verify a proof that you've provided — assuming you want everything to succeed, you've provided the proof that proves your function call has been executed correctly. The kernel circuit will verify that the verification key belongs to a specific smart contract, and then it's going to grab the public inputs of your inner snark circuit — the one that made the function call — and interpret those inputs according to a defined API. As part of that API, the function call may spit out additional function calls to be executed. So if you, for example, want to call approve
and then want to call transferFrom on a different contract, for example — that's one iteration of the kernel circuit. But the kernel circuit is recursive, in that it verifies a previous proof of itself, if one exists. So what you can then do is repeatedly construct proofs of the kernel circuit. To start with, you have one function on the call stack; that gets popped off and verified, but then more functions get pushed onto the call stack as a result of your first function call, and you just repeatedly construct proofs until your function call stack is empty. The output of that is: you now have a proof of a kernel circuit with an empty function call stack — so no one knows what functions you've called — but also spat out of that proof are a bunch of encrypted state changes, state updates to perform as a result of those functions being executed. And that's, kind of, sort of, how you get a quasi-execution environment out of a snark circuit. — Okay, thank you, Zach. — Cool, thank you. [Applause] — Okay, now I would like to introduce to the stage Henry de Valence from Penumbra. — All right, so here we are. Hello everyone, my name is Henry, and this is my talk: shielded transactions are rollups. If you're already convinced, then, you know, great — otherwise, let me go through what I mean. So I work on a project called Penumbra. What it is is a private proof-of-stake L1 that has an interchain shielded pool, so you can move any kind of asset from any IBC-compatible chain — and what can you do with those assets once you move them into the shielded pool? You have a DEX that allows people to do private on-chain strategies. This talk isn't primarily a talk about what Penumbra is as a product and so on; we've been focused on how to solve this one really specific use case, and then, from trying to solve that one specific use case, what are the common features we can generalize to
more varied kinds of computation. So, to start off the talk about how we view shielded transactions as being a weird kind of rollup, why don't we say what a rollup is in general. I think a lot of the time people have this idea that a rollup is a way to have more copies of Ethereum, and I think that's a pretty limited perspective on what we could do in general. I would say a rollup is when you have one part of a system that we'll call the base, and another part of the system that we call the rollup, and there's this kind of flow within the system where the base offloads execution onto the rollup, and then the rollup sends back some kind of state root, as well as some kind of reason why people should trust in that state root — maybe that's a ZK proof, maybe there's some kind of economic model, as with an optimistic rollup — but fundamentally it's about having a flow of execution moving out onto the rollup, and certification and a kind of summary of the results coming back. So there's a super, super enormous, flexible design space here, and in this talk what I want to do is look at a shielded chain from this perspective of thinking of things as rollups. To start off: in order to have a shielded chain, you need to have some kind of composable state. You need to have the state of the chain split up into all these little fragments, and each transaction is going to consume certain existing state fragments and then produce new ones as outputs. This is kind of like a UTXO model, although I personally am a little hesitant to use the word because it has a lot of Bitcoin-related baggage. Really, what's happening here is that the state is split up into fragments, and transactions only operate on certain fragments of the state. Why do we need this? It's so that we can replace all of those on-chain state fragments with just commitments to those states.
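A toy model of this fragment-and-commitment bookkeeping, including the nullifier trick for preventing double spends that Henry explains just below. Everything here is hypothetical shape, not Penumbra's design: SHA-256 stands in for real commitments, and there are no ZK proofs, Merkle trees, or encryption.

```python
import hashlib

def h(*parts):
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

class ShieldedChain:
    """The chain stores only commitments to state fragments plus a set of
    revealed nullifiers -- never the fragments themselves."""
    def __init__(self):
        self.commitments, self.nullifiers = set(), set()

    def create(self, state, blinding):
        # A new state fragment appears on chain only as a commitment.
        self.commitments.add(h("commit", state, blinding))

    def consume(self, state, blinding, owner_key):
        c = h("commit", state, blinding)
        assert c in self.commitments, "unknown state fragment"
        nf = h("nullify", owner_key, c)  # derivable only by the owner
        if nf in self.nullifiers:
            return False  # double spend rejected
        self.nullifiers.add(nf)  # note: c is NOT marked spent or revealed
        return True

chain = ShieldedChain()
chain.create("100 tokens", blinding="r1")
print(chain.consume("100 tokens", "r1", owner_key="alice"))  # True
print(chain.consume("100 tokens", "r1", owner_key="alice"))  # False
```

Observers see an opaque commitment appear and, later, an unlinkable random-looking nullifier — which fragment was spent stays hidden, yet spending it twice is impossible.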
And that way, instead of having the transaction actually work on all those things directly, we can just do a ZK proof and hide the details of the state from the public chain. But when you do that, it's not really just that we add in a ZK proof: fundamentally, what's happening is that the execution of the state transition is moving off chain, and so effectively you can think of each individual transaction as being its own little micro, personal rollup — and a lot of the problems that arise in trying to build practical shielded chains can be seen from this perspective, as we'll see. So one big problem that comes up is — oh, I guess never mind; before doing that, we'll give a little more detail on this perspective. What do I mean, exactly, when I say that a shielded transaction is a kind of micro-rollup? Let's look at a transaction on a shielded chain in general. What are the pieces of this? We have some kind of ZK proof that's going to provide trust that everything the transaction is doing was done correctly. We're going to have some commitments to new output states that this transaction has created, and then we're also going to reveal some nullifiers that consume the input states. You can prove in zero knowledge: hey, I know about this state that was previously included in the chain, and it's valid — but you now have the problem of how to prevent double spends. The general technique is to assign each piece of state a unique serial number, or nullifier, that's only derivable by the user that controls that chunk of state. That way they can reveal this random value and remove that piece of state from the active set, without revealing exactly which state they're nullifying. Finally, a shielded transaction generally is going to have some kind of encrypted payload in it, and the reason is that, in order for me to use this chain, I need to
know more than just that the chain is convinced my transaction is valid: as a user, I care about being able to learn what exactly my transaction was. If I go on another device, if I'm trying to sync — how do I recover my own state? So you can think of an existing shielded chain, like for instance Zcash, as bundling this data-retrieval mechanism into a monolithic chain design. So what's the problem here — this is what I was about to get to — why haven't we seen this be particularly useful, or receive a lot of adoption? I think the problem is that when you make this change, what you lose is the ability to do late binding. What I mean by that is: we have this picture of, okay, here's a transaction, it has these inputs, it has these outputs, and this whole state transition is this kind of sealed, pre-computed thing. But when you look at what people actually like to do with blockchains, they like to interact with the chain, and that means they need some kind of late-binding capability. You want to say: I want to do a swap, and when I do my swap, I'm going to commit to the inputs that I want to swap — but I'm not going to sign over, like, here's the exact state of the Uniswap reserves and here's therefore the exact amount of output that I'm going to get. I don't know that, because at the time that I'm making my transaction I don't have access to that state, and I can't ask the entire world to just stop and do nothing while I submit my transaction, because I'm important. The way this is usually done on a transparent chain is that when the transaction is executed, it can access the chain state, so the chain can fill in the gaps and determine what the outputs are — and naively, when you do this shielded-transaction rearrangement, you lose this ability. This is one of the things we really focused on in trying to build a DEX: this problem of how do
I know what the price is before it gets executed shows up very clearly at the start. And the answer that we came to is that, in a sense, this is a similar problem to doing cross-rollup communication: if every user is doing their own little independent state transitions on their own end-user device, then somehow those need to be able to communicate with each other, and they're all going to be executing asynchronously. So we need some model for asynchronous ZK execution via message passing. A schematic diagram of how this can work: I'm going to make an initial transaction that sets off the action I want to do. It's going to consume my private inputs, but because I don't yet know what I'm going to be filling in the gaps with, I can't finish the computation immediately. So instead, I'm going to send a message to whatever public contract I'm interacting with — maybe that's on some other shard of the state — and the output of my initial transaction is actually going to be a commitment to a future, in the programming-language sense: an asynchronous computation that's waiting for some fields to get filled in and resumed. If you've done async/await programming, you can imagine each await point, where you're waiting for some message to come in, as turning into a point where you need to pause the computation and commit to all of the intermediate execution state at that moment. The reason you do that is so that later, once you get the message from the contract coming back — maybe that's, hey, your swap was executed with this price; now we know what the price is, now you can mint your outputs — the user who had created that commitment to their intermediate execution state can resume execution by spending their commitment, slotting the public inputs from the contract into the appropriate places, and minting their private outputs.
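A caricature of this two-phase, future-based flow in Python. All names are hypothetical and the real mechanism involves ZK proofs at both steps; this just shows the shape: commit to a paused computation, then resume it once the public input arrives.

```python
import hashlib

def h(*parts):
    return hashlib.sha256("|".join(str(p) for p in parts).encode()).hexdigest()

# Phase 1: consume private inputs and emit a commitment to a "future" --
# a paused computation awaiting the public batch price.
def begin_swap(amount_in, owner_key):
    future = {"awaiting": "batch_price", "amount_in": amount_in}
    commitment = h("future", owner_key, amount_in)
    return commitment, future

# Phase 2: once the block's batch price is public, spend the commitment and
# resume, minting the pro-rata private output.
def resume_swap(future, price_num, price_den):
    return future["amount_in"] * price_num // price_den

commitment, fut = begin_swap(100, owner_key="alice")
out = resume_swap(fut, price_num=3, price_den=2)
print(out)  # 100 * 3/2 = 150
```

The await point in an async program maps directly onto the commitment: pausing is committing to intermediate state, and resuming is spending that commitment with the public inputs slotted in.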
When we do this on Penumbra for swaps, the public input coming in from the contract is the batch price for that block, and the private outputs are then that user's pro-rata share of the batch. What I think is really interesting about this perspective is that it means you can generalize: you start with this idea of, oh, we have a multi-asset shielded pool, we can shield any asset — but since those assets can be anything, they can also be assets that represent arbitrary intermediate execution states. And so now, in the shielded pool, the invariant that I can't double-spend funds, that I can't just print tokens, is saying that I'm not allowed to restart execution: I can't clone my program; I can only advance my computation in the allowable way. But the shielded pool can be recording anything I want. And so this, I think, is what gives me this sense that right now there's kind of a split in the ecosystem, where there's a bunch of efforts working on ZK for scaling, and another collection of efforts — Penumbra included — working on ZK for privacy, but I think we're actually going to see a kind of convergent evolution of these things into some sort of glorious future. This is the show portion of the talk. And the perspective there is that what privacy enables is a kind of edge compute for blockchains. In the current world, all of the execution is happening sequentially, one after the other: everybody doing a transaction is taking a lock on the entire state of the world, and everybody else has to stop everything while I run my transaction. In this private world, where we do computation asynchronously, we can push the computation off of the base layer and out to the edges of the network. But that's not necessarily saying, oh, let's just make a second copy of the exact same state model — we can go much,
much further, and have this sort of fractal-like pushing all the way out to the end-user device, possibly over multiple hops; and then those users are going to send back their transactions, which carry only their proofs and data, and those things can get summarized as they move toward the core of the network. So in this picture, the reason that it's sort of grayed out is that there's no actual computation happening there — it's just certification of the data — and these are all users with their own wallets, doing their own computation locally. With this perspective, there's another neat thing this enables, which is that new possibilities come out for state management. I think this is one of the hardest problems with blockchains, and with scaling blockchains: having every single person in the world just use one world computer doesn't quite scale. And if we go back to this picture of, okay, what is actually in this shielded transaction, we can look at it and ask: what's really the expensive part here? So we've got this proof — let's say it costs two milliseconds to verify; that's not a big deal. We have the commitments to the output states — those are 32-byte values; clients might have to feed those into a tree they're using locally for proving, but they can filter: hey, these ones I don't really care about, because they don't relate to me. The nullifiers are also 32 bytes, and really only full nodes have to be processing those — and although there's discussion of, hey, there's this nullifier-bloat problem where the nullifier set grows forever, realistically you can have a lot of 32-byte values before you really run into problems, so I don't think that's a particularly significant part either. The real problem is this encrypted payload: it's not 32 bytes, it's going to be bigger than that, because you need to have the full transaction data — and
unless you can build some other kind of system for allowing users to identify a priori which states are related to them, it would have to be scanned by every client. And most critically, if you lose this payload, it's actually equivalent to losing your funds — because the chain has perfect privacy, there's no way to know what all of these state commitments are, and if you don't know, hey, what was the note that recorded the funds I control, there's no way you could possibly spend it. And if we look back over the last ten years of doing blockchains: managing state, managing keys, is very, very difficult, even if you have fully deterministic derivation of all the keys. Seed phrases are basically a technology that was invented to solve the key-loss problem — let's just figure out how we can derive everything from one secret that we can somehow get a handle on. This payload data is equivalent to a key, but it's created dynamically, and so putting it on chain is not just a convenient way to schlep the data around: it's also a kind of security feature. If you want to move that off chain, you have to have a really, really robust story for how a user ensures they have backups of all their data. So this has led us to thinking about a sub-project that we call Narsil, and the idea of Narsil is to try to create a personal rollup. Using the same diagram that I had before — okay, we have our base chain, it's mostly going to be focused on doing certification, and we're moving the compute out to the edges — we can have this fractal perspective where those edges, maybe some of them, are a web extension that's syncing your client state, maybe there's an app on your phone, or maybe one of those edges is actually its own chain. We started actually thinking about this because we wanted to have a story about how to do threshold custody, where multiple key-shard holders
can collaborate to produce a signature on a particular transaction. This way you can do multisig transaction-authorization flows, but when you start thinking about that, you get into questions like: okay, if we're not using the chain to coordinate, how do the signers who are participating know what all the signing requests are? How would we make those signers have a consistent view of what transactions have been requested? And we're already using Tendermint, CometBFT, so: sounds great, we know how to do that. Why don't we just have the shard holders communicate within their own CometBFT network? They can run it in proof-of-authority mode, and what that means is that you not only get strong consistency between the custodians about what these people are signing, it also means that every custodian has a fault-tolerantly replicated audit log of everything that has ever been signed by this threshold key. And that's exactly what you need in order to have assurance that: I can safely move my user state off chain, I can skip posting those encrypted payloads and keep that only for myself, and I know that I'm going to be relatively secure and resilient, because I'm already replicating that across my own internal network. So you could imagine, say, a market maker who is going to be updating quotes on chain reasonably frequently. Why should every user have to be scanning that market maker's state updates, if we could have that market maker just run their own personal rollup, replicate their state internally, and pay less in gas fees? So, hopefully that sounds interesting. The plug is: if you want to play with any of the stuff that we're building, here are a bunch of links. Everything we do is totally in the open, we have testnets you can play with, we have a Discord; if anybody has any questions or wants to talk about it with us at any point, just show up, and we're always happy to chat. Thank you.
Thank you very much, Henry. If you have any questions for Henry, you can find him in the lobby. And now I would like to introduce to the stage our panelists and our moderator, Anna Rose. Welcome to the stage, once again: Nico, Benedikt, Zac, and Chris.

Cool. Well, what a panel I have here; this is so awesome. This panel was inspired by an episode Benedikt and I did recently, in which we talked about the different parts of the ZK stack, so this is going to be a pretty technical panel; I think we're going to go pretty deep. But let's start with introductions. I introduced myself earlier, but I'm the host of a show called Zero Knowledge, and I'm also curating this afternoon of ZK through the ZK Validator.

Hi, I'm Nico. I'm a researcher in cryptography, mostly applied cryptography, at Geometry.

Hey, my name is Benedikt. I'm the co-founder and chief scientist of Espresso Systems; we work on decentralized sequencing, and there's a whole other track for that, the other one, actually, but I also dabble in zero knowledge proofs.

Hey there, I'm Zac. I'm the CEO of Aztec; we're a privacy infrastructure provider for web3, and yeah, I also dabble in zero knowledge proofs.

I'm Christopher, one of the co-founders of the Anoma project. I am not a classically trained cryptographer, so I'm probably horribly out of my depth, but perhaps I can provide a perspective on what these zero knowledge proof systems look like from the outside. We'll see, I think.

Very cool. So in this intro, before we dive in, I wonder if you could each choose one work, paper, or project that you think most defines you, that people here may be familiar with.

Sure. I think I'll go after Zac, actually; it makes more sense, in that the work I'll talk about builds upon the one he's going to mention.

Okay, why don't we actually start from that side, then.

Right. I have not written any zero knowledge proof systems, sadly. I hope
to someday. But a few years ago, when I was starting to get into the space, there just seemed to be a lot of brilliant people writing zero knowledge proof systems, and I figured we need people doing something else, so maybe in ten years, when I'm off in a cabin, I'll do that. Probably my most popular paper was IBC. Unfortunately, I think the paper is terrible; I want to rewrite it to be clearer, but maybe I'm just biased after the fact.

Yeah, I guess, PLONK. That's the thing I'm known for.

Yeah, PLONK. And does everyone know what PLONK means at this point? Who here knows what PLONK means? Who here actually knows the PLONK proving system? Yeah, okay, so people know the proving system, but nobody knows where that word comes from.

Amazing. It stands for Permutations over Lagrange bases for Oecumenical Non-interactive arguments of Knowledge, and the A is silent. Some creative liberty. And PLONK also has a double entendre; there is a definition: it's British slang for cheap, poor-quality wine, because, like a bottle of bad booze, getting to the bottom of a PLONK is going to give you a hell of a headache.

Is it like a bottle, or are we talking like a bag, like...

Oh, it's a box, yeah.

Okay, okay. Yeah, I guess I'm probably still most known for Bulletproofs, which is a zero knowledge proof that is used in Monero, and parts of it are used in many other different proof systems; I think Zcash does actually as well.

Not as known as these two cryptographers, but I did put out, let's say, an observation, building on PLONK, that some techniques we saw elsewhere in the ZK stack were applicable to PLONK. That thing is called Sangria, fittingly.

Very nice. Okay, let's start with a little bit of a high-level history of SNARK systems. I think these names may be familiar to anyone who's been following the space, and what I'd like to do is go through them, and any of
you can weigh in here. Basically, let us know what was the big change, or what part of the stack was identified and optimized, as we went through each one of these things. So, I'm going to start, and Benedikt, you can tell me if I'm wrong here, but I'd start with Groth16.

Sure. I mean, this is starting in 2016; we could also start in, like, 2012, or...

Well, I was going to say the 1980s, yeah. But yes, let's start with practical proof systems.

Okay, so we're starting with Groth16.

I mean, it's still a system that's used a lot, right? These libraries were really well developed, and it's still the system with the shortest proofs that we know of. That's sort of why it still has a place in people's hearts: if you're trying to post a proof on chain, that will be the cheapest way to do it.

Yeah. And just a note here: there were a ton of proving systems that came out in 2019. What I'm trying to identify are the proving systems that got a lot of mindshare; they kind of won that round, in a way, and then you see those be used for a few years, and then a bunch of new ones come out and one of them becomes the standard, in a way. This is subjective, but the next one I have is PLONK, actually. So Zac, maybe you can help: what was the big innovation, or what was the part of the ZK stack that it changed?

Yeah. The key innovation of PLONK, what it enabled, was practical universal ZK SNARKs. One of the big downsides of Groth16 is that you need to do a trusted setup ceremony for every single circuit you make, and so we wanted to preserve the nice, succinct, polylogarithmic properties of elliptic-curve-based SNARKs, but without needing a per-circuit setup. So it was an iteration of Sonic, which was in some sense an iteration on Bulletproofs, and a lot of other things. But the innovation was really that it was a way of
efficiently validating copy constraints. That was kind of the big bottleneck with universal SNARKs: how do you verify that all your gates are wired up correctly? So we had an efficient way of doing that, and an effect of that was that a rather nice way of arithmetizing SNARK circuits kind of fell out of it, which was then turned into the Plonkish arithmetization.

And this is where people were optimizing on what you had done, but the thing they were focused on was this arithmetization part; they were changing it.

Yeah, basically. One of the nice things about PLONK is that the algebraic expressions you're checking on every single gate map nicely onto the overall prover and verifier algorithms that are being run, which means that you can construct relatively complex custom algebraic statements specifically for the system that you're building your SNARK for. And so, yeah, that kind of took off in a big way.

I mean, there are many different ways to look at the innovations in these systems, but I think one of the realizations that came out at that time, which PLONK, Sonic, Marlin, DARK, and others were a big part of, is this modularity, this separation. The separation, in this particular case, into: I have a computation, and then there's something called the polynomial IOP, which is basically reducing it to some polynomial checks, so I just check that some polynomial, evaluated at some point, evaluates to, I don't know, zero. And this basically allows you to pull apart these monolithic proof systems into two components. One of them is this arithmetization component, something like PLONK, something like Marlin, or, you know, there are others; STARK-style AIR, I guess, would
be the other one. And then there's the polynomial commitment component, and now you can plug in different pieces for these different things. So for PLONK you could plug in KZG, but you could also plug in some hash-based thing like FRI. And the nice thing is, and there were also formal theorems about this, that basically if you take a secure thing from one side and a secure thing from the other side, the plugging together works, and this gives you a new proof system. So, for example, now we have Halo 2, which is PLONK plus a Bulletproofs-style polynomial commitment, or we have Plonky2, which is PLONK plus FRI, and I'm sure we have AIR plus KZG; there are basically a bunch of these things. And I think this separation is what I would view as the realization of that time that then enabled innovations like PLONK.

Yeah. So, the next stage that I would say had a lot of mindshare, although I do feel like it's just part of the ecosystem: the Halo work, and then especially the Halo 2 work, got a lot of mindshare, a lot of people building on it. You had just said that it was like PLONK: it had its two defined parts, and you could take one out and put in another. Did Halo 2 also introduce any other places to all of a sudden break off into another module?

Okay, to be honest, I don't really know what Halo 2 is anymore, because Halo 2, to me, is one of those things... I mean, it's very confusing, because it's a library, there's a paper, Halo, then there's Halo 2, and there's also what people think it does versus what it really does. It's an arithmetization language; it's many, many different things. The naming is just terribly confusing.

We're really good at that.

I actually view Halo and Halo 2 quite separately, and what Halo 2 does is really
an elegant, slight variation on the PLONK arithmetization, a really nice way to encode the PLONK arithmetization in this column format, coming with a very good, and widely used, cryptographic library. Do you agree?

Yeah, broadly. I think maybe I'd add one other thing, which is that one of the things Halo really pioneered was the use of cycles of non-pairing-friendly curves to enable recursive proof composition. But that's just not implemented in Halo 2 right now. This is a really confusing thing: everybody thinks this is implemented in the library. It is not, right now.

So I think the confusion comes from the paper that came out from the...

Oh, interesting.

...the Zcash people. I think it's Daira and Sean Bowe, and I forget, there's a third author on the paper, and that paper describes this idea of how we can get cheap recursion using cycles of curves. So maybe that is what people refer to as Halo.

Yeah, that is what that is. There's the Halo paper, which is about recursion, which is another interesting area, but it's not implemented in Halo 2. Maybe in the future it will be. But this is where it grinds my gears, because it's just so confusing: cycles of curves, which is what people think Halo is, which is what Halo is, is not implemented in Halo 2.
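The PIOP-plus-polynomial-commitment separation the panel keeps returning to can be sketched as an interface. This is a hedged toy: `HashReveal` is a degenerate stand-in scheme (the "proof" just reveals all the coefficients, so it has no succinctness or hiding), but it satisfies the same interface a real KZG, FRI, or Bulletproofs-style commitment would plug into, which is the modularity point:

```python
import hashlib
from typing import Protocol

class PolyCommitment(Protocol):
    """The slot a polynomial IOP needs filled: commit, then open at a point."""
    def commit(self, coeffs: list[int]) -> bytes: ...
    def open_at(self, coeffs: list[int], x: int) -> tuple[int, object]: ...
    def verify(self, com: bytes, x: int, y: int, proof: object) -> bool: ...

class HashReveal:
    """Degenerate scheme: opening reveals every coefficient. Not succinct,
    not hiding; it exists only to show the interface being satisfied."""
    def __init__(self, p: int):
        self.p = p
    def _eval(self, coeffs, x):
        return sum(c * pow(x, i, self.p) for i, c in enumerate(coeffs)) % self.p
    def commit(self, coeffs):
        return hashlib.sha256(repr(coeffs).encode()).digest()
    def open_at(self, coeffs, x):
        return self._eval(coeffs, x), coeffs
    def verify(self, com, x, y, proof):
        return self.commit(proof) == com and self._eval(proof, x) == y

def check_root(pcs: PolyCommitment, coeffs: list[int], r: int) -> bool:
    """A one-line 'PIOP': reduce the claim p(r) == 0 to a commitment opening."""
    com = pcs.commit(coeffs)
    y, proof = pcs.open_at(coeffs, r)
    return pcs.verify(com, r, y, proof) and y == 0

pcs = HashReveal(p=97)
print(check_root(pcs, [94, 1], 3))  # p(x) = x + 94 = x - 3 over F_97 -> True
```

Swapping `HashReveal` for any other object satisfying `PolyCommitment` leaves `check_root` untouched, which mirrors how PLONK-with-KZG, Halo 2's Bulletproofs-style commitment, and Plonky2's FRI all reuse one arithmetization.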
Just one other thing, sorry, one thing that complicates matters, and which I'm certain I've contributed to as well, is that all these proving systems are also kind of attached to brands. They become brands for the companies where the people inventing them work, and I think that also has a big effect on the confusion.

Yeah. I mean, I was just going to ask, because I'm curious: it sounds like you've looked at the Halo 2 library, and there are fifteen different forks. Every time I go to a crypto conference I learn about two new Halo 2 forks, and at least one of them implements something I want and didn't know about. It's a big counterparty-discovery problem. But I'm curious: in the modular stack we see clearly separate roles, like data availability, solving, execution. It sounds like there are kind of two here now, the arithmetization part and the polynomial commitment part. Do you think they'll be two forever? Is this the final decomposition?

We're already at three: you have your arithmetization, which produces something that your polynomial IOP can deal with, and then you can use whatever polynomial commitment scheme you want. So it's already a three-layered thing, and then you add folding for a fourth layer, and lookup tables for optimization.

So those would fall... yeah, is that more of a technique? This is actually my next question. Maybe let's finish our history first, but I'm getting into...

Well, I guess it is the next part of the history.

You're right, you're right. Obviously the next one that a lot of people have been talking about for the last year is the Nova work, which also leads to the HyperNova and ProtoStar work, which introduces this technique of folding, or accumulation schemes, which I've always understood as being deeply built in, but it
sounds like... does it change one of those three that you just described? Is it in the polynomial IOPs, is it in the arithmetization, is it in the polynomial commitments? Similarly to how Groth16 was this monolithic thing and we started to take it apart...

Yeah. I think this technique started from a very specific application: in the Halo paper, they were like, we are using this Bulletproofs-style polynomial commitment scheme, and we can actually defer some of this work until later. And then, slowly but surely, we've been picking threads out of this, and it got very, very generic with ProtoStar, super recently.

I will also say, I know, Benedikt, you had done work that did this before Nova, but described slightly differently. Am I correct?

Yeah, we just didn't give it a fancy name, so it didn't get any attention.

No, you've got to give it fancy names. I'm being facetious.

I think the most important separation, for me, is what's sometimes called front end and back end, where the front end is what the developer interfaces with: this is how you code up your computation, how you express your computation. And we have this separation in normal computing as well; we don't need to think about SNARKs, we can think about your programming language. That is the front end; this is what you write in C, C++, whatever. And here this is something like the Halo 2 front end; sometimes it's R1CS, sometimes it's a higher-level thing. There could also be multiple levels in the front end, like Circom or whatever, that then gets compiled down to something below. And on the back end, there is the proving system, and the proving system can also have multiple layers: it can have, as you were saying, the compilation down to a polynomial IOP and a polynomial commitment.
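To make "the front end's output" concrete: here is the classic toy statement x^3 + x + 5 = 35 hand-written as an R1CS instance and checked in Python. The witness layout and matrices are illustrative only; real front ends like Circom emit systems like this for you:

```python
# R1CS: each constraint i requires (A_i . z) * (B_i . z) == (C_i . z),
# where z is the witness vector. Here z = [one, x, out, v1, v2],
# with intermediate values v1 = x*x and v2 = v1*x.
def dot(row, z):
    return sum(r * v for r, v in zip(row, z))

A = [[0, 1, 0, 0, 0],   # constraint 1: v1 = x * x
     [0, 0, 0, 1, 0],   # constraint 2: v2 = v1 * x
     [5, 1, 0, 0, 1]]   # constraint 3: out = (5 + x + v2) * 1
B = [[0, 1, 0, 0, 0],
     [0, 1, 0, 0, 0],
     [1, 0, 0, 0, 0]]
C = [[0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 1, 0, 0]]

def satisfied(z):
    return all(dot(a, z) * dot(b, z) == dot(c, z) for a, b, c in zip(A, B, C))

print(satisfied([1, 3, 35, 9, 27]))   # True: x = 3 solves x^3 + x + 5 = 35
```

Everything downstream, the polynomial IOP and the polynomial commitment, only ever sees this algebraic object, never the original program, which is the front-end/back-end split being described.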
I would say that these folding schemes give you a proving system with very particular properties: they especially work for, or are designed for, iterative computations, computations where you have a step that you do over and over again. For example, a blockchain: it's just one block, and then another block, and another block, and another block; that's a computation you do over and over again, a so-called iterative computation, and for those we can use these folding schemes, these IVC things. But this is all in the back end. This is the technical infrastructure, the assembly, or, no, it's really not even the assembly, it's the CPU that does the execution. And again, like a CPU, we can have multiple layers in there: we can have the instruction set, we can have all of these things, and then some things fit together, some things don't fit together, and for some things you can build a compiler to make them fit together. This is where the picture looks complicated, but really, the image that I think helps people think about it is just: front end, how do you express your computation, and hopefully in the future this gets easier and easier, and you can literally just write it in, I don't know, Rust or Python; and then back end, which executes the computation, and hopefully those things get more efficient and powerful as we go along. And both of these, within themselves, can have modularity, which is the cool thing. I think historically what we've seen is exactly the trend that you just described: usually the innovation comes when someone looks at something monolithically and comes up with some genius new idea, something like Halo,
which is, by the way, a beautiful, genius idea, and Halo is more in the Nova, folding line of work, that's where I see it. And then people start picking it apart and making it more modular, and this enables new innovation, this enables new techniques. So we should always cherish that. And I think the name of the conference and the panel is very good, because modularity seems to be extremely helpful both for new innovation and for understanding these things.

Yeah, I was just going to say: one thing modularity helps me understand, in the distributed-systems context, is what the hard axes of trade-offs are. There's a trade-off between privacy and efficiency, in a sort of counterparty-discovery and matching sense; there's a trade-off between trust and efficiency, in the sense of how much you need to replicate your verification. And these trade-offs don't change: you can make the primitives faster, but the trade-offs will always be there; they're just properties of how the components fit together and what kinds of properties you want out of them holistically. And I'm curious: is the modular decomposition of zero knowledge proof systems yet at a point where it provides clarity on these kinds of axes, and, you know, what are they? This is like benchmarking a lot of this stuff, I guess.

I think not necessarily. I'll say, sometimes in our case modularity comes at a cost, where, because we're not looking at things monolithically, we can't open these black boxes anymore, and there are some small tricks and optimizations, things we could have done otherwise, that we don't do anymore. That's not to say we don't do them at all; we do have systems that go in and break the black boxes. But I don't think the modularity itself... okay, I guess you can draw some lines of, you know, trade-offs, but it also draws them somewhat artificially. But I guess maybe your
question is... In distributed systems, we have these pretty strong lower bounds, right? We have impossibility results that are pretty strong. What is interesting is that in zero knowledge proofs there are some lower bounds, but not really that many, and not that meaningful. It's not even clear that proving is necessarily more expensive than computing something. If I want to compute a square root, I need to compute the square root; but if I want to show you that the square root is correct, I can do it with just a squaring. I can go in the inverse direction, which may be cheaper. Or: executing a computation is sequential, but the proving can be done in parallel. As far as I'm aware, there are some lower bounds, but they're usually in stylized models, and if I go outside these models I can oftentimes even circumvent them. So one of the beauties is that we don't have strong lower bounds, but this also means that modularity doesn't really show us, oh, here are the clear trade-offs.

Yeah, just to add to that: it's tricky, because it would be nice to be able to clearly define a trade-off space, and I think if you take a snapshot of the ZK landscape at any one time, you can probably find some kind of trade-off space between all the various proving systems and their components. But because the lower bounds are nowhere near being reached yet, you blink for six months and everything you've done your analysis on is now obsolete, replaced by new stuff that's better.

I still don't think we've properly placed folding schemes and lookup tables into this stack, though. Where are they?
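As an aside, Benedikt's square-root observation in miniature: verifying a claimed answer can be far cheaper than computing it, which is one reason clean lower bounds relating proving cost to computing cost are elusive.

```python
import math

# Computing an integer square root takes real work; checking a claimed
# root needs only a multiplication and a comparison.
def compute_isqrt(n: int) -> int:
    return math.isqrt(n)                         # the "expensive" direction

def verify_isqrt(n: int, root: int) -> bool:
    return root * root <= n < (root + 1) ** 2    # the cheap inverse direction

n = 10**30 + 12345
r = compute_isqrt(n)
print(verify_isqrt(n, r))       # True
print(verify_isqrt(n, r + 1))   # False
```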
I've been sort of referring to them as techniques, because it's just something you use on top, but it's not its own system in its own right.

I can take a stab. I think where I would place lookup tables is that they're just part of the arithmetization, because a lookup is a way of converting your lookup table into algebraic statements, just like the addition and multiplication gates of your circuit, so I think you can place them at that layer, more or less. When it comes to folding schemes, I would say they're at a higher level than the underlying proving system, because there's been some work, like ProtoStar and ProtoGalaxy, where the actual proving system is left as a black box, more or less. The only requirement is that you have an additive homomorphism; I mean, ProtoStar doesn't even require polynomials, but assuming you're using polynomials, then you need some kind of additively homomorphic commitment scheme for your polynomials. So I would place it in a layer above the underlying proving system, and it's a layer where, if you have some higher-level architecture that composes multiple proofs together, then you apply a folding scheme on top of your proving system to get that capability.

So it could have its own category?

Yeah.

Okay. And I guess the reason I highlighted those two is that each of the ones we've mentioned has its own line of work, and there are researchers focused just on those things. Something we didn't actually mention, and now I'm realizing I'm not sure I know this: at what stage did lookups come in? There was plookup, and there had been work before, as I learned on a panel yesterday.
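The additively homomorphic commitment requirement Zac mentions is exactly what lets a folding scheme combine two claims into one. A toy linear "commitment" over a prime field shows the mechanics; it is neither hiding nor binding in any cryptographic sense (real schemes like Nova or Sangria use Pedersen-style commitments), but the linearity identity it checks is the one folding relies on:

```python
import random

# Toy additively homomorphic "commitment": C(v) = sum_i a_i * v_i mod p
# for fixed public coefficients a. Illustration only: no hiding, no binding.
p = 2**61 - 1
random.seed(0)
n = 4
a = [random.randrange(p) for _ in range(n)]

def commit(v):
    return sum(ai * vi for ai, vi in zip(a, v)) % p

def fold(u, v, r):
    """Fold two witness vectors into one using a random challenge r."""
    return [(ui + r * vi) % p for ui, vi in zip(u, v)]

u = [random.randrange(p) for _ in range(n)]
v = [random.randrange(p) for _ in range(n)]
r = random.randrange(p)

# The verifier can fold the *commitments* without ever seeing a witness,
# and the result matches the commitment to the folded witness:
print(commit(fold(u, v, r)) == (commit(u) + r * commit(v)) % p)  # True
```

Because C(u + r·v) = C(u) + r·C(v), the verifier tracks one accumulated commitment per folding step instead of re-running the full proving system each time, which is why the underlying proving system can stay a black box.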
There had been some work done before; Mary Maller was part of some of it. But then plookup was created. What system does it go with? Does PLONK then come with lookups, like a second generation?

Yes, that's kind of what UltraPLONK was: PLONK plus lookups.

So, plookup certainly wasn't the first lookup-table scheme. I think it was the first, at least at the time I thought it was the first, where the access cost of your lookup tables was constant. In most of the ones before, the number of constraints in your system that you needed to do a lookup was logarithmic in the size of your table, and with plookup it's this one gate. There may have been systems before that did that, but if I may make the claim, plookup was the first constant-time practical lookup scheme. Is that controversial? I'm not sure.

Yeah. I mean, I think this is another great case where modularity was observed later on: a lot of these lookup schemes were invented in Plonkish land, but most recently, and I think this was observed, for example, in the paper about CCS, among others, you can actually pull this out and reuse the component in other systems; it is not tightly coupled with the PLONK proving system. So that's a very nice observation, where again we've observed some modularity, and this helped improve the space.

I want to just go over what we've said again, in case people aren't fully following. So far we have: polynomial IOP, polynomial commitment scheme, arithmetization,
lookup tables as a subset of arithmetization, and somewhere in there, folding schemes.

And does recursion get to live together with folding schemes?

Yeah, same thing, roughly.

Okay, so that roughly maps to what I had written down here, but it's nice to see it mapped. But is there anything else? What else is there? Going back to your point: as we define these components, researchers can focus in, optimize them, find new combinations, find interesting properties of the other parts we just described that interact nicely. I don't know if I described that so well, but we're looking for new things; we're on a search now.

I think you can add other layers of abstraction, but I think that's where you'd stop writing soundness proofs in papers. There are plenty of higher-level constructs, but...

Well, you're saying we're at the end of all modularization? This is it?

No, no. Maybe at the end of today. But I think Benedikt has some words to say on that.

I don't know. I mean, you can go even lower level. Okay, this is another slight rant. Sometimes people approach me like, oh, I want to learn about cryptography, but I've been looking at these elliptic curves and they look so complicated, and I got stuck. And honestly, I could probably have written "I don't really understand elliptic curves" for at least the first three years of my PhD. I'd written Bulletproofs by then and I really did not understand elliptic curves, and that's totally okay, because again, we can use abstraction. These elliptic curves are just one tool; mathematically, they're
an algebraic group, with a group operation, where the discrete logarithm is hard, or something like this, and we can beautifully use abstraction: you do not need to understand how these other components work. And there's this whole cryptographic layer where, for example, a lot of these proof systems use a hash function, and there's a new hash function coming out every week; especially, there's been a lot of focus on these SNARK-friendly hash functions. So that's another component. Or there are different elliptic curves, with different security and efficiency properties; that's another component at the cryptography layer.

That's a really good point. To add to that, and maybe to try to systematize it a bit more: all of these SNARK systems, all of their security proofs, boil down to a relatively common set of computational hardness assumptions. Basically, you prove that the only way for an adversary to break a proving system is if they can solve a particular problem that we assume is basically impossible to solve. For example, one of the common ones is the elliptic curve discrete logarithm problem, basically saying you can't find the discrete logarithm. And so there's a whole very, very low-level abstraction layer of the cryptographic primitives themselves, the actual constructions that you pull these hardness assumptions out of. There's a whole other field of work there, and, again, this is the beauty of modularization: you don't need to understand it. It's totally fine to be happy with abstraction when you're trying to look at things; really, embrace the abstraction, embrace the modularity.

I would say maybe that's
one of the points of friction that I want to see solved now: this friction between the back end and the front end. Because all this ZK stack we've been talking about, that's mostly the back end; these things change very quickly, and we get better and better very quickly, but they're also essentially changing the interface that you have with the front end. There are new things available for the front ends, and it's really hard to know: can I start thinking about a good front end for this yet? Can I start thinking about a good language, a good representation of computation, to throw at these back ends, or is the ground going to move under my feet?

Yeah. This is actually to you, Chris, a little.

Yeah. I mean, looking at things a bit more from that perspective, as we do, it seems like the really hard problem is dealing with differently sized finite fields. That's the essence of the problem we see from the higher-level-language perspective. The choice of the field, to me, is an implementation detail that should live in the back end, and the front-end program should be portable across this choice, so that, as there are different interactions between different systems, and the underlying systems change, and certain things become cheaper with small fields, or you need certain cycles of curves for recursion, these details in the back end stay abstracted away. But it's difficult, because at the same time, in order to get efficient execution, you kind of need to know about this detail when you're writing your programs. So there's this bleed-through of something that is really at the very bottom, the very thing you're abstracting over, all the way up to the front-end language. And I have not seen a convincing general approach to translating between modular arithmetic over different finite fields.
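A tiny illustration of the bleed-through Christopher describes: the same toy "program", evaluated over two fields actually used by SNARK systems (the Goldilocks prime favored by some FRI-based systems, and the BN254 scalar field), disagrees as soon as an intermediate value wraps the smaller modulus. The function `f` here is a made-up example, not any real front end's semantics:

```python
# Two real SNARK field moduli:
goldilocks = 2**64 - 2**32 + 1   # small field used by some FRI-based systems
bn254_r = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def f(x: int, p: int) -> int:
    """Toy 'program' x^2 + 7, evaluated in the field F_p."""
    return (x * x + 7) % p

# For small inputs, the field choice is invisible...
print(f(3, goldilocks) == f(3, bn254_r))        # True, both give 16

# ...but once an intermediate value exceeds the smaller modulus,
# the 'same' program computes different results in the two fields:
print(f(2**40, goldilocks) == f(2**40, bn254_r))  # False, 2^80 wraps mod Goldilocks
```

This is why the field cannot be treated as a purely back-end detail today: a front-end program's meaning silently depends on it unless the language defines arithmetic independently of the proving field and compiles it down, at a cost.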
fields. All the mathematicians I meet tell me it's really hard; it's discrete algebraic geometry, hard stuff. But I don't know, I'm curious.

Yeah, I think you hit the nail on the head when you said there's an efficiency problem: every abstraction layer has an implicit cost associated with it. Over the last ten years we've been building up a lot of abstraction layers from what used to be just a single monolithic proving system, like Groth16 or BCTV14, I can't remember the names. The one that's really missing is something which is field-agnostic, and I don't think it's going to show up for a while, because it's a hard problem. One way to solve it is to create a virtual machine: instead of turning your program into constraints for a specific proving system, you turn it into operations over some imagined virtual machine that you then prove in your underlying proving system. Then, ideally, you wouldn't be working in finite fields at all, just regular base-two arithmetic. But we are not there yet; well, it depends, it's very use-case specific, because the slowdowns there are gargantuan.

I often think that the history of ZK-SNARKs and ZK proofs tracks the history of computing. If you think about where we were in, like, 1990, it's similar to where computing was in 1936, when Alan Turing was writing papers saying that in theory we could do this wonderful stuff, but God knows how. The very early ZK-SNARKs were basically the very first digital computer systems, the vacuum tubes: forget about cost, and everything was a custom program; you had to hard-rewire your computer on a plugboard to get different
programs, because it was that low-level. Then, as performance improves, you start to layer on more abstraction levels; you start to get primitive programming languages in the computing space. That translates to creating some of the very basic abstraction layers that we have today. But we're still early: if you draw analogies between the path of ZK proving systems and the path of computing, we're in the 1960s at best.

Yeah, but in the 1960s we went to the moon, so maybe we can do that again.

If you draw that analogy out, I think the conclusions are pessimistic, not optimistic. Yeah, we haven't been back to the moon. That's true. Well, it was damn hard and expensive. Right, right, true. So are we saying they were doing something damn hard and expensive that won't be repeated in the future? But we did it on roughly the computational power of my watch, which I'm not even wearing.

All right, well, I think we've covered most of the questions. I had one last one, but it's kind of in the weeds. You mentioned hash functions, and I was thinking about hash-based versus pairing-based: where do we place this? Is this under the polynomial commitment scheme? So within the polynomial commitment, FRI-based schemes use a hash function and KZG uses pairings.

Yeah, exactly, I think it's a technique, and it's usually very tightly coupled to the polynomial commitment scheme. Each polynomial commitment scheme is built on different cryptographic primitives, which then define a lot of the efficiency. That's the problem we were talking about, that it bleeds up: it defines a lot of the efficiency properties, and unfortunately it even defines, in some way, how you can express your computation. But it also
defines your security assumptions, your trusted setup, and all of these things. The other thing I was saying about modularization is that there are many different elliptic curves; KZG can be implemented on many different curves, and which one you choose again involves different trade-offs, all the way up. In the end it depends on things like: what does Ethereum have precompiles for?

And also, the landscape of primitives moves much more slowly than the high-level constructions, because these computational hardness assumptions can only really gain consensus over time. It's not that you can formally prove that a particular computation cannot be done in polynomial time; that's basically the P versus NP problem. So, for example, pairings, elliptic curve pairings, have been around since the 1980s, but I believe they weren't used in commercial cryptographic software until the mid-2000s, just because people didn't trust them. It takes about 20 years. But I'm sure we'll see new primitives; I suspect polynomial commitments based around lattices will start cropping up in ZK.

You say pairings, 1980s? I think so; the concept of using a bilinear pairing in a cryptographic protocol was from the 1980s.

Wait, you were about to say something else. What did you just say? What was I going to say... oh, just that we're going to see some new primitives eventually, polynomial commitments based around lattices.

Is that cool? I'm kind of curious at a higher level: elliptic curves are one of the most primitive algebraic structures that you can use. There's a whole field of things like torus-based cryptography, or you can add dimensions and possibly get some interesting
properties. Anyway, we'll see what happens.

Yeah, maybe as a last topic: the area of potential improvement, or the direction you're thinking about, like lattices, I guess that would open up all sorts of cool new techniques. Are there any other specifics? What about trilinear maps?

Yeah, well, I love curve pairings, the bilinear map, which basically allows you a quasi-multiplication. If you had a trilinear map, that would open up... okay, it's been a long time since I've looked into this, and I'm a bit of a dilettante in this field anyway, so take this as just some guy on the internet, but I think it would open up the ability to create FHE-type constructions with elliptic curves, or with whatever it is you have the trilinear map over. Elliptic curves are additively homomorphic; with a trilinear map, I think you could get something which is multiplicatively homomorphic, maybe completely or at least partially.

I was under the impression that bilinear maps allow you one multiplication, and if you go trilinear you get one extra, so you get two multiplications. We probably need more if you want fully homomorphic encryption.

The thing that trilinear maps, and more generally these multilinear maps, give you, and this is outside the ZK space but really exciting, is obfuscation: program obfuscation, which is the sort of über crypto primitive. What it means is that I can have a smart contract with an embedded secret key that nobody can read. For example, the smart contract could literally store some balance in it, and nobody could see the balance, and it could update it locally. I think we had some talks on private state and public state; I guess you talked about this.
And with obfuscation, these things would become significantly more powerful, and there are really cool things that could happen there. That's sort of the part of the crypto future out there.

Yeah, exactly. In the private-transaction world today, somebody has to own the encrypted state and control it, and they're the entity that needs to be able to construct the proofs of computation over the private state, which is kind of a pain: if, for example, you need to liquidate that person, they're not going to make you a liquidation proof. But anyway, I could waffle for ages. I think we're getting the sign to leave the stage, which is a bummer, because I didn't leave any time for questions; I had too many. You've met these wonderful panelists, and if you have questions, please come join us after. Thank you so much.

Thank you to all of you, thank you very much. And now we're going to have a 20-minute break, so basically we're going to resume at 4:30.
Hello, hello, welcome back after the break, I hope you had fun. Now I would like to introduce to this stage Uma Roy, who's going to talk about "Aggregation Is All You Need."

Hello, I'm Uma, one of the co-founders of Succinct, and I'm going to be talking about aggregation is all you need. Okay, so I'm going to start with some background on Succinct. If you look at the ZK landscape today, there's been a lot of focus on two types of applications. One is zkEVMs, and there are a lot of teams we know that are working on zkEVMs. (Okay, sorry about that; it's back on the correct slide.) So there are zkEVMs, and then there are also privacy-preserving protocols like Tornado Cash. At Succinct, we're pretty interested in exploring the rich application design space beyond just these two types of protocols that are really well explored today. We thought a lot about how else ZK can help scale and make blockchains better, and we started by working on ZK light clients, which is basically verifying consensus protocols inside a ZK circuit to allow for efficient verification of consensus protocols in the EVM.

To get into a little more technical detail on how this works: you have a source chain with some consensus mechanism; you verify the consensus in a ZK-SNARK; then you verify that proof in an execution layer very cheaply, and then you can run a succinct on-chain light client, so that one chain, the target chain, can talk to the source chain natively. This solves a lot of existing problems with interoperability protocols today, where generally, if you want to transmit information or data between one chain and another, you have to rely on a trusted multisig or some trusted group of entities. With succinct on-chain ZK light clients, you can basically do interoperability without these trust assumptions, and you get much more secure interoperability. The first ZK light client we built was a ZK light client for Ethereum,
so our first protocol, Telepathy, which has been live on mainnet since March, uses our Ethereum ZK light client. With it, you can send arbitrary messages from Ethereum to any other chain, and you can also read Ethereum state on all these destination chains, because you have the Ethereum state root on those chains. It's also useful for bridging information from Ethereum's consensus layer to its execution layer. We have a few people using this: one great example is Gnosis Chain, which is using our Telepathy protocol to secure their native bridge, and EigenLayer is using us to operate their restaking protocol by getting Ethereum consensus information in the execution layer.

So we've built this Ethereum ZK light client, but we want to expand ZK interoperability by supporting more consensus protocols, and there are really only a few consensus protocols that matter. Tendermint is one that's commonly used across a bunch of ecosystems, because it's the native consensus protocol of the Cosmos SDK. Another consensus protocol, used by the Substrate SDK in the Polkadot ecosystem, is GRANDPA and BABE consensus. So there are a few consensus protocols that we at Succinct care about proving in a ZK circuit, to expand the set of domains that can talk to each other through these ZK light clients.

ZK light clients are in general useful for two different things. One is L1-to-L1 bridging, for example having an Ethereum chain talk to a Cosmos chain that's using Tendermint. Another subcategory, which will be the main focus of my talk, is data availability layer bridging: if you have a DA layer, you can bridge the state of the DA layer to Ethereum. I'm really excited to announce that we're working with Celestia to build a ZK bridge to bring Celestia state to Ethereum, and this is the big scope of my talk. One question you might ask is: why are we interested in bringing Celestia state to Ethereum? Here's a really
helpful diagram to show why that's useful. Basically, in the modular stack, a chain can decide to have its data available on Celestia but settle on Ethereum. This diagram shows how an L2 operator might send proofs, whether ZK validity proofs or, in the optimistic case, fraud proofs (which you only send on a challenge), to be settled on Ethereum: you send them to your L2's Ethereum contract, and then you send your transaction data, your DA, to Celestia. Celestia has super scalable, really high-throughput DA, so for the rollup it's much cheaper to operate this way, and then you use our ZK light client to attest on Ethereum that the data is actually available on Celestia. Rollups kind of get the best of all worlds this way, and it's much cheaper for them to operate.

There is an existing protocol to bring Celestia state to Ethereum; it's called the QGB, which stands for Quantum Gravity Bridge, and we are basically turning that into the zkQGB. The zkQGB has a lot of benefits. Right now the Quantum Gravity Bridge is a kind of sidecar on the Celestia protocol, and one of Celestia's core values is having a minimal protocol that is minimally simple and does only one thing, data availability. With the zkQGB we can take out the existing QGB and really simplify the core Celestia protocol: we just take the existing Celestia validator signatures, verify them in a ZK-SNARK, and move that out of the core protocol. This is really nice because Celestia can also scale their validator count without having to worry about on-chain light client gas costs, and in general we can reduce gas costs a lot relative to the existing QGB by bundling all these zkQGB verifications together.

So now I'm going to dive into some of the technical challenges that we faced while making the zkQGB for Celestia, and in particular ZK
Tendermint. One interesting thing to note is that, as I mentioned, a lot of chains use Tendermint, so this is actually reusable work throughout the Cosmos ecosystem. In general, when you verify a consensus protocol in ZK, you have to do a few different things: you have to verify signatures, you have to prove hash functions, and you have to do some decoding. The pseudocode generally always looks very similar: you verify signatures; you make sure at least two-thirds of the validators have signed; then you have to prove that the current set of validators is the correct validator set; and then you have to Merkle-prove any important information against the header, such as that a message was sent, or that some amount of money was deposited in a contract or burned to be minted on the other side.

Tendermint has some particular technical challenges associated with its consensus algorithm. It uses the signature scheme Ed25519, and you have to verify n of these Ed25519 validator signatures; unfortunately, this signature scheme has no aggregation like the Ethereum BLS signature scheme, so that is a technical difficulty. Tendermint also has no epochs, so in the worst case you might have to verify many headers in a row. And finally, when Tendermint was designed, it wasn't really designed to be maximally SNARK-friendly, so it uses SNARK-unfriendly serializations throughout, such as protobuf, which have various challenges with being implemented in a circuit.

I'm going to now cover some of the specifics of what we implemented at Succinct to have our ZK circuits handle Tendermint and verify Tendermint consensus in a SNARK. Here's an outline of the techniques we used and then combined. One interesting thing to note is that verifying validator signatures and hashes is embarrassingly parallel and a very repetitive task: verifying one signature has nothing to do with the validity of any other signature.
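The verification pseudocode described above (check signatures, check two-thirds of voting power, check the validator set against a trusted commitment, then Merkle-prove data against the header) can be sketched in plain Python. This is a simplified illustration, not Celestia's actual circuit: HMAC-SHA256 stands in for Ed25519, a flat hash stands in for the validator-set Merkle root, and all names are invented; in the real system every step below becomes circuit constraints.

```python
import hashlib
import hmac

def sign(key: bytes, msg: bytes) -> bytes:
    """Stand-in signature: real Tendermint uses Ed25519 keypairs."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_header(header: bytes,
                  trusted_validator_hash: bytes,
                  validators: list[tuple[bytes, int]],   # (key, voting power)
                  signatures: dict[bytes, bytes]) -> bool:
    # 1. The claimed validator set must match the trusted commitment.
    packed = b"".join(k + power.to_bytes(8, "big") for k, power in validators)
    if hashlib.sha256(packed).digest() != trusted_validator_hash:
        return False
    # 2. Sum the voting power behind valid signatures over this header.
    signed_power = sum(
        power for key, power in validators
        if key in signatures
        and hmac.compare_digest(signatures[key], sign(key, header))
    )
    # 3. Require strictly more than 2/3 of total voting power.
    total_power = sum(power for _, power in validators)
    return 3 * signed_power > 2 * total_power
```

The remaining step from the talk, Merkle-proving a message or deposit against the verified header, would authenticate application data once the header itself is trusted.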
So you can really trivially parallelize the verification. If you just do this naively in a normal circuit and verify 100 signatures serially, you're not taking advantage of the innate structure of the problem, and we think we can do a lot better. We call this idea ZK-SIMD. SIMD stands for single instruction, multiple data, which is a concept that's prevalent in a lot of other contexts, like GPUs and AVX instructions; basically it provides a form of data-level parallelism that lets you compute the same function f on a bunch of different inputs x1 through xn in parallel.

We realized that STARKs are actually really convenient for these sorts of parallelizable computations. Typically people use STARKs for VMs: a lot of the zkVMs, and even zkEVMs, are written in STARK-based languages. STARKs have a single state-transition function that repeats across all the rows of the circuit, and as a result they're much more lightweight to prove, often with much faster proving times than other arithmetizations like Plonk. So we figured out a way to arithmetize constraints within a STARK to implement an abstraction very similar to SIMD. As I mentioned before, this abstraction lets us, in a very general way, specify a function f that we want to compute independently over a set of inputs, producing a bunch of outputs. In the particular case of signature verification, f is just the function of verifying a signature, and x is the actual signature that we want to verify.

This is a very simplified diagram of what's going on, but basically, in our STARK that verifies a lot of these signatures in parallel, we have 2^16 rows, and 256 signatures get verified throughout the course of the circuit. They're not verified serially; instead, we have
an accumulator-based scheme: all the signatures are verified in parallel, and an accumulator column accumulates and verifies the results of all the computations together; at the end, a random linear combination check makes sure all the verifications actually went through.

Okay, so at a very high level, we've talked a little bit about how this abstraction works, and we built this nice framework to let us do these computations in parallel. Then we compared it to some existing implementations. In short (these benchmarks were taken on an M2 Mac), end to end, proof generation in our STARK framework for verifying 256 Ed25519 signatures took 80 seconds, so the proving time per signature is around 320 milliseconds. In contrast, if you were to verify just one signature in the Plonky2 proving framework, using Plonkish arithmetization, it would take around 17 seconds, and if you verified it in gnark, the Groth16-based proving framework, it would take around 14 seconds. So you can see that our abstraction of parallelizing the verification of all these signatures as a batch results in a much faster per-signature verification time, which is what you need for verifying something like Tendermint, where you have to verify a lot of validator signatures because they're not aggregatable.

I've talked a little bit about how to do this parallel computation of the signature verification. Another interesting thing is that you can further use recursion to reduce the end-to-end latency of these parallelizable computations. In particular, at the leaves of a tree, we verify a batch of signatures using our ZK-SIMD abstraction, and we verify each of these batches in parallel if we want to verify something like a thousand signatures. Each leaf is a STARK.
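The random-linear-combination check mentioned above can be sketched outside a circuit (field size and "residuals" here are invented for the example): rather than asserting each per-signature residual r_i == 0 separately, the verifier checks one combination sum(alpha^i * r_i) == 0 for a random alpha, which is nonzero with high probability if any single residual is nonzero.

```python
import random

P = 2**31 - 1   # toy prime field

def batch_check(residuals: list[int], alpha: int) -> bool:
    """Check that all residuals are zero via one random linear combination."""
    acc = 0
    for r in reversed(residuals):        # Horner evaluation of sum(alpha^i * r_i)
        acc = (acc * alpha + r) % P
    return acc == 0

alpha = random.randrange(1, P)
good = [0, 0, 0, 0]                      # every verification passed
bad = [0, 0, 5, 0]                       # one verification failed

assert batch_check(good, alpha)
assert not batch_check(bad, alpha)       # fails except with probability <= deg/P
```

The soundness argument is Schwartz-Zippel: a nonzero polynomial of degree d over a field of size P evaluates to zero at a random point with probability at most d/P, so one accumulator check stands in for 256 individual ones.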
Then we recursively combine the verification of the STARKs together, and we're able to do this in a tree-like structure, so the end-to-end latency of our whole computation is simply the depth of the tree, which is log2 of the number of signatures we actually want to verify. This means that by throwing more compute at the problem, the end-to-end latency of verifying a lot of signatures is greatly reduced.

One problem is that for all this ZK-SIMD work, both the STARK-based framework and the recursion, we use the proving system Plonky2, and in general recursion-friendly proof systems are typically not compatible with the EVM. In the EVM, if you want to verify a proof really cheaply, it's best to do it in Groth16 or something pairing-based, because Ethereum has pairing precompiles. Our solution is to wrap the recursion-friendly proof system with an EVM-compatible SNARK: in particular, we take gnark, which is Groth16- or Plonkish-KZG-based, wrap the Plonky2 proof with it, and then verify that in the EVM. So our proof-system composition combines three different proof systems, a STARK-based one, a Plonkish FRI-based one, and a Plonkish KZG-based one, and unlocks the best of all worlds: really fast proving for batches of signatures, really fast recursion for reducing end-to-end latency, and finally a wrapper layer that gives you really cheap EVM verification. You kind of need all three of these components to prove a consensus algorithm like Tendermint, with all the extra challenges associated with it.

In practice, as I touched upon, we have a particular proof pipeline for this proof composition and aggregation. The first step of the pipeline is an application-specific circuit that can recursively verify batches of things, and it contains the business logic.
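The tree aggregation described above can be sketched with hashing standing in for recursive proof verification (combining two child proofs into one parent proof); the structure and names are illustrative, not Plonky2's API. The point is the latency claim: with enough parallelism per layer, total depth is ceil(log2(number of leaves)).

```python
import hashlib
import math

def aggregate(proofs: list[bytes]) -> tuple[bytes, int]:
    """Pairwise-combine until one root remains; return (root, tree depth)."""
    depth = 0
    while len(proofs) > 1:
        if len(proofs) % 2:                      # pad odd layers
            proofs = proofs + [proofs[-1]]
        # Each layer runs in parallel in the real system; here we just hash.
        proofs = [hashlib.sha256(proofs[i] + proofs[i + 1]).digest()
                  for i in range(0, len(proofs), 2)]
        depth += 1
    return proofs[0], depth

# e.g. 1000 signature batches reduce to one proof in 10 recursive rounds
leaves = [bytes([i % 256]) * 32 for i in range(1000)]
_, depth = aggregate(leaves)
assert depth == math.ceil(math.log2(1000))
```

So verifying 1000 signatures costs 10 sequential recursion steps of wall-clock latency, not 1000, provided each layer's combines run concurrently.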
For a consensus protocol, that business logic is verifying validator signatures, and maybe verifying headers and hashes and the other things we need to do. Then we have a recursive circuit that verifies the proofs from step one and normalizes everything, making the proof size and the custom gates constant. And finally, we have a Groth16 recursive circuit that verifies the proofs from step two, and that gives us the cheap EVM verification. So we have this three-step proof pipeline that composes to get us all the properties we want: it's really fast to generate a proof, and it's also really cheap to verify.

Just to throw out some benchmarks about the recursion: the recursive circuit for Plonky2, which is step two in the three-step process I described, is actually really fast; they heavily optimized their framework for recursion, and in net it takes less than two seconds to do the witness generation and the proving. For the final wrapper circuit for cheap EVM verification, it takes around 16 seconds to generate the proof, so it's still very feasible and manageable to generate. And as you can see, the on-chain verification cost is around 400k gas, which could probably be optimized further, so it's very feasible to run something like this on the EVM today.

Finally, we think proof aggregation is super important. So far we've done proof composition across a bunch of different proving systems, but eventually you can imagine that we'd have a bunch of different consensus protocols being verified in ZK circuits: a GRANDPA proof of consensus, a Tendermint proof of consensus, an Ethereum light client, and maybe a bunch of other proofs as well. One nice thing you can do is take all these different consensus proofs coming into Ethereum, which is a very constrained computational environment, and aggregate all of them, which can save even further on the
cost of verifying all these proofs. When you aggregate them, you dramatically reduce the cost of verifying them on chain, and you can also verify proofs that aren't light clients. In the end, we think this will be a huge unlock for making gas costs much cheaper, and it also means you can have the state of all these different chains on Ethereum more frequently, because the gas cost of verifying an individual one will be much lower. So yeah, this is a meme about all the ways we stacked these different techniques to finally get to something that's actually fast enough and feasible to verify in the EVM.

Our ZK-SIMD, the STARK-based framework, will be open source soon. It's written pretty generally, so that if you have a function that you want to prove over a set of inputs in parallel, you can use it to write a circuit; we'll open source it soon, and we want people to contribute, collaborate, and use it for whatever parallel functions they want to prove. Our gnark-based Plonky2 verifier is actually already released under an MIT license; it's open source and available at that link, and we would love for other people who are using Plonky2 (we know it's a proof system with a bunch of users, because it's very fast) to use it to verify their proofs in the EVM, and also to collaborate and contribute. So if you're interested in using any of these things, check out the GitHub repo, and also talk with me after if you have any questions about using it.

Thank you very much, Uma. [Applause] If you have any questions for Uma, find her in the lobby. And I would like to introduce to this stage Jason. [Applause]

All right, so the title of my talk today is Scaling Data-Rich Applications on Ethereum with Axiom. The starting point of this is the realization that if you're writing smart contracts today on Ethereum, or really on any blockchain VM, you're really operating in a very data-starved environment.
If you look at the listing for this cute penguin on OpenSea and you try to identify, of all the pieces of on-chain data on the page, what can actually be used on chain, you'll find that the answer is only the owner, namely Zac Efron. All the other rich information on the page, like the transaction history, the historical prices, and all that good stuff that OpenSea users get to see, is simply not available to your smart contract. And this is not just an implementation flaw of Ethereum: any blockchain that wants to be decentralized can't impose the requirement that validating nodes can access history, because otherwise all full nodes would have to become archive nodes.

Now, of course, developers are very creative, and they work around this in many ways today. They face a trade-off between putting more data in state and paying for it, or reducing security somewhat by relying on trusted oracles, which in many cases is a fancy way of saying that the team itself puts the relevant data on chain in a fully trusted way. So developers who want to scale data access on chain today really have to trade off between increasing their cost and reducing the security of their application.

At Axiom, we're thinking about whether we can scale data-rich on-chain applications. On blockchains we have a special tool: in any blockchain, the current block always commits to the full history of the chain, and that means we can use cryptography instead of consensus to access on-chain history. How does this work on Ethereum? The current block is linked to all past blocks by a Keccak chain of block headers, and of course every past block commits to all the information in that block, namely the state of Ethereum at that block as well as all transactions and receipts. The problem, though, is that if you try to de-commit all the way back to a million blocks ago on Ethereum, that's going to be prohibitively expensive; you could never do that in the EVM. What we realized at Axiom is that we can shove all
these verifications into ZK: we can check a Merkle-Patricia trie proof as well as a chain of block header hashes, and make that feasible to verify on chain. This has a couple of side advantages of providing scale in accessing the historic data, and composition over it.

What we've packaged this into is something we're calling a ZK coprocessor for Ethereum. Smart contracts can query Axiom on chain to do a combination of historic data reads and verified compute over that data; we generate the result off chain and also provide a zero-knowledge proof that everything we computed was valid. Once we verify that proof on chain, you can use the result in your smart contract however you like, and because of the zero-knowledge proof, every result that Axiom returns has security that's cryptographically equivalent to something you access on chain in Ethereum itself.

So let me talk through what the two components of Axiom give you. The first component, reading arbitrary historic on-chain data, means that you can scale your application while interoperating with existing applications that are already on the chain; unlike something like a rollup, you don't have to move your state or really do anything to access more data. On the compute side, we envision supporting computations that really cannot fit in any blockchain VM, either today or in the future; you might imagine running some sort of local neural-network inference that's never going to happen on a time-shared global computer.

So I've told you what our ZK coprocessor is, and now I want to talk about what it can enable. I've drawn a rough graph: on the x-axis is the amount of data you're accessing, and on the y-axis is the amount of compute you're using to process that data. Obviously they're correlated: if you have a lot of data, you're going to use more compute. In the beginning, we think ZK co-processing will make the devex much simpler for certain operations that are already possible, but at great cost, in the EVM.
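The "current block commits to history" argument from earlier can be made concrete with a minimal sketch, using SHA-256 in place of Keccak and a dict in place of a real Ethereum header (all field names are illustrative). Walking parent hashes back from a trusted recent header authenticates any ancestor header; in the ZK coprocessor model, this entire walk is done inside a proof so the chain only pays for one verification.

```python
import hashlib
import json

def header_hash(header: dict) -> str:
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def build_chain(n: int) -> list[dict]:
    chain, parent = [], "00" * 32
    for number in range(n):
        h = {"number": number, "parent_hash": parent, "state_root": f"root{number}"}
        chain.append(h)
        parent = header_hash(h)
    return chain

def verify_ancestor(chain: list[dict], trusted_tip_hash: str, ancestor: dict) -> bool:
    """Check that `ancestor` is committed to by the trusted tip via hash links."""
    expected = trusted_tip_hash
    for header in reversed(chain):
        if header_hash(header) != expected:
            return False                 # broken hash link somewhere in the walk
        if header == ancestor:
            return True
        expected = header["parent_hash"]
    return False

chain = build_chain(50)
tip = header_hash(chain[-1])
assert verify_ancestor(chain, tip, chain[10])

# Tampering with any historic field breaks the chain of commitments.
tampered = dict(chain[10], state_root="forged")
assert not verify_ancestor(chain[:10] + [tampered] + chain[11:], tip, tampered)
```

Once an ancestor header is authenticated this way, a Merkle-Patricia proof against its state root (elided here) authenticates any account or storage slot at that block.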
These would be things like computing a trustless volatility oracle, verifying a user's account age, or simply computing consensus-level randomness. But where I think it really gets exciting is when the data and the compute both get ratcheted up. You might imagine accessing, fully trustlessly, the historic balance of any ERC-20 token at any historic block; or, if you're running and designing an on-chain protocol, you could define objective slashing conditions over the entire history of your protocol participants that cause them to be punished or rewarded. And once zkML gets sufficiently fine-grained, you can imagine adjusting the parameters of a DeFi protocol based on machine-learning algorithms applied to the historic data on chain. In this way, we think you can bridge the gap between traditional web2 applications, which take in vast streams of data and process it, and current trustless on-chain applications, which are data-starved.

So let me walk through the state of ZK co-processing today. We just went live on mainnet with trustless access to any historic block header, account, or contract storage variable two weeks ago, and this week at EthCC we've launched transactions and receipts on testnet. In this way, we allow smart contracts to access any piece of execution-layer data on chain today. To compute over that data, we offer developers the ability to write custom ZK circuits to arrive at a result, and all of that can be verified on chain fully trustlessly.

Now, what does that mean for your actual application? I'm going to walk through a few examples in a very concrete way. Suppose you want to access a user's account age. What you can do is trustlessly read the historic nonce of their account at two different blocks, then compute the first block that has a non-zero nonce, and deposit the age of their account on chain; we have this running live in a demo on our website today.
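The account-age example above works because an account's nonce never decreases, so the first block with a non-zero nonce can be found by binary search over historic reads. In this sketch, `nonce_at` is a mock of the historic read that Axiom-style proofs would make trustless; the search logic is the illustrative part.

```python
def first_active_block(nonce_at, latest_block: int) -> int:
    """Binary-search the first block where the account's nonce is non-zero.

    Relies on nonces being monotonically non-decreasing over blocks.
    """
    lo, hi = 0, latest_block
    while lo < hi:
        mid = (lo + hi) // 2
        if nonce_at(mid) > 0:
            hi = mid            # already active at mid: first activity <= mid
        else:
            lo = mid + 1        # still dormant at mid: first activity > mid
    return lo

# Mock chain: the account sends its first transaction at block 1_337_000.
FIRST_TX_BLOCK = 1_337_000
mock_nonce_at = lambda block: max(0, block - FIRST_TX_BLOCK + 1)

assert first_active_block(mock_nonce_at, 17_000_000) == FIRST_TX_BLOCK
```

The binary search needs only ~24 historic nonce reads to cover all of Ethereum's blocks, which is why a small number of proven reads suffices to pin down account age exactly.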
not just one stock one vote most governance today is simply one token one vote and I think the reason for that is it's very hard for governance to actually know any other information about the participants to have a more complex voting weight scheme with Axiom all you need to do is trustlessly read the history of your users voting by looking at the on-chain events then you can compute derived quantities like the number of times someone has voted their voter participation or even things involving when they vote and how reliably they vote you can then compute the custom voting weight using that and really tailor your governance to incentivize whatever you'd like in D5 you might imagine adjusting fees for historic participation and standard exchanges like finance and NASDAQ obviously if you're a higher volume Trader you get a fee rebate in D5 today everyone gets the same fee and we think that kind of violates a fundamental law of economics the only reason it hasn't happened yet on chain is that amms actually can't know how much their users have traded to implement that with Axiom all you need to do is trustlessly read the trade events of your users on chain add them up and then apply the appropriate discount to fees so all of the applications I just talked through are possible today on mainnet with Axiom but I want to talk through where we're going so we started by giving smart contracts access to the execution layer data and we think ultimately there want access to all data once Cancun lands in September we'll be able to access consensus level data on ethereum and perhaps through Bridges like succinct we can access data from other blockchains and roll-ups after that we think developers want to process the data they're getting and then the most native format possible for ethereum that's going to be simulating the result of view functions through zkevm proofs once we're able to do these first two steps we'll essentially have a ZK version of an archive node or indexer 
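The account-age and fee-rebate examples described above boil down to simple computations over historic chain data. As a rough illustration only (hypothetical helper functions, not Axiom's actual API; `nonce_at` stands in for a trustlessly proven historic-state read, and the fee tiers are invented), the core logic might look like:

```python
def first_active_block(nonce_at, latest_block):
    """Binary-search the first block at which an account's nonce is non-zero.

    Nonces only ever increase, so the predicate "nonce > 0" is monotone
    over block height and a standard lower-bound search applies.
    `nonce_at(block)` is a stand-in for a trustless historic-state read.
    """
    lo, hi = 0, latest_block
    while lo < hi:
        mid = (lo + hi) // 2
        if nonce_at(mid) > 0:
            hi = mid          # account already active at `mid`: look earlier
        else:
            lo = mid + 1      # still inactive: look later
    return lo if nonce_at(lo) > 0 else None  # None: account never active

def fee_discount_bps(total_volume):
    """Toy volume-tiered fee rebate in basis points (made-up tiers)."""
    tiers = [(1_000_000, 10), (100_000, 5), (0, 0)]
    for threshold, rebate_bps in tiers:
        if total_volume >= threshold:
            return rebate_bps

# Example: an account whose nonce first becomes non-zero at block 1337.
age_start = first_active_block(lambda b: 0 if b < 1337 else 3, 10_000)  # -> 1337
```

In the real system, both the historic reads and the computation would be constrained inside a ZK circuit and verified on chain; the sketch only shows the computation whose correctness would be proven.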
Such an indexer caches these values and returns them to smart contracts with lower latency. Finally, we think developers will want to use forms of computation that exceed the bounds of blockchain VMs, and we think the best way to provide that will be through a ZK-native VM. So we've started on the first piece of this roadmap, shipped it to mainnet two weeks ago, and we're excited to continue over the next months and years. If you want to try out Axiom today, you can check out our developer docs at docs.axiom.xyz.

In my remaining time, I want to talk at a bit more of a meta level about the usage of ZK in blockchain applications. I think a lot of developers today are very excited about ZK as a concept and a technology, but to be frank, they don't really know too much about it. This week I've been talking to developers about using ZK, and they view it as a black box which they either like or are very afraid of. So at Axiom we started something we're calling the open source program, to educate developers about how they can develop ZK circuits and what ZK can do for their applications. In the first round of our program we had a number of community-written open source circuits for things like fixed-point arithmetic, ed25519 signature verification, and BLS signature verification. We're opening up the second round of applications next week; you can go to the URL above to apply, and we'd love to see what you can build using ZK. Thanks so much, guys.

[Applause]

Thank you very much. This is actually the last part of our track, so I would like to introduce to the stage Tracy, who's going to be our moderator, along with Uma and Ismail.

Hello everyone. Okay, so we have a pretty unique set of teams here today. For the most part, over the last few years, as people have thought about ZK-SNARKs, they've thought about how they relate to rollups and validity rollups. But I would say that in the last 12 to 18 months there's been a handful of teams that have started exploring what it looks
like to use SNARKs for creative applications beyond rollups, and I think this panel represents a lot of those teams. Maybe we could talk a bit about some of the unique opportunities that introduces, why we think it's interesting and compelling, but also some of the challenges that come with it.

One of the first things I think we can start with is a motivating question around ZK and trust assumptions. Sort of inherent in your interest in ZK is the belief that trust is very important in blockchains. For years we've had solutions like off-chain data processing that either happen optimistically or just with an assumption of trust. What about trust assumptions is important to you, and why do you think it's so important that we do this cryptographically instead of relying on some of these weaker assumptions?

What I like about ZK is that it gives you an opportunity to establish a point of trust and then inherit computation from that point, and I think the notion of trustlessness is oftentimes misleading. When we think of a ZK rollup, what you're functionally doing is inheriting trust in a state transition from the consensus-level collateral. And when we think about coprocessing, or any type of access to on-chain data from a contract that can't be accessed through the VM's native execution, what you're trying to do is get as close as possible to your computation running on the same trust assumptions as the underlying base layer. So it's not trustless per se, so much as it is the inheritance of trust from a very specific, designated point.

Yeah, it's a bit of a different trust boundary than where we were before, but if you look at the base layer as the point of no trust, this moves it up a level. Any other thoughts on that? Why is it important for you guys?

Yeah, we've been talking to a lot of application teams, really trying to educate them on what ZK can do for them, and, as I'm sure everyone in the audience knows, although we talk about these very secure ZK-based systems, in reality a lot of things on chain today rely on development teams being honest. Even a lot of the systems in production that will eventually become fully secure optimistic rollups today have permissioned sequencers. Where I see ZK really being valuable in the next year or so is in cases where social consensus will not accept a trusted oracle, and those are typically cases where composition is very important. If you're a protocol team, maybe your users will accept your trusted oracle, but where it really gets difficult is when another protocol wants to compose on top of yours. Will that protocol's users trust this random other protocol's team? The chain of trust gets very tenuous, and I think ZK really helps in establishing clearer trust abstractions and boundaries.

Yeah, to add to that: there are some existing situations where we've already seen trust be super problematic and abused, which eventually leads to things like bridge hacks and results in very tangible amounts of money being taken from users and protocols. I think we're all aware of the bridge hacks that have happened very recently, and in those situations it's really clear why the existing trust assumptions, multisigs or whatever other mechanisms are currently used, aren't acceptable, because they have already led to material loss.

That's a great one. Composability is often overlooked, but if you have a key trust assumption in the middle of a chain of trust, then you can't really build another highly valuable protocol on top of something with weak trust assumptions. So it's a good one.

Yeah, and it's sort of implicit in that conversation: this is an attempt to do things a bit differently than we've done in the past, to avoid some of the bridge hacks that have happened and to help improve composability and protocols. How are your teams thinking about doing it a bit
differently, maybe starting decentralized or enabling that very early in your product's life cycle?

Yeah, so what we focus on at Lagrange is supporting data-parallel computation on top of large chunks of on-chain data that can be proven efficiently. Broadly speaking, when we think about the space of on-chain data, we're functionally constrained in our ability to access, from the execution layer, the majority of the state that has been created in the canonical history of a chain. This is even more prevalent in the modular context, when you have a series of different execution spaces with a series of different state structures, transaction trees, receipt trees, and block histories. So what we focus on is letting you treat this more or less massive unstructured data as if it were your data lake in web2: running SQL, MapReduce, RDD, massively parallel processing computational models on top of it, to derive and extract properties that are relevant to your application's function.

Maybe pulling it in a bit to focus on decentralization?

Yeah, so functionally, decentralization in the context of this question has to do with a few relevant vectors. Firstly, you have to think about the decentralization of the prover, which confers a liveness assumption on the overall protocols that inherit computation from it. Moreover, you have to think about the assumption of where you're deriving data from. In a single-chain context it's very straightforward; when you go multi-chain, some of the work that Succinct does, it gets a little bit more opaque. You ideally have to derive data from the underlying consensus of a source chain, or from as close a process to that, like a light client, as you can get.

Yeah, I would say one really important piece of this is for users to be able to audit what every team is doing and what's actually been deployed.
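The "on-chain data as a data lake" model described above is essentially map-reduce over decoded chain records. A toy sketch (the mock events and field names are invented for illustration; a real system would prove the records against receipt tries rather than trust an index):

```python
from collections import defaultdict

# Mock decoded transfer events, standing in for receipt-trie data.
events = [
    {"block": 100, "trader": "alice", "volume": 40},
    {"block": 101, "trader": "bob",   "volume": 10},
    {"block": 102, "trader": "alice", "volume": 25},
]

def map_phase(event):
    # Data-parallel step: emit one (key, value) pair per record.
    return (event["trader"], event["volume"])

def reduce_phase(pairs):
    # Aggregation step: the result a proof would attest was computed correctly.
    totals = defaultdict(int)
    for trader, volume in pairs:
        totals[trader] += volume
    return dict(totals)

totals = reduce_phase(map(map_phase, events))  # {'alice': 65, 'bob': 10}
```

The map step is embarrassingly parallel, which is what makes it a natural fit for distributed proving over large histories.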
I think the four of us on stage here can talk all we want about how our teams have a security mindset and all the things we're doing, but ultimately users actually have to be able to verify and put our systems to the test. There are a lot of things that go into that. One is just standing the test of time in a production environment. The second is having a lot of transparency: the code base being open source, and having a reproducible build for any verifier you deploy on chain. And third is adopting cutting-edge security techniques like formal verification or fuzzing, to give a higher degree of guarantee to users. I think all of these systems are going to be difficult to trust at a super high level until they've been running live for years, and whatever we can do to let users audit them more quickly is going to move the space as a whole forward.

Yeah, to talk more about security: I think Vitalik advocates, even in the context of zkEVMs, for a two-factor approach, where maybe you have one computation done in ZK, and then maybe you have a trusted, TEE/SGX-based second factor, or maybe you have many different implementations of the same function. One really interesting thing about ZK is that at the end of the day, all the functions we are computing are just f(x) = y. It's very clearly specced, so you can actually have multiple redundant implementations of the same computation very easily, because it's just inputs and outputs, and the circuits have to compute the same function. I think that will be really powerful in the security story of actually getting ZK adopted, and in people feeling comfortable using it in their dapp in a very critical context.

Yeah, those are good points. How are you thinking about... I mean, multi-prover kind of implies a separate software stack. Do any of us think we would push in the direction of a full separate software stack for another prover? TEEs may be a shorter path, where you don't necessarily need a whole new stack.

I think you can already see that
there are a few primitives that are implemented across a few different proving systems. At the end of the day, all of us on stage are doing pretty similar stuff with hash functions, signature schemes, and other cryptographic primitives that are fundamental, and it's really not too hard to take the same primitive and re-implement it in a new stack. We already see multiple implementations today, so I think it's quite feasible to have multiple implementations, and something we can push towards. So I'm not advocating for a different prover for the same proving system, but rather for multiple redundant implementations and even different proving systems.

I think it also requires being thoughtful about the scope of what you are proving. If you're proving something like a light client, where you have a very fixed set of parameters over what a correct execution is, it's more straightforward than if you're building a general-purpose VM. You're potentially incurring a significant amount of technical debt by anchoring to a specific proving system if there's a change in the state of the art and your front end can't be agnostic to the back end. So there are complexities that are inherently incurred as you develop applications with more and more zero knowledge intrinsic to their core purpose, and in those situations having multiple back ends becomes an imperative if you want to stay up to date and keep your performance relevant.

Yeah, that makes sense. What are you doing in the meantime? We're not quite there to a multi-prover world, and I think most all of you are working on getting into production pretty soon. Are you putting in gates or checks in your contracts, or, say, requiring every proof to carry a signature along with it for now? How are you thinking about that in the short term, to make sure you don't have a catastrophic bug in prod?

Yeah, we deployed to mainnet two weeks ago, and we put in gating on the prover, so if there is a soundness bug in a circuit, we certainly will not trigger that bug. We've also put in time-locked upgrades on our verifiers so that we can actually fix any issues that come up. We do feel this should be a temporary phase until we're able to introduce stronger security techniques like formal verification.

Yeah, I think we can follow a lot of the in-production ZK rollups today. They all have similar setups: approved provers, time-locked upgrades, and governance over the verifiers. I think all of those things are very reasonable, because again, if they are in the critical path, a hack can potentially be catastrophic. We think about following their lead and their design choices, and we think that's a very reasonable short-term trade-off before we get multi-provers and trusted execution environments.

We agree. I think there's good precedent from teams who have pushed large production code bases with complicated underlying circuits and have done so in a way that has, to date, been more or less secure.

I actually had a conversation with an auditing team, which will remain unnamed, that had done some zkEVM audits, and they were nervous. They said to themselves: "I don't know if we understand this well enough to really put it in production; we've done our best here." Are there things beyond audits that you're trying to do internally at your companies, maybe to have a culture of security, or to help ensure at code-development time that when you go to production you don't have problems?

Yeah, I think having a culture of security is very important, and you need to be very clear on your code reviews and your best practices as you're implementing and developing your underlying infrastructure. I'd extend that broadly and say, just from a business operations
standpoint, right now you should be resourcing and hiring people who have an understanding of the primitives they're working with. Especially when you're building these highly complicated systems, it's important that the work being done is done by people who have an awareness of, and a context for, how the things they're building work.

Yeah, we think standard security practices are of course very valuable, but there are also some very obvious things which are very helpful. One would be having less code: a more minimal and well-designed system, so you don't have as large a security surface area. The second would be looking at the interfaces between ZK and blockchain systems, where the two systems are really of quite different natures; we think these boundaries are places where issues are more likely to arise. So of course circuits might have bugs, but I think it's much more likely you just completely mis-parse part of your circuit and do something quite obviously wrong.

Yeah, you can even see this in the bridge context, where a lot of bridge hacks have been due to trust assumptions getting violated, but other bridge hacks are simply due to smart contract bugs. I think that goes to support your point.

Maybe zooming out a little bit and thinking about modularity, given where we are. Uma, I guess you just announced that you're working on a Celestia bridge. Are any of the other teams thinking about modular DA layers in their environments, or are you mostly focused on specific chains?

We focus very heavily on modularity in how we're developing our infrastructure. Being able to permissionlessly support data access in a modular context is very important, especially as we see a proliferation of new execution spaces, whether those be rollup-as-a-service providers or L3s on top of existing scalability solutions. In terms of state access, it's very important to ensure you're not constrained by the ecosystems you're principally interacting with.

Yeah, we're currently focused on the EVM, but I actually want to point out another aspect of the word "modular" that I think ZK is very useful for. If you're able to use ZK introspection into the state of the chain as part of your application, you can sometimes dramatically simplify the on-chain architecture of your smart contracts. Basically, you don't need to record a lot of extraneous information in state that you could later read using ZK, and so we think this can contribute to a trend of smart contracts actually getting more modular.

Yeah, I agree with that. State access on chain has led to development practices that could be alleviated if you had principally better data access and principally better compute on top of that data. If we look at optimistic rollups and the bisection game, there are things there that can be simplified drastically, I would argue, reducing some of the implicit security assumptions in it.

Cool. One of the challenges of having a proof or a light client built off of a given chain: on an L1 we can kind of assume there's a lot of economic stake behind this root of trust. When we go into a modular chain ecosystem, where you have a root of trust that maybe has lower economic stake or longer times to finalize, I think that introduces a challenge in the level of trust you can put in it. How are you alleviating that problem? Lagrange, I know you've published some thoughts on state committees, and I think this challenge will also impact Axiom as you try to look at other chain ecosystems.

Yeah, I think the core challenge is essentially that although, let's say, an optimistic rollup might have a longer finality period, say seven days, users really demand some sort of weaker guarantee that can hold
much faster. And so we think it's actually more appropriate to leave that sort of guarantee to the application. If you're trying to withdraw 100 million dollars from Optimism, maybe you should wait seven days before someone accepts it; if you're trying to play a game on Optimism, who cares, just accept it. So we think it's important that the guarantee you're offering the end user is precisely stated and extremely clear.

I think part of the complexity with having a clear guarantee for the end-user application is that it opens up the design space for less transparent infrastructure providers to have opacity over the underlying design decisions of their protocols, and this, I think, is the concern with a lot of cross-chain protocols today that originate messages from optimistic execution environments. So one of the things our team works on is using existing Ethereum asset collateral, with EigenLayer, to assert an early degree of economic attestation behind the validity of a reported state transition for an optimistic rollup. The reason we think this is very valuable is that you can have bridges permissionlessly consuming state from a shared layer, with a clear amount of economic trust and economic security behind the state they're using. It means that if you want to understand how much security is behind a given attestation of state, you can very quickly look at the size of the committee and the stake within that committee, and derive that assertion from there. You don't have to worry about whether a k-of-n assumption for an arbitrary bridge, which could be used in some intermediary protocol, has a sufficiently decentralized underlying validator set. We think of this as very much a public good.
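The "look at the committee and its stake" idea just described reduces to a simple quorum check over attested stake. A hedged sketch (invented structure, not Lagrange's actual protocol; a real system would verify the attestation signatures rather than take a set of signers on faith):

```python
def attested_stake(committee, signers):
    """Total stake of committee members who signed an attestation."""
    return sum(stake for member, stake in committee.items() if member in signers)

def accept_state_root(committee, signers, quorum=2 / 3):
    """Accept a reported state root only if signers control >= `quorum` of stake."""
    total_stake = sum(committee.values())
    return attested_stake(committee, signers) >= quorum * total_stake

committee = {"op-a": 100, "op-b": 50, "op-c": 50}  # operator -> staked amount
accept_state_root(committee, {"op-a", "op-b"})     # True: 150 of 200 staked
```

The point of the design is exactly what the panelist says: the economic security behind an attestation is legible, because it is just the sum of stake you can read off the committee.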
Great. Maybe we could spend the last few minutes zooming out and thinking about ZK beyond the blockchain. I think SNARKs in general have received minimal attention outside the crypto domain, and crypto serves as a good incubator for this technology to grow up and mature. But over time there's probably an intersection between ZK-SNARKs and the broader internet at large. Is there anything exciting and interesting that starts to happen as we see more verifiable computation used throughout the internet, or are we all just crypto maxis?

So I've done some academic work on zkML, putting some of the largest known machine learning models into ZK, and as a consequence I've had to explain to some pretty well-known machine learning professors what ZK is. Their reaction is always the same. Number one: "That's impossible; you got something wrong." Number two, and this takes varying periods of time for different faculty members: "Okay, maybe it's possible, but it's useless for us." And number three, some of them say: "Oh, maybe in this edge case it might be useful." So I think it's a lot of education, and finding the places where having verifiable computation actually makes sense in a non-hostile environment. Typically their response is: "Why would I use a SNARK? I could just run it. I trust Amazon more than your weird crypto system." I think it's very sobering in showing us that, as a space, we need to be delivering real-world value that's exogenous to this relatively insular crypto world.

That said, I think one trend that pushes a lot of people I know, who've been bearish on crypto and bearish on ZK forever, to be very interested is the rise of AI. People are very worried about spoofing and deepfakes, and they really want a notion of provenance to exist. Just the other day I called my bank to verify my identity for a wire; I'm pretty sure that's just not going to be a thing in a couple of years. I think people are very hungry for a solution, and I do think ZK can play a role in that.

I think use cases like zkML and Worldcoin are an interesting case: they are in crypto, but they're also kind of bridging to the real world. Worldcoin is a very real-world application of zkML, but of course it's also going to be settled on an OP Stack rollup, so they're actually using a lot of crypto properties in their system as well. I think use cases like that, which straddle the real world and the crypto system, are really important and really excite me, so I'm looking forward to more of those. I'm sure there are more zkML use cases for similar or new sorts of products like that.

One thing I'd also add is that when we think of verifiable computing in crypto, we have a tendency to think principally about the succinctness of the computation, and most of us here are not talking about the ability to compute verifiably on a private set of inputs. In the web2 context there are a lot of examples of where that is, and will be, highly viable. If we think about enterprise transfer of data: a lot of web2 companies are unable to effectively orchestrate computation across shared data assets, to mitigate fraud or to deliver a better user and customer experience. The fragmentation of data within major companies today makes it very difficult, in financial services and in the healthcare sector, for there to be applications that can act in the best interest of the underlying user base while still preserving the privacy of each of the enterprises that holds that data.

Cool. Are we doing questions? I think we're pretty close to time, but I also think we're wrapping up the end of the conference, so maybe we can do a few questions from the audience. There's one talk after this, but okay, we can do questions, of course. Anybody? All right. Come on, maybe you want to say something; we don't bite. What's a storage proof?

Yeah, just a general question about how much more
efficiency gains do you think are left for ZK proving systems from here? Is it a 10x, a 100x, or like a 2x? Because people are still kind of skeptical about how practical ZK is from a proving, computational perspective.

Yeah, I mean, that's a good question, but if we assess the trends in computation over the last 20 or 30 years, there are a number of buoying factors that will result in ZK becoming increasingly performant, irrespective of the underlying proving systems we're talking about. You can point to improvements in the proving systems themselves as well as improvements in the underlying computing infrastructure, both of which will likely have positive effects over time.

Yeah, I think currently all of us perhaps run our circuits on fairly commodity hardware, and there are a lot of ZK hardware companies out there trying to make hardware-level improvements to these proving systems and make them faster. So that's one frontier to push, and of course there are also new proving systems all the time; I think Nova came out this year, and perhaps there will be more in that line of work that makes algorithmic improvements. So I'm very optimistic that the algorithmic improvements plus the hardware improvements are going to result in huge gains for ZK.

Yeah, if we want to think about the theoretical limit: Justin Thaler put out a blog post, I believe last year, discussing this. Essentially there are two sources of overhead. One is in converting a normal computer program into a ZK circuit, where he thinks the limit is maybe a factor of 100. The second is in the actual proof system, where maybe you lose another factor of 10 in the limit. So that would add up to a 1000x overhead over a normal computer. Right now we are nowhere close to that, so I think we can easily get a 100x to 1000x improvement without any hardware.
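The back-of-envelope above multiplies out as follows (the factors are the panelist's recollection of Thaler's estimates, used purely for illustration):

```python
def limit_overhead(circuit_factor=100, proof_system_factor=10):
    """Theoretical multiplicative overhead of proving vs. native execution."""
    return circuit_factor * proof_system_factor

def proved_runtime(native_seconds, overhead=limit_overhead()):
    """Runtime of a proved computation, given native runtime and overhead."""
    return native_seconds * overhead

limit_overhead()  # -> 1000: a 1-second native program proves in ~1000 s at the limit
```

Since today's provers are far from that limit, the claimed headroom is the gap between current overhead and this ~1000x floor.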
And one last thing I'll add: that doesn't account for the hardware side of things, which typically gives between a 10x and sometimes up to a 100x improvement, depending on the specialization of the hardware. So: fast proofs in the future.

All right. Do we have time for one more? Yes, we do. Okay, thank you. So, as someone who's been on a team that has seen developers put data on chain, let's say, not elegantly, just pushing a lot up there, I'm very excited for this idea of what you're calling coprocessors. Any predictions on how long it'll take to make the switch from these very inelegant on-chain protocols to, in my mind, these much more elegant protocols? A year, two years, three years? How long do you think that switch will take?

I think the principal constraint there is twofold. First, there's developer adoption: whether developers feel comfortable altering the paradigm and the function of their application to include new primitives, irrespective of how secure those primitives may be. Secondly, there's the question of whether there are clear instances on chain now where applications could leverage better data access, and will leverage better data access, through changes they make to their underlying infrastructure in a shorter period of time.

I think another factor that comes into play is really social consensus. Right now it's frankly still pretty difficult to have trustless versions of a lot of the things that developers are just putting numbers on chain for. I think once it becomes easy enough, users will start demanding it, and it's something we'll see slowly, and then suddenly.

Okay, thank you everyone for your time, and enjoy the rest of the conference. Thank you.

[Applause]

All right. So, just before you all leave, I want to say a big thank you for coming to our ZK track as part of Modular Summit. We are a little bit ahead of schedule, but I've heard that everyone's pretty
set to go ahead. I just have one question: are we going to be doing anything about the chairs? Because we're going to have just two speakers, so maybe we set that up a little bit. Yes? But yeah, how have you enjoyed the day so far? Do you want to make some noise? Are you excited? We're rounding out day one of Modular Summit; it's very cool. All right, so I think maybe we'll just pull this... well, that's going to be really loud. Okay, sorry. I also have a crappy knee right now, so I'm the worst person to do that. Very nice. All right, yeah, that looks more like a fireside, right? Good. Maybe we can even grab some of those mics. All right, I want to introduce to the stage Mike and Mustafa, who are going to be doing a fireside to wrap up day one of the Modular Summit. Let's give them a round of applause. Welcome!

[Applause]

All right, everyone. Everyone having a good Modular Summit? Let's give a hand to Celestia for organizing this, and Maven 11. This is amazing. This is the last panel of the day, so everyone gets a trophy and a cookie for staying and watching us. Thank you.

Mustafa, I've got a lot of questions for you about Celestia, about the history, how we've gone from LazyLedger to here, and what you expect from the future. But I was told that you at one point got arrested for hacking the CIA, and I've just got to get you to tell that story on stage. So give us the inside scoop: what's the history here?

Yeah, so we're starting straight with that, okay. I got into programming at an early age. The first programming language I learned was PHP, and I started thinking about ways that programmers could make mistakes in their code, and then I started learning hacking naturally through that. My first experience of a hack was when I was doing my math homework and didn't have a calculator with me.

How old were you when you were doing this?

Well, the calculator thing was when I was like 11.
I didn't have a calculator, but I needed to do my math homework, so I searched online for an online calculator, and I found this shitty little Perl calculator on a maths professor's website; I think it was at the University of Maryland. It was a text box where you could type in sums and it would give you the result. So I was thinking: I wonder if he implemented this in the most basic way a programmer would probably implement it. It was using Perl, and the way you would do it is to take the user input and pass it into a function called eval, which evaluates Perl code. So not only could you type sums into this calculator, you could actually type computer code into it, and the server would execute it. You could actually hack the entire server through this exploitable calculator. So I managed to get access to this university server by typing commands into the calculator, and then emailed the professor. He tried to fix it, and then I found a way around it, and that was a very interesting experience for me.

But then I got more involved in hacking when I started becoming more involved in internet activism. I was involved with groups like Anonymous and LulzSec. Anonymous was doing denial-of-service attacks against various entities they were protesting. For example, when PayPal, Visa, and Mastercard blocked donations to WikiLeaks, they did denial-of-service attacks against PayPal, Visa, and Mastercard to take them offline. It was thousands of people in chat rooms coordinating this, and for me that was kind of interesting, but it didn't really do much except gain some media attention.
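The calculator flaw in that story is a classic code-injection bug: the server passed user input straight to the language's eval. Here's an analogous sketch in Python rather than Perl (illustrative only), along with a safe version that walks the parsed syntax tree and allows arithmetic operators only:

```python
import ast
import operator

def vulnerable_calculator(user_input):
    # DANGER: eval() runs arbitrary code, just like Perl's eval in the
    # story -- "2+2" works, but so would code that reads files, opens
    # sockets, or takes over the server.
    return eval(user_input)

# Whitelisted arithmetic operators -- nothing else gets evaluated.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_calculator(expr):
    """Evaluate arithmetic only, by walking the parsed syntax tree."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("only plain arithmetic is allowed")
    return walk(ast.parse(expr, mode="eval").body)

safe_calculator("6*7")                 # 42
# safe_calculator("__import__('os')")  # raises ValueError instead of running code
```

The safe version accepts exactly the grammar a calculator needs and rejects everything else, which is the general fix for this whole class of bug.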
actually try to you know hack something and like get into get information that could shed light on wrongdoing so then I started so then I like found some technical people on I found like I noticed like a few people that were more technical than others and I brought them into a private Channel and kind of started a hacking group through there eventually spend out into LOL SEC so just to sum this up by the way the difference Mustafa between you and me when we're 11 you know if you if I didn't have a calculator I just didn't do my homework and try to hack a new one out of the out of the web uh and just to bring us up to the current time and I think you're about 16 years old at this point right you're sort of casting around and looking for a righteous group that you can hack and you uh you settled on the CIA which is uh that's a bold Choice um tell me how does the the CIA take to hackers are they you know is that particularly welcomed yeah I mean so the funny thing is like you know CIA was one of the many things that we kind of like attacked but the funny thing is like the CIA wasn't even technically a hack it was a denial of service attack so it's like developed service attack you're not actually getting confidential information so we just took the CIA we took the CIA website offline and that was like that that's like a very basic thing like that's not even like the most you know we did more advanced things than that but like that was the thing that got the most attention um or one of the things that got the most press attention because like it was very embarrassing for the CIA to have their website take um taken offline did they uh thank you for this and say thanks for showing a vulnerability in their system or were they a little less forgiving um that can be in some very strange ways all right we'll dig at that a little later but uh I think this is a great story to tell because I sort of want to um I think many there are a lot of very unique people in the crypto 
space and I think the people who have made an enormous impact actually come at this not from a short-term money-making perspective but from an ideological how can we change the world perspective so I think that's just important background for folks in the audience to understand uh now I want to actually move us forward a little bit in time and talk about lazy Ledger which was originally the name of Celestia and sort of the white paper and idea that launched this entire wave of modularity that's frankly blossomed into something super amazing that we're witnessing you know this week in Paris so tell us a little bit about how lazy Ledger came to be and what are the early iterations of that modular idea look like yeah sure so um I've been interested in peer-to-peer systems for even before Bitcoin existed like I was interested in BitTorrent and peer-to-peer file sharing because I used to like download stuff from The Pirate Bay and that was very interesting it was very interesting to me that people could kind of just download stuff and kind of like permissionlessly and the interesting the reason why I'm interested the reason why I was so interested in PHP systems is because so many people were trying to shut down their Pirate Bay and but to this day it's still online so it's like it's extremely censorship resistant and that's thanks to technology like BitTorrent and the hd's so then I learned about Bitcoin in 2010 2011. 
I was following it, including the research conversations in an IRC channel called #bitcoin-wizards, where people discussed theoretical improvements to Bitcoin. I noticed there was a one-megabyte block size limit in Bitcoin, and I asked people what would happen when it got reached. People weren't worried about it at the time; they said it would never be reached. But it got hit pretty soon, around 2012 or 2013, and transaction fees became very expensive. The Bitcoin community started debating how to fix that problem and split into two camps. One camp wanted to increase the block size, and that eventually spun out into Bitcoin Cash. The other camp wanted to use layer-2 technologies, payment channels, and the Lightning Network, and that was the camp that prevailed on the main Bitcoin network. The reason they didn't want to increase the block size is that a fundamental principle of blockchains and cryptocurrency is that end users should be able to fully verify and validate the chain, and increasing the block size makes it more expensive for users to run full nodes.

So I started thinking about this problem more, and in 2016 I started a PhD at UCL focusing on layer-1 scaling. At the time people were talking about sharding; that was the most interesting area of investigation. I co-authored a protocol called Chainspace, the first sharding protocol proposed for smart contracts; this was also around the time Ethereum 2.0 was researching sharding. But the problem with all of these proposals was that they weren't dealing with the case where a shard goes bad. It was pretty much like a block size increase: the security model was the same as just increasing the block size, because there was no way to actually validate what the shards were doing. To fix that you need fraud proofs and ZK proofs, which is what rollups are doing. But at the time there was a missing piece to making fraud proofs and ZK proofs work, which was the data availability problem, an unsolved problem back then.

So I started doing more research into the data availability problem, and I co-authored a paper with Vitalik on how to scale data availability using data availability sampling. Then I realized that this is basically the core primitive that makes a blockchain work: a blockchain, fundamentally, at its core, is a data availability layer and a consensus layer. So I proposed LazyLedger, a paper that proposed a layer-1 chain stripped back to its core components. It's called LazyLedger because it's a lazy blockchain that does not do any computation; it only does consensus and data availability. This was an idea I proposed about three months before optimistic rollups were proposed, and when optimistic rollups were proposed, everything clicked together, because my paper didn't really have a fully fleshed-out execution model and optimistic rollups provided that. Then it made a lot of sense to actually build it, because rollup-centric roadmaps need a scalable data layer.

Yeah, I actually want to get into the weeds of data availability. I think that's a term many people understand only at a surface level, but it's such an important roadblock to realizing the grand vision that many great talks have laid out today. So let me define the basic components of the stack, which to me are execution, data availability, settlement, and consensus, and let's focus on the data availability question. Let's say there were a bunch of five-year-olds in this audience: how would you explain its importance, and why is solving data availability such a critical roadblock, not only for scaling base-layer infrastructure but for the cost of apps?

Yeah, so here's how I would explain it. Bitcoin was created to solve what's called the double-spend problem. The double-spend problem is the fundamental problem in creating digital cash: if Alice has a certain amount of funds, how do you prevent her from spending the same funds twice? The way you prevent that is by having a blockchain that orders transactions, because if Alice tries to spend her funds twice, only the first transaction will go through and the second will be rejected. In Bitcoin, this rule that only the first transaction can go through, or that you can only spend funds you actually have, is enshrined into every Bitcoin node, so if you run a full node, it executes every transaction and rejects any block that contains an invalid transaction. Miners can't misbehave in that way.

But the thought experiment that led to LazyLedger, and the reason data availability is so important, is this: what is the simplest version of Bitcoin you could create? What if you had a version of Bitcoin with no rules about what transactions can go into the chain, meaning conflicting transactions, transactions that double-spend coins, are actually allowed on chain? How could that still be secure? How could that still prevent the double-spend problem? Well, it's pretty easy: all you have to do is make sure that clients simply ignore the second transaction. So technically you don't need to enshrine computation or transaction-validity rules into the chain itself; you can push that to another layer, or to client-side nodes, where the layer you're pushing it to simply ignores invalid transactions. If you do that, then you're using the blockchain not for computation but only for (a) ordering and (b) data availability. The reason ordering is important is obvious: you need to know which transaction came first to know which one actually got to spend the coins. And the reason you need data availability is that you need to know the complete set of transactions that happened to even know which one came first. If not all the transactions were published, only some of them, then you can't rule out a missing, unpublished transaction that might have come before.

That makes sense. So I want to understand, in simple terms: a lot of applications need data, and they want it cheap, and I want to start painting a picture for the audience of what market-structure changes happen if we solve this problem. From a cost-structure standpoint, my understanding is that for rollups especially, data availability is a massive cost, and critically it's a variable cost: it doesn't get cheaper, it's not some fixed cost you can amortize across a whole bunch of users; it scales with the amount of transactions that happen. So let's say these apps start using Celestia for data availability and massively lower their costs. What is the impact of this from a market-structure standpoint? Do we see lots of new apps launching?
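As an aside, Mustafa's thought experiment above, a chain that enforces no validity rules, where each client replays the ordered, fully published transaction list and simply ignores invalid transactions, can be sketched in a few lines of Python. This is purely illustrative (all names and types here are invented), not Celestia's or any rollup's actual implementation:

```python
# Illustrative sketch: a "lazy" chain stores only an ordered, fully
# available list of transactions. Clients derive state themselves.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    sender: str
    recipient: str
    amount: int

def derive_balances(ordered_txs, genesis):
    """Replay the full ordered transaction list client-side.

    Ordering matters: only the FIRST transaction spending given funds
    counts. Data availability matters: if any transaction could be
    withheld, we could not rule out an earlier conflicting spend.
    """
    balances = dict(genesis)
    for tx in ordered_txs:
        if balances.get(tx.sender, 0) >= tx.amount:  # valid: apply it
            balances[tx.sender] -= tx.amount
            balances[tx.recipient] = balances.get(tx.recipient, 0) + tx.amount
        # invalid (e.g. a double spend): simply ignored, not "rejected"
    return balances

txs = [
    Tx("alice", "bob", 10),    # first spend of Alice's 10 coins: applied
    Tx("alice", "carol", 10),  # conflicting second spend: ignored
]
print(derive_balances(txs, {"alice": 10}))
# → {'alice': 0, 'bob': 10}
```

Note that the conflicting transaction is allowed on chain; every honest client converges on the same balances anyway, because all clients see the same complete, ordered list. That is exactly why the base layer only needs ordering plus data availability.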
Are there business models enabled by a lower cost of transactions, models that were precluded before? What are the first-order implications of this?

The most immediate, obvious implication is that cheaper data availability leads to cheaper transaction fees, and I do think a lot of usage of web3 applications has been bottlenecked by transaction fees being too high. If we had cheap transaction fees, I honestly, genuinely think we would see a lot more applications deployed on web3, and not just DeFi applications. For example, and this will be discussed in the gaming track tomorrow, there are various on-chain games that are only practical with high data throughput. Or even for financial applications: the whole original purpose of Bitcoin was to use it as a peer-to-peer cash system, but we don't have a single widely used blockchain that is usable as a peer-to-peer cash system, because of the transaction fees. So if we have scalable DA and cheap transaction fees, blockchains can actually serve the original purpose of Bitcoin, which is peer-to-peer cash, not just a store of value or an investment, something people hold because they think it will go up, or trade, and so on and so forth.

The other effect is that, beyond more applications becoming practical as a result of cheaper transaction fees, with scalable DA and a modular blockchain stack we'll see a lot more experimentation with different execution environments, things people wanted to do before that were not possible. Some examples: various projects, Manta for example, have modified the EVM to add certain ZK- or privacy-friendly opcodes that are not possible on the standard EVM. Or Curio, for example, have modified the EVM to create a game engine with a 0.5-second tick, to be able to run a real-time strategy game, which wouldn't be possible on a standard EVM. Historically it was impossible to do that without deploying a new layer 1 if you wanted to deploy a new execution environment, which is a lot of overhead. To use a web2 analogy, imagine if you had to set up a physical server somewhere just to experiment with a new programming language or a new database. Today that's not the case, because you can just spin up a virtual server in the cloud on AWS or DigitalOcean. So you can think of rollups on a scalable DA layer as virtual blockchains that enable people to experiment with different execution environments, unlocking things that fundamentally weren't possible before.

I have a question for you, Mustafa, something I found myself wondering about as you were discussing that, and it's a little fuzzy to me, so take it as a hypothetical. We now have a proliferation of execution environments, not just the EVM; many data availability layers, where it used to be just Ethereum DA but now we've got EigenDA, Celestia, Avail, and other providers; and different choices for settlement and consensus as well. It's very possible, in the near future, that you could be using an application built on the Solana virtual machine, using Celestia as DA, but settling to Ethereum. The question is: what chain are you on at that point, and where is the lock-in for these different environments?

Yeah. Basically, the whole point of what we're engineering is to shift away from a world where you have this tribalistic crypto environment, this chain versus that chain, you know, Ethereum versus Solana versus
Avalanche, which is a very zero-sum mindset. Long term it is important for chains to have a social moat, but long term, for crypto to go mainstream, users ultimately care about usable products, not necessarily which chain they're on, as long as the chain has the basic, necessary decentralization and security properties. Right now people frame things as this app on this chain: you swap on Polygon, you swap on Avalanche C-Chain. In the future people won't be thinking in those kinds of terms; they'll think, okay, this is the app, and it uses this stack underneath. When you interact with a website today, you don't necessarily care much about what stack it's using underneath; in many cases you don't even know. When you go on Google, you don't know whether it's running Linux or FreeBSD, and you don't need to know, as long as it provides the properties you need.

Yeah. I want to do a little thought exercise here and imagine that you and I are sitting down for this fireside five years from now, so dust off the old crystal ball, and I'll ask you how things have played out during that time: what does the market structure look like, and what are some of the big changes that might not be obvious today? We've heard from a lot of great projects these past couple of days, new approaches to scaling infrastructure, new types of app-specific blockchains. What does the market structure look like for the modular stack? Are we going to live in a world of thousands of blockchains? How many different general-purpose rollups do we really need? Are these going to sit across two or three trust environments? Is the multi-chain future really going to play out? I can never just ask one question at a time here.

Yeah. From an engineering perspective, when I started thinking about this last year, what I envisioned happening within about two years, and it's already materializing today, was a world where you can go to the docs of a rollup-as-a-service provider, click, and deploy a rollup chain in two seconds; in fact, a world where deploying a rollup chain for your application is easier and more convenient than deploying a smart contract. So there is a potential world with millions of interconnected chains that share security, similar to how there are millions of web applications running today. We've seen a very similar evolution in web2. Ten or fifteen years ago, if you wanted to create a new website or web application, you would use an existing hosted service provider, maybe Squarespace or Blogspot or WordPress, but that's very limiting. Today, if you're an application developer and you want to deploy a new web application, it's easier and more convenient to simply deploy a virtual machine in the cloud, on AWS or DigitalOcean, than to use a shared hosting provider. I think you'll see a very similar evolution in web3, where shared web-hosting providers are analogous to shared smart-contract platforms. It might not seem obvious now, but in the future the obvious way to develop new applications will not be to deploy a smart contract into a shared smart-contract environment that everyone shares, but to deploy a new rollup chain, similar to how on web2 you deploy a new virtual machine per application.

So, at the risk of asking a potentially spicy question here: in this envisioned future state of yours, where we might have millions of rollups, I can't help
but notice that many of the rollups today have multi-billion-dollar valuations. A couple million rollups at a couple billion dollars each, and we're starting to talk about some real numbers. So how do you see that shaking out? Is there consolidation in the future here? How many of these general-purpose ones do we really need?

Yeah, millions of rollups won't all have multi-billion-dollar valuations; they won't all be massive rollups. They might mostly be small applications, just as there are many different websites offering small services or small applications. You might imagine DAOs having their own rollup chains, similar to how organizations have their own Discord servers: they don't share the same Discord server as everyone else, they have their own namespace. So you can imagine that DAOs or projects might have their own rollup chains that interoperate with other rollup chains.

Understood. What about, and maybe I'm reading a little too much into this and other folks think differently, this one theme: five years into bridges, I think we can all admit it hasn't been as smooth as we thought it might be at one point, and some of the newer intent-based architectures point at this idea that maybe we're not going to figure out bridges as easily as we thought, and there will be a couple of different trust zones or economic zones. It's very easy for me to imagine a world where apps pick, say, a particular EVM or execution environment and Celestia for DA, but how much do you actually see assets and data being interoperable between these different base chains?

Yeah, I think interoperability is absolutely critical, and that's the reason we're building Celestia as a shared security layer that rollups can use to interoperate without fragmenting their security. In the Cosmos ecosystem, we want to replace the committee-based IBC bridges, which have a heterogeneous security model that fragments security, with a more homogeneous security model where rollups share security and use fraud and ZK proofs, not committees. It is the case that bridges today are very janky and don't have a good user experience, but I honestly think there are practical solutions to all the fundamental problems; a lot of it is just an engineering slog that has to happen, and a lot of missing pieces of infrastructure that just need to get built. For example, take the fact that optimistic rollups have a seven-day challenge period to withdraw from the rollup to the L1 or to another chain. That's solvable with atomic swaps: if you do an atomic swap to exchange tokens, it's instantaneous, and that's what projects like Codex do, for example. And there are problems like having to hold multiple fee tokens to bridge across chains; that's also very solvable, and Skip is solving those problems. Fundamentally, I think we're just very early. It's comparable to using the internet in the 90s: a very janky experience. You couldn't stream video, for example; you had to manually click connect and do a dial-up connection that took a minute to establish. So I think it's just a matter of being early, and the challenges will be solved.

Yeah, I tend to agree with that, and that's kind of
a nice segue into the next line of questioning. I don't know, do we have infrastructure to take audience questions for Mustafa? I want to leave maybe five minutes at the end for that, so raise your hand if you have a question, or just interrupt; we can make it a true fireside. One thing I'd love to get your opinion on, Mustafa, is how we eventually bring more app builders into the space. The thing we all want to see, especially in this next cycle, is a couple of apps that really find product-market fit and bring millions of people on chain; I think that's the goal. And one theme that's come up many times is a chicken-and-egg problem between apps and infrastructure: we need good infrastructure to build good apps, but the infrastructure also has to serve the apps that exist, and it's a question of which comes first. How do you think about that problem, and then maybe we can talk about how to bring more builders into the space?

Yeah, it's definitely a complaint people bring up, and it's a valid observation that there seem to be a lot more infrastructure projects right now than actual applications. That might seem wrong, but I actually think it's fine in the interim, because fundamentally I think the reason there aren't a lot of app developers is that a lot of the things that seem easy to do in web3 are actually not possible yet, because of scalability or execution-environment challenges. Take, for example, the fact that people want to build the Uber of web3; that's the stereotypical classic thing people say: why isn't there an Uber on web3? Theoretically it's possible; it's just that all the tools are very janky. It's impractical to deploy it on the Ethereum L1; no one's going to pay $20 per transaction. You need tooling to share location data in a decentralized way, and that tooling is being built; there's peer-to-peer tooling for exchanging messages in a decentralized way. So I think it actually is a good thing that there are a lot of infrastructure projects out there, because they make the developer experience less janky and make it practical to build the things that seem obvious in hindsight, like an Uber for web3, but haven't been built.

Yeah, I completely agree with that sentiment. One thing I'd also love to ask you, frankly, as the founder and developer of the Celestia ecosystem: what is the right way to do BD, from your perspective? There are a couple of different approaches you could take: the bottoms-up ecosystem approach that several blockchain ecosystems have employed successfully, and the somewhat more top-down approach of actively incentivizing builders and apps to come onto the platform. How do you think about building an ecosystem and doing BD within the context of web3?

Yeah, I think it really depends on what you're building. If you're building something fundamentally new, if you're creating a new category and you're the only product that provides it, then you probably don't need to do as much direct BD, and a community will naturally form around it. But if you're doing something more competitive with things in an existing category, then your main differentiator will probably be having a better BD team. So I really think it depends on the product, and I see rollup-as-a-service
providers doing a lot of BD to attract rollup developers, and so on and so forth. But as for us: we do have a large community of people naturally building on Celestia, and we also have BD team members trying to help people and explain the technology. The way I see it, we're trying to create a distributed community, and we're trying to bootstrap a modular stack. We're very happy to have competing DA layers come and talk at the summit, even though we're co-organizing it, because ultimately the modular stack is only credible if developers actually have a free choice in the stack. People are only going to build if there's a lot of choice, so that they know they're not locked into a specific ecosystem.

Understood. It's been really inspiring to watch what Celestia has done and what you've achieved over a relatively short amount of time. And one of the things there's no shortage of love for in crypto is a few drops of alpha, so what can folks expect from Celestia? Not revealing anything you can't, obviously, but what should people be looking out for over the coming months and the next year or so?

Yeah, the next milestone from here is mainnet, currently planned for the fall of this year, and that's what we're heads-down shipping and working on right now. We're trying to ship mainnet as soon as possible, because there are a lot of people in the ecosystem who really need a DA layer, a scalable DA layer, and nothing exists right now. Ethereum will have EIP-4844, and there's Polygon Avail and so on and so forth, but there's literally no DA layer right now that's actually usable as a DA layer providing more than about 10 kilobytes of throughput. So that's a thing we really need to unblock people on. We also have a lot of people joining the ecosystem and making announcements, integrating different parts of the stack. Recently we had an integration with the OP Stack, where we provided a data availability interface for the OP Stack, so people can deploy OP Stack rollups on Celestia, using Celestia as DA and Ethereum as a settlement layer, and I think you'll be seeing a lot more integrations like that with other stacks.

Nice. I've got my last question here, and then we can open it up to questions from the audience. I'm always interested to hear from leaders in this space: what are the two things you find yourself thinking about the most? Or maybe it's a worry when you're falling asleep at night, the "man, I really just want to make sure we get this done". What are those two things for you at the current moment?

Well, from a very low-level perspective, I'm very active on the engineering side of things, trying to make performance improvements where necessary, and I try to follow how people are using Celestia on the testnets we've run, to see what the pain points and bottlenecks are. But from a more high-level perspective, the one thing I think about is: to what extent, in the long term, as crypto goes mainstream, do real-world users actually care how decentralized a blockchain is, or to what extent a blockchain conforms to the core values of crypto, which are decentralization, censorship resistance, and verifiability? Because fundamentally, from a user's perspective, if someone just created a centralized blockchain with, say, two validators, the user experience is very similar. So one theory I have is that the market will naturally
just evolve to centralized Solutions if users don't care but so far we haven't seen that to too much of an extent like we haven't seen like a an overly centralized L1 with like a proof of 10 10 equivalent of authority that'll be a very low hanging fruit to build that could that could have like a billion DPS because it's not decentralized um so I think about like how could we like how um communities or blockchains have social modes and users do care users care about using applications that they know or they they believe are actually decentralized and sensitive resistance but I think about like to what extent that will hold true in the future as because like we're still we're still very early like you know crypto hasn't really a management option but I wonder once we do reach mainstream adoption to if our if the ideals of crypto will ever become significantly diluted it's actually um and real I've actually asked myself a similar question I mean I one thing that I've the way I've sort of phrased it to myself internally is do you think there needs to be some sort of Overton window shift right in order for people to adapt like you can't so for instance privacy I feel I feel like this comes up often in privacy discussions I mean in web 2 right I mean the one thing that's been proven after you know 20 some odd years is that users like don't really care that much about their privacy and most the vast vast majority will not take even very basic steps to protect it and how are you going to build privacy related infrastructure for people that simply do not demand or seem to evaluate with their actions almost at all so do we need some kind of Overton window shift from a societal standpoint for some of these Market structures to take place the way that we want or what do you think about that yeah I mean like we've seen web 2 evolve in a very similar way like um you know like the early days of the web it was much more decentralized people had their own blogs um they weren't 
sharing data with big tech. Now most people are just using Facebook, sharing their data with everyone. But in web2, unfortunately, humanity has been very reactive rather than proactive. Before around 2011, most websites were just using plain HTTP; you would log into Facebook over HTTP. In fact, something I saw back in my hacking days was the Tunisian government capturing people's Facebook logins, because Facebook did not have HTTPS enabled or enforced on its login page. The fundamental thing that changed, and drove the big push towards encryption, encrypted messaging apps, HTTPS and more privacy, was when Snowden leaked the NSA files in 2013. That was a big moment; there was a massive difference before and after. Before, basically nothing was encrypted: nobody cared about HTTPS, and the messaging apps were not encrypted. After that, everything started to become encrypted by default: WhatsApp is now encrypted, and everything uses HTTPS. Since then there have been various scandals that have pushed web2 privacy to the forefront of many people's minds, like the Cambridge Analytica scandal; Facebook has a huge perception problem around privacy. Unfortunately, I think it could end up similarly in web3. Right now people don't care much about financial privacy, though they probably do care about decentralization to some extent. But there will probably be a moment in the future where people learn that it actually is very important, because there will be a kind of Pearl Harbor of financial transaction privacy; maybe someone doing something very bad with all these on-chain transactions. Chainalysis, for example, is one of the companies de-anonymizing people based on their on-chain activity.

Yeah, I tend to agree with that. Guys, we're in the final minutes here, and I want to open it up to the audience to see if they have anything they want to ask Mustafa.

So there are a lot of DA solutions that are going to be live in the next year or two, which is really exciting. What do you think are the known unknowns around having these things live and scaling them? What might potentially break, and what are the open questions around the design space here?

Yeah, I think there are a few open questions. One of the biggest is: how do we get people to run light nodes? The only way to securely scale data availability is through light nodes that do data availability sampling, and the more light nodes you have, the bigger the block size you can support. But historically, over the past decade, we've had a model of web3 where people just interact with centralized RPC endpoints, which kind of defeats the whole point of web3, because that's a very web2 model; you might as well just use web2, because you're just interacting with a centralized database and trusting it as a trusted third party. So I think we need to think about ways to get more people to run light nodes. I know Mina is doing some great work on this: they have a browser version of their light node, so you can actually run it in your browser. I think that's a good first step. Then we need to figure out how to integrate these light nodes into wallets by default, so that, for example, instead of MetaMask connecting to Infura, MetaMask would run an in-browser light node in the background and connect directly to the Ethereum network. I think that's a very important known unknown that people should think about: how do we incentivize and get more people running wallets with light
nodes.

Thanks for the talk, it was really enlightening to hear. I think one difference between the last bear market and this one is the fragmentation of the ecosystem: we now have so many different layer twos going on that historians might call this the L2 wars. So I was wondering: if someone is creating a dapp, how do you choose the right L2 to build on, and what happens if the L2 you've built on fails? You might be able to retrieve account balances, but can you retrieve transaction history, reputation, all of those other aspects?

Good question. There are a lot of components in the modular stack and a lot of L2s, but I would actually say it's less fragmented than the previous bull markets, because in the previous bull markets people were just building alternative layer one networks that did not collaborate with each other. At least with layer twos, they can all coordinate with each other in the same stack. Part of what we're trying to achieve with the modular stack is that you can replace components in the stack: if you deploy something on an Arbitrum rollup, you can swap that out for an Optimism rollup, for example, so that you're not locked into a specific vendor. That's a very important property of the modular stack. I think that's most practical when you take the example of rollup app chains: if you want to develop a rollup app chain, say using the OP Stack, and you choose Celestia as the DA layer, then if Celestia fails you can replace Celestia with a different DA layer very easily, because there's a common DA interface. So honestly, I wouldn't necessarily say fragmentation; I would say there's more freedom of choice. Sometimes that's not necessarily always a good thing; sometimes there's too much choice, and it's very difficult for developers to compare the trade-offs. But that's also an open problem: how do we get developers to understand the trade-offs between these different components and execution environments in the stack?

Thank you, great talk, Mustafa and Mike, and thank you so much for all the details. I love that you touched on the business development area, and I think it's great that you have a dedicated team inside the protocol. Probably many projects, at least at the start, may not have this. So I was wondering: what are some learnings that you see in terms of what works and what doesn't work in business development, or maybe some advice that you would give to builders? And what's your strategy? You were saying that would depend on the product itself, but in the particular case of Celestia, as you get closer and closer to mainnet, what strategies for growing the ecosystem would you want to explore?

Yeah, I think our overall long-term goal is to bootstrap a self-sustaining community; ideally, we're trying to create a new category. I think the success of any protocol should not depend on any kind of centralized BD team, otherwise it's not really a decentralized protocol. That's why we've tried to create a modular community that can have a kind of moat, or network effect. If you think about other pieces of infrastructure with network effects, take AWS for example: AWS has a massive community of developers, and you don't need a centralized BD team for AWS, because it has a network effect where lots of APIs integrate with AWS, and so it's very easy to use AWS because it has wide community support. For Celestia and other DA layers there's a very similar process, where we want to make sure that Celestia as a DA layer is supported by as many DA interfaces and rollup frameworks as possible. We started with the OP Stack integration, and we want the community to develop more integrations, so that Celestia will be the default DA option for these rollup stacks, and eventually you have a community that bootstraps itself, and you don't necessarily need a centralized BD team to push it forward, just like Ethereum doesn't have a centralized BD team, for example.

Guys, I think unfortunately that is all the time that we have. Everyone give it up for Mustafa. First of all, excellent event, and thank you for the chat, this was really great.
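Mustafa's point that security scales with the number of sampling light nodes has a simple probabilistic core. Below is a minimal sketch, not Celestia's actual implementation: the function name and the 25% withholding threshold are illustrative assumptions based on the commonly described 2D Reed-Solomon erasure-coding model, in which an attacker who withholds data must make at least roughly a quarter of the extended shares unavailable or the block can simply be reconstructed.

```python
# Sketch: how a single light node's confidence in data availability grows
# with the number of random samples it takes.
#
# Simplified model assumptions:
#   - 2D Reed-Solomon extension, so unavailability requires withholding at
#     least ~25% of the extended shares (illustrative threshold);
#   - each sample queries one independent, uniformly random share.

def das_confidence(samples: int, withheld_fraction: float = 0.25) -> float:
    """Probability that at least one of `samples` random queries hits a
    withheld share, i.e. that the node detects unavailability."""
    return 1.0 - (1.0 - withheld_fraction) ** samples

if __name__ == "__main__":
    for s in (1, 8, 16, 30):
        print(f"{s:>2} samples -> confidence {das_confidence(s):.6f}")
```

Because each honest sampler's queries are independent, adding more light nodes multiplies the attacker's chance of being caught, which is the intuition behind "more light nodes allow a bigger block size" at the same security level.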