Hello everyone, welcome to our Twitter Space today on all things ZKML. We're waiting on a couple of speakers to join us, so we're going to get started in just a couple of minutes.

All right, looks like we've got everyone here, so again, welcome to our conversation today on all things ZKML. I'm Stephen Smith from the Worldcoin project, and I'll be hosting and facilitating the conversation today. I want to make this very casual and kind of open. ZKML is a very interesting and timely topic; of course there's a lot of interest in it, you see it headlined at a lot of conferences, there are a lot of sessions on it, et cetera. So today we've got a great set of participants from some of the leading projects in the space, and we're very excited about it. I think what we should do first is just kick off with a quick round of intros and let everyone tell us a bit about yourself and what your company is focused on right now. Jason, I believe you're on the call. Jason from Zkonduit, you want to kick us off? Give us a quick intro and tell us what your company is focused on.

Yeah, thank you, thanks for inviting me. I'm Jason, I was a math professor and built various other technologies. Our company is Zkonduit, and what we're focused on is building a compiler that lets people turn arbitrary deep learning or machine learning models, AI models, into zero-knowledge proofs. So you can take an ONNX file, which is a serialization format for an ML model, and you can hit it with our compiler to produce a prover and a verifier that's verifiable on chain.

Very nice. Let's see, Ryan and Daniel, are you on?

Yes sir, hello Stephen, how are you doing? This is Daniel.

Hey, good to hear you again.

Yeah, hello Stephen, this is Ryan. We're both here, Stephen; we wanted to make sure the whole party was here.

Absolutely, sounds good, I like it.

Yeah, I guess a brief intro. Hi everybody, we're Modulus Labs, or Daniel and Ryan specifically, but you can call us Modulus, and we are working on a custom zero-knowledge prover built from the ground up to support AI operations. It's all towards the purpose of supporting large and sophisticated AI models to be run in this accountable way, and I'm sure we're going to dive into what all that means, but super excited to be here. Very cool to be surrounded by so many friends.

Very nice. So we also have DC Builder from Worldcoin. DC, you want to give us a quick intro and tell us what you're focused on?

Yep, hello, I'm DC and I'm a research engineer at Worldcoin on the protocol team, and yeah, I've recently been researching ZKML, I've given a few talks about it, and I'm really excited to have this discussion with you all today.

All right, very good. As I mentioned, I'm Stephen, I'm also from Worldcoin. I work in the protocol area and across the company on a few other initiatives, and I do a little bit of research here and there, mostly just anything they ask me to do for the most part. Just one bit of housekeeping to let everyone know: we're going to try to reserve a few minutes at the end for audience questions. We've got a really good crowd, it looks like, so if you do have any questions, just ask those in the Twitter thread and we'll do our best to get to those at the end. I'm most likely going to have quite a few myself as we go, so I'll try to make it a little bit interactive, but we'll give plenty of time for our guests to talk.
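As a concrete picture of the pipeline Jason describes, here is a minimal sketch that serializes a tiny PyTorch model to ONNX, the input format he mentions for the compiler. The downstream compile, prove, and verify steps are left as commented placeholders rather than any specific tool's actual commands, since those aren't covered in the conversation.

```python
# Minimal sketch: serialize a tiny model to ONNX, the format a ZKML compiler
# consumes. The steps at the bottom are hypothetical placeholders, not any
# specific tool's real API.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(28 * 28, 64),
            nn.ReLU(),
            nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
dummy_input = torch.randn(1, 28 * 28)

# Export to ONNX; this file is what the compiler would take as input.
torch.onnx.export(model, dummy_input, "tiny_classifier.onnx")

# Hypothetical downstream steps (tool-specific, not shown here):
#   1. compile tiny_classifier.onnx into an arithmetic circuit
#   2. run a setup to produce a proving key and a verification key
#   3. prove one inference on a concrete input, producing a succinct proof
#   4. check that proof with an on-chain verifier contract
```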
So look, I think it would be really good to start off with a little bit of a level set on what ZKML is. Most people that are in the space and will watch or listen to the recording probably have heard of ZK or zero-knowledge tech, and most have seen ML or machine learning, but they may not really know what ZKML actually is or what it's targeted at. So DC, you want to kick us off? Well, definitely anyone can chime in, but DC, you want to kick us off and just take a few minutes and give us an overview of what ZKML is?

Yeah. ZKML stands for zero-knowledge machine learning, and it's essentially the creation of zero-knowledge proofs of the computations that happen inside of a machine learning model. So, to boil those two concepts down to their constituents: zero knowledge stands for, or stems from, zero-knowledge cryptography. Zero-knowledge cryptography is an area of cryptography that essentially allows us to prove computation, and it also allows us to hide parts of the computation from a verifier. The sort of standard definition is that we're able to create a proof, to a verifier, that some computation happened correctly according to some definition of that operation, and we're essentially able to verify that operation in a lot fewer computational steps than it takes to actually perform said computation. So for example, maybe many of you have heard of zero-knowledge rollups, which are these scalability solutions on Ethereum; think StarkNet or Scroll or zkSync. What these are doing, essentially, is proving that these blockchains executed transactions correctly, and then a verifier can verify that these indeed happened correctly, so the verifier can update the current state, or its current representation of the state, without having to compute all those transactions themselves, thus saving a lot on execution costs. And so, in the context of machine learning, we're able to prove to a verifier that the prover ran some machine learning model on some input and created some output; we can prove this computation end to end, and we can also hide parts of this computation. That's the general gist or definition of what ZKML is.

Okay, sounds good. Jason, Ryan, Daniel, anything else to add, just on the general overview of ZKML and what it means?

One thing that I find really helpful as a way to understand what it's doing: if you're familiar with a digital signature, like the signature you use to send a transaction in Bitcoin or Ethereum, a ZK proof is basically what you get if you take the signature function and replace it with an arbitrary function, so it's like programmable signatures. And the trust model is very, very similar. That is, when I get a signature that someone signed to send a coin, I can believe that they really signed it: they knew a secret, they knew a message, and they produced a public output. The trust that we get from a ZKML proof has similar security properties.

Very good.
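DC's point that a verifier can check a claim in far fewer steps than redoing the computation is easiest to see with a small, non-zero-knowledge example. The sketch below (standard-library Python only) checks membership of one leaf against a Merkle root: the verifier does roughly log2(n) hashes instead of re-hashing all n leaves. Nothing here is hidden, so it is only the succinct-verification half of the story, but it is the same idea rollup-style systems lean on when they commit to large state.

```python
# Toy illustration of "verify with far less work than recompute": a Merkle
# membership proof. NOT zero knowledge (nothing is hidden), but the verifier
# does ~log2(n) hashes instead of re-hashing all n leaves.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return the list of levels: level[0] = hashed leaves, last level = [root]."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def membership_proof(levels, index):
    """Collect sibling hashes from leaf to root for the leaf at `index`."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        sibling = index ^ 1              # the neighbour in this pair
        proof.append((index % 2, level[sibling]))
        index //= 2
    return proof

def verify_membership(root, leaf, proof):
    node = h(leaf)
    for is_right, sibling in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

leaves = [f"account-{i}".encode() for i in range(1000)]
levels = build_tree(leaves)
root = levels[-1][0]
proof = membership_proof(levels, 123)
# The verifier only needs the root, the claimed leaf, and ~10 sibling hashes.
assert verify_membership(root, b"account-123", proof)
```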
Yeah, I just wanted to add and say, for folks who are maybe a little less familiar with the machine learning paradigm: the whole idea here is that you can think of machine learning models as almost like this factory-type thing, where your inputs are, for example, images of some sort that you might want to classify, maybe cats and dogs, and the output of this factory should be basically a zero or a one based on whether it is a cat or a dog. Inside this factory you have a bunch of machines, each of which is operating over these images, and basically the state of the machinery is the architecture of the model, so which machines come first and what they're doing to this image; and then the actual knobs and things that you're tuning inside the machines, those happen to be the parameters of the model. These are the things which are being learned when you train the actual model, or when you're building this factory from the ground up. So in zero-knowledge machine learning, one of the cool things you get is that you can effectively validate that a particular factory was used, with all the machinery that everyone cares about and agrees on, and that some set of consistent parameters was used, some sort of consistent tuning for these machines, without revealing the actual ways the machines were tuned, which usually turns out to be the IP that people care about. And that's where the zero-knowledge, or hiding, property comes from.

Oh, very good. To zoom out, because I don't have a strong background in machine learning: is it a fair summary to say that with machine learning you have a model that has to be trained, and ZKML allows those models to be trained on what people might consider sensitive or personal data, could be healthcare-related data for example? Is that, again at a super high level, in a nutshell, what ZKML helps with?

Yeah, that's a great question, Stephen. ZKML can refer to basically ZK inference or ZK training. In ZK inference, the statement that you're proving is that some machine learning model which has already been trained, and this part you need to either trust or test, is actually the one which is being run; whereas in ZK training, it's more that some machine learning model is being trained using a specific algorithm over a specific dataset. So if you do care about the fact that, for example, Midjourney's models are not being trained over Getty Images or things like that, then you would need something akin to ZK training; but in the other case, it's not actually about the sensitivity of the data, it's more about the fact that a consistent model is being run.

Okay, makes sense. Yeah, thanks for that. Again, the split between inference and training: I think we often lump those together under the one umbrella of ZKML, so saying ZK training, I like how you phrased that, it's super clear what you're referring to in that case. All right, look, so we've got a pretty good level set, and hopefully that was informative to everyone, I know it was for me. So with that as the background, I want to shift gears a little bit, and it's really a question for all, but I'll start with the Modulus Labs team: what are some of the specific use cases for ZKML, and why is it such a hot research and development area at the moment?

Yeah, we can answer those questions in reverse, I think. I guess we think it's such a hot area because these are very exciting letters to put together. ZKML, this is a lot of letters.

Yeah, exceptional, really strong letters. But I guess, earnestly, ZK, and maybe Jason can add some color here too, but ZK is this holy grail of cryptography, right? The security standards are absolutely, perfectly preserved; it can be verified just forever, without any additional need for interaction from the prover. That's all very exciting on the cryptography side. And of course, on the AI or compute side, this is the infinite expressivity, right, this perfectly creative cauldron of the compute revolution that we're experiencing now. So getting to marry those together is very exciting. I guess, briefly on use cases, and there's a lot to dive into here of course, we can see this paradigm applied to all kinds of things that are already very exciting in the web3 context, and potentially even more beyond that.
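To make the inference-versus-training split above concrete, here is roughly the statement each kind of proof makes, written informally; the commitment notation is illustrative and not tied to any particular proof system mentioned in the conversation. In the inference case the weights W stay hidden behind a public commitment c_W (the "factory tuning" from the analogy), while in the training case the hidden witness is the dataset D.

```latex
% ZK inference: "I know weights W matching the public commitment c_W,
% and the public output y really is that model applied to the input x."
\exists\, W \;:\; \mathsf{commit}(W) = c_W \;\wedge\; f_W(x) = y

% ZK training: "I know a dataset D matching the public commitment c_D,
% and the published weights W are what the agreed training algorithm
% produces on D with the agreed hyperparameters."
\exists\, D \;:\; \mathsf{commit}(D) = c_D \;\wedge\; \mathsf{Train}(D,\ \mathrm{hyperparams}) = W
```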
And so I guess for Modulus Labs, we really started by building what we thought were pretty fun, but maybe silly, proofs of concept across DeFi, gaming, and NFTs, trying to apply this idea of accountable machine intelligence to these existing paradigms that are very exciting in crypto. But I don't know, we can dive deeper here; maybe we should start with the high-level overview first. Jason, I don't know if you want to add something.

Yeah, sure, let me jump in with something about the high-level overview, a little bit of the bull case. I think ZK is sometimes thought of as one of these, as you said, big crypto technologies that can steamroll over a lot of other technologies. So a lot of complex constructions that we can build now using signatures, using other cryptographic primitives, we're probably just going to end up ripping them out and replacing them with more general-purpose zero-knowledge computation. And we're seeing the same thing happen with AI and machine learning, right, where people suddenly are ripping out complex solutions and complex business logic and just replacing it with a language model or some other kind of capable model. The bull case here starts with the alphabet soup, right: we can take these two steamroller technologies and combine them. Of course there are also limitations to that, but those are more reasons people are so excited about it.

Cool, very good. Yeah, it is a super cool acronym, by the way; it'd be almost impossible to form it into a word, but ZKML really resonates. So what about use cases, like real-world, concrete examples of how this is being used, or potentially could be used, medium to near term in the future?

Let me jump in. One of the ways to think about the use cases, I think, is to realize that in an old-school organization, a web2 organization, a software company or any other company, a lot of machine learning, a lot of AI, is getting baked into the business processes, right? You can't run Uber without ML, you can't run a recommendation engine, you can't run Amazon without ML; increasingly, you can't even run a government service without ML. And with this sort of web3 mission, we're trying to convert these organizations into organizations that can run on chain and can be on chain. So the big picture is that we're going to need a way for these on-chain organizations to run machine learning models, to understand the outcome of soft judgment. That's the big picture, and then the applications are things like games, as I mentioned; identity, which of course is close to the heart of Worldcoin; even finance, and other things.

Okay, how about you, Daniel or Ryan? What are some top use cases that come to mind when you think about ZKML?

Yeah, well, we can start close to home. I see one of the listeners is Mosaic; I think they're using AI algorithms to try and maximize yield for their customers. Potentially folks like them, and other DeFi protocols who want to implement algorithms into how they calculate rebalancing their pools, implementing more complex strategies, may want to show their customers that there's absolutely no way for them to tamper with the result of those AI decisions. And in those contexts it might be very beneficial to implement almost a ZK badge of approval, or a ZK promise, right? In that earlier factory analogy, it's like the operator of the factory throws away the keys, right? They can't really mess with the internal workings of that process anymore.
So that's on the DeFi side of things, and I think we're just starting to see more advanced algorithms being brought into that domain right now. And of course there are also use cases in gaming, right? Not to shill our own project too much, but we built a game recently that pit an AI model against a human-playing team, the world in this case, and we were able to implement a betting mechanism on top that relied on the security guarantees of ZK, basically saying that the AI model powering the game, the agent here, could in no way be secretly swapped out or surrendered to the whims of the operator, who might be biased to sway the success of any given game one way or another. And so, you know, we're getting to work with really cool folks like AI Arena in leveling up the kind of guarantees that they can make to their customers on the AI outputs that might be important in their in-game economies. And of course there's some really exciting stuff happening in the identity space, with being able to operate high-integrity, AI-powered analyses over user biometric information without them ever losing custody of that process or their own data. But maybe DC can add some more color to that, or yourself, Stephen.

Yeah, so DeFi is great, gaming is great, identity of course is great; I think there are just some incredible use cases. I look at healthcare, which has the potential to affect so many people and improve their lives. One example: probably some of the most private data people have is their healthcare-related data or genetic data, but there's a lot that can be learned and inferred from that with a really intelligent model, a model for disease prediction, for example; things of that nature, where it'd be great to run models over that sensitive data to be able to benefit everyone. It's almost, in that case, web3 becoming an on-ramp for what are traditionally web2-type things, or even web1-type things, or maybe not even web-centric things. So that's super exciting to me. DC, anything to add, like a use case we didn't mention that resonates with you?

I think we've talked over most of the prominent use cases, the more obvious ones. For me, a good way to think of ZKML, or its capabilities with respect to blockchains, is that blockchains are inherently really computationally constrained, and therefore they're not really powerful in terms of computational power; ML algorithms are quite the opposite, they're extremely intensive in terms of how much computational power you need, even just for inference, and it's still quite hard to scale these services. So something that ZKML enables, in my mind, is to bring any of these machine learning computations on chain, and then prove the output of these computations on chain without having to run them on chain, which is the overarching theme we've been talking about with all these use cases. As more and more ML computations become relevant or useful in different contexts, whether it's identity, games, or financial applications, there will be an inherent need to bring them on chain, and thus something like proving these results using zero-knowledge machine learning might become relevant. So that's an encapsulating framework that I use to think about it.
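A minimal sketch of the pattern DC describes: the chain never runs the model, it only checks a proof against a committed model and the claimed input and output. The "contract" below is modeled as a plain Python class, and the SNARK check itself is an explicit placeholder where a real deployment would call a verifier generated for the target chain; all names here are illustrative assumptions, not any project's actual API.

```python
# Sketch of the consumer pattern: the "contract" never executes the model, it
# only checks a proof against a committed model and the claimed input/output.
# snark_verify is a placeholder for a real verifier (e.g. a pairing check in
# an EVM contract); it is not a real API.
import hashlib
from dataclasses import dataclass

def commitment(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

@dataclass
class Proof:
    blob: bytes   # opaque bytes produced off-chain by the prover

def snark_verify(verification_key: bytes, public_inputs: list[bytes], proof: Proof) -> bool:
    # Placeholder: a real verifier performs a cheap cryptographic check here.
    # Returns False so the sketch fails closed if it is actually run.
    return False

class InferenceConsumer:
    """Stand-in for an on-chain contract consuming proven model outputs."""

    def __init__(self, verification_key: bytes, model_commitment: bytes):
        self.vk = verification_key
        self.model_commitment = model_commitment   # commits to the agreed weights
        self.accepted: dict[bytes, bytes] = {}

    def submit(self, input_hash: bytes, output: bytes, proof: Proof) -> bool:
        public_inputs = [self.model_commitment, input_hash, commitment(output)]
        if not snark_verify(self.vk, public_inputs, proof):
            return False                           # reject unproven results
        self.accepted[input_hash] = output         # cheap bookkeeping only
        return True
```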
Cool, very good, excellent. Yeah, thanks for all that, great use cases, pretty exciting. Quick reminder: if you do have any questions as we continue the conversation, be sure to drop those in the Twitter thread, we'd be happy to answer them, and we should have a few minutes at the end. So I want to shift gears a bit again. We talked about what ZKML is and some of the use cases that we see. Based on those use cases, what's the current state of development? What can we do to address some of those use cases, and how close are we to being able to do that, or are we already able to in some cases? I'll start with Jason; let's start with you on this one.

Yeah, so I would say that the state of the art is moving extremely fast. To give you a sense, for us, the first kind of proof we did nine months ago took more than 100 GB and, I don't know, half an hour or something, and now we can do MNIST in less than two seconds and less than a gigabyte. And now we're moving on to models of the next scale, in the few hundred million parameters, so maybe half a gig in size. So we've covered maybe eight years of ML development; we're going at a pace of about a year per month in terms of recapitulating the history of ML. And I think that what ZKML models can do, whether the ones made by Modulus or by us or by other folks, is going to catch up to the state of the art in general AI, general ML, as soon as the end of the year, maybe another year, we'll see; but that lag is going to get very small very soon. So I would encourage everyone on the call: think about, if you didn't have to worry about performance, what cool applications would you make? Now's the time to think about that.

Excellent, excellent segue there. And Daniel and Ryan, I'd be interested in your thoughts on this as well, like how close are we to being able to address some of these use cases? We're going to talk about performance in the next little segment, if you will, but I'd be interested in your thoughts on how close we are currently to being able to address some of the use cases that we went over.

Yeah, I think since we started, and maybe this is just the privilege of coming from primarily an AI background before crypto, we were always excited about bringing as large a model as possible on chain and imbuing the chain with as much machine creativity as possible. I think a good example of this is that when we first started, our first proof of concept, our first project, had about a 70,000-parameter, three-layer feed-forward neural net, an anemic, tiny little model. Our latest project is, I think, about 3.7 million parameters, and our upcoming project, which is a ZK GAN, a model which outputs pixel art that is verifiably authentic, is going to be even larger than that. And certainly that can get larger both in the size of the model itself, but also in terms of the volume of outputs and what's actually being brought on chain. So I think in a very practical sense, to maybe mirror what Jason's saying here, we are seeing more competent projects being brought on chain, maybe with a slight lag again, but nonetheless that lag is getting smaller each day, and certainly we like to think that the proving improvements we're bringing on the zero-knowledge side, which is what we spend most of our time doing, will be another step change in the right direction within the next year or so. So it's definitely very exciting.
Look forward to bigger and more exciting models in your on-chain services soon.

Okay, all right, appreciate that. And DC, you actually recently published an excellent article, like a survey of the ZKML landscape, covering some of the things we're touching on today around use cases. So what do you see as the current state of development? Just summarize it in your own words, based on a lot of the research you did.

Yeah, so for me, I wanted to understand what the current limitations or bottlenecks are and go down from there. So I took a look at the current state of the art in ZK, like which proving systems are good for the specific computations that we're proving inside of ML. I took a look at different tooling, whether it's EZKL from Jason and Zkonduit, or some other libraries that do some sort of transpilation from ONNX to Circom, and whatnot. And I also really enjoyed the article and blog post from Modulus Labs, The Cost of Intelligence, which surveyed which proving systems were fit for differently sized machine learning models. And I think that even though we're at a really early R&D phase, the developer experience for using zero-knowledge proofs and tailoring them to a specific use case, let's say ZKML, is improving rapidly. With the current developments, better developer tools for creating proofs, better hardware that can optimize specific computations happening inside ZK circuits, and just in general more mindshare and more people thinking about problems within this domain, the use cases and the overall usability of ZKML are improving rapidly. But yeah, I think we've mentioned most of the things that are happening currently.

Cool, very good, and yet another really good segue. You mentioned that Modulus Labs did an excellent article on the cost of intelligence, I believe that's what it was titled, and essentially it included a lot of great benchmarks. So shifting gears: we talked about use cases and where we're at today; what needs to happen next? And if you can, for Modulus and Zkonduit, share maybe your near-term roadmap; if you're not comfortable sharing it, that's okay too, because I know a lot of times this is still a research frontier for everyone, and teams want to figure a few things out before they publish. But in general, what needs to happen next, what do we need to improve upon, to make some of these use cases more of a reality? And Jason, I really liked your comment: don't worry about what can be done, just think about the use cases for ZKML and count on people figuring out how to accomplish it. So yeah, Jason, I'll kick it over to you: what do you think needs to happen next to realize this?

Yeah, sure. So as I said, the progress is happening very fast. I think we're really at a "lead bullets, not silver bullets" phase, where it's just engineering work: we don't need any more theoretical breakthroughs, we have to apply the ones we already know about and do optimizations. I'm pretty confident that we'll be proving as big a model as you like, maybe a bit short of GPT-4 because we don't have access to the weights, in the pretty near future. So I think the most important thing is applications. I think some of the other speakers, the Modulus and Worldcoin folks, are better than we are at thinking about applications; we're lower down in the stack, and we like to think: just make a compiler and let people think about applications.
But to me the big question is really this: we've talked about some sort of vague applications, or proof-of-concept stuff that we're thinking about, but I really think that the people on this call, the growth actors and developers, are going to come up with the best use cases. I think the best use cases of ZKML have not been discovered yet, and that opportunity is still out there. So I think what we really need most is good thinking about how this can fit into a larger ecosystem, how this can fit into society, and how it can benefit society.

Oh, excellent. Yeah, we've mentioned a couple of times the blog post you folks did over at Modulus on The Cost of Intelligence, so I'd like to tie that in: what does your team believe needs to happen next? What are some things we need to research or improve upon? What would you say is top of that stack?

Yeah, I think, concretely, the number we try to remind ourselves of is this: let's say it takes a second to run a model; right now it takes on the order of thousands of seconds to prove that model. That's improving very quickly, but we are laser focused on bringing that down 100x, until we hit the 10x range, or, may God forbid, the sub-10x range. We think that's when really very exciting use cases, well, the design space, just expands so much more. And for Modulus, we're a little longer term here. We're very excited, obviously, for the tooling side to get a lot better, and EZKL is an incredible step in the right direction there, but we want to make sure that the step after the proving step is also going to be supportive of these expressive models, which are only going to become more populous and more capable with time, and that's definitely something we focus on. As well as, of course, totally right on the money there, the use cases: developing these, especially high-volume, high-intensity, sophisticated models, the stuff that got us excited in the first place, and making sure that they have a home here in web3, and that we can bring the culture of decentralization as well as permissionless software to what I think will be the most exciting computing paradigm of our time, AI. It's a total privilege to get to live in this intersection, and the water is fine, y'all should come and join, there are no piranhas here.

I like that, and I love the mental model of the intersection of ZK and ML. I worked at the Electric Coin Company for the last three years prior to joining Worldcoin, and of course they're all about ZK, some of the pioneers of ZK tech, and the intersection of these two technologies is incredibly exciting. In terms of where things need to go next, I know we have, and I'm going to kick this over to DC to explain, one of our use cases that involves being able to do things in a mobile environment, which means that we've really got to optimize for that use case, in general being able to do some of this on mobile. DC, do you want to talk about one of the use cases that's on our research roadmap, in terms of doing things on mobile from a self-custody perspective?

Mhm, yeah. So Worldcoin is building World ID, which is a proof-of-personhood protocol that's privacy preserving, and part of World ID is making zero-knowledge proofs about essentially being a real human being that's verified by an orb. Essentially what you're able to do is create zero-knowledge proofs locally on your mobile phone,
and you never reveal any sensitive or private information to the chain. In this case, when you're generating a World ID, you get a private and public key pair, and you have this representation of your eye called an iris code that's stored inside of a smart contract, inside of a Merkle tree; this is like a pseudonymous, random string of bits. You're able to essentially prove that your iris code is indeed in the smart contract, and this essentially allows you to prove personhood. Another thing you can do is verify that it's a unique person doing these things, by having this private signature that signs these transactions associated with a zero-knowledge proof, and this essentially obscures which iris code, or which pseudonymous string of bits, is signing or verifying a proof of personhood. If you associate this with an action, let's say voting on a DeFi protocol governance proposal, for example, you're able to essentially prove that you're unique and that you haven't done that action before. But in the context of ZKML specifically: when a user is getting onboarded, the way that you generate this unique, privacy-preserving identifier is by using ML, right? You take an image using our custom-built hardware device called the orb, and this hardware device then runs a bunch of checks to prove that you're human, and after that it creates a unique string of bits for each person, and you submit this string to the smart contract. So if you want a user to not have to rely on the orb to compute this identifier, you're able to just take an image with an orb once, the orb signs the image with its private key, and you're able to self-host this biometric information on your phone in an encrypted manner. You'd then essentially be able to generate your own World ID locally, with a zero-knowledge proof, as long as you have a zero-knowledge proving library that's able to prove the iris code generation; you just submit your newly created iris code alongside the proof to the World ID protocol, and you'll be able to create a new identity. And if ever the case comes that a new model comes along that's more performant, more scalable, more inclusive, faster, any of many possible improvements, you'd be able to recompute your iris code without having to go to this hardware device again. That's one of the better use cases for ZKML that we can think of inside of Worldcoin. But part of this is that you'd have to create a zero-knowledge proof of this really expensive computation, it being an ML model, in a very computationally constrained environment, it being the mobile device of a user who may have a 32-bit Android phone, which may have very small amounts of RAM for computing these proofs.
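A rough sketch of the self-custody flow DC outlines: the orb signs the captured image, the user keeps the encrypted image and the signature, and later recomputes an iris code locally with a newer model. This assumes the third-party `cryptography` package; the signature scheme here is purely illustrative rather than Worldcoin's actual hardware attestation, and the iris-code model and ZK proof are placeholders.

```python
# Sketch of the self-custody flow: the orb attests to the capture, the user
# keeps the (encrypted) image, and later recomputes an iris code locally.
# Assumes the `cryptography` package; the signature is a stand-in for the
# orb's real attestation, and the model/proof steps are placeholders.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- at the orb: capture and attest --------------------------------------
orb_key = Ed25519PrivateKey.generate()
orb_pub = orb_key.public_key()

image = b"...raw iris image bytes..."            # captured by the orb's cameras
image_digest = hashlib.sha256(image).digest()
attestation = orb_key.sign(image_digest)         # orb vouches for the capture

# --- later, on the user's phone -------------------------------------------
# The user stores `image` encrypted plus `attestation`; nothing leaves the device.
try:
    orb_pub.verify(attestation, image_digest)    # check the image is orb-attested
except InvalidSignature:
    raise SystemExit("image was not captured by a trusted orb")

# Placeholders for the expensive part discussed above:
#   iris_code = new_model(image)                 # run the updated ML model locally
#   proof = prove_iris_code_generation(...)      # ZK proof that iris_code = model(image)
#   submit(iris_code, proof)                     # register with the identity protocol
```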
Cool, very good. Yeah, it's super exciting, and I like the thought. I go back to the work Modulus is doing around the efficiency of proof systems and the environments we can run in, and Jason's comment: let's think of the use cases and then figure out how to solve them from a scale perspective. Yeah, we could have a super long Twitter Space on this topic, I think, but we probably should shift gears to take a few audience questions. We did have a few come in, and I will just read those as they are, and then whoever on the panel wants to pick them up can. So the first one is coming from Chesley; it says: how is ZK used in fraud prevention? I think it's more of a ZK general question, but I think there's some ML application there as well, so I'll just throw it out: how is ZK used in fraud prevention?

I can take a stab at it. So I think the fundamental guarantee, or one of the fundamental guarantees, that ZK gets us is effectively that of validity. Not to take y'all through too much history, but some of the original practical zero-knowledge proofs, or zero-knowledge SNARKs, were actually developed by Microsoft back in 2012 and 2013, basically GGPR and Pinocchio. The whole point there was that Microsoft was a little worried back in the day that, since Azure was a brand new cloud service, folks wouldn't trust Azure: you were outsourcing your compute to this totally untrusted execution environment, and if Azure wanted to cheat, if they wanted to run a different algorithm, if they wanted to, for example in the fraud case, not run the fraud algorithm that you care about but rather a modified fraud algorithm which does allow some fraudulent transactions through, because they were bribed or something like that, they would be able to do so. Effectively, what ZK gets you in this case is that it allows you to show that the same fraud algorithm is being used each time, that there's no way anyone can tamper with it, or, as Daniel was saying earlier, that the operator of this fraud model has thrown away the keys, and the only thing left for them to do is operate such a model. In the sense that, if you would like to do fraud detection via machine learning, you would like to know that the fraud detection model itself, slash the person who is running the model, cannot be tampered with, or cannot tamper with the model; and that is what the ZK validity gets you.

Excellent answer. Anyone else have anything to add on that? That was an excellent summary.

I can also give a little bit more context in the context of smart contracts. So smart contracts are these programs that run on top of a network, let's say Ethereum, and let's say there's a smart contract which has some state, say the balances of all tokens deposited into the smart contract. If I am able to prove to the smart contract that I am able to create a state transition on Ethereum which transfers all the tokens to me, then I can essentially prove an exploit: I can prove that I'm able to take this smart contract to a state which would mean that it's being exploited, or is exploited according to some rule set that defines what it means to be exploited, let's say withdrawing big parts of the deposits. Somebody like a white hat, a security person, would be able to prove to the smart contract: hey, you have a vulnerability, I'm able to exploit a big portion of your funds and extract them to myself. The protocol could be automatically frozen, or some emergency piece of code could kick in and override things to disallow withdrawals, trigger an upgrade to revoke all permissions, and just give the funds back to the users. There are a lot of things you could do like this, just proving exploits using ZK, like creating a zero-knowledge proof of an invalid state transition, thus proving an exploit to someone, which would prevent fraud in this case.
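A toy version of the statement DC describes a white hat proving: "there exists a sequence of calls after which the contract's solvency invariant breaks." The ZK layer is what would let someone show such a trace exists without publishing it; the sketch below only shows the statement itself being checked directly against a candidate trace, with a deliberately buggy toy vault standing in for a real contract.

```python
# Toy version of the statement behind an "exploit proof": replay a candidate
# trace and check whether the vault's solvency invariant breaks. In a real
# setting, a ZK proof would show such a trace exists without revealing it.

class ToyVault:
    def __init__(self):
        self.deposits = {}          # user -> amount they are owed
        self.reserves = 0           # tokens actually held

    def deposit(self, user, amount):
        self.deposits[user] = self.deposits.get(user, 0) + amount
        self.reserves += amount

    def withdraw(self, user, amount):
        # Bug: forgets to check/decrement the user's recorded deposit.
        self.reserves -= amount

    def solvent(self):
        return self.reserves >= sum(self.deposits.values())

def trace_breaks_invariant(trace):
    """Replay a list of (method, args) calls and report whether solvency breaks."""
    vault = ToyVault()
    for method, args in trace:
        getattr(vault, method)(*args)
        if not vault.solvent():
            return True
    return False

exploit_trace = [
    ("deposit", ("alice", 100)),
    ("withdraw", ("mallory", 100)),  # drains reserves without a matching deposit
]
assert trace_breaks_invariant(exploit_trace)
```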
Cool, very good, excellent answer. Okay, the next question comes from Nathan. It says: for situations where an operation has been proven, what kind of UX do you imagine for users to understand that an operation has been proven? Would this be wallet-side, site-side (which I assume means host-side), or an accepted standard or protocol on chain? So again: for situations where an operation has been proven, what kind of UX do you imagine for users to understand that an operation has been proven? Anybody want to take that, or any thoughts?

I think that's an excellent question. In case one, where we've proven an operation to the chain and the verifier is running on chain, it's very easy, because the action fired off by the fact that the proof went through, the execution of the model, the inference that was proved, is recorded on chain and results in a successful transaction. In the other case, where we're using this for sort of general enterprise purposes, say the TSA trying to prove that their screening algorithm is fair, or we're screening CVs and trying to prove that's fair, it's a really good question, and it's really a deep question: if you get a green check mark, how do you believe that green check mark was correctly computed, if you're, say, a non-technical user? For that part I don't have a good answer, but it's a really good thing for us to think about collectively.

Yeah, it's definitely an excellent question, and I think the UX around it is to be defined, up to the application, with multiple choices; the key is that it's decentralized and settled on chain in some fashion, so that it can be verified in whatever application setting you're in. So we're actually bumping up on time; it's been an hour, and that was really a quick hour. I really enjoyed the conversation and hope everyone did as well. First of all, I want to thank the speakers, thank everyone for taking time out of the day to join the conversation and talk with us a bit, and thanks everyone for joining the Twitter Space; appreciate all your participation and questions. This will be recorded, so we'll have it out there later. If you're looking to learn more or get involved, there are several good resources out there. One, there is a Telegram group called ZKML Community; it's a great place to pop in, learn things, and ask questions. Also on GitHub there's an org called zkml-community, and there's a repo under that called awesome-zkml, which has a ton of great learning resources on ZKML; if you have any to contribute, we'd love to have those in there as well. But again, thank you so much to everyone that joined us; speakers, great catching up with a number of you again, and I look forward to talking soon. Thank you so much, everyone.

And hey, massive kudos to DC Builder here for organizing the ZKML community; a lot of that is spearheaded by DC, so hats off.

Yeah, great work, DC. All right, thanks everyone, have a great day. Have a great day, bye. Bye.