Hey everyone, this is Sina with another episode of Into the Bytecode. My guest today is Georgios Konstantopoulos, the CTO and a general partner at Paradigm. In this conversation we talk about Reth, the Ethereum node and SDK that Georgios and team have developed, and we really dig into the architecture and design decisions that allow it to be performant, stable, and extensible. We also talk about some more personal topics, like Georgios' philosophy on engineering management and putting feedback loops in place, and towards the end we zoom out and talk about a potential future for Ethereum as a world computer. With that, I'll leave you to it, and I hope you enjoy.

The last couple times we hung out, I've heard you talk about it being disappointing that all of the rollups seem to have the same feature set, or to be small iterations on the same thing. Why do you feel this way, and what's the alternative vision of what's possible right now?

So maybe let's go back in history, to the original sidechains world. Sidechains were a concept introduced for the Bitcoin chain, where the goal was: okay, Bitcoin is slow, it's supposed to be immutable, let's try not to mess with it a lot, let's introduce additional sandboxes that help it scale and innovate. The original hope was that things happening on sidechains for Bitcoin would eventually either get ported back to Bitcoin, or you would port things from Bitcoin to them, and that would pave the way for a seamless upgrade. Now fast forward: we had the layer 2 vision for many years, where even back in the day we worked on Plasma, we worked on state channels. Then we discovered Plasma was hard to generalize, we discovered state channels were hard to scale to multiple people, and we came up with a compromise-in-the-middle solution that we now call the rollup. The rollup is a really powerful construct: it lets us build trustless scaling solutions that are censorship resistant while having
fewer participants running the network, versus having hundreds or thousands of people operating the network to produce the next block. But the innovation component somewhere got lost along the path. One thing you'll observe if you look around at anyone building layer 2s today is that they're mostly forks of the Geth software, or they import a lot of Geth as a library, because Geth is the best implementation that exists right now of a layer 1 node. As a result, everyone building layer 2 nodes generally derives their work from it, which is amazing, and big kudos to the Geth team. What's the issue with this? People are afraid to diverge from what the Geth stack provides. And why is that? Because people tend to fork the software, and when people fork the software they don't always know where to go and modify things. So when people go and make ad hoc modifications here and there, the software might have unforeseen consequences.

Not to interrupt your train of thought, but I remember that George Hotz did a small commit to the OP fork of Geth back in the day, right? Like four years ago or something like that.

Yeah, George had worked on op-geth at the time, which was a very different system from the one that exists today, and all the changes that were done back then were crazy. Actually, I had written a blog post on how Optimism's rollup works, which went really deep on that, and Optimism had used it as one of their main onboarding guides for new employees for a long time. So, getting back to the core point: if everyone is forking from a well-known node implementation and they're afraid to diverge, then the innovation component got lost. Maybe we achieved scale, but you're bottlenecked on feature development by the rate of
improvement of the base L1. So I think what would be really exciting would be a world where you can iterate further than that: a set of libraries that help you go innovate and experiment. Now, experimentation is really expensive, in the sense that you will need a lot of security tooling around it to make it work, or you will need a lot of process to make it work. Otherwise, if you look at, for example, the Cosmos ecosystem when they tried to introduce the EVM module, whether in a chain like Evmos or Berachain or anything like that, there's a lot of work you need to do to make sure you don't get exploited by some issue, and there are multiple great posts showing how the Cosmos SDK EVM module got exploited when it had a weird interaction with some other module in the Cosmos SDK. That's very important to avoid. So yeah, on the one hand you could say I'm disappointed, but I would take the more positive view on it, which is that I'm really excited by some of the work we've been doing with the Reth project to facilitate layer 2s that are able to innovate and go beyond what, let's say, the base L1 gives them.

To get to that vision, we have been developing a project we call Reth AlphaNet, which is a high-performance, experimental, OP Stack testnet rollup. Let me break that down. It's a high-performance rollup, meaning it's built for breaking through what we call the gigagas-per-second limit. It's experimental in the sense that it's going to let us try out new things. It's built on the OP Stack, which means it leverages the layer 2 stack that Optimism has built and can use fraud proofs, forced L1 sequencing, and all the many years of research we've done with the entire layer 2 community. And it's a testnet rollup because we don't have experience running things on mainnet, and we'd like to run it on
testnet, to be able to iterate very fast. So by being experimental, and that's the core point I want to drive home, we're able to try new things out. Two weeks ago, when we last met, Sina, we hosted Frontiers, a conference focused on all of the code that we've been building with the community, and we launched, or soft-launched, with Conduit, Reth AlphaNet with all the features available in the Prague hard fork, the upcoming Ethereum hard fork. We had implemented everything, both on L1 and on L2. So for the duration of the event we launched a network that has all the features from Prague, such as account abstraction and the EVM object format, and other smaller EIPs, but those were the two most important ones. That is our first foray into starting to accelerate, because I think we need to accelerate not just on performance; I think we need to accelerate on feature development. My hope with all the software we've been building, spearheaded by Reth AlphaNet, is that we'll be able to escape a lot of the local maxima, and a lot of the things that people have told you for a long time are hard, or just too complex, people can hopefully develop in days or hours instead of weeks and hundreds of millions of dollars.

What is it about the architecture of Reth that allows us to iterate at faster speed? Is it something about the architecture, or something about how you've thought about constructing the teams working on Reth and how they interoperate with each other? What is the scaffolding that allows faster iteration at a layer of the stack where you want to be very careful about security and missteps?

That's a great question. Reth, on the surface level, is an Ethereum L1 node. It's built to be fast, modular, and contributor friendly. What do these three things mean? Fast means it's very efficient and it's able to go
fast. Modular means modular code, libraries and such, not "modular blockchain", which is a very popular narrative you'll hear out in crypto. Modular code means we were very, very intentional about the interfaces between various components, allowing you to swap them in and out. And thirdly, contributor friendly means we are very intentional about extremely good tests, extremely good docs, and extremely good maintainer culture, such that you can come into the codebase and contribute new code without messing something up, and without knowing too much about the codebase. These three things are very important for what I'm about to say now, which is that Reth on the surface level is just an L1 node, but how we relate to the Reth project is that it's an SDK for building nodes. So Reth exists as the node: that's the node that has five to six percent mainnet adoption right now, and hopefully more in the future. Reth AlphaNet is the L2, the first-party L2 that we're developing to accelerate and to try things out. And the Reth core SDK is the SDK for building L2s. What does that all mean? It means we took a very intentional process for building the node such that, while we're building it, we can gradually extract components that are reusable and create very clear abstractions that you can plug into. The alternative would be: let me take Geth, fork it, make a ton of cowboy patches on it, and then end up getting hacked. Whereas what we observed with the Reth core SDK, when we applied it on AlphaNet, is that in 1,500 lines of code, just 1,500, which is not a lot, you could write that with Claude in half a day these days, you can build a new L2 with new experimental features that people would tell you are crazy. You can do that in one day. Why? Because the boundary that you can modify is so tight that yes, you can write your business logic, but you cannot
screw up by accident. So we were very intentional about our abstractions, such that they give you flexibility without letting you shoot yourself in the foot. There's a lot to say about the team as well, but I want to first focus on the point around the code and how it was laid out; we can get to the team later.

Yeah, totally. At the risk of going a little too deep, I think it would be interesting: how should I think about the core abstractions that exist inside of the Reth SDK? What are the core components of a blockchain node, and what makes it easy to go in and implement an experimental feature? Can you peer underneath some of these abstractions a little bit?

Absolutely. So actually, during the week of Frontiers, we ran a full team offsite where we gathered and roadmapped the full Reth core SDK. Let me tell you a bit about how a node does things right now. Usually, when a node is operating, it receives messages over RPC; alternatively, these messages come from the peer-to-peer network. So how do things get ingested by your node? It's either the peer-to-peer network or RPC. Afterwards, past the network component, locally the node is a processor: it takes bytes, processes them, writes them to the database, and returns some responses. So what you have is RPC, P2P, execution, and then the persistence layer, which is the database. Specifically in crypto, where you have light clients (this is less prevalent elsewhere; in Solana, for example, you don't have a trie, it has a flat state, as we call it), after execution you also compute the state commitment. That state commitment is what we call in Ethereum the state root, which is a very powerful concept that allows you to do layer 2s, light clients, and other useful things like bridges.
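The ingestion, execution, persistence, and commitment split Georgios just described can be sketched as a couple of swappable traits. To be clear, this is a toy illustration, not Reth's actual API: every name here (`Execution`, `StateCommitment`, `ToyNode`, the `account+=amount` transaction format) is invented for the example.

```rust
use std::collections::BTreeMap;

/// Execution stage: take raw transaction data, mutate state, return a receipt.
trait Execution {
    fn execute(&self, state: &mut BTreeMap<String, u64>, tx: &str) -> String;
}

/// State-commitment stage: compress the whole state into one short value,
/// standing in for Ethereum's state root.
trait StateCommitment {
    fn commit(&self, state: &BTreeMap<String, u64>) -> u64;
}

/// A trivial "execution environment": transactions look like "alice+=5".
struct ToyEvm;
impl Execution for ToyEvm {
    fn execute(&self, state: &mut BTreeMap<String, u64>, tx: &str) -> String {
        let (account, amount) = tx.split_once("+=").expect("malformed tx");
        *state.entry(account.to_string()).or_insert(0) += amount.parse::<u64>().unwrap();
        format!("ok:{tx}")
    }
}

/// A trivial commitment: sum of key lengths and balances (NOT a real root).
struct ToyCommitment;
impl StateCommitment for ToyCommitment {
    fn commit(&self, state: &BTreeMap<String, u64>) -> u64 {
        state.iter().map(|(k, v)| k.len() as u64 + v).sum()
    }
}

/// The node wires the stages together; swapping a stage means swapping a field.
struct ToyNode<E: Execution, C: StateCommitment> {
    execution: E,
    commitment: C,
    state: BTreeMap<String, u64>,
}

impl<E: Execution, C: StateCommitment> ToyNode<E, C> {
    /// Ingest one transaction (as if it arrived over RPC or p2p) and return
    /// the receipt plus the post-state commitment.
    fn ingest(&mut self, tx: &str) -> (String, u64) {
        let receipt = self.execution.execute(&mut self.state, tx);
        (receipt, self.commitment.commit(&self.state))
    }
}
```

Swapping the execution environment (say, for a mixed runtime) or the commitment scheme is then just a different type parameter, which is the property the real SDK aims for at much larger scale.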
So the modifiable components that we had to get done in Reth were the RPC, the peer-to-peer, the execution, the state commitment, and the database. These are very hard to generalize in a principled way, just because if you modify one thing in one component, you might also need to modify it in another component, and that ends up being a lot of work. Where else you would see this is the Cosmos SDK and Substrate, which were built from the ground up to be SDKs for building chains. We took a bit of a different approach, where we said: okay, instead of building the SDK first and then building chains, let's build a very high-performance node, and then, as we're adding new chains, using our knowledge of how nodes should work, let's try to generalize parts of it. Let me give you some examples. If you have a modular execution component, you can run the EVM, but you can also extend the EVM. What are the most vanilla ways to extend the EVM? The easiest is new precompiles. This is what everyone does these days, and I don't find it inspiring, but it's something useful to do. Precompiles you should think of as very high-performance things that you cannot bolt into the EVM: native code that gets executed outside the EVM. Then there are new opcodes; custom opcodes and custom precompiles are more or less the same thing. But you can actually do much more interesting things. For example, you could do a mixed runtime: the EVM plus some other execution environment. Arbitrum is trying to do that, where you can write EVM code in Solidity but also write Rust code that compiles down to something else, so you get a nice runtime that's mixed between the two. There's also another excellent project by Leo Alt of the Ethereum Foundation called r55, which is a RISC-V mixed runtime with the EVM. So I think there is something to be said for a modular execution layer.
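The precompile idea mentioned above (native code the node calls out to instead of interpreting bytecode) essentially reduces to a lookup table. A rough sketch, with addresses shrunk to a single byte for brevity; `0x04` really is Ethereum's identity precompile, while `byte_sum` at `0xf0` is a made-up custom extension:

```rust
use std::collections::HashMap;

/// A precompile is just native code: bytes in, bytes out.
type Precompile = fn(&[u8]) -> Vec<u8>;

/// The identity precompile (this one really exists on Ethereum at address
/// 0x04: it returns its input unchanged).
fn identity(input: &[u8]) -> Vec<u8> {
    input.to_vec()
}

/// A hypothetical custom precompile: sum all input bytes (wrapping).
fn byte_sum(input: &[u8]) -> Vec<u8> {
    vec![input.iter().fold(0u8, |acc, b| acc.wrapping_add(*b))]
}

/// The registry an extended EVM would consult before falling back to
/// interpreting bytecode.
fn registry() -> HashMap<u8, Precompile> {
    let mut map: HashMap<u8, Precompile> = HashMap::new();
    map.insert(0x04, identity); // standard
    map.insert(0xf0, byte_sum); // custom extension
    map
}

/// Dispatch a call: Some(output) if a precompile is registered, None otherwise.
fn call(addr: u8, input: &[u8]) -> Option<Vec<u8>> {
    registry().get(&addr).map(|p| p(input))
}
```

With a modular execution component, "adding a precompile" is just inserting into this table, rather than patching an interpreter loop.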
It allows you to have a dual runtime, bootstrapping something that starts with the EVM but also lets you tap into non-EVM developers at the same time. And this is a tale as old as time: Solana was basically saying, okay, we'll let developers write Rust, and clearly that resonated with many people.

Yeah. So in terms of developer friendliness, dual runtime, just to double-click on this point: you mean there can be smart contracts where, when you look into their bytecode, there are two different instruction sets, two different VMs, and based on what the interpreter sees as it starts parsing that contract, it chooses which runtime to go into. And this allows you to both have this EVM-equivalence thing that we all want, so you can keep letting Ethereum developers from everywhere continue writing Solidity, but also not get pigeonholed into being backwards compatible forever.

Exactly. And I think it's very important to be backwards compatible in the beginning, at least to bootstrap, but realistically you cannot prevent people from trying new things out; people will try all sorts of crazy things, so give them a nice way to do them. For the mixed runtime, that's why I'm a big fan of the EVM object format proposal, which is slated to go live in Ethereum in the next hard fork, Prague. It's a principled way to reason about your bytecode. Right now, the bytecode does not have any explicit form: you take the Solidity source, compile it, and it's a bunch of opcodes next to each other. Whereas with the EVM object format, we can create containers that have an explicit prefix that says, hey, this is a RISC-V bytecode layout, and this is an EVM bytecode layout. And doing this at the interpreter level is easy: you can always inspect the first bytes.
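That prefix inspection is cheap because of how EOF (EIP-3540) lays out a container: it begins with the magic bytes `0xEF 0x00` followed by a version byte. A minimal classifier sketch; the enum and function names are mine, not from any real tool, and a mixed-runtime chain would add further variants for its other bytecode formats:

```rust
#[derive(Debug, PartialEq)]
enum BytecodeKind {
    /// Pre-EOF code: no header, just opcodes next to each other.
    LegacyEvm,
    /// An EOF container, with its declared version.
    Eof { version: u8 },
    Empty,
}

/// Classify bytecode by looking only at its prefix, without executing it.
fn classify(code: &[u8]) -> BytecodeKind {
    match code {
        [] => BytecodeKind::Empty,
        // EOF magic per EIP-3540: 0xEF, 0x00, then a version byte.
        [0xEF, 0x00, version, ..] => BytecodeKind::Eof { version: *version },
        _ => BytecodeKind::LegacyEvm,
    }
}
```

The same check that an interpreter does in one branch is what static tooling needs too, which is the point Georgios makes next.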
But if you want to build developer tooling around that, and we have built a lot of developer tooling with Foundry, Alloy, and others, it's important that the tooling knows what the bytecode is without having to execute the file.

You want to just statically analyze the codebase, yeah.

Exactly, exactly. So it's very important that the bytecode format is structured, such that it can support a principled way of reasoning about multiple runtimes. Throughout our conversation, what you'll hear me say a lot is "principled, principled, principled", because everything you want, you can do in any cowboy way that you want. We're fine with that, we know how to do that, we've been to hackathons throughout the years, and we've seen codebases that are able to do the thing but are not principled at doing the thing. Whereas if you want to build something for ten years out, and you want to build a proper team around all of that stuff, you need to be a bit more thoughtful. That's a core principle of how we do things: we try to be very adamant that this has to be done the right way, and the right way will take more time and more effort, but it's always worth it.

So, to get back to the SDK and the modularity point, and getting under the core components: one exciting component that developers can modify is the runtime; people have started doing that right now, and we'll see where they get. Other very basic things you could expect are custom RPCs. People want custom RPCs; people actually sell them. If you look at Alchemy or any of the other big RPC providers, they give you endpoints with `alchemy_`-prefixed names for things like ERC-20 balances. They do some nice things, nothing that crazy, but nice things; it's useful to expose some additional RPCs. How people have been doing that, again, is that they fork Geth and try to do it on their own, and any time I've
seen a codebase that does this on its own, it's a bit painful to watch. Whereas in the Reth codebase, we have examples that illustrate: hey, by calling the extend-RPC hook in your main file, you can get a new RPC in 20 lines of code. Instead of forking Reth and making all your mods God knows where, what you do is use the Reth CLI, or import the Reth CLI, and build your own binary where you're importing Reth as a library. That goes back to my point that Reth is not just a node, it's an SDK for building nodes, and not just for building L2 nodes but for modifying the existing L1 node. Whether it's a custom RPC, or, you can think of the same for, a custom transaction pool: maybe you want to build a high-performance mempool, because maybe me and you have a really good networking connection between the two of us and we want to create a private mempool over it. Another example of using Reth as an SDK is MEV. MEV people are generally early adopters of any kind of bleeding-edge crypto tech you see these days, because if they can get faster, if they can get more expressive, that's more money for them. What happens with all of these guys is that they look at the fastest thing that exists and they want to modify it, but if they modify it in an unprincipled way, they might do something wrong, and that will cost them dollars. So for us, giving them a great API to modify the node, to extend it, and to use it for MEV was extremely valuable. And we know that MEV people have been using the Reth project since the very early days, even since the early alphas, and they're always pulling the latest main on GitHub to make sure that if there's a 1% benefit, they get access to it. More recently, Flashbots released rbuilder, which is an MEV builder built in Rust, built on Reth, and they also presented it last week at Frontiers, where that's clearly
demonstrating the power you get from importing a bunch of very polished libraries, instead of having to figure things out on your own. This goes back to our core principles as a team: we exist to empower developers, and by empowering developers you don't just build great stuff, you think very deeply about how to prevent your developer from shooting themselves in the foot, how to make sure you don't get a dissatisfied "customer", quote unquote in this case, for your software. Other areas where you can think...

Maybe hold on, you've got a lot of stuff to share. Maybe double-clicking here again, just to make sure I and other folks understand: let's take the MEV example, say rbuilder. How would they use the Reth SDK? What are they exactly doing?

Totally, yeah. The simplest way to think of an MEV builder is that it's just a modified block builder; sorry, that's almost tautological. What's an MEV builder? It takes a bunch of transactions and packs them up into a block. By default, nodes look at their mempool, pick the top however-many transactions by fee, package them into a block, and send it out; that's usually called vanilla local block building. An MEV builder, by contrast, has a different algorithm for computing that best block, and not only that, the algorithm doesn't pull only from the mempool: it pulls from other sources, the so-called private order flow, or from other mempools which might sit behind some service, for example Fiber, Chainbound's service, and others. So what Flashbots, or any custom MEV builder, would need to do here is swap out the default, quote unquote, payload builder with their own custom building algorithm, and that's all you need to do. Flashbots specifically went the extra mile to do some of their own modifications and optimizations and other stuff, but the most vanilla thing to think about is literally: I need to swap out the payload builder.
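The vanilla local block building just described is essentially a greedy loop: sort the mempool by fee, pack until the gas limit. A toy sketch (struct fields and names invented for the example); an MEV builder would replace exactly this function with its own ordering over private order flow:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Tx {
    fee: u64,
    gas: u64,
    id: &'static str,
}

/// Vanilla payload building: highest fee first, greedily pack until full.
fn build_payload(mempool: &[Tx], gas_limit: u64) -> Vec<Tx> {
    let mut pool: Vec<Tx> = mempool.to_vec();
    // "Top of the mempool" = sorted by fee, descending.
    pool.sort_by(|a, b| b.fee.cmp(&a.fee));
    let mut block = Vec::new();
    let mut gas_used = 0;
    for tx in pool {
        if gas_used + tx.gas <= gas_limit {
            gas_used += tx.gas;
            block.push(tx);
        }
    }
    block
}
```

The point of a payload-builder interface is that this one function is the entire surface a custom builder has to reimplement.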
Usually, swapping out the payload builder would be a lot of work. In our case, you implement an interface, and in your main file you say, you know, .payload_builder(my_payload_builder) instead of the default one, and that's it.

Yeah, it's great. So in general, the whole node SDK works like this: by default it gives you good implementations of things, and then you can override them once you've imported the so-called node builder. In software there's this very common pattern called the builder pattern, where if you have a structure called Foo and you want to instantiate it, you do Foo::builder().bar().baz() and so on, and then .build(), and that gives you back an instance. It's a very common pattern everybody uses for a ton of small things, but I think this is the first time we've seen it done at such scale, for a whole node.

Yeah, exactly. The builder pattern is amazing for this, because the node is a very clear system, with clear components that can be swapped in and out, and I can just say .this, .that, and I have a great API that's totally idiot-proof: it wires everything inside for you, and it's just very useful, and people have liked it so far.

Makes a lot of sense, yeah. And that's actually how our layer 2 integration works too. Reth also supports the OP Stack, as I said earlier, and on the OP Stack front, instead of having to reimplement the whole thing, we had a few thousand lines of code that implement OP Stack-specific modules, and at some point we do .types; even the types are configurable. So if you want a blockchain with additional or different transaction types, we can support that too, which is again really powerful, because now you can introduce account-abstraction types, for example, or the deposit transaction type. And again, you don't need to be an expert on the whole codebase; you just need to learn.
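The builder pattern described here, in miniature, with defaults you can override before `.build()`. Method and field names are illustrative, not Reth's real node-builder API:

```rust
#[derive(Debug, PartialEq)]
struct Node {
    payload_builder: &'static str,
    rpc: &'static str,
}

struct NodeBuilder {
    payload_builder: &'static str,
    rpc: &'static str,
}

impl Node {
    /// Start from sensible defaults: a working node with zero configuration.
    fn builder() -> NodeBuilder {
        NodeBuilder { payload_builder: "vanilla", rpc: "eth" }
    }
}

impl NodeBuilder {
    /// Override one component; everything else keeps its default.
    fn payload_builder(mut self, name: &'static str) -> Self {
        self.payload_builder = name;
        self
    }
    fn extend_rpc(mut self, name: &'static str) -> Self {
        self.rpc = name;
        self
    }
    /// Wire everything together and hand back the finished node.
    fn build(self) -> Node {
        Node { payload_builder: self.payload_builder, rpc: self.rpc }
    }
}
```

`Node::builder().build()` gives you the default node; each chained call overrides exactly one component, which is why you can't lose wiring by forgetting a field.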
Hey, here's the integration guide: you need to do this much, instead of having to navigate a 100k-plus-LOC codebase.

Into the Bytecode is sponsored by Splits. Are you tired of sacrificing security for usability? Splits believes it's still way too hard for teams to self-custody their onchain assets. They're building a new kind of internet-native bank on top of Ethereum. Splits makes it easy for teams to manage the whole life cycle of their finances, from structuring revenue-sharing agreements using payment flows like splits and waterfalls, to managing those earnings once you receive them using pass-through wallets and smart accounts. Splits is being used by teams like Protocol Guild, Zora, Songcamp, and others. I'm a big believer in them and recommend checking them out. You can learn more at splits.org.

This is awesome. It's going to be such an amplifier on the productivity of the whole space, I feel, because before, you would have to try to understand the whole system, hold it in your head, and make sure your small change doesn't break anything else in the dependencies. But now you're giving people very clean, dependable interfaces, and you're saying: you can swap out this piece and do whatever you want, and you won't break the rest of it.

Exactly, exactly. It prevents you from derailing in unexpected ways.

This point about MEV builders makes me think of this concept of building with feedback loops that I've heard you talk about, and this is an interesting feedback loop. How do you think about feedback loops in the context of Reth, but also, is this something you use more generally in your work?

Yeah, I'm obsessed with the theme of the feedback loop; if there's no feedback loop, you have a big issue. I think this carries over from software to people management to your personal development as a human, actually. And maybe,
before getting too philosophical, just on the Reth side: I think there are basically three core feedback loops. There's the performance one, there's the extensibility one, and there's the stability one. So: performance, stability, extensibility. What do these three mean? For performance, when you have a feedback loop, in software at least, you want to define some kind of metric that you're trying to hit, and you use that as your feedback loop. For performance, we defined a metric that we call gas per second. I think that's kind of an obvious thing, and I'm shocked people haven't been using it more widely. Basically we say: all right, for high performance, we're going to try to maximize the gas per second the node can process, and we're going to establish a very clear methodology around that, and with that methodology, our goal is chart-number-go-up. The metric is defined according to the methodology, that is, how you benchmark in this particular case: it is for this piece of hardware, for this piece of load, and you try not to make any changes that might influence your measurement. Because when it comes to benchmarking, benchmarking is a scientific project, and you have to be scientific, and in science there's this very standard thing that you change only one variable when you're measuring; if you change more than one variable, you don't know where things happened. So in our case, we fix the hardware, we fix the load, and we measure, let's say, every week, and that gives you a gas-per-second number. If the number went up, great; if not, we had a regression, and that's a problem. In the ideal case, anyone doing any kind of performance benchmarking should have regression tests that run continuously on CI. So on GitHub, if something showed your code to be 5% slower for any reason, your change should be rejected; 5% more expensive in gas would be the equivalent if you were writing a Solidity smart contract.
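The metric and its CI gate are just arithmetic: fix hardware and load, compute gas per second, and reject the change on a regression beyond some threshold. A sketch of that logic; the 5% threshold and all numbers are examples, not Reth's actual settings:

```rust
/// The feedback-loop metric: how much gas was processed per wall-clock second
/// on fixed hardware under a fixed load.
fn gas_per_second(gas_used: u64, elapsed_secs: f64) -> f64 {
    gas_used as f64 / elapsed_secs
}

/// The CI gate: true if `current` fell more than `threshold` (e.g. 0.05 for
/// 5%) below `baseline`, i.e. the condition on which the PR should fail.
fn is_regression(baseline: f64, current: f64, threshold: f64) -> bool {
    current < baseline * (1.0 - threshold)
}
```

One gigagas per second, the limit AlphaNet is built to break through, is simply a measured rate of 1,000,000,000 gas per second under this methodology.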
Your CI should fail, and you should not accept that pull request. That's a general cultural thing: anyone that respects themselves in doing high performance should have CI for performance. That's table stakes; I think it's super important. So on performance, gas per second is our feedback loop, and that gets us to high performance. But we also use other common tools, like flame graphs. A flame graph is an image you get after profiling your code, which shows you a bunch of wide red bars, and your objective is to take a wide red bar and make it less wide. Thinking about it very simply: a thing that takes a lot of time in the image, your goal is to make it take less time in the image. If you can do that, that's a great profiling loop to be in, because you're like, okay, I made that smaller, and when you've made it small enough and you see other big ones, you're like, okay, that one is now worth more of my time than the thing I've been looking at. A common issue you see in many junior developers is that they spend hours and hours optimizing and refactoring a bunch of code that never gets hit. That's a common mistake; if you see anyone doing that, please help them avoid it. So that's the performance front. On the stability front, there's again an excellent feedback loop for us, which is the biggest staking network that exists today: Ethereum. If Reth is good enough for people to run on Ethereum L1, then it's stable. Simple statement to believe. Ethereum stakers in particular stake 32 ETH, which is a lot of money, per node, and if a Reth node had a problem, they wouldn't run it. So if we see more people running Reth, that's amazing. And that's also, in part, why, when people ask us, do you want Reth to overtake Geth or Nethermind, for us
the answer is obviously no, because there's nothing there, there's almost nothing to win by trying to grow too big. You want to keep it at a level where you can sleep.

The feedback loop is around stability; you don't get any additional prizes by being a bigger percentage of the L1 network.

Exactly, precisely. For us it matters that enough people use it, but after some point that tapers off. I don't know how much that is, I literally don't know, but as long as people trust it, we're really happy, and it gives us confidence that, okay, this is good software. The third one is the extensibility feedback loop, and the feedback loop, or the metric (pick your term), is how many people import Reth in their dependencies. If more people use Reth in their Cargo.toml, that's a success, and that's the clear case of Reth being used as a library, not as a node. The first two are Reth being used as a node; this third one, extensibility, is Reth being used as a library. Now there's a fourth one, which is not as commonly mentioned: the number of GitHub contributors. I care a lot about the number of GitHub contributors; I think that's actually the key thing that has given us all of the adoption and distribution we have so far, and I think it's a key thing to measure how many new contributors you get. Right now we have about 800 unique contributors across all of our codebases; of note, we employ fewer than 20, so that's a big amplifier across all of them. And of those 800, for example, the Reth project had 500 pull requests and 50 contributors, including our team, in the last month, which again is a lot. And that goes back to my point: you don't want to
be forking Reth, because if the Reth codebase changes by 500 PRs a month, at some point you will need to pull the upstream changes into your project. I don't know if you've been through that, but anyone that's been through a long rebase, when the upstream branch has diverged way too far from yours, knows it's just ridiculous.

It's like not doing your taxes and being two months late on them.

Totally, yeah, exactly; it might be worse. So these are the four things I measure, let's say, when thinking about the success of the Reth project. Now, on top of them, we also have the rest of the tooling, like Foundry and Alloy, like revm, like viem and wagmi built by the Wevm guys. So there's a whole ecosystem of things around them to support the core Reth project; there's full-stack coverage, let's say. Even though Reth is the SDK, it has a bunch of very useful things around it, because if I'm a Solidity developer, I don't know Rust, I don't know Reth, I just want to write my things; but if Reth adds a custom feature, that custom feature needs to be exposed somehow to the developer. That's why, for us, it has been very important to have full-stack ownership. Some people might feel uncomfortable with it, but we think it was necessary to get us to the next level, we think the numbers speak for themselves, and we think this has been important for the entire ecosystem. So we're really proud of that work, and we intend to continue doing a lot of it.

Mhm, yeah, it totally makes sense. It's like a vertically integrated open-source stack, so you get the benefits of both approaches, and if you make a change deep in the stack, you want it exposed in all the libraries above it. Before we forget, how do you use feedback loops in the context of teams, or even personal development?

Yeah. Well, first, it's important to be able to take
feedback, I feel. And feedback is not to be confused with me shouting at you, or me ranting at you about something that you screwed up in some way. There's an art to giving and receiving feedback that many people are not even interested in engaging with, and giving good feedback is really hard. It requires you to think; it requires you to not vent at someone for something that you felt they didn't do right. So I think in general, for team development and for personal development, being able to react to feedback without ego is extremely important. When somebody gives you feedback, it's very important to repeat the thing back to them, to make sure that you understood it, and to brainstorm with an open mind about it. If you show any sign of defensiveness, not only are you going to make sure that the other person believes you didn't hear any of their feedback, you are going to deprive yourself of them giving you feedback in the future, because they're going to be like, okay, Georgios is an idiot, he doesn't listen, why would I bother? It took me so much to go and give him that feedback, and he clearly didn't hear it, so why does it matter? Whereas if you give me something and I engage with it with an open mind and say, that's a good point, I didn't think about that, and then you riff on it together, that's quite exciting, because it means that, one, I heard you; two, I want to grow; three, it further deepens our trust, actually. And I think there's very little benefit to having relationships in your life where you need to walk on eggshells, or, sorry, maybe I'm butchering the phrase, but where you need to be careful with what you say to that degree, because ultimately then you have a fragile relationship. I think with everyone in your life you want to be deeply engaged, authentically and honestly, and so you
should, one, be willing to take the feedback and engage with it; two, when giving the feedback, you should be extremely thoughtful and not be almost attacking the other person. A lot of this, and I know you asked me about this before we had this call, a lot of these are learnings that I got from working very closely with Matt, my boss, where basically I came into Paradigm as a, you know, loud Greek, let's say, and we went through a lot of sessions, a lot of feedback, funnily enough, where we iterated on achieving a very open and truth-seeking, call it, way of engaging with things, versus being very stuck in your socialized or self-authoring mode. As much as you can, you want to be self-transforming, and not hang on to, let's say, your quote-unquote previous identity just because you have ego about it. I think that's really important for growth: basically saying, it's not about me; here are my goals, here's how I need to evolve, and I'm willing to forget things. And unlearning is the hardest of all, because you might have habits that are deeply rooted in your psyche from maybe 20 or more years ago, and you just need to work through them, and that's hard. Yeah. And when it comes to teams, I think a lot of this applies. Well, in teams it might be a little rougher when you have larger teams, so I think it's very important to protect the candidate or the employee. When you can, you want to facilitate ways for people to give feedback anonymously, because even though you might have a very quote-unquote safe culture around things, for example in the open-source culture that we have inside of our teams, or even at Paradigm, people are generally super direct. The culture is: say what you think. Everyone gets it; everyone is developed enough that
they will deliver the feedback the right way. But if you self-censor, that's 10x worse than saying something that might hurt a little, and you might need to develop some thicker skin sometimes. But I think it's still important to have the backup: you should always allow people a safe place to give the feedback, because no matter how much trust you have, sometimes people just don't want to do it. So I want to drive the point home that you just cannot be ruthless about it; you have to also be very empathetic about the other person, and again, that's very hard and very important to develop. I think many times heavy engineering orgs forget about that, and they develop this kind of cutthroat culture, which I don't exactly love, despite being very direct myself. Totally. I've liked the radical candor framing. When I was at the EF we did a shared workshop around that with a bunch of people on the team, and it was really nice to have that shared language afterwards. Thinking about how you've designed these feedback loops in the context of Reth, so there's performance, stability, extensibility, with specific metrics they're being benchmarked on: is there any way of applying that to yourself as a person? Or is it more of an intuitive process, like, I know I want to grow in this way, or do you have actual metrics under the hood? I wish I had such discipline. I haven't, but I've been trying. For example, when it comes to physical training, I actually think that being great at your job also requires you to have a healthy body; if the hardware is not good, the software is not going to run. So I've been trying to do better at that, but I think sometimes it's hard to
apply, even though you know these things very well. I've been blocked on this at times in the past, so there's definitely a lot of room for improvement there. So no, I don't have as much of a training mindset in everything in personal life. Sometimes I find it very useful to regularly train in music, for example. If you have a set of things that you want to do and regularly improve, whether it is playing an extra genre on your guitar or playing a longer DJ set or something like that, I think these are really interesting ways to put yourself in a training mindset. I think the most important thing for all of this is just going and doing the thing over and over, which I am good at. Quantity leads to quality. Yeah, exactly, go and do the thing. Going back to Reth with these four feedback loops: given both the code structure and all the teams that are working on this, how do you then decide what to actually work on? How are you prioritizing a roadmap to maximize these metrics? Yeah, so this depends on team size, right? What got you here will not get you there. For us, we're at a very interesting point in the project where we are in dire need of structure, of more concrete ownership, management, and so on. Just to give you some context, we've been running this open-source work for three, four years basically as a pirate crew. Everyone on the team was hired from GitHub without an interview process; the interview process was effectively continuously showing up on GitHub, which is really exciting, but it also means that we were never super intentional about, okay, here's the structure. Also, another cool thing from earlier: I think you said that literally every single person is from a different country? Oh yeah, that's a fun stat for an open-source project to have, that sort of a thing.
Yeah, of the 16... it depends on how you count, but basically we have over 15 people and over 15 different countries on the team, which is really exciting, actually. Not to derail from the previous point, but it's really interesting to see how, if you have multiple ESL speakers in the same room, the communication is kind of forced to be concise and precise, because otherwise you would not understand anything. I think most of us speak pretty good English, but you can even hear it on me throughout this call: sometimes you will mess something up, so it's really important to be really precise in your communication, especially with different countries and different languages. The other thing is time zones. That also trains you very heavily in being very precise in your async communication, which I think is a big predictor of a team's success: being able to be good writers. So yeah, on the management feedback loop: basically we were running the team as a pirate crew for a long time, where decision making and prioritization was basically me calling the shots on everything, just because I have a lot of context from the Paradigm investing and research work and from working with many companies in the industry, most of which are in our portfolio. You could think of me as almost being in the PM role, where you receive a bunch of context, you mash it a little, and then you send it off to the team for execution. You're like a Reth node yourself. Yeah, ha, I wish I was that fast. The structure is going to be really needed, though, because all of this worked well so far when we had few people using the software, whether it is Foundry or Reth or Alloy or any of these. A lot of people use our
software, but our priorities come from a very specific, high-signal group of people that we have curated over time. But over time, the surface that you're covering grows; maybe you want to end up commercializing part of things in the future; we want to be ready for any kind of change that we want to introduce in the future. So I think it's important that we introduce structure in the team, and we are going through that exercise right now, developing more structure in how decisions are made and how priorities are decided. You know, if you come to me and we're good friends, I don't suddenly P0 everything that you request; we actually have proper priorities that are respected, such that we can say, in the next three months this is what's going to get shipped, versus people overworking themselves, or unclear priorities coming in, or dropping requests that come in that are actually important, that we said we would prioritize but didn't. So I think the most important point of introducing clear structure, which is an obvious thing, but it was not obvious to us until we felt the pain, is, one, to avoid people burning out; two, to ensure that you have a perfect SLA: if you said you would do something, you do it, and zero dropped balls allowed. I think that's actually a general principle of being a good professional, which, to be clear, I'm still learning and will still make mistakes on: zero dropped balls. And earlier you were asking me what's a metric that I hope to hit for myself; I think that's a great one. Zero dropped balls, no exceptions, no excuses. If you drop the ball, it's not about making up an excuse; it probably goes back to time management. It's not you saying, hey, I missed something; it means that, hey, you actually, upstream, did not manage your time appropriately, and
this goes to team management too. I like it; zero dropped balls is a better way of saying inbox zero, it's like inbox zero tied to the importance of the thing we're talking about. Totally, and I definitely recommend being an inbox-zero type of person. I think it's required, especially if you're in a routing role. Yeah. One other thing: I was asking a few mutual friends for potential questions in advance, and one thing that Liam shared was that you have this habit of writing these zoom-out, comprehensive documents at specific points in the journey, and these are both helpful for you to distill what you think, but they're also helpful for communicating a shared vision to other people. What's behind that practice? So, behind that practice... how do I go about this? I came into Paradigm as a very tactical or instinctual person. Many times I did the right thing, many times I didn't, but all the time it was kind of instinctual, not as intentional, kind of go where your gut takes you. I think over time I realized that you have to zoom out, slow down, take a breath, realize what's happening. I have a doc that I call 'zoom out', which by now is like an 80-page doc with notes, and also a journal where, on a weekend, I would go and just jot down my thoughts. I think it's very important to do this, because, one, you want to declutter, so you want a safe place for much of your strategic thinking, or the feedback that you want to give to people, and then you want to look back at all of this and synthesize it into some higher-level insight. So I don't think I have too many crazy things to share about this; it's just trying to create
self-awareness. Liam is particularly good at that; he has massive docs, he takes notes every week. I've seen them, on Obsidian. Yeah, exactly. I never went that far. What I really like is ending the day with a very important question that's really top of mind, and then thinking about it the next morning, before any inputs. I think that's a very good way of leveraging your subconscious to basically get things done. Interesting. What sorts of questions? Something important and strategic for your work or for your self-growth, something that's blocking you, something that doesn't have an obvious answer, something that you need to actually think about. It sits a lot at your stretch point, right? So really training yourself to operate at this stretch point is really important. And I think that's the reason for the zoom-out docs, or the write-ups that I do for myself, and for others, honestly, because after you have digested your thought, you can then package it up and send it to someone: hey, here's what I thought about recently, or here's one area where I improved recently. Just to give you a concrete instance of this: I've been trying to redo my entire schedule. I'm in this mode where my calendar is basically insanely bad, and it requires you to retake control of your day. So one way is, okay, how do I re-architect my day? Being very intentional about that, thinking about it, and then communicating it to the people around you is extremely important. And any experienced exec that might hear this conversation might think, hey, this is obvious time-management stuff, but for me it was not obvious, because I went from very deep IC mode, where I would code every day, to eventually
not coding almost at all. And I think that many people who are capable engineers might go through a similar journey, so it might be useful to them, and less so for the experienced exec that juggles a ton of meetings or a ton of demands on their time. Yeah, that's a really nice one, to close the day that way and start the morning with it. I feel like I've talked about this a couple of times in previous conversations, but I've been reading the long Alexander Hamilton book, and he was a very prolific thinker and writer, and for many days across the years he would go to bed thinking about a question, wake up, have strong coffee, and just write for hours. Exactly. And yeah, there's a certain way in which you're fresh in the morning before everything else clutters the mind. Pre-input, pre-email, don't check your phone, no screen, just go. You know, the other thing that you might recall, or may not, from when you were in high school or middle school: sometimes you have a puzzle that you need to solve over the weekend, or some task that your teacher set you, and at the end of the day there's no way for you to solve it, but somehow you jot it down, and then you wake up the next morning and voila, the solution comes to you. With other people this happens with showers and such. So I think it's kind of interesting to see how the subconscious does its job at times. Yeah. I'm curious to dig into one more Ethereum question before closing, before a couple of fun personal ones. You have a lot of unique perspective and context across the space, and you're building for this world where the Ethereum node is no longer just a node; Reth is not only a node, it's an SDK, and this is going to lead to this
proliferation of layer twos, of services around blockchains; the whole transaction supply chain is changing in many ways. This could be a very big question: I basically want to ask you what you think the Ethereum stack and ecosystem will look like if we snapshot it in three years' time. But that's a very big question, so I'm curious how you would answer the L2-centric version of it: do you think we're going to exist in a world where every application is on its own layer two? How do they interoperate with each other? What does the account model look like? Do you have a coherent vision of what that future looks like? Yeah, I have one vision which, any time I talk about it, people on my team think I'm a bit crazy. I really like the idea of a decentralized cloud, or the so-called world computer from back in the day. Clearly you cannot scale one monolith to world demand; that's well established, just look at the internet: there are clusters of nodes that don't talk to each other, that don't synchronize across everything. You can probably do a lot, but at some point you are just going to hit a limit, and maybe demand at some point outpaces how much supply you can provide with better hardware and whatnot, which is the classic Ethereum versus Solana debate. So from my point of view, what I've envisioned is this: you know how, when you install Docker, Docker runs as an icon on your top right, and it just runs in the background and runs a bunch of containers? So here's my galaxy-brain, way-out-there, sci-fi take, and take it as you will. My ideal take is that you download the Reth app, you drag and drop it into Applications, and then you run it. It runs in the background, and in the background it would run a stateless staker and execution layer. Why stateless? Because it should take very little
resources. And without derailing too much: when we say stateless, it means it's a node that receives transactions, and the block also contains all the information required to execute those transactions; this is typically called a witness. By doing this kind of stateless execution, you're able to follow the chain without having to keep a terabyte's worth of data on your drive, which is very important. This is the so-called stateless Ethereum. Many question marks on how we get there, so it might not be two or three years, and there are many ways to go about it: there are Verkle trees, maybe you can do it with Binius and binary trees, there are many, many ways to go about it. But the core thing here is that at the foundation, the clock is an Ethereum L1 stateless node, and in my mind I relate to Ethereum as the NTP, the clock protocol, of the new world. Now on top of that you have a bunch of layer twos running. So again, you could opt into running layer twos in this little app that runs on your laptop, on the top right of your screen. That app would run a couple of layer twos, and these L2s would, for example, facilitate the payments layer of all of this. The Ethereum part facilitates maybe big settlement, maybe the clock, but most of the execution definitely happens on L2. This is, as it has been all this time, the vision that I subscribed to working on from the beginning, and I still believe it heavily. Now the more interesting question is, okay, what do you do on top of that? Well, what I would really like to happen is that any time your machine has idle resources, say you have 5 or 10% of CPU time that you're not doing anything with, maybe you have a terabyte lying around, you have some memory, you have something lying around, this little Reth app on the top right of your computer says,
hey, do you want to earn 10 bucks for leasing part of your compute, or your drive, or your storage? And to get there, this has to work on top of a payments layer, on top of a decentralized network. Many people have tried to do things like this; way back when, there was Golem, there was iExec, so the whole idea is not new by any means. But I think path dependency is a real thing, and it really matters how you execute to get there. How do you get everyone to run nodes, to get to that world? That's why for us it's very important that Reth is used as a library and as a node, because then everybody will be familiar with the Reth APIs, and at some point there will be enough people running Reth on L1 and on L2 that you won't necessarily need the Mac that I'm having this call on to be running it; there are going to be both some servers and my local Mac running it, and they all form a decentralized compute network, or a decentralized cloud. So my ideal case would be something where any hardware that's idle can run any kind of computation. That might require using some kind of SNARK for the verifiability; it might require using some trusted execution environment; God knows what it will be. But what matters to me a lot is that the whole stack can be run, at a minimum, locally, and it doesn't need to be the highest-performance parts; it can be parts of the whole thing. So again, stateless Ethereum is useful because you can follow the chain without running everything. That's also why there are probably going to be many, many L2s that facilitate specialized operations, so not everyone will need to run every L2, and then on top of that you will have services that run parts of things. So for example, if you have a machine learning model,
you would have parts of the weights living on my machine and parts of the weights running on your machine, and maybe you can do some inference that goes between the two of us. Another case that I find really interesting is all of the common use cases that you see today on bridges, or on anything else like AVSs or Symbiotic networks; I think all of these things can basically be captured, or encapsulated, in this little Reth icon that would run on the top right of your screen. So I really like that as an abstraction: all you get is a little icon on the top right, and it basically pays off your machine with its idle resources while it's charging at night, and that bootstraps a decentralized cloud. But to bring it back down to reality, because we could ideate world-computer visions all day: I think it's going to be really important that you have Ethereum L1, a bunch of L2s, and maybe some off-chain services to facilitate things until we get there. And the L2s will be running the DeFi ecosystem, anything higher performance. There will probably be localization, so the DeFi stuff will run on a DeFi set of chains; things that are very payments-related might need to live on a higher-throughput chain that doesn't have strict total ordering; and if you're doing something with a game, maybe you need to do something different. So I definitely subscribe to the app-chain vision for all of that. I think people always want you to believe in tribes and false dichotomies; I think the answer is always: yes, everything. There will be superchains that host multiple L2s, there will be one gigantic chain that hosts a lot of applications, there will be other specialist things. It's really hard to say today what will work, just because, and don't kill me, it's kind of early, right? It's like 10 years in.
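The split-inference idea above, parts of a model's weights on my machine and parts on yours, can be sketched as a toy pipeline. Everything here is hypothetical and simplified: a real mesh would ship activations over a network between real nodes and use real model weights, whereas this sketch just splits a tiny two-layer linear model across two in-process "machines":

```python
# Toy sketch of split ("pipeline-parallel") inference across two machines.
# Hypothetical and simplified: each "machine" is just an object holding one
# layer of a tiny linear network, and the "network hop" is a function call.

def matvec(weights, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

class Machine:
    """One participant in the mesh, holding a single layer's weights."""
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights

    def forward(self, activation):
        # In a real deployment this call would arrive over the network.
        return matvec(self.weights, activation)

# Machine A holds layer 1, machine B holds layer 2; neither has the full model.
machine_a = Machine("A", [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 2 inputs -> 3
machine_b = Machine("B", [[1.0, 1.0, 1.0]])                      # 3 -> 1 output

def distributed_inference(x):
    hidden = machine_a.forward(x)      # runs on machine A
    return machine_b.forward(hidden)   # activation shipped to machine B

print(distributed_inference([2.0, 3.0]))  # -> [10.0]
```

The point of the sketch is only that no single participant needs the full model: in a real system `forward` would be an RPC to another node, and the activations, not the weights, cross the wire.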
You know, we always say that, but it's still day one in many ways, in the sense that if you look at rollup.wtf, which Conduit built, the whole ecosystem is not consistently doing 100 megagas per second, and 100 megagas per second is not a lot. It's literally not a lot. And one would say that only once you have much more demand will true specialization requirements arise; right now, any specialization requirements that arise, or any tribalism that you see, is kind of downstream of, I don't know, narrative, and downstream of, frankly, various fundraising dynamics behind the scenes. So I don't think we're at a point where we can make an educated call on what the case will be, like A or B, A not B. So yeah, my take has always been that it's going to be all of them, and you just cannot really predict; anyone that tries to go to one side or the other has some kind of personal incentive behind it. Yeah, I really like that vision. I mean, with a stateless node on your laptop, you also basically get your own direct access to these internet utilities, right? You can send payments to anyone, you can do all of these things, and then the actual compute is being run on this mesh of everyone else's devices. Yeah. I would recommend everyone in your audience read the great blog post by the Tailscale CEO called 'The New Internet'. It's an amazing read, showing how you literally have centralized intermediaries just because there was no better network topology at the time and the P2P network just didn't take off. Tailscale is a VPN, but people relate to a VPN as just software to do piracy; they don't relate to it as software that enables you to bootstrap, you know,
locally sovereign networks that don't rely on big intermediaries. So I think, for all of these decentralized-web, decentralized-cloud visions that got us engaged in crypto in the first place, the vision is still very much alive, and I think we might be at the point where we can actually execute on it. Ten years ago we had the vision but we didn't really have the technical chops; I think now we're starting to have the technical chops. And again, I don't want the narrative to run ahead, so, being very present: we're at sub-100 megagas per second, we have a lot of work to get to the gigagas, and no real off-chain services that are decentralized really exist yet. So you need to maintain a lot of sobriety while doing all of this, while retaining the original excitement that you have for this vision. Yeah. Why do you think it matters for us to have this truly decentralized mesh, this running-on-our-laptops vision of the world computer? To me there's something really aesthetically beautiful about that idea; it's a real kind of internet, in a way. When I reason about this I'm always going to some aesthetic: it feels more elegant in a way, and if individuals have this kind of agency over their interconnections, and access to information, and access to money and resources and all of this stuff, that is just a really powerful thing to build into human civilization at this point in our longer journey. But how do you relate to this? Because we don't need to do this, right? We could also just be more practical and more pragmatic, and use some bigger computers in the mix here. Totally, and I think many people will always tell you: hey, you want decentralization, you're
an idealist, you're not down to reality. Well, I'll tell you that a distributed network should actually be cheaper and faster than a network where you need to go and hit a central server. Why? Because the compute and all of the resources that you want to access can actually sit closer to you, and they don't need to come from a hyperscaler that has a very clear commercial incentive to eat you up on prices; they can come from the neighbor. That was also another common project from many years ago, what was it called, GridPlus. Again, all of these ideas exist. If you get the resource that you need from a local network that is closer to you, you lose less on latency, because of the speed of light of a packet going back and forth, or, in the electricity example, obviously you lose power the further away something is. So, first, I would say that you can actually be faster and cheaper if you're on the so-called edge of a network, and in enterprise SaaS there are whole buzzwords around this: edge computing, serverless edge, all of that stuff. I actually kind of like it. Cloudflare is doing a lot of that. Cloudflare started as a CDN, where the idea of a CDN is, okay, let me place a bunch of servers close to consumers so that they get lower latency; and they realized, oh my God, people don't just want to serve static content, they actually also want to do work, but if they're going to go to the Google server anyway, why don't I just place the compute where my CDN was? So it's the same principle if you think about it. To anyone saying this is too idealist, my answer is always: look at what Cloudflare is doing, or what the edge-compute guys are doing. Something will work in that neighborhood; I think it's too early to say what, but something will work, and that's what we're going for. And you know,
these are the practical, quote-unquote, business requirements: faster and cheaper. If you want faster and cheaper, and it lives closer to the edge and comes from idle resources that wouldn't be doing anything otherwise, I would think that you can achieve that. And then, of course, I think that privacy is super, super important. We're obviously in an era where global surveillance will just keep getting worse unless the defenders level up, and I think it's important to arm everyone with defensive technology like that. Hey, I have a small ask here: if you've been listening to these conversations and want to support what we're doing, I would really appreciate it if you left a rating and a review for the podcast. It might seem like a small thing, but it actually makes a big difference in helping other people discover the show. Also, thank you, and I'll see you again soon.