There are two programs I use for image editing. The first one's called Pixelmator; it's essentially Photoshop, but you're allowed to own it instead of renting it by the month. That's the desktop app, the one you choose when you want to create something new by hand, when you're still figuring out what the image should be. It's got a rich UI, it's fully featured, it's exploratory: you gradually pin down the image you want. The other tool I use is ImageMagick, the command-line tool. That's the one you use when you know what you want but you need to script it up, automate it, scale it out to run over a million images. Each of those tools is great at what it does and absolutely terrible at what the other one's good at, and it seems to me that's just an accident of history. There's no fundamental reason why the one-off, user-friendly image editing tool is completely different from the scriptable image editing tool. We could, in theory, have one unified toolset.

Well, my guests this week, Keavon and Dennis, would agree with that, but they've taken the idea much further than just a unified toolset. They've designed a language called Graphene, which is all about describing and chaining together image editing operations into a big graph, one that's developer-extensible in Rust. And then, on top of the language, they've built a user-facing desktop app called Graphite, which looks like Photoshop or Inkscape but is really just manipulating a tree of Graphene primitives, which you can use to make your image, or which you could then get your hands back on as a programmer to turn into a script. I'll give you an example: you take a user's image, you send it to your graphic designer, and they design it up and send you back a nicely designed avatar. The file they send you back isn't just one avatar; it's also a program that you could run across all the images for all the users in your system, and the graphic designer never knew they were writing a program. It's a very clever approach to unifying these two needlessly separate worlds, through the magic of creating a specialized language and then building tools up upon that language. So I think we should find out how it's all built. I'm your host, Kris Jenkins. This is Developer Voices, and today's voices are Dennis Kobert and Keavon Chambers.

I'm joined today by Keavon and Dennis. Gentlemen, how are you? Hi, I'm doing great. Yeah, me too. This is the joy of having two guests at once: there's instantly that race condition on who gets to speak first. Yeah, and it's more difficult with the latency involved. Oh yeah, well, we'll work around it; I'll try and do some stage management for this conversation and we'll see how we go. So you two are working on something fun. You're working on an image editor in Rust, and there's a lot we have to get into on that, especially the technical details, which is juicy. But I guess the first question has to be, and I'll direct this at you, Keavon: why does the world need another image editor?

Yeah, so basically everyone is always asking for one, but it seems that no one has a solution to offer. There's definitely a real community interest amongst many people, both hobbyists and actual professionals, who feel like either their existing editing software is being abusive to them, or they simply don't have a good free option: if they're using Linux, for example, it's not available, or whatever operating system they're using, maybe they're just a kid who's still learning. That was kind of my background growing up:
having to use an old copy of Photoshop, purchased with an employee discount, when I was growing up. But I can't get updates for that, and at this point it's pretty expensive to subscribe. At the same time, people who are using it are sort of limited by its old interface design approach, and by a number of limitations in what it's able to provide, or not able to provide, depending on your perspective, for the kinds of things you do with it. So basically it's really time to inject some fresh ideas into this landscape: to have a nice interface, but also something that's way more flexible by making it kind of a generative art tool, using programmatic concepts borrowed heavily from the 3D industry. There are a lot of node-based, generative, procedural, non-destructive editing approaches there, and that's a really useful set of ideas that needs to be brought into the realm of 2D editing.

That's something we've got to get into, because "generative" has become a loaded term in the past couple of years; it's developed an entire new definition with AI. But you mean generative in the sense of traditional algorithms that can be scripted together to generate content live, based on dynamic data, at different resolutions, powered by certain data feeds, anything like that. That's all within the realm of what we're ultimately making Graphite do, and that's because we have Rust powering everything and we have our own language built up on top of Rust. That's just a little teaser; we'll get into all of that. But I have to ask you before we do: the scope of an image editor... you look at something like Photoshop or Word or Excel and you think "I could do the simple version", but they're now so huge that you don't stand a chance of competing with the full thing, right? Does it not terrify you, trying to build an image editor?

It is truly massive in scope, just unfathomably massive. In our roadmap we have to actually spell out all the things we know about, and there are so many things we don't know about at the moment that have to be built as well. But what has been laid out is a decade-long roadmap that we'll keep crunching towards as we make progress. And we have been making progress, and so far it's been succeeding. It's still relatively early on, but it's also a useful tool at this point. So it will take a long time; it's very, very ambitious. I think Dennis can give the quote about ambition.

Yeah, so: if there's one thing that Graphite does not have, it is a lack of ambition. That's basically a quote from me... no, it's a quote from me... oh, sorry, yes, you're right, that's a quote from you. Basically, we end up looking at something and thinking, oh, that could fit, that's a good feature, that's useful, we'll just add it to the roadmap. And because Graphite has such a universal approach to building an application, a lot of things just fit, and we could extend it to so many realms and do so many useful things. Yeah, the scope is pretty huge. For example, someday the same platform that powers the rest of the Graphite editor for graphics could also be turned into something like a digital audio workstation,
more than a decade down the road. But it is such a generalized approach to making a graphics editor, or really any kind of editor, that it's more of a generative scripting tool with a creative editing environment built on top of it, and we can just keep extending that.

You're going to have to take me into that, because I've played with Graphite, and it reminded me a bit of Inkscape, that kind of SVG editor. There's definitely something deeper going on under the hood, but I didn't immediately see the universal editor that could one day be a digital audio workstation. So what's your architecture? Give me the high level.

All right, so Graphite is foremost an image editor and a vector editor. One of the principal goals was to unify the vector editing and raster editing experience; there shouldn't really be a reason why you need two different programs with two different UI concepts for working on images. That was one of the founding principles, that's what we want to unify. We also don't want to lock the user in; we want the user to be as flexible as possible. And one of the general ideas of Graphite is that when the user does something, we record that as the user's intent, but we don't destructively modify anything. So we have this idea of raster, vector, and non-destructiveness. The non-destructiveness lets us record whatever the user does and apply it non-destructively. I should give an example. In Photoshop, if you draw something, you have an array of pixels; you draw something, and the stroke modifies that array of pixels, and that's done. If the user draws another stroke, you modify the same array again. Those are destructive operations. What we do in Graphite is record the brushstroke the user drew, and then we can apply it at runtime to compute the final texture. So the idea of Graphite is that instead of applying everything the user does directly, we record what they do, and we'll get into how we record that: that's the node graph idea. We record the user intent, and then we can replay it, we can compute what the result would have looked like if we had applied the operation directly, we can do it after the fact, and we can modify things.

This is making me think: is that explicitly inspired by what's happening in the functional programming and event-driven systems world?

Yes, exactly. Essentially, every operation the user does is reflected in a node graph, and the node graph is a visual representation of your document within Graphite. So if we just draw a box, as you might in Inkscape, the box would just be a single node that produces a box shape, and we can think of this node as a function. It's a function that takes, say, a width and a height as input and returns a box as output. If you then, in user land, change the color of that same box, instead of making it a node that returns a red box, we first modify the node graph to have a node that creates a box, and the output of this node is then fed into a second node that takes a box as input and returns a red box as output. It's a color node: it modifies the data flow.
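To make that concrete, here's a minimal Rust sketch of the "every node is a function" idea. The types and names are invented for illustration; Graphite's real node API is organized differently (around traits and generics) rather than plain free functions like these.

```rust
// Illustrative only: a "box" generator node and a "fill" node, chained the
// way the node graph chains them. Changing the color later means editing the
// second node's parameter and re-evaluating; the box node is untouched.
struct Color { r: f32, g: f32, b: f32 }

struct Shape {
    width: f64,
    height: f64,
    fill: Option<Color>,
}

/// Generator node: no upstream data, just parameters in, a shape out.
fn box_node(width: f64, height: f64) -> Shape {
    Shape { width, height, fill: None }
}

/// Color node: shape in, the same shape with a fill out.
fn fill_node(shape: Shape, color: Color) -> Shape {
    Shape { fill: Some(color), ..shape }
}

fn main() {
    let red = Color { r: 1.0, g: 0.0, b: 0.0 };
    let layer = fill_node(box_node(100.0, 50.0), red);
    assert!(layer.fill.is_some());
}
```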
Oh yeah, so I've got a function from a shape input to a redder output, and then presumably I've got nodes like opacity and stroke. Yeah, a whole library of functions. Right, and every node is a function. If you modify the document, we provide tools for you, the same editor experience you'd have in other editors. So, for example, in Photoshop, if you draw strokes, they'd be rasterized as pixels. What we can do instead is record the strokes, have the strokes as input data, feed that through a function which rasterizes them, and then display the result. What that allows you to do is modify the strokes after the fact: if you didn't like that one squiggle you did, you can modify it, and the rasterized result is updated.

Okay. To what degree can I play around and compose that? If I build up a chain and then say, actually, I'm going to delete the box at the start of the chain and change it for a function that returns two overlapping circles, do I still get the same pipeline after that?

Yeah, definitely, because it really is just a toolkit of functions that transform the image, and our goal is to make a big catalog of useful graphics editing operations, as many as we possibly can, so people can really do whatever they want with it and use it as a toolbox.

So if you're making a toolbox, the first question has to be: to what degree is that exposed to the user? Is the user expected to just use these functions, or to write them as well?

We have different levels of abstraction for the different functions, the different nodes. Some of them are intended to be used inside other nodes, so they're sort of lower-level implementation parts of more abstract nodes, nodes that do more but have more complexity. The idea is that they're composed, just like one function in programming calls other sub-functions, which generally have smaller, more atomic units of complexity and abstraction. Some of the nodes in our catalog may be very technical; we have an unwrap node that literally unwraps an Option. Other ones are higher level, for example mirroring a shape, or combining shapes together, or doing complex raster operations like dehazing an image. Those may be written purely in Rust, or they may be composed out of sub-nodes that build up how they're implemented. The goal is to have lots of different levels of abstraction people can access, though we'll probably categorize them differently. And then, additionally, there's the tooling: the actual interactive tools that are very similar to Photoshop or Illustrator or Inkscape. Those tools have a predefined set of nodes that they operate on, and they don't even require working with the node graph; they provide the completely traditional editing experience that everyone is used to from those editors, operating on the nodes they're used to operating on, but that will of course be a subset of all the nodes.

Right. I mean, I've played around with this, and you draw a shape and you fill it in and it feels just like any other image editor, but then there's a button and you can flip over to seeing the node graph.

Right, yeah.
I'd say there are basically three levels of abstraction the user can choose from. The very basic one is that they just use the editor as they would any other editor, and the tools do the modifications in the background: we build the document graph for you, you don't have to do anything, you just use the tools. That's the first level of abstraction. Then you can click on the button to overlay the node graph and explore what the inner workings are, how this all works; that's the second level of abstraction. You can then also start modifying it, building new nodes, actually working in the editor to modify things. You can start extending the node graph by building your own custom nodes out of existing ones. For example, if you want to make a clever color-reassignment function that reassigns colors based on the rainbow, you could do that and make a node for it, and that's a node built out of other nodes, which can be abstracted and then shared, saved, and used in other images. We'll have a package manager, like an asset store; it's essentially the equivalent of crates.io or npm or any other package manager, but for transformations. That's the third level of abstraction. And then we have a fourth level of abstraction, the lowest one: we literally allow you to write Rust functions. Currently that's all defined at compile time, but in the future we'll have support for writing functions at runtime, in the editor, to modify the code.

Oh, okay. You should speak to the guy who's doing live reloading of Rust code. Maybe that's what we're building towards as well, yeah, live reloading, exactly.

Okay, I have to pick up on something you said: this unification of vectors and raster images. Because I'm thinking about the node graph. I get, in the image editors I use, that you can turn vectors into rasters, you can turn splines into pixels, but that's usually a one-way operation, you can't turn it back. So that makes me wonder: how many different kinds of node do you have, and what's the path between them? Because there are some routes you can't transform between.

Yeah, so we do have this idea that things are usually a one-way street, and a large part of the actual node design process is: how do we keep things as pure and as useful as possible for as long as possible? So if you're working in vector land, before you rasterize, we want to build the tools and the nodes to be capable of keeping your data in vector land. But beyond that, there are actually additional levels along that spectrum of purity. For example, a node might start out as a simple vector shape that's described as, well, let's say text. Text is vector-based, but text can also represent paragraphs and spacing and different sorts of layout information, so the text data type can become vector, but it actually has a higher level of purity before it becomes just plain old vector paths. And then at some point after that it becomes... not pure, the opposite of pure, dirty old raster. The raster, of course, is the base level, because that's what gets sent to your screen. But there's actually even one more level of abstraction, or purity, or whatever we want to call it, within the idea of raster, which is what "resolution-agnostic" or "adaptive resolution" refers to; we sort of have two names for it.
The general idea is that you can zoom in and it will re-render your content at that new resolution, or you export an image at a really high resolution and it will re-render the entire document at that higher resolution. Some raster data is more pure in the sense that it can be represented as something that gets rendered at the contextual resolution it's being viewed at, whereas other raster data is what we call a bitmap: it's literally just a width and a height of pixels, a simple image. In that case you can't re-render it at different resolutions, because it's just a finite amount of actual data, and in fact it has no position in space even; you'd have to place it into the document and give it positional and transformation information, but otherwise it just exists as a piece of data, just an image file.

So are we saying there are a lot of places in your node graph where it's just a function chain, presumably with caching, which we should talk about, but there must be some operations that say: I'm sorry, that thing you're about to do is going to turn that branch of the tree into a single node and you can't reverse it?

I'm not sure I understand the question exactly, because the graph is always representing your program.

It sounds like you're working very hard to ensure that it's always a chain of functions to a thing. But aren't there some operations that say, well, I've got to destructively turn this into a raster? Or have you managed to avoid that throughout?

You can get around that, and it is very useful to do; it's basically a cheat code, because turning things into raster is very easy and then you don't have to worry about anything. The other way around it is to pay a hefty performance price and just not do the rasterization up front; we do the rasterization live. In a sense, Graphite's architecture is built similar to that of a game engine. We have a game world: think of your document as a scene in a 3D game. For example, if you build something out of shapes, you can walk up to them and you get a higher-resolution render. If you think about your document as a 3D game world and your camera as what you're currently looking at, this is what we do in Graphite: we basically re-render the document every frame based on what you're currently looking at, and that's the adaptive resolution system. Take blurs, for example. You usually apply a blur once and then you can't modify what's beneath it; in Photoshop, if you blur something, you can't change the source data. What we can do in Graphite, because we compute the blur at runtime, is allow you to change the underlying data, and then we have to recompute the blur. So we don't do destructive operations; instead we do them at runtime and try our best to provide caching and make that as fast as possible.

Okay, yeah, I can see that appealing, because you always have the thing where, once you've committed to a certain blur level, two hours later you can't go back and change it. Yes, exactly.
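As a rough illustration of that non-destructive model, here's a small Rust sketch assuming a toy document that is just "source data plus a list of recorded operations": the visible result is recomputed on demand, so a blur parameter can be changed long after the fact. Everything here is a placeholder, not Graphite's actual data model.

```rust
// Hedged sketch: the document stores the source plus the recorded operations,
// and the result is replayed on demand instead of being baked into the pixels.
#[derive(Clone)]
struct Image { width: u32, height: u32, pixels: Vec<f32> }

enum Op {
    BrushStroke { points: Vec<(f32, f32)>, radius: f32 },
    Blur { sigma: f32 },
}

fn apply(op: &Op, img: Image) -> Image {
    match op {
        Op::BrushStroke { .. } => img, // rasterize the stroke into a copy (omitted)
        Op::Blur { .. } => img,        // convolve a copy with a Gaussian (omitted)
    }
}

/// The "document" is source + ops; rendering replays the ops in order.
fn render(source: &Image, ops: &[Op]) -> Image {
    ops.iter().fold(source.clone(), |img, op| apply(op, img))
}

fn main() {
    let source = Image { width: 4, height: 4, pixels: vec![0.0; 16] };
    let mut ops = vec![Op::Blur { sigma: 2.0 }];
    let _v1 = render(&source, &ops);
    // Two hours later: change the blur without ever having lost the source.
    ops[0] = Op::Blur { sigma: 5.0 };
    let _v2 = render(&source, &ops);
}
```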
Okay, so tell me about caching, because in order to make that performant you must be making some very difficult decisions about what to cache and what not to.

I'll preface this by saying it gets a lot more complicated as a result of the adaptive resolution system, so we'll get into that. But it's actually very different from other traditional node-based editors, because we have to treat it more like a program, an actual Rust program, and less like it's simply flowing one operation into the next. We actually have bidirectional flow of data. Go ahead, Dennis.

All right, so the concept we're using is also used in programming: it's called memoization, which is used to cache the output of functions. But first I can explain how our caching model works. The way we define a node graph is that every node is a pure function, so to speak, or at least it's idempotent: given the same inputs, it will always return the same outputs. That's a very useful property, and we do actually enforce it; if users write functions, they have to be idempotent. What that allows us to do is take the leaf nodes of a document graph and compute the hash of each node: we hash the inputs, we hash the node, and now we have a new hash, and this hash is like a function identifier. If you're familiar with theoretical computer science, it's sort of the Gödel number: we give each function a number, and this is a unique representation; given the same inputs, it will always return the same outputs. That's what we do for our entire document in Graphite. Graphs have to be directed and acyclic, so we don't have loops, which makes things a lot easier. What this allows us to do is a bottom-up, topological-sort traversal, and we can then compute the hashes of every node: given the inputs and the node itself, what is the output hash?

This is reminding me of Git.

Yeah, exactly, same thing. More generally I think it's called a Merkle tree. Given the inputs and the thing itself, we compute a hash, and this is what we use for caching, at least the first layer of caching. If you have the same node, you can assume it will always behave the same, so what we can do is just insert cache nodes into our node graph. The function of a cache node is: if you call it once and the cache is empty, it calls its inputs, computes the result, and stores the result in its own struct; and if it's called again, it just returns that result.

So, in a simple case, if I have two boxes of the same width and height, you'll only be computing that once?

Yes, exactly, we automatically de-duplicate nodes based on this hash system.

Do you prune that as you go along? What do you do about evicting things from the cache? Because that could get huge.

We currently prune a node if it has not been used in two subsequent executions of the node graph; then we remove it from the list of active nodes we keep around. And also, I think at the moment, just because we haven't built a bigger, smarter system yet, we also just throw away anything before the last memoized output. Is that correct? Yes, anything before the last evaluation.
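Here's a small Rust sketch of that hashing idea, purely illustrative: a node's stable identifier is a hash of its own definition plus the identifiers of its inputs, so identical subtrees collapse onto the same cache key, exactly like a Merkle tree. The structures are made up for the example and aren't Graphite's real data model.

```rust
// Illustrative Merkle-style node identity used as a memoization cache key.
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

struct Node {
    name: &'static str, // which function this node runs
    params: Vec<u64>,   // already-hashed parameter values
    inputs: Vec<usize>, // indices of upstream nodes (the graph is a DAG)
}

/// Bottom-up pass computing each node's stable ID, assuming the slice is
/// already in topological order (inputs come before the nodes that use them).
fn stable_ids(nodes: &[Node]) -> Vec<u64> {
    let mut ids = vec![0u64; nodes.len()];
    for (i, node) in nodes.iter().enumerate() {
        let mut h = DefaultHasher::new();
        node.name.hash(&mut h);
        node.params.hash(&mut h);
        for &input in &node.inputs {
            ids[input].hash(&mut h);
        }
        ids[i] = h.finish();
    }
    ids
}

fn main() {
    let graph = vec![
        Node { name: "box", params: vec![100, 50], inputs: vec![] },
        Node { name: "fill_red", params: vec![], inputs: vec![0] },
    ];
    let ids = stable_ids(&graph);
    // A memo cache keyed by these IDs returns the previous output unchanged
    // as long as neither the node nor anything upstream of it has changed.
    let mut cache: HashMap<u64, Vec<u8>> = HashMap::new();
    cache.entry(ids[1]).or_insert_with(|| vec![]);
}
```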
And this was easy as long as we don't think about any inputs. But you could argue that if you do the game-engine thing and render the same object at a different resolution, the nodes will be the same. The thing is, at that point the entire Graphite document graph is basically a function: given these viewport parameters, this zoom level, this viewport translation and panning, what are the resulting pixels that I throw onto the screen? And for this function, if you zoom in and out, the node IDs won't change, because nothing in the document graph has changed; but we need to be smart about caching and take the input arguments into account. Currently we just recompute if the input arguments have changed. So if you change the width or height of the rectangle, for example, those input arguments change, because the hash changes; but if you simply move the camera around, it does not.

That's equivalent to generating a game world and then walking around inside that world: if the character is walking around and the camera's moving, we're not regenerating the world. We don't have to recompile the program that generates the content, because we're statically describing a scene that isn't changing. As you walk around, or move the camera around, or in this case move the editor's navigation of the viewport around, that's not changing the scene; the scene is guaranteed to look the same, we just render it at a different resolution or at a different location. The scene is static after we have compiled the graph.

That makes sense. Is this why you're saying changing resolution makes things more complicated, and why you have to use this Merkle tree approach?

Right, the resolution itself is an input to the node graph; it's a different kind of input, and it does not invalidate the caching. So there are sort of two types of inputs. There are the inputs that pertain to the current way of viewing the static scene, and those don't change the way the caching works; they don't involve the Merkle tree or anything like that. But then, if you're actually changing the width of a rectangle or something, you're modifying the scene. Now your actual scene, your actual game world, has changed, and in that case things have to be recomputed: if you blur the rectangle afterwards, you have to go recompute the blur. In that case the scene is no longer static; it changed, whether or not you're viewing it from a different location. So those are the two different types of inputs: one describes the scene, where you're actually updating the world, and one of them is just viewing it.

Okay, yeah, I think I'm with you.
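A tiny Rust sketch of that split, with invented names: the compiled document behaves like one function of a "footprint" (where you're looking and at what resolution), and calling it with a different footprint doesn't change the scene or invalidate the scene-level caches.

```rust
// Illustrative only: the viewport/"footprint" is just an argument to the
// compiled document function, not part of the scene description.
#[derive(Clone, Copy)]
struct Footprint {
    /// Viewport translation/zoom as a 2D affine transform.
    transform: [f64; 6],
    /// Pixels per document unit at the current zoom level.
    resolution: f64,
}

struct RasterFrame { pixels: Vec<u8>, width: u32, height: u32 }

/// The whole compiled document behaves like one function of the footprint.
type DocumentFn = Box<dyn Fn(Footprint) -> RasterFrame>;

fn render_frame(document: &DocumentFn, footprint: Footprint) -> RasterFrame {
    // Zooming in calls the same function with a higher resolution; only the
    // parts of the graph whose output depends on the footprint need to re-run.
    document(footprint)
}

fn main() {
    let document: DocumentFn = Box::new(|fp: Footprint| RasterFrame {
        pixels: vec![0; (fp.resolution as usize).max(1)],
        width: 1,
        height: 1,
    });
    let zoomed_in = Footprint { transform: [1.0, 0.0, 0.0, 1.0, 0.0, 0.0], resolution: 4.0 };
    let _frame = render_frame(&document, zoomed_in);
}
```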
So I want to dive in, because there are two parts to this. Beneath all of this there is a graphics description language of some kind?

Exactly, yeah: that is Graphene.

Right. Explain Graphene to me. Why a separate language? What does it do, what does it look like?

So Graphene is essentially just a functional programming language, and every node that you draw in the node graph maps to a Rust function, at least every atomic node. We do have the concept of grouping nodes together into higher-level nodes, like functions: you can compose functions and write new functions that contain other functions. But conceptually, if you double-click into a node, you see the subnetwork the node is made of, and for some nodes you can't go any further because it's just a Rust function. And what we do is take this description of the user's node graph, which is the program the user wants to run, and we compile and assemble it into Rust functions that are linked at runtime. So we pass the input to all our computations; we can also do optimizations if we want. We currently don't have them yet, because I want to build a more general graph rewriting system first, but we could modify the graph, and then we assemble the output, very much like a general compiler. And instead of assembling to bytecode, because we're mostly targeting WebAssembly right now, execution in browsers, what we do instead is link Rust functions at runtime: we take the compiled output, where every atomic node is a Rust function, and we link them together into one single big function which we can then call at runtime. That's our current execution model.

Okay, yeah, I can see that: if you think of a picture as being made up of nodes and transformations, it becomes quite a lot like a compiler.

Yeah, and in that sense Graphene is the programming language, while Graphite, the editor, is more like your IDE; we sort of have a visual IDE instead of a code IDE. And then we have the node graph, because the node graph is the visual description of your code. That code can also be modified in the editor, but ultimately it is a Graphene program, and Graphene is very much its own language, but it's also kind of a language built out of Rust.

In what sense is it its own language? Because what you've described to me so far sounds like a compiler of Rust functions to a specific target.

Well, it's a compiler of our input graph: it's a graph, not Rust functions yet. We also do a couple of things. One of them is that in Rust you can't have functions with variadic arguments; you have to specify how many arguments a function has, and in Graphite we want nodes that have multiple arguments, for example a width and a height, so you've got to have both descriptions, not just a single one. So we take this high-level, user-facing description of what the node graph looks like and transform that into an actual execution model: we do the topological sorting, we flatten the input graph, we resolve inputs, we do type checking (that's another thing our compiler does), and then we can dynamically link the functions together.

When you say Graphene is a language, is there a textual description, or is it entirely constructed in memory by the editor?

We do have the .graphite file format, and that file format contains a description of the Graphene language, because it's the embedded document description. It's mostly machine-readable: it's a textual description of the visual node graph, so you would usually use the visual node graph. And the use case here, which is something we'll be building soon, is copying a group of nodes: you can copy them and paste them into Stack Overflow or something, and someone else could copy that text and paste it back into their editor, and that would allow you to give a group of nodes to somebody else. But in the end, that's just the serialization of the visual representation of the graph.
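As a hedged sketch of the compilation steps just described (flatten, topologically sort, type-check, then link each node to a concrete Rust function), here's a toy version. Graphene's real compiler is considerably more involved; every name below is invented for the example.

```rust
// Toy "compile a node graph" pipeline: type-check the connections, then link
// each node to a function from a table and fold the graph into one callable.
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq)]
enum Ty { Vector, Raster, Number }

struct ProtoNode {
    name: &'static str,
    inputs: Vec<usize>, // upstream node indices (already flattened + sorted)
    input_ty: Vec<Ty>,
    output_ty: Ty,
}

type NodeFn = fn(Vec<Value>) -> Value;

#[derive(Clone, Debug)]
enum Value { Vector(Vec<(f64, f64)>), Raster(Vec<u8>), Number(f64) }

fn type_check(graph: &[ProtoNode]) -> Result<(), String> {
    for node in graph {
        for (slot, &input) in node.inputs.iter().enumerate() {
            if graph[input].output_ty != node.input_ty[slot] {
                return Err(format!("type mismatch feeding into `{}`", node.name));
            }
        }
    }
    Ok(())
}

/// "Linking": look each node's implementation up in a function table and
/// produce one closure that evaluates the whole graph in topological order.
fn link(graph: Vec<ProtoNode>, table: &HashMap<&'static str, NodeFn>) -> impl Fn() -> Value {
    let fns: Vec<(NodeFn, Vec<usize>)> =
        graph.iter().map(|n| (table[n.name], n.inputs.clone())).collect();
    move || {
        let mut outputs: Vec<Value> = Vec::new();
        for (f, inputs) in &fns {
            let args: Vec<Value> = inputs.iter().map(|&i| outputs[i].clone()).collect();
            outputs.push((*f)(args));
        }
        outputs.pop().expect("graph must not be empty")
    }
}

fn main() {
    let mut table: HashMap<&'static str, NodeFn> = HashMap::new();
    table.insert("fifty", |_| Value::Number(50.0));
    table.insert("double", |args| match &args[0] {
        Value::Number(n) => Value::Number(n * 2.0),
        _ => unreachable!(),
    });
    let graph = vec![
        ProtoNode { name: "fifty", inputs: vec![], input_ty: vec![], output_ty: Ty::Number },
        ProtoNode { name: "double", inputs: vec![0], input_ty: vec![Ty::Number], output_ty: Ty::Number },
    ];
    assert!(type_check(&graph).is_ok());
    let program = link(graph, &table);
    println!("{:?}", program()); // Number(100.0)
}
```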
Okay, I think we should take one step to the side and quickly ask why you chose Rust for this, particularly. Sounds like maybe a question for you, Dennis.

Well, I didn't make the call; I joined because the project was written in Rust, or was aiming to be written in Rust. That's why I joined the project.

So that's one half of your reason: the ancillary benefit is that you get smart people like Dennis.

Oh yeah. But the original rationale was, I think, largely about the tech stack and the ecosystem. We have wgpu, and that is a very core part of meeting our design requirements: having it support both the web and all the native platforms, with wgpu as an abstraction layer that takes your graphics API calls and puts those out, if you're on a Mac, to Metal; if you're on Windows, to DirectX; if you're on Android or Linux or a Chromebook, to Vulkan; and if you're on the web, to the WebGPU API that's provided by JavaScript. Although we're still waiting a little bit for all the browsers to actually roll out support for that so we can properly deploy it; we're sort of in limbo at the moment with WebGPU not quite being fully deployed, so we're not actually using it by default, and we're using some workarounds until it is, but hopefully within the next half a year it should roll out in all the other browsers. But back to Rust: wgpu is the crate that provides the implementation of the WebGPU API for native platforms, and it abstracts over both WebGPU for browsers and all the other platforms. So that's a huge part of it.

The other aspect is that we've always wanted to have a desktop client and a web client, and you can either use C++ with Emscripten or you can use Rust with wasm-bindgen and the Wasm backend to produce a Wasm binary that can be loaded into a browser and run by people such as myself ten years ago, when I was in school and had to download a remote desktop program to remote into my own personal computer at home just to be able to use Blender or Photoshop or whatever professional graphics tools I could not use on the library computers or the computing lab computers at school. So that has very much been a goal: having a tool I can use in a browser, and also a tool I can use with native performance on desktop. And really, there's C++ with Emscripten or there's Rust; those are kind of the only tools in town for heavy-duty WebAssembly development.

And I can probably give you a bit more on why it was actually a good idea, beyond why we initially chose it. Some of the benefits of Rust are, first of all, that we don't really have to deal with undefined behavior and hunting down memory bugs; that is a huge relief. We did at one point have undefined behavior, and I spent about a week frantically trying to remove every last bit of unsafe code in Graphite, and it was really annoying and really stressful, and in the end I found out that the SIMD implementation in the Chrome browser was just faulty and invoked undefined behavior. It's never the compiler and never the browser's fault... unless it is.

Well, yeah, and you can apply to Google for a little medal that says you found a genuine browser bug.
I had so many Rust compiler issues and compiler panics... you know you've reached a good place when you cause the Rust panic handler itself to panic and get a compounded fault. Fun times. But using Rust is great, because it gives you both fine-grained control, which we need, because we need to squeeze out the last little bit of performance and have fine-grained control over what assembly is generated, and also powerful abstractions that allow the application to scale, and that's just a very unique combination, and very fun. I'm also very much a high-level programmer; I'm coming from a JavaScript background, I like thinking at a very high level and not thinking about what undefined behavior I just invoked by accident. But Rust actually provides the best of both worlds: it allows me to be really happy doing high-level programming, while we get C++-level performance and C++-level granularity. It really is just kind of a C++ but with 25 years of hindsight; that is what Rust is.

Okay, yeah, I can totally buy that description of it. It does make me think: if I were naively trying to do this, there are two problems I would instantly expect trying to get a program like this, in Rust, working on the web. And the first is, you said you're dynamically linking Rust functions at runtime; that sounds like it's going to break in a browser.

Well, the term dynamic linking is a bit overloaded. We're not using dynamic libraries; that's not supported yet in Wasm. We're waiting for that, because at that point we can actually inject new code at runtime that has not been compiled into the application.

You could download some blur functions off the internet.

Yeah. What we do instead: if you think about how the assembly is generated, there's an assembly call instruction which calls a function pointer; you provide it with a function pointer and it goes to that function and executes that code. What we do is essentially exchange that function pointer. Think of it like a big array of functions, a global function table: each node knows it's going to call some function, but when we compile the node it doesn't know which function it's going to call, and then what we do in the compiler is fill in those holes. We say: node "red", you call the box function. And like this we can chain together the nodes, so the box is colored red.

Yeah, okay, that makes sense.
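Here's an illustrative sketch of that "fill in the holes" linking, with made-up names: each compiled node ends up holding a plain function pointer chosen by the compiler, and evaluation is demand-driven from the output node, which is also the bidirectional flow described next (off-screen content simply returns nothing).

```rust
// Illustrative only: the compiler decides which function pointer goes in
// which slot; the output node is called first and pulls data from upstream.
#[derive(Clone, Debug, Default)]
struct VectorPath { points: Vec<(f64, f64)> }

struct Viewport { visible: bool }

// Every node implementation has the same shape: given the viewport (demand)
// and a pointer to its upstream node, produce data.
type LinkedFn = fn(&Viewport, Option<&LinkedNode>) -> VectorPath;

struct LinkedNode {
    call: LinkedFn,
    upstream: Option<Box<LinkedNode>>,
}

impl LinkedNode {
    fn eval(&self, viewport: &Viewport) -> VectorPath {
        (self.call)(viewport, self.upstream.as_deref())
    }
}

fn box_generator(viewport: &Viewport, _upstream: Option<&LinkedNode>) -> VectorPath {
    if !viewport.visible {
        return VectorPath::default(); // off-screen: produce nothing (culled)
    }
    VectorPath { points: vec![(0.0, 0.0), (100.0, 0.0), (100.0, 50.0), (0.0, 50.0)] }
}

fn fill_red(viewport: &Viewport, upstream: Option<&LinkedNode>) -> VectorPath {
    // Ask upstream for the shape first, then "color" it (coloring elided here).
    upstream.map(|n| n.eval(viewport)).unwrap_or_default()
}

fn main() {
    // The "linking" step: function pointers slotted into each node.
    let program = LinkedNode {
        call: fill_red,
        upstream: Some(Box::new(LinkedNode { call: box_generator, upstream: None })),
    };
    let on_screen = program.eval(&Viewport { visible: true });
    let off_screen = program.eval(&Viewport { visible: false });
    assert_eq!(on_screen.points.len(), 4);
    assert!(off_screen.points.is_empty());
}
```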
Let me take a moment to explain, because you said the red color calls the box, and I don't think we've properly explained this so far. Conventional graphs would take the box, then color it red, then output it. But in our case, because we're starting out from the fully compiled program that's been linked together, we have just a single blob that can be called with an argument, and that argument is going to be the resolution you're viewing it at and the location you're viewing it at. That calls, let's say, the color-it-red function, and that function says: okay, I need a shape from somewhere, but I don't know what the shape is yet, so I'm going to call the box generator function. That takes the viewport information, the bounds you're rendering at, and the box generator function says, oh, you're on screen, or oh, you're off screen. If you're on screen, it produces the coordinates, the actual vector path, and gives it back to the coloring-red function, and then the coloring-red function has the data it needs to color it red and return that back. So we're going in both directions: we start from the final program output and call it with the viewport information, and then if something's on screen we render it, and if something's not on screen we actually return no data, because it gets culled. It returns no data, so the color-red function has nothing to return either, so it also returns empty data, and the result is that nothing is rendered, because there was nothing on screen.

Right. This makes me wonder: is it possible, or is there a future where it's possible, for me to say I've designed a graphics pipeline in Graphene... what's a good example... I'd like a program that takes a photo, a JPEG I give it at random, and I call it on the command line and it puts a happy little frame around the edge and makes it grayscale. Could I use that as my graphics processing language rather than just a UI tool?

You hit the nail on the head there. You could compile it out as a standalone CLI program and invoke it as you would otherwise use something like ImageMagick for a similar effect.

Yeah, I've used ImageMagick for exactly this.

You can design it in the editor, a nice WYSIWYG, what-you-see-is-what-you-get editor, do everything you want, expose your inputs. You could also have text as an input and have, say, a birthday card generator that takes the name of the person, their age, their photo, their hometown or something, and creates a nice personalized thing that you can then compile to a standalone program and invoke as a CLI program, or as a process that runs as a web server and responds to webhooks or API requests, like HTTP GET requests. There are a number of ways you could compile this out. This is obviously still the future we're talking about, but that is very much exactly what we have planned.

And I did that, actually, for my bachelor's thesis. I did some performance optimization with GPU programming, and one of the things I did was write a small Graphene CLI which allows you to input a Graphite file, a textual description of the node graph, provide it with input, and it generates the output for you. So we do have experimental functionality for that implemented, but that is definitely something you will be able to do, and a use case we want to support.
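To picture what that might look like once it's built out, here's a hypothetical Rust sketch of such a command-line wrapper. None of these names come from the real experimental CLI; `run_document` is just a stand-in for "compile the document and evaluate it with this input".

```rust
// Hypothetical batch/CLI wrapper around a Graphite document, for illustration.
use std::env;
use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Usage (made-up program name): frame-and-grayscale <doc.graphite> <in.jpg> <out.png>
    let args: Vec<String> = env::args().collect();
    let (doc_path, input, output) = (&args[1], &args[2], &args[3]);

    let document = fs::read_to_string(doc_path)?; // textual node-graph description
    let photo = fs::read(input)?;                 // the user-supplied image

    // Stand-in for "compile the graph once, then call it like a function whose
    // exposed parameter is the input image".
    let rendered: Vec<u8> = run_document(&document, &photo)?;
    fs::write(output, rendered)?;
    Ok(())
}

fn run_document(_doc: &str, photo: &[u8]) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
    Ok(photo.to_vec()) // the real evaluation would happen here
}
```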
How far will that go? Because I was thinking about the games world and how much of their work is asset pipelines. Could I define a scene that said: okay, if I change my Blender model, please re-render a hero shot for the main character; I'll give you a Blender model, and these are the rules to make it look like a pretty thumbnail?

I think that example, well, not exactly that example, but that was one of the things that inspired the quote that the one thing Graphite does not have is a lack of ambition. Because we did think about it: if I have a 3D model, couldn't I just plug the 3D model into the node graph, and then have a node that renders the 3D model to a picture and applies some other things to it? And our answer was: yeah, we could do that, that's perfectly feasible. You could just use the Blender model as an input to the node graph, set the camera settings, and render. We would of course use a pre-existing renderer; we would need to use something like the Blender renderer and wrap that in Graphite, or in Graphene, as a node. But you could use that as one process, and if you changed the 3D model you could just regenerate the hero shot and apply post-processing, and so on.

And you could even do a couple of weird things with this. One is that you could still use the adaptive resolution system, where as you zoom into your 2D viewport inside Graphite, inside the editor, that changes the camera render parameters for the 3D scene, and you end up rendering the 3D scene with as much resolution as you need. You never have to pick an arbitrary resolution to render at; you just keep zooming in and it'll keep rendering the 3D scene at that viewing resolution. The other weird thing you can do, because 3D scenes are composed of triangles, is render not to raster content but actually render every single triangle as its own vector triangle layer, and then start modifying the 2D projection of that scene with your vector effects.

Okay, yeah. That would be a horrifyingly large SVG file.

But there could be use cases where that's actually useful. Like if you wanted to create, let's say, a torus, and you want to take that torus and get sort of a 2.5D effect, you can then start modifying that torus in vector land. It's perhaps an easier way to render something out of 3D into 2D but keep it as a vector. Maybe you don't have every single triangle be separate, but the view of all the triangles in the entire model becomes a single vector path, where it's not a bunch of triangles, it's just one single shape composed of all the triangles, but you can still take that into vector land, and that's perhaps a useful way of creating an outline or a silhouette of something that would be harder to draw by hand. Or you could rotate it every frame, because we'll have animation support, going with that ambition part: animation where it rotates that object but keeps converting it to vector, and then you can do subsequent vector effects.

Yeah. Presumably there is a future where I could hand-write my own Rust functions and say, you know what, I'm going to have my source image data be a REST query or a SQL query.

Yeah, yeah.

Okay, this is fun, because I have done stuff like this in shell scripts with ImageMagick, and it's possible, but it's not fun.

Yeah. And one of the unique things about Graphite, well, one of the main considerations I had while designing the Graphene language, is that I wanted to give the user as much power as possible; we never want to limit users in what they can do. That's also one of the reasons why we don't have a traditional runtime. Usually you would have a runtime: you execute one node, then you manually pass the data around to the second node, and then you run the second node. We don't do that; we don't have a runtime, it's all linked together as a function, and users could theoretically write their own runtime, they could modify how nodes are executed. And basically all of the features we've added to Graphene, to our language, have been features that are implemented as nodes.
So that's one of the fundamental principles: we want to have a simple language and build the language features out of the language itself. And this paradigm is very powerful. For example, think about batch processing in Graphite: we will never have to implement batch processing. Batch processing is basically just: we have a folder of input images and we want to apply this operation to those input images. For example, if you're a photographer and have a speck of dust on your sensor, you would apply the magic eraser tool at this location. In Photoshop you have this batch-processing input file, you apply the operation, get the output, and save the output. What you can do in Graphite is take a sample image, apply the operation, and that's then a program: a program from input image to output image, with the magic eraser applied. And what you can then do is take a step back and think about the entire thing you did, all the editing operations, as a program, and then wrap that in a second program which has a node that reads all the files in a folder, applies your program, your edits, as a function to those images, and then saves the output again. So something like batch processing is just an emergent behavior of our node specification.

Yes, I can start to see now: the big ambition here is to make image editing a language first, with all the power that a programming language brings to solving problems.

Exactly.

I like that.

And not just file export: you could also export to a database, because you just mentioned SQL queries, and the same for a spreadsheet. Let's say you're designing a trading card game, and you have different images, different text for the actual title of the card, different stats and abilities, different levels, flavor text, all the different information. You could build your entire catalog of cards in a spreadsheet, an Excel spreadsheet or a CSV file, and take that, run it through this batch processor, and output PDF files of every single card in your entire game, or put them into a grid that can be sent to the printer, one card per page, or one card per grid cell within a page, as the specification of the printer requires.

Finally, for next year's Rust Nation we have a practical way of making individual Pokémon-style conference passes.

That would also be a really good use case, yeah.
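A short sketch of that wrapping idea, assuming the edit you designed interactively has been captured as an ordinary function from image bytes to image bytes; the batch step is then nothing more than mapping that function over a folder. Names and paths are illustrative.

```rust
// Illustrative only: "batch processing" as a plain wrapper around the program
// you built interactively, rather than a special editor feature.
use std::fs;
use std::path::Path;

// The edit you designed (e.g. "erase the dust spot"), captured as a function.
fn edit_one(photo: &[u8]) -> Vec<u8> {
    photo.to_vec() // the actual magic-eraser graph would run here
}

// The "batch" wrapper: read every file in a folder, apply the program, save.
fn edit_folder(input_dir: &Path, output_dir: &Path) -> std::io::Result<()> {
    fs::create_dir_all(output_dir)?;
    for entry in fs::read_dir(input_dir)? {
        let path = entry?.path();
        if path.is_file() {
            let edited = edit_one(&fs::read(&path)?);
            fs::write(output_dir.join(path.file_name().unwrap()), edited)?;
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    edit_folder(Path::new("shoot-2024"), Path::new("shoot-2024-cleaned"))
}
```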
Okay, so I'm convinced about the language angle. Let me step back up to the other hard Rust thing that occurred to me, which we have to get to to complete the picture. I can see you write this language, you compile it to Wasm, all hunky-dory, but then you've got to worry about building a UI on top of this, and that seems like something that isn't fully fleshed out in the Rust world and takes you into JavaScript land in the Wasm world. How have you handled that?

That's me; I'm the web developer, and I'm also the UI designer, so it's pretty iterative; it doesn't have to go back and forth between people in different departments, since I'm the one designing the UI and also implementing it. It helps that I have a web development background.

But is the code a mixture of Rust and JavaScript, or is it all Rust?

So 90% of the code is Rust, but then that other 10% is the web code, which I pretty much entirely wrote myself. The way that works is we have some TypeScript, which of course compiles into JavaScript, and we have Svelte. Svelte sort of takes the Rust model of doing as much of the heavy lifting as possible at compile time: wherever possible, do compile-time work instead of runtime work, to make it faster to ultimately run on the user's computer. Svelte transforms a combination of HTML and JavaScript, so you have a single Svelte file that defines a component, like a number input or a text field or a slider or a drop-down menu, and I've built 30 or 40 of these components for all the different widgets, to implement our UI design system. Each of these is just a component, and Svelte's job is to transform the HTML combined with the CSS, with some templating built in, so you can update different pieces of data live; the numbers you're seeing in the number input, for example, change live. But we try to keep this as lightweight and limited as possible, so the actual Svelte files are very small; the average size of one of these files might be 50 lines or something, some of them a little bigger. And they subscribe to messages passed from the Rust world. For example, we have a layout handler, and the layout handler will take a message received from the Rust world that says: apply this diff of component changes. So if we have a number input, it will say, okay, replace this number widget with a different number widget that has a different number, and then it displays that. Svelte receives the request to change it and goes and actually does the update to the web DOM; the DOM is the tree of elements that are the actual HTML living in your browser and rendered by your browser. So it keeps it pretty lightweight, because it's just passing a message saying "replace this widget", the widget gets replaced, and the Svelte code has compiled, at compile time, the exact API calls to swap out the text in that field, for example.

Okay, so it's driven by events from Rust land.

Yeah, and that's actually very useful, because we have a well-defined interface for communicating from Rust to JavaScript. For those familiar with Rust, we actually use nested enums, so we have our frontend messages; message delivery in general is also an interesting topic, but that's a bit of an aside. We have a well-defined interface to communicate from Rust to the front end, and we try to keep that as frontend-agnostic as possible, so the idea is that in the future, when we have a native front end, we can use the same interface to communicate with the native front end as we do for the current web front end. And that's a nice layer of abstraction, a very clearly defined interface.
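For a flavor of that message-based interface, here's a sketch using nested enums as described; the variants and names are invented for the example and are not Graphite's actual message set.

```rust
// Illustrative only: a backend-to-frontend message interface modelled as
// nested enums, so any frontend (web or native) just has to apply messages.
enum FrontendMessage {
    UpdateLayout(LayoutMessage),
    UpdateViewport { svg: String },
    DisplayDialog { title: String, text: String },
}

enum LayoutMessage {
    /// Replace one widget in the properties panel with a new definition.
    ReplaceWidget { widget_id: u64, widget: Widget },
}

enum Widget {
    NumberInput { value: f64, label: String },
    TextInput { value: String },
}

/// Whatever frontend is attached only has to know how to apply these messages.
trait Frontend {
    fn dispatch(&mut self, message: FrontendMessage);
}

struct LoggingFrontend;
impl Frontend for LoggingFrontend {
    fn dispatch(&mut self, message: FrontendMessage) {
        if let FrontendMessage::UpdateLayout(LayoutMessage::ReplaceWidget { widget_id, .. }) = message {
            println!("replace widget {widget_id}");
        }
    }
}

fn main() {
    let mut ui = LoggingFrontend;
    ui.dispatch(FrontendMessage::UpdateLayout(LayoutMessage::ReplaceWidget {
        widget_id: 7,
        widget: Widget::NumberInput { value: 42.0, label: "Blur radius".into() },
    }));
}
```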
Presumably that's two-way as well: you've got a series of messages the front end can send back?

The way that works is when you click on something or you have a keyboard input. Mouse or keyboard inputs create events in JavaScript; that's just how the web model works in general, anything you do with a mouse and keyboard produces JavaScript events, and we simply have the components, or a global system, forward all the inputs from the user into the back end. So if you click on an actual button, the button is going to receive that click and it's going to say, oh, I'm supposed to zoom in, it's the zoom button. You click the zoom button and as soon as it's clicked, its callback, its handler, sends a message into the Rust world, and that's how we invoke the Rust side of the architecture. The call stack, your actual call stack if you're looking in the debugger for example, begins with a click event in JavaScript, then you run a couple of functions, and those functions pretty much immediately call into the Rust world, and then you've got the WebAssembly running. The actual VM that runs WebAssembly code does all its stuff, circles around through our messages in the backend, does all the rendering, does any kind of changes that are needed, and then ultimately produces changes that have to be sent back to update the data in the front end. But again, we keep this very lightweight on the front end. We've seen a number of people who have just tested out Graphite, commenting on public forums that we would normally not even be reading because we're not part of that discussion, and I've read people organically, in the wild, saying how they feel this doesn't have the unresponsiveness that a usual web app does, because of our careful attention to the architecture to make sure we have as much responsiveness as possible, to feel more like a native app, as people have been describing it, compared to feeling like a web app.

Yeah. But ultimately, even though it's all running in the browser, it's kind of a thin-client, thick-server architecture.

Exactly.

Okay, yeah, I can see how that would leave you mostly in Rust land, where you seem to be very happy. So I'm thinking about getting my hands on this, and my first question is actually the more technical one: is this open source? Can I start hacking around with the code and maybe writing my own functions yet?

Absolutely.

Okay, tell me how I would do that.

I guess, as the project manager, I can talk about the contribution parts. It's all open on GitHub. Additionally, our website has a reasonably extensive developer documentation section: if you click on the volunteer button at the top and then go to the contributor guide, it tells you how to actually install the program and set up your development environment. We're using pretty much the common ecosystem tools, so obviously you need to have rustup installed, the Rust compiler and the toolchain that comes with it. There are a couple of other programs that are used to combine the web architecture with the backend architecture, so it's watching for changes in both the Rust code and the JavaScript code, and if you change either of them it will automatically reload the app that you have running on localhost, and you just open it in your browser. The nice thing is that it's very quick to open in the browser; it takes half a second or a quarter of a second or something. So every time you make changes... well, we still have to wait for the Rust code to compile, which unfortunately takes a little while.
I wish we could find solutions for that; if anyone listening to this knows good tips and tricks for breaking up the codebase into smaller parts that can be compiled independently, that would be very helpful for speeding up our development workflow. But anyway, we've also got a number of issues that are easy beginner tasks, so people can go and either fix bugs or update the functionality for how a tool behaves in a certain situation, a certain edge case, or given certain user input. And then we've got bigger features, and especially the language parts; that's more Dennis's land, and I guess he can talk about that in a second. But all the language parts, so Graphene, and also, we didn't touch too much on GPU compilation, but that's a big part of Graphene too: getting more of that graphics programming into the Graphene language. He talked a little earlier about how we try to make nothing special in the way the compiler does things, avoiding special-case handling in the compiler; we want to make as much of this as possible built into the language itself. So actually using the GPU, and compiling your programs into something the GPU can run as a compute shader, all of that is built on the userland side of the Graphene language rather than being something fundamental to the runtime, because we have as little of a runtime as possible. All of that is part of Graphene, but it's the userland side of Graphene as opposed to the compiler back end.

So if someone were a Rust expert and knew their way around shaders, they would be a useful contributor?

They would be amazingly useful, because we've only got Dennis working on Graphene, and that seems to be the thing we most commonly run into blockages with in terms of meeting our roadmap goals. For example, we want to move on from vector editing to raster editing this year; that's our main goal, to actually start working on raster editing and become more of a Photoshop alternative instead of just an Inkscape or Illustrator alternative, and that means unblocking the GPU-related rendering tasks, which is all Graphene land. So, Dennis?

Yeah, and we are sort of blocked on some ecosystem changes there. But first of all, you asked about writing your own node: that's actually fairly simple. Last year we redid our entire system for how we define nodes, and now it's basically just: you write a function. You go into a Rust file, write a function, and we use some proc macro magic, so it feels like just writing a function, but then magically it appears in the Graphite UI, and we even auto-generate a settings menu for you, a properties panel, based on the input types your function takes. That was actually a nice DevEx and quality-of-life improvement that I worked on last year.

Oh, nice.
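A hedged sketch of that workflow: you write an ordinary Rust function, and an attribute proc macro registers it as a node and generates its properties panel from the parameter types. The attribute is shown only as a comment because its exact name and options belong to the Graphite codebase; check the contributor docs for the real thing.

```rust
// Hedged sketch: a plain function that could be registered as a node.
// The attribute below is illustrative only, not the exact macro invocation.
//
// #[node_macro::node(category("Raster: Adjustments"))]
fn brighten(mut pixels: Vec<f32>, amount: f32) -> Vec<f32> {
    // Each parameter after the primary input would become a widget in the
    // auto-generated properties panel (here: a number input for `amount`).
    for channel in &mut pixels {
        *channel = (*channel + amount).clamp(0.0, 1.0);
    }
    pixels
}

fn main() {
    let out = brighten(vec![0.2, 0.9, 0.5], 0.2);
    assert!(out.iter().all(|&v| v <= 1.0)); // values stay clamped to range
}
```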
Oh, nice. That also makes it easier for researchers, for example, people with more of a research background in image processing or really any kind of computational geometry. There are a number of algorithms we'd like, and one we really want to figure out is the convex hull of any shape: any vector shape, so not just a polyline but a shape with Bézier curves. Creating a convex hull means taking that shape and wrapping a rubber band around it, so you level off everything that's concave. I've looked in the literature and I have not found an algorithm that actually does that on Bézier paths, so that's a great research opportunity, among many others in a similar style, where people can just write a single function. It doesn't require learning the rest of the codebase; it's just a single function that implements some kind of image processing or computational geometry algorithm, one of the many things we want these nodes to do. So for anyone who wants to implement the computational geometry of a convex hull, it's just a simple function.
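For context, a common fallback, and explicitly not the exact-Bézier-hull algorithm being asked for above, is to flatten each curve into sample points and take the convex hull of those points with Andrew's monotone chain. The types and the sample count in this sketch are arbitrary illustrative choices.

```rust
// Approximate the hull of a cubic Bézier segment: sample the curve, then run
// Andrew's monotone chain over the samples. This sidesteps, rather than solves,
// the open question of an exact convex hull for Bézier paths.

#[derive(Clone, Copy, PartialEq, Debug)]
struct Point { x: f64, y: f64 }

/// Evaluate a cubic Bézier with control points `p` at parameter `t` in [0, 1].
fn cubic(p: [Point; 4], t: f64) -> Point {
    let u = 1.0 - t;
    let w = [u * u * u, 3.0 * u * u * t, 3.0 * u * t * t, t * t * t];
    let x = w.iter().zip(p.iter()).map(|(w, q)| *w * q.x).sum();
    let y = w.iter().zip(p.iter()).map(|(w, q)| *w * q.y).sum();
    Point { x, y }
}

/// Cross product of (a - o) and (b - o); positive when o -> a -> b turns counter-clockwise.
fn cross(o: Point, a: Point, b: Point) -> f64 {
    (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x)
}

/// Andrew's monotone chain; returns the hull vertices in counter-clockwise order.
fn convex_hull(mut pts: Vec<Point>) -> Vec<Point> {
    pts.sort_by(|a, b| (a.x, a.y).partial_cmp(&(b.x, b.y)).unwrap());
    pts.dedup();
    if pts.len() < 3 {
        return pts;
    }
    let half_hull = |points: Vec<Point>| {
        let mut chain: Vec<Point> = Vec::new();
        for p in points {
            while chain.len() >= 2 && cross(chain[chain.len() - 2], chain[chain.len() - 1], p) <= 0.0 {
                chain.pop();
            }
            chain.push(p);
        }
        chain.pop(); // this endpoint reappears at the start of the other half
        chain
    };
    let mut hull = half_hull(pts.clone()); // lower hull, left to right
    hull.extend(half_hull(pts.into_iter().rev().collect())); // upper hull, right to left
    hull
}

fn main() {
    // One exaggerated cubic segment, flattened into 64 straight-line samples.
    let control = [
        Point { x: 0.0, y: 0.0 },
        Point { x: 0.2, y: 2.0 },
        Point { x: 0.8, y: -1.5 },
        Point { x: 1.0, y: 0.5 },
    ];
    let samples: Vec<Point> = (0..=64).map(|i| cubic(control, i as f64 / 64.0)).collect();
    println!("approximate hull has {} vertices", convex_hull(samples).len());
}
```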
I can see that being very appealing to the kinds of people who research those functions and then want to make them look good and usable when they publish, right?

Yeah, exactly. It's basically a graphics processing toolbox, which also means that researchers, the research community in academia generally, could use it, especially in the future once it's a bit more robust. I think it's actually going to be a pretty common choice, where instead of posting some random code on GitHub as part of a thesis, which basically falls into obscurity, they'd have all the other parts of the ecosystem that make their lives as researchers easier, and they could publish it onto our future asset store and people could actually use it instead of it just falling into obscurity.

Oh yeah, that would be very cool. Very cool.

And that's also one of the key insights behind all this. You asked about how ambitious it is, and one of the advantages of it being open source is that if we provide the tools for people to build something, then when someone uses the editor and thinks, huh, this node would be really great, they can just build it. That takes the load off our shoulders, and then it all becomes feasible all of a sudden, because building everything ourselves would just be a lot of work. We're trying to build a platform and gain momentum, so volunteers can help contribute and we can build amazing software together.

Yes, that's the other advantage of turning it into a language: you can then turn features into libraries.

Exactly.

Makes a lot of sense. Okay, so final question, then. If someone's not feeling quite that ambitious and just wants to use this as an image editor, and I'm going to have to ask you to be very honest about this one, where is it today? Why would I choose it instead of a different image editor today, and where would I not choose it?

Yeah, so six months ago the answer, to be totally honest, was that it was more of a toy, more of a prototype. It wasn't something I would necessarily have recommended people use as part of their ordinary day-to-day workflow, because it had so many cases where, with the type system, you'd keep running into weird edge cases where two nodes just weren't compatible with each other for no particular reason. That was six months ago. And then there was the performance. Dennis did a huge number of optimizations over the summer, and before then you couldn't really do much with it, because you'd pretty quickly hit the performance ceiling of what becomes impossible to work with. But six months down the road, now that Dennis has improved all of that, it's actually reasonably useful if you're looking for a vector editor and don't need certain very advanced features. There are definitely things you could do in Inkscape or Illustrator that we just haven't gotten to. For example, if you draw a shape, any random shape, and you want to round its corners, you have to go add a node for that rather than just using a tool, and that might feel a little less intuitive. It's actually not hard, though: you click a single button and it adds a node that gives you rounded corners. We do have a limitation there as well, where for example you can't change how much to round each individual corner; it has to be the same value for all your corners. So it's little things like that where you might run into something we haven't supported yet.
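To make that limitation concrete in the node-as-function model described earlier: the difference between uniform and per-corner rounding is, roughly, just the parameter's type. Everything below is a hypothetical stand-in, not Graphite's real vector types or node.

```rust
// Hypothetical sketch, not Graphite's actual node or path types: why the current
// rounded-corners node applies one value to every corner.

/// Toy stand-in for a vector path: anchor points plus a radius stored per anchor.
struct VectorPath {
    anchors: Vec<(f64, f64)>,
    corner_radii: Vec<f64>,
}

/// Today's shape of the operation: one `radius` parameter, so the auto-generated
/// properties panel shows a single number field and every corner gets that value.
fn round_corners(mut path: VectorPath, radius: f64) -> VectorPath {
    path.corner_radii = vec![radius; path.anchors.len()];
    path
}

/// Per-corner control would largely be a different parameter type, e.g. one radius
/// per anchor, which the editor would then need extra UI to drive.
fn round_corners_per_anchor(mut path: VectorPath, radii: Vec<f64>) -> VectorPath {
    assert_eq!(radii.len(), path.anchors.len());
    path.corner_radii = radii;
    path
}
```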
But for all the general things you need for vector editing, I'd honestly say you'll have an easier time jumping into it for the first time, because the interface is much more intuitive. You've simply got the tools on the left, and you click on the different tools; those are just the traditional tools you find in Illustrator, and I'd guess Inkscape probably has the same tool library as well. You have a big canvas in the middle, and there are very few buttons to distract you or make you feel confused or overwhelmed. You just draw some things; you've got a layers panel, and you've got some properties to change the color of something after the fact. It's a very streamlined, simple editor that I think is pretty useful for most things. If you're doing a laser cutting project or a vinyl cutting project, you could use it for crafting and CNC work. You can also use it for graphic design, pretty much anything that isn't overly advanced. Or, if you want to get advanced and start using the node catalog to do things that you actually cannot do in Inkscape or Illustrator or any other vector editing program, you can start doing procedural effects. For example, I created a morph between different potion bottles, different potion flasks. It could morph between a tall, stout, rectangular one, a very swoopy, bulbous one, and a triangular one, and I could also morph the level of the green potion liquid inside up and down, just by dragging sliders around. You can check out our social media profiles where we've posted that, if you want to see the video. It's something you just could not do in any other vector editor at all. So that's the kind of thing to try if you really want to take more of a programmatic approach, take your developer background and start putting it towards creative graphics editing. That world of programming to make visual things is usually called creative coding, or generative art. And we'd actually find it very valuable to see people creating that kind of art, because it helps us figure out what the use cases are. There are so many possibilities we just never thought of, and if we can add a few little nodes or a few little settings to make those kinds of things easier, we'd love to see more use cases from people, and that informs the design.

Yeah, it would actually help me as well, because I make thumbnails every week for YouTube, and I have exactly the same process every time, and I do it from scratch every time.

Yeah, there's one caveat to that. Well, two disclaimers, really. First of all, we don't have a stable document format yet. We're still in flux; we're working on that and trying to figure out the best possible format that doesn't lock us in for the future, but currently we can't guarantee that something you worked on six months ago will open in the most current version of Graphite. That's the first thing. The second is that performance for image operations, so pixel-based raster data, might currently be a bit slow. We support those operations, but you'd need to test it out to see whether it works for you. Those are the two disclaimers I have.

What kind of slow are we talking? Is it that zooming isn't going to be buttery smooth anymore, or is the user interface going to chug?

It depends, and it also depends on the resolution of the image you're talking about, because with big 4K images we also run into memory limits in the browser at some point. Currently WebAssembly only supports up to 4 GB of memory, and if you have raw images or other very big images, that's going to max out your RAM pretty soon.

Yeah, that makes sense. And the solution you'll have going forward?

The solution going forward is two things. One is that we will be loading all of that data onto the GPU, and then it doesn't have to exist on the CPU, or at least if it does, it can live on the JavaScript side instead of in the WebAssembly memory. That will let us get past that limitation. I also believe support for 64-bit memory addressing in WebAssembly is rolling out to different browsers over time. I don't know exactly the status on that, but I do know it's coming down the pipeline.
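As a back-of-envelope illustration of that memory ceiling, here is some rough arithmetic. The image size and the number of working copies are made-up but plausible numbers, not measurements from Graphite; the 4 GiB figure is just the 32-bit address space limit.

```rust
// Rough arithmetic, not Graphite code: why large raster images strain the 4 GiB
// address space of 32-bit WebAssembly.

fn main() {
    let wasm32_limit: u64 = 4 * 1024 * 1024 * 1024; // 2^32 addressable bytes

    // A 45-megapixel photo decoded to RGBA with one f32 per channel:
    let pixels: u64 = 45_000_000;
    let bytes_per_pixel: u64 = 4 * 4; // 4 channels x 4 bytes each
    let one_copy = pixels * bytes_per_pixel; // 720 MB per decoded copy

    // An editing session tends to keep several copies alive at once: the decoded
    // source, a few cached node outputs, and the composited result.
    let copies: u64 = 6;
    let working_set = one_copy * copies;

    println!("one decoded copy:  {:.2} GB", one_copy as f64 / 1e9);
    println!("working set (x{}): {:.2} GB", copies, working_set as f64 / 1e9);
    println!("wasm32 ceiling:    {:.2} GB", wasm32_limit as f64 / 1e9);
    println!("exceeds ceiling:   {}", working_set > wasm32_limit);
}
```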
Okay. It sounds like you have a heck of a lot of work ahead of you, but it's all tractable, right? It's all possible. On which note, I feel like maybe the biggest blocker to your productivity is Dennis, and I should unblock you.

I would say Dennis's time working on Graphite and Graphene is usually the biggest blocker for our continued work on the editor.

Okay, excellent. In that case I will leave you to carry on with it. Dennis and Keavon, thank you very much for taking me through it.

Yeah, thanks a lot. It was a great time.

Can I clone Dennis and get ten more of him? That would be wonderful.

Can we just pull him up ten times on the screen right now as we close this out? He could just talk with a slight delay for each of him.

I'm adding extra work for you to edit now. If you feel like you might be just like Dennis, contact us on this number.

Cheers, folks. Thank you, gentlemen.

If you want to try Graphite, head to graphite.rs. There's a desktop app and you can also run it in the browser; link in the show notes as usual. I have to say, when I first heard about this project I wasn't expecting something that looked quite as polished as it does. They have done an excellent job, so hat tip to them, and I hope Graphite has a bright future. Give it a try. But before you do, if you've enjoyed this episode, please take a moment to like it, rate it, share it, and discuss it over dinner in excruciating detail, even though your partner isn't actually interested in programming. Is that just me? Either way, we'll be back soon with another episode, so make sure you're subscribed for that. But for now, I've been your host, Chris Jenkins. This has been Developer Voices, with Keavon Chambers and Dennis Kobert. Thanks for listening.