0:04 - Audio is good, video's good. Let's get started. So, welcome everyone. It's been a while.
0:13 - Uh, let's talk about what we did that you might not know about. So, you
0:19 - know, I see people tuning in. Probably you're interested in Zig, but you don't use it yet. You want to know, like, okay, I haven't really checked in, like what
0:25 - happened in the last year and what are those crazy people up to next year. So
0:30 - that's kind of what this stream is for. So,
0:35 - uh, to start, I'm just going to go over some, uh, well, before I start, uh, Loris, do
0:42 - I need to make any announcements before I jump directly into, um, like, um,
0:47 - recap and road map? Uh, or can I just go straight into the
0:52 - agenda that we already kind of had planned? That's a question for Loris. And while
0:59 - he's thinking about that, I will uh I'll just guess the answer to that
1:05 - question. So, I think I'm supposed to make an announcement about, uh, Software
1:11 - You Can Love 2026. So, I don't know if Matt Knight uh Okay,
1:20 - this is the Italian one. Uh, I think we also have softwareyoucanlove.ca.
1:27 - There's the Canadian one. Okay. So, the website is still on 2023.
1:33 - Uh, however, this is an announcement. Um, there will be a software you can love Vancouver in 2026.
1:41 - So, uh, put that on your mental calendar. I don't think we have dates yet. Um, but I know that Matt Knight has
1:48 - already started doing the organizational work. So, uh, yeah, kudos to Matt Knight for taking that on. And,
1:56 - uh, yeah, just get ready, cuz that's going to be a really nice event.
2:03 - So what's next? Uh I'll talk about the other announcement maybe afterwards. So
2:10 - I'll jump next into, um, kind of reviewing some of the
2:16 - accomplishments that we've done in the last year. So, if you don't
2:21 - know, uh, we do put a lot of work in these release notes, so you can get a lot of the information I'm about to give
2:26 - you by following it. But hey, this is text. Some people like text, some people like video, and this is for all the
2:32 - video people. So, here you go. Uh, let's just review some some stuff. So,
2:39 - one thing, um, that we've focused on recently is the package manager. And so,
2:45 - some people are already using that. And if I demo my little project here, I
2:50 - think I already have a binary I can run. Oh, there we go. So, this is just a
2:56 - little, I'm playing with a little, like, immediate mode UI. I'm doing some font rendering. This is using Vulkan,
3:02 - playing with trying to learn Vulkan stuff. Eventually, I hope to turn this into, like, a music thing. Um, but
3:08 - what I want to show you now is just a little feature that we added that, maybe if you don't know about it, you really need to know about it. And it's zig
3:14 - fetch. So zig fetch is part of our, uh, toolchain for the package manager,
3:21 - and what I want to highlight right now is this particular workflow. So in this
3:26 - example um I have uh let's look at my dependencies
3:31 - briefly. So I have
3:37 - I have this dependency here on the shader compiler. Um, this is a package made by, uh, Mason Remaley, uh, which compiles,
3:45 - um, GLSL into SPIR-V. Um, in the future we actually will have a
3:51 - Zig backend for this, and actually we have a proof of concept, um, that you can play with. Uh, shout outs to, um, Ali
3:58 - Cheraghi and, uh, Robin Voetter for working on that. Uh, but in the meantime we have
4:03 - this shader compiler and I need to update it. So one of the workflows that you can now do which is pretty handy is
4:10 - uh, you can just take that, if you want to just update to, uh, like, the latest master, for example, I can just do zig fetch
4:20 - --save, and I do git+ and then this link, and that's going to get me...
4:26 - it's not going to link to the master one, it's going to resolve it to a commit and put that
4:32 - commit in the, um, in the file. So the change that that actually made was, uh,
4:39 - it's now fetching with the git protocol. It's locking it into that particular commit, and it calculated the hash for me,
4:44 - and installed it on my system. So now I'm ready to go ahead and rebuild. Um, you can see that change there. So if you
4:51 - haven't been using, uh, zig fetch, that's a nice workflow. So just keep that in mind.
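If you haven't seen what that pinned dependency ends up looking like, here's a rough sketch of a `build.zig.zon` after running `zig fetch --save git+<url>` — note that the exact fields vary slightly between Zig versions, and the package name, URL, commit, and hash below are invented for illustration:

```zig
// build.zig.zon (sketch; names, URL, commit, and hash are invented)
.{
    .name = .my_project,
    .version = "0.0.0",
    .dependencies = .{
        .shader_compiler = .{
            // `zig fetch --save git+<url>` resolved the branch to one
            // specific commit and pinned it here, together with the
            // content hash it computed:
            .url = "git+https://example.com/user/shader_compiler#4f2ec9f...",
            .hash = "1220aab...",
        },
    },
    .paths = .{""},
}
```

The point of the resolution step is reproducibility: even though you asked for a branch, the file records a commit plus a hash, so later fetches can't silently drift.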
4:57 - Uh, let me see what's next on our agenda here. We had originally
5:03 - planned for Loris to kind of, like, um, walk me through,
5:09 - uh, the agenda, but now I got to run the show. So, I got to scroll up a little bit on my notes.
5:15 - One sec here.
5:25 - Yeah. And we will do um Q&A uh at the end. So feel free to get your questions ready.
5:32 - Okay, that was, uh, zig fetch. Let's move on. So the next thing, um,
5:40 - I'll keep these highlighted. Uh, the next thing that I want to highlight is,
5:46 - uh, the fact that we've now made the x86 backend enabled by default in debug
5:53 - mode. So, if you don't follow our, um, devlog posts... um, it's
5:58 - kind of nice. There's an RSS feed; they're all just kind of on one page here. Um,
6:05 - and so there's this there's this this is the post where we announced it, but I I
6:10 - can give you a little demo here. Um the milestone that we've gotten to uh
6:15 - specifically, uh, Jacob Young has gotten us to, um, and Matthew Lugg especially,
6:20 - with, like, a lot of the front end improvements, uh, is that the
6:26 - self-hosted x86 backend, the one that does not depend on LLVM, is,
6:32 - um, at 64-bit,
6:37 - now robust enough that it's now the default. In fact, let me
6:42 - show you something. Um, so here I have, let me actually just use one terminal
6:48 - since I'm streaming here. There we go. Uh, let's take a look at
6:54 - running the behavior tests. And just to make it even, I'm going to disable these,
7:03 - because those don't apply to LLVM. So I have a build here of the compiler,
7:09 - and, sorry, it's over here, and if I run the behavior tests with LLVM,
7:16 - um, we can observe how long it takes and we can see how many tests it runs.
7:23 - Okay, so that was um 1,000. Let's just take note of this little data
7:28 - point here. Oops.
7:35 - Okay. And now if I get rid of this LLVM flag, it's now doing the default. And
7:41 - because I'm on an x86 computer, the default will use our own backend. It will skip LLVM.
7:47 - So you can see that was a lot faster. Uh, and,
8:00 - uh, we can see actually it's passing more tests. It's skipping fewer,
8:06 - and it's, um, actually getting slightly more coverage than LLVM. So, uh,
8:12 - you know, the story is a little more nuanced than that. But the point is that, um, the x86
8:18 - backend is, if anything, more robust than the LLVM backend at this point. It's not strictly better, but it is mostly
8:24 - better. And, um, uh, so now it's the default. And it's the
8:32 - default because as you can see it's a lot faster. Um just to do a quick little
8:37 - demonstration of that, if we can go into here, let's see my hello world here.
8:44 - Okay. And let's do, uh, build-exe hello.
8:51 - Yeah, that's what I want. And then -fllvm. Yeah. Okay. So, you can already see it's a little faster, but let's race
8:58 - those. And if I just collect about 5 seconds of data for each one. Let's take a look.
9:07 - Oops. Okay. Okay, so we can see that using
9:13 - LLVM was this one. And by not using LLVM, and by compiling,
9:20 - uh, with Zig's backend instead of LLVM, uh, we went down from 1 second to, um, 225
9:26 - milliseconds. We used less memory. Uh and then this is kind of just an
9:31 - explanation for why it's faster. Yeah, shout outs to um Matt Knight for
9:39 - uh naming uh for the name. We actually named this um CLI tool on stream. So you
9:44 - have embedded boy to thank for that.
9:50 - Uh, okay. So that's the x86 backend. It's now the default. So try it
9:55 - out. Um, that's, um, this is going to debut in, uh, 0.15,
10:04 - uh, which will be released in about a month. The plan is for August 1st as a
10:10 - tentative release date. Uh, here's a good question. Um, if your
10:17 - project uses a C library, do you still need to use LLVM? No. Uh, the, um, I don't
10:24 - remember what the default is. We might still default to,
10:31 - uh, if you have any libraries...
10:37 - No, we don't. Okay. No. Um, the answer is no. This backend works fine if you link
10:42 - C code. Um, you can still use it. Yeah. Um, in fact, I'm pretty sure the project
10:50 - I just demoed a second ago with that little window was, uh, using the self-hosted backend. Uh, and that's using, like, um, Vulkan C libraries and
10:57 - stuff like that. Uh, and yes, it is also the default for running build.zig. So, your zig
11:02 - build commands will now be faster as well if you're on x86. Um, and now I know you might be
11:09 - wondering, what about if I'm on one of those Mac computers? Well,
11:14 - hold on. We'll get to that. All right. Why did I lose my little agenda notes? Hold on. Sorry.
11:22 - We'll get there. That's foreshadowing. Okay. But for now, let's move on to a different topic. Uh, so I want to show
11:29 - you another thing that is now happening by default. So if,
11:36 - uh let's look at some C code.
11:41 - So one thing that, um, a lot of people discover Zig through is, um, zig cc.
11:52 - So we have this; it acts just like a C compiler, with caveats.
11:59 - Um, and sorry, before I move on to this topic, I do want to address Casey Bounder's point, uh, which is a good
12:04 - point. I forgot to mention, um, on Windows the x86 backend is not the default yet,
12:11 - because, uh, we need to make some COFF linker enhancements before we can enable it by default. Um, so that is a
12:17 - caveat there. Uh, we'll get there. We'll get there. Yeah, one thing at a time.
12:23 - Um, and that's the answer to this question. So, the answer is it does work on
12:28 - Windows, but the problem is uh we need to make some linker enhancements before it's uh before we can turn it on by
12:33 - default. So, that's that's coming up.
12:39 - I don't know if we'll get to it in time for 0.15, unfortunately. Um, but it's definitely a high-priority
12:45 - issue. Okay. Uh, so let's talk about zig cc for a moment. So, one thing that's important
12:51 - to understand about zig cc is that, while it is trying to
12:57 - be compatible with the C compiler, uh, command line interface, it's not trying to be literally exactly the same as, um,
13:06 - as clang. Uh, in particular, we have different defaults. Uh, and the defaults kind of map to
13:13 - Zig's paradigm. Um, in fact, the way that this works is
13:18 - that we collect the C command line arguments and then we translate those
13:24 - into the equivalent of what zig command line arguments would, um, specify,
13:31 - and then we lower that into, uh, commands that we pass to clang and
13:37 - things like this, and that's kind of a crucial part of what zig cc offers. I'll give you an example. So if
13:44 - you do zig cc, uh, like, -O2, um, what actually happens is, in the
13:51 - command line parsing, we interpret -O2 as, uh,
13:56 - ReleaseFast. But if you pass in -O2, uh, -fsanitize equals... I don't know, um, I
14:05 - think it's just... I forget what the flag is. Maybe it's just -fsanitize. If you enable, um, UBSan...
14:13 - uh, oh, there we go. If you do this one, uh, we interpret this combination of flags
14:18 - as ReleaseSafe, and then later we probably just lower it to exactly these,
14:24 - um, these flags again. So why do we go through that maybe seemingly unnecessary, like, middle state? Because that middle
14:30 - state affects all sorts of other stuff. Um, so for example,
14:36 - by default you're getting debug mode, and because you're getting debug mode, we do enable, um, undefined behavior sanitizer.
14:44 - And one of the things that we recently, um, added, uh, shout outs to, uh, David Rubin,
14:53 - I believe that's your name, um: we now have nice printing for this. So, for
14:59 - example, uh some somebody shout out their favorite undefined behavior.
15:13 - How about signed integer overflow? H gray header had it. Good job. Um, okay,
15:20 - so we're going to do uh arg c
15:26 - compiling a file without a trailing new line. Oh man, that's a really good one.
15:33 - Okay, we're going to do arg plus um ah
15:39 - what one to the uh 31. Okay,
15:46 - I guess I could have just done that, huh?
15:53 - Okay. So, let's try uh
16:00 - there we go. So, let's see what happens if we compile this. Oh,
16:06 - I'm too Zig-pilled for this. Thanks, Matthew.
16:14 - Uh, okay. So, let's try, um, let's try compiling this code with zig cc.
16:23 - Okay, there's a little spoiler: libubsan there.
16:28 - Now, some of that stuff only has to happen once. And I'll show you that if I make some changes, such as by adding a pointless comment. Uh, that was an
16:34 - instant rebuild, because those things that took a while to build, those were just, um, like, support libraries. Those
16:40 - only ever have to be done exactly once per target. So, I pretty much never have to do that again until I switch versions
16:46 - of Zig. Uh, so now, let's try running this example. And it's going to... Oh, I
16:55 - hit, um, I hit undefined behavior before I hit the undefined behavior that I intended to hit. Oops.
17:03 - That's a perfect example of what I'm trying to show. Well, just for fun, let's try to get the
17:08 - intended undefined behavior. But, uh,
17:15 - I guess that's why I instinctively reached for Python here. I just didn't even trust C even a little bit.
17:22 - Okay, so let's try that. Uh, wait. Long. What?
17:30 - Format specifies type int but the argument has type long. Why is it long?
17:41 - Oh, wait. What?
17:47 - One doesn't fit in an int. What do you mean it doesn't fit in an int? Of course it fits in an int. Oh, I need minus one.
17:57 - Oh, okay. Yeah. Yeah. Off by one. Classic. Okay. So, if I run my example, uh,
18:03 - oh crap, I wanted six. I have to rebuild too. Okay, there we
18:09 - go. Well, this is the example I was trying to show. Um, now if I pass an argument, I get undefined behavior, and
18:16 - we get the, uh, actually nice panic with a backtrace, um, with a message that
18:21 - explains the problem. So, um, anyway, point being,
18:27 - the default, uh, of zig cc already was, uh, turning on the
18:34 - undefined behavior sanitizer. The new thing that I'm trying to show you, that we did recently, was adding by default
18:40 - the, uh, undefined behavior sanitizer library, uh, which tells you the problem. So
18:47 - that's a lot more friendly for, um, uh, for beginners who are learning C. So
18:54 - I would argue to you that, um, zig cc is a great way for, uh, students to learn C
19:03 - programming because this is
19:09 - this is so much more uh helpful than just getting like wrong behavior at
19:14 - runtime or like you compile with optimizations on and then uh it stops working
19:21 - and then, uh, yeah. And the way this is implemented: okay, it's literally just calling panic in the Zig
19:26 - standard library, and then the Zig standard library is doing the stack tracing. So it's pretty neat. It's pretty nifty.
19:39 - Okay, so that was, uh, undefined behavior sanitizer with zig cc.
19:44 - Um moving on uh let's let's look at some language changes.
19:50 - So again, I want to tell everyone that if you read our release notes, this is a great way to upgrade your code in
19:56 - response to language changes because for example, if you go to language changes, and in fact, let's go to a different
20:02 - one. Let's go to this one. Um, actually, let's go to a breaking change. Uh, this
20:07 - one for example. Okay, so we'll tell you all about all the language changes. For example, export changed to be a pointer.
20:14 - Now, we'll give you a reason why we changed it. And then we'll tell you if your code looks like this then you can
20:19 - update it like this and now it will work. And in the best case scenario we'll also give you like a compile error
20:26 - so that you can search for the compile error. I don't see too many of those right now but um
20:33 - anyway point is upgrade guide will help you if you're trying to upgrade. Um, but I want to show you uh
20:41 - one of my favorite new features in Zig, which is, um, labeled switch.
20:46 - I think to demo this um I'm going to open up
20:55 - uh, the Zig tokenizer.
21:00 - So, if for those who aren't super familiar with compiler development, um tokenizer is kind of like the first
21:06 - thing that a compiler does to your source code. It just looks at it and figures out like what's a space and
21:11 - what's a keyword and what's, like, punctuation and stuff. Um, so, Zig's
21:17 - tokenizer, it's going to input a string and it's going to output an array of tokens. Zig's tokenizer
21:23 - is, uh, 1,776 lines, and,
21:28 - um it's basically a state machine. Pretty much every tokenizer is a state machine.
21:35 - So here's our like state here's our possible states that the tokenizer can be in. Uh it's just an enum and the
21:42 - heart of a tokenizer is just basically this. It's just a big big loop that switches on the state and what um what
21:49 - byte in the stream did you look at next. Now, if you haven't seen this language
21:54 - feature, uh check this out because this I think it's super cool. Uh
22:01 - as you may know, um, Zig does not have goto. Uh, as you might not know, uh, Zig
22:08 - actually used to have goto. Fun fact. Yeah, we deleted goto.
22:14 - I was never against goto in principle. Um, but what I determined is
22:19 - that it's redundant with labeled break, labeled continue, and especially
22:34 - interesting here is that um let's look for lowercase state
22:42 - and defer. That's a good point. Yeah. Um, yeah. So what you do is you label
22:48 - the switch statement with a label, and then now it's eligible to be continued like a loop. So what's super
22:54 - funny about this is that when we switch, we're actually switching on a constant value. We're just switching on the start
23:00 - state, and we don't even store the state in a variable. See that? There's no variable that stores the state. So
23:06 - control flow just goes straight to here. It doesn't even have to check a variable or something.
23:15 - Um so then we immediately start looking at the next character and we handle uh
23:20 - we handle like the end of the string or we handle a space and everything. Every time we terminate um one of these
23:28 - blocks, we do a continue, a labeled continue. So this basically just jumps back to the switch with a different
23:36 - uh value. And so but what's neat about it is that if if this value is um
23:43 - compile time known, doesn't have to be um but if it is compile time known, that is just a go-to. So for example, um this
23:50 - is like continuing with uh string literal enum value that's going to jump
23:55 - directly to here. That's a go-to.
24:00 - That's it's just a go-to, but it's structured. So it's not going to like not initialize variables or something.
24:07 - Uh and we can also find examples of not compile time continues.
24:14 - Um well a lot of them are compile time but
24:20 - um point is that that generates really ideal machine code and even if we use a
24:26 - runtime value which I didn't find an instance of um but if we did what's nice
24:33 - about that is that it will then do the branching logic at that location and
24:38 - then jump directly from there to another location. So what that does is it moves
24:46 - um, instead of there being, like, a while loop here where there's a
24:51 - really, like, um, mispredicted branch, right? Because, uh, this switch here
24:56 - will be mispredicted if it's in a while loop, because it's not likely that the next byte in the source code is
25:02 - going to be the same byte as the previous one. That's kind of the point. Um, but when you move all of the
25:10 - branching inside here, where it jumps directly to the next place it needs to go, what you get is, um, more
25:19 - branches to be predicted. And, well, here they're not even being predicted. They're just jumping directly, which is
25:26 - nice. Um, but even if they're runtime, you end up with more branches that can
25:31 - be more likely to be predicted. So it really helps the branch predictor, uh, to use this pattern. Uh, and in fact, when we
25:38 - switch the tokenizer to this pattern and to take advantage of this new feature uh does anyone remember
25:44 - uh was it this one?
25:50 - Okay. Yeah. So this person updated the Zig tokenizer to use labeled switch, and
25:55 - they reported, uh, a 13% speedup just by making that switch. So that's pretty
26:01 - neat. Um, and what I really like about this is that not only is it faster, but
26:08 - it's also nice. Like, I actually think that the source code reads better this way than with a while loop. Cuz if it
26:15 - just goes from here to the end of the loop, you kind of have to check two places to know where it's going. But even if I'm in the middle of this big
26:20 - state machine and I see this, I know exactly where this is going. I know it's going right here. I don't even have
26:26 - to, like, double-check the top of the while loop, for example. So, I really like this, uh, this syntax. I'm happy with
26:32 - this. Nice.
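To make the shape of the feature concrete outside the real tokenizer, here's a tiny hand-rolled state machine using labeled switch continue — the word-counting example itself is mine, not from the Zig source tree:

```zig
const std = @import("std");

// Counts whitespace-separated words with a labeled switch acting as
// a state machine. Each `continue :state .x` with a comptime-known
// value compiles to a direct jump to that prong: a structured goto.
fn countWords(input: []const u8) usize {
    const State = enum { between, in_word };
    var count: usize = 0;
    var i: usize = 0;
    state: switch (State.between) {
        .between => {
            if (i >= input.len) return count;
            const c = input[i];
            i += 1;
            if (c == ' ') continue :state .between;
            count += 1; // first byte of a new word
            continue :state .in_word;
        },
        .in_word => {
            if (i >= input.len) return count;
            const c = input[i];
            i += 1;
            if (c == ' ') continue :state .between;
            continue :state .in_word;
        },
    }
}

pub fn main() void {
    std.debug.print("{d}\n", .{countWords("zig has labeled switch")});
}
```

Note there's no mutable `state` variable at all: control flow itself encodes the state, which is exactly the property that helps the branch predictor in the real tokenizer.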
26:39 - Uh nice.
26:46 - Okay. Well, let's move on to the next topic. So, that was uh labeled switch. Just a little highlight from the last year.
26:53 - Um, okay. Yeah. Next, I'm going to highlight some toolchain improvements. Um, these come from, uh, Alex Rønne
27:00 - Petersen, our newest core team member, and he has improved our target support a
27:08 - lot um, in more ways than what I'm about to show you. But what you might have
27:14 - noticed, or maybe not if you're just tuning in out of curiosity, is that our
27:20 - download page is quite a bit bigger. Uh it doesn't even fit on one screen
27:26 - anymore. So in addition to Windows, x86_64, ARM 64-bit and 32-bit, all the Linux stuff, we
27:35 - have, um, actually added LoongArch and s390x.
27:41 - These are thanks to uh Alex RP as well. But not only that, he also went in and
27:51 - learned how, um, uh, how our glibc and
27:56 - musl libc strategies worked. He adapted them for,
27:56 - um FreeBSD and NetBSD. And then look at that. Now we even have all these. And these are all automated on the CI. So
28:03 - now, every day-ish, um, if you're on any of these operating systems, you get a build of Zig. And
28:09 - you're going to have, uh, no dependencies other than libc. So you can just
28:14 - download that, unzip it, and, uh, you're off to the races. Um,
28:22 - and so we already have the ability to cross-compile these things. So for example, if I just run,
28:28 - like, the standard library tests, I could do, um,
28:34 - FreeBSD, for example. Uh, but, yeah, I think, pretty sure
28:41 - this worked before. Um, but the thing that works
28:49 - now is we also provide headers. I guess I need libc.
29:05 - Uh, we also provide headers. So, okay. Well, I'll have to look into this. This seems like a recent regression of sorts.
29:10 - Um, so that means that I should be able to demo I think
29:16 - uh I should be able to do our our C example
29:22 - then. Okay. So, that's just a standard hello world. And I can even do CC hello.
29:31 - So, this is for native, okay, obviously. But now we can do, let's do a
29:37 - weird one. Uh, big-endian PowerPC: powerpc64-freebsd.
29:46 - How about that? And we even have undefined behavior
29:52 - sanitizer. Amazing. Okay, so I'm hitting that same problem.
29:59 - We'll figure out what's going on there. Does anyone Does anyone know already? I haven't actually tried this before.
30:06 - I mean, we got builds. So, am I supposed to do
30:13 - uh I don't know. Anyway, this is this is
30:19 - not going to be a problem. I'm sure we can get this fixed before the release. Um, so sorry for the bad demo, but yeah,
30:27 - shout outs to, um, AlexRP for all this, uh, toolchain support. So,
30:32 - we're going to be looking at all the targets kind of moving up the tiers with this next release. More support,
30:38 - more testing. Jacob is saying, I need -lc for zig cc. But what you might not know already,
30:45 - Jacob, is that that is the default with zig cc. It is fully redundant to pass in -lc
30:53 - with zig cc. I also tried it over here.
30:59 - Yeah, I don't know. I think it's probably just like a recent regression of sorts, but yeah, we'll get that fixed. No problem.
31:06 - Okay, so Oh, that's what it was. Yeah, I knew there was something I was forgetting.
31:11 - Okay, thanks Alex and thanks Matthew. Right, so uh yeah, he mentioned this too
31:16 - and I just totally forgot. So part of the Zig target triple is, um, operating system versions, and, um, so he made the
31:24 - decision to support starting with 14.0.
31:29 - Uh, so there we go. That'll work now. Uh, there was some reason that it was
31:34 - going to be a bunch of trouble to target um
31:40 - 13. Anyway, um so there we go. And I can't run this because this is a FreeBSD binary. Um, let's see if file knows
31:47 - about it. Well, yeah, I also can't run it because it's PowerPC.
31:53 - Okay, there you go. So, we just made a, uh, hello world for, um, 64-bit big-endian
32:00 - PowerPC for FreeBSD 14, which is the newest release.
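Pieced together, the cross-compilation step shown above looks roughly like this — the file name is arbitrary, and the exact target triple spelling with the appended OS version is my reconstruction of what was typed on stream:

```shell
# Cross-compile a C hello world for big-endian 64-bit PowerPC on
# FreeBSD 14.0, using Zig's bundled headers and libc:
zig cc -target powerpc64-freebsd.14.0 hello.c -o hello

# Inspect what we produced (it won't run on an x86 host):
file hello
```

The OS version suffix on the triple is what tells Zig which FreeBSD release's libc and headers to target.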
32:06 - Uh, pretty cool. Pretty cool. Yeah. And uh and Alex did a great job um
32:12 - uh contributing to the um uh the libc tool chain that we have for keeping the
32:18 - stuff up to date. He did a good job keeping all of our um like we have these wiki pages for updating libc.
32:25 - Uh, and so, like, here's what you have to do when glibc updates, and, like, here's musl, and then, um, now there's a section
32:34 - for FreeBSD, and now there's a section for NetBSD. So, he's done a great job, um, documenting everything he's doing and
32:40 - making it a repeatable process for us all. Uh so, yeah, that's some really good work.
32:47 - Okay. Uh yeah, thanks for uh figuring that out, Matthew. It's good.
32:54 - Okay, moving on. So, that was, uh, yeah, some toolchain
33:00 - updates. Ah, yes. Okay, let's move on.
33:06 - Did he get a dedicated welcome blog post? Yeah. Yeah, I did that for sure, didn't I?
33:11 - Don't tell me I forgot to do that.
33:16 - Wait, I thought I did that. Oh no. Okay, well that's on me.
33:25 - Why do I remember typing a welcome blog post for him? Did I do it in the release notes? Maybe. I don't know. Did I do a
33:31 - draft and never publish it? Oh no. Okay, that's my that's my
33:37 - feeling. Did I dream doing that?
33:44 - Okay. Well, let's move on for now.
33:50 - Okay. So, let's talk about file system watching.
33:57 - File system watching. Yes. Yes. Yes. Okay. How do I even want to start
34:04 - showing this off? Well, I think I'll start by showing
34:10 - our workflow for building the compiler. Um, I'm on master branch right now. Uh,
34:18 - let's let's undo these changes.
34:24 - Uh, yeah, we will. Don't worry, I won't I won't forget about saying this.
34:29 - our workflow. Uh, sorry, let me back up. Right now, I have another branch that I'm working on that has required a lot
34:35 - of changes. Uh, it's required working through a lot of compile errors and um,
34:42 - let me show you what that workflow looks like uh, a little bit. So, there's
34:52 - Yeah. And if you haven't been paying attention to Zig recently, this is the part where you should, like, put your, uh, I don't know, put your
34:58 - Counter-Strike down and pay attention for a second.
35:03 - So, here we're going to build the compiler. Uh I'm going to turn off some stuff I don't want. Uh I'm also going to
35:10 - turn off like um building an executable. I just want compile errors. And I'm going to
35:18 - ask the build system to stay alive. And notice when any input files change
35:24 - and re and rebuild. And I'm also going to enable incremental compilation.
35:32 - And I'm going to make compile errors print at the bottom for convenience.
35:40 - So this is going to build the compiler all of it from scratch. And it's doing that right now.
35:47 - Um, nothing is cached other than like compiler RT. So, it's it's rebuilding
35:53 - like 100% of the compiler from scratch, 15 seconds. And that was with our own
35:59 - x86 back end that I was demoing earlier. So, that's uh about half a million lines
36:04 - of code done. Now, uh let me put this on the bottom for a
36:12 - second and let's go start editing.
36:17 - I know, I passed no-bin. I'm only looking for compile errors right now.
36:24 - So, let's say I want to go in and I want to actually
36:31 - delete the, uh, @breakpoint builtin. I think it's a bad builtin. I'm going to delete it, actually. So, let's go. Let's
36:39 - go find that. Okay. Okay, I think I found this. So,
36:46 - I'm just going to delete this code here. Okay, I got some errors.
36:54 - Okay, let's let's keep that up there so I can see those. Well, that makes sense. I just deleted something, so I'm going
37:00 - to get an error. Um, I better go into here and delete that then.
37:10 - Uh,
37:16 - okay. That had to rebuild quite a bit actually. Let's keep going. Um, so now I want I'm
37:23 - in this file. Okay, I got to delete this.
37:28 - Oh, okay. That one was pretty fast. Gota delete this.
37:35 - Do I delete... what's going on here? Switch must handle all possibilities. Okay, so
37:41 - I need to delete some breakpoints. Okay, now you get the point I'm making here. I am getting instant feedback on
37:47 - all these changes that I'm making. Uh, like, if I want to go in here,
37:52 - 165. So I'm going to handle this compile error now. Save.
37:58 - Okay, that one took a little while. We're going to have to redo some, uh, x86
38:04 - backend stuff for some reason. Uh, let's see what else.
38:12 - Let's delete from here. That one was instant. Uh, print here.
38:19 - Delete from there. Instant. So, what you're seeing right now is I'll
38:26 - show you my command again. Um you're seeing a combination of
38:32 - um turning off building a binary, having the build system watch for
38:37 - changes automatically, incremental compilation, and just showing all the compile errors at the bottom. And what
38:44 - I'm trying to show you is that you can get instant compile errors in your workflow, uh, by setting this up for
38:50 - yourself. Uh, and that is new as of the last year.
38:56 - So, if you don't have this setup, uh I highly highly recommend the setup
39:01 - because, um, it's really going to unlock, uh, like,
39:10 - your own potential as a software developer, because when you get instant feedback like this, you lose
39:16 - the temptation to, you know, alt-tab over to Firefox and, like, look at social media or something. You know what I
39:22 - mean? Like you can stay focused a lot better.
39:30 - Yeah, worth noting ZLS can leverage this feature. So, you get squigglies, right? Is that how it does it? Um,
39:38 - yeah, this is in 0.14. Uh, and it's also, correct me if I'm wrong, Matthew, but I
39:43 - think it's improved since then. So, like, bug fixes and enhancements coming out in
39:48 - the next release. Uh, but also,
39:55 - uh that's sorry that's the next topic. Okay, let me just see if there's any questions here. Uh this example is is
40:02 - goofy. I'm not I'm not actually intending to do that. So you can just forget about that. That was just an
40:07 - example. I'm just trying to show off this combination of of stuff.
40:14 - Clearing the screen before printing errors: that's probably a good idea. I think we have an issue open for this. Another idea would just be putting a
40:20 - blank line. Well, yeah, we'll get there. We're working on that. That's the easy stuff, you know? That's
40:26 - kind of the paint on top that's easy to tweak.
40:33 - Yeah, so if you don't have this workflow, I recommend upgrading and figuring it out. Make sure you get this workflow,
40:38 - because it's really, really nice. Now, you might have noticed
40:44 - that... oh yeah, and Matthew just confirmed what I said. So, this is available in
40:51 - 0.14, but there have been enhancements since then. So
40:56 - it'll be good to get the upgrade. "I hope that you can map that in the
41:02 - quickfix list in Vim." I would love that. I don't know how to do that right now. I do my best not to edit Vim plugins.
41:15 - Okay. Yeah. Now, this is the topic I was about to get to. So, you may have noticed that I disabled outputting a
41:20 - binary. So, this is the equivalent of passing -fno-emit-bin to zig build-exe. And
41:27 - obviously, that's not ideal. What we want is we want to get instant rebuilds
41:33 - that we can also test. Um, and that does work experimentally. So I can
41:40 - actually, let me just try this, and I'm going to try this knowing that it might
41:46 - fail. So just keep that in mind. But going back to this DAW
41:51 - project... I think I need some Vulkan stuff in my path. I might not
41:58 - be able to build this. It might require updates. So bear with me here. But I will try
42:05 - turning it on. And this is with
42:11 - creating an executable. So I'll make a change and then see... uh, I don't think I have Wayland
42:20 - enabled, so I need X11. Oh, is this because I made that change?
42:26 - Oh yeah. Oops. Well, I updated the shader compiler, so I have to rebuild
42:33 - it. Let's just let that run for a second. I think it should be fine. I got a pretty beefy setup over here. Um, but
42:39 - the point that I'm getting at is
42:45 - the thing that we're working on next is making this also work when you're outputting an executable, not only when
42:51 - you turn that off. So, yeah, we'll get there. As you can see, the front end is working quite well. And the thing
42:57 - that's holding us back is the linker. So, we need to make some
43:04 - linker enhancements, and then you'll be able to do this while
43:10 - generating a binary and testing it and getting instant rebuilds.
43:17 - As for this question, I think I merged a fix for that last week. So if you haven't
43:24 - checked in a week, try now. But so, I've enabled --watch on this. Now this
43:31 - generated a... let's just do a new window, actually.
43:42 - Okay. So I can run uh I can run the
43:47 - the program. It's still watching it. So let's go ahead and make an edit. I
43:52 - haven't touched this project in a while, but
43:58 - let's make it read the wrong font name and crash.
44:04 - Okay, so I actually hit save on that. And now it should crash when I try to run it because it can't find the font.
44:11 - Okay, now see, this said success, but it generated a non-viable binary.
44:16 - The linker did a miscompilation there. So that's a problem. And if I go back,
44:23 - it's still... yeah, it's not good. So as you can see, this feature is not quite ready for prime time when it comes
44:30 - to generating executables. But it will be after we make some linker enhancements. In particular, that one
44:36 - was the ELF linker, and earlier we noted that we needed COFF linker
44:43 - enhancements to make the x86 backend enabled by default for Windows. So yeah,
44:48 - we'll need to make some nice linker enhancements to unlock some of the next things coming up for us.
44:57 - Um, was that all I was trying to say about watching an incremental? Let's
45:02 - talk about bugs. Um,
45:07 - Yeah. So, until recently we had a bug that caused a
45:14 - "not same file system" error on Linux. I believe that has been fully fixed
45:20 - as of last week. However, on macOS there's still
45:27 - a problem with the watch system. Unfortunately,
45:32 - macOS doesn't give you very good watching primitives. Linux actually
45:38 - gives you pretty good watching primitives with, what is it, fanotify.
45:43 - That API is pretty nice, actually, especially after like kernel 5.1.
45:50 - But macOS unfortunately is not good. So, we did the best we could with only
45:56 - using plain macOS syscalls. The problem is that it only detects editors
46:03 - that atomically update the files. If they actually just make writes, then it doesn't detect updates, which is a
46:09 - problem. So our plan is to switch to using whatever that stupid framework
46:14 - is. I forget what it's called. There's some framework you have to link for watching file system
46:20 - updates, and it...
46:27 - I don't know, I think it just watches the whole file system or something in user space, and then you're basically
46:32 - querying that database. But, yeah, FSEventStream. So I think we basically just have to dlopen this thing in
46:39 - the build system, and then that'll fix the problem. But that's not done yet. So file watching on macOS has
46:46 - got a big asterisk next to it. What else?
46:56 - I think there was one more thing I wanted to mention here. Uh
47:02 - I'm sure I'll remember it in a moment. Okay, I think
47:08 - that is my segue.
47:18 - Okay, let me briefly speak about
47:26 - Yeah, we'll get there. One more thing before I segue into that. Okay: translate-c. So,
47:34 - for a while, just kind of quietly in the background, Vexu has been working
47:39 - on a translate-c package for Zig, which is based on his C compiler, which is
47:47 - called aro, I don't know how to pronounce it, and it's done. So this package
47:54 - actually is fully compatible with our existing translate-c test suite. It's
47:59 - ready to roll. Um, so right now when you use
48:05 - uh let me just go into Oops.
48:10 - Let me just go into a directory here. So here was our example from earlier.
48:17 - Okay, if I do zig translate-c,
48:24 - it's going to spit out some text. I guess I have to link libc. It's going to spit out some text. And there's a
48:31 - bunch of crap that C defines, but we're also going to get our main. And hey, there it is. So this is our
48:39 - code translated into Zig. And
48:45 - it actually works. So if we, for example,
48:52 - do this, and then we build... actually, I can just do run,
49:00 - there you go. So this totally works. This is based on Clang: we are using
49:07 - Clang
49:12 - to iterate over it and then translate that into Zig. What Vexu
49:18 - has done is
49:23 - completely bypass Clang. So this package works on his C compiler,
49:30 - which is written in pure Zig, and it's up to par. So, for
49:36 - example... wait, I don't actually have this package
49:42 - integrated yet. So I would have to spend a few minutes setting up an example. But
49:50 - the next step here would be changing it so that when we run this command line, it just doesn't even
49:57 - use Clang at all; it just uses aro. What's interesting about this is that
50:03 - we actually could hook this up to the backend. So, for example, we
50:10 - could implement a zig cc that doesn't link Clang at all; it would just do the
50:17 - aro translation and then run that through Zig, and we'd get something out the other side. Which could be a fun
50:24 - stream, maybe a follow-up stream or something, if we have a whole bunch of time remaining.
50:31 - But we're not going to do that right now. Suffice to say,
50:37 - we're well underway toward eliminating our Clang dependency, we're well underway toward eliminating our LLD dependency, and
50:45 - we're well underway toward eliminating our LLVM dependency. And with that, I
50:51 - think that might be our segue into revealing the secret.
51:04 - Okay. So, uh, for people who haven't been following along too much, don't worry
51:10 - about it. For people who have, you might be maddened by the fact that
51:15 - Jacobly has been teasing us all with his secret project, and he's not telling us what it is. Well, he told me finally,
51:21 - so that I could demo it for you on stream. So, I know the secret. And the secret is:
51:27 - it is an aarch64 backend.
51:32 - Tada. Oh, wait. I'm not in the right directory. Tada.
51:40 - Okay. Uh, let's play with it. Let's see what it can do.
51:47 - So, I'm going to build it from source.
51:53 - Okay. So I'm running build the compiler.
51:58 - Okay, it's built. And so now we have a stage 4
52:05 - that can do ARM stuff. So what should we do?
52:10 - Let's try running the behavior tests. Now, I am going to have to add some flags
52:17 - here, since it's a work-in-progress backend. So here we're saying target aarch64
52:23 - Linux. We're saying don't use the LLVM backend, obviously.
52:29 - Zig's debug mode is actually pretty sophisticated; it has all sorts of safety checks and all this
52:35 - fancy stuff. So by passing ReleaseSmall, we actually make the task that the backend has to do smaller.
52:44 - We also don't support compiler-rt yet; that requires a bunch of stuff that's not done yet. And finally, I'm
52:50 - going to run the tests in QEMU so that we can actually run the behavior tests.
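The invocation being described looks roughly like this; the exact flag spellings are a reconstruction and may differ between compiler versions:

```shell
# Cross-compile the behavior tests for aarch64-linux with the
# self-hosted backend (-fno-llvm), shrink the job with -OReleaseSmall,
# skip the unsupported compiler-rt, and run the result under QEMU.
zig test test/behavior.zig \
    -target aarch64-linux \
    -fno-llvm \
    -OReleaseSmall \
    -fno-compiler-rt \
    --test-cmd qemu-aarch64 --test-cmd-bin
```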
52:56 - Okay, that was fast. And there you go. So, we're passing
53:03 - 154 tests and skipping 129. So, a
53:09 - little over half of the behavior tests are passing. Now, don't get too excited, because
53:17 - um, a lot of these are testing the front end. So, kind of like once you pass the
53:23 - first test, you pass a bunch of them. Um,
53:29 - but it's good progress. And now, the other thing that's interesting about this is that Jacobly has shared...
53:37 - well, you can also just look at the code. It's public. Oh, Matthew wants to see the uncached
53:43 - build. All right. All right. Here's your uncached build.
53:52 - Okay. Yeah, that was definitely cached. Okay. It's still fast. Is this still uncached?
54:00 - Well, this is a debug build of the compiler. So, we're actually kind of measuring the machine code quality of
54:05 - the x86 backend. That's what that's doing: we're measuring
54:12 - the machine code quality of the debug-mode x86 backend. Okay. Anyway, what was
54:18 - I saying? Right. Okay. So, Jacobly has shared some of the cool things about this backend, and it's kind of a
54:25 - fresh take. It's got some unique ideas in it. He's playing with different data structures.
54:31 - He's already done a couple of rewrites to try different things. So he gave me this little demo to show. So here we
54:38 - have uh a file just to show a couple different kinds of
54:46 - codegen. So the idea here is that here we're getting four arguments, but
54:54 - we're passing the arguments shifted one to the
55:00 - left. Here we're getting four arguments, but we're passing the arguments shifted one to the right. And this is
55:05 - a neat little optimization demo available here. So if I run
55:12 - if I run the backend, I can compile an object file. I can
55:18 - do an objdump on that. And
55:24 - let's take a look. So let me actually try to separate these.
55:29 - Okay, so what's interesting about this is
55:41 - Okay, notice here that we've moved these registers in order,
55:49 - 0, 1, 2, 3, in order to call the next function right here. But in this one
55:57 - it does it backwards; see, this is 3, 2, 1, 0. So this allows it to use
56:04 - exactly four registers to make this call. However, for these it's actually a
56:10 - rotation, and so there it's mathematically necessary to introduce a
56:16 - temporary. So then we have a temporary and then four registers, which makes sense. It's just kind of neat
56:22 - that it's already a debug backend and it's doing little things like this that are nice.
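A hypothetical reconstruction of the demo file described here (the function and callee names are made up for illustration):

```zig
// Each function forwards its four arguments to `callee`, permuted.
extern fn callee(a: u64, b: u64, c: u64, d: u64) void;

// Rotated one to the left: (a, b, c, d) -> (b, c, d, a). Every argument
// register's old value is still needed after it gets overwritten (the
// permutation is one big cycle), so the backend has to spill one value
// into a temporary register before setting up the call.
export fn rotateLeft(a: u64, b: u64, c: u64, d: u64) void {
    callee(b, c, d, a);
}

// Rotated one to the right: (a, b, c, d) -> (d, a, b, c). Same story:
// a rotation cannot be done in place, so one temporary is required.
export fn rotateRight(a: u64, b: u64, c: u64, d: u64) void {
    callee(d, a, b, c);
}
```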
56:28 - And so, yeah, it's still too early to make, you know, concrete
56:34 - claims. But I will say that our goal, and Jacob can get after me if I shouldn't say this, but our goal is to
56:42 - not only beat LLVM at compilation speed, but also beat
56:49 - LLVM at machine code quality for debug mode, at the same time.
56:56 - And it's looking possible to me based on Jacobly's work.
57:03 - I'll I'll give him a minute to object.
57:14 - All right, let me look through some of these questions.
57:19 - "What's up with this QEMU thing?" I'll answer this question. So, part of the Zig build system...
57:26 - well, let me back up. Not talking about the build system yet. Part of Zig's unit testing system
57:32 - is... let me go to just
57:38 - a fresh example.
57:46 - Okay. So, if we just make a unit test, it's just going to be empty.
57:53 - Just an empty unit test. So, if I do zig test example.zig,
57:58 - it's going to run that test. And the default thing it's going to do is just run it natively. But you can just
58:04 - tell it to run something with a different command. So, if I just say,
58:09 - like, echo, for example, that's just going to run echo, nothing else. If I do
58:17 - this, it's just going to print the name of the test. So, another thing you can do is give it a different
58:24 - command, like qemu-aarch64.
58:29 - And now it's going to run this in aarch64. But
58:34 - obviously that's not right. So then I would also need to do this, and
58:40 - there you go. You can now run unit tests for a foreign architecture. This is also hooked up into the
58:46 - build system. So, for example, if I do zig build, I just get the help menu here.
58:52 - Um,
59:01 - right. So, this option is always available in zig build. You can just enable
59:07 - this flag, and if you do that, then when you try to run tests with the build system, the build system will detect
59:14 - when it might need to use QEMU to run your binaries, and it will just do that. Or you can just not set it and not
59:21 - worry about it, but it can be handy. Generally, the rule of thumb we have here is that we'll add
59:27 - integrations for popular projects like this, but they'll never be enabled by default.
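Put together, the pieces described above can be driven like this (assuming qemu-aarch64 is installed):

```shell
# Run the unit tests natively (the default):
zig test example.zig

# Cross-compile and run the test binary under QEMU: --test-cmd prefixes
# the runner command, and --test-cmd-bin appends the compiled test
# binary to that command line.
zig test example.zig -target aarch64-linux \
    --test-cmd qemu-aarch64 --test-cmd-bin

# Or let the build system detect when a foreign test binary needs QEMU:
zig build test -fqemu
```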
59:34 - Um, okay. So, let's see what else is going on.
59:44 - "What about release mode?" Yeah, the plan right now, for the foreseeable future, is that the default for
59:50 - release mode will still be LLVM. However, in a long-term future, I can
59:55 - envision a world in which we have our own optimizations and just completely
1:00:01 - part ways with the LLVM project. But that is probably a post-1.0 idea. I don't
1:00:08 - think I'll try to do that before 1.0.
1:00:16 - That's not a question for me, is it? Okay. Um,
1:00:23 - so I think maybe we can move on from that topic. Feel free to ask me a
1:00:29 - question later in the official Q&A section, and we can revisit the topic. But for now I'm going to move on.
1:00:38 - Uh, okay. Yeah. So, next I think I'll show
1:00:44 - you a little bit about what I've been working on. Yeah, I'm fine with it; you can do
1:00:50 - that. Okay. So, what I've been working on is the
1:00:56 - resurrection of async/await. However, it's so much more than just that.
1:01:09 - I actually gave a talk on a lot of these topics a couple weeks ago in Amsterdam
1:01:15 - and I will post the talk here when it's available. It's not available yet. I put
1:01:20 - all my talks on my website in order. Um, but I can also just go over some of the
1:01:27 - basics again. So, here's a branch. I'll preface by saying:
1:01:36 - I feel like it's solved in theory. The stuff that I did with
1:01:42 - async/await before never felt finished. It never felt like it was good enough. I feel like there is a path
1:01:50 - toward realizing my vision with this new thing. So, I'll preface it by saying that. And the idea is
1:01:58 - kind of simple, actually. It's just, for some reason, some of the simple
1:02:04 - ideas are just hard to think of. Wait, this is not what I wanted.
1:02:09 - Okay, so you know how in Zig programs you always have to pass an allocator
1:02:14 - everywhere? Okay. Well, now you're also going to have to do that with IO. What is IO? IO is pretty much
1:02:22 - everything. So, like, async is part of IO. Await is IO. This one is a little homage to
1:02:28 - the Go programming language: it's basically async without an await.
1:02:35 - It's just like a crippled async/await. We also have cancellation; I'll get
1:02:41 - into that in a minute. But then things like opening files, uh doing networking, mutexes, conditions, uh timing,
1:02:51 - and there's going to be like much much more.
1:02:57 - It's all related. Anything that can block the current thread of execution,
1:03:04 - including large CPU tasks, belongs here. And so what this means is
1:03:11 - that in Zig projects, one of the first things that you'll do,
1:03:17 - you know, in your main, is
1:03:22 - you'll either choose or implement an IO implementation.
1:03:28 - And I have an example of this. Uh
1:03:33 - I think I'll just find it on my GitHub. Oh, here it is.
1:03:38 - Okay, so I put this together already. So, kind of in main, one of the first things you'll do is pick
1:03:43 - your allocator. This is a familiar pattern, right? Okay, you're also going to have to pick your IO implementation.
1:03:49 - So maybe you pick an implementation that's based on a thread pool, okay, and then you grab your IO from there. Or
1:03:55 - maybe you pick your implementation from, like, a green-threads event loop, and then you get your IO from there. Doesn't
1:04:02 - matter. In this example, I can uncomment this one and comment this one, and the program will have the same output.
1:04:11 - Not the same behavior, but the same output. And then you're going to get basically async. And keep in mind that
1:04:17 - once the keyword is dropped, these can go back to being regular function names. So they won't have this weird syntax. It's just a function call. Okay.
1:04:25 - Now, the point of this, and this is kind of a lot of demos all squished together into one, but the point of
1:04:33 - this is that you can encode into your logic doing stuff in parallel. So here I
1:04:40 - encode the idea that I can calculate the first half and the second half, and then only later
1:04:48 - do I need the results. So I start this stuff, I do a bunch of other
1:04:53 - crap while I'm waiting, and then now I need the results. This allows you to
1:04:59 - encode that logic, and then the user of this code, which could be a different
1:05:06 - person, gets to decide how to execute your logic based on which IO implementation they pick.
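As a sketch, the pattern being described might look something like this. Every name below (`std.Io`, `Threaded`, `io.@"async"`, `.@"await"`) is an assumption based on the in-progress branch and is likely to change before anything ships:

```zig
const std = @import("std");

const items = [_]u32{ 1, 2, 3, 4, 5, 6, 7, 8 };

// A task that could block; it receives `io` the same way code that
// allocates receives an allocator.
fn sumHalf(io: std.Io, half: []const u32) u32 {
    _ = io; // a real task might read files or sockets through `io`
    var total: u32 = 0;
    for (half) |x| total += x;
    return total;
}

pub fn main() !void {
    var gpa_state: std.heap.GeneralPurposeAllocator(.{}) = .{};
    defer _ = gpa_state.deinit();
    const gpa = gpa_state.allocator();

    // Pick an Io implementation, exactly like picking an allocator.
    // Swapping this for, say, a green-threads event loop changes how
    // the logic executes, not what it computes.
    var threaded: std.Io.Threaded = .init(gpa);
    defer threaded.deinit();
    const io = threaded.io();

    // Encode that the two halves are independent...
    var first = io.@"async"(sumHalf, .{ io, items[0 .. items.len / 2] });
    var second = io.@"async"(sumHalf, .{ io, items[items.len / 2 ..] });

    // ...do other work here...

    // ...and only block when the results are actually needed.
    const total = first.@"await"(io) + second.@"await"(io);
    std.debug.print("{d}\n", .{total});
}
```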
1:05:13 - Exactly like how you take an allocator as a function
1:05:18 - and the caller gets to decide how you allocate your memory. There's a ton of
1:05:24 - benefits to this. Just off the top of my head:
1:05:30 - resource leak checking, better testing, not
1:05:38 - having to write your code twice depending on whether someone wants to use it in an event loop or not. That's the big
1:05:44 - one. There's a bunch more. What am I forgetting?
1:05:52 - It's just so nice. And also, by doing this... yeah, bring your
1:05:59 - own operating system. That's a good point. So you can take a package that used to have a dependency on
1:06:06 - the operating system, and now it only has a dependency on the IO interface. So if you make your own operating system, all
1:06:13 - you have to do is implement this interface, and now you can use all this Zig code that now works on your hobby
1:06:20 - operating system. No one even had to care about it. They just had to use the interface.
1:06:29 - Yeah, and then because it's in userland, and not keywords of the language, it actually becomes a lot
1:06:35 - easier to implement some of these niceties like cancellation and select. So, for example, I implemented
1:06:41 - this queue. This is the equivalent of a Go channel. I was also able to implement select
1:06:48 - in userland in the interface. So, for example, and this works, by the way,
1:06:55 - this is a working example using the branch I just checked out. So,
1:07:01 - select is a feature where you give it multiple asyncs, and then the first one to finish, that case gets run.
1:07:09 - Now, what's interesting about this one is that it's also combined with cancellation. So you can see here we
1:07:14 - start these three things, and then we defer a cancel,
1:07:20 - defer cancel, defer cancel. So whenever we return from this function,
1:07:27 - all these resources are cleaned up and everything. What I think is interesting about this is that Zig is
1:07:33 - not a garbage-collected language; however, there is pretty minimal boilerplate here
1:07:40 - compared to what this would look like in Go, for example.
1:07:45 - I mean, arguably it might even be longer in Go because of the lack of try.
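A very rough sketch of select combined with cancellation as described here; all API names are assumptions from the in-progress branch:

```zig
const std = @import("std");

// Some cancellable operation that does its I/O through `io`.
fn fetch(io: std.Io, url: []const u8) ![]const u8 {
    _ = io;
    return url; // stand-in for real work
}

fn fastestMirror(io: std.Io) ![]const u8 {
    var a = io.@"async"(fetch, .{ io, "https://mirror-a.example" });
    // Whichever operation loses the race is torn down on every return
    // path by these defers: minimal boilerplate, no garbage collector.
    defer a.cancel(io);
    var b = io.@"async"(fetch, .{ io, "https://mirror-b.example" });
    defer b.cancel(io);

    // Block until the first of the two finishes.
    return switch (io.select(.{ .a = &a, .b = &b })) {
        .a => a.@"await"(io),
        .b => b.@"await"(io),
    };
}
```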
1:07:51 - So to me, this is the dream, because now I can
1:07:57 - make a reusable package that doesn't
1:08:03 - decide which operating system it has to run on, and doesn't decide what order
1:08:08 - things have to be run in. Like, if I can do two things at once, I can encode the fact that I'm doing two things at once.
1:08:13 - And if my user just wants to use my Zig library along with some basic C code, they can do that. All they
1:08:20 - have to do is choose the dead-simple IO implementation, the one that just does things
1:08:27 - eagerly at the async, so that await becomes a no-op,
1:08:32 - and then it all works. So this is the future of
1:08:39 - async/await, and we're headed this way. And other programming languages can't do
1:08:46 - this, because they don't have Zig's single-compilation-unit strategy. So all
1:08:54 - the work we put into incremental compilation, all the work we put into making our single-compilation-unit
1:08:59 - strategy work, is what's enabling us to move forward in this way. And it's also going to depend on restricted
1:09:06 - function pointer types, which is quite an interesting proposal. Which, long
1:09:12 - story short, just makes this code not have some downsides that it would otherwise have.
1:09:19 - Where am I going with async/await? What am I missing? Matthew, are you paying attention? I think I hit
1:09:26 - all the points here. Right. So then the only question
1:09:33 - there is: what about coroutines? And that just becomes a question of:
1:09:39 - is it
1:09:45 - possible to make an IO implementation with stackless coroutines? And
1:09:51 - at this point, that's a separate follow-up question. So,
1:09:58 - I'm thinking likely yes, because we'll need to be able to do that for certain targets. Like, if the target doesn't have
1:10:05 - the ability to implement yield in user space, then we need the ability to do it with stackless coroutines. But
1:10:13 - they will become a much lower-level primitive that users usually won't touch.
1:10:19 - They will become an implementation
1:10:26 - detail of an IO implementation, and not something that users need to care about at all, even users who are
1:10:33 - writing reusable packages. So only standard library authors, or
1:10:38 - people who are creating alternate IO implementations or other advanced stuff, will care about these
1:10:44 - stackless coroutine primitives. Let me try and click this link.
1:10:53 - Oh yeah, yeah. Okay. "Why does single compilation unit enable
1:11:00 - this type of async?" It all has to do with
1:11:08 - stack usage. So the problem is that whenever you do two things in parallel,
1:11:14 - you are now heap-allocating the stack of the other thing that you're doing in
1:11:19 - parallel. And when you have a vtable, in other words,
1:11:26 - when you have a reusable interface such as Allocator or IO,
1:11:34 - you're going to have a function whose value is known only at runtime. The problem is we need to know the upper-bound stack
1:11:40 - usage of that function. But if you only know which function is being called at runtime, then
1:11:45 - you're not going to know the upper-bound stack usage. There's also a problem with recursion, which we can get to in a minute.
1:11:51 - What this proposal accomplishes is:
1:11:58 - the compiler will be able to automatically
1:12:04 - determine that a given function pointer can only be one of a limited set of things. And because of that, it can
1:12:12 - say that the upper-bound stack usage is the maximum of them. And that allows
1:12:18 - you to pre-allocate all the possible function call stack that
1:12:24 - you need in order to do something in parallel. And if we didn't have everything in
1:12:29 - one compilation unit, then we wouldn't be able to do this
1:12:34 - proposal, because you wouldn't have a closed set of possible
1:12:41 - functions that a function pointer could point to.
1:12:48 - Is this something I should click? Oh, yeah. This is... I know a bunch of Zig issues just by number. I
1:12:54 - think this one's called, like, "safe recursion". Yeah, I got it.
1:13:01 - Yeah, that's related for sure. Uh yeah, so that's where we're headed.
1:13:07 - And in the meantime, the reason it's been taking a while for this branch to make progress and get merged
1:13:13 - is because I'm also changing
1:13:21 - std.io.Reader and std.io.Writer. I'm preparing them to be assimilated
1:13:27 - into this pattern, into the async/await pattern. And that's unfortunately
1:13:33 - quite an involved change. But I've switched tactics recently to try
1:13:39 - and do this more piecemeal. So I'm expecting to make progress on that soon. But I have to say, I'm sorry,
1:13:47 - but you have to prepare for some major breakage, because changing std.io.Reader and
1:13:54 - Writer basically makes you fully rewrite any code that touches them,
1:14:01 - and it's a huge pain. So, I'm sorry, but I'm strongly convinced that this
1:14:07 - is the future. It's going to be great. I'm sorry that I didn't get it on the first try. I think this is my, like, fourth
1:14:13 - iteration on async/await. I really tried a lot of other things first. Yeah, I
1:14:19 - don't know. I'm not smart; I just try a lot, and never settle for something that's not perfect. That's
1:14:26 - the only thing I bring to the table. So the downside is a
1:14:31 - lot of breakage, and I am truly sorry for that. But I really believe in
1:14:37 - this being a good strategy for the future, and I'm really excited about it.
1:14:43 - Yeah. All right. Let's see if there are any questions here. "Does this mean that async/await won't work
1:14:49 - across dynamically linked boundaries?" Kind of. So, you can annotate
1:14:57 - extern functions, and then you'll be able to have them called. But
1:15:04 - it's kind of the same problem that... like, if you have
1:15:11 - an event loop,
1:15:16 - you really don't want some third-party code to then do a bunch of file
1:15:23 - system reads and writes outside your event loop. Kind of defeats the purpose, right? You need it to participate in the system. So it kind of creates that
1:15:32 - problem. However, because we have IO as an interface,
1:15:38 - one thing we can do is... well,
1:15:43 - for some targets
1:15:48 - we actually control the libc code too. We can actually provide libc functions that then
1:15:56 - just call into the IO interface. And so in some cases we
1:16:03 - actually would be able to make this situation work optimally, because as
1:16:09 - long as we can calculate upper bounds on the stacks of the functions being called, and as long as we can
1:16:19 - control the IO, then we can totally make it integrated. But there is also still the possibility that you
1:16:26 - can just write some code that does the wrong thing, right? You could literally just write a function that crashes the program and put that in a
1:16:32 - dynamically linked library, and then, yeah, if you call that, you're going to crash the program. Or if you just put a
1:16:38 - sleep(100) in that code, you're going to mess up the event loop. So in other
1:16:44 - words, there's definitely the potential for code on the other side of a
1:16:50 - dynamically linked library boundary to not work. However, there will also be a lot of tools to make it work if you try.
1:16:57 - Uh, I hope that answers the question.
1:17:03 - "Will there be sync and async Reader and Writer?" Yeah, I didn't explain this very well. So,
1:17:10 - the way that this is going to work...
1:17:15 - the way that I tried to make this work in the past was by making everything generic. So you'd
1:17:22 - have the same code, but then it would get instantiated in a sync way, or it would get instantiated in an async way.
1:17:30 - What I've moved toward instead is
1:17:35 - a non-generic approach. So, the new Writer and Reader
1:17:43 - interfaces, for example. Before I switch, let's look at the current one.
1:17:50 - So let's look at writer for example. Uh
1:17:55 - how does this how does this work?
1:18:02 - Okay, this is kind of like the writer. I wrapped it around this "any" thing, but it's really not a good idea, so forget about
1:18:07 - that for a second. This is basically what everyone uses, the generic writer, and it's relying on generics, obviously.
1:18:15 - Okay, now let me switch branches. I'll show you briefly what I'm working on.
1:18:25 - Okay, so the new Writer is not generic. This is just a file; this is a struct.
1:18:31 - It's got some fields, and when you want to satisfy this interface, you have to provide
1:18:36 - this function, and optionally these functions, which have defaults. And if you do that,
1:18:43 - then you get all these methods for free. And what's nice about that is that
1:18:48 - all the code in this file is not generic, which means that the same machine code (you'll notice a lot of it
1:18:54 - is formatting code) will be used in all of the
1:19:00 - writers, in all of the streams.
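The shape being described is roughly this; field and function names are assumptions from the in-progress branch:

```zig
// Sketch of a non-generic Writer interface: one struct, one vtable,
// buffering built into the interface itself.
pub const Writer = struct {
    vtable: *const VTable,
    buffer: []u8, // every Writer carries its buffer in the interface
    end: usize = 0, // how much of `buffer` is currently filled

    pub const Error = error{WriteFailed};

    pub const VTable = struct {
        // The one function an implementation must provide: move the
        // buffered bytes plus `data` to the underlying sink, returning
        // how many bytes of `data` were consumed.
        drain: *const fn (w: *Writer, data: []const []const u8) Error!usize,
        // ...optional functions with default implementations go here.
    };

    // Because none of this is generic, the higher-level methods (print,
    // writeAll, writeByte, all the formatting code) compile to a single
    // copy of machine code shared by every stream in the program.
};
```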
1:19:06 - So there's no opportunity here for this to be generic, or in other words, instantiated in an async manner or
1:19:14 - instantiated in a sync
1:19:20 - manner.
1:19:25 - So, where was I going with this? My point is that it all depends on how you implement the
1:19:31 - IO interface. So if you implement the IO interface for example um in a single
1:19:38 - threaded blocking manner then you're going to get singlethreaded blocking code here like you're going to implement
1:19:43 - drain and well in many cases you actually just
1:19:48 - chain these things together right so like for example if you have a uh compression stream then you you provide
1:19:55 - a writer that you can write bytes into the compression stream like like um uh
1:20:01 - gzip for example you provide a writer so that you can push bytes into it and then you also
1:20:08 - uh are given a writer that you push bytes out to. So it just kind of lives in this this kind of chain. It actually
1:20:14 - has no actually doesn't need to know about IO at all. Uh you can just make it a almost like a pure function if in a
1:20:22 - sense right it just takes in the bytes it does something and it pushes out the bytes. It doesn't even know about the operating system doesn't know about IO
1:20:27 - and so on. In some cases, though, you're kind of at
1:20:33 - the end of the chain. So, for example, the implementation that is in File:
1:20:41 - this is the writer that provides the writer interface, but at the end of the day it has to write to an actual
1:20:48 - file. It has to actually do, you know, posix.writev, for example.
1:20:58 - What's my point? My point is that this
1:21:04 - implementation of the writer will need an IO interface. It'll have to be given one, and then it will
1:21:10 - be calling io.writev or whatever. And
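A sketch of that end-of-chain situation, under the same caveat: `Io`, its `writev` vtable entry, and `FileWriter` are hypothetical names standing in for the in-progress design, not the real API.

```zig
// Hypothetical sketch: only the writer at the end of the chain needs
// a real Io implementation; everything upstream just pushes bytes.
const Io = struct {
    vtable: *const struct {
        writev: *const fn (io: *Io, fd: i32, data: []const []const u8) anyerror!usize,
    },
};

const FileWriter = struct {
    io: *Io, // the end of the chain is handed an Io implementation
    fd: i32,

    // The drain an end-of-chain writer provides: it performs the actual
    // syscall through the Io vtable, so whether this call blocks the
    // thread or suspends a fiber is decided by whichever Io
    // implementation was plugged in, not by this code.
    fn drain(fw: *FileWriter, data: []const []const u8) anyerror!usize {
        return fw.io.vtable.writev(fw.io, fw.fd, data);
    }
};
```

A gzip stream in the middle of the chain, by contrast, would never touch `Io` at all: it only reads from one writer's buffer and pushes into the next.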
1:21:17 - that call will determine whether the drain function is
1:21:26 - async or sync. And then the restricted function pointer types
1:21:35 - will be able to take the upper bound: it will,
1:21:40 - like, if they're all sync, then it makes them all sync, but if any of the functions are async, then it makes the
1:21:47 - function pointer async. Anyway, the point is,
1:21:54 - it has enough information to make it all work, and we don't have to implement
1:21:59 - stackless coroutines for this to already be useful, because we can just provide a thread pool implementation, we can provide a single-
1:22:06 - threaded blocking implementation, we can provide a green threads implementation. And those are already extremely
1:22:11 - useful. And then we can additionally provide a fourth implementation based on stackless coroutines later, enabling more use
1:22:18 - cases that weren't possible before. But it's already enough.
1:22:24 - The concept is already proven; we know it can work. There's not a danger of that not working.
1:22:31 - So it all depends on the IO implementation, what you do, basically. And the point is
1:22:38 - reusable code. You can see it right on the homepage, right? What's the point of Zig? It's a general-purpose
1:22:44 - programming language for maintaining robust, optimal, and reusable software. Reusable is super important, all three at the same
1:22:50 - time. The point is, you should be able to make a package that is not only robust and optimal, but at the same time is also
1:22:56 - reusable for everyone else, even if they're on the hobby operating system that they just wrote yesterday. They
1:23:02 - should be able to use your, I don't know, JPEG package that you made two years ago. They shouldn't have to add
1:23:09 - special support. It should be reusable code. Not only reusable from a
1:23:14 - source code perspective, but also, in this case, literally reusable machine code: the same ARM64
1:23:21 - machine code that this generates will be used in the binary under multiple conditions. So there are kind of
1:23:27 - two reusable-code angles to that. All right, I hope that made sense; it was a little bit rambly.
1:23:37 - Okay, let's move on. I know people probably have questions. Keep the questions, and we'll do an actual Q&A
1:23:43 - session momentarily. Yeah, actually, I only have one more
1:23:48 - topic to get through, and then we'll do Q&A, and then I think I have one more announcement, and then we're done.
1:23:56 - Okay. So, next
1:24:01 - Ask me this later. I can go over this; that's a good topic. But before we
1:24:07 - do that, let's move on. So, let's go back to the master branch.
1:24:13 - Let's talk about fuzzing. Now, unfortunately, I haven't had a lot of time to work on
1:24:19 - fuzzing. Okay, good. Keep those in mind. We'll get there momentarily.
1:24:26 - Now, I haven't touched this in a while, but it is definitely, most certainly,
1:24:32 - part of the roadmap, because I dipped my toes into fuzzing a little bit. Thanks to Loris for
1:24:40 - kind of showing me the ropes with AFL. And I read the AFL source code, which is pretty neat. And I
1:24:47 - started working on an in-house fuzzing toolchain. And now, before
1:24:52 - anyone says "not invented here syndrome": sure, we're definitely guilty of that. Proudly guilty of that, you
1:24:59 - might say. But man, having an in-house fuzzing toolchain is going to be a
1:25:04 - game-changer. There is so much you can do with in-house fuzzing.
1:25:10 - The worst thing about fuzzing is how annoying it is to set up all the infrastructure around it. It's a
1:25:16 - total pain in the ass, but people still do it because of how much value it provides. So if we can have
1:25:21 - integrated fuzzing with Zig, it's going to be crazy. It's so convenient. It
1:25:29 - actually saves you time, because you can write a hundred unit tests and spend
1:25:34 - weeks on that, or you can just write one fuzz test and then delete 99 unit tests.
1:25:40 - It's so good. But in case you didn't see what is
1:25:45 - done and what is available, I think we have something in the
1:25:51 - hello-world, er, the init template. Let's
1:26:00 - go in here. So, I don't know, maybe I broke this because I haven't touched it in a while, but let's take a
1:26:05 - look. So, this is our init project. Let's do
1:26:11 - it. Build. What are we doing here?
1:26:16 - Okay. Oh, the help. That's what we're supposed to do. So, this project offers us run and test. Let's do test. This is
1:26:25 - our init project. I think Loris worked on this recently. Okay, that seemed to work. I don't see the text "1 fuzz
1:26:33 - test found", though. I wonder why. I was expecting to see
1:26:39 - that. But anyway, if I look at the source code, I see "try passing --fuzz to zig
1:26:44 - test and see if it manages to fail this test case." Indeed, let's try that.
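For reference, the fuzz test in the init template that he's looking at is roughly shaped like this (reconstructed from the template's general shape; the exact comment text, the magic string, and the `std.testing.fuzz` details may differ between Zig versions):

```zig
const std = @import("std");

test "fuzz example" {
    const Context = struct {
        fn testOne(context: @This(), input: []const u8) anyerror!void {
            _ = context;
            // In fuzz mode, the fuzzer tries to synthesize an input
            // that makes this expectation fail.
            try std.testing.expect(!std.mem.eql(u8, "canyoufindme", input));
        }
    };
    try std.testing.fuzz(Context{}, Context.testOne, .{});
}
```

Run normally, this executes once like any unit test; with the fuzz flag, the build system rebuilds the tests instrumented and keeps generating inputs, which is exactly the "find this password" demo he describes below.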
1:26:56 - So when you do fuzz mode, it actually runs the unit tests once, finds out which ones are fuzz test cases, and then
1:27:02 - rebuilds your unit tests in fuzz mode, which instruments the binary and also
1:27:08 - does things to help fuzzing work better. And...
1:27:15 - this is a regression in the build system. So I think I probably broke this,
1:27:22 - but you can kind of get a little sense of what's going on here. It
1:27:29 - actually creates a web interface that displays your source code using the same logic, the same WebAssembly binary,
1:27:35 - as the Autodocs, and it shows these little traffic lights, you know, red or green depending on
1:27:42 - whether the fuzzer found that line of code or not.
1:27:49 - Let me try just one thing really quick.
1:28:02 - Seems to be like a debug info issue. I was just going to try a different release mode.
1:28:09 - You generally want to run your... Oh, there we go. You definitely want to run your fuzz tests in a release mode, because
1:28:15 - you want more iterations. I haven't played with this in a while. I imagine it might be broken.
1:28:25 - Okay, there does seem to be a problem there. Well, you can kind of get the idea. So it puts these little
1:28:32 - lights on areas of interest, and if a line got run, it turns green.
1:28:37 - So I think something regressed, because at one point in time it would
1:28:43 - basically just find this password instantly. But yeah, like I said, unfortunately I've been a bit
1:28:48 - distracted. So it's all very experimental. But it is relevant for
1:28:54 - the roadmap, because it is something that I fully intend to get back to.
1:29:00 - My plan is to make... Oh, Casey Banner already knows the issue. Okay, let's take a look.
1:29:06 - Okay. Well, we can look at this later. What was I saying? Yeah. So, it's
1:29:14 - extremely experimental. I'm just dipping my toes into this thing. But
1:29:19 - man, when we have this thing, when we're ready to say that this is ready to go and
1:29:26 - ready for prime time, this is going to be game-changing, because
1:29:32 - if you've never played with fuzzing, you should try it, because it's incredible. The genetic algorithm just figures
1:29:38 - out how to explore your code. It figures out which inputs are good, and it just...
1:29:44 - it's like, find your bugs before your users do. You don't even need a bug tracker.
1:29:49 - Just find all your bugs. It's so good. So that's where we're going to go. Yeah, so unfortunately, you know, a
1:29:57 - small team can only do one thing at a time. Lots of big things to do, but it's coming for sure. It's
1:30:05 - coming for sure. That's the point. Okay.
1:30:14 - Okay. Well, I think that was pretty much everything I wanted to go
1:30:19 - over as far as demos and roadmaps go. So, I think we're ready for
1:30:28 - community announcements. Now, I think I know one of them, Loris,
1:30:35 - but is there another set of community announcements? DMs?
1:30:40 - Okay. Yeah, figured. Okay. Why is my...
1:30:48 - Oh, there we go. Okay. I got the list, right?
1:30:55 - Okay. Three announcements. Number one: you might notice this
1:31:02 - new little thing here, "setting up automation." If you are setting up your CI and you
1:31:08 - want Zig on there, have a look, because...
1:31:14 - well, you probably don't want any downtime on your CI, right? So you might want to have a look at community
1:31:20 - mirrors. This is a way, if you want to volunteer and run a community
1:31:27 - mirror, you can help the ZSF by just mirroring the tarballs.
1:31:34 - And if you are a user of Zig and you
1:31:40 - just want the tarballs, you can get faster downloads and avoid downtime by
1:31:47 - integrating with community mirrors, because we do not guarantee
1:31:53 - uptime on ziglang.org. We do our best, but we optimize for cheap: running on a
1:31:59 - cheap computer and not worrying about it too much. No one's on call. We don't wake up in the middle of
1:32:04 - the night if it goes down. So use mirrors if you want your CI to stay online.
1:32:10 - And some people are already running mirrors. Thank you for that.
1:32:16 - Let me just take a look. So, yeah, thank you to Emmy, thank you to Stevie,
1:32:24 - thank you to Linus Grow, and thank you to Silver Squirrel.
1:32:31 - Uh yeah, more details on that page. Okay, next we have
1:32:40 - Okay, I already made this announcement at the beginning, but let's make it again at the end.
1:32:46 - Oh, also, thank you to Frank, once we fix our own TLS implementation. Yeah.
1:32:53 - Okay. In case you missed it at the beginning of the show, Software
1:32:59 - You Can Love Vancouver 2026 has been announced. So, I think the...
1:33:09 - There it is. The only website I visit. Okay. So, the website's not up yet.
1:33:14 - This is 2023. But it is official: Matt Knight is planning a
1:33:21 - Software You Can Love 2026 in Vancouver. So, just get ready.
1:33:27 - I don't think there are any call-to-action items beyond a heads-up. Yeah.
1:33:37 - You know, funny story, soapbox rager, because we actually have a...
1:33:44 - here, let me show you.
1:33:51 - We actually have a Vancouver north of Portland, but you have to keep going
1:33:58 - to Seattle and then keep going to get to the real Vancouver. But when you drive
1:34:04 - home from Software You Can Love, you leave Vancouver, and
1:34:09 - then there's a sign that says, like, "Welcome to Vancouver." It's really trippy.
1:34:15 - Okay. Anyway, if you come to that, 2026
1:34:21 - should be fun. We had a good time last time, and I think Matt Knight has some nice ideas for how to
1:34:28 - take it to the next level this time. And also, if I understand correctly, Loris is helping organize it too.
1:34:36 - So, should be nice. And finally, we have Zig Days. So,
1:34:44 - zig.day, that's a new website. Oh, you know what? Shout-out to my wife for
1:34:51 - making her first-ever pull request, which turned these little pins into
1:34:57 - these little Zero heads. Okay. So, this is a new thing that
1:35:04 - we're doing. There's a format for meetups that Loris
1:35:11 - and I are big fans of. And the format is: instead of sitting around and listening to people talk all day, what
1:35:18 - you do is you meet up early, like 9:00 a.m., and you do a little icebreaker.
1:35:24 - You talk, you know, 30 minutes, you get to meet people a little bit, talk about what you're interested in. You go around
1:35:30 - the circle, and everyone talks about things that they tentatively plan to work on that day. Everyone brings their
1:35:36 - laptops. Next, you sit around a bunch of
1:35:42 - tables, and people form groups. So if you want to go join someone else's
1:35:48 - project, you can just do that. People form groups, and then for one day you just hang out with each other and
1:35:56 - hack on stuff. And then at the end of the day, at 5:00
1:36:01 - p.m. or something, before you disperse, you get together again, go around the room again, and everyone talks about
1:36:08 - what they learned that day, or what they did, or just takeaways from their little hack session. I personally
1:36:15 - think this is a really fun format for a meetup, and so I'm looking forward
1:36:21 - to hosting one in my hometown... or not my hometown, but my current town that I
1:36:26 - live in, of Portland, Oregon. So, we didn't schedule an event yet, but my friend Mason is going to help
1:36:35 - me organize, and we're thinking soon, like within a couple months.
1:36:41 - I believe Milano just had one recently. Yeah, May 17th. Nice.
1:36:48 - I guess this shouldn't say "upcoming" anymore, huh? And I don't think San Francisco is
1:36:55 - ready to go yet, but what about Vancouver? No events planned yet. So, this is kind
1:37:02 - of an up-and-coming thing that we're starting to do. We're hoping to have our first
1:37:08 - Portland one, yeah, within the next couple months. And
1:37:15 - there's one more point I wanted to make. I don't know. I forgot. Sorry.
1:37:26 - Right. The point I wanted to make is: this website is also a call to
1:37:33 - you, if you want a Zig Day in the town where you live. You can be an
1:37:40 - event organizer, and Loris will give you kind of like code ownership of
1:37:45 - the subdirectory for your city, and then you can just run the event, and you can edit this website with
1:37:53 - automated pull requests. I believe that's how it works. And then people who want to come to your event, for
1:37:58 - example, can get notified and subscribe. So you can
1:38:05 - take advantage of this platform to help get your meetup
1:38:10 - bootstrapped. So the point of this announcement is to entice people who
1:38:16 - want to organize an event to do so. Let's get some more Zig Days going. Yeah.
1:38:23 - And of course, you can put your own flavor on it. You don't have to do exactly the thing as I explained
1:38:30 - it. Each event has email updates, RSS,
1:38:36 - and also an iCal feed. The link down here tells you how to become a Zig Day organizer.
1:38:50 - Yeah. All right. So, that's Zig Day. And with that, I think we are ready to
1:38:56 - go into Q&A. So, go ahead, hit me with them.
1:39:05 - This is where previously Loris was going to do the...
1:39:11 - what's it called? The choosing of questions to ask me, but I guess
1:39:16 - I'm the one in charge. No.
1:39:27 - Oh, we forgot about Zigtoberfest. Okay, sorry. One more before questions.
1:39:35 - Okay, Zigtoberfest. Yes. Oh, look, an up-to-date website. That's nice.
1:39:42 - Right. Oh, cool logo, too. So, this is an event. I believe it's
1:39:52 - a one-day conference, and this is in... was it in Munich or Berlin? Munich.
1:39:58 - The people who went last time said they had a good time. Seemed pretty fun.
1:40:03 - And I think it's mostly talks in the beginning of the day, and then maybe
1:40:08 - you go get some dinner after or something like this. Is there anything... what is there to say about
1:40:13 - Zigtoberfest?
1:40:19 - Oh, there are recordings from the previous ones. I don't know. It could be a nice event to go to if you're in
1:40:25 - the area. Oh, and it's at the University of Applied
1:40:32 - Sciences. Spezi and pretzels. Oh man, that's
1:40:38 - making me not want to go, to be honest.
1:40:45 - Okay, well, there you go. It was nice last year despite the Spezi.
1:40:54 - Okay. Let's move on to Q&A.
1:41:00 - So let's see here.
1:41:05 - Here's a question: is the stage-two compiler the same as stage three for x86_64 Linux
1:41:11 - debug builds? If you're talking about literally this binary,
1:41:19 - this binary is created by compiling this
1:41:28 - file. So if you want to know more about this, I suggest reading
1:41:36 - this blog post. This explains how we
1:41:45 - bootstrap. This explains how we build from
1:41:53 - source after we got rid of our C++ code.
1:42:01 - [Laughter] And so yeah, basically this file is
1:42:07 - created from WebAssembly, and then we create this, and then this creates the final one, and from then on it
1:42:13 - doesn't matter: if you do a fourth one, you get the same result out again.
1:42:20 - Are timeouts in async code going to be handled through user-provided
1:42:26 - cancellation logic? Oh yeah, that's a good question. So basically, every
1:42:32 - function that does IO gets an additional error in the error set, called Canceled.
1:42:40 - So pretty much every... so your function is going to have to take, you know, "do
1:42:48 - thing", you're going to have to take an IO parameter,
1:42:57 - I don't know, like "read file" or something. So you're going to get this error set, right?
1:43:04 - And there's a bunch of different ways that reading a file can go wrong. Now there's going to be an additional way that it can go wrong, and
1:43:10 - it's Canceled. So you're just going to have to handle that, or usually just not handle it, right? It'll
1:43:17 - just be part of your, you know, `else => return`. So now you're going to also
1:43:22 - have, you know, Canceled in your error set, or it can obviously just be
1:43:28 - inferred. And so that'll just bubble up correctly. And because it's an error, if you have any errdefer, blah blah
1:43:35 - blah, it's going to run in case your IO thing gets canceled. It's kind of beautiful, isn't it? It actually works
1:43:42 - extremely well.
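A hedged sketch of what that looks like in practice. `Io` and its `openFile`/`readAll` methods are hypothetical stand-ins for the in-progress interface; the shape of the error handling is the point.

```zig
const std = @import("std");

// Hypothetical sketch: cancellation is just another error in the
// inferred error set, so errdefer cleanup runs on cancellation for free.
fn readConfig(io: Io, gpa: std.mem.Allocator, path: []const u8) ![]u8 {
    const file = try io.openFile(path); // may now fail with error.Canceled
    defer file.close(io);

    const buf = try gpa.alloc(u8, 4096);
    errdefer gpa.free(buf); // also runs if the read below gets canceled

    // error.Canceled bubbles up through `try` like any other error.
    const n = try file.readAll(io, buf);
    return buf[0..n];
}
```

Because `error.Canceled` propagates through the normal error machinery, callers that don't care just let it bubble up, and all existing `defer`/`errdefer` cleanup paths are exercised, which is what he means by "it's kind of beautiful."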
1:43:47 - In our proof of concept, by the way, we have a proof of concept with io_uring, and canceling stuff
1:43:55 - really cancels stuff: it sends the syscall to the kernel to cancel an in-flight queued operation
1:44:02 - in the io_uring.
1:44:07 - Okay, so I want to just keep the chat alive by not scrolling up. So
1:44:13 - if I didn't answer your question, can you please just repeat it? I'm just going to miss stuff, you
1:44:18 - know, so just repeat it, like, every five minutes or something. Does that sound good?
1:44:26 - Right. So there's a question about function coloring. I want to address that.
1:44:33 - You can think of taking an allocator as a
1:44:39 - function-coloring thing, because let's say that you want to
1:44:46 - call foo from "do thing". Well, I actually can't call it, because I don't have
1:44:53 - an allocator. So I kind of have to have one.
1:45:00 - And now I can call foo. So that is kind of a form of coloring, I guess. But to
1:45:07 - me, this is a fine form. It's not a problem.
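The allocator analogy he's making, sketched out. The function names `foo` and `doThing` are just the placeholders from his on-stream example; the `Allocator` calls are real std API.

```zig
const std = @import("std");

// `foo` needs an allocator, so any caller must supply one.
fn foo(gpa: std.mem.Allocator) ![]u8 {
    return gpa.dupe(u8, "hello");
}

// `doThing` is now "colored" too: it must accept an allocator
// just to be able to pass one along to foo.
fn doThing(gpa: std.mem.Allocator) !void {
    const s = try foo(gpa);
    defer gpa.free(s);
    // ... use s ...
}
```

The planned `Io` parameter spreads through call graphs the same way, which is why he argues this is a benign form of "coloring": it's one extra parameter, not two parallel standard libraries.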
1:45:12 - What I didn't want was to have this function...
1:45:18 - you know how in Rust there's, what is it, like an async standard library or something?
1:45:27 - That's not how you spell that. Okay. So this thing, which is
1:45:34 - not official, by the way. So this is docs.rs, async-std.
1:45:40 - Yeah, look. See, this is not Rust. This is a different group of people,
1:45:48 - I think. But anyway, my point is: that's what I don't want. That's function coloring,
1:45:54 - right? The fact that you have to have two standard libraries. I want one standard library, and I want it to work in an event-loop context, and also in a
1:46:01 - single-threaded context, and also on WebAssembly, and everywhere.
1:46:07 - So, I don't know, what is or isn't function coloring? It starts to get
1:46:13 - kind of academic, doesn't it? Really, the point is reusable code. That's the actual point, and this pattern
1:46:20 - does enable reusable code.
1:46:26 - Yeah, I agree with this take. The real problem with function coloring is code duplication, i.e. non-reusable code. So as long as you have the
1:46:32 - same implementation fulfill all the use cases, you're good. And then, yeah, who cares whether it's technically
1:46:38 - function coloring or not? Not the point.
1:46:43 - Okay. Oh, here's one: are there any big changes planned for the Zig build system or
1:46:50 - package management? Well, there are plenty of important things that have
1:46:57 - been planned for a long time that are not done. So, for example, if I look at "build
1:47:03 - system" and "enhancement", and maybe
1:47:10 - author: me. So there are quite a few things
1:47:15 - here, especially if you go to the older ones.
1:47:20 - I would say that some of these I would call big. For example, running build.zig logic in a
1:47:27 - WebAssembly sandbox, that's kind of big. That's also very breaking.
1:47:37 - Well, I guess there are plenty of ideas
1:47:42 - here, and a lot of these are planned and just not done yet. So, for example, the Zig build system has some
1:47:50 - really serious limitations that we're all just putting up with right now, because more work needs to be done.
1:47:56 - And we'll get there. We'll get there. But, for example, detecting when a dependency has an
1:48:03 - update: basic stuff, and we don't have that yet. Dealing with breaking changes in
1:48:10 - a dependency, and not having a pain-in-the-ass way to patch only one
1:48:15 - thing. There are all these workflows that we have good solutions for in
1:48:21 - theory, like we have vaporware solutions, but they just need to be implemented and they're not done. So, in
1:48:27 - other words: are there any big things planned? Well, yeah, the same things
1:48:33 - that have been planned for a while and just aren't done yet. And you can see a bunch of these issues were opened, you know, years ago. And we'll get
1:48:39 - there. We'll get there. It just takes time, you know, and we'll try to do the most important stuff first so that people can
1:48:44 - get unblocked. And, you know, I know a lot of people are
1:48:51 - waiting for 1.0, or at least they say they are. They're saying, "Well, you know, once it hits 1.0, then I'll jump in." And that's
1:48:59 - understandable. That's reasonable. But my goal personally is going to be to make Zig so compelling and
1:49:08 - so useful that people are willing to put up with it not being 1.0 yet, because
1:49:13 - it's still worth it. It's still so good that they're willing to put up with the instability. That's my goal, because
1:49:19 - I don't want to tag 1.0 until it's ready. And it's not ready yet.
1:49:28 - What's the blocker for getting the x86 backend to work with LLD? I don't know the answer to that
1:49:35 - question, but to be honest, I'm
1:49:40 - not going to work on that at all. I'm instead going to work on eliminating LLD completely. It's just not too
1:49:48 - much work. Like, trying to take on LLVM, that's
1:49:54 - a super long, you know, years-long effort. Trying to fully
1:50:00 - subsume LLD, that's accomplishable pretty soon, you know. So I'd rather just do a sprint on eliminating
1:50:07 - the LLD dependency, and then that question just doesn't
1:50:12 - matter anymore, you know what I mean? That's kind of my
1:50:19 - "avoid local maximums" mantra. I'm just not going to spend
1:50:25 - any effort on that. I'm only going to spend effort climbing the big mountain, if that makes sense.
1:50:42 - How would things like mutexes, futexes, etc. work with the new IO?
1:50:48 - Is there any plan to control the cache size for incremental? Let me address that second one first.
1:50:54 - So I think you're noticing that incremental compilation is using a lot of zig-cache
1:51:00 - space, and the reason for that is:
1:51:06 - incremental is designed, obviously, to
1:51:14 - take an already-existing output and then minimally mutate it to the new state. So
1:51:22 - you build something, you make an edit, then the compiler makes a minimal edit to the output in correspondence with
1:51:29 - your edit. That doesn't fundamentally produce a lot
1:51:35 - of cache garbage. But because we don't yet have robust state saving for
1:51:43 - incremental compilation, the default cache mode that we use creates a new artifact every time, which
1:51:50 - can build up a lot of trash in the cache. So,
1:51:58 - again, I'm not going to solve that problem by, I don't know, messing with how
1:52:04 - big things are. I mean, we already optimized that a lot, but my
1:52:09 - point is, the thing to work on there is finishing the state serialization, so
1:52:15 - that we can go back to the default where, when you use incremental mode, it just
1:52:20 - updates your already-existing binary. It doesn't create a new cache entry. And then that will stop creating garbage,
1:52:26 - because it'll just keep editing the same file over and over again.
1:52:32 - Let me get your other question too, because that one was also quite reasonable. Where was that?
1:52:44 - Oh, I lost the other one. Sorry. Oh, yeah. How do mutexes and futexes
1:52:49 - work with the new IO? They're part of the IO implementation. So,
1:53:05 - okay, so the interface, this is the part that you interact with as a user,
1:53:11 - has to define, like, a Mutex, and
1:53:16 - I guess we have two possible implementations here. So we basically just define Mutex as, like,
1:53:22 - an integer, and then an IO implementation
1:53:30 - gets to use that integer, and it has to fulfill the mutex lock and mutex
1:53:36 - unlock vtable functions.
1:53:41 - But the important
1:53:47 - point here is that
1:53:52 - the locking is implemented in the interface. So it does have to coordinate a little bit with the
1:53:58 - implementation. So it is a little tricky. The interface has to have the state; the
1:54:03 - state can't be on the other side of the vtable. And so then, in lock,
1:54:10 - it tries to do the lock without calling the virtual function, and then if
1:54:17 - it can't, then it calls into the implementation. So there is a little bit of coordination that we have to do, like the implementations do
1:54:24 - have to satisfy a certain API.
1:54:29 - But there are some ways we can tweak this, and ultimately it works fine. So then in
1:54:34 - this function, you can implement it in a way that integrates with your IO implementation. So if that's a thread
1:54:40 - pool, you literally just lock a mutex. If that's, like, io_uring, you actually just
1:54:45 - yield and start doing something else, and then
1:54:51 - you don't even schedule the fiber until the mutex gets unlocked. It's
1:54:57 - pretty nifty. So the mutexes will work correctly according to the IO implementation that you
1:55:04 - pick.
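The fast-path-then-vtable idea he's describing could be sketched like this. All names here are hypothetical stand-ins for the in-progress design; only the `std.atomic.Value` calls are real std API.

```zig
const std = @import("std");

pub const Io = struct {
    vtable: *const VTable,

    pub const VTable = struct {
        // Called only on contention; the implementation decides whether
        // to block the OS thread, park a green thread, or suspend a fiber.
        mutexLockSlow: *const fn (io: *Io, m: *Mutex) void,
        mutexUnlock: *const fn (io: *Io, m: *Mutex) void,
    };

    /// The state lives on the interface side (it can't be hidden behind
    /// the vtable): just an integer the implementation may interpret.
    pub const Mutex = struct {
        state: std.atomic.Value(u32) = .init(0),

        pub fn lock(m: *Mutex, io: *Io) void {
            // Fast path: try to acquire with one atomic op,
            // no virtual call at all in the uncontended case.
            if (m.state.cmpxchgWeak(0, 1, .acquire, .monotonic) == null) return;
            // Contended: let the IO implementation decide how to wait.
            io.vtable.mutexLockSlow(io, m);
        }

        pub fn unlock(m: *Mutex, io: *Io) void {
            io.vtable.mutexUnlock(io, m);
        }
    };
};
```

A thread pool implementation would make `mutexLockSlow` a futex wait; an io_uring event loop would instead deschedule the current fiber until unlock, exactly as described above.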
1:55:15 - Is the file system watcher code accessible from std? Okay, I understand
1:55:21 - this question. So in the build system we have, let me go back to the main branch...
1:55:29 - In the build system we have this code, which implements file system watching, and as you can see, it requires
1:55:36 - different logic on Linux, or on Windows, or here, where we have only kqueue,
1:55:43 - and on macOS it has that limitation about whatever that thing is. So the question is: can this code be more
1:55:49 - general-purpose? And I understand the desire for that, but having used these
1:55:54 - APIs a little bit, I kind of think that file system watching is just not
1:56:00 - very abstractable. I just feel like whenever you do file
1:56:06 - system watching, you kind of have to tightly couple with the application
1:56:12 - logic. Unfortunately, it's just kind of a topic resistant to
1:56:18 - abstraction. So I have no plans to try to generalize this and put it in the standard library. I think that
1:56:25 - answers the question.
1:56:40 - Loris, did I miss something good? How am I doing on Q&A?
1:56:49 - Shouldn't the allocator also be part of the IO interface? Couldn't getting
1:56:54 - memory also block on some systems? Memory kind of is external, in a sense, as
1:57:02 - files are, for example. So basically, writing to memory versus writing to a file, for example, could be
1:57:07 - seen as IO, like if you memory-map something. That's an interesting point.
1:57:18 - Hmm. Also, in
1:57:24 - kind of collaboration with this question, if we look at the
1:57:32 - implementation of, um, the green threads event loop,
1:57:39 - we do have to allocate memory. So, for example,
1:57:45 - where do we even get an allocator from? Oh, okay. So we actually just have to pass one in on
1:57:51 - initialization, and it has to be thread-safe. It is pluggable, though. It could
1:57:56 - just be kind of any allocator. So, I don't know. I could maybe see...
1:58:04 - Uh, well...
1:58:10 - No, actually, I think here's one answer to this question. One valid implementation
1:58:15 - of IO is, again, single-threaded blocking, eager.
1:58:21 - So whenever you call async, it just does it, and then this is nothing, and then when you do this, it's always a no-op.
1:58:29 - Cancel does nothing. This does nothing. This does nothing. Mutexes do nothing. Conditions
1:58:37 - do nothing. These just do everything immediately. You get the idea. In such an
1:58:43 - IO implementation, which is totally a valid implementation of this interface, you actually don't need an allocator.
1:58:50 - It would be kind of inappropriate to put an allocator in the interface in that case. Like, what would it... it would
1:58:56 - expose memory allocation functionality and then nothing else, like
1:59:02 - nothing else depends on memory allocation.
1:59:08 - So that's my argument for why it does not belong in the IO interface. I think people have a tough time
1:59:14 - with, like, well, it's so big, right? There are going to be so many functions in here. And, yeah, it's going to get a lot
1:59:19 - bigger. It's probably going to take up multiple screens with how many functions are in here. But I actually don't think allocation is going
1:59:26 - to be in there. I think that's actually a lower-level primitive that doesn't fundamentally relate to IO. Even
1:59:34 - though all these other things kind of counterintuitively do relate. That's my current opinion. But,
1:59:39 - you know, we can see how the situation turns out.
1:59:47 - Okay. So, here's a question. Reader has a vtable and
1:59:56 - IO has a vtable. Does that mean each read op goes
2:00:02 - through two levels of virtual function call? Oh, that's a good question. Yeah, I'm pretty sure it does mean
2:00:08 - that. It's going to be one virtual function call per stream, right?
2:00:16 - Because also, if you chain streams together — like if you take a network
2:00:21 - socket reader and then you pipe that into HTTP and then you pipe that
2:00:27 - into gzip or something — every
2:00:32 - time you do that, you're going through one virtual function call.
2:00:37 - And then at the very end of the pipe, you're
2:00:44 - going to be plugged into the IO — probably at the beginning of the pipe and at the end of the pipe, right? So you're going to have one virtual function call into
2:00:50 - IO at the beginning of the pipe, probably to read from the socket or something, and then one at the
2:00:55 - end of the pipe, like maybe you're writing to the file, or vice versa.
2:01:06 - I think that answers the question. Oh yeah, and then also related to that is devirtualization.
2:01:12 - Yeah, and this is where this thing comes in: restricted function types.
2:01:17 - Yeah, and this is critical. If you only have one IO
2:01:23 - implementation in your program — which is the common case; it's not necessary, but it is the common case — then
2:01:31 - the restricted function types for your vtable will all have exactly one possible callee in all of the function
2:01:39 - pointers. And so those basically become non-virtual for free. I'm not talking about an optimization going
2:01:46 - in and detecting this. I'm saying that the front end of the compiler will just spit out a direct function call
2:01:53 - because it knows that there's actually only one implementation for a given
2:01:58 - function pointer. Yeah, that's a great point. Thank you.
2:02:06 - Oh, so many good questions here. How am I doing? I feel like I'm
2:02:12 - getting a little tired — like, I've been talking for two hours, you know?
2:02:20 - Loris, do you have any suggested questions that I missed? I'm happy to do a
2:02:26 - couple more. I've just been kind of missing a lot, so I don't know what I missed.
2:02:34 - Oh, this is a good one. So, what's on the more medium- to long-term horizon? So, not what I'm already working on, but
2:02:40 - what I might tackle after the immediate priorities are handled. That's a good one. Okay, so immediately
2:02:48 - it's streams for me, followed by
2:02:53 - the async IO stuff, and then fuzzing. We already talked about all that.
2:02:59 - Okay. Yeah. So, I am looking forward to getting more done on my music
2:03:05 - player and my music production software. I've been kind of
2:03:10 - unmotivated to work on those right this month because I really want to use this async stuff in them. So
2:03:19 - I think I'm going to want to work on that more.
2:03:27 - And that's going to make me want to do — well, one of them is
2:03:33 - a web server, so it's going to make me want to do a little bit more web server stuff. I actually think Zig
2:03:39 - can be a great fit for web programming. And —
2:03:46 - oh, that's a good one. Yeah. I think right now it's not great, and I'm not
2:03:51 - super happy with the standard library APIs related to web stuff. But,
2:03:59 - contrary to popular opinion, I think once we get the standard library worked out and the async story
2:04:04 - worked out, Zig is going to be really nice for web development — for servers, and for the front
2:04:10 - end if you're using WebAssembly. And so I guess I want to get
2:04:15 - more into the ecosystem, you know? Like, I want to be a user of Zig. I want to use Zig to make a useful application
2:04:21 - that users use to make music or to listen to music, and help out with the ecosystem,
2:04:27 - and make the package manager really convenient to use, and stuff like that. So probably for me, yeah,
2:04:33 - it's getting more into being a user, in addition to compiler dev, you
2:04:38 - know. Hope that answers that. Pinned structs — okay, so for pinned structs,
2:04:46 - the primary motivation was async/await.
2:04:53 - I think this was an accepted proposal until recently. Ta-da.
2:05:03 - Now, the new strategy for async/await does
2:05:10 - not require pinned structs.
2:05:16 - So the pin holding that feature in place is now gone.
2:05:23 - Additionally, speaking from experience with the compiler code — not just the
2:05:29 - implementation, but also as a user reading the code — it did become a lot
2:05:34 - more complicated in order to understand result locations. And I'm not talking
2:05:41 - about result location types; those are pretty handy. I'm talking about the pointers. So, the idea that
2:05:47 - if you do, say, an assignment,
2:05:53 - then the expression gets the pointer of the assignment destination, and if you have a sub-expression,
2:05:59 - it kind of gets forwarded through as the result location. Also, a lot of people like to complain about this because it can result in aliasing
2:06:05 - problems, which is annoying. So wouldn't it be nice if we just didn't
2:06:12 - have that anymore, because we don't need it anymore? So that's the reason this is now up for
2:06:19 - discussion again and not already decided.
2:06:24 - And as usual, the default in Zig is no to any new feature. So you have to
2:06:32 - win against the null hypothesis of "don't complicate the language."
2:06:38 - So, in order for this to be accepted
2:06:43 - again, there would need to be a compelling reason to add a feature to the language that is not actually
2:06:50 - needed in order to accomplish any of our goals.
2:06:56 - Feel free to ask a follow-up question about that. I hope that's enough.
2:07:03 - Any
2:07:09 - plans for syntax sugar for the vtable-based interfaces? Still no. I'm actually
2:07:14 - pretty happy with the status quo, having just done a lot of work on
2:07:21 - interface design. Oh, I have tea here.
2:07:27 - Oh, that's nice. One thing in particular that I noticed is — let me give
2:07:34 - you an example. So, I'm going to go over to
2:07:39 - this branch again. This one.
2:07:50 - Now, what you're seeing here is a manually constructed interface. So in other programming languages, they
2:07:56 - would just have the keyword "interface," or it would be a trait or something like this, and there are
2:08:04 - certain rules about what you can or can't do. So, for example, if we look at the equivalent Rust trait —
2:08:13 - Writer —
2:08:18 - there it is. Okay. So this is the Write trait. Cool. You don't get to create, you know,
2:08:27 - this field — in fact, you don't get to create any fields, because they've
2:08:32 - decided that you can't have fields on traits, and I'm sure there's a really good reason for that. But that's a
2:08:40 - decision that limits what the programmer can do. Now, here we don't have this limitation. So what I've done is I've
2:08:47 - actually pulled these buffers from beneath the vtable to above the
2:08:53 - vtable. In other words, they're here; they're not inside. So whereas previously we would have to create kind
2:09:01 - of a buffered writer as a thing in the chain, it sits
2:09:08 - instead above the vtable — in the interface, not in the
2:09:13 - implementation. Which means all these functions down here that are not generic and that are
2:09:22 - not virtual — all these functions are in the interface. So, for example, writeByte:
2:09:28 - this is, you know, an important one.
2:09:33 - The likely condition for it is that it goes here, and all it does is update
2:09:39 - the buffer, update the counter, and then return. And this can get inlined, and the
2:09:45 - optimizer can fully understand it. Static analysis can fully understand it,
2:09:52 - and it only goes into the slow case if we run out of buffer. The point is that it
2:09:58 - only rarely makes the virtual function call. And that's true
2:10:04 - even if we're sharing this code through the entire program for 100 different IO streams. We don't have to create
2:10:10 - 100 different versions of writeByte; the same version of writeByte can be reused, and it's still efficient. And it's
2:10:17 - because we put the buffer in the interface, not in the
2:10:22 - implementation. That's why you'll not find a buffered writer: because the buffer is in the interface, and you can see it's referenced all over the place.
2:10:31 - Where was I going with this? Right, my point is, you can't do this
2:10:38 - in other programming languages, because they don't let you do it. But here we can. And the only reason I
2:10:44 - got to play with this is because I didn't give myself an interface that prevented me from doing
2:10:50 - that. So I actually think it's nice to build your own interfaces.
2:10:57 - And furthermore, I think it's the appropriate amount of friction. I think that creating an interface should be
2:11:04 - kind of annoying, and you should have to put a lot of thought into it, and you should usually simply not do it.
2:11:10 - And so I'm actually quite happy with the status quo. And the
2:11:16 - chore of setting this thing up was not what took a long time in this branch. No, no, no. It was editing
2:11:22 - all of the usage code to follow the different semantics.
2:11:30 - So I'm still pretty keen on not adding any object-oriented programming, basically, is what I'm
2:11:35 - saying. Okay. Well, I should probably get to this topic then. Yeah.
2:11:44 - I think let me do one more. We'll do one more, and then we'll wrap up with this. So, let's see if there's a good one to
2:11:50 - end with here, then.
2:12:05 - Okay. Let's —
2:12:13 - wait, I'm trying to parse this one. How will tightly integrating other
2:12:18 - operating systems with the standard library work, apart from — okay, if I understand this question,
2:12:25 - it's kind of asking: is implementing the IO interface
2:12:32 - 100% of porting to a new OS, and if not, what's not included in it? I think
2:12:40 - that's kind of what you're asking. Yeah, that's a great point. I think that
2:12:46 - porting an IO implementation to a new operating system should be
2:12:54 - almost all of the porting work. I think there should be very little beyond that.
2:13:00 - But your IO implementation is still
2:13:07 - going to depend on operating system bits. So, for
2:13:12 - example — let me switch branches once more.
2:13:20 - So here we have — sorry, here we have an io_uring implementation of IO,
2:13:31 - and my point is simply, yeah, we need to
2:13:37 - call into io_uring in order to do this. So this implementation depends on
2:13:43 - std.os.io_uring. So you could kind of call this just the
2:13:51 - IO implementation, but my point is, yeah, you have to go implement all these bits that this code is going to call
2:13:57 - into. Likewise, in the same branch, we have a
2:14:06 - thread pool. Okay. So the thread pool — it's
2:14:12 - not only for one operating system; it's for multiple operating systems. But in this case we depend on
2:14:19 - std.Thread, and std.Thread obviously is going to have to do operating system
2:14:25 - specific stuff in order to spawn a thread. So we have a Windows implementation, POSIX, Linux, WASI — you get
2:14:32 - the idea. So
2:14:37 - yeah, I mean, it's mostly just implementing IO, but that does involve a bunch of different little
2:14:43 - bits and bobs, and those have got to go somewhere, and maybe those go in different files. I think you get the idea.
2:14:52 - Okay. Yeah, I think I'll start wrapping up then. And so finally I
2:14:58 - should — yes: how do we support the ZSF financially? Right. So for those who don't know, the Zig project is
2:15:06 - managed by the Zig Software Foundation. This is a
2:15:12 - 501(c)(3) nonprofit corporation — not to be confused with a 501(c)(6),
2:15:21 - which is allowed to lobby the government. We are not allowed to lobby the government. We are apolitical. We
2:15:26 - are just trying to work on software.
2:15:33 - I'm due to write another one of these blog posts. I need
2:15:39 - to do the one for last year. But here's one from the year before, so you
2:15:47 - can look here for our 2024 financial report. I'll do another one soon for 2025, but you can see how we spend
2:15:53 - our money. You can also see how we get our money. And we do like to try to
2:15:59 - keep a good slice of this coming from individuals — GitHub Sponsors, individuals, Benevity.
2:16:06 - It's good to be independent, you know. And so whenever
2:16:12 - we get individuals sponsoring us, it makes us very happy, because it helps us keep a
2:16:20 - diversified income and stay independent. So we really appreciate all the individuals who sponsor us. Yeah,
2:16:27 - and if you want to help out, that's honestly the best way you can help. Like, we've got people chomping at the bit
2:16:32 - for ZSF contracts, and if we have more money, we can give out more contracts.
2:16:38 - How am I doing, Loris? Are we good?
2:16:44 - Okay. Yeah. There's also — oops,
2:16:51 - I guess I'll just navigate to these links then. Every.org: Zig Software
2:16:56 - Foundation, Inc. Yeah. So, you can also go to this platform. We like Every.org; Every.org is pretty nice. It's also a
2:17:02 - nonprofit. And so — yeah,
2:17:08 - we're a little bit worried that if, you know, Microsoft decides to cancel GitHub Sponsors or something, we lose a lot of
2:17:13 - income. So we feel that this one is a little bit more stable. That's our primary preferred way to get money. But,
2:17:20 - you know, every way is fine. You can also do GitHub Sponsors,
2:17:27 - which is also fine if that's what's easier for you. Great. Yeah, really appreciate that.
2:17:34 - And if you can convince your company to kick us, you know, 1k a month or something — if you use Zig at work —
2:17:41 - then awesome. We'd really appreciate that. But either way, you know, we'll
2:17:46 - figure out the money thing, and we'll make sure that we stay alive and keep making Zig better and better
2:17:53 - for everyone. Okay, I think that's it, everyone. Thank
2:17:59 - you so much for visiting. Thanks for following along, and happy hacking.
2:18:05 - Yeah, take care.