Transcript
Transcript prepared by Bob Therriault, Sanjay Cherian and Igor Kim
Show Notes
Transcript
00:00:00 [Marshall Lochbaum]
Part of the reason for that is that in existing programming languages, writing vector instructions is just so hard because of the way the syntax is done and everything. These languages just aren't oriented towards this model of programming. Maybe if Singeli facilitates it enough, more people would get into it, and then they'd be able to write programs that use more vector algorithms and that do faster stuff.
00:00:32 [Conor Hoekstra]
Welcome to another episode of ArrayCast. I'm your host, Conor. And today, we have a special guest, I guess, who we will kind of introduce in a second. But first, we are going to go around and do brief introductions. So we'll start with Bob, then we'll go to Stephen, then we'll go to Adám, and then to Marshall.
00:00:51 [Bob Therriault]
I'm Bob Therriault, I'm a J-enthusiast. I've been enthusiastic about J for a little over 20 years now.
00:00:56 [Stephen Taylor]
I'm Stephen Taylor, I'm an APL and q enthusiast and just enthusiastic about being enthusiastic.
00:01:04 [Adám Brudzewsky]
I'm Adám Brudzewsky. My title is Head of Language Design at Dyalog. So that means I have to be enthusiastic about APL, which is good 'cause I am, naturally.
00:01:15 [ML]
I'm Marshall Lochbaum. You may know me as an ArrayCast panelist. You may know me as the creator of BQN, may even know me as a former employee of Dyalog or as a J programmer way back when. But today, I'm here to talk about none of those, but another language that I've been working on lately called Singeli.
00:01:32 [CH]
Fantastic. That would have been too much for me to remember. All right, so I think we have three announcements from Bob and then one from Adám. So we'll go to Bob first and then to Adám.
00:01:43 [BT]
So to start off with, starting with first things first, [01] the J-Primer is something that I've been working on through the summer to update. And you might say, "Why did I update the J-Primer?" Well, the last time the J-Primer was updated was March 29th of 2001. So there's been a number of things in the world that have changed since then, let alone in J. For instance, did you know that J once had a form editor that you could actually go in and edit forms with? But it doesn't have that anymore, so there were changes made to the primer. The reason the primer is really important is that it takes you from first principles of J all the way up to designing an application in a GUI form, which is a pretty advanced thing to do for a language. It does it in about 108 steps. And I've, I think, improved the navigation, I've made it a little cleaner, and I've made the content up to date. So now if somebody's interested in starting with J, go into the wiki, click, type in "primer", and it'll bring you to the primer. Any primer link you find takes you into the new version of the primer. And there you go. And by the way, I pronounce it "primer" because I'm Canadian and I was actually born in England. If you're in the US, what I'm talking about is pronounced "primmer".
00:03:02 [ML]
I'm the only American here, I think, and I pronounce it primer.
00:03:06 [BT]
Well, but in the US, apparently primer is paint.
00:03:09 [ML]
Well, yeah. I've heard of the primer, the paint. I've never heard a distinction in pronunciation between the different meanings.
00:03:16 [BT]
I had to look it up because my next announcement's about what Ed Gottsman's doing, but Ed is from the States as well, and he always said "primer" and I was thinking, "Am I pronouncing it wrong?" And then I finally looked it up, and apparently if you're in the States, quite often it's pronounced "primmer", specifically for teaching purposes; the paint is still pronounced "primer". But if you're in the UK or Canada, it's "primer". So that's why I say it that way.
00:03:42 [ML]
Alright, so if anybody calls me out on this, I'll just say, oh, I'm English.
00:03:47 [BT]
You'd have to ask Stephen about that, I suppose. I think he's probably the only verifiable UK citizen on this panel, since I moved to Canada when I was like six months old. Anyway, talking about Ed Gottsman's thing: I don't think we actually mentioned this on the air, but after the last episode I was telling everybody about this really neat search mechanism that Ed's been working on. It actually started as a J wiki browser. And then, based on what Adám had been doing with APLcart, which is a very powerful thing, Ed's taken that and applied it to the browser, all the Git repositories for J, and all the forum posts for J. And now with this device, with this interface, you can search through all those things by keywords or J primitives. So you can type in a phrase and it'll pull up all the stuff in the wiki that uses that phrase, and all the stuff that might have been used in any of the forums going back to 2008, I think, as the earliest forum, but maybe they go all the way back to 1998 now. Anyway, Ed's produced this thing and we're looking for beta testers on it. The thing I should let you know is that it does involve a hundred-megabyte download of information, an SQLite database, because you do the search on your own computer, and that expands out to about a gigabyte of information. So you have to have that much space and you have to be able to do that kind of a download. But once you've done that, it's platform agnostic, although you do have to be running the JQt interface, because that's what this is based on. 
And we've been working on it since about March of this year, and it really changes the way you gather information on the language. There's also a video demo that Ed did, and we'll put links in the show notes, as well as a link to Ed's email address if you're interested in beta testing. So if you go to the show notes, you'll be able to access all of that, and if you're interested, test it out. If you are beta testing, it's much appreciated if you keep giving feedback, but even if you just want to try it out, you should certainly feel welcome to do so, and I think you'll find it's a very impressive way to go through the wiki. And my final announcement is, and I think I mentioned this in passing last time, that Eric Iverson has actually produced a full-page how-to on linking JHS, which is the web-based version of J, to AWS. So if you have an AWS instance, you can actually create a J server so that any of your remote devices can log onto it, and then you've got J running on any remote device you want. Of course, J already has an iOS app, but this is sort of wider than that. This is basically for when you want to link back to your own server, and then of course, with AWS, you can decide how much you want to pay them and how big a machine you want to run it on. All that information (again, this will be in the show notes) is documented through Eric's instructions. That's it for my announcements.
00:07:04 [AB]
The APL Wiki has been mentioned a few times, and it's supposed to be this community project. [02] It was kind of boosted, bootstrapped by Dyalog, but the information there is often general, although probably skewed somewhat towards information that's specific to Dyalog APL. And therefore the people who have been working on it until now have also mainly been Dyalog employees or Dyalog-affiliated people. I'm one of those. And I would like to step down as administrator of the APL Wiki and instead have somebody come in from the community who's not Dyalog affiliated, so that when there are questions as to how things should be run there, there will be a more unbiased discussion between the administrators. So to you as a listener: if this sounds interesting to you, then contact us whichever way you want, whether it's to the ArrayCast, or to Dyalog, or to me personally by any of the ways that I can be contacted online. Let us know, and maybe we'll have a talk and see if it would be a nice thing for you to fit into.
00:08:18 [BT]
And contact@arraycast.com is the way you get in touch with us through ArrayCast, and we will pass any of that information on to Adám.
00:08:26 [CH]
Awesome. And as mentioned, you can find the links for all of that stuff in the show notes. And I do not have an announcement, but I do have an apology on behalf of the terrible audio that you may or may not be listening to. It is not my fault. It is Linux's fault. I mean, it's my fault, because I thought I had this working 20 minutes before the recording, but I think launching Zoom at the same time as Audacity is what messed this up. Anyways, my mic through my headphones, which is much nicer, is not being used, and it's using the default mic. Anyone that's been on Linux and tried to do audio stuff feels my pain. But don't leave the podcast. We will be back in two weeks with a crispy audio experience, at least for me. Everyone else sounds great.
00:09:13 [BT]
Well, what about this episode?
00:09:18 [CH]
(laughing) I mean, still keep listening. It's gonna be a great conversation. But if you're wondering what happened to Conor's crispy audio quality, it's Linux's fault. That being said, today we are here to talk about Singeli, which is a language on its own. It is also the language beneath BQN, but I think for the most part, we're just gonna be focusing on what is the Singeli programming language. And with that, we will throw it over to Marshall, who I think you know quite well by now. But if you'd like to introduce yourself again, just in case the listeners confuse Marshall, feel free to, or if you want, you can launch straight into a description of what the Singeli programming language is.
00:10:01 [ML]
Oh, no. I wouldn't like to introduce myself. I would like to introduce the Singeli musical genre, [03] both because I think it's an interesting introduction to Singeli the programming language, and also because I've taken the name from some very talented producers who have put a lot of hard work into building up this music genre. So I owe them a little respect. Singeli is a type of dance music that originates from the city of Dar es Salaam in Tanzania. And one of the first things you'll notice about Singeli music is that it's very fast. It's faster than pretty much anything else out there in terms of dance music, techno or gabber or whatever else. And so this is also a thing about the Singeli programming language that is maybe the first thing you'll notice: it's meant to be very fast when you program in it. And the other thing about Singeli music is that it's a mix of older African percussion styles with a lot of newer things; there's all the new studio technology that allows them to push these tempos up, and that makes a very distinctive beat, as well as newer delivery. There's a lot of rapped delivery and that sort of stuff. It's very fun though. So much like that, the Singeli programming language is a mix of more traditional compiled languages like C, but on top of that, controlling that, you have something that's not necessarily a newer idea, but it feels a lot newer, which is a pure functional language that works with concepts in the source code. So it's like a meta language above this that's controlling your program. It extends the idea of generics on different types, and allows you to write a lot of code that does different things very quickly and adapts to whatever conditions you need for that specific case as well. So that's sort of the very broad overview. 
Singeli is designed specifically for writing fast algorithms, but also to let you write these by working at a high level while still giving you control over everything.
00:12:22 [BT]
So I'll take on the responsibility of bouncer for this episode. Is Singeli an array language?
00:12:28 [ML]
I believe so. It's not Iversonian; that's an important thing. But, well, yeah, Singeli has an array language in it, you might say.
00:12:39 [BT]
You're talking to the bouncer at the door here, buddy. You might get the rush out.
00:12:44 [ML]
How do you draw this on the Venn diagram, right? [04] If you've got-- well, now, that's not really true, because both halves of Singeli have array characteristics. So first, at the interpreted level-- so when you run the Singeli compiler, I call it a compiler because you're running it to compile your source code to object code. But it actually really just runs as an interpreter. When I'm implementing this thing, I'm basically writing an interpreter. And in this interpreted language, what you have-- they're not called arrays, they're called tuples. So you can take two tuples, you can take the tuple of 1, 2, 3 and the tuple of 3, 4, 5, and you can just add these together. That's one thing that the base language supports. Or 1, 2, 3 plus 2. And that applies to all the built-in arithmetic operations.
00:13:34 [BT]
So quick question: you said 1, 2, 3, and then said this is a tuple. So is 1, 2, 3 comprised of two parts?
00:13:42 [ML]
Oh, yeah, Stephen isn't going to like this. There's no list notation inherently. But the way that you write the tuple 1, 2, 3 is you write a generator. So that's what a compile-time sort of function is called. You write the generator tup, and then the curly braces, which is how you call a generator, and then 1, 2, 3, and then the closing curly brace.
00:14:04 [AB]
How is that different from an array?
00:14:06 [ML]
I call it a tuple to avoid the confusion, because you have both the stuff that's happening at compile time and what you want the CPU to do at runtime. If I say an array, you might be thinking about a pointer to integers that's going to exist at runtime. [05] So that wouldn't be a tuple, because that's a pointer type. So to distinguish these, I have some slightly weird terminology about what happens at compile time. You've got the generator, which is different from a function that you can call at runtime. And you've got the tuple, which is different from a pointer that you use at runtime.
00:14:46 [BT]
And it sounds to me like a tuple is more like a process than it is an object, right?
00:14:49 [ML]
Well, it depends on what your perspective is. So this is the split personality. You can think of it as a C programming language where you get to do a lot of stuff at compile time. And in that case, a tuple is kind of, you know, meta information or something, because the CPU never sees this tuple. I mean, well, it does when you're compiling, but that's not what you care about. But on the other hand, if you think of it as an interpreter, then, I mean, it's just the same thing as k. You've got a list, and, sure, it's called a tuple, but you've got this list and you can do whatever you want to it. You can add it to other lists. You can, with a built-in library (you need a library for this), do a plus scan on it. You can do all sorts of other k operations. So in that sense-- and this is what one of the tutorials goes into; it's sort of an oblique way of looking at the language. It starts out by just ignoring anything that's compiled and says, oh, this is an interpreted language, and here's how you run interpreter stuff with it. And it writes a Fibonacci program-- I think it starts with "hello, world," actually. So you can actually use this as if it's just a pure functional interpreted language. And it looks weird, but not that weird. I mean, it's recognizable as a programming language. So in that sense, you've got the interpreted Singeli, which, I mean, it's not an Iversonian language, like I said. It's not the most array-focused language, but it seems pretty much like an array language to me.
00:16:18 [AB]
So it's a functional, non-Iversonian array language, and type-wise?
00:16:26 [ML]
It's dynamically typed. Right.
00:16:29 [AB]
OK, so does that have combinators?
00:16:31 [ML]
There's a bind generator built in, so sort of. Although bind you don't really use anymore. And of course, it has closures and everything, [06] so you can define your combinators.
00:16:43 [AB]
But you can define combinators in every functional programming language.
00:16:47 [ML]
You can define combinators in every functional programming language. I mean, if it doesn't have closures, it probably doesn't qualify as functional, but not every language has closures.
00:16:53 [CH]
Can I ask-- you keep using the word "generator", and I think that's maybe tripping me up, because "generator" is a very overloaded term in...
00:17:04 [ML]
That's part of the point.
00:17:06 [CH]
Programming language terminology. So, generator: when I hear generator, I think of like generators from Python, which I think is probably the ... [sentence left incomplete].
00:17:18 [ML]
Yeah, that's maybe one of the meanings it doesn't have, where you, you basically construct a list by yielding new elements.
00:17:25 [CH]
Where you basically construct a list by yielding new elements. You can think of iota as a generator or a factory function, in that it can be lazily done, right? But that's not-- from your explanation, that does not seem to match up with the way that you're using that word. So can you explain what you mean by the word generator?
00:17:42 [ML]
So from the interpreted perspective, it's just a function. The problem is that there are also functions that you can call and that you can export to C as function pointers and that sort of thing. So I don't want to call them functions. But what the word generator also means is, like, the primary use of the generator is to generate code. So in that sense, it's a generator. And also it's an extension of the idea of a generic, so the word generator kind of fits in with that nicely. To explain why it matches up with this generic concept, consider the idea of a generic function. You have a function which is generic over a certain type. And maybe this function takes a pointer to that type and it returns one of that type. It could be a function that adds up an array or something. In Singeli, there is a dedicated notation for this. But all this really is, is a generator that takes a type and returns a function that it creates when you call it. So in that way, a generator is just an extension of the idea of generics, one that's kind of designed in such a way that you can use it more conveniently as a programming language.
00:18:59 [CH]
I guess I'm still trying to wrap my head around calling-- your quote-unquote arrays in Singeli are called tuples, versus the arrays that we know from interpreted Iversonian array languages like APL, J, K, et cetera. And that's a tuple in the form of a generator. So are we just supposed to ... [sentence left incomplete].
00:19:23 [ML]
Well, no, a tuple isn't a generator. A tuple is like a generator in that they're both concepts that only exist at compile time. But I mean, a tuple is just a compile-time array, and a generator is just a compile-time function in this meta-dynamically-typed language that you have controlling everything.
00:19:42 [CH]
Wait, a tuple is a compile-time array? What's an array, then? I thought the reason that you were just delineating between tuples and arrays was because arrays were the compile-time thing, and the tuple was the interpreted generator thing.
00:19:55 [ML]
No, so there's some confusion about compile time. By compile time, I mean that the tuple only exists at compile time. Arrays-- I mean, the data in the array only exists at runtime. At compile time, an array looks like a pointer type, which is kept purely symbolically. All it knows is that its type is a pointer to whatever element it has. So arrays, I would say, are a runtime construct. But of course, you have some compile-time stuff that knows about them, but it's not really working with the array; it's working with the operations that you would apply to the array.
00:20:33 [BT]
So a tuple is at compile time, and it ends up creating what normally we would think of at runtime as an array. Is that right?
00:20:43 [ML]
No, it doesn't exist at runtime, the tuple.
00:20:47 [BT]
No, no. The tuple exists at compile time.
00:20:48 [ML]
Yeah, yeah. And it's purely an organizational concept. So one thing you can have is a tuple of types, because in this interpreted language, types are just first-class values. [07] There's nothing special about them at all. They're just like symbols, basically. There is a literal type that's called a symbol, and that's a string. But types are just data. And so you can put these types in a tuple, of course, because tuples hold stuff. And then if you call a function on a variable whose type is a tuple, in order to compile that, what it'll do is expand this into three variables, actually-- well, I didn't say how long the tuple is. If the tuple has three elements, it'll expand it into three variables that have those types in sequence. And so at the end, once you've compiled it, you get a function that's compatible with C, that you can export to C and that will run, where all the arguments are just flattened. So the tuple doesn't exist anymore. It's gone. You can't figure out where the variables came from. I mean, I guess the variable names have some information embedded in them, but that's just for debugging purposes, really.
00:22:04 [BT]
So are you copying the functions at that point, one for each element of the tuple? Is that right?
00:22:10 [ML]
It makes a new variable, like a runtime variable, for each one. And I guess I should say: what Singeli does now, and so what I'm going to mostly talk about it in terms of, is generate C code. And the setup is that it actually generates an intermediate representation, IR, that is structured in more or less an assembly style. So it'll say: start this function, add this variable to this variable, mutate this variable with this new value, and so on. And then there's a back end that compiles that to C. So there could be other back ends. Right now, a C back end is very convenient because with the other stuff that we're using it for, including the BQN implementation, it's just convenient to be able to hook in directly to C. So generating C code is a very easy way to do that.
00:22:59 [CH]
So let's recap, because I'm pretty confused. A tuple is a compile-time concept that does not exist at runtime. Array is a runtime-- I don't know if you want to call it a concept.
00:23:11 [ML]
Yeah, I mean, well, I don't really say it has arrays. I mean, it has pointers and you might consider a pointer to be an array.
00:23:18 [CH]
OK.
00:23:19 [ML]
And that was enough for me to avoid using the word array, but it's not, really. All it has at runtime is memory, and you have pointers to it. And so you can load and store at a particular pointer.
00:23:32 [AB]
Do you also have vectors?
00:23:33 [ML]
No. Oh, yeah-- so yeah, this is the thing. I said that both halves sort of have their own array programming.
00:23:44 [ML]
This interpreted runtime view and the compiled compile-time view. [08] So I've been talking about this runtime view where you have the tuples and you can do array programming with them if you want. But also, at runtime, when you compile it, it's not as much of an array language because, like I said, in memory, all you've got is pointers and values. But the CPU supports these vector extensions, which I'm not sure I'd call array programming exactly, but there's certainly a lot that they have in common with array programming. And I mean, this was the original reason we wanted to design Singeli, because C is bad for all sorts of things, but it's especially bad for working with vector instructions. So you have these vector instructions where the CPU is going to hold for you a vector register that might have eight 4-byte integers or something like that. And then there's an instruction that adds two of these registers together as 4-byte integers. So in a way, that's also vector programming, right? You're able to add multiple values together at once. And there are all sorts of different instructions that work on lots of values. But it's always a fixed number of values. So it's not as much like an array language, because you can't just say, oh, I have an array, I'm going to do this thing to it. If you want to do that, you've got to split this array into chunks that are the size of a vector register.
00:25:21 [CH]
So we covered tuple, a compile-time concept, and array, not really something that is literally there, but morally, you can think of the pointers at runtime as arrays. When you said initially it's kind of a C-inspired language, that inspiration was not the arrays in C?
00:25:43 [ML]
Yeah, well, I mean, C just barely has arrays, right? It converts them to a pointer at the drop of a hat.
00:25:50 [CH]
Right, right.
00:26:03 [ML]
So C++, yeah, definitely. [09] You've got all sorts of vector and collection types. In C, I mean, it's sort of trying to cover over pointers so that they're arrays, kind of, but it doesn't try very hard to be convincing. So I didn't feel this is a very useful aspect of the language.
00:26:10 [CH]
That's a good point. I think my brain defaults to the C++ std array, which is a compile time, fixed length, data structure, which is definitely not (Marshall agrees) what the moral equivalent of an array (aka, pointer at runtime) is. OK, so that much makes sense. You've got your two halves, your interpretive view ... [sentence left incomplete].
00:26:34 [ML]
Yeah. And I mean, if you're programming in C, you'll still think of it as: "oh, I'm passing in an array", even though the actual type is a pointer (Conor agrees) and a length. So it's easy to think in terms of arrays. And I mean, pretty easily, you can form the tuple type that has a pointer and a length. So in Singeli, that's even more convenient and you can pass an argument that has this tuple type. I mean, you could even do this and always use utility functions for working with the arrays and never even deconstruct the tuple. So in that sense, if you want to think about programming at the array level, you can do it. It's just that there's no sort of abstraction that prevents you from going down into the level of the pointer, which is what the CPU is actually doing. So Singeli is designed to have this low-level control. I wanted you to be able to immediately go and say: "no, what's actually happening on the computer, because this is what I want to think about".
00:27:29 [CH]
You know, that makes sense to me now. So you've got your tuples, which (like we've mentioned multiple times now) are only a compile-time concept. But you talk about these two views, the interpretive view and the compile-time sort of view. If my understanding is correct, the reason that you're delineating between these two things is because the Singeli code that you write is interpreted to generate an IR, which is then used to generate C code, which is then compiled, correct?
00:28:10 [ML]
Yeah. I mean, what you do with the IR is really just a detail. By the time the IR is created, all the compile time stuff has happened. So the IR is just a representation of what you want the CPU to do, which, I mean, it's not going to be exactly the same, because once you pass it through C, it's going to do all these optimizations and stuff. The idea is that the real important compile step is this interpretation that runs all this in Singeli and generates the IR. And at that point, all you have is low-level code, and there's no more concept of any sort of interpretation. There's no tuples, no generators, none of that. So at that point, I think of it as compiled and of course because you can't run IR you need to further compile it to actually get it onto your machine, but that's not terribly interesting even though it's difficult.
00:29:05 [CH]
Yeah, I think my brain is just hung up on the semantics. Like you're basically saying your compilation is interpretation, which is a bit of an oxymoron [laughs].
00:29:32 [ML]
Well you might think so but honestly I'm kind of surprised that this idea hasn't come up elsewhere although maybe it has. I definitely haven't seen it.
00:29:40 [CH]
That was actually one of my questions: is there some example or existing programming language out there that is similar to this that we can hang on to? Because this is definitely sending ... [sentence left incomplete]..
00:29:51 [ML]
Well, C++ templates are Turing-complete, aren't they?
00:29:54 [CH]
Yeah, but that's a compilation technique called monomorphization, [10] which is pretty well known. Like, is that ... [sentence left incomplete]..
00:30:00 [ML]
I bet it works a lot like interpretation. See, the thing is, all these compilers actually are doing something that's a lot like being interpreters. They just don't think about it this way. And as a result, they design these meta languages that (and I'm not saying this is necessarily the wrong choice, but they have these meta languages that) aren't good for programming. So you can program with them. Haskell is another one. There's even a link to this on the Singeli website. Somebody's done a fun example where you use Haskell's type system as a programming language. But it's just such an inconvenient programming language: nobody wants to think about it that way. And the idea of Singeli is: "why don't you think about it that way?" Because it's actually a really useful way to program.
00:30:53 [CH]
So wait, we gotta backup here so I can ... [sentence left incomplete]. I was in the boat for a sec and now I'm back in the water. So I would need to go look up what the definition of monomorphization is in the context of C++, but my ... [sentence left incomplete].
00:31:08 [ML]
Yeah, and I mean, I don't know exactly how any of that stuff runs.
00:31:11 [CH]
My expectation is that they would describe it as code generation, [for] which at least I need to, like, rewind a bit. So what is the delineation here between interpretation and compilation, and then throw in code generation (aka, in this example that we're using with C++ templates, monomorphization). And also for the listener that is totally lost: in C++ we have a template system which is morally the equivalent of something like parametric polymorphism, and it is specifically called monomorphization, which basically means you can write a function like min or max that works on multiple different types. So I could write a min function that takes the minimum of two integers by just going "int min(int a, int b)", and then I can implement it so that it returns the minimum of my two ints a and b. Or I could template that with some syntax, and then instead of putting (int a, int b) as my parameters, I could put my template T. So (T a, T b), where T is just some type to be determined at a later point in time. And what the compiler does is it's going to go and scan your entire program and see all the different T's that you actually need and use, and then it's going to go and stamp out an individual function for every single one of those use cases. So if you use your generic min function with ints or floats or int64s, it's going to go and stamp out a version for every single one of those, which is why it's called monomorphization. It's monomorphizing the generic function to have an instantiation for every single version of that function that you need. Which is why (like I said, I'd have to go and look up what Wikipedia and the standard call it) I would expect it to be something like code generation. You are writing one generic template function and then going and generating (at least the compiler does this) a bunch of different monomorphized functions. Which brings us back to you, Marshall. 
So what do you see as the difference between interpretation, compilation, and code generation?
00:33:32 [ML]
Well, first, I mean, that's a pretty great description of what Singeli does [Conor laughs]. So I would say the only real difference with Singeli is ... [sentence left incomplete]. I mean, it will end up expanding this template for every type that the template is instantiated with. And I mean, instantiation, what I'd call it in Singeli, is calling a generator. So you have a generated function, which is a generator that returns a function. And then elsewhere, when you write it, it'll have almost exactly the same syntax as a C++ template instantiation; it'll have curly braces instead of angle brackets. So you'll write function, brace, type, brace, and then parentheses, and you call it. And the way it instantiates this function at compile time is by interpreting the code. For every different type this function is called on, it will run the function in the interpreted view. So it runs the code inside the function again with this type as the argument. And as part of that, it creates a compiled function. And so that's the same thing as instantiating the function in C++. So, yeah, if the question is, what's the difference between interpreting and compiling here, I mean, it's just a difference of viewpoint: that's all. The same process you can think of as either interpreting something that has the side effect of producing a compiled program, or you can think of it as compiling a program that has a particularly sophisticated macro system (macro or generator or generic or any of those things).
00:35:17 [ML]
So maybe to give my distinction between an interpreter and compiler, I do think these are two different concepts. What I would say an interpreter is, it's just something that takes source code and runs it and does whatever the code says to. So maybe that's printing output to the screen; maybe that's reading and writing files; maybe that's whatever else. So an interpreter takes the code, does what it says to do. [With] a compiler, you also feed a code in, but what comes out, it doesn't do the things you say to in the code. Instead, it produces new code in a different language that does the same things. So a C compiler takes C code in, and it produces, generally, machine code coming out. And then to run this machine code, you need another interpreter. The CPU is one common example of an interpreter. So you feed this code into your interpreter (meaning that a compiler plus an interpreter would also give you an interpreter). But the thing about Singeli is that ... [sentence left incomplete]. I mean, you might say: "well, it can't be both a compiler and interpreter; there's this category error, right?". They have different outputs. So they couldn't possibly be the same thing. And the resolution to that is to say: "well, Singeli is an interpreter if you consider the thing that the source code is telling you to do to be to generate this compiled program". And also, the other thing that you can do at compile time is run the show generator, which doesn't do anything but just prints (for debugging purposes) what value it was passed. So it's just a print or whatever function. In the interpreted view, you say these actions are what the source code specifies. The source code is specifying that you build a program as part of the way it's run. And then in the compiled view, you say: "well, I mean, yeah, sure, there's this show side effect; that's just a debugging thing". 
But what the code is telling you is that you've got maybe this function that's generic over a type and there are these instantiations here and so on. So you're translating this very complicated thing that tells you what to do, but with all these generics that are this meta language on top of it. And you're compiling that to this IR representation. It's only a difference of viewpoint whether you see this as an interpreter or as a compiler.
00:37:53 [CH]
I think I completely understand but I'm sure I don't. It feels like I'm back in the boat. And yes, this viewpoint is very confusing. I actually, I can think of and I will ask of a couple projects that I think do the same thing that you've described, but I think of them as compiled. And the reason that I think of them as compiled is because if the thing at the end of the day that you have (actually, that's interesting) ... [sentence left incomplete]. But no, we're talking about Singeli, not BQN. If the thing that you have compiled at the end of the day is an executable, so my guess is that you write Singeli code; you invoke the compilation process, which involves a first step of interpretation that generates you an IR that will ultimately be compiled; you end up with an executable that you are then going to run, correct?
00:38:53 [ML]
Singeli is right now used in CBQN. So it's compiled along with a bunch of C sources. It's not the entire interpreter. It's the parts that have a bunch of vector instructions or other kind of tricky programming that we wanted to speed up.
00:39:05 [AB]
So Singeli is used in implementing BQN and Singeli is implemented in BQN, am I right?
00:39:11 [ML]
That is true [chuckles].
00:39:12 [CH]
Yeah, I think my guess is that most people out there would share my viewpoint that it's compilation overall. And so here's a couple of projects that I don't fully know the ins and outs of (I actually don't know how Aaron Hsu does his first bit of this), but would you consider Co_dfns to follow the same process? [11] Because from my understanding, he has a first step of his program, which you may or may not call interpretation (I'm not actually sure about the details of that), that generates IR in the form of C++ code that uses the ArrayFire library. And then he uses that to compile an executable that will run your accelerated Dyalog APL code, which sounds very similar (spiritually or morally) to what Singeli is doing here. The question, I guess, is: is that first step that Aaron is doing with his Co_dfns compiler interpretation? I don't know enough details, but I'm interested to get your thoughts.
00:40:19 [ML]
Yeah, so maybe you could consider it that way. The thing about Co_dfns is the source language is basically APL. There's no meta language there. You don't have something telling you: "oh, I have this one function, but actually I want this to be expanded, before I even run it, into all these different functions to apply with different parameters". The APL source language doesn't even have static types. So even the concept of saying: "well, compile this function for that type, and for that type, and for that type, when you're compiling it", it's not really there. So I think probably you could still frame Co_dfns as an interpreter, but it would be an interpreter for an extremely simple language where the only operation is really concatenation. And as such, this isn't a language that you can program in. It's nowhere near Turing complete. But the way that you would think of that is, if you've got your program "A + B + C", you would say this is actually running a function "plus" at compile time, but what the function plus does is not actually to add things, but instead to tell you: "take these two values that I previously computed and generate an instruction that adds them together". And then the result is a pointer to this instruction that you'll be able to use later. So when you do "A + B + C", it adds B and C. And these B and C would just be symbolic values that represent something at compile time. And then plus tells you: "all right, generate this instruction B plus C, and [the] output is going to be temporary value 156". And then the next plus is adding A plus the result of B plus C, which is this temporary value. And so it generates this instruction: "add A and temporary value 156, and this is temporary value 157". So in that sense, you could say this is an interpreter, but all the functions this interpreter is using are exactly the same thing. They just do this stupid thing of telling you: generate an instruction to actually do what I'm supposed to do. 
And so in this way, thinking of it as an interpreter is pretty useless because all you're doing is taking a very simple process and jamming it into this model of what a compiler is doing. I mean, on the one hand, yeah, the interpreter handles all the syntax and stuff, so the compiling is like a form of interpretation. The compiler can say: "I'm not going to analyze the syntax; instead, my interpreter is going to analyze it". But when you have such a simple compiler, this doesn't seem at all like an interpreter to me. Whereas something like the Singeli compiler, which does do these abstract additions, but also calls generators, creates tuples, does all this other stuff, can do array programming. It's much more useful to think of that as an interpreter, because then you really understand what operations you're allowed to do.
00:43:55 [CH]
I think that makes sense. A part of your delineation between other languages or projects that you could argue do similar things is the sophistication of the generic capabilities that you described earlier, of the code that ends up getting generated. So the second example (actually, in the back of my head, now based on your answer to the Co_dfns one, I think I have a pretty good idea) is Single Assignment C, [12] which we've talked to Bob Bernecky about in the past, and I don't think that one would be in the same category. I mean, you can speak to it differently if you want, but I wouldn't even consider that interpretation. I would think it's just sort of a multi-step compiler: you're compiling your Single Assignment C code into C code, and I know there's multiple backends as well, so I think you can actually generate other code, and then that code is compiled as well. But Single Assignment C doesn't have any capabilities really for generics or closures, because that's just the way it was designed. So there's no generic folds or things like that. So similar to your answer for Co_dfns, I would assume that even if you were to make an argument for Single Assignment C's first step being interpretation, or maybe not interpretation but comparable to what Singeli is doing, it falls short in that the generic and code generation capabilities of Singeli are far greater than those of Single Assignment C.
00:45:35 [ML]
Yeah, it'd still be pretty silly. But one thing I can say is, if you have a static language (and I don't know if Single Assignment C is compiling to C; it might just leave the type handling up to C), but if you have a language with static types, and you're going to compile, and you're going to do type checking as part of that, then you have an extra step when you write "A + B + C": you can move another thing into the interpreter, which is the type checking. So when you're adding B plus C, I said for Co_dfns these would be purely symbolic values; all they tell you is there's a value. Now you want to know the value and its type. If you have B plus C, maybe B is an 8-bit integer and C is a 16-bit integer. So if you have type coercion, which Singeli doesn't (well, maybe we'll talk about all the flexibility you have in Singeli; you could define it for Singeli, but it doesn't by default), what you'd have is, all right: "I want to add a value of 8-bit integer plus 16-bit integer". So what I'm going to produce is an instruction saying: add these two values of different types. And maybe it'll even produce another instruction that says: raise the 8-bit integer up to 16-bit width. And then it'll produce its result, which is, again, just a handle of where the value it produced is going to be, but along with the type of that value, which maybe would be a 16-bit integer. So you have a little more work. Still, of course, no actual programming going on here, but you can see a little more why the model of compiling as interpretation gives you something.
00:47:15 [CH]
All right, well, I've been dominating the questions, but I'm safely on the boat [laughs]. How's everyone else doing?
00:47:24 [ML]
And I mean, this took a long time to come up with. There were all these fundamental questions where (and I was working with dzaima on this too) we just didn't know what we were trying to do. I think I had the idea that it is an interpreter fairly early on, but it's really hard to work out the details and figure out what the thought processes are. Really what helped was going deeper and deeper into this interpretive model. But I mean, there are all these fundamental questions about "what does this thing mean; what does that thing mean" that were very difficult to sort out conceptually. So it's not at all surprising that this is a really difficult thing to get into at first.
00:48:09 [ST]
Would you care to give a summary of what you think it's brought you or might get you?
00:48:15 [ML]
Yeah, sure. So we've gotten a lot of practical value out of Singeli. We use it, as I said, to speed up particularly the vector instructions in CBQN. Now, of course, there are better languages for this. But CBQN [13] is made primarily in C. I mean, we could try something more modern like Rust or Zig, or C++ even, that gives us more template capabilities. But I've worked with C++ templates at Dyalog. And I don't feel like anything really gives the capabilities that I wanted, which are to have this ability to just (without even thinking) generate one function that's going to work on a bunch of different types. And also to specialize and say: "well, if the type is this and you're on this CPU architecture, actually I want to do this thing". So Singeli just makes that sort of programming, where you need to deal with a bunch of different types, much easier. And I've been able to do a lot of things in BQN in a week or two that I'd never finished in my years of working at Dyalog, because there was just too much complication around it. One thing I never finished (well, I think there are a few things, but one of them) is the functions to select from a small array. So if you have an array that fits in a vector register, the CPU actually has selection instructions for you that take an index into a vector register. Well, they don't just take one index; they take a whole register of indices, and they select all those different values at once. So normally on the CPU, if you're selecting elements 1, 5, 0 and 3 from an array, you have to do each of those with an individual load instruction. And in terms of array language operations, that's one of the slowest things you can do: a single load for every element. But if the array is small enough to fit into a vector register, instead you can use these; they're called shuffle or permute instructions, and those are much faster. I was able to use those in Dyalog. 
I used the 16-byte vector extension for x86, and I had that for all the different types. There's only selection using one-byte indices, so if you want to select two-byte elements, you have to do some weird arithmetic with the indices. But I got that working for these types. It had a whole bunch of macros and everything, which are just difficult to work with. When I did it in Singeli, it was a lot easier to take this (and actually CBQN now has these working in the 32-byte vectors of AVX2). I don't remember (I have a bit of a problem because I don't have any testing set up on ARM yet), but I think we've got this running on SSE as well, and probably ARM, but I don't remember. If we haven't gotten it running, it's just a matter of: dzaima needs to test it [chuckles]. It's not hard at all to adapt these things to different architectures and use basically the same code, where with the macros, that looked like something that was gonna be really difficult. One thing about C macros is that you can pass your type in as a macro argument, but if you want the width of the type ([or] if you wanna know whether the type is signed or unsigned), that needs to be another argument that you pass in separately. What you can do with Singeli generators is you just pass the type in, and then from the type you can compute, because you have a full language available to you. You can compute the width of the type and anything else you want to know about it. Not only that, you can pass in just a variable of the type, and the variable keeps track of its type, so you can ask: "hey, what's the type of this variable", and then do stuff based on that. Which, I mean, I think a lot of this is stuff you can also do with C++ templates, but like I've said, I've worked with these, and they're really ... 
[sentence left incomplete]. If you're working at this level, you want to do a lot of programming at the template level (I mean, this is even called template metaprogramming), [14] but it's not designed as a programming language, and so it's very hard to program in. In Singeli, it is designed as a programming language, so it's pretty nice.
00:52:42 [BT]
So if I've got this straight, there's a process from taking the original, whatever you're trying to do as a user, that goes through a path of being interpreted and compiled, all the way through to the point where it's got an implementation, usually in machine code, I suppose, for a particular processor. And what Singeli has done, I think, from what I understand, is two things. It's moved the process along that path, maybe to a different point than you would normally have, but it's also changed the interface at that point. So you have control. If you just move the process along, it's just going to get either more complicated or less complicated depending on the decisions made upstream or downstream from where you are. But Singeli's also sort of taken that interface and then adjusted it so that you have more control over the things that you wish to have control over at that point. So you have processes at that point, whereas in other interfaces, you might have to rely on what decision might have been made upstream. Is that kind of a way to look at it?
00:53:54 [ML]
Maybe. So what is ... [sentence left incomplete].
00:53:56 [BT]
Upstream being towards the original user and downstream being towards the final implementation.
00:54:01 [ML]
Yeah. Well, and the user being the programmer who's writing Singeli, right?
00:54:05 [BT]
I'm actually thinking more maybe in terms of BQN. You'd write something in BQN, it gets passed along, eventually hits a Singeli interface, and that Singeli interface is downstream from the BQN programmer, but that Singeli interface is also structured in such a way that you can control things, so that as you pass it on down, you've got more control over the things that are important to the original BQN programmer.
00:54:35 [ML]
I guess so, although the only thing that's really important when you're doing these implementations is the performance. Of course, there's not just one single performance of a piece of code, but the performance across different inputs and so on. From the perspective of a BQN user, they don't care how this thing is compiled. So what Singeli is giving you is not really any different from writing a whole bunch of very low level C functions that are all very explicit and say: I'm writing this function on this type, and it does this thing and that thing. Or even writing it in assembly if you wanted to, except that it's portable across architectures. But what Singeli does is give you a more powerful way to create all these dumb functions that do dumb things. It lets you figure out what's shared between them: I'm going to pull it out into a generator that performs this shared functionality. And one nice thing about the generators is they're so flexible. You can take a generator that does one thing, like a generator that does a plus scan or something, and you can either pass it numbers at compile time, or you can pass it these runtime values (they're called registers) that represent values at runtime. And so you can take the same computation that you would perform at compile time or runtime and say: well, in this case, I know what the values are going to be, so I'm going to go ahead and compute them during compilation. And in this other case, they're not known yet, so I'll pass in registers. And so that sort of flexibility gives you much more ability to write all these dumb functions that are well suited to the exact specific scenario they're designed for, as opposed to having a bunch of functions that are all generated from one piece of code, where it's so hard to make specific choices within different functions that you can't easily specialize for the specific type you're dealing with.
00:56:35 [BT]
Yeah, and that's what I was referring to, sort of, as the interface in Singeli being different than what the C interface might be at that point. The interface in Singeli is more flexible. So you've got points of adjustment that make it easier for you to do things that would be beneficial, possibly, for an upstream BQN programmer.
00:56:56 [ML]
And so, yeah, two particular things are: you can easily check what type you're working with, and not just check if the type is one specific type, but check if it's one of several. x86 [15] instructions are really nasty about this, because in some of the early vector instruction sets there's a signed 16-bit minimum and an unsigned 8-bit minimum, but those are the only two. So you can choose and say: "well, do I have one of these fast minimums?" by comparing the type against two different things. And this is such a dumb, easy thing in an interpreted language. Oh, I have a variable, and I want to check if it's A or B. That's the easiest thing you can do. But if you're programming in a template system, it's so far removed from the idea of an interpreted language that it can actually be a difficult thing to test for. And so, yeah, with that interface, you have checking on types. And also, you can check: what is the architecture that I'm compiling for? What is this architecture going to support?
00:58:10 [BT]
Does it give you an advantage in looking towards perhaps compiling onto a GPU? [16] Or does that interface not give you that advantage at all?
00:58:20 [ML]
No. I mean, I guess you could compile shader code with it, probably, because there are some GPU languages that look a fair amount like C. So you could probably have some modification of Singeli that does this. But when you write code for the GPU, the way that scalar operations correspond to multiple operations is that each one of them runs on a core, which is a completely different model from the vector programming or anything like that. So there's not really any clear way to use Singeli for much benefit on a GPU.
00:58:54 [BT]
I think I'm out of the water. I think I'm in the boat, but I'm keeping my life jacket on.
00:59:02 [CH]
I mean, I think it's super interesting now that I've understood it. My mental model for it now (take it or leave it; maybe it's completely wrong) is that there are many programming languages, I think most of them statically typed, that have systems for this kind of functionality. You know, C++ has templates, Zig has comptime, Rust has traits and generics, other languages have generic systems, all with various levels of ergonomics. You know, C++ famously ... [sentence left incomplete].
00:59:47 [ML]
And you can add Haskell type classes to that.
00:59:50 [CH]
Yeah, yeah. Yeah. I mean, Rust traits are essentially type classes borrowed from Haskell. And Swift protocols, you can add a bunch. D has, I think it's called, compile-time constraints. And every one of these statically typed languages has a varying level of niceness or ergonomics around these systems. I think C++ famously has the worst one because it was invented 40 years ago.
01:00:24 [ML]
Yeah, I feel kind of bad about comparing to C++ all the time. That's what I have experience with. Not that I've never used other generic systems.
01:00:33 [CH]
I think most C++ devs know, though, that TMP, template metaprogramming via templates, is ... [sentence left incomplete].
01:00:42 [ML]
Yeah. I mean, my point is it makes Singeli look maybe better than it actually is to put it up against C++.
01:00:47 [CH]
Yeah. But I mean, it's an extremely powerful system, but it is very, very difficult. If folks are interested, we'll add some links to the show notes of talks by Bartosz Milewski, [17] who is of Category Theory for Programmers book fame and functional programming fame. Before he became well known for that, he gave a bunch of talks at the C++Now conference from the years, I think, 2012 to 17, where he was showing that template metaprogramming was basically a Turing complete functional language, and doing all this magic stuff. But it looked horrendous. Anyways, the point that I'm trying to get to here is that all of these languages have some facilities for doing this stuff. Some are nicer than others. I'm not sure if you know much about Zig comptime; I personally honestly don't really know that much, but I know that many people like it. Anyways, it seems like what Singeli has done here is take that language within a language and made it first class: this is what this language is about. And because of that, it's much nicer than any of those sort of languages within a language that I just mentioned.
01:02:06 [ML]
Yeah, well, it's worth comparing to comptime, because that is another pretty good effort, and Zig and D, [18] I think, are at least two of the big names that have some claim to have done this well. And I mean, I said I've never seen anybody do the thing that Singeli does, and it's a reasonable question to ask: what about these languages that have compile-time computations? Isn't that what you're doing? So the big difference between those and Singeli is that in those languages, the compile-time language is designed to be the same language as the runtime language. So in particular, this means your compile-time computations are typed. I don't think it actually does this, but it's a little like you compile your program once and figure out all the types of the compile-time stuff, then you run the compile-time component, and then finally your program is compiled. And the difference in Singeli is that these are designed to be two different languages. They do share a bunch of syntax; there are a lot of constructs (like the addition I talked about) that have both an interpreted view and a compiled view on them that are basically the same: if you write A plus B, it adds two things. But with a language that's different, what you can do is easily make types first-class objects without having the concern of what is the type of a type. So I have what are called kinds. The kind of a type is that it's a type. The kind of a tuple is that it's a tuple. The kind of a generator is generator, and so on. But these kinds aren't types, and they're kept dynamic. So you never have to declare the kind of a variable within the program. And you don't have the concern that this comptime variable is a type, so it needs to be declared as type type; but then the type of type has to be a type, and the type of that has to be a type, and so on. And that leads to a lot of foundational issues and difficulty. 
And Singeli just sidesteps all that by saying: well, it doesn't have a type; it has a kind, [19] and these are dynamic, so you don't have to write them in the program. And it's all done. I'm not saying one of those is necessarily better than the other. Singeli development has felt very smooth. I don't have to introduce a lot of complicated theoretical concepts or anything, just because all I'm doing is designing an interpreted language where some runtime things are first-class values. So that feels a lot smoother to me from a theoretical perspective. You've also got this thing where, like I said, you can apply a generator either to a compile-time number or a runtime number. And that works because of the dynamic typing. I don't have to say what types the generator takes, because it just takes whatever you give it. So being fully dynamically typed has some practical use, too. So yeah, I wouldn't necessarily say that the Singeli model is better than the comptime model, but it has some advantages.
01:05:04 [BT]
So as the bouncer at the array language nightclub, I feel like I'm holding the caterer at the door. And there's a bunch of array languages inside who are asking me: "Why are you holding the caterer at the door? We use the caterer, we eat the food, let him in."
01:05:24 [ML]
And the caterer is just not actually that cool.
01:05:27 [BT]
Caterer doesn't have to be cool. Food's cool.
01:05:30 [ML]
I guess much like a caterer, the focus of the dance party is not on (hopefully it's on Singeli music, but it's not on) Singeli the programming language. I mean, I do feel like it's very peripheral to array programming, but at the same time, when I'm doing this compile-time programming in Singeli, I think of it like k. And I designed the tuple library, which is a built-in library; you have to explicitly say include 'util/tup'. But once you have that, the Singeli built-ins don't act any differently from generators that you define. So at that point, there's no difference from using a language that has all these tuple things built in. And it's got scan and fold and flip, which is named after the k function that does a transpose. And so I feel like I'm doing a little bit of array programming. It's not the sort of intense array programming I'd necessarily do in BQN; it's more verbose. But, you know, to me it feels like it belongs somewhere on the edge of this "array languages, but not Iversonian" Venn diagram.
01:06:35 [BT]
And I guess we probably should have mentioned right off the top, you have done a tutorial on Singeli. We'll include a link.
01:06:41 [ML]
There are three partial tutorials, [20] contrasting with BQN's three and a half. The way I did this was I split it up, and there's one tutorial that explains to you how to use Singeli as an interpreter, and eventually gets to the idea that you can also compile things with it. And it describes this as doing symbolic evaluation in the interpreter. And there's one tutorial that starts with an explanation of how you can do things in Singeli just like in C, and all that's different is a little bit of the syntax; it eventually gets to more complicated macro concepts. These are all readable standalone, but there's more stuff that I want to add to them. And then there's one where the narrators of these two tutorials, who have very different personalities, get together and write something; actually, something that we've discussed on the ArrayCast: they write a windowed minimum function in Singeli. And I haven't gotten to the point where I show the performance of this whole vectorized function, [21] but I've written all the code and tested it. And I have the method that uses a queue that I talked about. And I have this other scan-based method. And the scan-based method, when the windows are even slightly large, is somewhere around 10 times faster than the queue-based method. So like I said, total array superiority on that problem.
01:08:11 [BT]
Well, when I read that third tutorial (in fact, I read that third tutorial first), it didn't make a lot of sense to me. Now it does, because it is the two points of view, and it's a conversation between them about how they're looking at the problem.
01:08:26 [ML]
Yeah, and there's no quotes or anything. So yeah, maybe that's not ... [sentence left incomplete]. But it has links to the other two, so.
01:08:31 [BT]
Yeah, yeah, yeah, no, but it.
01:08:33 [ML]
No, but the idea is hopefully you're able to sort that out.
01:08:36 [CH]
All right, so I've got two great questions that are short and sweet to end on. But before I ask them, I'll defer to make sure we don't have anything lingering; you know, Adám's drowning and he's looking for a lifeline, or Stephen's got his own boat.
01:08:51 [AB]
As an APL programmer, why should I be excited about this.
01:08:56 [ML]
Yeah, well, that's a very good question. Part of my issue with Singeli is that the potential audience for this is pretty small, at least at the moment, because there don't seem to be that many people who are interested in doing this sort of very low-level optimization where you write all the vector instructions. My hope is that part of the reason for that is that in existing programming languages, writing vector instructions is just so hard, because of the way the syntax is done and everything; these languages just aren't oriented towards this model of programming. Maybe if Singeli facilitates it enough, more people would get into it, and then they'd be able to write programs that use more vector algorithms and that do faster stuff. This is not the purpose, but maybe along the way they come to appreciate array programming more, because it really does have a lot of ideas that are involved in vector programming on the CPU as well. But yes, an array programmer could be interested in this. I mean, if you've been writing stuff in C because it's too slow in your array language, then maybe you'd feel more comfortable in Singeli. Of course, you're going to have the pains that come with a new programming language and all that. Otherwise, if an array language is fast enough for you: there are reasons to use other languages, but the only reason to use Singeli can be performance. So if you don't have performance problems, don't pick a language that's totally performance oriented. And I don't think the Singeli model is necessarily completely useless for programming that's not performance oriented. If you're programming in a way that uses tons of macros for some reason, then maybe there's some hypothetical language based on the Singeli model that would do what you're trying to do better.
On the other hand, Singeli itself is so performance focused that if you're not trying to write fast algorithms, I don't think it's particularly something that'd be useful. Maybe it's cool to learn about, but it's not something that you'd practically want.
01:11:08 [CH]
All right. Well, time for my questions then. BQN has the .bqn file extension. Does Singeli have the .singeli file extension, or is it shortened.
01:11:20 [ML]
And I'm pretty sure they're... well, I don't know. I think there are probably other users of .bqn, you know, around the world. .singeli, I think, is safe, so.
01:11:33 [CH]
All right, that leads perfectly to my second question, which is kind of more BQN related, but we can extend it to Singeli. Have you taken steps/what are those steps to get GitHub to recognize BQN and Singeli as official GitHub recognized languages. Or have you not even looked into that.
01:11:56 [ML]
I have looked into it. So GitHub's languages are managed by a tool called Linguist, [22] and I'm not really sure how closely affiliated they are with GitHub. I don't think the people who run the repository are employees of GitHub or anything; GitHub just picks up changes from them on some schedule. And what they've set as a requirement is that on GitHub, you have 2,000 files that use the language, with some caveats. They really don't want you to game this. So any BQN users who would be inclined to do this, please don't try that. But yeah, they want 2,000 files, and we have 1,800.
01:12:41 [CH]
Whoa, folks. If I hadn't stopped making YouTube videos, which typically come with a BQN file, we could already be there.
01:12:50 [ML]
Could be. So I'm pretty sure, if it doesn't happen sooner, by the next Advent of Code, which encourages people to write a whole bunch of BQN files, we're definitely going to get over that 2,000 mark. And at that point, somebody submits a pull request to Linguist that has the grammar that does highlighting, and then they will hopefully accept that. If you've got the usage, they seem pretty willing to work with you. They'll accept that, and then it'll be highlighted on GitHub. Singeli is never going to make that. There's no way. But BQN, soon.
01:13:31 [CH]
No, never say never. And that's exciting. So now we are on, folks. We are on technically Linguist, but we're going to call it GitHub BQN Watch. Expect maybe not bi-weekly updates, but maybe once a month, once every couple months, because I am tired of looking at my .bqn files without syntax highlighting. And I figured compared to some of the other languages that I've seen that are recognized, BQN would be there. But now we have the official on the record, what Linguist is waiting for. We're only 10% away, folks. This is an exciting time.
01:14:10 [ML]
If you are going to watch that number, be careful. I guess we'll put a link to where they describe it in our notes. But be careful, because it's not just the number: you can't just put in path colon star dot bqn. You also have to add some things, like excluding forks and so on, to try to get a better count. So do be careful in how you phrase the query. Otherwise, you'll get a number that's higher than 2,000, and you'll annoy the Linguist people. And I don't want to do that.
01:14:36 [CH]
Which is something that you have clearly never done.
01:14:40 [ML]
Well, now, somebody submitted it pretty early once. But we don't want to annoy them again. Let's just keep it to one time.
01:14:48 [CH]
All right, well, this was thoroughly educational for me, and I probably speak on behalf of the other panelists as well. So yeah, thanks. I would say thanks for coming on, but I mean, you probably would have been here regardless.
01:15:04 [ML]
But today I saved four people from drowning.
01:15:07 [CH]
Well, I don't know; last time we checked in with Adám, he was still in the water.
01:15:14 [AB]
Yeah, but I have no reason to get out, right.
01:15:16 [ML]
It's warm.
01:15:17 [CH]
He's a good swimmer, you know. And what about you, Stephen. Are you in the water, in your own boat, in our boat.
01:15:24 [ST]
Not waving, but drowning.
01:15:27 [ML]
So you're saying I didn't save you from drowning, you were never in danger.
01:15:30 [ST]
There you go.
01:15:33 [CH]
All right, well, thanks for being our guest for the day. I'm expecting next time you'll probably be demoted back to panelist. But if folks have questions or things they wanna do, they can go check out Singeli, the GitHub page, or they can email.
01:15:51 [BT]
They can email us at contact@arraycast.com. [23] And for sure on this one, check the show notes, 'cause we will have show notes with links that are available to you, should you want to actually try and do some work in Singeli. To me it sounds like something for people who are interested in speed, and from being on the APL Farm, I know there are a number of people who are really interested in that stuff. I wouldn't be surprised if they at least look at it to see what they might do, or contribute to it, or even try and develop in it. So there are people who are into that, and they are the... I'm trying to think of the right term to use. They're the infrastructure. They're the people that exist below all these array languages and enable those of us who use array programming to do it, because there's somebody there making it work faster and better when it's required. And I just want to express my appreciation, Marshall, to people like you who are doing all that work.
01:16:54 [ML]
Thank you. And I have to share some of this with dzaima, who does at least as much work on BQN implementation as I do. And of course, the other implementers out there. But we're the two on BQN.
01:17:06 [CH]
Awesome. And with that, we will say, Happy array programming!
01:17:10 [All]
Happy Array Programming
01:17:11 [MUSIC]
01:17:25 [ST]
No, that can't be true. We haven't just done an entire episode on a language that doesn't have a logo.