Transcript

Transcript prepared by Bob Therriault and Sanjay Cherian

Show Notes

00:00:00 [Bob Smith]

No, I'm saying they're implicit. I'm saying the implementation assumes that you meant to put a jot there. This is wild thinking, by the way. And I'd like to point out, mine's called an experimental system, too.

00:00:23 [Conor Hoekstra]

Welcome to episode 80 of the ArrayCast. I'm your host, Conor. And today with us, we have a special guest whom I am super interested to interview. But before we do that, we're going to go around and do brief introductions of our panelists. We'll start with Bob, then go to Stephen, then to Adám, and finish with Marshall.

00:00:39 [Bob Therriault]

I'm Bob Therriault and I am a J enthusiast.

00:00:41 [Stephen Taylor]

I'm Stephen Taylor and I work with APL and Q.

00:00:45 [Adám Brudzewsky]

I'm Adám Brudzewsky, and I work with APL.

00:00:48 [Marshall Lochbaum]

I'm Marshall Lochbaum. I've done other stuff in the past, but now I work with BQN and Singeli.

00:00:51 [CH]

And as mentioned before, my name's Conor: massive array language fan, love all the array languages. And we're definitely going to be diving into a lot more about APL today when we get to interviewing our guest. But before we do that, I think we've got a few announcements. We'll go to Bob with the first two and a half, then two from Adám, and then we'll finish up with one from Stephen.

00:01:09 [BT]

Okay, starting with my first one and a half. [01] The first half: we've mentioned that Prog Lancast was having a meetup in Portugal at Caldeira Estrela. It's been moved to July 6th. I think it was June 1st before, but we did mention it. Now it's July 6th, so it's been moved ahead a month. If you're interested, that will be in the show notes. That's the half. The other one of the one and a half is that they actually did an interview with me, and there's an hour of me talking about myself. And if you're interested in programming, you're probably not interested in this interview, because there's really not very much programming in it. But then you'll find out maybe why there's not very much programming in it. In any case, that's on their site too, and we'll put that up in the show notes. And then my second big announcement involves Tacit Talk, which we had rumored on this podcast, I think, a couple of episodes ago. Conor has started a new podcast. The thing is, his first one was with Kai Schmidt. It was really good. I watched it. It's really good. The one I haven't had a chance to watch yet, I think it's up by the time we actually get this out. If not, it'll be shortly after. But it's Marshall.

00:02:25 [ML]

Really?

00:02:26 [BT]

Conor has already stolen Marshall.

00:02:27 [ML]

I gotta prepare for this thing.

00:02:29 [CH]

So yes, it only took, I think I told the brief story last time, but it only took like three months after I said this might happen to make it happen. And then when it did come together, I think it was like a Wednesday when I was like, you know what, I'm just going to email Kai. I'll buy the domain. And then that's why we didn't announce it. But yes, episode one was with Kai. Had a lot of fun. Episode two, we can announce. There's already a link on YouTube that you can go get notified for. It's May 27th, which is a Monday. It will be at noon Eastern Standard Time, which, depending on where you are in the globe, may be watchable. It may not be watchable, but then once it's on YouTube, it'll be there. You can go watch it after the fact. And I've also uploaded it as audio. I will say we try to make it audio-friendly, similar to this one, because this is now the third array programming podcast. The first was the ArrayCast. And then Adám, I'm not sure what happened to it, but there were, I think, 10 plus or minus episodes of Notation as a Tool of Thought. I think that was roughly the name of it. Now we have a third rival podcast. I mean, is it a rival? If I'm on both of them, I'm not really sure. But yes, we're going to be talking about BQN. We might talk about I; Marshall, you know, he's the guest. We might talk about Singeli. Link in the show notes for folks that are interested. Probably not going to be as frequent as this one. I'm thinking once a month. It started. We'll see how long it goes for. I think maybe we'll move on to the next announcement, which is over to Adám.

00:03:53 [AB]

Okay. So if you hurry up right after this episode comes out, on the 27th of May, there's an APL meetup in Taipei City in Taiwan. And you don't have to sign up, but we'll have a link in the show notes for where you can sign up. And there'll be some presentations. There's not really a fixed schedule. And then Max Sun, who is arranging it, is also treating everybody who attends to dinner at a local eatery there. So if you're in that part of the world, that might be interesting. And then it's kind of an announcement and a reminder that if all these podcasts aren't enough for you and you also want something more interactive, remember that the British APL Association have their vector webinars, they call them, every other week. And it's just a handful of people generally that show up on Zoom and discuss whatever comes up about array programming and various programming languages. Sometimes people demonstrate things. Sometimes people ask questions and get some help there. And generally a nice, cozy feeling there. And on June 6th, there's a special episode of it because it's the Annual General Assembly, I think it's called, of BAA, where they discuss which direction the organization is going to take and plans for the future, maybe events. So if you want to have some influence or hear what's going on, then that's a special one to attend.

00:05:27 [CH]

All right. Last but not least, I think we'll throw it over to Stephen.

00:05:30 [ST]

Oh, yes. And the biggest till last. It started off as a textbook on vector programming in q, with an emphasis on mastering the syntax and semantics of the language. It morphed into a website, which has now got a small team working on it, and which I hope will eventually become a completely community-curated resource. It's called Q201: Q201.org. You can find it; by the time this goes to air, Alex will have announced it at the KDB meetup in London, and I'm going to be presenting it next month at the Madrid KDB+ meetup. Q201.org. It's a study resource. There's a pin board where you can leave your details to find study partners. There's a chat room on Stack Exchange. We're excited about it, looking forward to a lot of useful work coming out of that.

00:06:31 [CH]

Awesome. So as always, links to everything will be in the show notes, which you can find in your podcast app or at our website. And with all of that out of the way, we are happy to introduce our guest for today, Bob Smith. Bob, for the avid listener, will have been mentioned, I want to say, at least three to five times on other episodes before. I think some of your papers have come up on Boolean arithmetic, and obviously the NARS2000 APL interpreter has come up. But Bob is a prolific individual in the array communities. He at one point worked at STSC, which I believe has come up in interviews with a couple of our guests as one of the massive APL companies back in the day. And probably these days, Bob is most well known for his NARS2000 APL interpreter, which is going to be super interesting to dive into, because it implements a ton of experimental features in the algorithm world, the operator world, the Hyperator world, and data types. But before we get to all of that, we're going to throw it over to Bob. And hopefully you can take us on a whirlwind tour of your history with the array languages and APL specifically, all the way back to, you know, when you first started getting involved with computers, if you want. So over to you, Bob.

00:07:44 [BS]

Thank you. Thank you for inviting me. [02] At the Minnowbrook APL workshops, we usually go around at the beginning and introduce each other. And I remember early on, people would say when they first learned about APL, and there were a great many people who learned about it in the same year, in 1969. So we think of this as the 1969 APL club, of all the people who became acquainted with APL at that point. And I was working for NSA at the time, and they had a terminal attached to STSC, and that's where I first learned about APL. I got a little demo, and I gotta say, it was love at first sight. So that was in 1969, and I was at NSA from '67 to '71, and I left in '71 to go work for STSC. That is, I had become so enamored with APL that I really wanted to do it full-time. And so I started in marketing, can you believe that? And then became a marketing manager, and then eventually got to what I really wanted to do, which was implementation and language design. So I got to do a bunch of that at STSC, including the NARS system, the Nested Arrays Research System, and that was a huge amount of fun, largely based upon Jim Brown's APL thesis, his doctorate in APL. I'm sure a great many people read that cover to cover many times, and that's essentially the design of the language that I used. And Jim and I are really great friends. In fact, our respective wives were each referred to as Mrs. Nested Arrays. And I go visit Jim and Karen whenever I can. So the NARS2000 system came about after I had retired and wanted to have more fun designing and implementing features in APL. So that's the reason. It came about in September of 2006 and started as an open-source project. It's used all over the place. I really don't know all the customers I have, but it's very gratifying to hear from them at one time or another. Of course, I'm very selfish about it, because I really did it for myself, as a vehicle to be able to do language design and implementation. And so I don't know about anybody else, but I'm having a huge amount of fun working with the language design, putting in features as they appear to me. And so that's where multi-precision arithmetic came from, in several different forms. Hyper-complex numbers, so not just real and complex, but also quaternion and octonion. And I've put in calculus, so there's numerical integration and differentiation. And that was very interesting, finding routines for those and then hooking them into the language. That then led to partial differentiation, and then a really interesting topic of fractional calculus, which involves taking derivatives that are not of an integer order. So it's entirely possible to talk about a half derivative. And this has a lot of similarity with fractals, it turns out, because fractals have filled in the gaps between length, area, and volume with numbers. There was a famous paper by Mandelbrot about how long the coastline of Britain is, and in it he shows that length is the wrong measure for the coastline of Britain. Subsequent work has suggested that the dimension of the coastline of Britain is about 1.25. In a similar manner, fractional calculus fills in between first-order, second-order, and third-order derivatives with fractional values. Lately I've been working on taking bits of linear algebra and making them available. And when I say that, I mean not just the real-number portions, but also complex and quaternion. And octonion turns out to be more difficult for some technical reasons.
And then after that, I want to look at group theory and see what I can do to hook that into APL. There are some good group theory libraries. GAP is one, G-A-P. And there's a ton of code in there that I hope I'll be able to use. Largely I've relied upon other people's code, so I did not implement my own multi-precision arithmetic. I used DLLs that were out there, high-quality DLLs, so that the whole implementation is really resting upon a good deal of other open-source projects. And there are a great many other features that have been a lot of fun. The combinatorial operator struck me as a delightful design, because the MIT mathematician Gian-Carlo Rota came up with a way of describing 12 combinatorial algorithms that fit into a 2x2x3 array. And they differ in really interesting ways. So you can essentially index a 2x2x3 array and pick out a different combinatorial algorithm. And they're all knitted extremely well together. These are combinations, permutations, partitions, multiple types of partitions, and a variety of other interesting combinatorial algorithms. It was called the twelvefold way. And when I saw it, it looked so APL-like, I don't think I could have stopped myself from implementing it. One last thing that I've had a lot of fun doing: I was given a problem from Eric Lescasse through Roy Sykes, that if you have an integer matrix and you give someone the row sums and the column sums, can you recreate the original matrix? And it turns out you can. You may also recreate an infinite number of matrices that have the same row and column sums, all integer matrices. And that involves essentially solving what are called simultaneous linear Diophantine equations, Diophantine meaning integer-only. So I implemented algorithms to do that. And so if you have a Diophantine equation like that, you can solve it pretty easily with some of the code there. As I said, there are a lot of other interesting features, point notation, etc. But I'll leave it at that.
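
[Editor's sketch: Bob's actual solver works with simultaneous linear Diophantine equations; the Python snippet below is just the classic "northwest corner" construction, one quick way to exhibit one of the infinitely many nonnegative integer matrices sharing given row and column sums.]

```python
# Build *a* nonnegative integer matrix with the given row and column sums,
# greedily filling from the top-left ("northwest") corner.
def northwest_corner(row_sums, col_sums):
    assert sum(row_sums) == sum(col_sums)
    rows, cols = list(row_sums), list(col_sums)
    m = [[0] * len(cols) for _ in rows]
    i = j = 0
    while i < len(rows) and j < len(cols):
        take = min(rows[i], cols[j])   # satisfy whichever sum runs out first
        m[i][j] = take
        rows[i] -= take
        cols[j] -= take
        if rows[i] == 0:
            i += 1
        else:
            j += 1
    return m

print(northwest_corner([3, 4], [2, 5]))  # [[2, 1], [0, 4]]
```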

00:15:43 [CH]

So this is, once again, one of these times where I have a thousand questions, but I'll choose my top one. When you were talking about the two by two by three verb or operator, it reminded me of, I guess, P colon [03] is what first comes to mind from J, but then also the circle constants, which I very rarely ever use in APL code, where you can, you know, choose an integer as the left argument. So my first thought is: is there an advantage to, because it sounded like you specify, you know, one, one, two, if you want a specific algorithm from that, not cube, I guess, but like prism of numbers, a 3D array. Is there an advantage to doing that over just flattening it out and putting, you know, one, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve? Or is that actually the way it is used, and I just missed that in your explanation?

00:16:35 [BS]

There is more detail to it. In the two by two by three array, the first two coordinates are binary. If you remember the old probability and statistics, where they talked about balls in an urn and things like that: it's very similar to that. We're counting arrangements of balls in urns. So the first coordinate is about the balls: are they labeled or not? That's a binary choice. Then the urns, the bins that you're placing them in: are they labeled or not? Do we distinguish bin number one from bin number two, or is it essentially just unlabeled bins? And then the third dimension comes about in terms of the count, any limitations on the count. So the first choice is that there's only one ball allowed per urn; then, up to any number of balls in that urn; and then that all of the urns have to be full. Those are three choices. So that's the two by two by three array. If you grasp those concepts, the indices fit those particular areas, and you can translate those indices to a particular algorithm. And so you choose the right ones for, say, combinations, and you look at the way that works, and you'll look at it and say, yes, of course, that's the algorithm for generating combinations.
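
[Editor's sketch: Rota's twelvefold way as a 2x2x3 lookup table in Python, counting placements of n balls into k urns. The axis order, labels, and encoding here are illustrative only and may not match NARS2000's actual indices.]

```python
from functools import lru_cache
from math import comb, perm, factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling numbers of the second kind: ways to split n labeled balls
    # into k nonempty unlabeled blocks
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

@lru_cache(maxsize=None)
def partitions(n, k):
    # integer partitions of n into exactly k positive parts
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return partitions(n - k, k) + partitions(n - 1, k - 1)

# (balls labeled?, urns labeled?, per-urn rule) -> count of placements
TWELVEFOLD = {
    (1, 1, 'any'):  lambda n, k: k ** n,
    (1, 1, 'max1'): lambda n, k: perm(k, n),                 # injections
    (1, 1, 'min1'): lambda n, k: factorial(k) * stirling2(n, k),
    (0, 1, 'any'):  lambda n, k: comb(n + k - 1, n),         # multisets
    (0, 1, 'max1'): lambda n, k: comb(k, n),                 # combinations
    (0, 1, 'min1'): lambda n, k: comb(n - 1, n - k) if n >= k else 0,
    (1, 0, 'any'):  lambda n, k: sum(stirling2(n, j) for j in range(1, k + 1)),
    (1, 0, 'max1'): lambda n, k: int(n <= k),
    (1, 0, 'min1'): lambda n, k: stirling2(n, k),
    (0, 0, 'any'):  lambda n, k: sum(partitions(n, j) for j in range(1, k + 1)),
    (0, 0, 'max1'): lambda n, k: int(n <= k),
    (0, 0, 'min1'): lambda n, k: partitions(n, k),           # partitions
}

print(TWELVEFOLD[(0, 1, 'max1')](2, 4))   # C(4,2) = 6 combinations
```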

00:18:27 [AB]

It's expressed in base 10 for ease of human use, right? So 112 is actually one, one, two: digits in base two, two, three, written as a fake decimal.

00:18:39 [BS]

That's right, that was purely an aesthetic issue.

00:18:40 [CH]

Which is convenient, given that APL is one-indexed, then.

00:18:46 [AB]

No, it doesn't matter. They're just numbers.

00:18:49 [CH]

Well, how do you, how do you do 001?

00:18:51 [AB]

No, no, they do. They begin with zero, zero, zero. And zero, zero, zero is just the number zero, because 000 without spaces in APL is just the number zero. You have to spell out all three digits, padding with zeros on the left until it's three digits, for it to make sense.

00:19:07 [BS]

Actually, I allow the underbar as an intermediate element that's ignored. So you could say zero underbar zero underbar two, if you wanted some separation between them, and that would be equivalent to the number two.
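
[Editor's sketch: a hypothetical decoder for such a packed left operand, modeling the underbar separator by stripping characters. In NARS2000 itself the underbar is part of the number token, so the real parsing differs.]

```python
def decode(code):
    digits = f"{int(str(code).replace('_', '')):03d}"  # zero-pad to 3 digits
    return tuple(int(d) for d in digits)               # one digit per axis

print(decode(112))      # (1, 1, 2)
print(decode('0_0_2'))  # (0, 0, 2)
```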

00:19:23 [CH]

That is quite nice. Because I was thinking in my head, having to put a three-element array as a left argument could be a bit cumbersome; but if it's just a single integer that is produced by encoding or decoding, you know, base 10, that is much, much nicer. I like that even more. That's very nice.

00:19:39 [BS]

There are additional elements in the left operand. The first element is as we described, and the second element says whether you're just counting, or whether you're generating the actual values. So both of those algorithms are implemented in there. And one of the things I really liked about that is that, at the same time, I was reading Knuth's Art of Computer Programming, and Volume 4A is on combinatorial algorithms. So it was perfect timing to be able to implement a number of the algorithms in the combinatorial operator using Knuth's high-quality code. So it was a lovely design, very APL-like, backed by some high-quality Donald Knuth algorithms.

00:20:37 [AB]

But why is the count-versus-generate choice separate? Again, it should be a digit, base four, for the count/generate choice and the different ways to generate them, the ordering: whether it's unspecified or lexicographical or Gray code.

00:20:50 [BS]

It's a second element, not a base. It's a second element.

00:20:54 [AB]

Yeah, but again, it's a single digit in base four, right? Zero, one, two, or three to choose. Why wasn't that slammed together with the other digits into a single code to say what you want to do?

00:21:04 [BS]

Well, it allows you to have a default value. You give, say, one one two, and it accepts a one-, two-, or three-element left operand. There's another element in the left operand that specifies whether you want lexicographic order or Gray coding order, for algorithms that are sensitive to that.

00:21:27 [AB]

So again, why? Why is it an operator and not a function?

00:21:30 [BS]

Well, why is I-Beam an operator?

00:21:32 [AB]

Because it has to be able to take a left argument. But this one doesn't take a left argument.

00:21:34 [BS]

Not yet, no.

00:21:39 [AB]

Oh, you heard it here first.

00:21:40 [ML]

Well, I would imagine you get some extra possibilities for extensions. Like you could also say, I want to not enumerate all of them, but to pick a specific one from the enumeration or to pick one at random from the enumeration.

00:21:53 [BS]

And that's one of the things I want to put in there at some point: to be able to generate them one by one. So you provide an index into whatever the total length is, and it'll get you that one. And I'm trying to decide whether to then automatically generate the next one, or whether you have to update the index. But in any case, yes, that's on the design board. Not implemented, but I very much would like to see that. Good for you, Marshall. That's a great idea.

00:22:27 [AB]

Yeah. And choosing a random one from among those is also a very useful thing.

00:22:31 [BS]

Sure, just pick a random number.

00:22:32 [AB]

Yeah. But then you have to know how many there are first and then you can do it.

00:22:35 [BS]

Yeah. Well, the zero will give you a count and then you can go from there.

00:22:39 [CH]

Stephen, you were going to ask something earlier.

00:22:42 [ST]

Well, a few years ago on the NARS2000 forum, there was some discussion about functions as first-class objects, and your two by two by three array sounds like a perfect use for one of those. Did functions as first-class objects come to something?

00:22:58 [BS]

Not yet. Essentially, I'm limited by the fact that it's just one of me. But yeah, there are so many great ideas to put out there.

00:23:07 [ML]

Well, I would even say, even with this example, for the enumeration: what I might do in BQN is have something that returns a function, where each time you call the function, it gives you another instance from the enumeration. That way you can go through them all without materializing them. It can store whatever state it wants, for efficiency. Also, you're not wasting memory by spitting them all out at once.
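
[Editor's sketch of Marshall's suggestion in Python, not BQN: a closure over an iterator hands out one combination per call, storing its state internally instead of materializing the whole enumeration.]

```python
from itertools import combinations

def combination_stream(n, k):
    it = combinations(range(n), k)   # the iterator holds the state
    return lambda: next(it, None)    # None signals the end

nxt = combination_stream(4, 2)
print(nxt())   # (0, 1)
print(nxt())   # (0, 2)
```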

00:23:32 [BS]

And not annoying the customer by having to increment a counter every time. No, I heartily agree. I could use you as an implementer.

00:23:40 [ML]

Another language. Yeah, you have to have the infrastructure for it.

00:23:44 [AB]

NARS2000 doesn't allow function values at all, right? Only array values.

00:23:49 [BS]

Well, that's what operators produce.

00:23:52 [AB]

You can store them, but you can't return them. Is that how it works? I tried in NARS2000 to do something like execute quote plus quote, to return a plus. That would allow me to, say, have different functions and use indexing to choose which character, which primitive, to return; but then I have to change that character into an actual function. I tried to use execute, which you can do in Dyalog APL, but it says syntax error. So it seems like a function value is not something I can return.

00:24:30 [BS]

Well, that's something that can be fixed. Yeah. I'm ahead of some people and way behind others. So we're at different phases. But yeah, I think that's a perfectly reasonable thing to do. Put it on the list.

00:24:46 [BT]

So would that be something like the gerunds in J, so that you're actually creating something that you can move around as an object, but you do have to transfer back and forth to actually make it into a working function?

00:24:57 [BS]

I'm not sure about that.

00:24:58 [AB]

In Dyalog, it isn't. It's almost magical how it can work parsing-wise. But as soon as you get a function value at some place in your expression, then the expression is parsed with that function there. So let's have a fun experiment here with people listening to this. A classic example I give is: let's start by writing a three-element vector. So we have one, open paren, and then something inside the parentheses (we'll come back to that), close paren, two. If what's inside the parentheses is another array, say the number three, then it gets stranded together and we get a numeric vector: one, three, two. If what's in there is a function, say plus, then we get one plus two, and we get the result three. Now, if what we put inside the parentheses is the character vector with the characters plus and three, and we index into that with a random number, one or two, or zero or one, depending on your index origin, that means that randomly you'll get out a plus, and randomly you'll get a three. So either the result will be the number one, the character plus, and the number two, or it will be one, character three, and the number two. Now we modify this again, and we insert the execute function, the evaluate function, inside the parentheses. So now if we get the character plus, it becomes the function plus. If we get the character three, it becomes the number three. With three it's easy: you get the numbers one, three, two, no problem. If it becomes a plus, then the whole parenthesis evaluates to a function, and you get one plus two, and the result is three. So the parsing is only decided at runtime, depending on the random number that comes out. But that means this whole parenthesis evaluates to a function value that's used in place, right there. And then, is it sane? Maybe not. Should you do this in industrial production code? Probably not. Is it useful sometimes to do this kind of thing? Yeah, but you probably want to stay consistent on whether what's in there is a function or an array value, not just have it sometimes one, sometimes the other.

00:27:22 [BT]

Yeah, it seems to me you're putting a lot of trust into the person who's programming it to make sure that those edge cases are all handled.

00:27:30 [AB]

The implementer, I mean.

00:27:31 [BS]

That's what we do all the time. I have the advantage over Dyalog of being an experimental system. So I can put things in, take things out, modify the way they work, try it out for a while and see if I like it.

00:27:45 [AB]

And NARS2000 has no promise of stability at all, right?

00:27:48 [BS]

In terms of the design, I wouldn't recommend it for production code, but I really am not making major changes, incompatible changes.

00:27:59 [ML]

Incompatible with mainstream APL as opposed to with itself?

00:28:03 [BS]

More with itself, actually, because I'm doing things differently than mainstream APL. There are things that I do one way, Dyalog does another, APL 2 does a different way. Those are differences. And certainly anybody trying to port code from one to the other is gonna have to be aware of those differences.

00:28:27 [CH]

So you mentioned the fact that NARS2000 is experimental, and anyone that goes and checks out the main site will see, I believe on the homepage, that there's a section called, I think, experimental versions or something like that. And then it lists alpha, gamma, beta of, I think, three or four different experimental versions that implement some sort of new experimental feature. I'm not sure if you wanna give us an overview of all of them, but I'm specifically super interested in the Hyperator. I can't remember, Adám, if you mentioned it once on the podcast or it was off podcast, but you said that I would be very keen to take a look at them, because I think it came up in the context of modifier trains in J, which I actually discovered at the last Minnowbrook conference in October, where we met Bob. I had always known about verb trains, and I just thought the verb was implicit, because all trains had to be verb trains, but J had this thing called modifier trains. And then I came back to ArrayCast, and Adám said, "Oh, well, you know, actually that's not the only thing out there." So yeah, if you wanna give us an overview of all the experimental versions, or you just wanna focus on Hyperators, I'll throw it to you and let you run with this sort of tangent.

00:29:40 [BS]

Sure. So one of the experimental versions is Hyperators, and I have not released this version yet, but I have already incorporated that into the main release. Another experimental version is a really fascinating idea called "ball arithmetic", [04] and I'm also prepared to incorporate that into the main release. So it'll be just a single release that incorporates everything else, along with the ball arithmetic and Hyperators. And I think there's another experimental version, which is support for Java. A good friend, David Ravenhorst, did the Java support, and so I want to get that released as well. I'll talk a moment about ball arithmetic and then about Hyperators. So ball arithmetic is a very different idea. It differs qualitatively from other scalars in APL. A ball, in this parlance, is a midpoint and a radius. So it's not just a single number; it's an interval of numbers, and that allows you to do some really interesting things in the numerical analysis area. There's a good library that does ball arithmetic, and essentially, the guy who put the library together is guaranteeing that you're going to have the correct answer within the resulting ball. So it's not a question of looking at a floating-point algorithm and wondering how accurate this result is. When you get a ball back as a result, you know how accurate it is. You can look at the radius, and if the radius is tiny, then you've got, probably, a pretty good algorithm, but certainly a pretty good result. And the idea of having ball arithmetic extends that now to your own algorithms. You may have had questions before (you should have had questions before) about any floating-point algorithm you've written: whether it's ill-conditioned or goes off track at some point. Now run it with balls as the arguments. You get back a ball or two, and you can look there and see just how accurate that result is. It takes a lot of the guesswork out of looking at floating-point algorithms. Essentially, it's turning the individual programmer into a numerical analyst without having to go through all the classes.
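
[Editor's sketch: a toy midpoint-radius "ball" in Python. A real ball-arithmetic library also widens the radius to absorb the rounding error of the midpoint operations themselves, which this sketch omits.]

```python
from dataclasses import dataclass

@dataclass
class Ball:
    mid: float
    rad: float

    def __add__(self, other):
        # |(x + y) - (mx + my)| <= rx + ry
        return Ball(self.mid + other.mid, self.rad + other.rad)

    def __mul__(self, other):
        # |x*y - mx*my| <= |mx|*ry + |my|*rx + rx*ry
        return Ball(self.mid * other.mid,
                    abs(self.mid) * other.rad
                    + abs(other.mid) * self.rad
                    + self.rad * other.rad)

x = Ball(2.0, 0.001)
y = Ball(3.0, 0.002)
print(x * y)   # Ball(mid=6.0, rad=0.007002)
```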

00:32:33 [AB]

And the reason you call it ball arithmetic: it's interval arithmetic, but in multiple dimensions because of your complex numbers?

00:32:41 [BS]

Yeah, that's right. So the hyper-complex numbers allow integers, floating-point values, multi-precision integer rationals, multi-precision floating-point, and balls. So those are all the kinds of coefficients. You can't mix them from one class to another, but you could have a hyper-complex number [for which] all of its coefficients are balls or intervals, and then you get essentially a four-dimensional ball.

00:33:18 [AB]

And these hyper-complex numbers, are they actually ever useful to work with like this? I mean, I know there are things out there in some sciences that use them, but have you bumped into something where you could use them?

00:33:36 [BS]

Oh, quaternions in particular are widespread. Anybody doing graphics programming is going to be using quaternions. They're ideal for doing a combination of translation, rotation, and scaling in one operation. And so you could take a matrix of quaternions representing, say, three-dimensional points, multiply it by a single quaternion, and achieve all of rotation, translation and scaling in that one operation. So NASA uses this with respect to the spacecraft that are out there. People who write graphics programs are constantly moving figures around on the screen, and that's done largely by using quaternions. There's a large body of literature for that.
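
[Editor's sketch: rotating a 3-D point with a unit quaternion via the standard q p q* conjugation, in plain Python. A non-unit q additionally scales the result by its squared magnitude; translation would need a separate addition.]

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(p, q):
    # treat the point as the pure quaternion (0, px, py, pz)
    conj = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0, *p)), conj)[1:]

# 90-degree rotation about the z axis: q = (cos 45deg, 0, 0, sin 45deg)
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(rotate((1.0, 0.0, 0.0), q))   # ~ (0.0, 1.0, 0.0)
```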

00:34:36 [BT]

And I believe quaternions have the advantage of not getting into gimbal lock, right? They avoid that problem.

00:34:41 [BS]

I've heard about that, yes. I couldn't say with personal experience, but I've heard about that, yes.

00:34:47 [AB]

We should explain to the listeners [chuckles] what that is.

00:34:51 [CH]

Sounds like a character from like World of Warcraft or something, gimbal lock.

00:34:54 [AB]

No, you have a gimbal in the airplane, which is like ... [sentence left incomplete]. There's also a toy of these multiple rings that are suspended inside each other, so they can rotate on orthogonal axes. And because of inertia, when the airplane changes the direction that it's pointing, the center of the gimbal will stay upright. You could say it stays as it is, more or less. And if you paint half of it blue and half of it brown, then you've got an artificial horizon. So even if you can't see anything outside (say it's cloudy, or you've got vertigo), you can look at the gimbal and trust that that's where the horizon actually is, and you can know how to react to it. But it has the issue that there are certain movements that are too extreme, and it will topple over when you do certain movements with the airplane. I know that acrobatic airplanes have a brake that they can put on the gimbal for certain maneuvers, and then they can release the brake to continue afterwards. If we look at this as quaternions, I guess there are certain operations you could potentially do that would mimic the physical manifestation of this [chuckles], and things will just go haywire. But apparently it doesn't happen with this kind of math. This is where my knowledge stops [laughs].

00:36:22 [BT]

There's a really good video on quaternions done by 3Blue1Brown (Grant Sanderson), and if you're interested in this, it's fascinating. He does a good explanation, graphically and visually, of how quaternions work and how they basically translate the three dimensions into the quaternion dimensions. He actually has a working site where you can go in and manipulate the actual inputs yourself and see how it affects the resulting quaternion. But you're right, Adám: when I was a kid we had a thing called a gyroscope, and it was a toy, but the same thing: it conserves angular momentum. You can turn it and balance it on all sorts of things, because it's always going to try and keep its angular momentum oriented. It's amazing, because you get it up to a high enough speed and it really, really resists you moving it out of that orientation.

00:37:22 [AB]

So is it because you've got four dimensions instead of only three that you don't get a gimbal lock? That there's always some kind of check on the value? I don't know.

00:37:31 [BT]

I'd have to go back and look at that video again, but I believe that's true. I think essentially, with three dimensions you can get yourself into a situation where you actually lock it up. With four dimensions you're giving it a degree of freedom and it doesn't happen.

00:37:45 [ML]

Yeah, well, I think the problem with two rotations is that you get to a point where you're at, like, a ... [sentence left incomplete]. I think it would be a singularity in some sort of mapping from the sphere to a two-dimensional space. But when you pass a point, you'll have a point where one of the rotations doesn't matter, so it could be in any direction. And to go past that point, what you would have to do is go up to there and then flip the other rotation around, or flip the rotation that doesn't matter around really quickly, and then you'll be able to move off in the direction you want to. But that's not a natural thing for the representation to do, so actually you'll get all sorts of messy artifacts if you try to move it past this point, because there's no system that figures out which rotations you need to support this.

00:38:33 [BT]

And the extra dimension provides the framework to track that, I guess.

00:38:38 [ML]

Well by mapping it into more dimensions you don't necessarily have to have that singularity there.

00:38:44 [AB]

It's kind of a way to remember which way you were pointing.

00:38:49 [ML]

Well it's just that you don't ... [sentence left incomplete]. Like there's always enough parameters that are free to move that you don't have one that's controlling everything in that way.

00:39:00 [BT]

So [with] the ball arithmetic, if you send a range of numbers through your algorithm, you're going to get a range of radii, would you not? Is there a way to actually average that, or is that something the experimenter is going to have to go in and figure out for themselves?

00:39:16 [AB]

Shouldn't it be the maximum?

00:39:17 [BS]

If you get a range of numbers, then they may each have different radii. That's a function of the algorithm.

00:39:24 [ML]

So I think what you're getting at is like, sure you can pass one number through and see how precise that particular computation was, but if you want to know how it's going to be in the general case, I guess the question is what do you do [chuckles]?

00:39:35 [BS]

You just run your algorithm normally, the exact same way you would have done using just a floating point code. No changes, and then take a look at what you get out as a result. If you get multiple values, then you need to look at each value and see whether the radius is tight enough, and if it isn't, then you get to go back to your algorithm and try to figure out why.

00:40:02 [BT]

And I'm guessing if you get multiple radii, then that might tell you something about your algorithm and how it's responding to the different inputs.

00:40:09 [BS]

Yeah, mainly what you're trying to do is find ill-conditioned algorithms, and those are ones where you get a result and the midpoint is 2 and the radius is 5. That's probably not a good answer.

00:40:23 [AB]

Not if you're trying to land the spacecraft, at least [everyone laughs].

00:40:28 [ML]

I mean, that's good if you're in a situation where you have a particular computation you want to do: if you're writing a paper or something, you've collected some data and you want to analyze it. That's good in that case. But if you want to build a program that someone else is going to run, then you don't have that freedom. You can have your program inspect the radius and error out, but what you'd really prefer is for the person to program with the right algorithm, one that doesn't error. I mean, I would think that ball arithmetic might help you in designing that, but it can't really ensure that you can make such a program.

00:41:02 [BS]

No, it's all providing information to the programmer (the person who's writing the algorithm) to be able to have a higher degree of confidence that what you have written is going to be reasonably accurate. If you're comfortable with that, then go ahead.

00:41:21 [AB]

I suppose it has quite a high cost, computationally.

00:41:25 [BS]

You know, I haven't done much performance analysis. I remember talking with John Scholes [05] over the phone some years ago, and he was interested in the multi-precision floating-point arithmetic. And so over the phone, we did some calculations and timings, and it turns out that he was comparing it against, I think, the 128-bit floating-point software that Dyalog was using. And the two of them turned out to be quite comparable in terms of speed, until I realized that I had set my default number of bits of precision to 512. And so they were about the same speed when the code I was using was using four times the number of bits of precision. And then when you drop it down to 128, you see much more of an advantage. So I think multi-precision is a wonderful idea, and ball arithmetic, for me, is the real culmination of my delve into precision. That's why I wanted multi-precision, because of trying to solve that problem, and ball arithmetic, that was the cat's meow. It really gave me everything I wanted about precision.

00:42:58 [CH]

All right, so next up (that's our ball arithmetic experimental feature): I think Hyperators was the next one on your list [for] covering.

00:43:09 [BS]

Yes, that's really owned by John Scholes. He wrote an early paper, I think it was at an APL gathering in the UK. I don't remember the details of it, but he presented the concept of Hyperators. And once I saw that, I thought, you know, we got to have that. And so I implemented it, but unfortunately, I was too late for him. But the idea is that there are orders of these objects. So data is an order zero, functions are order one, in that they take data and return data. Operators are order two, meaning they take data or functions and return a function. And then Hyperators extend that one more level. So they take operators or functions or data as their hyperands, and then return an operator as its result. And then you can then use that operator in various ways. And so one example of a Hyperator (and Phil Last has a bunch of examples as well, in some of his writings): one that I defined in NARS2000 is a "transform" hyperator. And so it would do multiple transforms, so transforms such as the fast Fourier transform. And then correspondingly, there are maybe a dozen more that fit the transform context. So that's one example. And again, there are many others out there. But Hyperators really started with John Scholes and we just implemented it. And so it'll be in the main release in a bit.

00:45:12 [AB]

One that I think I most often want is very simple: just like we have the commute operator to change the position of the arguments of a dyadic function. I sometimes want to change the position of the operands of an operator. So I would need a Hyperator for that [Bob agrees]. Because of the way operators bind, it might be simpler to write things. If I want the long, complicated right operand, but a simple left operand, it's easier to write it on the left.
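
[Editor's sketch of Adám's wish in higher-order-function terms, in Python: commute at the function level swaps arguments, and a hypothetical hyperator-level commute would swap an operator's operands.]

```python
def commute(f):              # operator: swap a dyadic function's arguments
    return lambda a, b: f(b, a)

def hyper_commute(op):       # hyperator: swap a dyadic operator's operands
    return lambda f, g: op(g, f)

# e.g. if compose(f, g)(x) == f(g(x)),
# then hyper_commute(compose)(f, g)(x) == g(f(x))
```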

00:45:38 [BS]

Good idea. There's also another Hyperator, which is an extension of the power operator. [06] So the power operator, in its simplest form, will apply a function sequentially to its argument. And you can define a version of that as a Hyperator, where the function is composed with itself, as it were. And so this is how you would get a second and third derivative of a function. You could use that. Of course, there's other ways of doing that with the actual derivative operator. But you may want to apply a function in a different manner than the way that the power operator does.
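
[Editor's sketch: the power idea lifted one level, in Python with hypothetical names. Here D maps a function to a numerical derivative, so composing D with itself via op_power yields a second-derivative operator; NARS2000's actual derivative operator works differently.]

```python
def D(f, h=1e-5):
    # central-difference derivative: function in, function out
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

def op_power(op, n):
    # "power operator on operators": compose op with itself n times
    def derived(f):
        for _ in range(n):
            f = op(f)
        return f
    return derived

cube = lambda x: x ** 3
d2_cube = op_power(D, 2)(cube)   # second derivative of x^3 is 6x
print(d2_cube(2.0))              # ~ 12.0
```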

00:46:29 [AB]

Oh, another one: sometimes you want to apply a function "each"; function "each", "each"; function "each", "each", "each". And so a potential Hyperator would be just like the power operator, but on operators. So I want to apply that ... okay, turn this operand ... [sentence left incomplete]. Yeah.

00:46:46 [AB]

... this number of times. But you could also have something like the outer product operator, but for operators [Bob agrees]: I want all these combinations of all these different operators applied to these functions. But of course, what about Superators, that take Superands and return Hyperators?

00:47:13 [BS]

Yeah, there's really no limit to what you want to do. It's just, if there's a call for it, I'll do it.

00:47:23 [AB]

[laughs] I mean, the syntax is pretty obvious. In Dyalog, if you use "alpha alpha" and "omega omega" for the operands of an operator, then you just stack more alphas and omegas in sequence for hyperands, and so on. But at some point, doesn't it become clear that there is some pattern here? Instead of giving them new names and making them entirely distinct from each other, you simply want first-class functions. A function can take a function, and done. It serves the role of an operator.

00:47:52 [BS]

Sure. That's another way of looking at it.

00:47:55 [CH]

Yeah, that's why my brain is spinning right now, which is a good thing.

00:48:00 [ML]

It's gimbal locked, actually! [everyone laughs]

00:48:03 [AB]

You should have put in the brake on your gimbal before you started this [everyone continues laughing].

00:48:08 [CH]

No, no, this is fantastic. But that's what I was thinking. And I am not an APL or array language implementer. So I was trying to figure out what was preventing a language from doing kind of what Adám just said. And I will say that, for that example that Adám brought up, I have multiple times forgotten that the swap (or no ... yeah, swap, commute, glyph, whatever you want to call it, typically, is in BQN ... [sentence left incomplete]). And I'll be doing some, we'll call it tacit ninja-ing, where, because of the way that modifiers compose (or what you call operators in other languages), you want to do something like left to right. And if you have a plus reduction at your right, in my head, I'll be like: "oh, if I can put that on the left, that'll bind first, but then I don't need a monadic before, I need a monadic after. And so in order to get that to work, I would need to commute the monadic after." Who knows if it would actually work. But then I end up trying to call commute or swap on one of these modifiers, and then it doesn't parse, because that's not how they work. And I forget [that] right, these don't work on modifiers. Modifiers don't work on modifiers. Anyway, tangent over. So I was thinking, when Adám made that point, I was like: "ah, yes, his was sensible. I have tried to use that same trick in nonsensical ways." But the question ultimately is: is there something preventing a language like BQN, APL, NARS2000 from having a commute or swap modifier that doesn't operate just on functions? Is there something preventing that from happening?

00:49:52 [AB]

Clearly not, since NARS2000 can do it, and it has basically the same syntax as BQN and Dyalog has.

00:49:59 [CH]

Wait, so does that mean that ... [sentence left incomplete]. Because from the explanation I heard it sounded ... [sentence left incomplete].

00:50:03 [ML]

It's a different value though, right?

00:50:08 [AB]

It's not the same. You probably can't have a commute that's ... [sentence left incomplete].

00:50:13 [ML]

Well, I don't know. So applying an operator to another operator is not defined. I don't think you could get a general syntax around it, but you could probably define operator swap.

00:50:26 [CH]

With the same symbol, or would you need a new symbol?

00:50:29 [ML]

Well, either way. With a new one, there's a clearer path to do it. But with the same one, I feel like you could still put in a special case for it, at least. But I would not want to.

00:50:39 [BS]

That's exactly what a Hyperator would do too.

00:50:42 [ML]

Yeah, well if it was a new symbol you'd make it a Hyperator. I mean, I guess I'd say the reason I don't want to do this in BQN is because I want to keep the number of concepts in the language very small. BQN is very minimal. But other languages have different priorities.

00:50:55 [AB]

Yeah, there's also a number of arguments. So when I said you could generalize this to just first-class functions (take functions, return functions): the number of, in a loose sense, arguments also tends to go up. A function can take a single argument, or a single input (let's call it like that), and an operator can take no less than two: it has to take at least one operand, deriving some function that takes one argument. So that's two inputs. A Hyperator at minimum would take three: it has to take an operator and derive a new operator, that takes a function and then derives a new function, that takes an argument. So that's three inputs. The normal commute takes two or three inputs: it takes a single function and derives a new function that has the arguments swapped. And a Hyperator commute would need to take one level more. So you can't just use the same syntactic element for both functionalities, unless you start having placeholder inputs. Let's say you had some special symbol, say a jot, that you could put into the slot of an operator or other syntactic construct to say: "nothing goes here". Say, for example, you look at an inner product as a reduction over the diagonals of an outer product, so you say "plus dot times" (it's a summation over the diagonals of all the possibilities of multiplication). But let's say that you did not want to do that final reduction, so you put in a placeholder function on the left: instead of writing "plus dot times", you just write "jot dot times". But jot isn't an operator; it's just some placeholder thing. That would then mean: "well, don't reduce over the diagonals, just give me the outer product." Hence, jot dot times is the outer product. That's why it is the outer product in APL. It's not a syntactic anomaly as much as it's a concept that was never generalized in most APLs, but which NARS2000 does, to a certain extent, generalize. And the original proposal for NARS does generalize it, if I remember right. The power limit, instead of being power infinity, was power jot: the power operator says, "run this function as many times as I specify, and if I don't specify any, if I just fill the slot with a placeholder, then just keep running it until nothing more happens".
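
[Editor's sketch: Adám's jot-as-placeholder reading of "f dot g", modeled for vectors only in Python, with None standing in for jot. This is the shape of the idea, not real APL semantics.]

```python
from functools import reduce
import operator

def dot(f, g):
    def derived(xs, ys):
        if f is None:   # the "jot" slot: skip the reduction, give the outer product
            return [[g(x, y) for y in ys] for x in xs]
        # reduce f over the diagonal of that outer product (elementwise g)
        return reduce(f, (g(x, y) for x, y in zip(xs, ys)))
    return derived

inner = dot(operator.add, operator.mul)   # "plus dot times": dot product
outer = dot(None, operator.mul)           # "jot dot times": multiplication table
print(inner([1, 2, 3], [4, 5, 6]))        # 32
print(outer([1, 2], [3, 4]))              # [[3, 4], [6, 8]]
```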

00:53:26 [ML]

Well, I don't know if I call that generalization so much as just using it more. Like, there's no actual concept. I mean, you can say, it has to be interpreted differently for each situation. There's no consistent way to say what it means to not apply a function in a definition that has a function application.

00:53:48 [AB]

I'm not sure. I'm not so sure about that. You could say that you look at the code inside, right? We've had this with the various combinators: the difference between using a combinator, the derived function, monadically and dyadically.

00:54:04 [ML]

Well, so for example, with the outer product: yeah, you can define the inner product as a reduction across the diagonal of the outer product, but it's also a reduction across one shared axis of the product, and that's a much more normal definition for it. To take the simple example, the dot product: you've got a dot product of two vectors. You're going to multiply them all together and then do a sum. But that's not the same as the definition with the outer product, because you're sharing this axis. So you can have several interpretations of what it means to not reduce at the end, and the most obvious one is not the outer product.

00:54:44 [AB]

Okay, be that as it may, you could possibly generalize this in the same way as we look at the monadic form of various compositional operators, where we say: "well, if there's no left argument, then the entire leg of the diagram just disappears". Wherever this value is mentioned, that leg just disappears. And so, if you fill in a placeholder, whatever would depend on having a value there just gets skipped. And you could define an inner/outer product operator like that. Doesn't mean you have to. And that probably means that the behavior will vary depending on the exact implementation algorithm. I'm not saying this is a good idea to do [chuckles]. What I'm saying is that there could be such a thing as filling slots, such that the same syntactic element could be used both in the role of an operator and in the role of a hyperator, possibly.

00:55:46 [CH]

Yeah, this is super, super interesting. So, a couple things to note. I tried to look up the paper that you mentioned, Bob, that John Scholes presented. I don't think I found it, but I did find a Vector article summarizing it: it's called the San Quirico Moot? From 2007?

00:56:05 [BS]

Yes, That's it.

00:56:16 [CH]

It's reported by Adrian Smith. It actually mentions our very own Stephen Taylor. And in the midst of this, it has a big section on Hyper-operators, which I guess were renamed to Hyperators, and this is by John Scholes. In it, it actually notes that the next level up from Hyperators is not Superators; it's Hyper-Hyperators. So there's already a naming mechanism, and he beat you to it. And I guess this kind of leads to a question. So the syntax, for those that have been doing their best to hang on to this deep dive of a topic, is that you can use the double alpha and the double omega when creating what they call ... what are they called in Dyalog APL? There's the dfns and dops [07] (is that what they're called? Yeah, I can't remember). So dops ("op" short for operator) are the ones that use the double alpha and double omega to refer to their operands. Here it mentions that there's an extension of that: just add an alpha, add an omega, which means you get a triple alpha and triple omega. And that leads to, I guess, what is not implemented in Dyalog APL but _is_ implemented in NARS2000, at least in the experimental gamma version: Hyperators. So I imagine that if Dyalog did implement it, they'd call those de-hypes. De-highs? We don't know.

00:57:35 [AB]

De-hips!

00:57:36 [CH]

[chuckles] You could generalize the concept to implement hyper-hyper-operators, and then you'd have a de-hype-hype or something like that, and it could just go on ad infinitum. But it's a very interesting idea, and it kind of also makes me think of when I was talking to Kai on episode one of Tacit Talk, where I actually said in that episode (I totally misspoke) that he is able to dispatch because Uiua doesn't have ambivalence; it has a fixed arity for every single function. He is able to dispatch to different versions of what he calls modifiers (NARS2000 calls them operators, as does Dyalog APL) based on the arity of the function arguments that a modifier takes. And it makes me wonder how far you can go, and if you had non-ambivalence, does it enable you to do more? Like, what walls do you run up against? Because this is what we're kind of talking about: not dispatching based on arity, but dispatching based on, quote-unquote, "what your argument is". Is it a function? Is it an operator? Is it a hyperator, which gives you a hyper-hyperator? Anyways, my brain's just buzzing, or spinning, as I said before. But it's a very interesting idea, and it should be noted that NARS2000 has actually implemented it. Or I guess we should just ask: I imagine that you have implemented the triple alphas and omegas, and does it come with any built-in hyperators, similar to the operators that exist in NARS2000?

00:59:19 [BS]

Yes, there's a transform Hyperator that's part of the implementation I mentioned. It does fast Fourier transforms and other similar transforms. Now, there's another idea related to this that I've just been toying with during this discussion: what if we defined the dieresis as a Hyperator?

00:59:45 [AB]

Meaning each?

00:59:47 [BS]

Yeah. Call it each if you like. But what if we defined it as a Hyperator, and now it would take a function on its left and, let's just say, a number on its right, if we want to apply it multiple times. But if we are missing some of the operands or hyperands, we ignore that and fill them in with a jot, essentially an implicit jot, so that we get around the syntax limitations of always requiring one or two hyperands, one or two operands, etc. Seems to me that has a chance of working.

01:00:27 [AB]

But the problem is that the way APL parses, it just grabs whatever is there. Let's take a simple example. Assume for now that a slash, reduce, is always an operator; it doesn't really matter for our example. We have the n-wise reduce: that's where you say, for example, three plus slash some numbers, and it sums every group of three, every window of three, moving along. And let's say then that if we omitted the plus, it could let that slide and just default to plus, or do something else. Maybe it would actually just return the windows, without doing the actual reduction. So three slash the right argument just returns the windows, without doing the reduction. And the problem is that because you have an operator here, a monadic operator, it grabs whatever entity, array or function, is on its immediate left as its operand. So whereas we wanted to preserve the three as the left argument to the derived function, in fact what we get is three slash as a derived function (never mind replicate; let's just assume slash is an operator for now). We get three slash, and that's the operator with its operand, deriving a function, whatever it does, and that's then applied monadically. So we didn't achieve leaving out one of the inputs; APL just grabbed something from the left instead.
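
[Editor's sketch: the n-wise reduce Adám describes, plus the hypothetical fill-the-slot variant that returns the windows themselves; Python, with None again standing in for the implicit jot.]

```python
from functools import reduce
import operator

def nwise(n, xs, f=None):
    windows = [xs[i:i + n] for i in range(len(xs) - n + 1)]
    if f is None:                      # hypothetical implicit-jot case
        return windows
    return [reduce(f, w) for w in windows]

print(nwise(3, [1, 2, 3, 4, 5], operator.add))  # [6, 9, 12]  (like 3 +/ in APL)
print(nwise(3, [1, 2, 3, 4, 5]))                # [[1,2,3], [2,3,4], [3,4,5]]
```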

01:02:05 [BS]

Yeah, not in that case. But that's the way I've implemented mesh and mask: a slash with a data value as the left operand. Enclose that in parentheses, and now you've got a left and right argument.

01:02:21 [AB]

So you have to use parentheses to force, to prevent it from grabbing things outside.

01:02:27 [BS]

That's right, to limit stranding, for example, on the left.

01:02:30 [AB]

But here, even if you were to enclose the slash in parentheses, it would just derive to itself. Assuming the parentheses can return an operator value, then the parenthesized slash returns the operator slash, and then that will then grab the three on the left, and you're still stuck with not getting what you want.

01:02:50 [BS]

I was doing a more specific example with respect to each, so that we could generalize each, to be able to apply it as an operator multiple times, and then have the implementation handle it. Essentially, the implementation is going to look at it and say: "In the old world, this is a syntax error." Or a valence error, put it that way: not provided enough of the operands, hyperands, or the like. But in this case, I'm just going to essentially put a jot in there as a fill, and then just keep going.

01:03:28 [AB]

Yeah, sounds like you have to put jots in, otherwise it becomes ambiguous what you mean.

01:03:33 [BS]

No, I'm saying they're implicit. I'm saying the implementation assumes that you meant to put a jot there. This is wild thinking, by the way. And I'd like to point out, mine's called an experimental system, too.

01:03:49 [CH]

I think this is fantastic. I just looked at the clock and realized I hadn't checked the time, and I don't know whenever it was, however many minutes ago, that we started talking about this topic, because my mind's just been buzzing since. Wild thinking, yes. I like the sound of that. Some might be scared off by that, but I think it's fantastic. This definitely qualifies as one of our many Tacit episodes within the ArrayCast. I guess, actually, I don't want to start a new tangent. But we've been dancing around the topic of Tacit programming. Do you have thoughts? I think the last time we had an implementer, it was Kap with Elias. [08] We asked him, and he had some interesting thoughts. Do you have thoughts or personal feelings about explicit versus Tacit programming in APL?

01:04:35 [BS]

Oh, I love the idea. That is functional programming, where you're just putting a chain, or train, of functions together. I love the idea of anonymous functions, which is another aspect of that. And by the way, anonymous functions are perfect for dealing with integration and differentiation, because typically the function you want to integrate or differentiate is fairly straightforward, and so you can just define it as an anonymous function, which is my term for dfns, etc. So yeah, I love the idea. I think it's a great way to program. Also, one of the things I've not done is idiom recognition, and I think that's another place where it would apply quite nicely, in tacit programming and functional programming.

01:05:34 [CH]

That leads me to another completely random question. You mentioned idiom recognition, and my first thought was: wait, NARS2000 is experimental. Does NARS2000 have a last glyph? Because most APLs have a first glyph, but then they have right-tack reduce as the idiom. It's a shot in the dark, but does it happen to have a last glyph?

01:05:54 [BS]

Not a special glyph, no. But it's got right-tack slash.

01:05:57 [AB]

But that's not the same, though. The first glyph, whatever it is, up arrow or right shoe, does a disclose. And if you have a matrix, it takes the top left element; basically, it discloses the first element in ravel order. A right-tack slash, or better, maybe right-tack slash-bar, gives you the last major cell. It's very different. But I know...
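
The difference, sketched in Dyalog APL with boxed display on:

      ⊃'alpha' 'beta' 'gamma'     ⍝ first: discloses the first element
alpha
      ⊢/'alpha' 'beta' 'gamma'    ⍝ right-tack reduce: last element, still enclosed
┌─────┐
│gamma│
└─────┘
      ⊢⌿3 3⍴⍳9                    ⍝ right-tack reduce-first: last major cell
7 8 9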

01:06:20 [CH]

On vectors, though, it's always the same.

01:06:22 [AB]

On a simple vector, it's the same. But on a nested vector, it's not the same. If you have a vector of words...

01:06:28 [BS]

Well, reduction on a nested array is doing an implicit disclose anyway. And so the point is that most interesting reductions have a disclose on the result.

01:06:41 [AB]

Well, they need a disclose on the result. Reduce encloses the result, right? And you need to disclose.

01:06:48 [BS]

Yeah, I'm saying most interesting reductions, though, are going to be disclose... and f reduce.
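
A small Dyalog APL example of that point, again with boxed display on:

      +/(1 2)(3 4)(5 6)     ⍝ reduce over a nested vector: the result comes back enclosed
┌────┐
│9 12│
└────┘
      ⊃+/(1 2)(3 4)(5 6)    ⍝ disclose to get the plain vector
9 12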

01:06:54 [AB]

Ah, yeah, yeah, that's what you mean. But you could actually add it, because NARS2000 does have, say, right shoe underbar, which it uses as a set function. That's right. But as far as I can tell, there's no monadic meaning for it. So if right shoe is first... Oh, it's not, because it's up arrow that's first. Oh, well. But you don't have a meaning for monadic down arrow, right?

01:07:22 [BS]

I do not.

01:07:25 [AB]

Well, that could be last.

01:07:28 [BS]

Yeah. And I don't think I even do monadic up arrow. Monadic up arrow is first. Yeah. I don't think I do either one of them.

01:07:30 [AB]

Monadic up arrow is first, like APL2.

01:07:32 [ML]

I kind of think that's better. I was going to say I'd much rather introduce a disclosing reduction, as opposed to introducing this last function that's really useful only for very specific things. So you get...

01:07:46 [AB]

It's just so darn useful, though.

01:08:00 [ML]

Yeah, you have to write two characters instead of one, but you have many more possibilities. And then the way that you get the first major cell and the way that you get the last major cell correspond to the way you get the first and last element. So it's a much more uniform language that way.

01:08:03 [BS]

If you're not concerned about the symbols, then as Conor was pointing out, this is really a place for idiom recognition.

01:08:16 [CH]

Yeah. So it could just look at those symbols and say, "Oh, I know what to do." It's for non-practical reasons that I care. Did this come up a couple of episodes ago? I can't remember if it did, but I was solving some problem, and J was one of the languages I did it in, and I had the monadic three-train of last minus first. And it upset me, probably an unreasonable amount, that there was...

01:08:43 [AB]

No, no, for the record, I think you're justified in that.

01:08:46 [CH]

Yeah, yeah, and it's really just about the beauty and the symmetry: if you had, you know, symmetric first and last functions, it goes from being an asymmetric monadic three-train to just beautiful. And it's rare that I say that J had one of the more beautiful solutions, but J actually has first and last, even though it's not perfectly symmetric; it's there in the dot and the double dot on the left brace, I believe. Anyway, it's just because in the last month or two I solved this one problem that happened to have a monadic three-train that needed last as one of the primitives, and I discovered it wasn't there. So probably in a year it won't really be top of mind, but when you said idiom recognition, my brain was like: ah, last, the missing glyph in all of the array languages, except for J. Like Marshall was pointing out, from a pragmatic point of view there are other, more useful things; my motivation is purely just to have a beautiful monadic three-train. We'll go to Bob, then we'll go to Adám.
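
For the record, that fork sketched in Dyalog APL, where last has to be spelled as right-tack reduce:

      (⊢/-⊃) 3 1 4 1 5    ⍝ monadic fork: (last of x) minus (first of x)
2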

01:09:44 [BS]

Wasn't that in your Advent of Code solution? Seems like I saw something like that in there.

01:09:52 [CH]

It might have been; this might have come up. It probably actually did come up, like a couple of years ago, in a video that I might have done, and then I forgot about it. And then it's come up again recently, so yes, you're almost definitely correct, because I don't think it's the first time I've gone searching for the last primitive. And actually, that might have been one of the more recent YouTube videos. It was something to do with prime numbers, a prime number difference or something like that, and it was looking for the first and last prime. Yeah, I did release that. And on a random note, I've been getting really into J and K, you know, so expect more content in the future, because I've discovered... I always knew about the code golfing folks, but I did not know how beautiful code golf is. I'm not sure if this website got recently refreshed or I've just been living under a rock for the last five years, but check it out, folks: code golf, in the description. And I've also been looking into K now. It's official, we're announcing it here first on the ArrayCast: k6 is the official K, and that's because that's the K that code golf uses, k6 by ngn, which I believe Marshall is the one that's looking after these days. And I've learned a lot of K, because you get little points; I'm a real sucker for leaderboards. I literally discovered this website like a week ago, and I've already got like 12,000 points on the site, and we'll soon be top 60. Will we get to the top 10? Nobody knows. And honestly, I'm just messing around with J and K, trying to figure out, how do you do this simple thing? I just discovered the other day that three p colon can be shortened to q colon in J. We love that, folks. Or no, sorry, I said in J... hopefully it was J, anyway. And then in K, to decode or encode, I never remember which is which, you can just do a single integer slash. Oh, I'd forgotten how amazing it is to discover that you can spell something in like two characters, you know? I've been doing it for too long, and it's been a while since I typed something new. Anyway, folks, there's my little monologue. I'm sure half of the listeners are now headed over to code golf. They've got little badges; it's fantastic. We need to get NARS2000 or one of the APLs up there. It's very sad that you can't solve these in APL yet.
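
For APL-minded listeners, the counterparts of those K spellings are APL's decode and encode, ⊥ and ⊤; a minimal Dyalog APL sketch:

      2⊥1 0 1     ⍝ decode: base-2 digits to a number
5
      2 2 2⊤5     ⍝ encode: a number to base-2 digits
1 0 1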

01:12:06 [AB]

Well, I just wanted to point out J's hyperators. Because you speak about these combinators, [09] and we have operators that can take one or two operands and combine them somehow, or modify how they're being applied. And then we have the special three-train that takes three functions and combines them in a special way. So clearly the three-train has the role of an operator, right? It takes some number of functions and derives a new function. But given that J now has these invisible constructs that are not just function trains, of which some resolve to operators, or whatever you want to call them, adverbs and conjunctions, then clearly those invisible constructs have the role of hyperators.
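
To make that role concrete, a small Dyalog APL example; a three-train takes three functions and derives a new function:

      avg←+/÷≢    ⍝ three-train (fork): sum divided by tally
      avg 1 2 3 4
2.5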

01:13:07 [CH]

Yes. I mean, I wouldn't say it's a one-to-one correspondence, but there is, you know, a double squiggly line between the modifier trains, which you just described, and hyperators.

01:13:17 [AB]

No, it depends what they return.

01:13:19 [ML]

Well, each one could be implemented as a hyperator. Yeah, if hyperators can take three inputs sometimes.

01:13:23 [AB]

Yeah, exactly. Just like a normal three-train, a function train, could be implemented as an operator, if operators could take three operands, right?

01:13:36 [BT]

And in fact, playing around with J and the invisible modifiers, they can produce all parts of the language, because essentially they're almost a superset of the original things we think about, either a three-train or a hook, all those kinds of things that are sort of predefined. You can extend it using... I can never remember, is it lev and dex? The two conjunctions that will pull the operands, the verbs, from either side. If you combine that with the direct definition, which is the double curly braces, you can extend it in sometimes very odd ways to do multiple operands; you can do four operands and two arguments using that combination. It's sort of halfway to tacit, because you have to specify the u's and the v's for the operators, but then you can use the lev and dex conjunctions to pull out other ones. So it's really quite weird and wonderful. We'll have Henry on, I think talking about full tacit, sometime in July. That's going to be a conversation for sure, because it's got immense power, but it also, I think, can lead to incredible objectification. Objectification. Yeah.

01:15:02 [AB]

Obfuscation you mean?

01:15:04 [BT]

I've obfuscated obfuscation. It can put on another layer, which is great if you know the layers are there and what you're doing with them. But if you don't, it becomes an almost opaque process to try to figure out what might be going on.

01:15:23 [CH]

Yeah, I mean, this is... is it good for the podcast that I love talking about this stuff so much? We don't know.

01:15:32 [BT]

It gives us topics.

01:15:33 [CH]

It is incredibly... Yeah, you called it wild, Bob. I just thought: provocative. My brain is spinning, buzzing in a fantastic way. And the whole time this is going on, it's a tragedy that we... We've been trying to get someone to talk about Jelly. I haven't been able to get a hold of Dennis Mitchell, or at least that's their online moniker, but I've been trying to track down some folks, because Jelly's model of tacit programming uses something that I basically refer to as arity chain pattern matching. It also has functions that all have fixed arity, and it basically has a kind of pattern matching table: if you can get the arities of each function, so if your arities are 1, 2, 1, then it says that corresponds to the equivalent of a monadic fork in APL. And it has a whole table of these things. So once again, it's a non-ambivalent array language, but it has a different sort of way of composing these functions. And then in KAP, in their APL, as Adám mentioned, you can spell forks with symbols; it doesn't need to be these trains. I imagine, Bob, that in NARS2000 you adopt the Dyalog APL model: three-train forks, and two-trains are just composed in an atop. Is that safe to assume?
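
For reference, a sketch of that Dyalog-style model in Dyalog APL: three-trains are forks, two-trains are atops.

      3(+,-)1    ⍝ dyadic fork: (3+1),(3-1)
4 2
      (⌽⍳)4      ⍝ two-train (atop): ⌽ applied to the result of ⍳4
4 3 2 1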

01:17:01 [BS]

Yeah, I did it differently at first, but I was convinced that Dyalog had a good design. So I shifted over to that. Yeah.

01:17:09 [CH]

And here's... maybe we're going way past time, but here's another random question. As I've been diving into J a little bit more, I realized that cap in J... I used to think of those mentally, my mental model, and what I would call them in my head, was fork breaks, because it's taking the left tine and you put cap there. And I always thought that that was forcing it from being a three-train to a two-train. But that's actually not what's happening in J, because a two-train in J is the S combinator: it's a hook. Cap is really taking what would be a three-train, either a monadic or dyadic fork, and pushing it to be a different two-train, which is the B combinator. So then calling it a fork break isn't actually a good name for it, because it's kind of like a two-train changer or something; it changes the definition. And then I was thinking, how does that compare to BQN's Nothing? Is it fine to still call that a fork break or fork breaker, or is it actually doing something similar to J? And I'm not sure, Bob, if NARS2000 has something similar to J's cap or BQN's Nothing.

01:18:25 [ML]

You could call it a tine clipper, tine trimmer.

01:18:28 [CH]

A tine tripper? That's nice.

01:18:30 [ML]

Tine trimmer. So you got your fork here and you just take some scissors to it.

01:18:34 [AB]

A tine trimmer. It's like a jot, right? If jot hadn't been overloaded, if it was still this null-y thing, then it could have been put into that slot as well for this kind of thing. It's filling a slot that otherwise would be used. It very much does that in J, right? Because the two-train is very different in J, but the atop and the fork are not very different. The three-train says: apply this function and apply that function, and then feed those results to the middle function. And if you fill the left slot with whatever symbol stands for that, it says that whole leg of the computation just disappears, so nothing gets fed into the left argument. And essentially, it does that in BQN as well. It just happens to be that that's the same as a two-train.

01:19:30 [ML]

Well, I mean, in BQN, you can use it not just in trains, but you can use it in an ordinary expression. So it does mean it is a syntactic thing that you can put in a function's argument to say there is no left argument here. So that's a general rule.

01:19:52 [AB]

I've been studying APL+ lately, and it has something called quad no value, which is...

01:19:57 [CH]

How is that spelled?

01:19:58 [AB]

Quad, like all the other system functions? N-O-V-A-L-U-E, quad no value.

01:20:05 [ML]

You thought it had a glyph.

01:20:08 [CH]

Like, in my head, I hadn't said this out loud, but the reason I've been spending brain energy on this, and why it came up, is for code golfing purposes. I mean, I've already expressed on this podcast before that cap in J breaks my heart, because it's not as ergonomic as the three-train, two-train model in Dyalog APL. When you want the B combinator, it's cumbersome: you need two different symbols. And I also don't think it looks great. And it's composing unary functions, which is such a common thing. Anyway, the fact that it's two characters, the left bracket and then the colon, breaks my heart every time I have to spell it. And what I've just started discovering is that a lot of J pros, we'll call them, or the more idiomatic way to do things in J, is to kind of avoid cap. If there's a way you can use an ampersand or an at to get the same composition, that's actually the more idiomatic thing to do. But when you're coming from APL and you're so used to monadic three-trains and the two-train being composed, my default is just, oh, in order to get what I want, put a cap here. But there's usually a different way to do it, and I'm just not used to doing it that way. Bob?

01:21:18 [BT]

There's kind of two tribes in J. There are people who use caps to extend the list of applications, and there are people who instead use ats and atops, depending on how they want the ranks to come through. And I've seen both used. Depending on who you're reading, they'll predominantly use one or the other. I think you can switch back and forth depending on what you think is clearer to people. But sometimes an at or an atop makes things very clear, and other times it makes more sense to use a cap. I will say I do find it difficult to read multiple caps as they extend. For some reason, my brain doesn't quite hook to that.

01:22:05 [ML]

No pun intended.

01:22:06 [AB]

Ohh, come on, that was like the best pun ever.

01:22:11 [BT]

And I don't even know what I said.

01:22:12 [CH]

You said hook. And this is not a video podcast, but I think half or two thirds of everyone instantly started laughing or smiling.

01:22:24 [BT]

When I edit it, I'll figure it out.

01:22:26 [AB]

That was epic.

01:22:30 [CH]

And what I was thinking too is that part of the reason, I think, for my bias, my inclination to reach for caps, is that when I first discovered this, it blew my mind that there's the 13 colon foreign, I think, or plus or minus one, that can convert an explicit expression to a tacit expression. And a lot of the time, whatever the algorithm is that's used to do that, you end up with a lot of caps in your code. So using that tool, seeing the caps show up, I think my brain thought, oh yeah, okay, that makes sense, and it checks out with my two-train, three-train Dyalog APL model. But then, as I've spent more time on the J forum and on code golfing solution sites, trying to learn the tips and tricks of the trade, I've started realizing that maybe that's not the only way to think of it. Anyway, we should probably ask: are there any final questions for Bob? Because we've fallen off the deep end here a little bit. And I just have to say that...

01:23:30 [AB]

I have a question. Yes. Will you come back for another episode with us?

01:23:36 [BS]

Oh, I'd love to. This has been a huge amount of fun. And the fact that it heads off in so many directions all at once is even more attractive.

01:23:48 [BT]

Yeah, we're famous for that.

01:23:50 [CH]

I have to say, I absolutely love, I mean, I love all the interviews that we have, but there's a special place in my heart when we have implementers on, because half of the panelists here are implementers themselves. And these are the episodes where I'm really looking forward to listening to the edit before we release it, because Marshall will be saying something, and then you'll say something, Bob, and then Adám will say something, and I'm like, okay, how am I going to moderate this? Because I don't actually know what's happening right now; I've been lost on what's being talked about. Which, for the listener... I think they tend to like the more technical episodes. But yeah, it's just an absolute blast, because I did not imagine this was going to go the way it went, but I am overjoyed that it did. So yeah, we'll have to have you back, and hopefully we'll get to see each other again at a future Minnowbrook conference as well, because that was a ton of fun, getting to see the latest updates from NARS2000. And I guess we should say that we'll throw in the show notes a link for anyone that wants to go try it. It's free to download, I believe, and free to try. As Bob mentioned, not necessarily recommended for a production system that you might be making or losing millions off of, but if you want to go try out some of these experimental features, it definitely sounds like there's a ton of stuff in there that you're not going to find in any of the other APLs out on the market today.

01:25:07 [BS]

You know, the next time I come on here, I'd like to talk some more about the implementation. Yeah, absolutely. There's a lot of really interesting things going on there in the implementation, particularly with hypercomplex numbers, where you think, well, that's got to be a morass of complications, but actually it's much simpler than you might think.

01:25:31 [CH]

Awesome. Put it in the Google doc of implementing an array language with Bob Smith, his second time on the podcast.

01:25:37 [BS]

Next time next time.

01:25:39 [CH]

We'll get that slotted for some time in the future. All right, we'll throw it over to Bob Therriault for where to contact us, if you wish to.

01:25:48 [BT]

You can get in touch with us at contact@ArrayCast.com. And if you have questions that you would like to see asked on future episodes, feel free to include them, because it's always interesting to see some of those. Also a shout out to our transcribers, Sanjay and Igor, for doing the work to produce a text version of this. Many people access it that way too, sometimes maybe just to search, but we always like it when people listen to us. Some people really enjoy listening to us, and we thank them for their comments. And this has been a really, really fun episode.

01:26:24 [BS]

A huge amount of fun. Shout out to you all.

01:26:26 [CH]

Yes. Once again, thank you so much for coming on, Bob. I'm very much looking forward to part two when we, we make that happen. And with that, we will say happy array programming.

01:26:36 [ALL]

Happy array programming.

01:26:38 [Music]