Transcript
Thanks to Rodrigo Girão Serrão for providing the transcript.
00:00:00 [Bob Therriault]
This will be a test of our listeners’ devotion.
00:00:03 [Adám Brudzewsky]
If you still have a listener after this, they can take anything.
00:00:06 [Music Theme]
00:00:16 [Conor Hoekstra]
Welcome to another episode of ArrayCast.
00:00:19 [CH]
My name is Conor. I'm your host for today and I guess for all the other episodes until I get kicked off, we're going to go around and start with brief introductions of our panelists. We'll start with Bob and go to Stephen. And then go to Adám.
00:00:29 [BT]
Well, up until recently I would just say that I've been a J enthusiast for 20 years, and then somebody asked me a question and I didn't step back from it, so now I'm actually coordinating the revamping of the J Wiki, which is a bit of a monster, but we're getting it under way, we've had a meeting and people are coming together, so I actually have an official responsibility. But yeah, that's what I'm doing right now with J.
00:00:57 [AB]
I'm Adám Brudzewsky. I totally feel your pain, Bob. I do a lot of editing on the APL Wiki. But other than that, I'm a full-time APLer at Dyalog Ltd.
00:01:08 [Stephen Taylor]
I'm Stephen Taylor.
00:01:09 [ST]
I used to edit as well. I used to edit Vector, the Journal of the British APL Association, for some years. I've been an APL programmer since forever and a q and kdb programmer for the last several years. Currently the Kx librarian.
00:01:26 [CH]
And my name is Conor, as I mentioned before. I'm a professional C++ developer, but I'm a huge array language enthusiast and super excited about today's topic. I was trying to jog my memory: have I done any editing? Once, for Father's Day when I was like 8, I made a little three-page newspaper with my three sisters. I'm not sure if that counts, but I think that's the only editing I've done in my life. But yes, we don't have any announcements for today, unless we want to chat a little bit more about what you're doing, Bob. I did see on the reflector or the emails or whatever they call the J forum. Are there things we can expect, or should we wait until more meetings have happened and you've got news to announce? Or do you want to say anything more about the revamping of the J Wiki?
00:02:12 [BT]
Well, as Adám acknowledges, this is a really big project, and whenever I hit a really big project, the first thing I do is sort of walk in and look around and find out what I'm working with and the people I'm working with, and luckily I'm working with a number of really excellent people. A lot of different skills, but I think we still have to come to terms with how we're going to deploy that. So it's not going to happen quickly, but most large projects don't happen quickly. And in a general direction, we're just trying to make the wiki a bit more accessible, a little easier to search, which I think a lot of people will be very happy about, and maybe a bit more friendly towards people who are just picking up J. Those are sort of the main areas, but the example I gave is that the wiki is like a mansion that's been left out in the wilderness for a few years: it's got great bones and it's got a lot of different things, and you could turn it into Disney World if you wanted. But there are a few rooms there that should be carved off, and maybe a few others that should be expanded. So at this point, other than the general thing of making it easier to search and more friendly to newcomers, I don't think there's too much more we can talk about right now, but we are going through and figuring out what we're going to trim off.
00:03:28 [CH]
All right, well, it sounds like over the next however many months, and this could be a year-long project, there'll be announcements along the way, so we'll look forward to hearing more about that as time goes on. So today, this is number 3, I think, of talking about tacit programming, AKA kind of point-free programming. We've discussed the subtle differences between the words tacit and point-free in our two prior episodes. We will link to both of those in the show notes for folks that want to go check those out. I always have a super fun time recording these episodes. And today we're going to do something a little bit different. We're going to start by trying to solve a problem live, which I have not fielded to my 3 panelists. So Bob, Stephen, and Adám are going to try and solve this live. There's no real right or wrong answer. I'm interested just to see what these array minds' first initial thought is; and you don't actually have to, you know, spell out the "oh, it's a reverse and then a parenthesis" and whatever. It's just the verbal explanation of how you would do this, because I think diving into the character-by-character solution of this is probably not super interesting to the listener. But the problem is as follows. I believe it's from a Perl weekly challenge, and, well, actually, I won't tell you what it's called, because then you might be able to look it up while I'm explaining it. But basically you're given an array of numbers, so for instance we'll call it 1 2 3 4, and you want to return another array of numbers which is equal to the product of each of them, but where you have removed one element each time. So for 1 2 3 4 you want to first remove the one and then take the product of the rest, which will be 2 * 3 * 4, which is 24. So that'll be your first number in your output.
Your second number will be removing the two, which is 1 * 3 * 4, which is going to be 12, so it's 24 12. Then you remove the three, which is going to be 1 * 2 * 4, which is 8, and then you remove the four, which is going to be 1 * 2 * 3, which is 6. So I mean that's sort of a trivial example where it's increasing, but you could have like 10 5 7 8. And so you're basically one by one removing an element and then taking the product of that array, and that corresponds to an element in the output array. So there's several different ways; when I first stumbled across this, I solved it, and I'm interested to hear. So whoever wants to go first, or, you know, whatever first comes to mind.
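[Editor's note: a rough Python sketch of the problem as Conor describes it, just a direct translation of the spoken description, not any panelist's actual code. The function name is made up for illustration.]

```python
from math import prod

def product_except_self(xs):
    """For each index i, the i-th result is the product of all
    elements of xs except the one at position i."""
    return [prod(xs[:i] + xs[i + 1:]) for i in range(len(xs))]

print(product_except_self([1, 2, 3, 4]))  # [24, 12, 8, 6]
```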
00:06:19 [ST]
Just to be clear, to make sure I've followed the problem: if you've got N elements in the array, then the result to be returned is going to be an array of N elements, but each one is a product formed in the way described.
00:06:40 [CH]
Correct, yes, where technically the index of the output is going to correspond to the product of the array with that index, the corresponding index, removed.
00:06:49 [ST]
Oh cool, then it's trivial.
00:06:50 [CH]
Oh, trivial, uh?
00:06:52 [ST]
I'm going to put a marker down for an immediate solution, but I won't say what it is, and let's hear from my colleagues.
00:06:59 [CH]
OK, so I think, Stephen, this could go poorly if all three of you jumped to the solution that I have in mind. So, Adám, well, actually, maybe tell me how many characters, in whichever language you have. Even if you're off by a couple, it doesn't really matter, just ballpark it, so that I know.
00:07:20 [AB]
I've got 16, as an APL dfn.
00:07:26 [CH]
OK, roughly 16 for Adám. Stephen?
00:07:30 [ST]
I got 9 as a dfn.
00:07:32 [CH]
OK, that's ballpark what I'm expecting. Bob?
00:07:35 [BT]
I've got six, but I haven't run it, I mean, I'm just doing it in my head. I could do it on the machine, I guess.
00:07:41 [CH]
OK, yeah, plus or minus, it's just ballpark. So let's go with the longest of them first. We'll start with Adám, and like I said, you don't need to read it character by character, but just verbally explain how you would go about doing this.
00:07:52 [AB]
Right, so assuming that the input array is a list, or vector: I generate all the indices for that vector. And then I use a set difference, where I take the entire set of those indices and pair it up, with set difference, with each one of those indices. So I use the rank operator for that. And then that gives me a 2-dimensional array of indices, which I can then use to index into the original array. And now I have on each row a subset of the input array, which I can then just multiply across.
00:08:34 [CH]
OK, so I don't think I understood the set difference part, but it sounded like you were building up a matrix where each row in the matrix corresponds to a mask that's being applied to your original array, and then you're just doing...
00:08:47 [AB]
Not a mask but a list of indices.
00:08:51 [CH]
Or a list of indices. And then you're using that to what? Pull or pick?
00:08:55 [AB]
So, now I have a matrix of indices and I can just use that to index into the full array, so that gives a matrix of elements, and then each row corresponds to a subset that I can then multiply across.
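[Editor's note: a rough Python translation of Adám's index-matrix idea, not his actual APL. Each row of the matrix is the set difference of all indices with one index; you then index into the array and multiply across each row.]

```python
from math import prod

def product_via_index_matrix(xs):
    all_ix = list(range(len(xs)))
    # The set-difference step: row i holds every index except i.
    index_matrix = [[j for j in all_ix if j != i] for i in all_ix]
    # Index into the original array, then multiply across each row.
    return [prod(xs[j] for j in row) for row in index_matrix]

print(product_via_index_matrix([1, 2, 3, 4]))  # [24, 12, 8, 6]
```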
00:08:59 [CH]
OK, yeah, that makes sense. Do either Stephen or Bob want clarification? Just in case, for the listener: depending on what level you're at in your array journey, that may or may not have made sense, but maybe after this I'll ask folks to type up their solutions and we can find a way to link them, if people want to take a look and actually read through these. OK, so we got the first solution from Adám, which is sort of building up a matrix of indices that are used to extract out the corresponding elements, so you can get the subarrays, and then you just do multiplication on those subarrays. We'll hop over to Stephen, who's got the next shortest solution, plus or minus a couple characters.
00:09:46 [ST]
Yeah, actually Adám's approach was the first one I was thinking of. And if you express it in terms of equivalent masks, you can get that by generating an identity matrix. But you don't need to do that. The solution is: if the original array is X, it's multiply-reduce X and then divide by X.
00:10:17 [CH]
Yes, Stephen has gotten to what will be very close to the optimal solution. Do you want to explain that again, just in case listeners didn't catch it?
00:10:32 [ST]
So we've got a list with N items in it and we want the product, but we want N products, each one a product with one item taken out. So that's equivalent to the product of all the items in the list, divided by the items in the list.
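[Editor's note: Stephen's trick sketched in Python, again not his actual dfn: one product of the whole list, then divide it by each element. This sketch assumes integer input with no zeros; the zero corner case comes up immediately below.]

```python
from math import prod

def product_divide(xs):
    total = prod(xs)                  # multiply-reduce X
    return [total // x for x in xs]   # divide by X, elementwise

print(product_divide([1, 2, 3, 4]))  # [24, 12, 8, 6]
```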
00:10:56 [AB]
What do you do if one of the elements in the list is a 0?
00:11:02 [ST]
Nasty
00:11:02 [CH]
We'll ignore that corner case and assume that it does not exist, and all the values have to be either negative or positive, but can't be 0. But yeah, that's a good corner case point. And the reason that this works so nicely in an array language is that if you take the product you end up with a scalar. So in our example, the product over 1 2 3 4 equals 24, and then you can divide that by an array to get the resulting array, whereas in many languages you'd have to set up, you know, some sort of map in a functional language, or a list comprehension in a language like Python or something. But let's get to Bob to see if he had an even different solution than the two stated there.
00:11:45 [BT]
OK, so my solution, and again, it's all in my head, so when we come to writing it out, it may be completely wrong. There is a verb in J called outfix, and I'm smiling when I say it because at one point, I think it was when you were doing one of your live things trying to revamp the J source and turn it over to C++, we found infix and you said, you know, "This is called infix. What kind of a name is that? And what happens if you do this other thing?" And I said, well, that's outfix, and you said, "Oh, that's even more ridiculous." Well, outfix is exactly what you're looking at. All outfix does is take the list and remove one item at a time, and it gives you the array of items without the missing one each time. Now, you can change the number of items that you take out each time, but in this case I think it's negative one I use, so it's only taking out one at a time. Anyway, once I've got that list, all I'm doing is a product reduce across each line, so I'm just multiplying each line across. That's how we do it.
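[Editor's note: a minimal Python sketch of the size-1 case of J's outfix, followed by a product across each remaining row. The function names are made up; J's actual outfix is the adverb `\.` and is more general.]

```python
from math import prod

def outfix1(xs):
    """Each row is the list with one successive element removed."""
    return [xs[:i] + xs[i + 1:] for i in range(len(xs))]

def product_via_outfix(xs):
    return [prod(row) for row in outfix1(xs)]

print(outfix1([1, 2, 3, 4]))
# [[2, 3, 4], [1, 3, 4], [1, 2, 4], [1, 2, 3]]
print(product_via_outfix([1, 2, 3, 4]))  # [24, 12, 8, 6]
```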
00:12:53 [CH]
Awesome, yeah. At least to my knowledge, that outfix operation does not exist in APL or BQN, although I do have a limited knowledge of BQN, so it's possible. But yeah, that would be a very useful utility to have if you're trying to solve a problem like this. I was concerned for a sec that everyone was going to come up with the exact same solution, because Stephen got it so quickly. So when I was solving this, I'll walk through the three different solutions that I initially came to, and how the third one relates to Stephen's. The first way I solved this was similar to Adám's, although with the twist that Stephen mentioned where you're taking masks: you get the identity matrix, you negate that identity matrix so that you get zeros down the diagonal, and then, using mix and replicate or whatever, you can filter out the ones you want and then do the product across. The second thing, I was like, well, this probably isn't the best, but I'll give it a shot: I tried basically using an iota sequence and rotating and then dropping the first element. So a little bit more work, but it actually turns out to be about the same number of characters as the initial solution using the Boolean masks and filtering out. And then I stared at it for a while, because I'm starting to get this intuition: when I look at a solution that has a certain number of parentheses, or just a certain length relative to the complexity of the problem, there's got to be a simpler way to do it, and that's when I came to Stephen's solution. But Stephen's solution in tacit is 4 characters: multiply slash for your reduction, division for your binary operation, and then identity for your right argument. Which is actually the equivalent, I believe, of a hook in J.
And this problem, to me, and this is what's going to lead into us talking about our favorite tacit expressions, is such a good demonstration of why tacit programming is so incredibly powerful and beautiful! Because compare my two original solutions: building up the indices using the identity matrix and negating it, and then you've got to mix it and filter it out, and probably there's a more polished way to do it, 'cause I'm not the world's best APLer, but it's, you know, a double-digit number of characters, and there's a lot going on. And then you do it the rotate way, which is definitely suboptimal. And then you come to this 4-character fork where it's just basically doing a reduction and a division and identity. And the thing is, Stephen, you jumped to it pretty quickly, recognizing that you can just do a single multiply reduction and then divide out each of the individual elements. I didn't jump to that until I was looking for a fork or something simpler, and it was the fact that APL and these array languages have these combinators that pushed me in the direction to think: is there an easier solution? Because I think naively, when you have all these algorithms at your disposal, building up a Boolean mask or indices to select out the elements you want and then multiplying them one by one, that to me is the most obvious or intuitive thing to do, because that's kind of what you would do in real life. When I was explaining it, I explained it one by one, going through the first example, dropping out an element. I didn't explain it by multiplying them all up and then dividing them out, because that's not really what the problem was asking for.
It's a way of solving the problem, but at least for me it didn't become obvious that you could do that until I was looking for sort of a fork or a combinator expression. Anyways, I'll stop there and get folks' thoughts. And maybe I'll make a YouTube video along with this, to launch when this episode comes out, because probably there are a few folks that are just starting out in the array world and they're like, what in God's name are they talking about? But yeah, I'll pause. What do folks think? Do they agree? Disagree? Bob, you go ahead.
00:16:56 [BT]
Well, one thing, and it goes back to the most recent episode of your other podcast, ADSP: the fact that you've got a shorter list of verbs or operators or conjunctions, whatever you call them, doesn't mean that it's more efficient. It's the implementation, so you would need to look at the different ways it's being done, the ways the languages are actually interpreting it. There might be certain combinations that are even quicker, but they might not be as short. But I take your point that if you are looking at tacit, it does tend to stimulate your mind to look for other solutions, because you may only be typing out, you know, four or five characters, and so it's easy to try it. Which is another thing I was thinking about when you asked the question. If you're thinking about how I would typically solve this problem: I'm listening to your question originally, and I'm about as close as I come to panicking, because the way I would normally do this is probably jump onto a keyboard, try a bunch of things, and 15 minutes later I might come up with something, and 15 minutes after that I might actually come up with something that I'm fairly satisfied with. But it's not something I would usually do off the top of my head. Luckily, as soon as you said it, I said, well, that's it, that's outfix, and the stress level dropped. So I guess the thing I'm saying is, it sounds like we can come up with these things off the top of our heads, but for me that's very atypical. Normally I would be, you know, "Yeah, Adám's sounds like a good solution." I mean, that's about the best I could come up with.
00:18:34 [CH]
Yeah, and I will admit it's a tiny bit unfair, because when I solved this I did exactly that. I'm not sure if it was 15 minutes or 30 minutes, but I did exactly that. I started, you know, playing around, as I've done in previous YouTube videos: this is the most naive thing, I'll do that first, and then I'll try something else. And then I'll sit and I'll be like, there's got to be something better than this, right? And it's not until I've tried a couple different things. With respect to the performance, though, 100%: I've seen, you know, the APL wizards that know what's going on under the hood with Dyalog APL, and they'll be like, "Oh well, this is great, but if you do this or this," and it's a little bit longer, and sometimes you're just like, oh, they know some optimization where under a certain length this one algorithm is going to be blazingly fast. For this problem, though, I actually think the fork is by far the most performant, because the two other solutions are going to involve quadratic space in a sense, because you're building up all these masks or lists of indices, whereas this one is a single reduction plus, basically, a scalar-with-an-array operation, which is going to be quite fast. That being said, though, I think this is a unique problem, in that many times there is some longer form, relying on extra knowledge, that'll be faster if you really know the ins and outs of the interpreter.
00:19:53 [ST]
As best I can reconstruct my thoughts on this, I jumped initially to the same approach that Adám was using, which I would say is a more general approach. Because if you're going to take some elements out of the list and then take a product, then generating a bunch of masks seems like a good place to start, and it was only when I saw that I'm only taking out one in each case, and it's a different one each time, that I got the insight that I only needed to do the one full product. I've got an extension to cope with Adám's corner case.
00:20:40 [CH]
Alright, let's hear it.
00:20:42 [ST]
So I said take the product of X divided by X. To cope with Adám's corner case, you divide by X + (X = 0).
00:20:56 [CH]
Oh yeah.
00:20:56 [ST]
And I wonder... now, that's a simple and obvious extension, which I put in in a heartbeat as soon as Adám made his annoying comment. I put that into the dfn, but I'm thinking: so what do I do now if I'm still wanting to go tacit?
00:21:16 [AB]
In k, no chance, I guess. In APL, not a problem at all, you can literally write the product, and J is the same thing: the product divided by the identity plus 0 equals the identity. Or you could combine the zero and the equals by binding the argument, currying the argument, so: identity plus this.
00:21:42 [BT]
What would we want the result to be? Because if, say, there was one 0 in your string, what you're going to end up with as a result is a bunch of zeros, except for the one where the zero's taken out, where you'd get a positive integer, right?
00:21:56 [AB]
This is what you asked for.
00:21:57 [BT]
Yeah, I know, but that's what I'm saying: I'm not clear on where it creates such a problem with... well, I guess if you're doing the division...
00:22:05 [CH]
Division by 0, yeah, that’s the problem.
00:22:06 [BT]
Division by zero, which in J gives you infinity. It actually gives you a value, which is infinity; it's not undefined.
00:22:13 [CH]
Go ahead, Stephen.
00:22:14 [ST]
I was just going to say my fix doesn't work at all. Because when you initially take the product of X, it's going to be 0.
00:22:27 [AB]
Yeah, so even if you can write it tacit, it doesn't help you, it's still wrong. You might think you could remove all the zeros first, but that's not going to help you either, 'cause if there are multiple zeros they need to be factored in. Another thing that I thought about when you said it's all very clever, the product divided by all the elements: I think the moral of this maybe is "know your data". Conor comes in afterwards and says there are no zeros. Well, he didn't tell me that, right? And also, what about nested data, right? This is not going to work if every element of your list itself has multiple elements. That's just going to be a mess, and hm...
00:23:12 [ST]
Right, right?
00:23:15 [AB]
Whereas using masks or indices will give you exactly what you're asking for. And as you mentioned before, also, if we wanted to switch up the operation here so it's not multiplying, it's some other operation: it only works because you have this nice relationship between multiplication and division, but not everything can be reversed. What if it was max or min? There's nothing you can do; it's lossy.
00:23:43 [BT]
Yeah, I'm really glad this discussion's turned into this sort of deeper thing. People who are starting out with the array languages are going to be going, "Oh my goodness, this is so confusing," and the point is that none of us, I don't think, in most general circumstances would have come up with something off the top of our heads if not for our experience. And when we were starting out, we certainly would have been trying things and putting things together, because that's the nature of a language that's interpreted. So if the solution doesn't strike you right away because of your experience, and you're walking the swamp trying to figure out what's water and what's mud, you'd be in with the rest of us. If it doesn't come from experience, then it comes from working through the problem, and as Adám says, it's really important to think about your data, because that's where it starts from. You're making a transformation on the data. If you don't understand the data, it would be very hard to come up with the right transformation.
00:24:37 [CH]
Yeah, and it's a great point from Adám. A really valid criticism that I get a lot on the YouTube videos that I make is that I'm solving these little leetcode problems where they have these really nice constraints, like "oh well, we know that the list is going to be less than 1000 elements and it's always going to be flat," and you're not going to have any, you know, bad input or something. It's designed for you to solve the algorithm part and not have to think about all the corner cases of the data and the stuff you run into when you're actually writing production code to deal with the real world. That stuff is very important. So I completely think that's a great point, and I'll actually go and find the problem statement and see, but my guess is that zeros are included, and I think I submitted that as a solution, so my solutions are technically broken for that corner case.
00:25:35 [BT]
And if you had multiple zeros, then you're going to have zeros right across the board in your solution.
00:25:41 [AB]
Yeah, so if this was production code, very important code, you could speed things up by starting off with testing some things. Are there any zeros? How many zeros are there? In which case, early outs.
00:25:54 [CH]
Yeah, I guess I'm not even sure what production code would be. Maybe we can all go away and see what we would write, but one version is just: is the zero count greater than one? Then it's just zeros across the board. Is the zero count one? In which case it's just filtering out that one zero in the product, and then the fork works perfectly. But is that better or more performant than the index one or the Boolean mask one or even the outfix one?
00:26:25 [AB]
Oh absolutely.
00:26:27 [ST]
If you've only got one 0, then there's only one non-zero element in your result.
00:26:34 [AB]
Yeah, I mean, in the general case where there are no zeros, even then we're talking not comparable at all, right? You can use vector processing, and we're talking about roughly 2N multiplications, right, or divisions, whatever. So it doesn't even come anywhere close to the O(N²) [Oh-en-squared] stuff that we do otherwise. But again, it's not as general, right?
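[Editor's note: a hedged Python sketch of the zero-aware variant the panel just described: count the zeros first and take an early out, falling back to the product/divide fork only when the data has no zeros. The function name is made up for illustration.]

```python
from math import prod

def product_except_self_safe(xs):
    zeros = xs.count(0)
    if zeros > 1:
        # Two or more zeros: every product contains a zero.
        return [0] * len(xs)
    if zeros == 1:
        # Exactly one zero: only the position of that zero is non-zero.
        nz = prod(x for x in xs if x != 0)
        return [nz if x == 0 else 0 for x in xs]
    # No zeros: the product/divide fork is safe.
    total = prod(xs)
    return [total // x for x in xs]

print(product_except_self_safe([1, 0, 3, 4]))  # [0, 12, 0, 0]
print(product_except_self_safe([0, 2, 0]))     # [0, 0, 0]
```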
00:27:04 [CH]
Yeah, and I will say that that pattern of building up the indices, whether that's using an identity matrix to filter out, you know, a single one, or whether it's using base 2 to get the binary representations of different numbers between certain ranges...
00:27:21 [CH]
It's an extremely useful pattern.
00:27:24 [CH]
That is, you know, if you learn how to write that solution, this is where, if Aaron Hsu is listening, he'd be like, "that's exactly my point": you learn that one pattern, or idiom, I guess, if you will, and it extends to so many different use cases. Not just where you're dropping a single element, but where you have some other problem where you need to filter based on some sort of operation or predicate, it becomes very useful. But at this point, let's segue now into our favorite... or, sorry, Stephen, you've got one follow-up thing.
00:27:55 [ST]
Yeah, I want to pick up on a point that was raised, I think it was last week, about test-driven design. So you've talked about how you actually do these things in the REPL, and Bob was saying, you know, I sit down and type some stuff. And I'm thinking, again, if you weren't deliberately catching me on the hop in front of the microphone, I would've sat down and tried a few expressions at a fairly early point. I did generate a bunch of vectors to try this on. And I do that in the REPL without having to set up a test framework or server, and satisfy myself that I'm getting the right answers across a variety of cases, and I do that pretty informally. It doesn't really qualify as test-driven design, 'cause the tests would be stuff I've made up on the fly, but it pretty quickly gets me to some code that works over a wide number of cases. And sometimes I've been able to put, in a comment line in a one- or two-line function, a single expression which, when executed, will test whether the function is working. So that's the point I guess I'm reaching for here: what I guess we think of as the natural way of working in the REPL in the array languages incorporates a lot of test-driven design principles.
00:29:33 [CH]
Yeah, I completely agree. That's basically every time I'm solving one of these, you know, cute problems: I'll start with the first test case and just work until I get a solution that solves that, and then immediately... you know, I don't actually know how to nicely build up and iterate over binary functions, but if they're unary, I'll just create, whatever, a list of lists with the parentheses in APL, and then just do a foreach and check: do my answers match the expected answers? And I don't think that's purely it; I think there are different flavors of test-driven design, but one of them is sort of the red-green one, where you set up your first test and it fails, and you get that to pass, and then you go and try and find another one that fails. But this is sort of loosely that: you're getting your first test, getting it to pass, and then ideally, if you did it right, all your other test cases just light up green and you're good to go. But honestly, when you're in that REPL... I don't even know, is there another way to develop? It just seems like the natural way. The thing is, I've worked at companies where there was no unit testing, but that's just because it was this sort of behemoth program that had what they called regression testing, where they would output just a crazy amount of data and they would check: is all that data equal to each other? But it's very hard, if you don't build unit testing in from the beginning, to sort of shoehorn that on afterwards. But if you build unit testing into your system from the get-go, and you have a file of unit tests, or a bunch of files of unit tests, then the first thing, any time I'm adding some new functionality, is I just go write a unit test, just 'cause it's like, oh well, I know I would need to get this to run and I'm going to have to write it at the end anyways.
'Cause, like, you know, you don't submit code without adding tests for it, or at least that's the way it works on my team, and hopefully most teams aren't releasing code without testing it. If you have that framework, I don't even know what the alternative is. I guess you could start coding and then write the test after the fact, but if you can write the test and then just sort of have a shell that calls a function that starts out empty, and then you start building up, to me it's just: why would you do it any other way? Because at some point you're going to finish that function, it's going to light up green, and you're going to get this little, you know, warm fuzzy feeling in your brain: I did it, let's go write another test and see if I got all the cases. Or, I'm not sure if folks have alternative methods they want to share, or if that's how everyone operates.
00:31:54 [AB]
I think the tests can be distracting. If I have an idea about how I'm going to solve this problem, I need to get that out of my head and into the interpreter as soon as possible, and then I can go and write my tests afterwards. If I have to start off by writing tests and then deal with a bunch of messages coming out, this is failing, this is failing, as I go along, it's distracting; it's noise.
00:32:21 [CH]
So I guess my question is: when you're getting that idea out of your head, are you doing it as, like, a tacit expression with no data to the right of it, that's not evaluating anything? Or do you at least have a simple case? The thing is, any time I'm in a REPL I always have some data to the right of my function. I'm not building up a function and then adding a plus reduce and then adding a filter with no data, because how do I know if I'm typing it correctly? I need the interpreter to tell me: oh wait, you didn't do that right, go fix it. So, at least for me, there's always a small piece of data to the right of the expression I'm building up, which is kind of the point I'm making: that is inherently a form of test-driven design. Sure, you didn't go write 50 tests in advance, but a lot of the time that's how it works: you write one test, you get that to work, and then you go write a bunch of others to see if you got it right.
00:33:10 [AB]
I would say it depends; I switch methods. If it's something small... sometimes I do code-golf challenges, and I might very well then take all the inputs and all the outputs that are specified as test cases and develop in between those, matching them all the time, so I can see a Boolean output of whether they match. Sometimes I'll do top-down programming, in which case nothing will work at all until I'm done, and then I can run the tests at the end. And sometimes I build it up from the bottom, like you say, with some test cases, in which case I tend to try to find the most difficult case, the one that has the most pitfalls in it. I think all of these are valid ways of doing it.
00:34:05 [BT]
Yeah, I like to think of it like I'm playing around with it. I mean, it sounds silly, but I'm playing with it: I'll probe it, I'll try different things. And the other thing I do, because J has an awful lot of primitives and I can't even keep track of all of them, is sometimes I'll go to the NuVoc page and look at what all the primitives are, and then just see what might fit and try a few things. And I am using it on data, so I think that's pretty constant. I wouldn't just write tacit without trying it; the trying is really important, so I need data to run through it. I'm not always focusing maybe as much on the data as I should, but I'm running something through each time and seeing what I end up with, and the first couple of times I go through, maybe it works, or maybe it doesn't. Usually it doesn't, and I'm learning through that process, and it just comes with experience. I might now have a better idea where to look for the solution than I did when I started out, but that just came from being willing to play with it, and it's not very structured. That was something I noticed when Ron Murray brought up test-driven development in the last episode: I certainly don't do any formal type of test until I've written what I want, and then what I will use tests for is to make sure that any changes I put in don't break things. I definitely do that.
00:35:38 [CH]
Alright, so it sounds like we need to have a full-blown episode on test-driven development, whether that's in APL or just in software programming at large. I've heard of these test-driven development frameworks in certain languages like Ruby, where it's very easy to write your tests and it's just built into your workflow. And I will say that, at least from the APL or J programming that I've done, that is not built in, and it's not very easy to set that up. But I love the idea of some sort of IDE where, if you're building up just one expression, you have a single line at the top, and then a pane where you can type in some data and the result you expect, and it will initially show, you know, pink or red or something, and as you start to build things up and get it to work, it'll just go green, green, green, like a little Christmas tree with the light-bulb colors changing. I don't know, that tickles my brain whenever I'm iterating like that. But Adám's technique of top-down, or being able to build it all up for the hardest case, sounds next level, like once you've reached a certain level of APL, or become enlightened. Although even if I could do that at some point, I feel like I have too much fun in the REPL, just watching the result change, that I would never want to; well, I shouldn't say never, but whether it's Haskell or APL or Clojure, iterating and going, oh, now I'm going to map, now I'm going to filter, and then reduce, and there's my answer: getting to see the results change gives me confidence that I'm going in the right direction. Whereas even if I got to the point where I could build it up in my head, or type it out all at once,
I would be, what's the word, depriving myself of, or holding back, that little confidence boost. I would just be going "I think this is right" each time, and then at the end it would either be a very jubilant moment of "I nailed it," or, yeah, no, clearly my thinking that I was right was a little bit off, and I missed something somewhere. Which, I will say, is mostly what C++ programming is like: you program, you program, you compile, and then you deal with compiler errors for the next hour until you sort those out, and then it works, and you go, OK.
00:38:00 [AB]
No, but see, that's where, for me, the interactive debugging style of development comes in, which I love very much. So I might write all my code and it should work, but you know, the computer doesn't understand what I mean; it only does what I tell it to do. So I start running my code and I hit some error, or I get the wrong result, or I might start tracing through. But now I can go and change my code, even code that's on the stack, because APL allows that, right? And so I backtrack, say, oh, this is not working. I can inspect the values as we go along: this doesn't look right; backtrack two lines; OK so far, it's looking good; have a look at the next line, the next things that are happening, edit those; then execute the next line; yeah, this is looking good. Then I just continue, and then it all goes bad; stop again, go back; until I've worked my way through the program and fixed all the issues. And I think there's a potential pitfall in doing everything based on tests, because you might have all these test cases, say, numbers that you want multiplied together, except for each element as you go through them, and you've passed all the tests and it's all great. But if your focus is not on those test cases, if your focus is on the algorithms, as you call them, or the primitives, then I see division as I'm implementing it and I think: oh, any zeros here will mess things up. If I'd only based myself on the test cases, I would have missed that, but if I'm focusing on the algorithms involved, then I think right away: "This is dangerous. There's an edge case lurking here."
00:39:38 [CH]
Hmm, that's a great point. Speaking of which, I actually did, I can't remember when, 10 minutes ago or something, look up the problem, and the problem statement is: you are given an array of positive integers. So if I had done my research correctly, I would have saved myself from the corner case. Although I still think it leads to a great discussion because, like I said, these cute problems typically put really nice, you know, restrictions around the data, which makes them lovely to solve, but when you translate that to the real world, it's not as nice and cute as that.
00:40:11 [AB]
What's the result if the list is empty?
00:40:17 [CH]
It doesn't say that you're given a minimum number of elements, so... Adám here is just our corner-case guy; you want him overseeing the QA and the unit-test writing. The reason I brought this up was to transition into: what are our favorite tacit expressions? To date, this was the most excited I've ever gotten about discovering a tacit solution, because of the difference between it and my first two initial solutions. Not that those were bad solutions, but just how much clearer it was: I can read four characters of APL, especially when two of them form a reduction, in less than five seconds, whereas with those other ones, if I had been reading someone else's code, I could read it, it would just take me a second to figure out what was going on. So yeah, for me it's this "array product" problem solution. And if I had to choose, you know, one other off the top of my head, it'd probably be "is palindrome". That's the classic case whenever I want to explain what a fork is to people, because it's just reverse match identity, and you can even do it as a hook in J if I feel like explaining that to folks (I typically don't). So my go-to, probably favorite, and I've tweeted it a couple of times, is the "is palindrome", but maybe let's go to Adám and then to Stephen and Bob. I think Adám has quite a few, so I'm not sure if we want to go through them all, or how much explanation is required for each one, but take it away and we'll go from there.
00:42:00 [AB]
So we're doing what? Trains? Any tacit function? There are so many; they're all interesting. Maybe I would say: there's a primitive in APL and also in J, and it's really a powerful one. It's called find, and it takes two arrays of arbitrary shape and type and everything, and it returns a Boolean mask corresponding to the look-up array, with a single bit for every element of the look-up array indicating whether the needle, I should say, starts in the look-up array at that point. As a very simple example, let's say we have "ABAB" as a text, that's our haystack, and our needle is just "AB". We can see that "AB" begins at the first character and at the third character, so the result would be true, false, true, false, or in APL terms, 1 0 1 0. This is very powerful and general, but very often I want to find out whether one array is a prefix of another array. And so we have a function called first, and we have this find function, and we just stick them together to form an atop: the first of the find. If the looked-for array is located in the top left corner, so to say, but in as many dimensions as you want, of the look-up array, then the very first bit in the result is true, and so the first of that tells me whether or not it's a prefix of the other one. I think that's pretty cool, and it hints at many other things you could potentially do with this, so even though you asked me for one, I'll throw in a couple of variations. If instead you do an OR-reduction of such a find, that tells you whether it's found as a subarray anywhere in there. Or you could do a plus-reduction and ask in how many places you can find this array. And so on. And then you can start combining it with reverses, to find whether it's a suffix rather than a prefix, and so on and so on. There are endless possibilities there.
00:44:39 [BT]
Because your result is Boolean, whenever you have a one, you have an indication that that subarray is there, and if you had no ones, there would be no subarray. So you can do all sorts of different manipulations with that.
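For readers following along without an interpreter, here is a rough Python sketch of the pattern Adám and Bob just described, restricted to one-dimensional data. The helper name `find` is ours for illustration; APL's primitive is ⍷ (epsilon underbar) and works on arrays of any rank:

```python
def find(needle, haystack):
    """Boolean mask, one entry per position of haystack, marking where
    an occurrence of needle starts (a 1-D sketch of APL's find)."""
    n = len(needle)
    return [haystack[i:i + n] == needle for i in range(len(haystack))]

mask = find("AB", "ABAB")            # [True, False, True, False], i.e. 1 0 1 0

is_prefix = bool(mask) and mask[0]   # "first of the find"
is_anywhere = any(mask)              # OR-reduction: found as a subarray?
count = sum(mask)                    # plus-reduction: how many places?
```

The three derived questions (prefix, containment, count) all come from the same mask, which is exactly the "endless possibilities" point made above.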
00:44:52 [CH]
Yeah, that's really cool. I don't use find, ever, because it's not in my lexicon of APL verbs, but for those of you that are learning APL and are in my position: it's epsilon underbar. And that's very cool. And atop, for those that aren't familiar, is the B1 combinator, where you evaluate the binary function on the right first and then pass that to the unary function on the left. I love the B1 combinator; I'm not sure if it's my favorite combinator. It's what's referred to as an atop. But yeah, that's very beautiful. I always end up learning stuff in these episodes, and there seems to be, not a huge divide, but different camps, or really a spectrum: people that use tacit only when it's, you know, super short, super expressive, and then folks that are tacit-all-the-things, which is what J was initially intended to be, I think, everything tacit. I've heard of folks in the J community, I won't remember their name, but they're on the reflector, and apparently they have custom versions of the J 9-point-something interpreter, and they write everything tacit.
00:46:16 [BT]
José Mário Quintana, and it’s Pepe.
00:46:18 [CH]
Yeah, Pepe is the nickname that I remember. And clearly everyone in this community, that was our first episode: why do you love array languages, why do you love the paradigm? So we're all in love with, you know, whether it's APL, J, k, q, or BQN. But within that community there's definitely a spectrum of how people feel. Even Henry, when we had him on, said he personally really enjoys writing tacit expressions, but from the teaching perspective he viewed it as something where a lot of the time you end up going too far. So I just find it a fascinating conversation, to get different folks' takes on what their motivations for loving it or not loving it are.
00:47:01 [BT]
And on the forums recently, I think it was Ian Clark who came up with the point that he feels J should probably hide tacit as much as possible from beginners, because it really doesn't help the understanding of beginners if they've come from most other programming environments. It'd be lovely to see what would happen if you had somebody come from a mathematical combinatorics environment; they'd probably find tacit really easy to figure out. But for most people who've worked with computers, tacit really is another jump, and it's not the one you need to make initially to understand the language. It's easier to start with some variables in the places you'd expect, and you work with those, and then eventually you find out, oh well, I could get rid of these; but it's another step to do it, I think. Aaron Hsu, at one point in the discussion, talked about it being a different dimension of the way you look at a problem: rather than linear, it becomes more like a tree. And I think that's a step you probably don't want to put on beginners, unless they're already thinking in that way, unless they have that kind of training.
00:48:11 [CH]
Yeah, I do think there are some less great things about tacit, just in the way that things are evaluated. Probably one of the most upsetting or confusing things is that when you assign a tacit expression like a fork to a name, and then you just want to inline it, so you take your "is palindrome" that's named, and you want to copy the contents of that function in place of the name, sometimes you can't do that: you need to parenthesize it in order to let the interpreter know that this is a 3-train. And I think that is very, very counterintuitive coming from basically any other language, because having three things in a row evaluate to a certain thing is pretty novel; it doesn't exist in Python or whatever. A lot of the time, if you've got some one-liner function and you copy and paste it in place, it'll work. But for this, those parentheses are essential for it to evaluate correctly.
00:49:16 [BT]
Well, the thing is, at that point, when you're defining it without parentheses, you're relying on the fact that everything to the right of your assignment is essentially parenthesized, right? Because you're taking that as a block and sending it over and assigning it to a name. So you actually have parenthesized it, without realizing you have.
00:49:37 [CH]
Right, right.
00:49:37 [BT]
And the problem is, you take that string without doing the assignment and drop it into the place where you're going to use that fork, and now, instead of evaluating it like a fork, it evaluates it sequentially, because that's how it does things when it sees a line without parentheses. So I think that's where that comes from. And yeah, it's a common place where people make a mistake.
00:50:00 [AB]
Yeah, there's a concept of isolation, right: it has to be isolated from its surroundings, and if you don't do that, then it won't work, and it's of course extremely common to mess this up. There may be a benefit in NARS2000's implementation of APL, which has recently added trains as well, but it never allows trains without parentheses. So even if you say foo gets some train, you must put parentheses around it, otherwise it will bark at you; it's a syntax error. And while it might seem a bit excessive to require that, it does prevent some errors later, so I can see the value of it. But it's not the only case, by the way: even if you have a dyadic operator that takes an array right operand, that will clash if you put it inline with an argument.
00:51:01 [CH]
Right, right, right. Yeah, it's interesting that you mention that the NARS interpreter might be nicer; even though I just finished explaining how that was one of the biggest warts or confusing parts, in my brain I was like, yeah, but I know that now, and that's two extra characters.
00:51:18 [AB]
You're writing something that's four characters, and now you have to add two characters of syntax just because, it's...
00:51:25 [CH]
Fifty, yeah, 50% longer. I don't like it. I don't like it, even though it did remove some ambiguity. I don't like it.
00:51:33 [BT]
Yeah, but the parser likes it.
00:51:36 [AB]
It does. I want to hear what Stephen has to say about tacit trains, because APL and J have some really powerful constructs in the form of trains, but trains become really awkward if you need to apply multiple monadic functions in succession. We've all been there and given up and switched to explicit mode, right? Either that, or you end up with lots of parentheses and/or lots of compositional operators. But k doesn't do that, because k's trains are just straightforward: first you take one or two arguments, and then you just apply monadic functions one by one. Or you can apply a dyadic function with a constant left argument, which is kind of bound together. It's all very straightforward, very easy.
00:52:25 [ST]
I won't speak for k, but for q, certainly, piling up a list of unary functions is pretty simple: you can compose them just by listing them and then sticking an @ at the end. That gives you a composition. Generally speaking, for tacit I stay strictly in the shallow end of the pool. But you were asking about my favorite tacit expression, and that took decades to emerge.
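Stephen's point about chaining unary functions can be sketched outside q. A minimal Python analogue (the helper name is ours, and the left-to-right application order here is an assumption for illustration; check q's own documentation for its conventions):

```python
from functools import reduce

def compose_all(*fns):
    """Chain unary functions into one, applying them left to right."""
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

inc_then_double = compose_all(lambda x: x + 1, lambda x: x * 2)
inc_then_double(3)   # 3 + 1 = 4, then 4 * 2 = 8
```

With no functions given, `compose_all()` is the identity, which is the natural base case for this kind of pipeline.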
00:52:59 [CH]
This is going to be good. This is going to be good.
00:53:01 [ST]
And it may help those of our listeners who are struggling with the very idea of tacit to get some idea of what the attraction might be. My favorite is basically the function for range, where you convert a pair of positive numbers, the second larger than the first, into all the intervening points, and I found it way back in the 70s. So, you know, range of 5 10 would be 5 6 7 8 9 10. Though in practice, because of the problems I was working on when I first noticed this, my range is exclusive at the far end. So my real-world ones were: 5 11 would give me 5 6 7 8 9 10; eleven was always the next item that would appear in a queue. I was doing this often enough that I wrote a trivial little range function which would take the two numbers and return me the list, but it annoyed me that I had to have a function sitting around in my working utilities just to do this, and I thought, surely there's some way of moving from the two numbers to the list without putting a function on the stack and assigning variables and so on, skipping some intermediate steps. There are two moves to this. The first is that in APL, if you reduce a list that's a pair, you can use a binary function, and basically the first item of the list becomes the alpha and the second item becomes the omega. So you can use reduction as a way of placing a binary, or dyadic, function between the two items of the list.
00:55:22 [AB]
Hold on, it might be obvious to us, but alpha and omega are the first and last letters of the Greek alphabet, and therefore name the left and right arguments. So yes, if you reduce over a list of two elements, the left element becomes the left argument to this binary function and the right element becomes the right argument. Continue, sorry.
00:55:43 [ST]
[In Greek:] ευχαριστώ, which is Greek for thank you. Yeah, and the second part was when I learned that the compose operator would allow me to compose drop and iota, so that a down arrow (drop), followed by compose, followed by iota, gives me a binary function which takes the indices of its right argument and uses its left argument to drop elements from the beginning. So putting this function in between the beginning and end of my range gave me all the index points that I wanted. I wrote a blog post about three months ago to explain this; it's called "The rest is silence" and it's on my blog, 5jt.com, easy to find. You can see there that I was able to replace my range function with a few characters, basically an idiom which converts two numbers into the range.
00:56:59 [CH]
So what was the expression at the end of the day?
00:57:02 [AB]
Disclose drop jot iota reduction [ ⊃↓∘⍳/ ].
00:57:07 [ST]
That's correct.
00:57:08 [AB]
So the idea is you're inserting drop-iota, so drop-range, which means if you insert that between 5 and 11 you get 5 dropped from the first 11 integers, and the disclose is there because APL's reduction reduces the rank from one to zero, so the range comes back enclosed.
00:57:29 [ST]
Now, annoyingly, q doesn't let me do that composition of a dyadic with a monadic function, or of a binary with a unary, as we would say. But it does have a mechanism other than the reduce (the reduce will still work between a 2-item list): the apply operator. q's lambdas can have up to 8 arguments, and the apply operator will let me apply such a function to a list of up to 8 items, and basically they all get mapped to its arguments. But of course, that doesn't give me the tacit; I'm still using a lambda to do it.
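Stephen's range idiom ⊃↓∘⍳/ can be mimicked outside APL to see the two moves separately. A Python sketch (helper names are ours), using index origin 0 so the upper end is exclusive, as in his real-world version:

```python
from functools import reduce

def drop_iota(a, b):
    """a drop iota b: the first b integers (0-origin) with the first a dropped."""
    return list(range(b))[a:]

def range_idiom(pair):
    # Reducing a two-element list places the dyadic function between the
    # pair: the first item becomes the alpha, the second the omega.
    return reduce(drop_iota, pair)

range_idiom([5, 11])   # [5, 6, 7, 8, 9, 10]
```

Note there is no analogue of the disclose (⊃) here: Python's `reduce` returns the value directly, whereas APL's reduction hands the range back enclosed.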
00:58:26 [AB]
But OK, that means the k programmer's favorite tacit expression is written in APL, ho ho. That is the truth, though. No, but I mean, I totally understand the simplicity of k's trains, if you want, or tacit expressions. But then again, the difference is three characters or four or so, and you can add braces and names of the arguments and be done, and that you can do in J and APL as well. OK, in J you have to double those braces now. But then you completely lose that... what is the name, Conor, of the combinator that the 3-trains are?
00:59:13 [CH]
The 3-train; the unary one is the S prime combinator.
00:59:18 [AB]
And the binary one?
00:59:19 [CH]
The binary one I call the Golden Eagle. The S prime corresponds to the Phoenix, or the Starling prime. The Golden Eagle technically doesn't exist; it's a specialization of the E hat combinator, but it's already on Marshall Lochbaum's BQN birds site. He has a link to it, and basically he links to my Twitter, so now it's official that it's a thing. I just need to write, like, an academic paper and get it official. I don't know what the letter of the combinator would be; maybe we'll just call it the E hat hat, or E hat squared? I don't know, I'll come up with it, but the bird is the Golden Eagle, even though I made that bird up.
01:00:03 [AB]
I'll claim that this is one of the oldest combinators, because mathematics has been using plus-minus, for instance, forever. You can write a ± b, and that just means a + b juxtaposed with a − b. And that's a train.
01:00:24 [CH]
Alright, so yeah, we are getting past time a little bit, and I'm not even sure we fully delved into the dyadic hook, so I think it's going to become like the Matt Damon of our tacit series: unfortunately we didn't have time to talk about the dyadic hook, but tune in next time, where we definitely will. But let's quickly go to Bob and get his favorite tacit expression to round this episode out.
01:00:47 [BT]
Oh, I'm going to be such a tease, because, given what you guys are explaining, I came up with a problem that I thought I could do with dyadic hooks, and I can. Essentially the problem is: if I want to do one operation on my right argument and one operation on my left argument and then combine them, I can do that in two ways. I can do it as a fork, and all I do for the fork is: on my right tine I start off with the right verb, so I'm only looking at the right argument, and on my left tine I start off with the left verb, so I'm only looking at the left argument, and then the middle tine combines them, with whatever operation I want. So that's pretty straightforward. You can also do this with dyadic hooks, and the reason you can is that the hook is asymmetric: your right tine only operates on your right argument, and your left tine then combines your left argument with the result of the right tine working on the right argument. So what you start off with is: you reverse the two arguments, because you want to start working on your left argument first. So the first thing you do is reflex, which is a flip. Now I've flipped the two arguments, and inside that flip I'm going to put a hook, and the right tine of that hook is going to be working on what was the left argument but is now the right argument, because I've flipped them. So now I've got my operation working on only that one argument. And for my left tine I'm going to do another hook, and I'm going to flip that one, and when I do that, the right tine of the inner hook will only be working on what was the right argument of the original, before I flipped it twice. And then the middle tine just becomes the... Adám's laughing at me?
01:03:00 [AB]
His face, Conor's face, I mean.
01:03:01 [CH]
The thing is, I'm really trying hard to follow, 'cause this sounds like it's up my alley. I love proposing combinators to do things where you can isolate: oh, I really only wanted to do this, you know, blah, blah. But trying to keep up with the verbal explanation, definitely, my IQ is not helping me out here.
01:03:22 [BT]
Yeah, that's why I laughed when you asked, well, what's your favorite? Because to me it's complex: I'm doing a hook within a hook, and I'm flipping them twice. In fact, the first time I saw anybody do this kind of thing, it was Raul Miller, and this was back in 2011; I went back and looked in the forums for it. He sent it to Roger, and Roger said: yeah, I don't like it when you do a double flip. That was his only response: yeah, it would work; I don't like it; that just confuses me. And so if Roger says it confuses him... But the thing is, and this is why I find it intriguing, I thought there should be a way to do this, and it's all based on the fact that hooks are asymmetric, so you can apply one argument to one side and one argument to the other, and then you do one within the other and flip them. In that process you can do a separate operation on each of your arguments and then combine. And the end result is: instead of five verbs, because in the fork you have to use left and right, and also two conjunctions to combine them with the operators (each piece a little easier to understand, but a lot of moving parts), you end up with two hooks and only the original three verbs, if I should call them verbs. It actually simplifies it, if you can look for that pattern; you're doing mental gymnastics to get into that pattern, but once you see it, you go: oh, that's kind of cool.
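Bob's construction can be checked mechanically. Here is a Python sketch (function names are ours): with only a dyadic "hook", which preprocesses the right argument, and a "flip", which swaps the two arguments like J's ~, you can build a function that preprocesses each argument separately before combining:

```python
import operator

def hook(g, h):
    """Dyadic hook: x (g h) y  ->  x g (h y)."""
    return lambda x, y: g(x, h(y))

def flip(f):
    """Swap the two arguments (J's ~ applied to a dyad)."""
    return lambda x, y: f(y, x)

def split(g, left_op, right_op):
    """Bob's 'hook within a hook, flipped twice':
    x split y  ->  (left_op x) g (right_op y)."""
    return flip(hook(flip(hook(g, right_op)), left_op))

diff_of_lengths = split(operator.sub, len, len)
diff_of_lengths("hello", "hi")   # len("hello") - len("hi") = 3
```

Tracing it: the inner hook preprocesses one argument, the outer flip-hook-flip smuggles the other argument past, which is exactly the double reversal Bob describes.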
01:04:55 [CH]
So is this, and this is probably wrong, but it could be right: is this like a variation on the Psi combinator? Where, for instance, if you're taking the difference of the lengths of two strings, you do a tally (this is in APL) on both sides, for both strings, and then you take the difference, which is just a minus. This sounds like it's that, but where you don't want both sides to be the same operation.
01:05:21 [BT]
Exactly, yeah.
01:05:24 [AB]
Exactly, yeah.
01:05:24 [CH]
We want two different operations. OK, and you can do this with a binary fork, AKA the Golden Eagle, but it involves doing an atop, or a B1 combinator, composed with left and right, which is "hella" irritating. So what you're describing is actually very useful, and I've come across it multiple times. So yes, I know where you're at, Bob, and that does sound awesome. I'm not sure if anyone's got anything else to say, but that's going to be the perfect point to end on. Tune in next time for the dyadic hook argument, because it sounds like that was using dyadic hooks, potentially, and people have thoughts.
01:06:03 [AB]
I feel a bit bad for the listener, or however many there are; there might only be one left after this. I think, as a non-J'er, I can explain this a little bit differently. I'd like to explain these compositional operators as pre- and post-processing, and the idea here is actually very simple. What Conor was mentioning, we call "over": it's pre-processing both arguments to a dyadic function with a single monadic function. Here we again have two incoming arguments and we're pre-processing both of them, but we're processing them with two different functions, one for the left side and one for the right side. All this flipping and things, yeah, I couldn't follow it either. But it's quite simple if you think of the hook as preprocessing its right argument with the right function. So you have two functions next to each other: the left-side function is like the middle function in a 3-train, the right-side function is a preprocessor for the right argument, and then there's a hidden passing-in of the left argument from the left. And so all you're doing is saying: well, since we can only preprocess on the right, and we don't have a functionality to preprocess on the left, then OK, let's preprocess the right argument; done. Now flip the arguments around, so the left argument is now the right argument, so we can preprocess that using another hook, and then flip it back again where it belongs and apply the final step in the process. I don't know if that was any better.
01:07:41 [CH]
I mean, I'm sure for someone out there who hasn't hung up on this episode, that was better. I can tell you I understood what you were saying; it wasn't quite as bad, but we'll have to revisit this.
01:07:54 [BT]
I'm going to do a video on it.
01:07:56 [AB]
I want to add one more step, Conor, one more step. This construct, I don't know if it has an official name, but Marshall Lochbaum called it "split compose". The idea is that the two different arguments, the two sides, take their own tracks, and then it's all composed together in the middle. The problem here, actually, is that there's no left hook. J has a construct, the 2-train, which is a right hook: it preprocesses the right argument. It doesn't have a 2-train for preprocessing the left argument, because we used up the 2-train. In Dyalog APL we have the jot as a very similar thing: it preprocesses the right argument and then applies the left function. There is one more compositional operator that Dyalog APL doesn't have and J doesn't have, which is the hook the other way around, the one that preprocesses the left argument. If you give that a symbol, and my model for an extended Dyalog APL has a symbol for it, then everything is very simple. Then you can write F reverse-compose G compose H, and the reading order matches the visual order matches exactly what's going on. You have the two arguments coming in from right and left; the first thing they see are the outer functions, each one being processed by its own preprocessor; and then there's the final product in the middle, where they're being combined. BQN indeed does have this as a native combinator, and then it just looks beautiful.
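[Transcript note: the "F reverse-compose G compose H" reading can be sketched in Python. Here `before` models the missing left hook (the reverse compose Adám describes; BQN has a native combinator for it) and `compose` models jot, the right-argument preprocessor. The Python names are illustrative, not official primitives.]

```python
# Split compose read left to right: F preprocesses the left argument,
# H preprocesses the right, and G combines them in the middle.

def before(f, g):
    """Reverse compose (left hook): (f x) g y -- preprocess the LEFT argument."""
    return lambda x, y: g(f(x), y)

def compose(g, h):
    """Jot (right hook): x g (h y) -- preprocess the RIGHT argument."""
    return lambda x, y: g(x, h(y))

F, G, H = abs, (lambda a, b: a + b), len
split = before(F, compose(G, H))   # reads "F before (G after H)"
print(split(-3, "hello"))  # abs(-3) + len("hello") = 8
```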
01:09:36 [CH]
This is the battle of how many things to add, right? Because in my head I'm thinking, well, that's just reflex, or the C combinator, or flip, or commute, whatever you want to call it from whatever language, applied to the original left hook or right hook, whichever it was. It's the same thing as how we have first in APL but we have no last: you have to do first composed with reverse, and I'm like, God, should we have it? But where do you draw the line on adding primitives or operators for this stuff? Admittedly, I think a single symbol for last would be more beautiful, but is it worth it when you can get it with two more characters? Similarly with hook: you can reflect it. But let's pause there. We've overwhelmed the listener. I apologize to those folks that were really enjoying this podcast before we went off. I remember Stephen saying that he was in the shallow end, and I think we chuckled and said, ha, that's fine, and then we went swimming with the sharks in the ocean. We're not even in the deep end anymore, because I was confused. So thanks to everyone that's hung in there, or hooked in there. We'll say happy array programming in a second; stay tuned for episode 4, where we may or may not talk about dyadic hooks. Any last thoughts before we close out here?
01:10:55 [BT]
I will do a video on what I was proposing, 'cause I think that's the only way somebody will actually follow through, so we'll put a link for that.
01:11:01 [AB]
You'll have to make something fancy, where it rotates around when you flip the arguments.
01:11:08 [BT]
I'll do it in as many dimensions as you want.
01:11:11 [CH]
Alright, with that we will say happy array programming.
01:11:13 [All]
Happy array programming.
01:11:15 [Music Theme]